My papers are listed below.
[1] Rahul Parhi and Michael Unser. Banach space optimality of neural architectures with multivariate nonlinearities, Submitted (2023). http://arxiv.org/abs/2310.03696.
[2] Rahul Parhi and Michael Unser. Distributional extension and invertibility of the k-plane transform and its dual, Submitted (2023). http://arxiv.org/abs/2310.01233.
[3] Rahul Parhi and Robert D. Nowak. Deep learning meets sparse regularization: A signal processing perspective, IEEE Signal Processing Magazine. 40 (2023) 63–74. doi: 10.1109/MSP.2023.3286988.
[4] Joseph Shenouda, Rahul Parhi, and Robert D. Nowak. A continuous transform for localized ridgelets, in: Fourteenth International Conference on Sampling Theory and Applications, 2023.
[5] Rahul Parhi and Michael Unser. Modulation spaces and the curse of dimensionality, in: Fourteenth International Conference on Sampling Theory and Applications, 2023.
[6] Ronald DeVore, Robert D. Nowak, Rahul Parhi, and Jonathan W. Siegel. Weighted variation spaces and approximation by shallow ReLU networks, Submitted (2023). http://arxiv.org/abs/2307.15772.
[7] Rahul Parhi and Michael Unser. The sparsity of cycle spinning for wavelet-based solutions of linear inverse problems, IEEE Signal Processing Letters. 30 (2023) 568–572. doi: 10.1109/LSP.2023.3275916.
[8] Joseph Shenouda, Rahul Parhi, Kangwook Lee, and Robert D. Nowak. Vector-valued variation spaces and width bounds for DNNs: Insights on weight decay regularization, Submitted (2023). http://arxiv.org/abs/2305.16534.
[9] Rahul Parhi and Robert D. Nowak. Near-minimax optimal estimation with shallow ReLU neural networks, IEEE Transactions on Information Theory. 69 (2023) 1125–1140. doi: 10.1109/TIT.2022.3208653.
[10] Rahul Parhi and Robert D. Nowak. On continuous-domain inverse problems with sparse superpositions of decaying sinusoids as solutions, in: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 5603–5607. doi: 10.1109/ICASSP43922.2022.9746165.
[11] Rahul Parhi and Robert D. Nowak. What kinds of functions do deep neural networks learn? Insights from variational spline theory, SIAM Journal on Mathematics of Data Science. 4 (2022) 464–489. doi: 10.1137/21M1418642.
[12] Rahul Parhi and Robert D. Nowak. Banach space representer theorems for neural networks and ridge splines, Journal of Machine Learning Research. 22 (2021) 1–40. http://jmlr.org/papers/v22/20-583.html.
[13] Rahul Parhi and Robert D. Nowak. The role of neural network activation functions, IEEE Signal Processing Letters. 27 (2020) 1779–1783. doi: 10.1109/LSP.2020.3027517.
[14] Rahul Parhi, Michael Schliep, and Nicholas Hopper. MP3: A more efficient private presence protocol, in: Financial Cryptography and Data Security, 2018, pp. 38–57. doi: 10.1007/978-3-662-58387-6_3.
[15] Rahul Parhi, Chris H. Kim, and Keshab K. Parhi. Fault-tolerant ripple-carry binary adder using partial triple modular redundancy (PTMR), in: IEEE International Symposium on Circuits and Systems (ISCAS), 2015, pp. 41–44. doi: 10.1109/ISCAS.2015.7168565.