# A Jackson-type estimate in terms of the \(\tau\)-modulus for neural network operators in \(L^{p}\)-spaces

## Keywords:

Neural network operators, averaged moduli of smoothness, Jackson-type estimates, sigmoidal functions, Hardy-Littlewood maximal function

## Abstract

In this paper, we study the order of approximation with respect to the \(L^{p}\)-norm for (shallow) neural network (NN) operators. We establish a Jackson-type estimate for the considered family of discrete approximation operators in terms of the averaged modulus of smoothness introduced by Sendov and Popov, also known as the \(\tau\)-modulus, for bounded and measurable functions on the interval \([-1,1]\). The results proved here improve those given by Costarelli (J. Approx. Theory 294:105944, 2023), providing a sharper order of approximation. In order to obtain quantitative estimates in this context, we first establish an estimate for functions belonging to Sobolev spaces. In the case \(1 < p < +\infty\), a crucial role is played by the so-called Hardy-Littlewood maximal function. The case \(p=1\) is covered for density functions with compact support.
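For the reader's convenience, we recall the two main tools named in the abstract. These are the standard definitions (following Sendov-Popov for the \(\tau\)-modulus and the classical centered maximal function), stated here on \([-1,1]\); they are not reproduced from the paper itself.

```latex
% Local modulus of smoothness of order k of f at the point x, with step delta:
\omega_k(f, x; \delta) := \sup\Big\{ \big|\Delta_h^k f(t)\big| \;:\;
    t,\, t + kh \in \big[x - \tfrac{k\delta}{2},\, x + \tfrac{k\delta}{2}\big] \cap [-1,1] \Big\}.

% Averaged modulus of smoothness (tau-modulus): the L^p-norm of the local modulus,
% defined for bounded and measurable f on [-1,1]:
\tau_k(f, \delta)_p := \big\| \omega_k(f, \cdot\,; \delta) \big\|_{L^p([-1,1])}.

% (Centered) Hardy-Littlewood maximal function of a locally integrable f:
(Mf)(x) := \sup_{r > 0} \frac{1}{2r} \int_{x-r}^{x+r} |f(t)|\, dt.
```

Note that, unlike the classical modulus \(\omega_k(f,\delta)_p\), the \(\tau\)-modulus is sensitive to the behavior of \(f\) at every point, which makes it suitable for quantitative estimates covering discontinuous (merely bounded and measurable) signals.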

## References

G. A. Anastassiou, L. Coroianu and S. G. Gal: Approximation by a nonlinear Cardaliaguet-Euvrard neural network operator of max-product kind, J. Comput. Anal. Appl., 12 (2) (2010), 396–406.

C. Bardaro, P. L. Butzer, R. L. Stens and G. Vinti: Approximation error of the Whittaker cardinal series in terms of an averaged modulus of smoothness covering discontinuous signals, J. Math. Anal. Appl., 316 (2006), 269–306.

M. Cantarini, L. Coroianu, D. Costarelli, S. G. Gal and G. Vinti: Inverse result of approximation for the max-product neural network operators of the Kantorovich type and their saturation order, Mathematics, 10 (63) (2022), 1–11.

F. Cao, Z. Chen: The approximation operators with sigmoidal functions, Comput. Math. Appl., 58 (4) (2009), 758–765.

F. Cao, Z. Chen: The construction and approximation of a class of neural networks operators with ramp functions, J. Comput. Anal. Appl., 14 (1) (2012), 101–112.

P. Cardaliaguet, G. Euvrard: Approximation of a function and its derivative with a neural network, Neural Netw., 5 (2) (1992), 207–220.

S. Chakraverty, D. M. Sahoo and N. R. Mahato: McCulloch-Pitts neural network model, In: Concepts of Soft Computing: Fuzzy and ANN with Programming, Springer, Singapore (2019).

D. Costarelli: Density results by deep neural network operators with integer weights, Math. Model. Anal., 27 (4) (2022), 547–560.

D. Costarelli: Approximation error for neural network operators by an averaged modulus of smoothness, J. Approx. Theory, 294 (2023), 105944.

D. Costarelli, R. Spigler: Approximation results for neural network operators activated by sigmoidal functions, Neural Netw., 44 (2013), 101–106.

D. Costarelli, R. Spigler: How sharp is the Jensen inequality?, J. Inequal. Appl., 2015:69 (2015), 1–10.

D. Costarelli, G. Vinti: Voronovskaja type theorems and high order convergence neural network operators with sigmoidal functions, Mediterr. J. Math., 17 (2020), Article ID: 77.

G. Cybenko: Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems, 2 (1989), 303–314.

R. A. DeVore, G. G. Lorentz: Constructive approximation, Springer Science & Business Media, Vol. 303, (1993).

C. Fang, J. D. Lee, P. Yang and T. Zhang: Modeling from features: A mean-field framework for over-parameterized deep neural networks, arXiv preprint (2020), https://arxiv.org/abs/2007.01452 (Accessed 23 December 2020).

D. Gotleyb, G. Lo Sciuto, C. Napoli, R. Shikler, E. Tramontana and M. Wozniak: Characterization and modeling of organic solar cells by using radial basis neural networks, In: Artificial Intelligence and Soft Computing, (2016), 91–103.

I. Gühring, M. Raslan: Approximation rates for neural networks with encodable weights in smoothness spaces, Neural Netw., 134 (2021), 107–130.

P. C. Kainen, V. Kurková and A. Vogt: Approximative compactness of linear combinations of characteristic functions, J. Approx. Theory, 257 (2020), Article ID: 105435.

V. Kurková, M. Sanguineti: Classification by Sparse Neural Networks, IEEE Trans. on Neural Netw. Learning Syst., 30 (9) (2019), 2746–2754.

S. Langer: Approximating smooth functions by deep neural networks with sigmoid activation function, J. Multivariate Anal., 182 (2021), 104696.

B. Li, S. Tang and H. Yu: Better Approximations of High Dimensional Smooth Functions by Deep Neural Networks with Rectified Power Units, arXiv:1903.05858v4, (2019).

A. Mishra, P. Chandra, U. Ghose and S. S. Sodhi: Bi-modal derivative adaptive activation function sigmoidal feedforward artificial neural networks, Appl. Soft Comput., 61 (2017), 983–994.

B. Sendov, V. A. Popov: The averaged moduli of smoothness: applications in Numerical Analysis and Approximation, Pure and Applied Mathematics, Wiley-Interscience Series of Texts, Monographs, and Tracts, Chichester - New York - Brisbane - Toronto - Singapore (1988).

E. M. Stein: Singular Integrals and Differentiability Properties of Functions, Princeton University Press, Princeton, New Jersey (1970).

A. F. Timan: Theory of approximation of functions of a real variable, MacMillan, New York (1965).

D. Yarotsky: Universal Approximations of Invariant Maps by Neural Networks, Constr. Approx., (2021), https://doi.org/10.1007/s00365-021-09546-1.

## How to Cite

Boccali, L., Costarelli, D., & Vinti, G. (2024). A Jackson-type estimate in terms of the \(\tau\)-modulus for neural network operators in \(L^{p}\)-spaces. *Modern Mathematical Methods*, *2*(2), 90–102. Retrieved from https://modernmathmeth.com/index.php/pub/article/view/42

## License

Copyright (c) 2024 Lorenzo Boccali, Danilo Costarelli, Gianluca Vinti

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.