In the present paper, we consider one-hidden-layer feedforward ANNs, also referred to as shallow or two-layer networks, whose structure is determined by the number and type of neurons. The determination of the parameters that define the function, called training, is carried out by solving an approximation problem, namely by imposing interpolation at a set of specified nodes. We present the case where the parameters are trained with the procedure known as Extreme Learning Machine (ELM), which leads to a linear interpolation problem. Under these hypotheses, the existence of an ANN interpolating function is guaranteed.
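In a minimal one-dimensional formulation (the notation here is ours, for illustration), a network with $M$ neurons and activation function $\sigma$ reads
\[
N(x) \;=\; \sum_{j=1}^{M} c_j\, \sigma(w_j x + b_j),
\]
and in the ELM procedure the inner weights $w_j$ and biases $b_j$ are drawn at random and kept fixed, so that the interpolation conditions $N(x_i)=f(x_i)$, $i=0,\dots,n$, reduce to the linear system $Ac=f$, with entries $A_{ij}=\sigma(w_j x_i + b_j)$, for the outer coefficients $c_j$.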
The focus is then on the accuracy of the interpolation away from the given sampling nodes when these are equispaced, Chebyshev, or randomly selected.
The study is motivated by the well-known bell-shaped Runge example, which makes it clear that a global interpolating polynomial is accurate only if it is built on suitably chosen nodes, for example the Chebyshev ones.
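For concreteness, Runge's example is the classical one: the function
\[
f(x) \;=\; \frac{1}{1+25x^2}, \qquad x \in [-1,1],
\]
interpolated at $n+1$ nodes; a common choice of Chebyshev nodes on this interval is $x_k = \cos\bigl(\tfrac{(2k+1)\pi}{2(n+1)}\bigr)$, $k=0,\dots,n$.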
To evaluate the behavior as the number of interpolation nodes grows, we increase the number of neurons in our network accordingly and compare the resulting ANN interpolant with the interpolating polynomial. We test Runge's function and other well-known examples with different regularities. As expected, the accuracy of the global polynomial approximation improves only when the Chebyshev nodes are considered. In contrast, the error of the ANN interpolating function always decays, and in most cases we observe that its convergence follows what is seen for polynomial interpolation on Chebyshev nodes, regardless of the set of nodes used for training.
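As a self-contained illustration of this experimental setting, the following minimal sketch (our own, not the code used in the paper; the tanh activation, the weight range, the seed, and the problem sizes are all illustrative assumptions) interpolates Runge's function with an ELM network and with a global polynomial on equispaced and Chebyshev nodes:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def elm_interpolant(x_nodes, y_nodes):
    # ELM: inner weights/biases are random and frozen; only the
    # outer coefficients are trained, by solving a linear system.
    m = len(x_nodes)                       # one neuron per node: square system
    w = rng.uniform(-5.0, 5.0, m)          # hidden weights (kept fixed)
    b = rng.uniform(-5.0, 5.0, m)          # hidden biases (kept fixed)
    A = np.tanh(np.outer(x_nodes, w) + b)  # A[i, j] = sigma(w_j * x_i + b_j)
    c = np.linalg.lstsq(A, y_nodes, rcond=None)[0]
    return lambda x: np.tanh(np.outer(x, w) + b) @ c

n = 20
x_eq = np.linspace(-1.0, 1.0, n + 1)                             # equispaced
x_ch = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * n + 2))  # Chebyshev
x_test = np.linspace(-1.0, 1.0, 2001)

for name, nodes in [("equispaced", x_eq), ("Chebyshev", x_ch)]:
    net = elm_interpolant(nodes, runge(nodes))
    poly = np.polynomial.Polynomial.fit(nodes, runge(nodes), n)
    err_net = np.max(np.abs(net(x_test) - runge(x_test)))
    err_pol = np.max(np.abs(poly(x_test) - runge(x_test)))
    print(f"{name}: ELM max error {err_net:.2e}, "
          f"polynomial max error {err_pol:.2e}")
\end{verbatim}
With a square collocation matrix the outer coefficients interpolate the data exactly, up to conditioning; the least-squares solver is used so that the same sketch also covers the overdetermined case with fewer neurons than nodes.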