Abstract

Within the conventional framework of a native space structure, a smooth kernel generates a small native space, and radial basis functions stemming from the smooth kernel are intended to approximate only functions from this small native space. In this paper, we embed the smooth radial basis functions in a larger native space generated by a less smooth kernel and use them to interpolate the samples. Our result shows that there exists a linear combination of spherical radial basis functions that both exactly interpolates samples generated by functions in the larger native space and near best approximates the target function.

1. Introduction

Many scientific questions boil down to synthesizing an unknown but definite function from finitely many samples. The purpose is to find a functional model that can effectively represent or approximate the underlying relation between the input and the output. If the unknown functions are defined on spherical domains, then the data collected by satellites or ground stations are usually scattered and not restricted to any regular region. Thus, any numerical method relying on the structure of a “grid” is doomed to fail.

The success of the radial basis function network methodology in Euclidean spaces derives from its ability to generate estimators from data with essentially unstructured geometry. Therefore, it is natural to borrow this idea to deal with spherical scattered data. This method, called the spherical radial basis function network (SRBFN), has been extensively used in modeling gravitational phenomena [1, 2], image processing [3, 4], and learning theory [5].

Mathematically, the SRBFN can be represented as
$$N_n(x) = \sum_{i=1}^{n} a_i\,\phi(x \cdot \xi_i), \qquad x \in \mathbb{S}^{d-1}, \tag{1}$$
where $a_i \in \mathbb{R}$, $\xi_i \in \mathbb{S}^{d-1}$, and $\phi$ are called the connection weight, center, and activation function in the terminology of neural networks, respectively. Here and hereafter, $\mathbb{S}^{d-1}$ denotes the unit sphere embedded in the $d$-dimensional Euclidean space $\mathbb{R}^d$, $d \ge 3$. Both the connection weights and the centers are adjustable in the process of training. We denote by $\Phi_n$ the collection of functions formed as (1).
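As a concrete illustration (our own example of an admissible activation, not one singled out by the references), one may take the restriction of the Gaussian to the sphere,
$$\phi(x \cdot \xi) = e^{-c\,(1 - x \cdot \xi)}, \qquad c > 0,$$
which is a positive definite zonal function since $\|x - \xi\|^2 = 2(1 - x \cdot \xi)$ on $\mathbb{S}^{d-1}$; the network (1) is then a weighted sum of $n$ "bumps" centered at $\xi_1, \ldots, \xi_n$.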

A basic and classical approach for training the SRBFN is to construct an exact interpolant based on the given samples $\{(x_j, y_j)\}_{j=1}^{m}$, that is, to find a function $N_n$ in $\Phi_n$ such that
$$N_n(x_j) = y_j, \qquad j = 1, \ldots, m. \tag{2}$$

If the activation function $\phi$ is chosen to be positive definite [2], then the matrix $A_{\phi,X} := \big(\phi(x_i \cdot x_j)\big)_{i,j=1}^{m}$ is nonsingular. Thus, the system of (2) can be easily solved by taking the scattered data as the centers of the SRBFN. This method is known as the spherical basis function (SBF) method. Under this circumstance, the connection weights are determined via
$$\mathbf{a} = A_{\phi,X}^{-1}\,\mathbf{y}, \tag{3}$$
where $\mathbf{a} = (a_1, \ldots, a_m)^T$, $\mathbf{y} = (y_1, \ldots, y_m)^T$, $B^T$ denotes the transpose of the matrix (or vector) $B$, and $B^{-1}$ denotes the inverse matrix of $B$.
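The linear algebra behind (2) and (3) is easy to sketch numerically. The following Python fragment is a minimal illustrative sketch (the function names and the Gaussian-type activation are our hypothetical choices, not code from the references): it builds the collocation matrix and solves for the connection weights.

    import numpy as np

    def phi(t, c=4.0):
        # hypothetical positive definite zonal activation:
        # the Gaussian restricted to the sphere, phi(t) = exp(-c(1 - t))
        return np.exp(-c * (1.0 - t))

    def sbf_interpolate(X, y):
        # X: (m, d) array of unit vectors (scattered data); y: (m,) samples
        A = phi(X @ X.T)             # collocation matrix A_{ij} = phi(x_i . x_j), cf. (3)
        a = np.linalg.solve(A, y)    # connection weights a = A^{-1} y
        return lambda Z: phi(Z @ X.T) @ a   # evaluates N(x) = sum_j a_j phi(x . x_j)

    # usage: 50 random points on the two-sphere
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    f = lambda Z: Z[:, 0] ** 2                # a smooth test function
    N = sbf_interpolate(X, f(X))
    print(np.max(np.abs(N(X) - f(X))))        # ~0 up to conditioning: exact interpolation

Positive definiteness of the activation guarantees that the call to np.linalg.solve succeeds in exact arithmetic; in floating point the conditioning of the collocation matrix deteriorates as the data points coalesce, which foreshadows the dependence on the geometric distribution discussed below.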

For the SBF method, if the target function $f$ belongs to the native space $\mathcal{N}_\phi$ associated with the activation function $\phi$, then the best SBF approximant of $f$ is characterized by the exact interpolation:
$$\Big\|f - \sum_{j=1}^{m} a_j\,\phi(x_j \cdot x)\Big\|_{\mathcal{N}_\phi} = \min_{g \in V_X} \|f - g\|_{\mathcal{N}_\phi}, \qquad V_X := \operatorname{span}\{\phi(x_j \cdot x) : j = 1, \ldots, m\}, \tag{4}$$
where $a_1, \ldots, a_m$ are determined by (3) with $y_j = f(x_j)$. This property makes the SBF interpolation strategy popular in spherical scattered data fitting [6–14]. However, there are also two disadvantages of the SBF method. On the one hand, since the centers of the SBF interpolant are chosen to be the scattered data themselves, the interpolation capability depends heavily on their geometric distribution. This implies that we cannot obtain a satisfactory interpolation error estimate if the data are “badly” located on the sphere. On the other hand, the well-known “native space barrier” [11, 12, 15] shows that (4) only holds for a small class of smooth functions if $\phi$ is smooth. Therefore, for functions outside $\mathcal{N}_\phi$, the SBF interpolants are not guaranteed to be the best approximants.
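The mechanism behind (4) is the standard reproducing-kernel orthogonality argument, which we recall for the reader's convenience. If $N_f \in V_X$ interpolates $f$, then the reproducing property of $\phi(x_j \cdot x)$ in $\mathcal{N}_\phi$ gives, for each $j$,
$$\langle f - N_f,\; \phi(x_j \cdot x) \rangle_{\mathcal{N}_\phi} = f(x_j) - N_f(x_j) = 0,$$
so $f - N_f \perp V_X$, and hence for every $g \in V_X$,
$$\|f - g\|_{\mathcal{N}_\phi}^2 = \|f - N_f\|_{\mathcal{N}_\phi}^2 + \|N_f - g\|_{\mathcal{N}_\phi}^2 \ge \|f - N_f\|_{\mathcal{N}_\phi}^2.$$
This argument visibly breaks down the moment $f \notin \mathcal{N}_\phi$, which is precisely the native space barrier mentioned above.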

In the spirit of the previous papers [7, 11, 12], we use SRBFNs to interpolate functions in a larger native space $\mathcal{N}_\psi$, associated with a kernel $\psi$ which is less smooth than $\phi$, and study their interpolation capability. Different from the previous work, the centers are chosen in advance to be quasi-uniformly located on the sphere, which makes the interpolation error depend on the number $n$ rather than on the geometric distribution of the centers. Our purpose is not to give a detailed error estimate of the SRBFN interpolation. Instead, we focus on investigating the relation between interpolation and approximation for SRBFNs. Indeed, we find that there exists an SRBFN interpolant which is also a near best approximant of functions in $\mathcal{N}_\psi$, when the number of centers and the geometric distribution of the scattered data satisfy a certain assumption. That is, for a suitable choice of $n$, there exists a function $N_f \in \Phi_n$ such that (1) $N_f$ exactly interpolates the samples; that is, $N_f(x_j) = f(x_j)$, $j = 1, \ldots, m$; (2) $\|f - N_f\|_{\mathcal{N}_\psi} \le C \operatorname{dist}(f, \Phi_n, \mathcal{N}_\psi)$,

where $\operatorname{dist}(f, \Phi_n, \mathcal{N}_\psi) := \inf_{g \in \Phi_n} \|f - g\|_{\mathcal{N}_\psi}$ and $C$ is a constant depending only on $\phi$, $\psi$, and the mesh ratio $\tau$ of the centers (defined in Section 3).

2. Positive Definite Radial Basis Functions on the Sphere

It is easy to deduce that the volume of $\mathbb{S}^{d-1}$, $\Omega_{d-1}$, satisfies
$$\Omega_{d-1} := \int_{\mathbb{S}^{d-1}} d\omega(x) = \frac{2\pi^{d/2}}{\Gamma(d/2)}, \tag{5}$$
where $d\omega$ denotes the surface area element on $\mathbb{S}^{d-1}$. For integer $k \ge 0$, the restriction to $\mathbb{S}^{d-1}$ of a homogeneous harmonic polynomial of degree $k$ is called a spherical harmonic of degree $k$. The span of all spherical harmonics of degree $k$ is denoted by $\mathbb{H}_k^d$, and the class of all spherical harmonics (or spherical polynomials) of degree at most $n$ is denoted by $\Pi_n^d$. It is obvious that $\Pi_n^d = \bigoplus_{k=0}^{n} \mathbb{H}_k^d$. The dimension of $\mathbb{H}_k^d$ is given by
$$D_k^d := \dim \mathbb{H}_k^d = \begin{cases} \dfrac{2k+d-2}{k+d-2}\dbinom{k+d-2}{k}, & k \ge 1, \\[1ex] 1, & k = 0, \end{cases} \tag{6}$$
and that of $\Pi_n^d$ is $\sum_{k=0}^{n} D_k^d \sim n^{d-1}$.
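For orientation, we record the most common case $d = 3$ (the two-sphere arising in geophysical applications): there (5) and (6) give
$$\Omega_2 = \frac{2\pi^{3/2}}{\Gamma(3/2)} = 4\pi, \qquad D_k^3 = \frac{2k+1}{k+1}\binom{k+1}{k} = 2k+1, \qquad \dim \Pi_n^3 = \sum_{k=0}^{n} (2k+1) = (n+1)^2.$$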

Denote by $\{Y_{k,l} : l = 1, \ldots, D_k^d\}$ an orthonormal basis of $\mathbb{H}_k^d$; then the following addition formula [16, 17] holds:
$$\sum_{l=1}^{D_k^d} Y_{k,l}(x)\,Y_{k,l}(y) = \frac{D_k^d}{\Omega_{d-1}}\,P_k^d(x \cdot y), \tag{7}$$
where $P_k^d$ is the Legendre polynomial with degree $k$ and dimension $d$. The Legendre polynomial $P_k^d$ can be normalized such that $P_k^d(1) = 1$ and satisfies the orthogonality relation
$$\int_{-1}^{1} P_k^d(t)\,P_j^d(t)\,(1 - t^2)^{(d-3)/2}\,dt = \frac{\Omega_{d-1}}{\Omega_{d-2}\,D_k^d}\,\delta_{k,j}, \tag{8}$$
where $\delta_{k,j}$ is the usual Kronecker symbol.
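Again specializing to $d = 3$ as a worked check: $P_k^3 = P_k$ is the classical Legendre polynomial, (7) becomes
$$\sum_{l=1}^{2k+1} Y_{k,l}(x)\,Y_{k,l}(y) = \frac{2k+1}{4\pi}\,P_k(x \cdot y),$$
and, since $(1-t^2)^{(d-3)/2} \equiv 1$, the relation (8) reduces to the familiar $\int_{-1}^{1} P_k(t)P_j(t)\,dt = \frac{2}{2k+1}\,\delta_{k,j}$.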

Positive definite radial basis functions on spheres were introduced and characterized by Schoenberg [18]. Namely, a radial basis function $\phi$ is positive definite if and only if its expansion
$$\phi(x \cdot y) = \sum_{k=0}^{\infty} \hat{\phi}_k\,\frac{D_k^d}{\Omega_{d-1}}\,P_k^d(x \cdot y) \tag{9}$$
has all Fourier–Legendre coefficients $\hat{\phi}_k > 0$ and $\sum_{k=0}^{\infty} \hat{\phi}_k D_k^d < \infty$. We define the native space $\mathcal{N}_\phi$ as
$$\mathcal{N}_\phi := \Big\{ f \in L^2(\mathbb{S}^{d-1}) : \|f\|_{\mathcal{N}_\phi}^2 := \sum_{k=0}^{\infty}\sum_{l=1}^{D_k^d} \frac{|\hat{f}_{k,l}|^2}{\hat{\phi}_k} < \infty \Big\} \tag{10}$$
with its inner product
$$\langle f, g \rangle_{\mathcal{N}_\phi} := \sum_{k=0}^{\infty}\sum_{l=1}^{D_k^d} \frac{\hat{f}_{k,l}\,\hat{g}_{k,l}}{\hat{\phi}_k}, \tag{11}$$
where $\hat{f}_{k,l} := \int_{\mathbb{S}^{d-1}} f(x)\,Y_{k,l}(x)\,d\omega(x)$ denotes the Fourier coefficient of $f$.
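A useful way to read (10), under the polynomial-decay assumption on the coefficients introduced below (see (13)), is as a Sobolev space in disguise: if $\hat{\psi}_k \asymp (k+1)^{-2s}$, then
$$\|f\|_{\mathcal{N}_\psi}^2 \asymp \sum_{k=0}^{\infty}\sum_{l=1}^{D_k^d} (k+1)^{2s}\,|\hat{f}_{k,l}|^2 = \|f\|_{H^s(\mathbb{S}^{d-1})}^2,$$
so that $\mathcal{N}_\psi = H^s(\mathbb{S}^{d-1})$ as sets, with equivalent norms. Smoother kernels (faster coefficient decay) thus generate smaller native spaces, which is the phenomenon driving the native space barrier.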

3. Interpolation and Near Best Approximation

Let $\Xi = \{\xi_1, \ldots, \xi_n\} \subset \mathbb{S}^{d-1}$ be a set of points and let $d(x, y) := \arccos(x \cdot y)$ be the spherical distance between $x$ and $y$. We denote by
$$h_\Xi := \max_{x \in \mathbb{S}^{d-1}} \min_{\xi \in \Xi} d(x, \xi), \qquad q_\Xi := \frac{1}{2} \min_{\substack{\xi, \zeta \in \Xi \\ \xi \neq \zeta}} d(\xi, \zeta), \qquad \tau_\Xi := \frac{h_\Xi}{q_\Xi}$$
the mesh norm, separation radius, and mesh ratio of $\Xi$, respectively. It is easy to check that these three quantities describe the geometric distribution of the points in $\Xi$. The $\tau$-uniform family $\mathcal{U}_\tau$ is defined as the family of all center sets $\Xi$ with $\tau_\Xi \le \tau$.
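A short cap-counting computation clarifies the scales involved. The $n$ spherical caps of radius $q_\Xi$ centered at the points of $\Xi$ are mutually disjoint, and a cap of radius $r$ has surface area $\asymp r^{d-1}$, so comparing areas gives $n\,q_\Xi^{d-1} \lesssim 1$; on the other hand, the caps of radius $h_\Xi$ cover $\mathbb{S}^{d-1}$, so $n\,h_\Xi^{d-1} \gtrsim 1$. Hence, for $\Xi \in \mathcal{U}_\tau$,
$$q_\Xi \le h_\Xi \le \tau\, q_\Xi \qquad \text{and} \qquad h_\Xi \asymp q_\Xi \asymp n^{-1/(d-1)},$$
with constants depending only on $d$ and $\tau$; this is the sense in which quasi-uniform centers trade geometric distribution for the single parameter $n$.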

Let $\phi$ and $\psi$ satisfy
$$c_1\,(k+1)^{-2t} \le \hat{\phi}_k \le c_2\,(k+1)^{-2t}, \tag{12}$$
$$c_3\,(k+1)^{-2s} \le \hat{\psi}_k \le c_4\,(k+1)^{-2s}, \tag{13}$$
with
$$\frac{d-1}{2} < s \le t < 2s - \frac{d-1}{2}, \tag{14}$$
where $c_1$, $c_2$, $c_3$, and $c_4$ are positive constants.
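As a concrete numerical illustration of (12)–(14) (our own example, not a case treated specially in what follows): on $\mathbb{S}^2$, that is, $d = 3$, one may take $t = 3$ and $s = 5/2$. Then
$$\frac{d-1}{2} = 1 < s = \frac{5}{2} \le t = 3 < 2s - \frac{d-1}{2} = 4,$$
so (14) holds; $\phi$ generates (a space with norm equivalent to) $H^{3}(\mathbb{S}^2)$, while $\psi$ generates the strictly larger space $H^{5/2}(\mathbb{S}^2) \supset H^{3}(\mathbb{S}^2)$, in which the smooth radial basis functions will be embedded.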

The Sobolev embedding theorem [12] implies that if $s > (d-1)/2$, then $\mathcal{N}_\phi$ and $\mathcal{N}_\psi$ are continuously embedded in $C(\mathbb{S}^{d-1})$, and so they are reproducing kernel Hilbert spaces, with reproducing kernels being $\phi(x \cdot y)$ and $\psi(x \cdot y)$, respectively.
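The embedding can also be verified directly from (10) and the addition formula (7), a one-line computation we include for completeness: by the Cauchy–Schwarz inequality,
$$|f(x)| \le \sum_{k,l} |\hat{f}_{k,l}|\,|Y_{k,l}(x)| \le \|f\|_{\mathcal{N}_\psi} \Big( \sum_{k=0}^{\infty} \hat{\psi}_k \sum_{l=1}^{D_k^d} |Y_{k,l}(x)|^2 \Big)^{1/2} = \|f\|_{\mathcal{N}_\psi} \Big( \sum_{k=0}^{\infty} \hat{\psi}_k\,\frac{D_k^d}{\Omega_{d-1}} \Big)^{1/2},$$
and the last series converges because $\hat{\psi}_k D_k^d \asymp (k+1)^{-2s+d-2}$ and $2s > d-1$. The same computation with $\hat{\phi}_k$ in place of $\hat{\psi}_k$ covers $\mathcal{N}_\phi$.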

The aim of this section is to study the relation between exact interpolation and near best approximation for $\Phi_n$ with fixed centers set $\Xi \in \mathcal{U}_\tau$ and activation function $\phi$ satisfying (12) and (14). It is obvious that such a $\Phi_n$ is a linear space. The following Theorem 1 shows that there exists an SRBFN interpolant which can near best approximate in the metric of $\mathcal{N}_\psi$, where $\psi$ satisfies (13) and (14).

Theorem 1. Let $X = \{x_1, \ldots, x_m\} \subset \mathbb{S}^{d-1}$ be the set of scattered data with separation radius $q_X$, and let $\Xi \in \mathcal{U}_\tau$ be the set of centers. If $\phi$ and $\psi$ satisfy (12), (13), and (14) and $n \ge C_0\,q_X^{-(d-1)}$, where $C_0$ is a constant depending only on $d$, $\tau$, $\phi$, and $\psi$, then, for every $f \in \mathcal{N}_\psi$, there exists an SRBFN interpolant $N_f \in \Phi_n$ such that (i) $N_f$ exactly interpolates the samples; that is, $N_f(x_j) = f(x_j)$, $j = 1, \ldots, m$; (ii) $\|f - N_f\|_{\mathcal{N}_\psi} \le C \operatorname{dist}(f, \Phi_n, \mathcal{N}_\psi)$, where $C$ is a constant depending only on $\phi$, $\psi$, and $\tau$.

Remark 2. Similar results have been considered for spherical polynomials. Narcowich et al. [11, 12] proved that there exists a spherical polynomial interpolant of degree at most $\mathcal{O}(q_X^{-1})$ which can also near best approximate the target function in the corresponding metrics.

To prove Theorem 1, we need the following three lemmas, which can be found in [12], [12, Theorem 5.5], and [19, Example 2.10], respectively.

Lemma 3. Let $W$ be a (possibly complex) Banach space, $V$ a subspace of $W$, and $Z^*$ a finite-dimensional subspace of $W^*$, the dual of $W$. If for every $z^* \in Z^*$ and some $\beta \ge 1$, independent of $z^*$,
$$\|z^*\|_{W^*} \le \beta\,\big\|z^*|_V\big\|_{V^*}, \tag{15}$$
then for $f \in W$ there exists $v \in V$ such that $v$ interpolates $f$ on $Z^*$; that is, $z^*(f) = z^*(v)$ for all $z^* \in Z^*$. In addition, $v$ approximates $f$ in the sense that $\|f - v\|_W \le (1 + 2\beta)\,\operatorname{dist}(f, V, W)$.
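In the application below, the norming condition (15) has a transparent Hilbert space meaning, which we point out as a guide to the proof of Theorem 1: with $W = \mathcal{N}_\psi$ and $z^* = \sum_{j=1}^{m} c_j\,\delta_{x_j}$, the Riesz representer of $z^*$ is the function $v = \sum_{j=1}^{m} c_j\,\psi(x_j \cdot x)$, and (15) asks precisely that the orthogonal projection of $v$ onto $V = \Phi_n$ retain at least the fraction $1/\beta$ of its norm,
$$\|P_{\Phi_n} v\|_{\mathcal{N}_\psi} \ge \frac{1}{\beta}\,\|v\|_{\mathcal{N}_\psi}.$$
This is the form in which the lemma is used in the proof below.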

Lemma 4. Let $\phi$, $\psi$, and $\Xi \in \mathcal{U}_\tau$ be as above. If $f \in \mathcal{N}_\phi$, then there is a $g_f \in \Phi_n$ such that
$$\|f - g_f\|_{\mathcal{N}_\psi} \le C\,h_\Xi^{t-s}\,\|f\|_{\mathcal{N}_\phi}, \tag{16}$$
where $C$ is a constant depending only on $\phi$, $\psi$, and $\tau$.

Lemma 5. Let $\psi$ be defined in (13) and (14) and let $Y = \{y_1, \ldots, y_l\} \subset \mathbb{S}^{d-1}$ be a finite set with separation radius $q_Y$. Then for an arbitrary set of real numbers $\{a_1, \ldots, a_l\}$, we have
$$\Big\|\sum_{j=1}^{l} a_j\,\psi(y_j \cdot x)\Big\|_{\mathcal{N}_\phi} \le C\,q_Y^{-(t-s)}\,\Big\|\sum_{j=1}^{l} a_j\,\psi(y_j \cdot x)\Big\|_{\mathcal{N}_\psi}, \tag{17}$$
where $C$ is a constant depending only on $\phi$ and $\psi$.

Now we provide the proof of Theorem 1.

Proof of Theorem 1. We apply Lemma 3 to the case in which the underlying space is the native space $\mathcal{N}_\psi$. Let $V = \Phi_n$ and $Z^* = \operatorname{span}\{\delta_{x_1}, \ldots, \delta_{x_m}\}$, where $\delta_{x_j}$ denotes the point-evaluation functional at $x_j$. So, in order to prove Theorem 1, it suffices to prove that
$$\Big\|\sum_{j=1}^{m} c_j\,\delta_{x_j}\Big\|_{\mathcal{N}_\psi^*} \le 2\,\Big\|\Big(\sum_{j=1}^{m} c_j\,\delta_{x_j}\Big)\Big|_{\Phi_n}\Big\|_{\Phi_n^*} \tag{18}$$
for arbitrary real numbers $c_1, \ldots, c_m$ such that $\sum_{j=1}^{m} c_j^2 \neq 0$. Since $\mathcal{N}_\psi$ is a reproducing kernel Hilbert space with $\psi(x \cdot y)$ its reproducing kernel, we have
$$\Big\|\sum_{j=1}^{m} c_j\,\delta_{x_j}\Big\|_{\mathcal{N}_\psi^*} = \Big\|\sum_{j=1}^{m} c_j\,\psi(x_j \cdot x)\Big\|_{\mathcal{N}_\psi} =: \|v\|_{\mathcal{N}_\psi}. \tag{19}$$
Similarly, we obtain
$$\Big\|\Big(\sum_{j=1}^{m} c_j\,\delta_{x_j}\Big)\Big|_{\Phi_n}\Big\|_{\Phi_n^*} = \|P_{\Phi_n} v\|_{\mathcal{N}_\psi}, \tag{20}$$
where $P_{\Phi_n} v$ is the orthogonal projection of $v$ onto $\Phi_n$ in the metric of $\mathcal{N}_\psi$. Then the Pythagorean theorem yields that
$$\|P_{\Phi_n} v\|_{\mathcal{N}_\psi}^2 = \|v\|_{\mathcal{N}_\psi}^2 - \|v - P_{\Phi_n} v\|_{\mathcal{N}_\psi}^2. \tag{21}$$
Thus, Lemma 3 with $\beta = 2$ yields that in order to prove (18), it suffices to prove
$$\|v - P_{\Phi_n} v\|_{\mathcal{N}_\psi} \le \frac{1}{2}\,\|v\|_{\mathcal{N}_\psi}, \tag{22}$$
since (21) and (22) imply $\|P_{\Phi_n} v\|_{\mathcal{N}_\psi} \ge \frac{\sqrt{3}}{2}\|v\|_{\mathcal{N}_\psi} \ge \frac{1}{2}\|v\|_{\mathcal{N}_\psi}$. Let $v_N \in \Pi_N^d$ be the best polynomial approximation of $v$ in the metric of $\mathcal{N}_\psi$; then for arbitrary $g \in \Phi_n$, we have
$$\|v - P_{\Phi_n} v\|_{\mathcal{N}_\psi} \le \|v - g\|_{\mathcal{N}_\psi} \le \|v - v_N\|_{\mathcal{N}_\psi} + \|v_N - g\|_{\mathcal{N}_\psi}. \tag{23}$$
It follows from [12, Page 382] that there exists a constant, $c^*$, depending only on $d$ and $s$ such that for arbitrary $N \ge c^*\,q_X^{-1}$, there holds
$$\|v - v_N\|_{\mathcal{N}_\psi} \le \frac{1}{4}\,\|v\|_{\mathcal{N}_\psi}. \tag{24}$$
Fix such an $N$ and let $g \in \Phi_n$ be the best approximation of $v_N$ in the metric of $\mathcal{N}_\psi$. Then it follows from Lemma 4 and the well-known Bernstein inequality [17] that
$$\|v_N - g\|_{\mathcal{N}_\psi} \le C_1\,h_\Xi^{t-s}\,\|v_N\|_{\mathcal{N}_\phi}. \tag{25}$$
Since $v_N$ is the best polynomial approximation of $v$ in the metric of $\mathcal{N}_\psi$, and the inner product (11) is diagonal in the basis $\{Y_{k,l}\}$, so that $v_N$ is the truncation of the spherical harmonic expansion of $v$ at degree $N$, a simple computation yields
$$\|v_N\|_{\mathcal{N}_\phi} \le \|v\|_{\mathcal{N}_\phi}. \tag{26}$$
Furthermore, it follows from Lemma 5 that
$$\|v\|_{\mathcal{N}_\phi} \le C_2\,q_X^{-(t-s)}\,\|v\|_{\mathcal{N}_\psi}. \tag{27}$$
Then, (25) together with (26) and (27) yields that
$$\|v_N - g\|_{\mathcal{N}_\psi} \le C_3\,\Big(\frac{h_\Xi}{q_X}\Big)^{t-s}\,\|v\|_{\mathcal{N}_\psi}, \tag{28}$$
where $C_3$ is a constant depending only on $\phi$, $\psi$, and $\tau$. Since $h_\Xi \le \tau\,q_\Xi \le C_4\,\tau\,n^{-1/(d-1)}$ for $\Xi \in \mathcal{U}_\tau$, there exists a constant $C_0$ depending only on $d$, $\tau$, $\phi$, and $\psi$ such that for arbitrary $n \ge C_0\,q_X^{-(d-1)}$, there holds
$$\|v_N - g\|_{\mathcal{N}_\psi} \le \frac{1}{4}\,\|v\|_{\mathcal{N}_\psi}. \tag{29}$$
Inserting (24) and (29) into (23), we finish the proof of (22) and then complete the proof of Theorem 1.

Acknowledgments

An anonymous referee carefully read the paper and provided numerous constructive suggestions, which noticeably improved its overall quality; we are much indebted and grateful. The research was supported by the National 973 Program (2013CB329404), the Key Program of the National Natural Science Foundation of China (Grant no. 11131006), and the National Natural Science Foundation of China (Grant no. 61075054).