Abstract

Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning, and the resulting solutions have been adopted by other sciences such as the space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments that are vital to the analysis and visualization of complex data sets and images. One of the goals of this paper is to help bridge the gap between applied mathematics and artificial intelligence on the one side and astronomy on the other.

1. Introduction

Astronomy is undergoing a rapid, unprecedented and accelerating growth in both the amount and the intrinsic complexity of data. This results partly from past and future large sky surveys: the Sloan Digital Sky Survey [1], the Large Synoptic Survey Telescope (LSST) [2, 3], ESA’s GAIA mission [4], Pan-STARRS [5, 6], the Palomar Transient Factory [7], LAMOST [8], and the Palomar-Quest Survey [9]. Smaller surveys and catalogs comprised of 10^3–10^4 objects appear annually. The increasing availability of multiple-object spectrographs deployed at ground-based observatories enables observers to obtain spectra of hundreds of objects in a single exposure [10–16]. Million-object spectrographs have been proposed and are undergoing design studies (e.g., [17, 18]). All together, and sometimes individually, such projects are creating for astronomy massive multitemporal and multispectral data sets comprised of images spanning multiple wavebands and including billions of objects. Furthermore, the Virtual Observatory (VO) is undertaking to combine existing historical data from all wavelengths into what will be, from the user’s perspective, a single data set of gigantic size and unprecedented complexity (http://www.ivoa.net/).

In addition, mathematically new (for astronomy) forms of data are starting to appear, such as those of the ESA Planck mission in which the cosmic microwave background (CMB) is characterized by a matrix at each point in the sky [19].

Other great challenges arise from so-called three-dimensional (3D) reconstructions. For example, a very important and difficult problem of solar astrophysics is the 3D reconstruction of coronal mass ejections [20]. The resulting reconstruction problem cannot be solved via classical methods and must be addressed by more modern image processing methods like compressed sensing (see Section 5). Another example of tomographic reconstruction comes from the mapping of neutral hydrogen using the redshifted 21 cm line, which has recently emerged as a promising cosmological probe. A number of radio telescopes are currently being proposed, planned or constructed to observe the redshifted 21 cm hydrogen line from the Epoch of Reionization (e.g., the Fast Fourier Transform Telescope, [21]).

The richness and complexity of new data sets will provide astronomy with a wealth of information, and most of the research progress expected from such sets inherently rests in their enormity and complexity. In order to take full advantage of immense multispectral, multitemporal data sets, their analysis should be automated, or at least semi-automated, and should consist of the following steps: detection, characterization and classification of various features of interest, followed, if necessary, by automated decision-making and possibly by intelligent automatic alerts (e.g., for follow-up observations or data quality problems). Moreover, real-time processing may be required. Complex data also call for scientific visualization rather than ordinary illustrative visualization. By scientific visualization we mean visualization that does not simply reproduce visible things, but makes things visible, thus enabling extraction of meaningful patterns from multiparametric data sets and ultimately facilitating analysis. All this requires the development and adaptation of modern methods for data representation, analysis and visualization. Methods now standard in astronomy are often ineffective because the data sets are too large and too complex for existing tools to be scaled in a straightforward way from handling several parameters up to dozens or more. There are also important limitations inherent in the mathematical algorithms familiar to astronomers. Scientific visualization, dimensionality reduction, and non-parametric methods in general are among the least-advanced categories of tools in astronomy because until this century there have not been data sets requiring new approaches to those aspects of the data. All this not only creates a critical need for new sophisticated tools, but also urgently calls for new paradigms for the analysis, visualization and organization of heterogeneous, multiresolution data across application domains.

The astronomical community is becoming increasingly aware of the fact that new and advanced methods of applied mathematics, statistics and information technology should be developed and utilized. Three of the State of the Profession Position Papers submitted to the Astronomy and Astrophysics Decadal Survey strongly emphasized astronomy’s need for new computational and mathematical tools in the coming decade [22–24] (see: http://sites.nationalacademies.org/bpa/BPA_049492). A variety of individual and organizational initiatives and research projects have been proposed or are underway to at least partially address this looming challenge by developing or implementing a new generation of methods of data representation, processing and visualization.

However, there is a large communication gap between astronomy and other fields where adequate solutions exist or are being developed: applied mathematics, statistics and artificial intelligence.

The principal objectives of this paper are twofold. First, we wish to bring attention to some specific needs for new data analysis techniques. Second, we describe some innovative approaches and solutions to some of these problems and give examples of novel tools that permit effective analysis, representation, and visualization of the new multispectral/multitemporal data sets that offer enormous richness if they are mined with the appropriate tools. The extensive amount of relevant work already accomplished in disciplines outside of astronomy does not allow us to offer a complete review of all aspects of these complex topics and problems, but we have selected a number of important examples.

The structure of the paper is as follows. In Section 2 we discuss challenges related to semi-automated processing of low-dimensional images and briefly describe our framework for advanced astronomical image processing. Section 3 describes problems posed by complex astronomical data sets; Section 4 addresses some of these problems and presents innovative methods for nonlinear dimension reduction and sampling on graphs and manifolds; it also briefly describes a new unifying method for image segmentation and information visualization that is based on physical intuition derived from the synchronization of nonlinear oscillations; Section 5 reviews some recent approaches to the challenges posed by high-dimensional, complex data sets and their importance for cosmological and astrophysical problems; in particular, we briefly discuss a new tool called needlets for processing data that lie on the sphere, applications of needlets to the analysis of CMB data, and a generalization of wavelet-like transforms to Riemannian manifolds. The paper ends with a conclusion.

2. Framework for Processing and Visualizing Astronomical Images

Vast data sets demand automated or semi-automated image processing and quality assessment of the processed images. Indeed, the sheer number of observed objects awaiting analysis makes obvious the need for sophisticated automation of object detection, characterization and classification. Adapting recent advances in computer vision and image processing for astronomy, and designing and implementing an image processing framework (see below) that would utilize these continuing achievements, remains, however, a major challenge.

Developers and users alike have realized that there is more to creating an application than simple programming. The objective of creating a flexible (see below) application is customarily achieved by exploiting the object-oriented paradigm [25, 26]. In computer science such systems are called frameworks [27]. Frameworks are flexible, and users can extend frameworks’ capabilities by installing plug-ins without having to rewrite the basic code. An application framework consists of computational (processing) modules, data, and an interactive interface. Frameworks are extendable (i.e., they can easily adapt to changes of specification) and reusable (the software elements are able to serve in the construction of many different new applications).

The keystone elements of a system that unifies a wide range of methods for astronomical image processing should be computational and visualization modules. Such a platform-independent framework with an integrated environment was described in Pesenson et al. [28]. It provides many common visualization features plus the ability to combine images (overlays, etc.) in new and unique ways. The framework has a very intuitive GUI that allows the user to set up a custom pipeline interactively and to process images singly or in batch mode. The final products, as well as the results of each step, are viewable with the framework. It also provides access to different data archives and can easily incorporate custom modules written in any programming language. Figures 1–4 give a few self-explanatory examples that demonstrate some of the framework’s functionality.

The framework deals primarily with image regularization and segmentation. These are fundamental first steps for the detection and characterization of image elements or objects and as such they play principal roles in the realization of automated computer vision applications. Image segmentation can roughly be described as the process that subdivides an image into its constituent parts (objects) and extracts those parts of interest. Since the inception of image segmentation in the 1960s, a large number of techniques and algorithms for image segmentation have been developed.

However, due to revolutionary advances in instrumentation, the complexity of images has changed significantly: from grey-level images to multi- and hyperspectral images, from 2D images to 3D, from still images to sequences of images, to tensor-valued images (polarization data), and so forth. Some modern, cutting-edge methods for image processing have been developed lately, and are being developed today, by information scientists outside of astronomy. The substantial progress in this direction made by the image processing and computer vision communities [30, 31] has found multiple applications in physics, technology, and the bio-medical sciences. Unfortunately, for the most part these advances have not yet been utilized by the astronomical community.

Multiscale image representation and enhancement are such approaches. They have become important parts of computer vision systems and modern image processing. The multiscale approach has proven to be especially useful for image segmentation and for feature and artifact detection. It enables a reliable search for objects of widely different morphologies, such as faint point sources, diffuse supernova remnants, clusters of galaxies, undesired data artifacts, as well as unusual objects needing detailed inspection by a scientist. It is well known that in astronomical images one often sees both point sources and extended objects such as galaxies embedded in extended emission (see, e.g., Figures 2, 6 and 7). Because of the issue of robustness with respect to noise (Section 4), careful preprocessing is required before one can safely apply image segmentation or dimension reduction. So, an adequate way of preprocessing is what needs to be addressed first.

One effective approach to denoising is based on partial differential equations and may be seen as the local adaptive smoothing of an image along defined directions that depend on local intensities. One wants to smooth an image while preserving its features by performing a local smoothing mostly along the directions of edges while avoiding smoothing orthogonally to these edges. Many regularization schemes have been developed so far for the case of simple two-dimensional scalar images. An extension of these algorithms to vector-valued images is not straightforward. For a gray-scale image, the gradient is always perpendicular to the level sets of the image; however, in the multi-channel case, this property does not hold. Applying nonlinear diffusion to each channel or spectral band separately is one possible way of processing multi- and hyperspectral cubes; however, it does not take advantage of the richness of the multi/hyperspectral data. Moreover, if the edge detector acts only on one channel, it may lead to undesirable effects, such as color blurring, where edges in color images are blurred due to the different local geometry in each channel. Hence, a coupling between image channels should appear in the equations through the local vector geometry. We achieve this by implementing a nonlinear diffusion on a weighted graph, thus generalizing the approach adopted by Pesenson et al. [28]. The governing equation is linear, but the nonlinearity enters through the weights assigned to the graph edges. This algorithm respects the local smoothing geometry and thus serves well as the preprocessing step required for dimension reduction (Section 4).
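
As a concrete (and deliberately simplified) illustration of the idea, the sketch below performs diffusion on a pixel graph whose edge weights are a Gaussian kernel of local intensity differences. It is only a minimal stand-in for the scheme outlined above, not the implementation of Pesenson et al. [28]; the function and parameter names are illustrative.

```python
import numpy as np

def graph_diffusion_denoise(img, sigma_i=0.1, tau=0.2, n_iter=20):
    """Explicit diffusion on a 4-neighbour pixel graph (toy sketch).

    Edge weights depend on local intensity differences, so smoothing is
    strong inside flat regions and weak across sharp edges.  Borders are
    treated periodically (np.roll) purely for brevity.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        neighbours = [np.roll(u, 1, 0), np.roll(u, -1, 0),
                      np.roll(u, 1, 1), np.roll(u, -1, 1)]
        update = np.zeros_like(u)
        for v in neighbours:
            d = v - u                           # intensity difference along an edge
            w = np.exp(-(d / sigma_i) ** 2)     # edge weight: small across strong edges
            update += w * d                     # weighted graph-Laplacian term
        u += tau * update                       # explicit Euler step of the diffusion
    return u

# Usage (hypothetical): denoised = graph_diffusion_denoise(noisy_image, sigma_i=0.05, tau=0.15)
```

Because the weights shrink wherever the intensity jumps, smoothing proceeds along edges rather than across them, which is the behavior required of a preprocessing step for segmentation and dimension reduction.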

Our framework handles two-dimensional scalar images and paves the way to semi-automated and fully automated image processing and image quality assessment. However, the ability to extract useful knowledge from high-dimensional data sets is becoming more and more important (Section 3). It is closely related to finding complicated structural patterns of interrelated connections and depends heavily on the ability to reduce the dimensions of the raw data. This problem constitutes a major challenge for the scientific and technology communities. To extend the framework’s functionality to high-dimensional images and data sets we have been developing novel, practical algorithms for dimension reduction. In the next section we describe some of the challenges presented by modern, high-dimensional data sets.

3. Complex, Massive Data Sets

Astronomy has long found the use of multiple data dimensions to be a crucial aid to progress. For example, surveys of H-alpha sources, when cross-correlated with spectral types, guided astronomy to the discovery of interesting new types of objects such as Herbig AeBe stars [32]. Cross-comparison of brightnesses, redshifts, and optical object morphologies led to the discovery of quasars. X-ray emission proved to be a highly efficient method of identifying pre-main sequence objects in a large field of view (e.g., the review [33]). The fact that the same object (SS 433) was noticed to exhibit H-alpha emission and strong radio and optical variability led astronomers to investigate further and thus discover the existence of microquasars [34–36]. Other discoveries from the use of multidimensional data using just a few dimensions include such things as Be X-ray binaries and soft gamma-ray repeaters.

Simple color-magnitude diagrams are another traditional tool taking advantage of multiple data dimensions (e.g., cataloging YSO candidates from Spitzer survey data, Whitney et al. [37]). In addition, it is well known that simple color-color plots using four colors provide ways to efficiently and fairly reliably classify into physical types a large number of objects (IRAS data, e.g., [38, 39]; 2MASS data, e.g., [40]; MSX data, e.g., [41, 42]; Spitzer data, e.g., [43]) as well as to discover interesting new objects as outliers (e.g., Luminous Red Novae, [44, 45]). The meaning and usefulness of more complicated multiwavelength cross-correlations across widely-separated wavelength domains remain a nontrivial but fruitful challenge still being explored (e.g., microquasars identified by comparing radio, IR, and X-ray properties [46]).

As multiwavelength data have become available for huge numbers of objects in the past few decades, the number of data dimensions for a typical object has grown beyond what can be visualized and studied using classical color-color plots and correlations using only a few dimensions. For example, Egan et al. [41] write “six-dimensional data are difficult to represent; in addition, it is not clear that all of the colors yield completely independent information” (see Figure 5).

While Figure 5 successfully permits one to identify meaningful patterns, it is easily imagined that attempting to display a much larger number of data dimensions by simply using many more colors and symbols overlaid on the same plot and same region of parameter space would not be a useful form of scientific visualization. Also, two-dimensional plots are obviously not useful for expressing relationships among more than four colors. Effective scientific visualization of a large number of data dimensions requires new techniques.

The catalog information available for a great many objects already includes such information as magnitude, several optical and IR colors, metallicities, spectral types, and so on. A galaxy catalog would of course include other parameters such as morphology and redshift. When spectra are considered and compared in detail, the huge number of emission and absorption features obviously compounds the problem vastly. Still more complexity is added when one attempts to correlate a large grid of models with a large data set having many dimensions (e.g., the YSO analysis [37]) in order to create feedback for models based on statistically significant samples rather than on a few putative prototypes.

Other examples of high-dimensional data include, but are not limited to, multiparametric data sets (e.g., the manifold of galaxies, [47, 48]), multitemporal, multispectral and hyperspectral data sets and images, and high-resolution simulations on massively parallel computers. The magnitude of the computational challenge for pattern recognition and classification algorithms is suggested by the fact that the VO will contain billions of data points each having hundreds of dimensions [49].

All these examples clearly demonstrate that the automated and semiautomated processing required by the unprecedented and accelerating growth in both the amount and the complexity of astronomical data demands new ways of information representation. In the next section we discuss such approaches and describe some of the original algorithms we have developed in the course of this ongoing work.

4. Dimensionality Reduction and Sampling for Complex and Large Data Sets

Because approaches to complex data require advanced mathematics, astronomers who wish to take advantage of them will need at least some basic knowledge of new, unfamiliar mathematical concepts and terminology. The full practical adoption of such methods requires interdisciplinary scientists who understand the new approaches in depth and are interested in working with astronomers to adapt and apply the methods to astronomical data sets. This is already happening as part of some research. The first step for a “neophyte” astronomer, however, is to learn “what is out there” as a basis for further investigation and consideration of the utility of various methods. The goal of this section is to offer a very basic introduction and explanation of some of these unfamiliar concepts.

Machine learning [50] is becoming increasingly important in astronomy (see extensive reviews [51, 52]). The main objectives of machine learning are clustering (automatic identification of groups of similar objects), and classification (assigning labels to instances). However, high dimensionality complicates machine learning and can easily thwart the entire effort. It also becomes a formidable obstacle in computing numerical solutions in Bayesian statistics for models with more than a few parameters.

Bellman [53] coined the term “curse of dimensionality” to describe how difficult it was to perform high-dimensional numerical integration. For example, 100 evenly-spaced sample points suffice to sample a unit interval with no more than 0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice with a spacing of 0.01 between adjacent points would require 10^20 sample points. Thus, in some sense, the 10-dimensional hypercube can be said to be a factor of 10^18 “larger” than the unit interval. Obviously, this will make many computational tasks intractable for high-dimensional data sets.
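
The arithmetic behind these numbers is elementary; the toy computation below simply reproduces the counts quoted above.

```python
# Lattice points needed for a spacing of 0.01 per axis in d dimensions.
points_per_axis = 100                  # samples the unit interval at 0.01 spacing
for d in (1, 2, 10):
    print(d, points_per_axis ** d)     # 1 -> 10^2, 2 -> 10^4, 10 -> 10^20
```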

Euclidean spaces are usually used as models for traditional astronomical data types (scalars, arrays of scalars). Another general manifestation of high dimensionality is the fact that in a high-dimensional Euclidean space, volume expands far more rapidly with increasing diameter than it does in lower-dimensional spaces. Indeed, if one compares the size of the unit ball with the unit cube as the dimension of the space increases, it turns out that the unit ball becomes an insignificant volume relative to that of the unit cube. Thus, in some sense, nearly all of the high-dimensional space is “far away” from the center. This is called the “empty space phenomenon” [54, 55]: the high-dimensional unit cube can be said to consist almost entirely of “corners”, with almost no “middle”. One more important example of the unexpected properties of high-dimensional Euclidean spaces is the following behavior of the Gaussian distribution in high dimensions: the radius of the hypersphere that contains 95% of the distribution grows as the dimensionality increases.
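
A small numerical illustration of the empty space phenomenon, assuming the ball inscribed in the unit cube (the precise normalization does not matter for the point being made): the fraction of the cube’s volume occupied by the ball collapses towards zero as the dimension grows.

```python
import numpy as np
from scipy.special import gammaln

def inscribed_ball_fraction(d):
    """Volume of the ball of radius 1/2 inscribed in the unit cube, in d dimensions."""
    # log V = (d/2) log(pi) + d log(1/2) - log Gamma(d/2 + 1)
    return np.exp((d / 2) * np.log(np.pi) + d * np.log(0.5) - gammaln(d / 2 + 1))

for d in (2, 3, 10, 20):
    print(d, inscribed_ball_fraction(d))
# ~0.785 in 2D, ~0.524 in 3D, ~0.0025 in 10D: almost all volume sits in the "corners".
```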

These sorts of problems demonstrate that in order to make practical the extraction of meaningful structures from multiparametric, high-dimensional data sets, a low-dimensional representation of data points is required. Dimension reduction (DR) is motivated by the fact that the more we are able to reduce the dimensionality of a data set, the more regularities (correlations) we have found in it and, therefore, the more we have learned from the data. Data dimension reduction is an active branch of applied mathematics and statistics [50]. It consists of methods for finding a lower-dimensional representation of high-dimensional data, without losing a significant amount of information, by constructing a set of basis functions that capture patterns intrinsic to a particular state space. DR methods greatly increase the computational efficiency of machine learning algorithms, improve statistical inference and enable effective scientific visualization and classification. From a large set of images obtained at multiple wavebands, effective dimension reduction provides a comprehensible, information-rich single image with minimal loss of information and statistical detail, unlike a simple coadding with arbitrary, empirical weights (see a simple four-wavelength example in Figures 6 and 7).

Classical approaches to dimension reduction already familiar to astronomy are principal components analysis (PCA) [56] and multidimensional scaling [57]. Although first applied to astronomy by Bijaoui [56] in 1974, PCA was not commonly used until the 1990’s [58, 59]; see also a general introduction to PCA in [60]. It has been used for such things as classification of galaxies and quasars [59, 61], photometric and spectroscopic redshift estimation (e.g., [62, 63]), sky subtraction [64] and optical spectral indicators [65]. PCA has been used for the simultaneous analysis of dozens of data parameters for each member of a sample of 44 active galactic nuclei [66], which appears to be the largest number of parameters yet analyzed using PCA in an astronomical study.

PCA has a serious drawback in that it does not explicitly consider the structure of the manifold on which the data may reside. In differential geometry an n-dimensional manifold is a metric space that on a small scale resembles n-dimensional Euclidean space; thus a circle is a one-dimensional manifold, while a sphere is a two-dimensional manifold. PCA is intrinsically linear, so if data points form a nonlinear manifold, then obviously there is no rotation and shift of the axes (which is what a linear transform like PCA provides) that can “unfold” such a manifold. In other words, if the data are mainly confined to an almost linear low-dimensional subspace, then simple linear methods such as PCA can be used to discover the subspace and estimate its dimensionality. If, on the other hand, the data lie on (or near) a highly nonlinear low-dimensional submanifold, then linear methods are not effective in capturing the fine meaningful structures in the data. A blind application of linear methods may result in a complete misrepresentation of the data.
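
The point is easy to demonstrate on synthetic data. In the sketch below (a hypothetical example using scikit-learn’s “Swiss roll” surface as a stand-in for a curved data manifold), a linear PCA projection folds distant parts of the manifold on top of each other, whereas a nonlinear method such as ISOMAP (discussed in Section 5) recovers the underlying two-dimensional sheet.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# 3D points lying on a rolled-up 2D sheet; t parametrizes position along the roll.
X, t = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)

# Linear projection: PCA cannot "unroll" the surface, so points from distant
# parts of the manifold overlap in the 2D projection.
X_pca = PCA(n_components=2).fit_transform(X)

# A nonlinear method that respects geodesic (along-the-manifold) distances
# recovers the flat sheet.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# Plotting X_pca and X_iso coloured by t makes the difference apparent.
```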

We have recently developed some advanced, original methods for performing nonlinear DR, which do not suffer from the limitations of PCA. In what follows, we briefly describe these methods. First, we introduce some more concepts and methods that have proved to be effective in the area of machine learning.

4.1. Graphs

In the context of data retrieval and processing, dimensionality reduction methods based on graph theory have proved to be very powerful. In mathematics and computer science, graph theory is the study of mathematical structures used to model relations between objects [67]. Graph theory has been successfully applied to a wide range of very different disciplines, from biology to social science, computing and physics. Among many applications, graphs and manifolds have been used to address mining databases, Internet search engines, computer graphics, computation reordering, image processing, and so forth. In what follows we discuss various data structures and algorithms that we have already developed or are developing.

The graph representation of structured data provides a fruitful model for the relational data mining process. A graph is a collection of nodes and links between them; the nodes represent data points and the weights of the links or edges indicate the strength of relationships. A graph in which each graph edge is replaced by a directed graph edge is called a directed graph, or digraph [67]. Digraphs are used for context-sensitive browsing engines and for ranking hyper-linked documents for a given query.

The modern approach to multidimensional images or data sets is to approximate them by graphs or Riemannian manifolds. The first important, and very challenging, step is to convert such a data cloud into a weighted finite graph. The next important problem is the choice of the “right” weights to assign to the edges of the constructed graph. The weight function describes a notion of “similarity” between the data points and as such strongly affects the analysis of the data. The weights should be determined entirely by the application domain. The most obvious way to assign weights is to use a positive kernel such as an exponential function whose exponent depends on the local Euclidean distance between data points and on a subjectively chosen parameter called the “bandwidth”. (There are also other ways of assigning weights, which depend on more complex mathematics than we discuss in this article.)
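
A minimal sketch of this kernel-based weight assignment follows; the function name and the dense implementation are purely illustrative, and for large data sets one would keep only k-nearest-neighbor edges to obtain a sparse graph.

```python
import numpy as np

def gaussian_graph_weights(X, bandwidth):
    """Weighted graph from a point cloud X of shape (n_points, n_features).

    W[i, j] = exp(-||x_i - x_j||^2 / bandwidth) expresses the "similarity"
    of points i and j; the bandwidth controls how local that notion is.
    """
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-sq_dists / bandwidth)
    np.fill_diagonal(W, 0.0)                                    # no self-loops
    return W
```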

Next, after constructing a weighted graph, one can introduce the corresponding combinatorial Laplace operator [67, 68]. In machine learning, methods based on graph Laplacians have proved to be very powerful [50]. The eigenfunctions and eigenvalues of the Laplacian form a basis, thus allowing one to develop a harmonic or Fourier analysis on graphs. By further developing harmonic analysis on graphs and manifolds [69–81], we have devised innovative algorithms for data compression and nonlinear data dimension reduction. These results enable one to overcome PCA’s limitations for handling nonlinear data manifolds and also allow one to deal effectively with incomplete data (such as missing observations or partial sky coverage).
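
To make the construction concrete, the sketch below computes a plain Laplacian-eigenmap embedding from such a weight matrix. It illustrates only the standard textbook version of the idea, not the extended algorithms referred to above.

```python
import numpy as np

def laplacian_eigenmap(W, n_components=2):
    """Embed graph nodes using the smallest nontrivial Laplacian eigenvectors.

    Assumes a symmetric weight matrix W with no isolated nodes (positive degrees).
    """
    d = W.sum(axis=1)                              # node degrees
    L = np.diag(d) - W                             # combinatorial graph Laplacian
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ L @ D_inv_sqrt            # symmetric normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)       # ascending eigenvalues
    # Skip the trivial constant eigenvector (eigenvalue ~ 0); rescale back.
    return D_inv_sqrt @ eigvecs[:, 1:1 + n_components]

# Usage: coords = laplacian_eigenmap(gaussian_graph_weights(X, bandwidth=1.0))
```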

4.2. Hypergraphs

Most existing data mining and network analysis methods are limited to pairwise interactions. However, sets of astronomical objects usually exhibit multiple relationships, so restricting analysis to dyadic (pairwise) relations leads to loss of important information and to missed discoveries. Triadic, tetradic or higher-order interactions offer great practical potential. This has led to the approach based on hypergraphs (e.g., [82]). Hypergraphs constitute an important extension of graphs that allows edges to connect more than two vertices simultaneously. Thus hypergraphs provide a more comprehensive description of feature relations and structures. Hypergraphs furnish a much more adequate approach to real-world data sets and allow one to deal with clustering and classification using higher-order relations. It has been shown that, in general, there does not exist a graph model that correctly represents the cut properties of the corresponding hypergraph (Ihler et al. [83]). Thus new mathematical methods are required to take advantage of the richness of information available in hypergraphs. In order to provide means for the analysis of databases with multiple relationships, we are currently developing extensions of the original methods described in the previous subsection.

4.3. Fractals

Data sets having fractional dimensions (“fractals”: Mandelbrot [84]; Schroeder [85]; Faloutsos [86]) have been suggested to represent many phenomena in astronomy. Examples include star formation on galactic scales [87], the stellar initial mass function [88], galaxy distributions in space [89, 90], cloud distributions in active galactic nuclei [91], the cosmic microwave background [92], thermonuclear flame velocities in Type Ia supernovae [93], and the mass spectrum of interstellar clouds [94]. Such data sets exhibit a dimensionality which is often much lower than the dimension of the Euclidean space into which they are embedded. For example, the distribution of galaxies in the universe has a fractal dimension close to 2, well below the dimension 3 of the space in which the galaxies reside. The difference between the two dimensions occurs because the fractal dimensionality is intrinsic. The intrinsic dimension of a graph reflects the intrinsic dimension of the sampled manifold.

Obviously, an important first step in practical dimensionality reduction is a good estimate of the intrinsic dimension of the data. Otherwise, DR is no more than a risky guess since one does not know to what extent the dimensionality can be reduced. To enable analysis of astronomical data sets that exhibit a fractal nature, we are currently developing a practical concept of spectral dimensionality, as well as original algorithms for sampling, compression and embedding fractal data.
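
One widely used estimator of this kind is the correlation dimension of Grassberger and Procaccia; the sketch below (illustrative only, and reliable only when the radii lie within the scaling range where the pair-correlation sum is nonzero) fits the log-log slope of that sum.

```python
import numpy as np

def correlation_dimension(X, radii):
    """Grassberger-Procaccia estimate of the intrinsic (fractal) dimension.

    C(r) = fraction of point pairs closer than r; for a fractal set,
    C(r) ~ r^D over a range of scales, so D is the slope of log C vs. log r.
    The radii must be chosen so that C(r) > 0 for all of them.
    """
    n = len(X)
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    pair_d = dists[np.triu_indices(n, k=1)]            # unique pairs only
    C = np.array([(pair_d < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1) # log-log slope = dimension estimate
    return slope

# Usage: D = correlation_dimension(points, radii=np.logspace(-2, -0.5, 10))
```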

4.4. The Petascale Connection

The approaches described above dealt with compact manifolds and finite graphs. However, massive data sets are more adequately described by noncompact manifolds and infinite graphs. In order to deal with extremely large data sets we extended dimension reduction to non-compact manifolds and infinite graphs. We are also working on generalizing the Fourier analysis to non-compact Riemannian manifolds and infinite quantum and combinatorial graphs, directed graphs, hypergraphs and some fractals. Implementation of these algorithms, and incorporation of them into the framework described in Section 2, will enable a more adequate analysis of massive data sets.

4.5. Robustness of Dimension Reduction Algorithms with Respect to Noise

Despite the important and appealing properties of the above-mentioned dimension reduction algorithms, both linear and nonlinear approaches are sensitive to noise, outliers, and missing measurements (bad pixels, missing data values). Because noise and data imperfections may change the local structure of a manifold, algorithms that rely on preserving locality are topologically unstable and not robust against noise and outliers [95].

This is obviously a serious drawback because astronomical data are always corrupted by noise. Budavári et al. [96] presented a robust version of PCA and applied it to astronomical spectra. Their approach addressed the issues of outliers, missing information, the large number of dimensions and the vast amount of data by combining elements of robust statistics with recursive algorithms that improve the eigensystem estimates step by step. However, as mentioned earlier, the PCA method is intrinsically linear and cannot be applied to data points residing on a nonlinear manifold.

Thus the practical usage of dimension reduction demands careful improvement of signal-to-noise ratio without smearing essential features. Implementing a nonlinear diffusion on weighted graphs (Section 2) enables us to apply dimension reduction to noisy data sets as well.

4.6. Image Segmentation, Unsupervised Learning and Information Visualization

In what follows we briefly describe a new, original unifying approach to the segmentation of images in particular and to pattern recognition and information visualization in general. Image segmentation (see Section 2) plays a principal role in the realization of automated computer processing, as a preliminary stage for the recognition of different image elements or objects.

Solutions to the manifold learning problem can be based on intuition derived from physics. A good example of this is the approach of Horn and Gottlieb [97] that is based on the Schrödinger operator, where they constructed an approximate eigenfunction, and computed its corresponding potential. The clusters were then defined by the minima of the potential.

We developed [98] an alternative that is also based on physical intuition, this one derived from the synchronization of nonlinear oscillations [99, 100]. Approximating a multidimensional image or a data set by a graph and associating a nonlinear dynamical system with each node enables us to unify three seemingly unrelated tasks: image segmentation, unsupervised learning and data visualization. Pattern recognition may benefit significantly from new methods of visualization and representation of data, since they may uncover important relationships in multidimensional data sets. Bringing out patterns by setting data points into oscillatory motion is a very effective way of visualizing data [101]. Our method is a feature-based, multilevel algorithm that finds patterns by employing a nonlinear oscillatory motion. At the same time, the oscillatory motion reveals patterns to the eye by making them oscillate coherently with a frequency different from that of the rest of the graph. Patterns are detected recursively inside the graph, and the detected features are either collapsed into single nodes, forming a hierarchy of patterns, or can be zoomed into and studied individually. This method can be described as a precomputed animation, and it enables both qualitative (by eye) and quantitative discovery of correlations (see Figure 8).
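
The published algorithm [98] is not reproduced here; the toy sketch below conveys only the underlying physical intuition, using simple Kuramoto-type phase oscillators coupled through the graph weights: strongly connected groups of nodes pull each other into synchrony, so clusters can be read off from the final phases (or watched as coherent oscillation in an animation).

```python
import numpy as np

def kuramoto_phases(W, coupling=1.0, dt=0.05, n_steps=400, seed=0):
    """Toy phase-oscillator dynamics on a weighted graph (illustrative only).

    Each node carries a phase theta_i; nodes linked by large weights pull one
    another into synchrony, so tightly connected clusters end up oscillating
    coherently while weakly linked parts of the graph drift apart.
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)          # random initial phases
    omega = rng.normal(0.0, 0.1, n)               # small spread of natural frequencies
    for _ in range(n_steps):
        phase_diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
        theta += dt * (omega + coupling * (W * np.sin(phase_diff)).sum(axis=1))
    return np.mod(theta, 2 * np.pi)

# Nodes whose final phases cluster together belong to the same segment/pattern.
```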

The approaches presented in this section enable interpolation, smoothing and immersion of various complex (dozens or hundreds of useful parameters associated with each astronomical object) and large data sets into lower-dimensional Euclidean spaces. Classification in a lower-dimensional space can be done more reliably than in high dimensions. Thus, DR can be significantly beneficial as a pre-processing step for many existing astronomical packages, such as, for example, the popular source extractor SExtractor [102]. Incorporating into our framework (Section 2) the graph/manifold-based tools described in this section will make it possible to address multiple pattern recognition (clustering) tasks as well as the visualization of multidimensional data. Altogether, these approaches provide important generalizations of the tools for spectral clustering and dimensionality reduction, and enable more adequate representation, effective data retrieval and analysis of complex, modern astronomical data sets.

5. Some Recent Approaches to the Challenges of Data Intensive Astronomy

Data intensive astrophysics requires an interdisciplinary approach that will include elements of applied mathematics [67, 103], modern statistical methods [104, 105], machine learning [50], computer vision [31] and image processing [30]. The breadth and complexity of the work relevant to modern astronomical data challenges is large, and does not permit a full treatment here. However, it is valuable to briefly mention a number of important problems, approaches and efforts that have been pursued.

The problem of processing data that lie on a manifold is very important for cosmological data analysis. The standard, powerful data analysis package HEALPix [106] processes data on a two-dimensional manifold, the sphere. The concept of needlets (second-generation spherical wavelets) has recently attracted a lot of attention in the cosmological literature. The first application of needlets to cosmological data was provided by Pietrobon et al. [107]. They analyzed the cross-correlation of Wilkinson Microwave Anisotropy Probe (WMAP) CMB data with NRAO VLA Sky Survey (NVSS) radio galaxy data. The approach based on needlets enabled more accurate statistical results related to the dark energy component. The investigation of needlets from the probabilistic viewpoint and their relevance for the statistical analysis of random fields on the sphere was carried out for the first time by Baldi et al. [108]. A thorough presentation of spherical needlets for CMB data analysis is given by Marinucci et al. [109]. Various issues related to the CMB, such as spectrum estimation, detection of features and anisotropies, and mapmaking, were addressed by Faÿ et al. [110] and Pietrobon et al. [111] (see also references therein). The CMB models are best analyzed in the frequency domain, where the behavior at different multipoles can be investigated separately; on the other hand, problems such as missing observations or partial sky coverage make the exact evaluation of spherical harmonic transforms impossible. Needlets admit a simple and efficient reconstruction formula that enables one to perform frequency analysis using only partial information about the data, thus providing a means for handling masked data. Besides, needlets have other important properties that are not generally shared by other spherical wavelet constructions: they do not rely on any kind of tangent plane approximation; they have good localization properties in both pixel and harmonic space; and needlet coefficients are asymptotically uncorrelated at any fixed angular distance (which makes their use in statistical procedures very promising). All these unique properties make needlets a very valuable tool in various areas of CMB data analysis. Recently, Geller and Marinucci [112] introduced spin needlets as a tool for the analysis of spin random fields. Geller et al. [113] adopted the spin needlet approach for the analysis of CMB polarization measurements. Lately Geller and Mayeli [114] constructed continuous wavelets and nearly tight frames (needlets) on compact manifolds.
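
To make the idea of needlet-style frequency analysis on the sphere more tangible, the sketch below filters a HEALPix map through a smooth band-limited window using the healpy package. The raised-cosine window is only a crude stand-in for a genuine needlet window b(l/B^j) supported on B^(j-1) < l < B^(j+1), and the input file name is hypothetical.

```python
import numpy as np
import healpy as hp

def needlet_band(sky_map, j, B=2.0, lmax=512, nside=256):
    """Crude needlet-style band extraction of a HEALPix map (illustrative only)."""
    ell = np.arange(lmax + 1)
    lo, hi = B ** (j - 1), B ** (j + 1)
    x = np.clip((ell - lo) / (hi - lo), 0.0, 1.0)
    window = np.sin(np.pi * x) ** 2           # zero outside (lo, hi), smooth inside
    alm = hp.map2alm(sky_map, lmax=lmax)      # spherical harmonic coefficients a_lm
    alm_j = hp.almxfl(alm, window)            # multiply a_lm by the window in l
    return hp.alm2map(alm_j, nside)           # map containing only the scale-j band

# Usage (hypothetical file name): band = needlet_band(hp.read_map("cmb_map.fits"), j=4)
```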

The essential part of the general analysis based on wavelet-like constructions is the sampling of bandlimited functions. The mathematical foundations of sampling on arbitrary Riemannian manifolds of bounded geometry were laid down by Pesenson [115, 116]. Recently, Geller and Pesenson [117] constructed bandlimited and highly concentrated tight frames on compact homogeneous manifolds. These results can be considered as an extension of wavelet-like transforms to Riemannian manifolds, thus enabling one to reconstruct information from incomplete data defined on an arbitrary manifold.

Multiscale data analysis has proved to be a very powerful tool in many fields. Applications of multiscale image analysis to astronomy were pioneered by Starck et al. [118] and Starck and Murtagh [119].

Spectral methods and diffusion maps have recently emerged as effective approaches to nonlinear dimensionality reduction [120–122]. The diffusion maps approach has been successfully applied to the analysis of astronomical spectra by Richards et al. [123].

Manifold learning may be seen as a DR procedure aiming at capturing the degrees of freedom and structures (clusters, patterns) within high-dimensional data. Nonlinear manifold learning algorithms such as isometric mapping (ISOMAP) by Tenenbaum et al. [124] and local linear embedding (LLE) by Roweis and Saul [125] project high-dimensional manifold data into a low-dimensional space while preserving the local geometric features. Ball and Brunner [51] provide a very broad review of the current state of machine learning and data mining in astronomy.

The AstroNeural collaboration group [52] implemented tools based on neural networks, fuzzy-C sets and genetic algorithms, and applied them to perform complex tasks such as unsupervised and supervised clustering and time series analysis. D’Abrusco et al. [126] present a supervised neural network approach to the determination of photometric redshifts. D’Abrusco et al. [127] describe a method for the photometric selection of candidate quasars in multiband surveys based on the probabilistic principal surfaces. Pasian et al. [128] give a general review of the development of tools to be subsequently used within the international VO.

Comparato et al. [129] show how advanced visualization tools can help the researcher in investigating and extracting information from data. Their focus is on VisIVO, a new open-source graphics application that blends high performance multidimensional visualization techniques and up-to-date technologies to cooperate with other applications and to access remote, distributed data archives.

Draper et al. [130] discuss the GAIA application for analyzing astronomical images and show how the PLASTIC protocol has been used to inter-operate with VO-enabled applications.

The Center for Astrostatistics (CASt) at Pennsylvania State University provides a wealth of resources (codes, data, tutorials, programs, etc.) related to challenges in statistical treatments of astrophysical data: http://astrostatistics.psu.edu/.

A large amount of practical and up-to-date information (texts, tutorials, preprints, software, etc.) related to Bayesian inference in astronomy and other fields is provided by T. Loredo on the website Bayesian Inference for the Physical Sciences (BIPS) at http://www.astro.cornell.edu/staff/loredo/bayes/index.html.

The International Computational Astrostatistics (InCA) Group at Carnegie Mellon University develops and applies new statistical methods to inference problems in astronomy and cosmology, with an emphasis on computational nonparametric approaches (see for details http://www.stat.cmu.edu/~inca/index.html).

The AstroMed project at Harvard University’s IIC is dedicated to the application of medical image visualization to 3D astronomical data [131].

Compressed sensing and the use of sparse representations offer another promising new approach. Traditionally it has been considered unavoidable that any signal must be sampled at a rate of at least twice its highest frequency in order to be represented without errors. However, a technique called compressed sensing, which permits error-free sampling at a lower rate, has been the subject of much recent research. It holds great promise for new ways to compress imaging data without significant loss of information, thus ameliorating analysis limitations imposed by finite computing resources. The power of compressed sensing was strikingly illustrated when an object was successfully imaged in some detail by a camera composed of a single pixel [132]. Bobin et al. [133] discuss recent advances in signal processing that use sparse representations. Relatively nontechnical introductions to compressed sensing may be found in Candès and Wakin [134] and Romberg [135]. The first astronomical results in the literature based on the use of compressed sensing are those of Wiaux et al. [136].
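
A toy sketch of the basic compressed-sensing setup, under the standard assumptions: a signal that is sparse (here, sparse in the identity basis) is observed through far fewer random linear measurements than its length and is then recovered by l1-regularized least squares, implemented below with plain iterative soft thresholding (ISTA); production codes use far more sophisticated solvers and sensing operators.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 100, 8                       # signal length, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)    # sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x_true                              # m << n compressed measurements

# ISTA for  min_x ||Ax - y||^2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)    # soft threshold

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```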

Various data types together with methods used for their representation are briefly summarized in Table 1. The table is not comprehensive, but it provides a quick overview of what was discussed above.

6. Conclusion

Extremely large data sets, as well as the analysis of hundreds of objects each having a large number of data dimensions, present astronomy with unprecedented challenges. The challenges are not only about database sizes in themselves, but about how intelligently one organizes, analyzes, and navigates through the databases, and about the limitations of existing data analysis approaches familiar to astronomy. The answers to these challenges are not trivial, and for the most part lie in complex fields of research well outside the training and expertise of almost all astronomers. Fortunately, other disciplines such as imaging science and earth sciences have for many years been grappling with the same sorts of problems. Fruitful interdisciplinary work has already become a regular feature of research in those other disciplines, and has resulted in applications of crucial value to other sciences seeking to take advantage of complex, giant data sets in their respective fields. This work has brought about many helpful applications and promising paths for further progress that potentially have significant value to astronomy.

Multidimensional image processing, image fusion (combining information from multiple sensors in order to create a composite enhanced image) and dimension reduction (finding a lower-dimensional representation of high-dimensional data) are effective approaches to tasks that are crucial to multitemporal, multiwavelength astronomy: the study of transients, large-scale digital sky surveys, archival research, and so forth. These methods greatly increase the computational efficiency of machine learning algorithms and improve statistical inference, thus facilitating automated feature selection, data segmentation, classification and effective scientific visualization (as opposed to illustrative visualization). Dimensionally reduced images also offer enormous savings in storage space and database-transmission bandwidth for the user, without significant loss of information, if appropriate methods are used.

To effectively use the large, complex data sets being created in 21st Century astronomy, significant interdisciplinary communication and collaboration between astronomers and experts in the disciplines of applied mathematics, statistics, computer science and artificial intelligence will be essential. The concepts and approaches described in this paper are among the first steps in such a broad, long-term interdisciplinary effort that will help bridge the communication gap between astronomy and other disciplines. These concepts, and the approaches derived from them, will help to provide practical ways of analysis and visualization of the increasingly large and complex data sets. Such sophisticated new methods will also help to pave the way for effective automated analysis and processing of giant, complex data sets.

Acknowledgments

The authors would like to thank Alanna Connors for stimulating discussions and useful suggestions. They would also like to thank the referee for constructive suggestions that led to substantial improvements of the article. The first author would like to thank Michael Werner for support. This work was carried out with funding from the National Geospatial-Intelligence Agency University Research Initiative (NURI), Grant HM1582-08-1-0019, and support from NASA to the California Institute of Technology and the Jet Propulsion Laboratory.