Abstract
We introduce a new method for calculating the multiscale 3D filamentation of the SDSS DR5 galaxy distribution and also apply it to N-body simulations. We compare the filamentation of the observed versus mock samples in metric space on scales from 8 Mpc to 30 Mpc. The mock samples are closer to the observed sample than random samples are, and one of the mock samples matches the observations better than the other. We also find that the observed sample has a large filamentation value at a scale of 10 Mpc, which is not found in either the mock samples or the random samples.
1. Introduction
Redshift surveys such as the Sloan Digital Sky Survey (SDSS; [1]) and the Two-Micron All Sky Survey (2MASS; Skrutskie et al. 2000) show that the local (a few to many tens of megaparsecs) Universe exhibits intricate patterns of clusters, filaments, bubbles, sheet-like structures, and so-called voids. For a review of the structural analysis of the Universe, see Weinberg [2]. At the same time, Lambda Cold Dark Matter (LCDM) models have been developed; see Gill et al. [3] and Dolag et al. [4]. Several simulations incorporating dark energy have been created, such as the Millennium Simulation used by Croton et al. [5] and another N-body simulation by Berlind et al. [6]. These models describe a Universe consisting mainly of dark energy and dark matter and calculate the evolution of the Universe from a short time after the big bang to the present. As complicated evolving systems are sensitive to their initial conditions [7, 8], the initial conditions of these simulations are strictly limited by current observations. Work has been done to verify the similarity between the real Universe and the simulated Universe [6, 9, 10], and they correspond well, based on the comparative techniques used in those studies.
To supplement the widely used correlation function and power spectrum [11, 12], alternatives have been proposed to quantify structure in the galaxy distribution, such as the genus curve [13], percolation statistics [13–15], rhombic cell analysis [16], void probability functions [17], higher-order correlation functions [18], and multifractal measures [19]. Filamentation is a traditional way to describe the structure of the galaxy distribution, and measures of this property are widely used in studies of both the real Universe and simulations [20]. In this paper, we consider a wide range of smoothing levels for multiscale filtering [10]. By varying the size of the smoothing function over a range of scales, a complete multiscale description of the filamentary form of galaxy distributions becomes possible. Key facets of our filamentation approach are the treatment of any given map as an element in the space of all such maps and the definition of a distance function that makes the space of all maps into a topological space (Adams 1992). Moreover, the other methods listed above focus on summary statistics that convey little of the geometric and topological properties of the galaxy distribution. Our method also gives the desired quantitative summary statistics of the difference between maps. However, a primary benefit of our method is that the filament function is straightforward, simple to understand, and particularly useful in map comparisons.
2. 3D Filamentation Analysis
2.1. Filament Function Definition
First we summarize the 2D filamentation approach [10]. The diameter of a set E is defined as D(E) = sup{|x − y| : x, y ∈ E}. Components are defined as isolated high-density regions in the map. The size, shape, and number of components vary as a function of the threshold value [10].
The filament index previously used in our 2D analysis is defined as f_2D = P·D/A, where P is the perimeter, A is the area, and D is the diameter. Now we define the 3D filament index f_3D = S·D/V, where S is the component's surface area, V is the volume, and D is the diameter.
This definition of the filament index satisfies intuitive requirements. (1) The index should be proportional to the diameter D: with fixed surface and volume, the longer the object, the more filamentary it is. (2) The index should be inversely proportional to the volume V, with fixed surface and diameter. The fatter the object is, the smaller its index should be. In other words, we can increase the volume while maintaining the diameter and surface values (the surface gained on the body is cancelled out by the surface lost from the reduced spikes); see Figure 1. (3) The filament value should be proportional to the surface S. With fixed diameter and volume, the larger the surface is, the larger the filament value should be, as in Figure 2.
Therefore the filament index can be used to quantitatively characterize the complexity of the object.
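As an illustration of this definition, the 3D filament index f = S·D/V can be evaluated on a toy component represented as a set of unit voxels. This is only a sketch under our own assumptions: the surface is counted here as exposed voxel faces rather than the triangle-mesh estimate used later in the paper, and the function name and data layout are illustrative.

```python
from itertools import combinations
from math import dist

def filament_index(voxels):
    """Toy filament index f = S*D/V for a component given as a
    collection of integer (x, y, z) voxel coordinates.
    S: number of exposed unit faces (face-count surface estimate),
    V: voxel count, D: largest center-to-center distance (diameter)."""
    vox = set(voxels)
    V = len(vox)
    # a face is exposed when the neighboring voxel lies outside the set
    S = sum(1
            for (x, y, z) in vox
            for n in ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                      (x, y - 1, z), (x, y, z + 1), (x, y, z - 1))
            if n not in vox)
    D = max((dist(a, b) for a, b in combinations(vox, 2)), default=0.0)
    return S * D / V

# a 1x1x4 rod and a 2x2x2 cube: same volume, different shapes
rod = [(0, 0, k) for k in range(4)]
cube = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]
```

As the requirements above demand, the elongated rod (larger diameter and surface per unit volume) receives a larger index than the compact cube of the same volume.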
2.2. The Distance between Maps and Multithreshold Values
If we want to compare the filamentation of two maps, we define their metric distance as

d(M1, M2) = (1/n) Σ_{i=1}^{n} |f(M1, τ_i) − f(M2, τ_i)|,

where τ is the threshold value, ranging from the minimum to the maximum voxel intensity [21], and n is the number of thresholds. Once a threshold value is defined for a map, we keep only the voxels above it. f is the filament function, and M1 and M2 are the maps. Multiple threshold values give us a full picture of the distance between two maps; however, different sets of threshold values can yield different distances. Here we use 10 threshold values equally spaced from the maximum to the minimum value of the map, since 10 thresholds are enough to fully describe the map and there is no reason to give some thresholds more weight than others.
To obtain the distance between the filament functions of the images under study, we apply this method in two ways. First, the observed images are compared to uniform images, giving us information on “how far” the samples fall from uniformity and thus quantitative information on the complexity of the observed images. Second, all simulation images are compared to the SDSS observed images; each measured distance then quantifies “how far” the simulation image is from the observed data sets. Clearly, the larger the distance, the “farther” the simulation image under study is from the observational data. The distances are calculated for the filamentation function, for each of the mock sample data sets, and for each size scale considered.
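A minimal sketch of this metric distance, assuming the filament functions have already been sampled at the same ten equally spaced thresholds; the 1/n normalization and the signed variant (the absolute value dropped, as done for Figure 9 in the Results) are our reading of the text, and the function name is our own.

```python
def metric_distance(f1, f2, signed=False):
    """Distance between two maps' filament functions sampled at the
    same n threshold values (the paper uses n = 10, equally spaced).
    f1, f2: sequences of filament values f(M, tau_i).
    signed=False: metric distance (absolute differences).
    signed=True: drop the absolute value, so the sign tells which
    map is more filamentary. Normalization by n is an assumption."""
    terms = (a - b for a, b in zip(f1, f2))
    if signed:
        return sum(terms) / len(f1)
    return sum(abs(t) for t in terms) / len(f1)
```

For example, two filament functions that bracket each other symmetrically have a positive unsigned distance but a signed distance of zero, which is why the signed variant adds information rather than replacing the metric.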
2.3. Gaussian Smoothing and Multiscale Analysis
The 2D Gaussian smoothing function is

G(x, y) = (1/(2πλ²)) exp(−(x² + y²)/(2λ²)),

where λ is the smoothing length, which governs the level of smoothing of the discrete data. The smoothing length obviously influences the structure analysis: an underestimated smoothing length will cause huge numbers of false oscillations, while an overestimated smoothing length will remove real features of the structure. Figure 3 is an example of Gaussian smoothing.
Gaussian filtering can be described by the convolution

I(x, λ) = ∫ f(x′) G(x − x′, λ) d²x′,

where f is a two-dimensional function representing the image under study and G is the Gaussian function above, which can also be defined as a wavelet. λ is the scale parameter, and x is a position vector. Thus, the convolution between the point-distribution images under study and the Gaussian filter at several different values of the scale parameter yields the continuous gray-scale images from which the output functions, and then the metric-space coordinates, can be calculated.
Gaussian filtering results in images with different filtering scales. In this paper we use a set of smoothing lengths from 10 Mpc to hundreds of Mpc. Figure 4 is a 2D sketch of this process.
Multiscale analysis is then possible by using different Gaussian smoothing lengths: smoothing with a specific length extracts the components of the corresponding scale. Multiscale analysis is important in the geometric analysis of the galaxy distribution, as the geometric properties generally differ on different scales.
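The multiscale smoothing step can be sketched with `scipy.ndimage.gaussian_filter`, a standard Gaussian convolution; the grid size, point count, and smoothing lengths below are illustrative (in grid units rather than Mpc) and not the paper's actual setup.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# toy point distribution: unit spikes at random grid cells
# (duplicates land in the same cell)
rng = np.random.default_rng(0)
grid = np.zeros((64, 64))
xs, ys = rng.integers(0, 64, 50), rng.integers(0, 64, 50)
grid[xs, ys] = 1.0

# one continuous gray-scale image per smoothing length; larger
# lengths wash out small-scale structure, leaving large-scale features
smoothed = {s: gaussian_filter(grid, sigma=s) for s in (1, 2, 4)}
```

Each smoothed image would then be thresholded and fed to the filament function, giving one filament curve per scale.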
3. Data
3.1. Observed Data
We use the SDSS Data Release 5 as our galaxy sample. We restrict our sample to regions of the sky where the completeness (the ratio of obtained redshifts to spectroscopic targets) is greater than 90%, and we restrict the redshift range and the survey coordinates η and λ (η and λ are the SDSS survey coordinates). Our final sample covers 2904 deg² on the sky and contains 406594 galaxies (~40,000 galaxies after applying the volume-limiting selection described in the next paragraph).
We use volume-limited (VL) samples (see, e.g., [22]), constructed by choosing an upper cutoff in distance and calculating the corresponding absolute-magnitude limit from the apparent-magnitude limit of the telescope. The relationship between a galaxy's apparent magnitude and absolute magnitude is given by

M = m − 5 log10(d / 10 pc),

where M is the absolute magnitude, m is the apparent magnitude, and d is the distance from the observer. We keep only those galaxies whose absolute magnitude is smaller than (brighter than) that of the faintest detectable galaxy at our redshift limit; this ensures the selected galaxy sample is substantially complete to our magnitude limit.
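A sketch of this volume-limiting selection; `volume_limited`, its argument layout, and the magnitude limit and distance cutoff in the example are our own illustrative choices, not the survey's actual values.

```python
from math import log10

def absolute_magnitude(m, d_mpc):
    """Distance modulus M = m - 5*log10(d / 10 pc), with d in Mpc.
    1 Mpc = 1e5 * 10 pc, hence the factor below."""
    return m - 5 * log10(d_mpc * 1e5)

def volume_limited(galaxies, m_lim, d_max):
    """Keep galaxies inside the distance cutoff d_max whose absolute
    magnitude is brighter (smaller M) than that of the faintest galaxy
    detectable at d_max given the apparent-magnitude limit m_lim.
    galaxies: iterable of (apparent magnitude, distance in Mpc) pairs.
    Illustrative sketch of the selection described in the text."""
    M_cut = absolute_magnitude(m_lim, d_max)
    return [(m, d) for m, d in galaxies
            if d <= d_max and absolute_magnitude(m, d) <= M_cut]
```

A galaxy at 10 pc has M equal to its apparent magnitude by construction, which is a quick sanity check on the distance-modulus helper.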
3.2. Redshift Distance Formula
From Weinberg ([23], page 42; we neglect Ω_R (radiation) in the current matter-dominated Universe),

d_L = [(1 + z)/(H0 Ω_K^{1/2})] sinh[ Ω_K^{1/2} ∫_{1/(1+z)}^{1} dx / (x² [Ω_Λ + Ω_K x^{−2} + Ω_M x^{−3}]^{1/2}) ]. (6)

Here Ω_K = 1 − Ω_M − Ω_Λ, H0 is the Hubble constant, z is the object redshift, and d_L is the luminosity distance (the distance based on luminosity or magnitude). The function is sinh when Ω_K > 0 (open Universe); it becomes sin (with |Ω_K|^{1/2}) only when Ω_K < 0 (closed Universe). When Ω_K = 0, all terms containing Ω_K disappear. Equation (6) is used to calculate the distances of the SDSS samples.
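In the flat case Ω_K = 0, equation (6) reduces (substituting x = 1/(1 + z′)) to d_L = (1 + z)(c/H0) ∫_0^z dz′/E(z′) with E(z) = sqrt(Ω_M(1 + z)³ + Ω_Λ), which a simple trapezoid rule can evaluate. The sketch below restores the factor of c to give Mpc; the parameter defaults are illustrative round numbers, not the paper's values.

```python
from math import sqrt

def luminosity_distance(z, h0=70.0, om=0.3, ol=0.7, steps=1000):
    """Luminosity distance in Mpc for a flat (Omega_K = 0) Universe:
    d_L = (1+z) * (c/H0) * int_0^z dz'/E(z'),
    E(z) = sqrt(Om*(1+z)^3 + Ol).  Composite trapezoid rule."""
    c = 299792.458  # speed of light, km/s

    def inv_e(zp):
        return 1.0 / sqrt(om * (1 + zp) ** 3 + ol)

    dz = z / steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z)) * dz
    integral += sum(inv_e(i * dz) for i in range(1, steps)) * dz
    return (1 + z) * (c / h0) * integral
```

At low redshift this reduces to the Hubble law d_L ≈ cz/H0, a convenient sanity check.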
3.3. Mock Samples
Our first mock sample is from the NYU Value-Added Galaxy Catalog [6]. They used the Hashed Oct-Tree (HOT) code [24] to make an N-body simulation with the Lambda Cold Dark Matter (LCDM) cosmological model, specified by the parameters Ω_M, Ω_b, Ω_Λ, h, n, and σ8. Ω_M is the total matter density; densities are in units of the critical density for closure, ρ_c = 3H0²/(8πG). Ω_b and Ω_Λ are the present-day densities of baryons and dark energy. The Hubble constant is H0 = 100h km s⁻¹ Mpc⁻¹; n is the simulation's initial density-perturbation spectral index, while σ8 is the rms linear mass fluctuation within a sphere of radius 8 Mpc/h extrapolated to z = 0. This model is in agreement with a wide variety of cosmological observations (see, e.g., Spergel et al. 2004). Initial conditions were set up using the transfer function calculated for this cosmological model by CMBFAST [25]. They then used the friends-of-friends (FOF) algorithm to identify galaxy halos in the simulation, with the FOF linking length equal to 0.2 times the mean interparticle separation. After obtaining the haloes, they created the NYU Value-Added Galaxy Catalog based on the Halo Occupation Distribution (HOD, a model giving the probability distribution that a halo, i.e., a cluster of dark matter particles, of mass M contains N galaxies), employing some further constraints, such as the relations between the spatial and velocity distributions of galaxies and dark matter within halos [26].
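The friends-of-friends grouping mentioned above can be illustrated with a toy O(n²) union-find sketch; real halo finders, such as the FOF step described here, use trees or grids for speed, and all names below are our own.

```python
from math import dist

def friends_of_friends(points, link_length):
    """Toy friends-of-friends grouping: two particles belong to the
    same halo if a chain of pairs, each separated by less than the
    linking length, connects them.  Union-find with path halving."""
    n = len(points)
    labels = list(range(n))

    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]  # path halving
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist(points[i], points[j]) < link_length:
                labels[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Note that the chaining matters: particles 0 and 2 in the test below are farther apart than the linking length yet end up in the same group via particle 1.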
The second mock sample is the Millennium Run semianalytic galaxy catalogue [5], based on the Millennium Run LCDM N-body simulation [9]. The Millennium Simulation used a revised GADGET2 [5] code and the “TreePM” method (a pure dark matter code, [27]) to evaluate gravitational forces; it is a combination of a hierarchical “tree” algorithm and a classical Fourier-transform particle-mesh method. The cosmological parameters Ω_M, Ω_b, Ω_Λ, h, n, and σ8 are taken from Springel's paper [9]. Those parameter values are consistent with a combined analysis of the galaxy surveys and first-year WMAP data [9].
The catalogues include only galaxies above our magnitude completeness limit, for a total of about 9 million galaxies in the full simulation box (500 Mpc/h on a side).
We also created a random sample with the same criteria as the SDSS data, such as volume geometry, spatial density, and selection (window) functions. The random sample is used for calibrating our statistics, and we anticipate that the random sample should be very different from the observed sample on most scales, as the observed sample does show some structures (such as filaments) that cannot be found in the random sample (Figure 5).
In our research we approximate the surfaces of components with nonequal triangles (which are faster to compute), as in Figure 6.
3.4. Standard Deviation
To estimate the errors of the random mock samples, we generate 12 random samples with different seeds (initial conditions) when calculating the metric distance between the observed sample and the random mock samples. We also extract 12 NYU samples from the same cubic simulation but with different orientations (and minimized overlap, ~20%, of the sample regions) to obtain the deviation of the NYU sample. For the MPA sample and the observed sample we cannot make subsamples (due to the limited size of the original data), and thus they have no error bars (we borrow the error bars from the NYU sample for some figures).
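The error bars just described amount to the sample standard deviation of the 12 distance measurements at each scale; a minimal sketch, with illustrative values rather than the paper's measurements:

```python
from statistics import mean, stdev

# 12 metric distances at one smoothing scale (illustrative values only)
distances = [6.1, 5.8, 6.4, 6.0, 5.9, 6.2, 6.3, 5.7, 6.1, 6.0, 5.9, 6.2]

center = mean(distances)  # plotted point at this scale
err = stdev(distances)    # sample standard deviation -> error bar
```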
4. Results
We chose 8–30 Mpc as the range of smoothing lengths (FWHM) and analyzed the clumps with 10 threshold values equally spaced from the maximum to the minimum value of the map. From (4) we get the overall filament value (each clump has the same weight regardless of its size). To illustrate the filamentation of the observed data, we compare the observed image with a uniform image (which has no filamentation at all). Figure 7 shows the calculated filament values for the observational SDSS data.
We can see that there is a turning point around the 10 Mpc scale. By the definition of the filamentation index, the clumps at first become less filamentary (from 5.3 to 2.4) with increasing smoothing scale, but beyond the 10 Mpc smoothing scale they become more filamentary (from 2.4 to 3.5). This suggests the possible existence of large filaments in the SDSS sample. The function is then flat (around 3.5) at scales of 20 Mpc and larger.
Now we look at the differences among the mock samples and the observed sample. First we compare all samples with the observed sample using (4).
We can distinguish the filament value of the random sample from the other samples very well (a difference of ≥6) and find that the NYU sample behaves slightly better than the MPA sample (a difference of around 2).
We now know the metric distance between the mock and observed samples (shown in Figure 8, calculated from (4)), but we do not know whether the mock samples have more or less filamentation than the observed sample; we only know the “distance,” with no sign. So we drop the absolute value in (4) and obtain a new metric distance with a sign. The results are shown in Figure 9.
This new information shows that NYU tends to have less filamentation than the observed sample, while MPA generally has more, and the filament function reflects that NYU is closer to the observed sample than MPA (a difference of more than 3 in the filament function). On small scales (<10 Mpc), the filament values of both mock samples are smaller (negative metric distance) than those of the observed sample; interestingly, the random sample has a larger filament value than the observed sample on small scales.
5. Conclusions
We have applied our filament index definition on multiple scales to study the filamentation of galaxy distributions. The technique gives a detailed filamentation description of galaxy distributions in metric space on scales from approximately 8 Mpc to 30 Mpc, showing statistically strong differences among the samples. We also find that the filament function has minima around 10 Mpc in Figures 8 and 9, reflecting that there are some filament structures above the 10 Mpc scale in the SDSS galaxy distribution.
The key motivation of this research is to supplement traditional tools with a more informative way of quantifying the similarity in the “visual” filamentation properties between simulations and the observed Universe. We demonstrated that the two N-body simulations do a good job of approximating our Universe and that the NYU sample is significantly closer to the observed sample than the MPA sample. We also have the expected result that the random sample differs greatly from all other samples at virtually all scales of filamentation.
Acknowledgments
The Millennium Run simulation used in this paper was carried out by the Virgo Supercomputing Consortium at the Computing Center of the Max-Planck Society in Garching. The semianalytic galaxy catalog is publicly available at http://www.mpa-garching.mpg.de/galform/agnpaper/. The authors thank Andreas A. Berlind for providing the NYU Mock Galaxy Catalog.