International Journal of Biomedical Imaging
Volume 2012, Article ID 958142, 8 pages
http://dx.doi.org/10.1155/2012/958142
Research Article

Selective Extraction of Entangled Textures via Adaptive PDE Transform

1Department of Mathematics, Michigan State University, East Lansing, MI 48824, USA
2Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA

Received 29 August 2011; Accepted 11 October 2011

Academic Editor: Shan Zhao

Copyright © 2012 Yang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Texture and feature extraction is an important research area with a wide range of applications in science and technology. Selective extraction of entangled textures is a challenging task due to spatial entanglement, orientation mixing, and high-frequency overlapping. The partial differential equation (PDE) transform is an efficient method for functional mode decomposition. The present work introduces an adaptive PDE transform algorithm that appropriately thresholds the statistical variance of the local variation of functional modes. The proposed adaptive PDE transform is applied to the selective extraction of entangled textures. Successful separations of human faces, clothes, backgrounds, natural landscapes, text, forest, a camouflaged sniper, and neuron skeletons validate the proposed method.

1. Introduction

Texture is one of the important features characterizing many natural and man-made images. Texture characterization and analysis are usually performed according to the spatial and frequency variations of brightness, pixel intensity, color, and texture orientation in the different regions of the image corresponding to different types of textures. For example, the roughness or bumpiness of an image usually refers to variations in the intensity values, or gray levels. Texture segmentation, recognition, and interpretation are critical for human visual perception and processing. As a result, research on texture analysis has received considerable attention in recent years. A large number of approaches have been proposed for texture classification and segmentation [1–16]. In general, texture analysis methods fall into two categories: statistical methods, which analyze the Fourier power spectrum, gray-level values, and various variance matrices of the input image, and structural methods, which are knowledge-based algorithms with an emphasis on structural primitives and their placement rules. Examples of such methods include Markov random field models [17, 18], the simultaneous autoregressive model [19], and fractal models [20]. Among the many existing approaches, local variation minimization has been a popular and powerful technique in image analysis [21], with applications to texture modeling [22]. Multiphase segmentation approaches are based on the structural division of gray scales [23]. More recently, multiresolution approaches have become more important in texture analysis [19, 24–26], where fixed-size neighborhoods and window sizes are used to derive features at varying scales corresponding to the input image at different resolutions.

In general, total texture extraction has become a mature technique in real applications. However, despite the progress of the past few decades, selective extraction of entangled textures encounters a number of difficulties. One difficulty is spatial entanglement, including orientation mixing of various textures. Another is gray-scale entanglement, especially the near-continuous merging of various textures. A third is frequency entanglement, which occurs when two similar but distinct textures share overlapping frequency bands in the frequency domain. This difficulty especially plagues texture analysis when many high-frequency textures coexist.

In this work, we propose an adaptive partial differential equation (PDE) transform approach for the selective extraction of entangled textures. By using arbitrarily high-order PDEs, the PDE transform is able to decompose signals, images, and data into functional modes that exhibit appropriate time-frequency localization [27–31]. Additionally, the PDE transform provides a perfect reconstruction. Unlike the wavelet transform or the Fourier transform, the PDE transform delivers results in the physical domain, which enables straightforward mode analysis and secondary processing. Based on the image mode functions generated by the PDE transform, the adaptive PDE transform algorithm calculates the variance of the local variation of the image mode functions and then performs the corresponding thresholding analysis.

2. PDE Transform Method

In the past two decades, PDE-based image processing approaches have attracted strong interest in the image processing and applied mathematics communities and have opened new avenues for image denoising, enhancement, edge detection, restoration, segmentation, and so forth. The use of PDEs for image analysis started as early as the 1980s, when Witkin first introduced the diffusion equation for image denoising [32]. The time evolution of an image under a diffusion operator is formally equivalent to a lowpass filter. After Perona and Malik introduced the anisotropic diffusion equation in 1990 [33], nonlinear PDEs found wide application in a variety of image processing tasks such as edge detection and denoising. Two important advances in the history of image processing, namely, the Perona-Malik equation and the total variation method [21], employ second-order nonlinear PDEs for image analysis. The Willmore flow, proposed in the 1920s, is a fourth-order geometric PDE and has also been used for surface analysis. In the past decade, fourth-order nonlinear PDEs have attracted much attention in image analysis [34–36].

Arbitrarily high-order nonlinear PDEs were introduced by Wei in 1999 to more efficiently remove image noise in edge-preserving image restoration [34]:

$$\frac{\partial u(\mathbf{r},t)}{\partial t} = \sum_{q} \nabla \cdot \left[ d_q\left(u, |\nabla u|, t\right) \nabla \nabla^{2q} u \right] + e\left(u, |\nabla u|, t\right), \quad q = 0, 1, 2, \ldots, \tag{1}$$

where $u(\mathbf{r},t)$ is the image function, and $d_q(u,|\nabla u|,t)$ and $e(u,|\nabla u|,t)$ are edge-sensitive diffusion coefficients and an enhancement operator, respectively. The Perona-Malik equation is recovered at $q=0$ and $e=0$. As in the original Perona-Malik equation, the hyperdiffusion coefficients $d_q$ in (1) can be chosen in many different ways. For instance, one can set

$$d_q\left(u, |\nabla u|, t\right) = d_{q0} \exp\left[-\frac{|\nabla u|^2}{2\sigma_q^2}\right], \tag{2}$$

where the values of the constants $d_{q0}$ depend on the noise level, and $\sigma_0$ and $\sigma_1$ are chosen as the local statistical variance of $u$ and $\nabla u$:

$$\sigma_q^2(\mathbf{r}) = \overline{\left| \nabla^q u - \overline{\nabla^q u} \right|^2} \quad (q = 0, 1). \tag{3}$$

The overbar $\overline{Y(\mathbf{r})}$ denotes the local average of $Y(\mathbf{r})$ centered at position $\mathbf{r}$. In this algorithm, the statistical measure based on the variance is important for discriminating image edges from noise. As such, one can bypass image preprocessing, that is, the convolution of the noisy image with a test function or smoothing mask.
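As a concrete illustration of the local statistical variance in (3) and the Gaussian form of the coefficients in (2), the following sketch (not the authors' code; the window size `w` and the parameters `d_q0` and `s` are illustrative assumptions) computes both quantities for a 2D image:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(u, w=5):
    """Variance of u over a w-by-w neighborhood of each pixel, as in (3)."""
    mean = uniform_filter(u, size=w)          # local average of u
    mean_sq = uniform_filter(u * u, size=w)   # local average of u^2
    return np.clip(mean_sq - mean * mean, 0.0, None)

def diffusion_coefficient(u, d_q0=1.0, s=0.1, w=5):
    """Edge-sensitive coefficient in the Gaussian form of (2), here driven by
    the local variance (the statistical edge/noise discriminator); d_q0 and
    the scale s are illustrative, not values from the paper."""
    return d_q0 * np.exp(-local_variance(u, w) / (2.0 * s ** 2))
```

Diffusion is thus suppressed wherever the local variance is large (likely an edge) and proceeds freely in statistically flat regions (likely noise), which is exactly the discrimination the paragraph above describes.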

In general, the nonlinear PDE operators described above serve as lowpass filters. PDE-based nonlinear highpass filters were introduced by Wei and Jia [37] in 2002, who constructed two weakly coupled PDEs that act as a highpass filter. Recently, this approach has been combined with Wei's earlier arbitrarily high-order nonlinear PDE operator to give [29]

$$\begin{aligned} \frac{\partial u}{\partial t} &= \sum_{q} \nabla \cdot \left[ d_q\left(u, |\nabla u|, t\right) \nabla \nabla^{2q} u \right] + e\left(u, |\nabla u|, t\right)(v - u), \\ \frac{\partial v}{\partial t} &= \sum_{q} \nabla \cdot \left[ d_q'\left(v, |\nabla v|, t\right) \nabla \nabla^{2q} v \right] + e'\left(v, |\nabla v|, t\right)(u - v), \end{aligned} \tag{4}$$

where $d_q, d_q'$ and $e, e'$ are made edge sensitive. As lowpass filters, both $d_q \geq 0$ and $d_q' \geq 0$ when $q$ is even; similarly, both $d_q \leq 0$ and $d_q' \leq 0$ when $q$ is odd. We can define a PDE transform as

$$u(\mathbf{r},t) = \mathcal{L}_{\mathrm{PDE}}\, X(\mathbf{r}), \tag{5}$$

where $\mathcal{L}_{\mathrm{PDE}}$ can be regarded as a coupled nonlinear PDE operator acting on the input image $X(\mathbf{r})$. In order for (5) to work properly, we choose the initial value $v(\mathbf{r},0) = 0$. As shown in our earlier work, by increasing the order of the highest derivative, one can increase the frequency localization and accuracy of the PDE transform for mode decomposition [29]. The frequency selection of $\mathcal{L}_{\mathrm{PDE}}$ also depends on the evolution time. High-order PDEs are integrated by using the Fourier pseudospectral method [29].
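The linear, constant-coefficient limit of the lowpass branch above can be integrated exactly in the frequency domain, which is essentially what the Fourier pseudospectral method exploits. The sketch below is an illustration under that simplifying assumption, not the paper's full nonlinear coupled system; the order parameter `m`, coefficient `d`, and evolution time `t` are illustrative:

```python
import numpy as np

def pde_lowpass(u, m=10, d=1.0, t=1.0):
    """Exact Fourier-domain solution of u_t = (-1)^(m+1) d Laplacian^m u:
    u_hat(k, t) = u_hat(k, 0) * exp(-d t |k|^(2m))."""
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2   # |k|^2 on the grid
    decay = np.exp(-d * t * k2 ** m)           # exact solution operator
    return np.real(np.fft.ifft2(np.fft.fft2(u) * decay))

def pde_highpass(u, m=10, d=1.0, t=1.0):
    # Highpass component = signal minus its lowpass part.
    return u - pde_lowpass(u, m, d, t)
```

Because the solution operator is diagonal in Fourier space, arbitrarily high orders cost nothing extra numerically, which is why order-200 PDEs remain practical in this framework.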

In the PDE transform, intrinsic mode functions are systematically extracted from the residues $X_{k-1}$; that is,

$$u_k = \mathcal{L}_{\mathrm{PDE}}\, X_{k-1}, \quad k = 1, 2, \ldots, \tag{6}$$

where $u_k$ is the $k$th mode function. Here, the residue function is given by

$$X_k = X_{k-1} - u_k = X(\mathbf{r}) - \sum_{j=1}^{k} u_j, \tag{7}$$

where $X_0 = X(\mathbf{r})$. Therefore, $X(\mathbf{r}) = \sum_{j=1}^{n} u_j + X_n$ is a perfect reconstruction of $X(\mathbf{r})$ in terms of all the mode functions and the last residue. The mode decomposition algorithm given in (6) is inherently nonlinear, even if a linear PDE operator might be used.
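The recursion in (6) and (7) can be sketched generically; perfect reconstruction holds by construction for any filter standing in for the PDE operator (here `pde_filter` is a hypothetical stand-in, not the paper's operator):

```python
import numpy as np

def pde_transform_modes(X, pde_filter, n_modes):
    """Extract n_modes mode functions from X by the residue recursion:
    u_k = filter(X_{k-1}),  X_k = X_{k-1} - u_k."""
    modes, residue = [], X.copy()
    for _ in range(n_modes):
        u_k = pde_filter(residue)   # u_k = L_PDE[X_{k-1}], Eq. (6)
        residue = residue - u_k     # X_k = X_{k-1} - u_k,   Eq. (7)
        modes.append(u_k)
    return modes, residue           # X == sum(modes) + residue, exactly
```

The residues telescope, so the input is always the sum of the extracted modes plus the final residue regardless of the filter used, which is the "perfect reconstruction" property stated above.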

The PDE transform is applied to Figure 1(a) to extract the three textures shown in Figures 1(b), 1(c), and 1(d). Note that only one texture is isolated at a time, which means the proposed PDE transform is able to perform controlled, or selective, segmentation of textures. PDEs of up to order 200 have been used for the selective texture segmentation. Numerically, such high-order linear PDEs need to be solved in the frequency domain [29]. Due to the ideal frequency localization, the three textures are separated with sharp boundaries.

Figure 1: Extraction of various embedded textures using the PDE transform. (a) shows the original image composed of various horizontal and vertical textures. (b)–(d) show the three texture patterns extracted by applying the PDE transform, one at a time. (e) shows the edge mode obtained by applying the PDE transform to (a). (f) shows the variance of the local variation of the image mode function in (e). (g) and (h) show the projection, or average, of the variance in (f) along the x- and y-directions, respectively.

3. Adaptive PDE Transform Algorithm

The separation of textures that are highly entangled in spatial location, frequency range, and gray scale becomes a challenge, and conventional segmentation techniques are in general not applicable in such cases. For example, highly oscillatory textures can be separated from a slowly varying background, but cannot be separated from another texture with an overlapping frequency distribution purely on the basis of frequency fingerprints. To selectively distinguish such entangled high-frequency textures, one needs a mode decomposition algorithm that is highly localized in frequency. Second-order PDEs are poorly localized in the frequency domain [29], whereas the PDE transform with high-order PDEs provides desirable frequency localization [29]. However, the PDE transform by itself does not perform well for the separation of entangled textures. To this end, we introduce an adaptive PDE transform algorithm for selective texture extraction. The essence of the adaptive PDE algorithm lies in the realization that the features of various textures are closely correlated with both the magnitude and the smoothness of the gray-scale values or, equivalently, the local variation of the image mode functions. Similar ideas have been implemented in other methods such as total variation [21].
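The frequency-localization argument can be made concrete by comparing transfer functions. For the linear model $u_t = (-1)^{m+1}\nabla^{2m}u$, the Fourier-domain attenuation after time $t$ is $\exp(-t|k|^{2m})$; the sketch below (illustrative parameters, not the paper's) contrasts the gradual roll-off of a second-order filter with the near-brick-wall cutoff of a high-order one:

```python
import numpy as np

def transfer(k, m, t=1.0):
    """Fourier-domain attenuation of u_t = (-1)^(m+1) Laplacian^m u."""
    return np.exp(-t * np.abs(k) ** (2 * m))

k = np.linspace(0.0, 2.0, 201)
second_order = transfer(k, m=1)    # slow, gradual roll-off
high_order = transfer(k, m=50)     # near-ideal cutoff at |k| = 1
```

At m = 1 the passband and stopband bleed into each other over a wide band of frequencies, so two textures with close frequency fingerprints cannot be split; at large m the transition band shrinks toward a single cutoff frequency, which is the localization the adaptive algorithm relies on.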

Nonlinear PDEs have been widely applied to process noisy images. However, despite better protection of image edges, the nonlinear anisotropic diffusion operator may still break down when the gradient generated by noise is comparable to that of image edges and features [38]. Preconvolution of the image with a smoothing function can practically alleviate the instability and reduce gray-scale oscillation, but the image quality is often degraded. An alternative solution introduced by Wei [34] is to statistically discriminate noise from image edges by a measure based on the local statistical variance of the image or its gradient. Such a local statistical variance based edge-stopping algorithm was found to work very well for image restoration.

Similar statistical analysis can be employed to perform selective texture extraction for images containing highly entangled and overlapping textures. In the present approach, we first compute the local variation at each pixel of the image mode functions obtained by the high-order PDE transform. Unlike the total variation, the local variation is still a function, of which the variance can be calculated:

$$\sigma_k^2(\mathbf{r}) = \overline{\left( |\nabla u_k| - \overline{|\nabla u_k|} \right)^2}, \tag{8}$$

where $u_k$ is the $k$th mode function obtained by the PDE transform (7), and the overbar average is evaluated locally over the neighboring pixels. Equation (8) yields a statistical measure that is used for texture separation and segmentation with appropriate threshold values. Different threshold values need to be chosen to select the range of the variance corresponding to the particular texture of interest. All previously classified textures are registered for sequential/recursive texture extraction. A flowchart of the adaptive PDE transform algorithm is shown in Figure 2.
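A minimal sketch of this adaptive step, assuming a simple square window and illustrative thresholds (neither the window size nor the threshold values are specified by (8) itself):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variation_variance(u_k, w=7):
    """Variance of the local variation |grad u_k| over a w-by-w window,
    a direct discretization of the quantity in (8)."""
    gy, gx = np.gradient(u_k)
    g = np.hypot(gx, gy)                      # local variation |grad u_k|
    mean = uniform_filter(g, size=w)          # local average of |grad u_k|
    return uniform_filter((g - mean) ** 2, size=w)

def select_texture(u_k, lo, hi, w=7):
    """Keep only pixels whose variance of local variation lies in [lo, hi)."""
    s2 = variation_variance(u_k, w)
    return np.where((s2 >= lo) & (s2 < hi), u_k, 0.0)
```

Slicing the variance range with different `[lo, hi)` intervals is what isolates one texture at a time, with previously extracted textures masked out before the next pass.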

Figure 2: Algorithm of adaptive PDE transform for entangled texture separation.

Figure 1(e) shows the edge mode obtained by applying the PDE transform to Figure 1(a). Figure 1(f) shows the variance of the local variation of the gray scale calculated using the adaptive PDE transform. Figures 1(g) and 1(h) show the projection, or average, of the variance in Figure 1(f) along the x- and y-directions, respectively. By slicing out different domains of the variance in Figure 1(f), the three different textures in Figures 1(b)–1(d) are then perfectly separated from each other.

4. Applications

In this section, the adaptive PDE transform is applied to three different cases to illustrate its superior capability for selective texture separation. The three images feature different types of entangled textures. Figure 3(a) contains textures overlapping in physical space with entangled frequency fingerprints. Figures 5(a) and 6(a) contain spatially segmented textures overlapping in the frequency domain. Figure 7 contains textures highly entangled in both the frequency and spatial domains.

Figure 3: Extraction and separation of the text, background watermark, and textures of (a). Shown in (b) and (c) are the image mode function and the extracted texture obtained using the proposed adaptive PDE transform.
4.1. Text-Image Separation

The adaptive PDE transform method, employing the variance of the local variation of the image mode functions, is applied to several benchmark test cases. In particular, the separation of text and texture can be regarded as a generalized type of texture analysis. In Figure 3, text in various fonts is imprinted on the background image. An additional background watermark in Chinese is also present in Figure 3(a). The separation of the English title from both the background image and the Chinese characters is a challenging task in terms of texture analysis because of the high degree of entanglement of very similar textures. Due to the font size differences in this application, the high-order PDE transform plays an extremely important role in differentiating modes with slightly different frequency characteristics. In Figure 3(b), the PDE transform successfully suppresses the low-frequency parts and extracts the mode whose frequency band mainly corresponds to the text. Such a procedure is similar to edge detection in general image processing. Statistical segmentation is then performed on the high-frequency mode. A suitable threshold value is used to cut off the regions with low variance and yields only the text, as shown in Figure 3(c).

4.2. Selective Texture Extraction

The present selective texture extraction algorithm is also tested on one of the most widely used images, the Barbara image, in Figure 5. The Barbara image is a benchmark test for edge detection and denoising. It contains fine details of different textures, such as the tablecloth, the curtain behind Barbara, and her scarf and clothes. The distinctions between these textures and the background are much larger than those among the textures themselves, which makes selective texture separation and segmentation difficult. Due to the tiny differences between the frequency, or spectral, features of the different textures mentioned above, a highly frequency-selective separation method is required. However, the conventional Fourier method is not applicable in this case, since the textures are entangled in the frequency domain. Moreover, conventional statistical segmentation approaches do not perform well here due to gray-scale entanglement. The present adaptive PDE transform method performs well for selective texture extraction in the Barbara image. The total texture, or image edge, is extracted from the high-frequency mode of the PDE transform, as shown in Figure 5(b). The variance of the local variation, shown in Figure 4, is calculated and employed for selective texture extraction and separation with appropriate thresholding values. The resulting textures are shown in Figures 5(c)–5(f), which correspond to the clothes, curtain, and tablecloth. The four textures in Figure 5 are superimposed on the original image for clearer visualization.

Figure 4: Adaptive PDE transform for selective texture extraction in the Barbara image. The variance of the local variation is shown in the top chart.
Figure 5: The PDE transform is applied to (a) to extract the edges of all textures, shown in (b). The adaptive PDE transform is then applied to extract different textures from (b). In (c)–(f), all the textures are superimposed on the original image for better viewing.
Figure 6: Sniper detection by using adaptive PDE transform method. Textures 1, 2, and 3 are, respectively, from the forest, the tree trunk, and the sniper.
Figure 7: Neuron image classification by using the adaptive PDE transform.

In Figure 6, the present adaptive PDE transform is applied to detect a sniper hidden in a forest (Figure 6(a)). The whole image is composed of highly entangled textures, whose boundaries are very challenging to identify appropriately. In our approach, the variance of the local variation is calculated and used for texture separation, as in the previous examples. By appropriate thresholding, the variance can be decomposed into three regions corresponding to the forest, the tree trunk, and the sniper. The resulting texture modes are shown in Figures 6(b)–6(d).

4.3. Natural Neuron Skeleton Analysis

In the preceding introduction to the adaptive PDE transform algorithm and its applications, the local variation is defined and calculated for the intensity of the image mode functions to selectively extract textures beyond total texture extraction. Selective texture extraction can be generalized to any spatial part of an image characterized by a specific (and usually functionally important) spatial orientation and/or frequency oscillation, such as different parts of neuron synapses, brain cells, and retinal vasculatures. Figure 7(a) shows the image of a typical neuron. With advanced imaging techniques, research scientists have been able to obtain ever more and clearer 2D images and 3D data of various neuron cells and networks, whose study is important for identifying the relation between phenotype and genotype patterns in physiology and molecular biology. Closely related to the advancement of experimental imaging techniques, various improved computational image processing techniques have been proposed to better analyze neuron images. The study of neuron morphology has become increasingly important, since the shape and branching of dendrites are closely related to the structure and functioning of neuronal networks. Advancements in both experimental imaging techniques and computational image enhancement have led to better visualization and exploration of neuron morphology [39–45]. In the study of neuron morphology, image processing and segmentation of cultured neuron skeletons provide details of how neurons grow and branch. In this work, we apply the adaptive PDE transform to the study of "natural" neuron skeletons to segment and classify them into desirable classes according to the spatial extension and frequency oscillation of the neuron dendrites, much as a total image texture is divided into several selective fine textures.
Such separation and classification enable secondary processing and analysis of neuron morphology, such as the computation of surface areas (for 2D images) or volumes (for 3D data) for different classes of neuron skeletons. Specifically, we aim to separate different parts, or textures, such as the soma, dendrites, axon, terminals or lobes, and numerous ramifications, from the neuron image, as shown in Figures 7(b)–7(d), where three classes of neuron parts are separated according to spatial extension and frequency oscillation. The surface area of each class is listed in Table 1. Ratios of these surface areas, and many other geometric ratios of neuron morphology, are related, on both the molecular and cellular levels, to many physiological diseases as well as to the classification of neuron synapses.

Table 1: Classification of natural neuron skeletons.

5. Conclusion

Selective extraction and separation of image textures involving spatial entanglement, gray-scale mixing, and high-frequency overlapping are challenging tasks in image analysis. In this work, we introduce an appropriate adaptation of our earlier partial differential equation (PDE) transform [29] to construct an adaptive PDE transform algorithm. The adaptation is realized via proper thresholding of the statistical variance of the local variation of the image mode functions. The present PDE transform enables one to decompose and separate modes entangled in both the spatial and frequency domains. The proposed method is applied to several challenging benchmark images. Textures with very similar features in the same image are successfully decomposed and separated using the present adaptive PDE transform method.

Acknowledgments

This work was supported in part by NSF Grants CCF-0936830 and DMS-1043034; NIH Grant GM-090208; MSU Competitive Discretionary Funding Program Grant 91-4600.

References

  1. F. Zhang, X. Ye, and W. Liu, "Image decomposition and texture segmentation via sparse representation," IEEE Signal Processing Letters, vol. 15, pp. 641–644, 2008.
  2. Y. Dong and J. Ma, "Wavelet-based image texture classification using local energy histograms," IEEE Signal Processing Letters, vol. 18, pp. 247–250, 2011.
  3. K. I. Kim, S. H. Park, and H. J. Kim, "Kernel principal component analysis for texture classification," IEEE Signal Processing Letters, vol. 8, no. 2, pp. 39–41, 2001.
  4. R. M. Haralick, "Statistical and structural approaches to texture," Proceedings of the IEEE, vol. 67, no. 5, pp. 786–804, 1979.
  5. H. Wechsler, "Texture analysis—a survey," Signal Processing, vol. 2, no. 3, pp. 271–282, 1980.
  6. A. C. Bovik, "Analysis of multichannel narrow-band filters for image texture segmentation," IEEE Transactions on Signal Processing, vol. 39, no. 9, pp. 2025–2043, 1991.
  7. J. Malik, S. Belongie, T. Leung, and J. Shi, "Contour and texture analysis for image segmentation," International Journal of Computer Vision, vol. 43, no. 1, pp. 7–27, 2001.
  8. M. Elad, J. L. Starck, P. Querre, and D. L. Donoho, "Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA)," Applied and Computational Harmonic Analysis, vol. 19, no. 3, pp. 340–358, 2005.
  9. A. Khotanzad and R. L. Kashyap, "Feature selection for texture recognition based on image synthesis," IEEE Transactions on Systems, Man and Cybernetics, vol. 17, no. 6, pp. 1087–1095, 1987.
  10. V. Caselles, J. M. Morel, G. Sapiro, and A. Tannenbaum, "Introduction to the special issue on partial differential equations and geometry-driven diffusion in image processing and analysis," IEEE Transactions on Image Processing, vol. 7, pp. 269–273, 1998.
  11. M. Bertalmío, "Strong-continuation, contrast-invariant inpainting with a third-order optimal PDE," IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 1934–1938, 2006.
  12. A. Haddad and Y. Meyer, "An improvement of Rudin-Osher-Fatemi model," Applied and Computational Harmonic Analysis, vol. 22, no. 3, pp. 319–334, 2007.
  13. J. B. Garnett, T. M. Le, Y. Meyer, and L. A. Vese, "Image decompositions using bounded variation and generalized homogeneous Besov spaces," Applied and Computational Harmonic Analysis, vol. 23, no. 1, pp. 25–56, 2007.
  14. J. Gilles and Y. Meyer, "Properties of BV-G structures + textures decomposition models. Application to road detection in satellite images," IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2793–2800, 2010.
  15. P. Maurel, J. F. Aujol, and G. Peyre, "Locally parallel texture modeling," SIAM Journal on Imaging Sciences, vol. 4, no. 1, pp. 413–447, 2011.
  16. V. Duval, J. F. Aujol, and L. A. Vese, "Mathematical modeling of textures: application to color image decomposition with a projected gradient algorithm," Journal of Mathematical Imaging and Vision, vol. 37, no. 3, pp. 232–248, 2010.
  17. G. R. Cross and A. K. Jain, "Markov random field texture models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 5, no. 1, pp. 25–39, 1983.
  18. D. Geman, S. Geman, C. Graffigne, and P. Dong, "Boundary detection by constrained optimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 609–628, 1990.
  19. J. Mao and A. K. Jain, "Texture classification and segmentation using multiresolution simultaneous autoregressive models," Pattern Recognition, vol. 25, no. 2, pp. 173–188, 1992.
  20. A. P. Pentland, "Shading into texture," Artificial Intelligence, vol. 29, no. 2, pp. 147–170, 1986.
  21. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, no. 1–4, pp. 259–268, 1992.
  22. L. A. Vese and S. J. Osher, "Modeling textures with total variation minimization and oscillating patterns in image processing," Journal of Scientific Computing, vol. 19, no. 1–3, pp. 553–572, 2003.
  23. F. Crosby and S. H. Kang, "Multiphase segmentation for 3D flash lidar images," Journal of Pattern Recognition Research, vol. 6, pp. 193–200, 2011.
  24. A. Khotanzad and J. Y. Chen, "Unsupervised segmentation of textured images by edge detection in multidimensional features," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, pp. 414–421, 1989.
  25. S. Peleg, J. Naor, R. Hartley, and D. Avnir, "Multiple resolution texture analysis and classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 4, pp. 518–523, 1984.
  26. S. Krishnamachari and R. Chellappa, "Multiresolution Gauss-Markov random field models for texture segmentation," IEEE Transactions on Image Processing, vol. 6, no. 2, pp. 251–267, 1997.
  27. Y. Wang, G. W. Wei, and S. Y. Yang, "Iterative filtering decomposition based on local spectral evolution kernel," Journal of Scientific Computing. In press.
  28. Y. Wang, G. W. Wei, and S. Y. Yang, "Mode decomposing evolution equations," Journal of Scientific Computing. In press.
  29. Y. Wang, G. W. Wei, and S. Y. Yang, "Partial differential equation transform: variational formulation and Fourier analysis," International Journal for Numerical Methods in Biomedical Engineering, vol. 27, no. 12, pp. 1996–2020, 2011.
  30. Q. Zheng, S. Y. Yang, and G. W. Wei, "Biomolecular surface construction by PDE transform," International Journal for Numerical Methods in Biomedical Engineering. In press.
  31. L. Hu, S. Y. Yang, Q. Zheng, and G. W. Wei, "PDE transform for hyperbolic conservation laws," SIAM Journal on Scientific Computing. In press.
  32. A. P. Witkin, "Scale-space filtering," in Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, pp. 329–332, 1987.
  33. P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
  34. G. W. Wei, "Generalized Perona-Malik equation for image restoration," IEEE Signal Processing Letters, vol. 6, no. 7, pp. 165–167, 1999.
  35. Y. L. You and M. Kaveh, "Fourth-order partial differential equations for noise removal," IEEE Transactions on Image Processing, vol. 9, no. 10, pp. 1723–1730, 2000.
  36. M. Lysaker, A. Lundervold, and X. C. Tai, "Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time," IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1579–1589, 2003.
  37. G. W. Wei and Y. Q. Jia, "Synchronization-based image edge detection," Europhysics Letters, vol. 59, no. 6, pp. 814–819, 2002.
  38. M. Nitzberg and T. Shiota, "Nonlinear image filtering with edge and corner enhancement," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 8, pp. 826–833, 1992.
  39. R. A. Graf and I. M. Cooke, "Outgrowth morphology and intracellular calcium of crustacean neurons displaying distinct morphologies in primary culture," Journal of Neurobiology, vol. 25, pp. 1558–1569, 1994.
  40. R. Yuste and T. Bonhoeffer, "Morphological changes in dendritic spines associated with long-term synaptic plasticity," Annual Review of Neuroscience, vol. 24, pp. 1071–1089, 2001.
  41. J. C. Fiala, B. Allwardt, and K. M. Harris, "Dendritic spines do not split during hippocampal LTP or maturation," Nature Neuroscience, vol. 5, no. 4, pp. 297–298, 2002.
  42. R. F. Dacheux, M. F. Chimento, and F. R. Amthor, "Synaptic input to the on-off directionally selective ganglion cell in the rabbit retina," Journal of Comparative Neurology, vol. 456, no. 3, pp. 267–278, 2003.
  43. P. J. Broser, R. Schulte, S. Lang et al., "Nonlinear anisotropic diffusion filtering of three-dimensional image data from two-photon microscopy," Journal of Biomedical Optics, vol. 9, no. 6, pp. 1253–1264, 2004.
  44. E. Jurrus, M. Hardy, T. Tasdizen et al., "Axon tracking in serial block-face scanning electron microscopy," Medical Image Analysis, vol. 13, no. 1, pp. 180–188, 2009.
  45. Y. Livneh and A. Mizrahi, "Long-term changes in the morphology and synaptic distributions of adult-born neurons," The Journal of Comparative Neurology, vol. 519, no. 11, pp. 2212–2224, 2011.