Biomedical Signal Processing and Modeling Complexity of Living Systems 2013
Self-Adaptive Image Reconstruction Inspired by Insect Compound Eye Mechanism
Inspired by the mechanisms of imaging and adaptation to luminosity in insect compound eyes (ICE), we propose an ICE-based adaptive reconstruction method (ARM-ICE), which can adjust the sampling visual field of the image according to the environmental light intensity. Through ARM-ICE, the target scene can be compressively sampled by multiple independent channels, while the sampling visual field is regulated to control imaging according to the environmental light intensity. Based on the compressed sensing joint sparse model (JSM-1), we establish an ARM-ICE information processing system. The simulation of a four-channel ARM-ICE system shows that the new method improves the peak signal-to-noise ratio (PSNR) and resolution of the reconstructed target scene under two different light intensities. Furthermore, there is no distinct block effect in the result, and the edges of the reconstructed image are smoother than those obtained by the other two reconstruction methods considered in this work.
Classical reconstruction methods include the nearest neighbor algorithm, bilinear interpolation, and the bicubic interpolation algorithm [1, 2]. According to existing research, the reconstruction accuracy of bilinear interpolation is higher than that of the nearest neighbor algorithm, and the former yields better image reconstruction results. However, images reconstructed by bilinear interpolation sometimes exhibit saw-tooth artifacts and blurring. Although the reconstruction results of bicubic interpolation are better than those of the other methods, it is less efficient and much more time-consuming. As a compromise, bilinear interpolation is often used in research. These algorithms can improve the reconstruction quality of the original image to some extent, but they consider only the correlation between local and global pixels. Interpolation-based reconstruction methods do improve the effect of image reconstruction, but they destroy the high-frequency detail of the original image [4, 5].
Some studies have found that species with compound eyes can inhabit a relatively broad range of environments; for instance, the mantis shrimp can live at depths between 50 m and 100 m underwater. In such an environment, the light conditions change dramatically due to the combined effects of sunlight and the water medium. To adapt to the changing environment, this species, whose ommatidia structure is fixed, must regulate its light acceptance angle adaptively [6, 7]. Through the joint action of the lens and the rhabdome, the mantis shrimp forms images with different degrees of overlap across the whole region of the ommatidia, and the ommatidia capture different optical information depending on the lighting conditions. Under bright and dim conditions, the mantis shrimp regulates the lengths of the rhabdome and lens by relaxing or contracting the myofilament. Through this biological mechanism, the ommatidia visual field can be narrowed or expanded to capture a relatively stable number of incoming photons with better spatial resolution. Ultimately, the imaging system reaches a balance between the visual field and the resolution, as shown in Figure 1. According to Schiff's research, both the imaging angle and the visual field of the mantis shrimp ommatidia change as the light intensity changes. For instance, the ommatidia visual field is 5° in the dim-adapted pattern but only 2° in the bright-adapted pattern, and some other species have similar characteristics [10–14].
Recently, compressed sensing theory has provided a new approach for computer vision [15–17], image acquisition [18, 19], and reconstruction [20–22]. This approach can achieve reconstruction results as effective as those of traditional imaging systems, or even of higher quality (in resolution, SNR, etc.), with fewer sensors, a lower sampling rate, less data volume, and lower power consumption [23–27]. According to compressed sensing theory, compressive sampling can be executed effectively if a corresponding sparse representation space exists. The theory and application of compressed sensing for independent-channel signals have been developed in depth, for example, in single-pixel camera imaging.
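The compressive sampling and recovery pipeline just described (a sparse signal, far fewer random measurements than signal samples, and a sparse decoding algorithm) can be sketched in a few lines. This is an illustrative toy, not the cited single-pixel camera implementation; the problem sizes and the use of orthogonal matching pursuit (OMP) as the decoder are our own choices.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # k-sparse signal
Phi = rng.normal(size=(m, n)) / np.sqrt(m)               # random measurement matrix, m << n
y = Phi @ x                                              # m compressive samples
x_rec = omp(Phi, y, k)                                   # sparse decoding
```

With a Gaussian measurement matrix and this oversampling margin, OMP recovers the sparse signal essentially exactly, which is the effect the single-pixel-style systems above rely on.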
By combining the insect compound eye imaging mechanism with the compressed sensing joint sparse model (JSM-1) [29–32], we use the spatial correlation of multiple sampled signals for compressive sampling and reconstruction. Inspired by the light-dim self-adaptive regulatory mechanism of insect compound eyes (ICE), this paper proposes an ICE-based adaptive reconstruction method (ARM-ICE). The new method can execute multiple compressive samplings of the target scene and, according to the environmental light intensity, regulate the sampling visual field to control imaging. The simulation results show that, in contrast to image-by-image reconstruction and the bilinear interpolation algorithm, the new method can reconstruct the target scene image under two kinds of light intensity conditions with a higher peak signal-to-noise ratio (PSNR). The new method also improves the resolution and the detail of the reconstruction.
In the first section, we describe the imaging control mechanism of insect compound eyes, compressed sensing theory, and current research on bionic compound eye imaging systems. Section 2 demonstrates the ARM-ICE imaging system pattern from three aspects: visual field self-adaptive adjustment, sampling, and reconstruction. Section 3 presents the ARM-ICE system simulation under dim and bright conditions and then analyzes the imaging results and compares the relevant parameters. In Section 4, we conclude with possible topics for future work.
2. Compressed Sensing-Based ARM-ICE Imaging System Pattern
Figure 2 shows the ARM-ICE imaging system pattern. The purple lines represent the bright-environment visual field, while the blue lines represent the dim-environment visual field. The target scene is imaged by the compound eye lens array. The isolation layer is composed of controllable multichannel opening shade blocks, and each port of the shade blocks is connected to a corresponding lenslet of the compound eye lens array. This structure forms a number of independently controllable light-sensitive cells. Each port of the isolation layer opens at a different time, and the feedback signal controls them to regulate their relative positions so that the light from the target scene reaches the n light-sensitive cells. The corresponding area is sparsely sampled by the digital micromirror device, and the measurement data are obtained in the imaging plane. Ultimately, the processor reconstructs the target scene according to the k-sparse property of the sensed data on the wavelet basis Ψ and the uncorrelated measurement matrix Φ.
2.1. ARM-ICE Visual Field Self-Adaptive Regulation
According to biological research, in the insect compound eye system both the imaging angle and the visual field change with the light intensity [33–37]. Inspired by this self-adaptive ability, this paper mimics the light-intensity-sensitive imaging control mechanism of the insect compound eye system, expanding or narrowing the visual field and the overlapping field by regulating the positions of the lenses.
According to the results of biological research, the relationship between the light intensity, the imaging aperture size, and other factors can be described by (1), which is used here to regulate the lens positions and achieve the overlapping visual field. The terms in (1) denote, respectively, the visual field range; the maximum detectable spatial frequency, which can be regarded as a constant; the mean contrast of the scene; the number of photons captured by an input port; and the total variance of the environmental light intensity.
From (1), the visual field can be recalculated whenever the light intensity changes. Based on this biological principle, the visual field range can be regulated according to the environmental light intensity.
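As a toy illustration of this regulation step, the sketch below maps ambient brightness to a visual field angle. Equation (1) itself is not reproduced; a linear interpolation between the dim-adapted (≈5°) and bright-adapted (≈2°) operating points reported in the Schiff data above, anchored at the two brightness values used later in the simulations, stands in for it and is purely an assumption.

```python
def visual_field_deg(intensity_nits, dim_field=5.0, bright_field=2.0,
                     dim_level=103.3661, bright_level=144.8527):
    """Map ambient light intensity (nits) to an ommatidium visual field (degrees).

    Linear interpolation between the dim- and bright-adapted operating points
    is an assumption standing in for equation (1).
    """
    t = (intensity_nits - dim_level) / (bright_level - dim_level)
    t = min(max(t, 0.0), 1.0)  # clamp outside the calibrated range
    return dim_field + t * (bright_field - dim_field)

print(visual_field_deg(144.8527))  # bright scene -> narrow field, 2.0
print(visual_field_deg(103.3661))  # dim scene -> wide field, 5.0
```

Any monotone decreasing map would serve the same purpose: brighter scenes narrow the field, dimmer scenes widen it.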
2.2. Compressive Sampling
The digital micromirror device (DMD) senses the optical information from the lens array and then performs sparse sampling. The principle is to take the inner product of the optical signal perceived by the lens array with a DMD measurement basis vector and use the result as the output voltage of the DMD device at that moment. The output voltage of the photodiode can thus be expressed as the inner product of the desired image with a measurement basis vector [26, 28, 29], where each entry of the measurement vector is determined by the position of the corresponding DMD micromirror: when the micromirror turns +10°, the entry is 1; when the micromirror turns −10°, the entry is 0. The direct current offset can be measured by setting all mirrors to −10°.
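A minimal numerical sketch of this measurement model follows, with illustrative sizes of our own choosing: each DMD pattern is a 0-1 vector (a mirror at +10° contributes to the photodiode, one at −10° does not), the voltage is the inner product of the pattern with the scene, and the calibrated DC offset is subtracted afterwards.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_meas = 64, 32                      # illustrative sizes
x = rng.random(n_pixels)                       # desired image, flattened
# One 0-1 mirror pattern per measurement: 1 = mirror at +10 degrees, 0 = -10 degrees.
Phi = (rng.random((n_meas, n_pixels)) < 0.5).astype(float)
dc = 0.05                                      # DC offset, calibrated with all mirrors at -10 degrees
y = Phi @ x + dc                               # photodiode output voltages
y_corrected = y - dc                           # offset-corrected measurements
```

The rows of `Phi` here play the role of the measurement basis vectors; stacking them over the measurement moments yields the 0-1 measurement matrix used below.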
Based on the measurement principle of a single DMD device, a DMD array can be used to obtain sparse signals for the imaging system. The compound eye lenses and the isolation layer constitute n independent light-sensitive cells, each of which is controlled by the isolation layer to open at a different time. The array jointly senses the target scene data, which consist of the common information shared by the lenses and the specific information of each lens; the resulting vector collects the perception data from the n light-sensitive units. Owing to spatial correlation, the perception data can be regarded as k-sparse on a wavelet basis, with a sparse coefficient vector consisting of the high-frequency subsets at each scale and the low-frequency subset of the wavelet transform. After the light-sensitive lenses obtain the data, the k-sparse signal is used to generate the measurement data of the image plane through the measurement matrix on the DMD device, where the measurement matrix is a 0-1 matrix composed of the DMD output voltages in (2) at moment m. Equation (5) can also be described as follows:
2.3. Joint Reconstruction
According to the multichannel captured data, which are k-sparse on the wavelet basis Ψ, and the incoherence of the measurement matrix Φ with the wavelet basis Ψ, the processor runs the decoding algorithm to reconstruct the target scene:
The optimal sparse solution can be obtained by solving the ℓ1-norm minimization problem, from which the captured data of each lens can be reconstructed. An important issue in the reconstruction process is how to compute the wavelet basis Ψ. Assume the set of captured data is already known. Each light-sensitive sensor captures the target scene from a different view, so the data it obtains can be divided into two parts: a common part and a particular part. The lifting wavelet transform after J recursions produces a low-frequency coefficient set and a high-frequency coefficient set, where P is the linear prediction operator and U is the linear update operator. Using the spatial correlation of the captured data, the common part can be calculated; the particular part contains relatively little information.
After k recursive lifting wavelet transforms, we obtain:
After resetting the wavelet coefficients that fall below a threshold, the sparsely structured coefficients can be used to reconstruct the original signal exactly. Since the linear prediction operator and the linear update operator are both linear operations, the lifting wavelet transform and its inverse are both linear transforms, so the initial data can be reconstructed exactly.
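The predict/update structure of the lifting transform and its exact invertibility can be illustrated with the simplest (Haar-style) lifting step. This one-level sketch uses our own choice of prediction and update operators, not necessarily the wavelet used in the paper.

```python
import numpy as np

def lift_forward(s):
    """One lifting step: split into even/odd samples, predict, then update."""
    even, odd = s[0::2].astype(float), s[1::2].astype(float)
    d = odd - even            # predict: detail = odd - P(even), with P the identity
    a = even + d / 2          # update: approximation = even + U(d)
    return a, d

def lift_inverse(a, d):
    """Invert the lifting step by undoing update, then prediction."""
    even = a - d / 2
    odd = d + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

sig = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
a, d = lift_forward(sig)
print(np.allclose(lift_inverse(a, d), sig))  # True: perfect reconstruction
```

Because each step is linear and is undone exactly in reverse order, the transform is invertible regardless of the signal, which is the property the joint reconstruction above relies on.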
3. Four-Channel ARM-ICE Imaging System Pattern Simulation
According to the ARM-ICE visual field self-adaptive adjustment mechanism under different ambient light intensities described in Section 2.1, in this section we simulate a four-channel ARM-ICE imaging system. When the ambient light intensity becomes strong, the lens array regulates the relative lens positions automatically according to (1). The simulation results are shown in Figure 3. Figure 3(a) is the target scene under a strong illumination environment, whose brightness value is 144.8527 nits. Figure 3(b) is the joint reconstruction image from the photoelectric coupler array; its reconstructed PSNR is 41.9113 dB. Figure 3(c) is the image reconstructed by the linear interpolation method; its PSNR is 27.8246 dB at the same sampling rate as ARM-ICE. Figure 3(d) is the image-by-image reconstruction; its PSNR is 27.8246 dB at the same sampling rate as ARM-ICE.
When the surroundings are dim, the compound eye lens array contracts toward the central area, sacrificing visual field to improve the reconstruction resolution of the target scene. The simulation results are shown in Figure 4. Figure 4(a) is the target scene under dim conditions, whose brightness value is 103.3661 nits; substituting this brightness value into (1) gives the lens positions at that moment. Figure 4(b) is the joint reconstruction image from the photoelectric coupler array; its reconstructed PSNR is 44.4705 dB. Figure 4(c) is the image reconstructed by the linear interpolation method; its PSNR is 36.5021 dB at the same sampling rate. Figure 4(d) is the image-by-image reconstruction result, whose PSNR is 29.5852 dB.
In terms of reconstruction quality, the result of the linear interpolation method is superior to that of image-by-image reconstruction, but it still shows an obvious block effect and a lack of smoothness along edges. Correspondingly, the image reconstructed by ARM-ICE shows a significant improvement in resolution. From Figures 3 and 4, we can see that there is no distinct block effect in the ARM-ICE result and that the edges of the reconstructed image are smoother than those of the other two reconstruction methods studied in this work.
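For reference, the PSNR figures quoted throughout follow the standard definition PSNR = 10 log10(peak² / MSE). A minimal helper, assuming an 8-bit peak value of 255:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstructed, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
rec = ref + 1.0                  # every pixel off by one grey level -> MSE = 1
print(round(psnr(ref, rec), 2))  # 10*log10(255^2) -> 48.13
```

Higher PSNR means smaller mean squared error against the reference, which is why it is used below to compare the three reconstruction methods at matched sampling rates.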
Figure 5 compares PSNR versus sampling rate under the low-light and strong-light (144.8527 nits) conditions. The three black lines show the comparison under the strong-light condition: the black dotted line shows the result of ARM-ICE, the black diamond line shows the result of bilinear interpolation, and the black five-pointed-star line shows the result of image-by-image reconstruction. The figure shows that, under the strong-light condition, the PSNR of ARM-ICE is higher than those of bilinear interpolation and image-by-image reconstruction across the sampling rates tested.
The three red lines show the comparison under the low-light condition (103.3661 nits): the red dotted line shows the result of ARM-ICE reconstruction, the red diamond line shows the result of bilinear interpolation, and the red five-pointed-star line shows the result of image-by-image reconstruction. The figure shows that, when the target scene is under the low-light condition, the PSNR of ARM-ICE at each sampling rate is likewise higher than those of bilinear interpolation and image-by-image reconstruction.
Inspired by the imaging mechanism and the adaptive regulation mechanism of insect compound eyes, this paper proposes a reconstruction method that adaptively regulates the scale of the sampling area according to the ambient light intensity. The imaging system pattern of the new method can complete multichannel independent sampling of the target scene almost simultaneously, while the scale of the sampling area and the optical signal redundancy are regulated adaptively to achieve imaging control. Compared with the traditional methods, the resolution of the image reconstructed by the ARM-ICE method is significantly improved. The reconstructed image has three features: higher resolution, no distinct block effect, and smooth edges.
Simulation results indicate that the new method yields a higher PSNR for the reconstructed image under both light conditions. However, the proposed algorithm improves the reconstruction quality under low-light conditions at the cost of the scale of the visual field. Therefore, a key issue for future work is how to reconstruct high-resolution large scenes under low-light conditions.
This paper was supported by the National Natural Science Foundation of China (No. 61263029 and No. 61271386). The authors thank Wang Hui, a graduate student at Hohai University, for his help with the research work.
R. C. Kenneth and R. E. Woods, Digital Image Processing, Publishing House of Electronics Industry, Beijing, China, 2002.
F. G. B. D. Natale, G. S. Desoli, and D. D. Giusto, “Adaptive least-squares bilinear interpolation (ALSBI): a new approach to image-data compression,” Electronics Letters, vol. 29, no. 18, pp. 1638–1640, 1993.
L. Chen and C. M. Gao, “Fast discrete bilinear interpolation algorithm,” Computer Engineering and Design, vol. 28, p. 15, 2007.
S. Y. Chen and Z. J. Wang, “Acceleration strategies in generalized belief propagation,” IEEE Transactions on Industrial Informatics, vol. 8, p. 1, 2012.
N. M. Kwok, X. P. Jia, D. Wang et al., “Visual impact enhancement via image histogram smoothing and continuous intensity relocation,” Computers & Electrical Engineering, vol. 37, p. 5, 2011.
L. Z. Xu, M. Li, A. Y. Shi et al., “Feature detector model for multi-spectral remote sensing image inspired by insect visual system,” Acta Electronica Sinica, vol. 39, p. 11, 2011.
F. C. Huang, M. Li, A. Y. Shi et al., “Insect visual system inspired small target detection for multi-spectral remotely sensed images,” Journal on Communications, vol. 32, p. 9, 2011.
H. Schiff, “A discussion of light scattering in the Squilla rhabdom,” Kybernetik, vol. 14, no. 3, pp. 127–134, 1974.
B. Dore, H. Schiff, and M. Boido, “Photomechanical adaptation in the eyes of Squilla mantis (Crustacea, Stomatopoda),” Italian Journal of Zoology, vol. 72, no. 3, pp. 189–199, 2005.
H. Ikeno, “A reconstruction method of projection image on worker honeybees' compound eye,” Neurocomputing, vol. 52–54, pp. 561–566, 2003.
J. Gál, T. Miyazaki, and V. B. Meyer-Rochow, “Computational determination of refractive index distribution in the crystalline cones of the compound eye of Antarctic krill (Euphausia superba),” Journal of Theoretical Biology, vol. 244, no. 2, pp. 318–325, 2007.
M. F. Duarte and R. G. Baraniuk, “Spectral compressive sensing,” IEEE Transactions on Signal Processing, vol. 6, 2011.
L. Z. Xu, X. F. Ding, X. Wang, G. F. Lv, and F. C. Huang, “Trust region based sequential quasi-Monte Carlo filter,” Acta Electronica Sinica, vol. 39, no. 3, pp. 24–30, 2011.
J. Treichler and M. A. Davenport, “Dynamic range and compressive sensing acquisition receivers,” in Proceedings of the Defense Applications of Signal Processing (DASP '11), 2011.
A. Y. Shi, L. Z. Xu, and F. Xu, “Multispectral and panchromatic image fusion based on improved bilateral filter,” Journal of Applied Remote Sensing, vol. 5, Article ID 053542, 2011.
L. Z. Xu, X. F. Li, and S. X. Yang, “Wireless network and communication signal processing,” Intelligent Automation & Soft Computing, vol. 17, pp. 1019–1021, 2011.
D. Baron, B. Wakin, and S. Sarvotham, “Distributed compressed sensing,” Rice University, 2006.
D. Baron and M. F. Duarte, “An information-theoretic approach to distributed compressed sensing,” in Proceedings of the Allerton Conference on Communication, Control, and Computing, vol. 43, Allerton, Ill, USA, 2005.
D. Baron, M. F. Duarte, S. Sarvotham, M. B. Wakin, and R. G. Baraniuk, “Distributed compressed sensing of jointly sparse signals,” in Proceedings of the 39th Asilomar Conference on Signals, Systems and Computers, pp. 1537–1541, November 2005.
M. B. Wakin, S. Sarvotham, and M. F. Duarte, “Recovery of jointly sparse signals from few random projections,” in Proceedings of the Workshop on Neural Information Processing Systems, 2005.