Mathematical Problems in Engineering

Special Issue: Information and Modeling in Complexity

Research Article | Open Access

Volume 2012 | Article ID 356454 | 11 pages

A Novel SFP Extracting/Tracking Framework for Rigid Microstructure Measurement

Academic Editor: Shengyong Chen
Received: 04 May 2011
Accepted: 15 Jun 2011
Published: 15 Aug 2011


3D measurement and reconstruction of rigid microstructures rely on precise matches of structural feature points (SFPs) between microscopic images. However, most existing algorithms fail to extract and track such points at the microscale because of the poor quality of microscopic images. This paper presents a novel framework for extracting and matching SFPs in microscopic images under the two stereo microscopic imaging modes of our system: the fixed-positioning stereo mode and the continuous-rotational stereo mode. A 4-DOF (degree-of-freedom) micro visual measurement system is developed for 3D projective structural measurement of rigid microstructures using the SFPs obtained from microscopic images by the proposed framework. Under the fixed-positioning stereo mode, a similarity-pictorial structure algorithm is designed to preserve illumination invariance in SFP matching, while under the continuous-rotational stereo mode a method based on a particle filter with affine transformation is developed for accurate tracking of multiple SFPs in image sequences. The experimental results demonstrate that the problems of visual distortion, illumination variability, and irregular motion estimation in the micro visual measurement process can be effectively resolved by the proposed framework.

1. Introduction

Micro stereovision technology makes it possible to achieve 2D/3D information extraction and 3D reconstruction. It has been extensively used in micromanipulation, microassembly, microrobot navigation, bioengineering, and so forth. 3D reconstruction of small objects in microscopic images is challenging because of their small size. Existing methods such as structured-light-based 3D reconstruction [1, 2] and binocular stereo [3] are not directly suitable for micro applications and need to be modified. For rigid 3D microstructures, the accurate matching of structural feature points between microscopic images is one of the most important steps in micro stereovision computation. In this paper we define the structural feature points (SFPs) as the key points that fully represent the rigid microstructure, namely the intersection points of three planes on the micro 3D structure (Figure 1(a)). Although vision methods based on the tracking of feature points play an important role in normal-scale computer vision applications such as 3D reconstruction, segmentation, and recognition, there is little research focused on microscale applications except for medical ones. This is due to the poor contrast and low quality of microscopic images, resulting mainly from the imaging complexity of the optical microscope. The complex structure of a microscopic lens system [4] requires a variety of optical elements that can introduce a wide array of image distortions.

In this paper we present a framework for extracting and matching SFPs in microscopic images that deals with the fixed-positioning and continuous-rotational stereo imaging modes, respectively. We believe the proposed framework can play an important role in 3D reconstruction from microscopic images.

First, microscopic images suffer from specific drawbacks such as blurred edges, geometrical distortions, serious dispersion, and noise disturbance (especially from illumination changes), which make SFP detection and matching more difficult. As a result, many popular key point detecting and matching methods [5–9] fail to achieve, on microscopic images (Figures 1(b)–1(d)), an accuracy comparable to their results on normal-scale images. New methods suitable for SFP extraction and matching in microscopic images are therefore highly desirable. We develop a novel illumination-invariant method based on a similarity-pictorial structure algorithm (similarity-PS) to solve the SFP extraction and matching problem in the fixed-positioning stereo mode.

Second, existing feature point tracking algorithms for continuous image sequences are usually classified as template-based, motion-parameter-based, or color-patch-based methods [10], but none of them meets the needs of SFP tracking in microscale images. Among the representative approaches, continuously adaptive mean shift (Camshift) [11] takes the color histogram as the object model for tracking in rich-color images, but it performs poorly when tracking feature points against complicated backgrounds with areas of similar color. Yao and Chellappa first designed a probabilistic data association filter with an extended Kalman filter (EKF) [12] to estimate the rotational motion and continuously track feature points across frames, which effectively resolves the occlusion problem; however, the real location must be predicted by probability analysis during arbitrary motion of the object.

Buchanan and Fitzgibbon proposed combining local and global motion models with the Kanade-Lucas-Tomasi (KLT) tracker to accurately track multiple feature points of nonrigid moving objects [13], but this strategy fails if motion predictions cannot be made for the subsequences consisting of the initial frames. Kwon et al. presented a template tracking approach that applies the particle filtering (PF) algorithm on the affine group [14, 15]; it can accurately track a large single object but performs well only for single-template tracking. We extend their method to the multiple-point tracking case in the proposed monocular microvision system in the continuous-rotational stereo mode. Since the images of the measured rigid microstructure keep a fixed spatial relationship of global structures and the affine invariance of local features during rotation, the tracking problem is greatly simplified with the affine transformation and covariance descriptor in our framework.

2. Proposed Framework

2.1. Imaging Modes of Proposed Micro Stereoscopic Measurement System

We developed a 4-DOF micro stereoscopic measurement system for the measurement of 3D micro objects. The system consists of a 4-DOF stage, a fixed-mounted illumination, and an SLM-based optical imaging system. A monocular tube microscope is used in the system to reduce the imaging complexity. In our system the stereoscopic imaging relationship can be realized in two modes.

(i) Fixed-Positioning Stereo Mode (Mode 1)
A rotational transform by a fixed rotation angle is performed, followed by a tilting motion. Images are captured at the end of each movement. Problem: the changes of illumination direction bring large contrast and intensity changes to the microscopic images captured at different positions.

(ii) Continuous-Rotational Stereo Mode (Mode 2)
A tilting movement is performed first, followed by a continuous rotational movement. Image sequences are captured during the rotation. Problem: the motion blur caused by the continuous rotational motion decreases the quality of the microscopic image sequences.

2.2. Similarity-PS Method for SFPs Matching of Mode 1
2.2.1. Pictorial Structure Method

A PS model is represented by a collection of parts with spatial relationships between certain pairs. It can be expressed by a graph $G = (V, E)$, where the vertices $V = \{v_1, v_2, \ldots, v_n\}$ correspond to the parts and each edge $(v_i, v_j) \in E$ connects a pair of related parts $v_i$ and $v_j$. An object is expressed by a configuration $L = (l_1, l_2, \ldots, l_n)$, where $l_i$ is the location of part $v_i$. For each part $v_i$, the appearance match cost $a_i(I, l_i)$ measures how well the part matches the image $I$ when placed at location $l_i$; simple pixel template matching is used for this cost in [16]. The connections between part locations give the structure match cost: $t_{ij}(l_i, l_j)$ measures how well the locations $l_i$ of $v_i$ and $l_j$ of $v_j$ agree with the object model. The total cost for a PS therefore combines an appearance term and a structure term, and the best configuration is

$$L^* = \arg\min_{L} \left( \sum_{v_i \in V} a_i\left(I, l_i\right) + \sum_{(v_i, v_j) \in E} t_{ij}\left(l_i, l_j\right) \right). \tag{2.1}$$

The best match of SFPs is obtained by minimizing this cost.
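For a small model, the cost in (2.1) can be minimized by brute force over all joint configurations. The sketch below is illustrative only: the cost tables and candidate lists are hypothetical inputs, and practical PS implementations use dynamic programming over tree-structured models rather than exhaustive enumeration.

```python
from itertools import product

def match_pictorial_structure(appearance_cost, structure_cost, candidates, edges):
    """Brute-force minimization of the PS cost (2.1) for a small model.

    appearance_cost[i][c] -- cost a_i(I, l) of placing part i at its candidate c
    structure_cost(i, j, li, lj) -- pairwise cost t_ij(l_i, l_j)
    candidates[i] -- list of candidate locations for part i
    edges -- list of (i, j) pairs of connected parts
    """
    best_cost, best_config = float("inf"), None
    # enumerate every joint configuration L = (l_1, ..., l_n)
    for config in product(*[range(len(c)) for c in candidates]):
        cost = sum(appearance_cost[i][config[i]] for i in range(len(candidates)))
        cost += sum(structure_cost(i, j, candidates[i][config[i]],
                                   candidates[j][config[j]]) for i, j in edges)
        if cost < best_cost:
            best_cost = cost
            best_config = [candidates[i][config[i]] for i in range(len(candidates))]
    return best_config, best_cost
```

Enumeration is exponential in the number of parts, which is acceptable here only because an SFP model has few parts and each keeps a short candidate list.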

2.2.2. The Local Self-Similarity Descriptor

The self-similarity descriptor was proposed by Buchanan and Fitzgibbon [13].

Figure 2 illustrates the procedure for generating the self-similarity descriptor $d_q$ centered at a pixel $q$ of the input image.

The green square is a small image patch (typically $5 \times 5$ or $3 \times 3$) centered at $q$. The larger blue square is a bigger image region (typically $30 \times 30$ to $40 \times 40$), also centered at $q$. First, the small patch is compared with the larger region using the sum of squared differences (SSD); for color images, a transform to the CIE $Lab$ color space is applied first. Second, the SSD surface is normalized to eliminate illumination influences. Finally, the normalized surface is transformed into a correlation surface $S_q(x, y)$:

$$S_q(x, y) = \exp\left( -\frac{\mathrm{SSD}_q(x, y)}{\max\left(\mathrm{var}_{\mathrm{noise}}, \mathrm{var}_{\mathrm{auto}}(q)\right)} \right), \tag{2.2}$$

where $\mathrm{SSD}_q(x, y)$ is the normalized correlation surface, $\mathrm{var}_{\mathrm{noise}}$ is a constant corresponding to acceptable photometric variations (in color, illumination, or due to noise; 150 in this paper), and $\mathrm{var}_{\mathrm{auto}}(q)$ is the maximal variance of the differences of all patches within a very small neighborhood of $q$ (of radius 1) relative to the patch centered at $q$.

The correlation surface $S_q(x, y)$ is then transformed into log-polar coordinates centered at $q$ and partitioned into $20 \times 4 = 80$ bins ($m = 20$ angles, $n = 4$ radial intervals). We take the maximal value in every bin, which helps the descriptor adapt to nonrigid deformation. The $m \cdot n$ maximal values form the self-similarity descriptor vector centered at $q$. Finally, this descriptor vector is normalized to the range $[0, 1]$ by linearly stretching its values.
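The construction above can be sketched roughly as follows. This is a simplified illustration, not the authors' exact implementation: the bin boundaries, the log-radius spacing, and the omission of the Lab transform and of border handling are all simplifying assumptions.

```python
import numpy as np

def self_similarity_descriptor(img, q, patch=5, region=30, m=20, n=4,
                               var_noise=150.0):
    """Sketch of the local self-similarity descriptor around pixel q=(row,col)
    of a grayscale image (the paper uses CIE Lab for color images)."""
    r0, c0 = q
    hp, hr = patch // 2, region // 2
    tpl = img[r0 - hp:r0 + hp + 1, c0 - hp:c0 + hp + 1].astype(float)
    # SSD of the central patch against every patch position in the region
    ssd = np.zeros((region, region))
    for dr in range(-hr, hr):
        for dc in range(-hr, hr):
            win = img[r0 + dr - hp:r0 + dr + hp + 1,
                      c0 + dc - hp:c0 + dc + hp + 1].astype(float)
            ssd[dr + hr, dc + hr] = np.sum((win - tpl) ** 2)
    # var_auto(q): maximal SSD within a radius-1 neighborhood of q
    var_auto = max(ssd[hr + dr, hr + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    surf = np.exp(-ssd / max(var_noise, var_auto))          # equation (2.2)
    # log-polar binning: keep the maximum of each of the m*n bins
    desc = np.zeros((m, n))
    for dr in range(-hr, hr):
        for dc in range(-hr, hr):
            if dr == 0 and dc == 0:
                continue
            rad = np.hypot(dr, dc)
            a = int(m * (np.arctan2(dr, dc) + np.pi) / (2 * np.pi + 1e-9))
            b = min(n - 1, int(n * np.log1p(rad) / np.log1p(hr * np.sqrt(2))))
            desc[a, b] = max(desc[a, b], surf[dr + hr, dc + hr])
    desc = desc.ravel()
    lo, hi = desc.min(), desc.max()
    return (desc - lo) / (hi - lo + 1e-12)   # linearly stretch to [0, 1]
```

Taking the per-bin maximum rather than a sum is what gives the descriptor its tolerance to small nonrigid deformations.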

2.2.3. Similarity-PS Algorithm

We introduce the local self-similarity descriptor-matching approach for SFP detection into a simplified PS model. This subsection covers the three main steps as follows.

Extraction of the “Template” Description T
Manually marked SFP coordinates on the microstructure are used for the training process. We describe the marked points by self-similarity descriptors. For every point, the descriptor values are averaged over all training examples to form a trained descriptor, which acts as the appearance description of the model.

Dense Computation of the Local Self-Similarity Descriptor
These descriptors $d_q$ are computed throughout the tested image $I$, two pixels apart from each other in this paper. Higher precision can be obtained if the search covers every pixel.

Detection of Similar Descriptors of T within I
The regions of interest in $I$ are those centered at candidate SFPs, that is, the locations whose self-similarity descriptor vectors have the smallest weighted Euclidean distance to the trained descriptor. The coordinates of all interest points in $I$ are recorded as the locations of the candidate key points; there are usually many candidate key points for one marked point. In our experiments 200 points are chosen (the number varies for different templates), and the Euclidean distances between the candidate key point descriptors and the trained descriptors, linearly normalized to the range $[0, 1]$, give the appearance cost function $a_i(I, l_j)$. We substitute the appearance model in (2.1) with the obtained self-similarity model; the best-matching SFPs for the PS model are then obtained by minimizing the total cost in (2.1).
Since the microscopic images are not always of the same size, we calculate the self-similarity descriptors at multiple scales, both on the patterns and on the testing images, to ensure scale invariance. Moreover, scale-invariant structures are obtained by multiplying all the mean and variance values of the structural distances between SFPs by a scale factor.
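The third step above, turning candidate distances into the normalized appearance costs $a_i(I, l_j)$, amounts to a nearest-neighbor search followed by linear stretching. A minimal sketch, assuming descriptors are stored as rows of an array (the function name and array layout are illustrative):

```python
import numpy as np

def appearance_costs(dense_descs, trained_desc, k=200):
    """Select the k candidate key points closest to a trained descriptor and
    return their indices together with Euclidean distances linearly
    normalized to [0, 1] -- the appearance costs a_i(I, l_j)."""
    d = np.linalg.norm(dense_descs - trained_desc, axis=1)
    idx = np.argsort(d)[:k]                     # k best candidates
    dk = d[idx]
    costs = (dk - dk.min()) / (dk.max() - dk.min() + 1e-12)
    return idx, costs
```

The normalization puts appearance and structure costs on comparable scales before they are summed in (2.1).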

2.3. PF-Based Multiple SFPs Tracking of Mode 2
2.3.1. The Affine Motion of Tracking Points

A single tracked point $b_t^{(i)} = (x_t^{(i)}, y_t^{(i)})^T$ ($i = 1, \ldots, k$ and $t = 1, \ldots, N$, indexing the tracked points and frames, resp.) cannot by itself support accurate tracking in micro sequences; it is therefore necessary to use templates. Given a set of initial center coordinates $b_t = (x_t^{(1)}, y_t^{(1)}, \ldots, x_t^{(k)}, y_t^{(k)})^T$, a set of small point templates $T(X_t^{(i)})$ is defined as a group of windows surrounding each tracked point, describing its main characteristics. The coordinates of each $T(X_t^{(i)})$ are obtained by multiplying the homogeneous coordinates by the affine transformation matrix $X_t^{(i)}$, so the most important step in the tracking process is to estimate the state $X_t^{(i)}$ of every point template in each frame. An affine motion can be decomposed into a translation $M_t^{(i)}$ and a rotation $R_t$ (all points of a rigid object rotate with the same angular velocity). With translation vectors $M_t^{(i)} = (M_{x_t}^{(i)}, M_{y_t}^{(i)})^T$, the affine motion action on a point can be written as $l(b_t^{(i)}) = R_t b_t^{(i)} + M_t^{(i)}$, or equivalently

$$\begin{bmatrix} R_t & M_t^{(i)} \\ 0 & 1 \end{bmatrix} = \exp \begin{bmatrix} \Gamma & \gamma \\ 0 & 0 \end{bmatrix}, \tag{2.3}$$

where $l$ is the linear function representing the affine motion of the points $b_t^{(i)}$, $R_t$ is an invertible $2 \times 2$ rotation matrix, and $M_t^{(i)} \in \mathbb{R}^2$ is a $2 \times 1$ translation vector. The second form of (2.3), using the exponential map, conveniently denotes the affine transformation of the imaging process.
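Equation (2.3) can be checked numerically: exponentiating the generator matrix yields the rotation block $R_t$ and a translation block. A minimal sketch under stated assumptions (the series-based `expm` and the chosen generator are illustrative, not part of the authors' implementation):

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via its power series (adequate for small generators)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def affine_from_generator(theta, gamma):
    """Build [R_t, M_t; 0, 1] = exp([Gamma, gamma; 0, 0]) as in (2.3),
    with Gamma = theta * [[0, -1], [1, 0]] the rotation generator."""
    G = np.zeros((3, 3))
    G[:2, :2] = theta * np.array([[0.0, -1.0], [1.0, 0.0]])
    G[:2, 2] = gamma
    T = expm(G)
    return T[:2, :2], T[:2, 2]  # R_t and M_t

# rotating by 90 degrees with generator translation gamma = (1, 0)
R, M = affine_from_generator(np.pi / 2, [1.0, 0.0])
b_new = R @ np.array([2.0, 0.0]) + M  # l(b) = R_t b + M_t
```

Note that the translation block of the exponential is not $\gamma$ itself but $\gamma$ filtered through the rotation generator, which is why $M \neq (1, 0)$ in this example.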

2.3.2. Tracking by Particle Filter with Affine Motion

State estimation and sampling for the tracked points in each frame: the efficiency of the particle filter tracking algorithm mainly relies on the importance (random) sampling and the calculation of the weights $w_t^{(i,j)}$. The measurement likelihood $p(y_t^{(i)} \mid X_t^{(i)})$ is independent of the earlier state particles $X_{0:t-1}^{(i,j)}$. The state equation can be expressed in the discrete setting as

$$X_t^{(i)} = X_{t-1}^{(i)} \exp\left( a_i \log\left( \left(X_{t-2}^{(i)}\right)^{-1} X_{t-1}^{(i)} \right) + \sum_{m=1}^{3} \xi_m \varepsilon \tau_t \right) + v_t, \tag{2.4}$$

where $v_t$ is zero-mean Gaussian measurement noise, $a_i$ is the AR process parameter, $\tau_t$ is zero-mean Gaussian noise, and $\varepsilon = (1/12)^{1/2}$ corresponds to capturing 12 frames per second. The affine transformation basis elements $\xi_m$ ($m = 1, \ldots, 3$) denote the translation, shearing, or scaling transformations applied to the templates.
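As a loose illustration of the dynamics in (2.4), the sketch below propagates particles in rotation-angle space only: each particle advances by a fraction $a_i$ of the previous angular increment plus scaled Gaussian noise. This is a deliberate simplification (the full method samples on the affine group with the basis elements $\xi_m$), and all parameter values here are hypothetical:

```python
import numpy as np

def propagate_particles(theta_prev, theta_prev2, a=0.8,
                        eps=(1.0 / 12.0) ** 0.5, noise_scale=0.01,
                        n_particles=100, rng=None):
    """Toy AR-style propagation in angle space:
    theta_t = theta_{t-1} + a * (theta_{t-1} - theta_{t-2}) + eps * noise,
    with eps = sqrt(1/12) for 12 frames per second, as in (2.4)."""
    rng = np.random.default_rng(0) if rng is None else rng
    tau = rng.normal(0.0, 1.0, n_particles)   # zero-mean Gaussian noise
    return theta_prev + a * (theta_prev - theta_prev2) + eps * noise_scale * tau
```

The AR term carries the smooth rotational motion forward between frames, while the noise term lets the filter explore deviations caused by stage vibration or imaging blur.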

It is necessary to approximate the particles $\{R_t^{(i,1)}, \ldots, R_t^{(i,G)}\}$ and resample according to their weights at every time step. To optimize the computational procedure and avoid direct calculation, all resampled particles are expected to be quite similar to each other. Denoting by $\overline{M}_t^{(i)}$ the arithmetic mean of the $M_t^{(i,j)}$, the sample mean of the $X_t^{(i,j)}$ can be approximated as

$$\begin{bmatrix} \overline{R}_t^{(i)} & \overline{M}_t^{(i)} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_{t,\max}^{(i)} \exp\left( \dfrac{1}{G} \displaystyle\sum_{j=1}^{G} \log\left( \left(R_{t,\max}^{(i)}\right)^{-1} R_t^{(i,j)} \right) \right) & \overline{M}_t^{(i)} \\ 0 & 1 \end{bmatrix}. \tag{2.5}$$

Calculating the covariance descriptor for the tracked point templates: the spatial structure constraint vector
$$\Phi = \left( x_t^{(i)},\, y_t^{(i)},\, b_t^{(i,d)},\, b_t^{(i,\theta)},\, I\left(X_t^{(i)}\right),\, I_x,\, I_y,\, \tan^{-1}\left(I_x / I_y\right),\, I_{xx},\, I_{yy} \right)^T$$
is added in our method to minimize the influence of illumination, distortion, motion blur, and noise interference. Here $b_t^{(i,d)}$ and $b_t^{(i,\theta)}$ represent the distance and the angle between the origin (or polar axis) and each tracked pixel in the polar coordinate system, $I(X_t^{(i)})$ denotes the image pixel intensity, and $I_x$, $I_y$, $I_{xx}$, $I_{yy}$ represent the first- and second-order image derivatives in the Cartesian coordinate system [14, 16]. With $s$ the size of the template window and $\overline{\Phi}$ the mean value of $\Phi$ over the template $X_t^{(i)}$, the covariance descriptor $S$ of the point template patches is given as

$$S\left(X_t^{(i)}\right) = \frac{1}{s-1} \sum_{p=1}^{s} \left( \Phi_p - \overline{\Phi} \right) \left( \Phi_p - \overline{\Phi} \right)^T. \tag{2.6}$$

Measuring the relative distance: to ensure that the covariance descriptors change smoothly, it is necessary to collect image covariance descriptors and calculate the principal eigenvectors and the geodesic distance between the two group elements $\{S(X_t^{(i)}), \overline{S}\}$ and $\{I(X_t^{(i)}), \overline{I}\}$. The measurement function is defined using the distance-from-feature space and distance-in-feature space for similarity comparison [18]; the measurement equation in [19] can then be expressed more explicitly as

$$y_t^{(i)} = \left\| \log S\left(X_t^{(i)}\right) - \log \overline{S} \right\|^2 - \sum_{n=1}^{M} c_n^2 + \sum_{n=1}^{M} \frac{c_n^2}{\rho_n} + \left\| I\left(X_t^{(i)}\right) - \overline{I} \right\| + v_t^{(i)}, \tag{2.7}$$

where $c_n$ are the projection coefficients and $\rho_n$ the eigenvalues corresponding to the principal eigenvectors (these two quantities yield the distance-in-feature space), and $\overline{I}$ represents the mean point intensity.
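The covariance descriptor of (2.6) is straightforward to compute once the per-pixel feature vectors $\Phi_p$ are stacked as rows of a matrix. A minimal sketch (the feature layout is illustrative; the paper's $\Phi$ has ten components):

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance descriptor S of (2.6). `features` is an (s, d) array whose
    rows are the feature vectors Phi_p of the s pixels in a template window
    (position, intensity, image derivatives, ...)."""
    mu = features.mean(axis=0)          # mean feature vector over the window
    diff = features - mu
    s = features.shape[0]
    return diff.T @ diff / (s - 1)      # (1/(s-1)) * sum of outer products
```

Because the descriptor is a covariance of local features rather than raw intensities, it is comparatively insensitive to the illumination and blur artifacts that dominate microscopic sequences.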
The measurement likelihood is described as

$$p\left( y_t^{(i)} \mid X_t^{(i)} \right) \propto \exp\left( - y_t^{(i)T} R^{-1} y_t^{(i)} \right), \tag{2.8}$$

where $R$ is the covariance of the zero-mean Gaussian noise $v_t$. Once the measurement likelihood $p(y_t^{(i)} \mid X_t^{(i)})$ is obtained, we can calculate and normalize the importance weights of the tracked points and thereby realize multiple-SFP tracking in long microscopic sequences.
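Turning the likelihood (2.8) into normalized importance weights is the last step before resampling. A minimal sketch assuming a scalar residual per particle (the full method uses the vector residual of (2.7)):

```python
import numpy as np

def normalized_weights(residuals, R=1.0):
    """Importance weights from the likelihood (2.8): w_j proportional to
    exp(-y_j^T R^{-1} y_j), then normalized to sum to one. Each particle's
    measurement residual y_j is taken as a scalar here for simplicity."""
    y = np.asarray(residuals, dtype=float)
    w = np.exp(-y * y / R)
    return w / w.sum()
```

Particles whose templates deviate little from the reference covariance descriptor receive large weights and dominate the resampled set, which keeps the tracked SFPs locked onto the rotating structure.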

3. Experimental Results and Discussion

The SFP detecting/matching experiments are implemented on a standard microchip with 3D microstructures. Some of the results of both the PS algorithm and the proposed method are shown in Figure 3, while the accumulated error, measured as the Euclidean distance in pixels between the tracked points and the ground-truth positions, is compared in Figure 4. The results show that our method enhances the correct detection rate of SFPs under large illumination changes in Mode 1. The reason for this improvement is that the gray-value changes are largely caused by the varying illumination while the local structure remains static; introducing the local self-similarity descriptors into PS is therefore a good choice for our application.

We also provide some SFP tracking results in Mode 2 by our proposed method and by KLT for comparison, as shown in Figure 5. The tracking results of the proposed method show a clearly higher localization accuracy of the SFPs in long micro image sequences.

To further demonstrate the effectiveness of our method, we present some of the 3D projective reconstruction results based on the tracking results in Figure 6. The proposed method accurately reconstructs the 3D structure from the microscopic sequences, while the KLT method fails.

4. Conclusions

This paper developed a framework for SFP extraction and matching in microscopic images for 3D micro stereoscopic measurement in two stereo imaging modes. The proposed SFP tracking framework ensures illumination invariance and robustness in the fixed-positioning and continuous-rotational stereo modes, respectively. The effectiveness of our tracking framework has been empirically verified in visual 3D projective reconstruction with microscopic images. In Mode 2 of our system, there is an inevitable tracking error caused by motion blur; we therefore plan to use the method described in [20] to address this problem in future work. Our future research will also focus on micro visual measurement planning (similar to the method described in [21]) and optimal 3D micro model representation [22].


Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC-60870002, 60802087, 60573123, 60605013), NCET, and the Science and Technology Department of Zhejiang Province (2009C21008, 2010R10006, Y1090592, Y1110688).


References

  1. S. Y. Chen, Y. F. Li, and J. Zhang, “Vision processing for realtime 3-D data acquisition based on coded structured light,” IEEE Transactions on Image Processing, vol. 17, no. 2, pp. 167–176, 2008.
  2. S. Y. Chen, Y. F. Li, Q. Guan, and G. Xiao, “Real-time three-dimensional surface measurement by color encoded light projection,” Applied Physics Letters, vol. 89, no. 11, Article ID 111108, 2006.
  3. A. Klaus, M. Sormann, and K. Karner, “Segment-based stereo matching using belief propagation and a self-adapting dissimilarity measure,” in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 15–18, August 2006.
  4. H. W. Schreier, D. Garcia, and M. A. Sutton, “Advances in light microscope stereo vision,” Experimental Mechanics, vol. 44, no. 3, pp. 278–288, 2004.
  5. C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the Alvey Vision Conference, pp. 189–192, 1988.
  6. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  7. K. Mikolajczyk and C. Schmid, “Scale and affine invariant interest point detectors,” International Journal of Computer Vision, vol. 60, no. 1, pp. 63–86, 2004.
  8. S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 509–522, 2002.
  9. Y. Ke and R. Sukthankar, “PCA-SIFT: a more distinctive representation for local image descriptors,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. II506–II513, July 2004.
  10. Z. Luo, Y. Zhuang, Y. Pan, and F. Liu, “Feature tracking algorithms based on two cameras,” Journal of Computer-Aided Design and Computer Graphics, vol. 14, no. 7, pp. 646–650, 2002.
  11. L. Ye and Y. Wang, “Real-time tracking of the shoot point from light pen based on camshift,” in Proceedings of the 1st International Conference on Intelligent Networks and Intelligent Systems (ICINIS '08), pp. 560–564, November 2008.
  12. Y.-S. Yao and R. Chellappa, “Dynamic feature point tracking in an image sequence (EKF),” in Proceedings of the International Conference on Pattern Recognition (ICPR '94), vol. 12, pp. 654–657, 1994.
  13. A. Buchanan and A. Fitzgibbon, “Combining local and global motion models for feature point tracking,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), pp. 1–8, June 2007.
  14. J. Kwon, K. M. Lee, and F. C. Park, “Visual tracking via geometric particle filtering on the affine group with optimal importance functions,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 991–998, June 2009.
  15. J. Kwon and F. C. Park, “Visual tracking via particle filtering on the affine group,” International Journal of Robotics Research, vol. 29, no. 2-3, pp. 198–217, 2010.
  16. P. F. Felzenszwalb and D. P. Huttenlocher, “Pictorial structures for object recognition,” International Journal of Computer Vision, vol. 61, no. 1, pp. 55–79, 2005.
  17. X. Tan, F. Song, Z. H. Zhou, and S. Chen, “Enhanced pictorial structures for precise eye localization under uncontrolled conditions,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 1621–1628, Miami, Fla, USA, June 2009.
  18. S. Baker and I. Matthews, “Lucas-Kanade 20 years on: a unifying framework,” International Journal of Computer Vision, vol. 56, no. 3, pp. 221–255, 2004.
  19. J. Kwon and F. C. Park, “Visual tracking via particle filtering on the affine group,” International Journal of Robotics Research, vol. 29, no. 2-3, pp. 198–217, 2010.
  20. S. Y. Chen and Y. F. Li, “Determination of stripe edge blurring for depth sensing,” IEEE Sensors Journal, vol. 11, no. 2, Article ID 5585653, pp. 389–390, 2011.
  21. S. Y. Chen and Y. F. Li, “Vision sensor planning for 3-D model acquisition,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 35, no. 5, pp. 894–904, 2005.
  22. S. Y. Chen and Q. Guan, “Parametric shape representation by a deformable NURBS model for cardiac functional measurements,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 3, pp. 480–487, 2011.

Copyright © 2012 Sheng Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
