Special Issue: Information and Modeling in Complexity
Research Article | Open Access
Liang Wang, Fuqing Duan, Ke Lv, "Fisheye-Lens-Based Visual Sun Compass for Perception of Spatial Orientation", Mathematical Problems in Engineering, vol. 2012, Article ID 460430, 15 pages, 2012. https://doi.org/10.1155/2012/460430
Fisheye-Lens-Based Visual Sun Compass for Perception of Spatial Orientation
Abstract
In complex aeronautical and space engineering systems, conventional sensors used for environment perception fail to determine orientation because of the influence of special environments in which geomagnetic fields are exceptional and the Global Positioning System is unavailable. This paper presents a fisheye-lens-based visual sun compass that efficiently determines orientation in such applications. The mathematical model is described, and the absolute orientation is identified by image processing techniques. For robust detection of the sun in the image of the visual sun compass, a modified maximally stable extremal region algorithm and a method named constrained least squares with pruning are proposed. In comparison with conventional sensors, the proposed visual sun compass can provide absolute orientation with a tiny size and light weight in special environments. Experiments carried out with a prototype validate the efficiency of the proposed visual sun compass.
1. Introduction
Mobile robots possessing various sensors are widely employed for taking measurements and performing tasks in outdoor applications in many fields, including complex aeronautical and space engineering systems. These applications present many great challenges, one of which is that conventional sensors do not work in emerging special environments. For example, on some planets, magnetic heading detection devices are nonfunctional because of the negligible magnetic fields, and a Global Positioning System (GPS) receiver does not work because no GPS is available. Therefore, a novel sensor is required to perceive the environment and the robot's own state.
The special aeronautical and space environments have little influence on visual sensors. In addition, being a type of passive sensor, visual sensors do not need a precise and complex mechanical structure, indicating that vibration has little effect on them and there is no mechanical fault. More importantly, visual sensors can provide abundant information, including details of depth, texture, illumination, and so on, with only one measurement (image) [1, 2]. All of these characteristics ensure that visual sensors have the potential to overcome the aforementioned challenges. Therefore, the visual sun compass is proposed to determine orientation for applications in aeronautical and space engineering systems.
The concept of the sun compass is derived from studies of the migration of birds. It has been found that birds navigate and orient themselves with the sun as a reference during their migration. Some ornithologists have analyzed this sun compass both in theory and by experiment. Sun compasses were then designed and used for navigation. These compasses were found to perform well in regions with exceptional geomagnetic fields, where the magnetic compass fails to work. However, the existing sun compasses (denoted as the classical sun compass) were designed on the basis of the sundial. To obtain precise orientation measurements, the sundial should be sufficiently large, which makes the compass bulky and heavy and limits its applications. As a mechanical sensor, such a compass is difficult to embed in a measuring system and to coordinate with electronic sensors.
The visual sun compass proposed in this paper is composed of a fisheye lens and a camera. With improvements and advances in manufacturing techniques, fisheye lenses and cameras with high precision and compact structure have been developed, which allow the visual sun compass to satisfy the requirements of many applications and to be easily embedded in a measuring system. In addition, such cameras can cooperate with other electronic sensors because of their electronic output. The fisheye lens has a large field of view of about 180 degrees, and there is no occlusion of the view and no requirement for precise mechanical assembly.
There exist some related works in the literature. The star tracker, which comprises a conventional lens and an Active Pixel Sensor camera, is a very precise attitude-measurement sensor. This sensor uses stars as the frame of reference and estimates orientation by identifying the observed stars and measuring their relative positions. However, this sensor is susceptible to various errors, particularly bright sunlight, because it has high sensitivity and depends significantly on the star identification algorithm. In addition, its high cost limits its use. Sun sensors are another type of instrument similar to the proposed visual sun compass. The sun sensor proposed by Cozman and Krotkov is made up of a telephoto lens and a camera. Because a telephoto lens has a very small field of view, a precise mechanical tracking system is needed to capture the sun, which makes the sensor susceptible to vibration and mechanical faults. The sun sensors proposed by Deans et al. and by Trebi-Ollennu et al., which comprise a fisheye lens and a camera, can determine the relative orientation. The imaging model of a pinhole camera is used to describe the imaging mechanism of the fisheye lenses. In addition, extensions with some distortion terms have been proposed; these do not take the characteristics of fisheye lenses into account, but rather introduce nuisance parameters and increase the computational burden. While taking image measurements of the sun, the centroid of pixels whose intensity values are above a certain threshold is taken as the image of the sun's center [6–9]. In fact, the centroid of these pixels may not be the image of the sun's center because of image noise and outliers.
In this paper, a mathematical model of the visual sun compass, which takes the characteristics of fisheye lenses into account, is presented. An effective method for detecting and delineating the sun's image is provided to robustly calculate the azimuth and elevation angle of the sun. Finally, with the knowledge of celestial navigation, an estimation of the absolute orientation can be obtained. The remainder of this paper is organized as follows. In Section 2, the classical sun compass is introduced. The model of the proposed visual sun compass is described in Section 3. Section 4 provides a method for detecting and delineating the sun's image in the visual sun compass. Section 5 presents some experimental results, and, finally, Section 6 presents the conclusions of this study.
2. The Classical Sun Compass
The design of the classical sun compass is based on the sundial, which was originally used to measure time on the basis of the position of the sun's shadow. A sundial is composed of a style (usually a thin rod or a straight edge) and a dial plate on which hour (or angle) lines are marked. The sun casts a shadow from the style onto the dial plate (see Figure 1(a)). As the sun moves, the shadow aligns itself with different angle lines on the plate. If the absolute direction is known, the time at the measuring moment can be calculated on the basis of the shadow. The classical sun compass inverts this traditional role of the sundial: instead of measuring time, it calculates the absolute direction from the shadow and the time at the measuring moment.
To calculate the absolute direction, the dial plate of the classical sun compass should first be adjusted to be horizontal. Then, the direction to be estimated should be aligned with one angle line; in general, the angle line marked 0 is selected. The angle of the style's shadow, that is, the angle shown in Figure 1(b), and the time at the measuring moment are then recorded. To determine the direction, the azimuth angle of the sun must be calculated, which can be done with the knowledge of celestial navigation. As shown in Figure 2, let O be the observer's position, φ the latitude of the observer, P the north pole of the planet, S the projection point of the sun on the planet, δ the declination of the sun (the angle between the rays of the sun and the planet's equator), and t the local hour angle (the angle between the half plane determined by the observer and the planet's axis and the half plane determined by the sun's projection point and the planet's axis; it describes the position of a point on the celestial sphere). In the spherical triangle POS, the sides are PO = 90° − φ and PS = 90° − δ, and the angle at P is t. By the spherical law of cosines (also called the cosine rule for sides), the remaining side OS = 90° − h, where h is the elevation angle of the sun, satisfies

sin h = sin φ sin δ + cos φ cos δ cos t. (2.1)

In (2.1), the observer's latitude φ should be known; the declination δ of the sun can be calculated from the time at the measuring moment, and the local hour angle t can be calculated from the longitudes of the observer and of the sun's projection point. With the side OS obtained by the spherical law of cosines, the azimuth angle of the sun, A, can be expressed, again by the spherical cosine law for sides, as

cos A = (sin δ − sin φ sin h) / (cos φ cos h). (2.2)

The sign of the local hour angle t resolves the remaining east/west ambiguity, so the azimuth angle A can be determined uniquely, and the direction can then be obtained. In general, it is difficult to obtain precise values of the observer's longitude and latitude in advance.
However, the exact position can be obtained with assumed values of two or more positions and observed azimuth angles by the intercept method .
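The spherical-triangle relations above can be turned into a small numerical routine. The following Python sketch uses the standard celestial-navigation formulas under our own symbol names (latitude, declination, and local hour angle in degrees); it is an illustration, not the authors' implementation:

```python
import math

def sun_azimuth_elevation(lat_deg, decl_deg, lha_deg):
    """Solve the navigational spherical triangle of Section 2.

    lat_deg  -- observer latitude (phi)
    decl_deg -- solar declination (delta)
    lha_deg  -- local hour angle of the sun (t), positive westward
    Returns (azimuth, elevation) in degrees, azimuth measured from north.
    """
    phi, delta, t = map(math.radians, (lat_deg, decl_deg, lha_deg))
    # (2.1): sin h = sin phi sin delta + cos phi cos delta cos t
    sin_h = (math.sin(phi) * math.sin(delta)
             + math.cos(phi) * math.cos(delta) * math.cos(t))
    h = math.asin(sin_h)
    # (2.2): cos A = (sin delta - sin phi sin h) / (cos phi cos h)
    cos_A = (math.sin(delta) - math.sin(phi) * sin_h) / (math.cos(phi) * math.cos(h))
    A = math.acos(max(-1.0, min(1.0, cos_A)))
    # The sign of the hour angle resolves the east/west ambiguity of the azimuth.
    if math.sin(t) > 0:          # sun west of the meridian
        A = 2 * math.pi - A
    return math.degrees(A), math.degrees(h)
```

At the equinox (declination 0), an observer at latitude 45° at local noon (hour angle 0) gets an elevation of 45° and an azimuth of 180° (due south), as expected.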
3. The Visual Sun Compass
The proposed visual sun compass, which is composed of a fisheye lens and a camera, is essentially a fisheye camera. Figure 3 shows its imaging process: a space point has a fisheye image, whereas its virtual perspective image would be the one obtained by a pinhole camera. In comparison with a pinhole camera, a fisheye camera has a large field of view of about 180 degrees, which ensures that the sun can be captured in a single image when the optical axis points at the sky. Further, the camera can be regarded as a sundial, where the optical axis is the style and the image plane is the dial plate. Thus, the sun needs to be identified in the captured fisheye image; then the azimuth and elevation angle of the sun can be calculated with the imaging model. If the optical axis points vertically at the sky when capturing images, the absolute direction (and, by the intercept method, even the exact position) can be calculated from the time at the image-capture moment and the longitude and latitude of the observer by the method presented in Section 2. Otherwise, the measured azimuth and elevation angles should first be transformed into the ground coordinate system with the assistance of an inclinometer or inertial sensor.
(Caption of Figure 3.) The imaging process of the fisheye camera: a space point is imaged at a fisheye image point, whereas a pinhole camera would image it at a virtual perspective point; the incidence angle is that of the incoming ray from the space point; the radial distance is measured from the image point to the principal point; and the focal length completes the model.
3.1. Imaging Model of Fisheye Camera
A pinhole camera model with some distortion terms is used in [6, 7] to describe the fisheye lens and determine the relative direction, which does not consider the characteristics of fisheye lenses. In fact, fisheye lenses are usually designed to follow certain types of projection geometry. Table 1 shows some ideal projection models of fisheye lenses. Because the ideal model is not strictly satisfied and, in some cases, the type of a fisheye lens may be unknown in advance, a more general mathematical model is needed to describe the visual sun compass.
Various imaging models of fisheye cameras have been proposed in the literature. These models can be divided into two categories as follows.
3.1.1. Mapping from an Incident Ray to a Fisheye Image Point
A fisheye camera can be described by the mapping from an incident ray to a fisheye image point, r = f(θ), where θ, the incidence angle, is the angle between the principal axis and the incident ray (see Figure 3), and r is the distance of the fisheye image point from the principal point.
3.1.2. Mapping from a Virtual Perspective Image to a Fisheye Image
A fisheye camera can also be described with the mapping from a virtual perspective image point to a fisheye image point, which relates the radial distances of the two image points. The rational function model belongs to this class, with two parameters characterizing the lens type.
The former class is more straightforward and popular and represents the real imaging process of a fisheye camera. In fact, the latter class can also be described in the form of the former; for example, (3.1) can be rewritten as a mapping from the incidence angle to the fisheye image point.
Unlike fisheye cameras, central catadioptric cameras have an elegant unified imaging model. Ying and Hu discovered that this unified imaging model could be extended to some fisheye cameras.
As shown in Figure 4, the unified imaging model for central catadioptric cameras can be described as follows. A space point is first projected to a point p_s on the unit sphere centered at the effective viewpoint. Then, p_s is mapped onto the image plane via a projection from a point on the sphere's axis at distance l from its center; writing (x_s, y_s, z_s) for the coordinates of p_s and m for a second model parameter, the coordinates of the point on the image plane are

x = (l + m) x_s / (l + z_s), y = (l + m) y_s / (l + z_s).

Expressing p_s in terms of the incidence angle θ, we have the mapping

r = (l + m) sin θ / (l + cos θ). (3.3)

Equation (3.3) can describe the imaging process of catadioptric projections with parabolic, hyperbolic, and elliptical mirrors for different l and m. Letting l → ∞ (projection from infinity), (3.3) tends to the orthogonal projection r ∝ sin θ; letting l = 1, (3.3) becomes the stereographic projection r = (1 + m) tan(θ/2). All of this means that the unified imaging model (3.3) can describe some fisheye cameras, which is consistent with the conclusions of Ying and Hu.
However, for fisheye cameras with equidistant and equisolid angle projections, which are more widely used in real applications, the unified imaging model does not fit. Assuming, without loss of generality, fixed scale parameters and substituting the equidistant and equisolid angle models into (3.3), the recovered type parameter varies with the incidence angle θ in both cases. This indicates that these fisheye cameras do not have a unique central viewpoint. Therefore, the unified model (3.3) does not exactly describe the imaging process of all fisheye cameras.
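The claim that equidistant lenses do not fit the unified model, while stereographic lenses do, can be checked numerically. The sketch below assumes the unified mapping has the form r = s·sin θ/(l + cos θ) (our notation) and solves for the implied parameter l at two incidence angles:

```python
import numpy as np

def implied_l(theta, r, scale):
    """Solve the assumed unified mapping r = scale*sin(theta)/(l + cos(theta)) for l."""
    return scale * np.sin(theta) / r - np.cos(theta)

f = 1.0
thetas = np.array([0.3, 0.9])          # two sample incidence angles (radians)
# Stereographic lens, r = 2f tan(theta/2): the implied l is constant (= 1).
l_stereo = implied_l(thetas, 2 * f * np.tan(thetas / 2), 2 * f)
# Equidistant lens, r = f*theta: the implied l drifts with the incidence angle,
# so no single (l, scale) pair reproduces the lens -- the model does not fit.
l_equi = implied_l(thetas, f * thetas, 2 * f)
```

The constant value for the stereographic case and the drifting value for the equidistant case mirror the argument of this paragraph; the fixed scale 2f is our assumption for the demonstration.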
In fact, the Maclaurin series of all fisheye lens models listed in Table 1 have the same form,

r = k1 θ + k2 θ³ + k3 θ⁵ + ⋯. (3.4)

Without loss of generality, we take (3.4) as the model of fisheye lenses. Different from the model proposed by Trebi-Ollennu et al., there are only odd-order terms. As shown in Figure 5, there are only slight differences between the ideal models and their 3rd- and 5th-order Maclaurin series within the field of view of fisheye lenses. In practice, the 3rd-order Maclaurin series is taken as a compromise between precision and computational efficiency:

r = k1 θ + k2 θ³. (3.5)

Then, the projection procedure of a fisheye camera can be described with the following four steps.
Step 1. A space point in the world coordinate system is transformed into the camera coordinate system with the relative rotation R and translation T of the two coordinate systems, X_c = R X_w + T, from which the incidence angle θ = arctan(√(x_c² + y_c²)/z_c) and the angle φ = arctan(y_c/x_c) of the incoming ray about the optical axis are obtained.
Step 2. The incoming direction is transformed into the normalized image coordinates (x, y). Considering the general form (3.5) of the fisheye camera model, the normalized image coordinates can be expressed as x = (k1 θ + k2 θ³) cos φ, y = (k1 θ + k2 θ³) sin φ.
Step 3. To compensate for deviations of the fisheye model from the real camera, distortion terms, namely radial distortion and decentering distortion, can be introduced (3.8). The former represents the distortion of ideal image points along radial directions from the distortion center; the latter represents the distortion of ideal image points in tangential directions. Both can be expressed as functions of the radial distance r, with one set of coefficients for the radial distortion and another for the decentering distortion.
Step 4. Apply an affine transformation to the coordinates (x, y). Assuming that the pixel coordinate system is orthogonal, we get the pixel coordinates

u = f_u x + u0, v = f_v y + v0,

where f_u and f_v are the scale factors along the horizontal and vertical directions, respectively, and (u0, v0) is the principal point.
Composing Steps 1–4 yields a general imaging model of fisheye cameras, (3.10), whose parameters can be obtained by camera calibration. It appears rational that the elaborate model (3.10), with the distortions (3.8), should perfectly model the real imaging process. However, we find that, just as in the case of a pinhole camera, more elaborate modeling does not help (its effect is negligible compared with sensor quantization) but causes numerical instability. In addition, it makes the calibration of the system more complicated and incurs a high computational cost, which most applications currently cannot afford. Therefore, we omit the distortion terms and take the following model in practice:

u = f_u (k1 θ + k2 θ³) cos φ + u0, v = f_v (k1 θ + k2 θ³) sin φ + v0. (3.11)
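The four projection steps, with the distortion of Step 3 omitted as in the final practical model, can be sketched as follows. The parameter names (k1, k2, fu, fv, u0, v0) are our own labels for the model coefficients:

```python
import numpy as np

def project_point(X_world, R, t, k1, k2, fu, fv, u0, v0):
    """Sketch of the four-step fisheye projection, lens distortion omitted.

    R, t     -- rotation/translation from the world to the camera frame
    k1, k2   -- coefficients of the 3rd-order model r = k1*theta + k2*theta**3
    fu, fv   -- horizontal/vertical scale factors; (u0, v0) -- principal point
    """
    # Step 1: world point -> camera frame, then incidence angle of the ray.
    Xc = R @ np.asarray(X_world, float) + np.asarray(t, float)
    theta = np.arctan2(np.hypot(Xc[0], Xc[1]), Xc[2])
    phi = np.arctan2(Xc[1], Xc[0])       # angle of the ray about the optical axis
    # Step 2: radial distance via the 3rd-order Maclaurin model.
    r = k1 * theta + k2 * theta**3
    x, y = r * np.cos(phi), r * np.sin(phi)
    # Step 4 (Step 3, distortion, is omitted): affine transform to pixels.
    return fu * x + u0, fv * y + v0
```

A point on the optical axis projects exactly to the principal point, and rays far from the axis land correspondingly far from it.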
3.2. The Model of the Visual Sun Compass
With the imaging model (3.11) of fisheye cameras, the absolute direction of a point in the captured image can be determined from its pixel coordinates (u, v). Firstly, the normalized image coordinates are

x = (u − u0)/f_u, y = (v − v0)/f_v. (3.12)

Then, the incidence angles of the incoming ray in the visual sun compass coordinate system can be calculated from

r = √(x² + y²) = k1 θ + k2 θ³, φ = arctan(y/x). (3.13)

If the optical axis points vertically at the sky when taking images, which means that the visual sun compass coordinate system is parallel to the ground coordinate system and the rotation matrix between them is an identity matrix, the angles calculated by (3.13) are also the incidence angles in the ground coordinate system. Otherwise, the calculated angles should first be transformed into the ground coordinate system. With an inclinometer or inertial sensor, the rotation matrix R from the visual sun compass coordinate system to the ground plane coordinate system can be obtained. Denoting by n the unit vector of the incoming ray in the ground plane coordinate system, we have

n = R (sin θ cos φ, sin θ sin φ, cos θ)ᵀ. (3.14)

The corresponding azimuth angle and elevation angle in the ground plane coordinate system can be expressed as

A = arctan(n_y/n_x), h = arcsin(n_z). (3.15)

With (3.12)–(3.15), the incidence angle in the ground plane coordinate system of an image point can be calculated, where the parameters of the fisheye lens can be calibrated off-line in advance. The obtained azimuth angle corresponds to the azimuth shown in Section 2. Then, with the local time and estimated longitude, the sun's azimuth angle can be calculated. Finally, the absolute direction can be derived from these two angles, as shown in Section 2.
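The back-projection chain of this section (pixel coordinates to normalized coordinates, then incidence angles, then ground-frame azimuth and elevation) can be sketched as below. The inversion of the cubic radial model is done with a few Newton iterations; the names are ours, not the authors':

```python
import numpy as np

def pixel_to_ground_angles(u, v, k1, k2, fu, fv, u0, v0, R_cs_g=np.eye(3)):
    """Recover the azimuth/elevation of an image point in the ground frame.

    Inverts the model (3.11): normalized coordinates -> incidence angles,
    then rotates the ray by R_cs_g (compass frame -> ground frame).
    """
    # Normalized image coordinates (inverse of the affine step).
    x, y = (u - u0) / fu, (v - v0) / fv
    r = np.hypot(x, y)
    # Invert r = k1*theta + k2*theta**3 for theta by Newton's method.
    theta = r / k1 if k1 else 0.0
    for _ in range(20):
        theta -= (k1 * theta + k2 * theta**3 - r) / (k1 + 3 * k2 * theta**2)
    phi = np.arctan2(y, x)
    # Unit vector of the incoming ray, rotated into the ground frame.
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    n = R_cs_g @ d
    azimuth = np.degrees(np.arctan2(n[1], n[0]))
    elevation = np.degrees(np.arcsin(n[2]))
    return azimuth, elevation
```

With the identity rotation (optical axis vertical), a ray at incidence angle θ comes back with elevation 90° − θ, so the routine round-trips the forward model.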
4. Detecting the Sun in the Image
To calculate the absolute direction with the proposed visual sun compass, the sun's image in captured images should be detected and measured. In practice, image noise may make the apparent outline of the sun's image deviate greatly from the real configuration, and outliers may generate false alarms with regard to the sun's image. Therefore, the conventional method, which takes the centroid of pixels whose intensity values are above a certain threshold as the image of the sun's center, may fail to find the real image of the sun's center. Because the image of a sphere under the unified imaging model is an ellipse and the fisheye lens model (3.11) approximates the unified imaging model (3.3), we take the image of the sun as a blob. A modified maximally stable extremal region (MSER) algorithm, which removes redundant blobs and accelerates the processing, is then used to detect the blob. Finally, an ellipse is fitted to the blob's contour, and the ellipse's center is taken as the image of the sun's center. The details are as follows.
Firstly, narrow down the varying range of the MSER threshold and detect the sun's image. The MSER was originally proposed to establish tentative correspondences between a pair of images taken from different viewpoints. Its principle can be summarized as follows. For a gray-level picture, take all possible thresholds. The pixels whose intensity values are equal to or greater than the threshold are taken as white, and the others are denoted as black. By increasing the threshold continually, a series of binary pictures is obtained, each frame corresponding to one threshold. Initially, a white frame would be obtained. Thereafter, black spots corresponding to local intensity minima would emerge and grow with the increase in the threshold. Neighboring regions corresponding to two local minima gradually merge into a larger one, and the last frame would be black. An MSER is a connected region with little size change across a range of thresholds.
By the original MSER method, not only the blob corresponding to the image of the sun but also some blobs corresponding to other regions would be found. Consider that the intensity value of the sun's image is very close to 255 because it is sufficiently bright. Let the threshold vary only in a small range near 255 when performing MSER detection, with a variable limiting the varying range of thresholds. The computation cost can be dramatically reduced, and regions that differ greatly from the sun's image in intensity are removed.
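A toy illustration of the stability criterion and of restricting the threshold range near 255 (on a synthetic image, not real compass data):

```python
import numpy as np

# Synthetic "sun" image: a dim background with one saturated circular blob.
img = np.full((64, 64), 80, np.uint8)
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2] = 250   # bright sun blob

# Sweep only a small threshold range near 255, as in the modified MSER.
# The extremal region's cardinality barely changes across the range: the
# blob is maximally stable, while dimmer regions never enter the sweep.
sizes = {T: int((img >= T).sum()) for T in range(240, 251)}
stability = max(sizes.values()) - min(sizes.values())
```

Here the blob's size is constant over the whole restricted range (stability 0), which is exactly the behavior the stability criterion rewards; the background, at intensity 80, is excluded outright by the narrowed range.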
Secondly, remove the false alarms of the sun's image with the aspect ratio constraint. The sun's image in the visual sun compass is close to a circle. The constraint that the aspect ratio of the sun's image approximates 1 can be used to further refine the detected MSERs. With these modifications, only one MSER or some nested MSERs are obtained.
Thirdly, remove redundant nested MSERs and obtain the blob corresponding to the sun's image. In this step, the kernel MSER, which refers to the most stable region among nested MSERs, is proposed to remove redundant nested MSERs. For a sequence of nested MSERs, the kernel MSER is selected by a criterion based on the standard deviation of the intensity of all pixels in each MSER. The kernel MSER thus takes into account not only the maximally stable region's cardinality but also the statistical properties of its intensity.
Fourthly, prune contour points that are contaminated by image noise and fit an ellipse to the remaining points. Because the blob detected by the modified MSER is a continuous region, only a few contiguous outliers and noisy points contaminate the sun's image. For further robust processing, an ellipse is fitted to points of the blob contour to describe the sun's image. A direct method to fit an ellipse is the constrained least squares (LS) method. In images captured by the visual sun compass, although contiguous outliers and contaminated points are within the detected blob, they are far away from the real geometric configuration and greatly change the shape of the sun's image. Therefore, the distance from each boundary point to the blob's center is taken as the measure to purify the data. The points with the smallest distances, up to a certain percentage, are used to fit the ellipse; the other points, which are far away from the geometric configuration, are treated as outliers and pruned. This method is called the constrained LS with pruning.
Finally, use the coefficients of the fitted ellipse to calculate its center. Then, take this center as the image of the sun's center to estimate the sun's azimuth and elevation angles.
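The constrained-LS-with-pruning idea can be sketched with a plain algebraic conic fit standing in for the cited direct ellipse-fitting method; points far from the blob centroid are pruned before fitting, and the ellipse center is recovered from the conic coefficients:

```python
import numpy as np

def fit_sun_ellipse(points, keep=0.8):
    """Prune contour points far from the blob centroid, fit a conic by least
    squares, and return the ellipse center (a sketch of the pruning idea)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    dist = np.linalg.norm(pts - centroid, axis=1)
    # Keep only the given fraction of points closest to the centroid;
    # gross outliers far from the geometric configuration are pruned.
    kept = pts[np.argsort(dist)[: int(keep * len(pts))]]
    x, y = kept[:, 0], kept[:, 1]
    # Algebraic conic fit: a x^2 + b xy + c y^2 + d x + e y = 1.
    D = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, d, e = np.linalg.lstsq(D, np.ones(len(kept)), rcond=None)[0]
    # Ellipse center from the zero-gradient condition of the conic.
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy
```

On a clean ellipse contour plus a handful of distant outliers, the pruning step discards the outliers and the recovered center matches the true one; the =1 normalization used here fails only when the conic passes through the origin, which cannot happen for a sun blob away from the image origin.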
5. Experiments
A prototype of the visual sun compass (shown in Figure 6) was built, comprising a Feith intelligent CANCam and a FUJINON FE185C057HA-1 fisheye lens. A two-dimensional (2D) calibration board was used to calibrate the visual sun compass and obtain the parameters in (3.11). Experiments with this prototype were conducted to validate the proposed visual sun compass.
5.1. Detecting the Sun
Some images were captured with the optical axis of the sun compass prototype pointing vertically at the sky at different hours of the day from May to July. The sun was detected in these images with the original MSER method, the conventional method, and the proposed modified MSER method. Figure 7(a) shows an image captured by the visual sun compass. Figure 7(b) shows the blobs detected by the original MSER method: some redundant blobs were detected, and the image of the sun's center could not be uniquely determined. For the conventional method, the estimated center is far away from the real configuration, as shown in Figure 7(c), because the intensity values of outliers are close to those of the sun's image. By applying the proposed modified MSER and the constrained LS with pruning to this image, the result shown in Figure 7(d) is obtained. It is evident that only one region is obtained and that the difference between the fitted ellipse and the image of the sun is slight. These results prove the validity of the proposed method.
5.2. Orientation Estimation
By the proposed visual sun compass model, the elevation angle of the sun and "the angle of the style's shadow" can be calculated with the calibrated parameters and the detected sun's center. Table 2 shows some results, where "G.T." denotes the ground truth and "est." denotes the estimation. The estimated elevation angle of the sun differs only slightly from the ground truth (values from the astronomical almanac corresponding to the time and the latitude and longitude information). Then, the azimuth angle of the sun and the absolute directions in the ground plane coordinate system are calculated with the time, latitude, and longitude information; the results are shown in Table 2. For the azimuth angle, the difference between the estimated values and the ground truth is very slight. The errors of the absolute direction with the proposed method are less than 3.5°. They may arise from several sources. The primary source is the currently rudimentary experimental setup: the visual sun compass is made horizontal by eye inspection with a spirit level, and the ground truth of the estimated direction is obtained by eye inspection of a simple magnetic needle compass. Nonetheless, the experiments show the validity of the proposed visual sun compass.
We also applied the prototype of the visual sun compass on a mobile robot to carry out orientation estimation. Figure 8 shows the mobile robot, on which the visual sun compass prototype, a Crossbow VG700CB-200 IMU, and other sensors are mounted. The IMU can provide its own attitude relative to the ground coordinate system. The relative rotation between the IMU and the visual sun compass can be calibrated off-line in advance. Then, the rotation from the visual sun compass to the ground coordinate system, in (3.14), can be determined. Orientation experiments are conducted with this mobile robot to validate the proposed visual sun compass. Some results are shown in Table 3. With the help of the IMU, the deviations from the ground truth are less than those reported above. All of these results prove the validity of the proposed visual sun compass.
6. Conclusion
A visual sun compass is proposed in this paper. With its small size and light weight, it is competent for orientation in special environments, such as aeronautical and space applications where conventional sensors cannot function. A mathematical model and the absolute orientation method are presented. Further, a modified MSER and the constrained LS with pruning are proposed to deal with severe distortion while detecting the sun in captured images. Real-image experiments show the validity of the proposed visual sun compass. In comparison with conventional orientation sensors, the proposed visual sun compass can not only work in special environments but also provide the absolute orientation. The measurements of the proposed visual sun compass are not yet sufficiently precise. Future steps for improving it include building a more precise experimental platform, refining the measurements with a more precise calibration method, and analyzing the uncertainties of the projection of the sun's center.
Acknowledgments
This work was supported by the NSFC (Grants no. 61101207, 60872127, and 61103130) and the National Program on Key Basic Research Project (973 Program) (Grants no. 2010CB731804-1 and 2011CB706901-4).
References
- M. Y. Kim and H. Cho, “An active trinocular vision system of sensing indoor navigation environment for mobile robots,” Sensors and Actuators, A, vol. 125, no. 2, pp. 192–209, 2006.
- S. Y. Chen, Y. F. Li, and J. Zhang, “Vision processing for realtime 3-D data acquisition based on coded structured light,” IEEE Transactions on Image Processing, vol. 17, no. 2, pp. 167–176, 2008.
- K. M. Creer and M. Sanver, Methods in Palaeo-Magnetism, Elsevier, Amsterdam, The Netherlands, 1967.
- W. Roellf and V. H. Bezooijen, “Star sensor for autonomous attitude control and navigation,” in Optical Technologies for Aerospace Sensing, Proceedings of the SPIE, pp. 1–28, Boston, Mass, USA, November 1992.
- F. Cozman and E. Krotkov, “Robot localization using a computer vision sextant,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 106–111, May 1995.
- M. C. Deans, D. Wettergreen, and D. Villa, “A sun tracker for planetary analog rovers,” in Proceedings of the 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space, pp. 577–583, September 2005.
- A. Trebi-Ollennu, T. Huntsberger, Y. Cheng, E. T. Baumgartner, B. Kennedy, and P. Schenker, “Design and analysis of a sun sensor for planetary rover absolute heading detection,” IEEE Transactions on Robotics and Automation, vol. 17, no. 6, pp. 939–947, 2001.
- S. Y. Chen, H. Tong, and C. Cattani, “Markov models for image labeling,” Mathematical Problems in Engineering, vol. 2012, Article ID 814356, 18 pages, 2012.
- S. Y. Chen, H. Tong, Z. Wang, S. Liu, M. Li, and B. Zhang, “Improved generalized belief propagation for vision processing,” Mathematical Problems in Engineering, vol. 2011, Article ID 416963, 12 pages, 2011.
- J. Favill, Primer of Celestial Navigation, Mallock Press, 2007.
- D. Schneider, E. Schwalbe, and H. G. Maas, “Validation of geometric models for fisheye lenses,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 64, no. 3, pp. 259–266, 2009.
- C. Bräuer-Burchardt and K. Voss, “A new algorithm to correct fish-eye- and strong wide-angle-lens-distortion from single images,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '01), pp. 225–228, October 2001.
- C. Geyer and K. Daniilidis, “A unifying theory for central panoramic systems and practical implications,” in Proceedings of the European Conference on Computer Vision, pp. 445–462, 2000.
- X. Ying and Z. Hu, “Can we consider central catadioptric cameras and fisheye cameras within a unified imaging model,” in Proceedings of the European Conference on Computer Vision, pp. 442–455, 2004.
- Z. Y. Zhang, “Camera calibration,” in Emerging Topics in Computer Vision, G. Medioni and S. B. Kang, Eds., IMSC Press, 2004.
- J. Matas, O. Chum, M. Urban, and T. Pajdla, “Robust wide baseline stereo from maximally stable extremal regions,” in Proceedings of British Machine Vision Conference, vol. 21, pp. 384–393, 2002.
- A. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least square fitting of ellipses,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 476–480, 1999.
Copyright © 2012 Liang Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.