
Advances in Optical Technologies

Volume 2013 (2013), Article ID 295950, 23 pages

http://dx.doi.org/10.1155/2013/295950

## End-to-End Image Simulator for Optical Imaging Systems: Equations and Simulation Examples

^{1}Selex Galileo, Via A. Einstein 35, 50013 Campi Bisenzio, Florence, Italy

^{2}Department of Electronics & Telecommunications, University of Florence, Via S. Marta 3, 50139 Florence, Italy

Received 7 August 2012; Accepted 27 September 2012

Academic Editor: Marija Strojnik

Copyright © 2013 Peter Coppo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A simplified end-to-end software tool for the simulation of data produced by optical instruments, starting from either synthetic or airborne hyperspectral data, is described theoretically, and some simulation examples of hyperspectral and panchromatic images for existing and future instrument designs are reported. High spatial/spectral resolution images with low intrinsic noise, together with the sensor/mission specifications, are used as inputs for the simulations. The examples reported in this paper show the capabilities of the tool for simulating target detection scenarios, data quality assessment with respect to classification performance and class discrimination, the impact of optical design on image quality, and 3D modelling of optical performances. The simulator is conceived as a tool (for phase 0/A) for the specification and early development of new Earth observation optical instruments, whose compliance with user requirements is achieved through a process of cost/performance trade-off. Compared with other existing image simulators for phase C/D projects of space-borne instruments, the Selex Galileo simulator implements all modules necessary for a complete panchromatic and hyperspectral image simulation, and it offers excellent flexibility and expandability for new integrated functions because of the adopted IDL-ENVI software environment.

#### 1. Introduction

Hyperspectral imaging, relying on spectral diversity, has dramatically changed the rationale of remote sensing of the Earth.

Since the pioneering Hyperion mission launched in 2001 [1], airborne and satellite hyperspectral imaging sensors have shown their utility by providing calibrated data from which a wide variety of bio- and geophysical products can be determined.

However, all sensors have their own set of performance characteristics, response functions, noise statistics, and so on, which determine and can challenge the validity of the generated data products. Through simulation of the sensor response, the utility of a new sensor design can be ascertained prior to construction, by running algorithms on simulated remote sensing data sets. In the case of existing well-characterised sensors the generation of simulated data assists in debugging sensor problems and provides a better understanding of a particular sensor’s performance in new operational environments.

In this paper, an end-to-end Selex Galileo (SG) simulation tool, developed in the ENVI-IDL [2] environment for the generation of simulated data from airborne/space-borne optical and infrared instruments starting from high-resolution imagery, is presented.

High-resolution hyperspectral data from airborne campaigns are typically used as input for space-borne sensor simulations. As an alternative, the input images can be completely synthesized by modelling the geometrical and spectral characteristics of the observed targets. The simulator is based on six modules describing the reflectance scenario, the atmospheric conditions, the instrument response (spectral, spatial, and radiometric), and the atmospheric inversion.

The core modules aim to simulate instrument performances (spectral, spatial, and radiometric) from a variety of sensor parameters including optics, detector, scanning, and electronics characteristics. The Atmospheric module is based on the standard Modtran [3] model, whereas the scenario simulation module aims at associating a spectral signature to each pixel of a synthetic thematic map, whenever a high resolution image taken by an airborne instrument is not available.

Compared to a detailed instrument simulator, typically developed for the realization and commissioning phases (B/C) of a spaceborne/airborne payload, the proposed simplified end-to-end simulator is conceived as a tool (phase 0/A) to enable the rapid dimensioning of a new optical instrument and to trace the link between user and instrument requirements. The SG simulator (SG_SIM) pursues a philosophy similar to that of other approaches useful for 0/A phases (e.g., SENSOR, MODO, CAMEO, and PICASSO), and it includes all the main functions (implemented in the IDL-ENVI SW environment) necessary for a complete hyperspectral image simulation, which are often not simultaneously present in the others.

For instance, in comparison to SENSOR [4], the control of spectral mixing and the generation of synthetic scenes are also included, whereas in comparison to the US simulators, for example, CAMEO [5] and PICASSO [6–8], the extension to the MWIR/LWIR spectral bands and a 3D reflectance rendering are missing.

After a detailed theoretical description of the SG_SIM model equations and its key concepts (Section 2), some simulation examples for satellite and airborne hyperspectral and panchromatic data study cases are reported (Section 3).

#### 2. Simulator Equations Description

The flow diagram of the software tool is shown in Figure 1. The input data can be either airborne reflectance images at high spatial, spectral, and radiometric resolution or synthetic reflectance maps derived from a thematic map and a reflectance database, together with the specifications of the instrument to be simulated (e.g., spatial and spectral response, sampling, transfer function, noise model, viewing geometry, and quantisation).

The simulation procedure consists of four processing steps. First, the at-sensor radiance images are obtained by using the Modtran atmospheric code; then the signal is spatially, spectrally, and radiometrically degraded by applying the specific instrument response models to generate the simulated instrument radiance image.

##### 2.1. Atmospheric Simulation

The Atmospheric Module ingests as input a reflectance image taken at high spatial and spectral resolution, which is then transformed into at-sensor radiance images by using the atmospheric radiances and transmittances generated by the Modtran code.

A preliminary simplified atmospheric model has been used. It considers Lambertian surface scattering, near-nadir observation, no adjacency effects, and a flat Earth. The input spectral radiance for an observation sensor at altitude $h$ is obtained on the basis of the following relationship, derived from the radiative transfer model depicted in Figure 2:

$$L_{tot}(\lambda, h) = \frac{E_{sun}(\lambda)\,\cos\theta_s}{\pi\, d^2}\,\rho(\lambda)\,\tau_{down}(\lambda)\,\tau_{up}(\lambda, h) + L_{atm}(\lambda, h)$$

with
(i) $E_{sun}$ = top-of-atmosphere sun irradiance (W/m^{2}/*μ*m);
(ii) $\rho$ = Earth surface reflectance;
(iii) $\theta_s$ = sun observation angle (function of latitude, longitude, day of the year, and time);
(iv) $d$ = Earth-Sun distance normalised to its mean (depending on the day of the year);
(v) $\tau_{down}$ = total downwards atmosphere transmission;
(vi) $\tau_{up}$ = total upwards atmosphere transmission from ground to the observation altitude $h$;
(vii) $L_{atm}$ = scattered atmosphere radiance (W/m^{2}/sr/*μ*m) from ground to the observation altitude $h$;
(viii) $L_{tot}$ = total atmosphere radiance (W/m^{2}/sr/*μ*m), which represents the input to the instrument at altitude $h$.

The downwards/upwards atmospheric transmittances $\tau_{down}$, $\tau_{up}$ and the atmospheric radiance $L_{atm}$ depend on the concentration of all atmospheric gases and on the aerosol distribution. The simulator allows control of the major variable atmospheric gases (i.e., the columnar water vapour and CO_{2} contents), the aerosol visibility at a given observation altitude $h$, and the aerosol profile. These parameters can be controlled by means of Modtran code inputs, while the other parameters are considered constant. A dedicated graphical interface is used to create the Modtran input charts.

Generally the surface reflectance images $\rho(\lambda)$ used as input to the simulator come from a database of experimental airborne or ground-truth data acquired with other spectrometers, and they are affected by the spectral response of the instruments used for the database acquisition.

The radiances are generated from the Modtran code at the maximum spectral resolution (1 cm^{−1}) and are convolved with the spectral response (SR) of the instrument used to generate the database. This spectral response is approximated with a Gaussian function with centre wavelength $\lambda_k$ and Full Width at Half Maximum (FWHM) equal to $\Delta\lambda_k$; the convolution, where the integral is performed in a spectral range centred in $\lambda_k$, can be written as in the following:

$$\bar{L}(\lambda_k) = \frac{\int L(\lambda)\, SR_k(\lambda)\, d\lambda}{\int SR_k(\lambda)\, d\lambda}, \qquad \bar{\rho}(\lambda_k) = \frac{\int \rho(\lambda)\, SR_k(\lambda)\, d\lambda}{\int SR_k(\lambda)\, d\lambda}$$

with
(i) $L(\lambda)$ the output spectral radiance obtained from Modtran by using the real surface reflectivity $\rho(\lambda)$ for each spectral pixel;
(ii) $SR_k(\lambda)$ the normalised spectral response of the instrument used to generate the database for the $k$th spectral channel (with $\lambda_k$ the central wavelength) as a function of wavelength $\lambda$; each $SR_k$ has been simulated with a Gaussian response centred at $\lambda_k$ and FWHM equal to $\Delta\lambda_k$;
(iii) $L(\bar{\rho})$ the output spectral radiance from Modtran by using the database reflectivity value;
(iv) $\bar{\rho}(\lambda_k)$ the weighted mean of the Earth surface reflectivity within the $SR_k$ spectral response, which represents the database reflectivity value;
(v) $\bar{L}(\lambda_k)$ (W/m^{2}/sr/*μ*m) the mean spectral radiance within the $SR_k$ spectral response.

The high resolution at-sensor radiance is simulated with the Modtran code for different values of the surface reflectivity, and a 3D Look-Up-Table (reflectivity, radiance, and wavelength) is generated. Finally, for each wavelength, the simulation module determines the best linear fit between radiance and surface reflectivity, which is applied to all input reflectivity image pixels to generate the at-sensor radiances.
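The per-wavelength look-up-table and linear-fit step can be sketched as follows (a minimal numpy sketch; the function name, the reflectivity grid, and the array layout are illustrative assumptions, and the Modtran-generated look-up table is assumed to be precomputed):

```python
import numpy as np

def radiance_from_reflectance(rho_img, rho_grid, lut_radiance):
    """Convert a reflectance image to at-sensor radiance, one band at a time.

    rho_img      : (rows, cols, bands) surface reflectance image
    rho_grid     : (n_rho,) reflectivity values used in the Modtran runs
    lut_radiance : (n_rho, bands) Modtran at-sensor radiance for each
                   (reflectivity, wavelength) pair of the look-up table
    """
    rows, cols, bands = rho_img.shape
    out = np.empty_like(rho_img, dtype=float)
    for k in range(bands):
        # Best linear fit L = gain * rho + offset for this wavelength
        gain, offset = np.polyfit(rho_grid, lut_radiance[:, k], deg=1)
        out[:, :, k] = gain * rho_img[:, :, k] + offset
    return out
```

The linear fit is exact for the Lambertian, no-adjacency model of (1), where radiance is affine in surface reflectance.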

##### 2.2. Spectral Degradation

The second processing block applies a spectral degradation: the at-sensor radiance image is further degraded to the spectral channels and response of the airborne/satellite instrument to be simulated by means of a spectral interpolation and a convolution with the Instrument Chromatic Response (ICR). The $ICR_k(\lambda)$ represents the instrument response, normalised to its maximum, for the $k$th spectral channel (with $\lambda_k$ defined as the central wavelength) to a spatially uniform monochromatic source as a function of wavelength $\lambda$. The at-sensor radiance $L_k$ (W/m^{2}/sr) is obtained from the following:

$$L_k = \int L(\lambda)\, ICR_k(\lambda)\, d\lambda$$
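A minimal sketch of this spectral degradation, assuming a Gaussian ICR normalised to its maximum (function and variable names are illustrative):

```python
import numpy as np

def spectral_degrade(wl_in, radiance_in, centers, fwhm):
    """Degrade a finely sampled at-sensor spectrum to instrument channels:
    L_k = integral of L(lambda) * ICR_k(lambda) d(lambda), with a Gaussian
    ICR whose peak value is 1.

    wl_in       : (n,) input wavelengths (same units as centers/fwhm)
    radiance_in : (n,) at-sensor spectral radiance sampled on wl_in
    centers     : (k,) channel centre wavelengths
    fwhm        : (k,) channel FWHM values
    """
    sigma = np.asarray(fwhm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    out = np.empty(len(centers))
    for k, (c, s) in enumerate(zip(centers, sigma)):
        icr = np.exp(-0.5 * ((wl_in - c) / s) ** 2)  # peak normalised to 1
        out[k] = np.trapz(radiance_in * icr, wl_in)  # band-integrated radiance
    return out
```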

##### 2.3. Spatial Degradation

The spatial degradation module ingests the at-sensor radiance image and degrades it to the required spatial sampling. This process is applied by means of a convolution between the input image and the Instrument Spatial Response (ISR) of the optical sensor to be simulated, followed by a resampling process (decimation) (Figure 3). The ISR is defined as the response of the overall instrument in a given spatial pixel to a spectrally uniform point source as a function of its position in space. The spatially degraded radiance image is described by the following:

$$L_{deg}(x, y) = \iint L(x', y')\, ISR(x - x',\, y - y')\, dx'\, dy'$$

The ISR is calculated as the Inverse Fourier Transform of the Modulation Transfer Function (MTF), which assumes the overall system is a linear shift invariant system. Then a “cascade model” for the system MTF is applied on the hypothesis of independent subsystems.

The hypothesis of independent subsystems is exact for many instruments, while the use of the MTF, without taking phase effects into account, is valid only as a first approximation in incoherent imaging systems using well-corrected optics [10].

The “cascade model” (Figure 4) takes into account the hypothesis of separability of the spatial frequency variables. Due to the properties of the Fourier Transform, separability in the frequency domain corresponds to separability in the space domain. The along-track and across-track MTFs are calculated starting from a theoretical formulation, and the Inverse Fourier Transform is calculated and normalized to a unit integral for both. In this way two unidimensional digital filters are obtained and convolved with the high resolution image by means of the following:

$$ISR(x, y) = ISR_x(x)\, ISR_y(y), \qquad L_{deg}(x, y) = \left[\, L * ISR_x * ISR_y \,\right](x, y)$$

where:
(i) $ISR_x$ is the Instrument Spatial Response along $x$ (e.g., along-track);
(ii) $ISR_y$ is the Instrument Spatial Response along $y$ (e.g., across-track).
The along- and across-track MTFs are calculated taking into account the image degradation contributions reported in Table 1.
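The kernel construction and the separable convolution can be sketched as follows (a numpy sketch under the linear shift-invariant assumption; the tap count and the detector-aperture MTF example are illustrative):

```python
import numpy as np

def kernel_from_mtf(mtf_func, n_taps=33, dx=1.0):
    """Build a unit-sum 1D spatial kernel as the inverse Fourier transform
    of a theoretical MTF (linear shift-invariant assumption)."""
    freqs = np.fft.fftfreq(n_taps, d=dx)
    mtf = mtf_func(np.abs(freqs))
    kernel = np.real(np.fft.fftshift(np.fft.ifft(mtf)))
    return kernel / kernel.sum()  # normalise to unit integral

def separable_blur(image, k_along, k_across):
    """Apply the cascade ISR as two 1D convolutions (columns, then rows)."""
    tmp = np.apply_along_axis(np.convolve, 0, image, k_along, mode="same")
    return np.apply_along_axis(np.convolve, 1, tmp, k_across, mode="same")

# Example contributor: a detector spatial aperture of width w has
# MTF = |sinc(w * f)| (one of the Table 1 terms; others multiply onto it)
detector_mtf = lambda f, w=1.0: np.abs(np.sinc(w * f))
```

In the cascade model, the MTFs of the independent subsystems (optics, detector, motion, electronics) are multiplied together before the inverse transform is taken.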

The image quality can be affected by many factors, such as the size of the detector (spatial aperture), detector degradations (e.g., pixel cross-talk or charge transfer and reading smearing in a CCD), the integration time during image motion (temporal aperture) caused by satellite motion or by the scanning system (for a push-broom or a whisk-broom system, resp.), the electronic filtering, the focal plane jitter (instrument micro-vibrations), and the optics diffraction and aberrations [11].

These components can influence the across-track and/or the along-track MTF, depending on the scanning direction and the disposition of the detector. Some of these components are described in the annex.

Examples of simulated MTF and SRF functions for airborne and spaceborne instruments, generated with the simulator, are reported in Section 3.1.

##### 2.4. Radiometric Degradation

The fourth processing module accounts for radiometric degradation. A random noise term is added to the images to simulate the $NE\Delta L$ (Noise Equivalent Difference Radiance, in W/m^{2}/sr) of the optical instrument. The radiance $L$ of each pixel $(x, y)$ and of the $k$th spectral band (with central wavelength $\lambda_k$) is substituted with a random value taken from a Gaussian distribution $N(\bar{L}, NE\Delta L^2)$, where $\bar{L}$ represents the mean radiance value (W/m^{2}/sr) and $NE\Delta L$ (W/m^{2}/sr) the noise equivalent radiance, which is the standard deviation of the instrument temporal noise. The relationship between $NE\Delta L$ and the pixel radiance value $L$ is described in the following:

$$NE\Delta L^2(L) = N_0^2 + \frac{L}{G_k}$$

where $N_0^2$ is the noise variance of the detector (dark current, read-out, and Johnson noises) plus FEE/ADC (Front End Electronics/Analog to Digital Converter) for the $k$th spectral band, and the term $L/G_k$ is the photon noise variance, which is proportional to the input signal $L$. A Gaussian distribution function for the noise is a good approximation also for the photon noise, because the Poisson distribution approaches a Gaussian function for a high number of generated photocarriers.
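A sketch of this radiometric degradation step (the function name and the clipping of negative radiances before the photon-noise term are illustrative choices, not from the paper):

```python
import numpy as np

def add_radiometric_noise(radiance, n0_var, gain, rng=None):
    """Replace each pixel radiance with a draw from N(L, NEDL(L)^2), where
    NEDL^2 = n0_var + L / gain: a detector + FEE/ADC variance term plus a
    photon-noise term proportional to the signal.

    radiance : (rows, cols) band radiance image (W/m^2/sr)
    n0_var   : detector + FEE/ADC noise variance for this band
    gain     : electrons per unit radiance (G_k), so photon variance = L/G_k
    """
    rng = np.random.default_rng(rng)
    nedl = np.sqrt(n0_var + np.clip(radiance, 0.0, None) / gain)
    return rng.normal(loc=radiance, scale=nedl)
```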

$NE\Delta L$ is the minimum variation of the input radiance which can be measured, and it represents the radiometric resolution of the instrument. Another representation of the sensor noise can be derived from the signal-to-noise ratio (SNR) for each pixel and for each wavelength. The SNR can be obtained from the following:

$$SNR(\lambda_k) = \frac{L}{NE\Delta L(L)}$$

The photon noise formulation reported in (6) is based on the relationship between the number of acquired electrons $N_e$ and the integrated input radiance $L_k$ (W/m^{2}/sr) for each spectral channel, as described in the following:

$$N_e = \frac{\tau_k\, A\, \Omega\, t_{int}\, \eta_k}{E_{ph}}\, L_k = G_k\, L_k$$

with
(i) $\tau_k$ = total mean in-band (with ICR) instrument transmittance;
(ii) $A$ = input pupil area (m^{2});
(iii) $\Omega$ = scene pixel IFOV (sr);
(iv) $t_{int}$ = integration time (sec);
(v) $\eta_k$ = mean in-band (with ICR) detector quantum efficiency (electrons/photon);
(vi) $E_{ph} = hc/\lambda$ (Joule) = energy of a photon of wavelength $\lambda$ (*μ*m);
(vii) $L_k$ (W/m^{2}/sr) = spectrally integrated mean radiance in the ICR;
(viii) $G_k$ = the coefficient of proportionality between the number of acquired electrons $N_e$ and the input radiance $L_k$.

The photon noise equivalent difference radiance $NE\Delta L_{ph}$ is related to the photon noise equivalent difference electrons $NE\Delta N_e$, which can be obtained from the standard deviation of the Poisson noise distribution. This standard deviation is equal to the square root of the number of electrons itself and is described by the following:

$$NE\Delta N_e = \sqrt{N_e}, \qquad NE\Delta L_{ph} = \frac{\sqrt{N_e}}{G_k} = \sqrt{\frac{L_k}{G_k}}$$

The $N_0$ and $G_k$ coefficients, which depend on the selected spectral channel and are fed as input to the simulator, can be derived from the radiometric model of the simulated optical sensor, or they can be evaluated on the basis of images of homogeneous targets acquired by the sensor [13–16].

Two additional procedures have been implemented to permit the analysis of the simulated images (Sections 2.5 and 2.6).

##### 2.5. Atmospheric Correction

The first permits the retrieval of surface reflectance from airborne and spaceborne sensor radiances. Two standard methods can be used: the first is based on the Modtran code, inverting (1) to obtain the surface reflectance from the instrument radiance; the second is based on the standard ENVI-FLAASH [17] software, which allows aerosols to be estimated by means of the dark-pixel method (water bodies, shadowed areas, and dense vegetation) and the water vapour map to be estimated by means of the 820, 940, and 1135 nm absorption band ratio method [18].

##### 2.6. Synthetic Image Generation Module

The second procedure permits the quantitative evaluation of the impact of instrumental parameters on simulated image quality when a low-noise airborne input image is not available.

In particular, it allows the creation of black and white bar test images with different modulations (square or sinusoidal), periods, and shadings, in order to evaluate the impact on image quality of instrument parameters such as MTF and noise as a function of the spatial sampling interval and the target reflectivity, and to analyse the minimum detectable albedo contrast as a function of spatial frequency and illumination conditions (Figure 5).

It is also possible to generate synthetic hyperspectral surface reflectance images at the desired spatial and spectral resolution by using as input a thematic map of the zone under investigation (derived synthetically or from a classification) and a spectral library of the surface materials of interest. A statistical mixing of spectral signatures for each zone with the Dirichlet method permits control of the percentage of statistical variability [19].

The following further statistical variability, devoted to a better representation of a real scenario, can be introduced [20]:
(i) a uniform or Gaussian variability for each spectral signature, due to a possible spatial variation of the substance composition (contaminants, oxidation, ageing, and so forth);
(ii) a beta-function-distributed statistical variation of illumination, which takes into account possible image errors due to uncompensated observation and surface slope angles;
(iii) a Gaussian variability due to scenario noise, coming from uncompensated atmospheric and environment effects or uncompensated errors of the sensors used to obtain the spectral library data.

Then the surface reflectance $\rho(\lambda)$, represented by a column vector for each wavelength, is obtained by means of the following matrix mixing relationship:

$$\rho(\lambda) = I_b\, V\, M \left[\,\alpha\, a + (1 - \alpha)\, \bar{a}\,\right] + n$$

with
(i) $M$ a matrix representing the end-members (pure elements) reflectance for each wavelength;
(ii) $a$ a column vector representing the statistical variability of abundances, according to a Dirichlet distribution;
(iii) $\bar{a}$ a column vector representing the mean value of abundances for each wavelength;
(iv) $\alpha$ a scalar parameter which describes the degree of statistical mixing ($\alpha = 0$ no mixing, $\alpha = 1$ all random mixing);
(v) $I_b$ a statistical parameter obtained from a beta distribution, describing the illumination variation for each pixel;
(vi) $V$ a diagonal matrix derived from a uniform or a Gaussian density, representing the spatial variation of end members;
(vii) $n$ a column vector representing a Gaussian scenario noise, that is, uncompensated atmospheric retrieval and/or errors of the sensors used to obtain the library data set.
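A minimal sketch of the statistical mixing (the Dirichlet concentration, the beta parameters, and the exact blending of mean and random abundances are illustrative assumptions; the diagonal end-member variation matrix of the full relationship is omitted here):

```python
import numpy as np

def synth_reflectance(M, a_mean, alpha, n_pixels, illum_beta=(50, 2),
                      noise_sigma=0.005, rng=None):
    """Per-pixel abundances drawn from a Dirichlet around the mean, blended
    by the mixing degree alpha, modulated by a beta-distributed illumination
    factor, plus Gaussian scenario noise.

    M       : (n_bands, n_members) end-member reflectance matrix
    a_mean  : (n_members,) mean abundances (sums to 1)
    alpha   : 0 = no mixing ... 1 = fully random mixing
    """
    rng = np.random.default_rng(rng)
    n_bands, n_members = M.shape
    # Dirichlet concentrated around a_mean (concentration factor assumed)
    a_rand = rng.dirichlet(a_mean * n_members * 10.0, size=n_pixels)
    a = alpha * a_rand + (1.0 - alpha) * a_mean        # blended abundances
    illum = rng.beta(*illum_beta, size=(n_pixels, 1))  # illumination factor
    noise = rng.normal(0.0, noise_sigma, size=(n_pixels, n_bands))
    return illum * (a @ M.T) + noise
```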

#### 3. Simulation Examples

Several simulation tests were performed to assess the potential of the tool for instrument image quality and application evaluation, in the framework of the study and testing phases of the Selex Galileo SIMGA airborne hyperspectral camera and of the phase A study of the HypSEO (ASI-PRISMA precursor [21]) spaceborne hyperspectral and panchromatic cameras.

These activities also allowed the validation of the simulator by means of real SIMGA data acquired on clay soil targets during a 2009 airborne campaign at the Mugello (Tuscany, Italy) test site, where ground truth data were collected at the same time as the overflights [16].

Some examples of simulation of a 3D map representation of the ISR function have been produced for the purpose of evaluating instrument image quality. This is generally expressed by the FWHM of the instrument spatial response, or by the ratio between the integral of the spatial response within a delimited spatial domain (e.g., 1 Spatial Sampling Interval) and the integral over the whole spatial domain, generally called integrated energy (in percentage units).

A more detailed analysis based on another image quality parameter (SNR*MTF) has been carried out to trade off the image quality of a panchromatic camera as a function of some instrument parameters (e.g., pupil diameter and spatial sampling) for different atmospheric conditions (summer/winter and rural/urban aerosol), aiming at a better definition of the instrument requirements.

Finally, VIS and SWIR radiance and reflectance simulated images have been generated for some specific targets related to civil (land use) and dual-use applications in terrestrial and marine environments, in order to understand the instrument capabilities for target discrimination. The two dual-use applications have been simulated during the testing phase of the airborne SG instrument (SIMGA) by means of targets of small green panels over vegetation cover and small grey panels under water, and then verified by means of an airborne campaign over a controlled area.

##### 3.1. SRF 3D Maps for Integrated Energy Calculations

The simulator permits a 3D representation of the SRF map by using a delta function as input. As an example, this representation has been used to evaluate the spatial resolution (defined in terms of percentage of integrated energy of the SRF within a certain spatial domain) of the airborne SG SIMGA hyperspectral camera, by taking into account both the laboratory measurements and the smearing effect introduced by the detector integration which occurs during platform motion. The along-track and across-track MTF and SRF contributions are displayed in Figures 6(a) and 6(b), respectively, for the VIS and the SWIR channels. The instrument parameters used in the simulations are reported in Tables 2(a) and 2(b). From the tables it appears that the ratio between the FWHM of the SRF and the Spatial Sampling Distance (SSD) is much lower for the SWIR channels (0.87 along scan * 1.05 across scan) than for the VIS ones (2.70 along scan * 1.49 across scan), showing that the SRF of the VIS channels has a larger width (±2/3 pixels) than that of the SWIR ones (±1 pixel) (see also Figures 6(a) and 6(b)).

The integrated energy calculation performed within an area of 1 SSD * 1 SSD of the VIS and SWIR 3D maps also confirms that the energy content within a pixel is much lower for the VIS than for the SWIR channels (the same happens within the same ground size of 1.333 m * 1.333 m):
(i) Integrated Energy in 0.706 m * 0.706 m for VIS (1 SSD * 1 SSD) = 19%;
(ii) Integrated Energy in 1.333 m * 1.333 m for VIS (1.9 SSD * 1.9 SSD) = 51%;
(iii) Integrated Energy in 1.333 m * 1.333 m for SWIR (1 SSD * 1 SSD) = 61%.

In conclusion, the spatial resolution of the VIS channels is coarser than that of the SWIR channels, even though their spatial sampling is finer (0.706 m versus 1.333 m).

A further exercise was the simulation of the 3D HypSEO SRF [16], using the instrument parameters reported in Table 3 and the HypSEO MTF model [21]. The FWHM of the spatial response is 24.4 m * 20.6 m (along-scan * across-scan), while the integrated energy in 1 SSD * 1 SSD (20 m * 20 m) is 53%, a value substantially equal to that estimated for the airborne SIMGA instrument at 1.33 m * 1.33 m pixel size.

##### 3.2. Satellite Panchromatic Image Quality Requirements

The simulator permits the study of the impact of system design parameters on instrument image quality. To this end, a parametric analysis of the performance of the HypSEO-PAN camera as a function of the pupil diameter for different spatial sampling, atmospheric, and illumination conditions was performed on the basis of simulated test images and instrument parameters (Table 4), to trade off the instrument sizing against the image quality.

As a first-approximation image quality criterion we have adopted the Minimum Resolvable Contrast (MRC) at a certain spatial frequency $f$, which is equal to the inverse of the product $SNR \cdot MTF(f)$, where the SNR is calculated for uniform scenes (spatial frequency $f = 0$) [11]. As a rule of thumb, the adopted value of MRC = 10% gives the following threshold relationship for the identification of a target with spatial frequency $f$:

$$SNR \cdot MTF(f) \geq 10$$
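The criterion can be sketched as follows (a numpy sketch; the diffraction-limited MTF of a circular pupil is used as an illustrative MTF model, and the parameter names are assumptions, not the Table 4 values):

```python
import numpy as np

def diffraction_mtf(f, wavelength, pupil_d, focal):
    """Diffraction-limited MTF of a circular pupil at spatial frequency f
    (cycles/m in the focal plane)."""
    fc = pupil_d / (wavelength * focal)  # optical cutoff frequency
    x = np.clip(f / fc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

def resolvable(snr, mtf_at_f, threshold=10.0):
    """MRC = 10% criterion: a bar target at frequency f is considered
    resolvable when SNR * MTF(f) exceeds the threshold."""
    return snr * mtf_at_f >= threshold
```

Enlarging the pupil raises the cutoff frequency (higher MTF at a given frequency) and the collected flux (higher SNR), which is why both terms of the product improve with pupil diameter.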

Two different kinds of input images have been used for the simulations: a surface reflectance image, based on the IKONOS panchromatic camera at ~1 m spatial sampling, and a synthetic bar image at a spatial sampling of 0.5 m.

In Figure 7, a comparison between simulations of the HypSEO panchromatic image, obtained from the IKONOS image at different spatial sampling intervals and different HypSEO pupil diameters, is shown for the low radiance case (case “B” in Table 4). The product SNR*MTF has been calculated and the results are displayed in Table 5. The simulation with a large pupil diameter of 300 mm (case (b) in Table 5) is better from the image quality point of view (lower GSD, high SNR*MTF, and optimum target discrimination in Figure 7(b)), but it has a large impact on instrument sizing. All the other images are strongly affected by diffraction, due to the pupil size of 150 mm, but the case with 5 m GSD (case (d) in Table 5) is preferable in terms of the SNR*MTF parameter and target discrimination (in cases (a) and (c) of Figure 7 the instrument noise overlays all other possible image features).

Another simulation with synthetic bars has been done to verify the previous results, by changing the sampling and the illumination conditions, thereby avoiding any possible effect coming from the degraded characteristics of the IKONOS image. In Figure 8, a HypSEO PAN simulation from a synthetic bar image, at different spatial resolutions (GSD) and pupil diameters (D), is shown for a high (case “A”) and a low (case “B”) radiance case, with the parameters reported in Table 4. The input synthetic image in the horizontal direction is composed of 5 sequences of grey-black bars, each consisting of 10 cycles at fixed period (5, 7, 10, 15, and 20 m). In the vertical direction the albedo range of the grey bars is between 10% and 20%, whereas the albedo of the black bars is constant (10%). The up and down arrows in Figure 8 indicate the periods for which the criterion is satisfied or not.

The radiance $L$, the Signal to Noise Ratio (SNR), and the Modulation Transfer Function (MTF) values corresponding to the extreme simulated albedo values are reported in Table 6, while the product SNR*MTF is reported in Table 7.

The results confirm that image quality, as defined by this metric, is improved by increasing the pupil diameter (from 150 to 300 mm) at equal spatial sampling (GSD = 2.5 m), because of an increased SNR and a reduced diffraction effect on the MTF. The image quality is also improved as the spatial sampling is relaxed from 2.5 to 3.5 m (pupil diameter = 150 mm), because of an increased SNR.

The HypSEO PAN nominal case, with a Ground Spatial Sampling (GSD) of 5 m and a pupil diameter (D) of 150 mm, seems a good compromise in terms of image quality with respect to the others, because the simulation results are not so different from those of the case with GSD = 3.5 m (D = 150 mm) (SNR*MTF is higher for the low radiance case), and from an instrument design point of view it appears more feasible than the best case with GSD = 2.5 m (D = 300 mm).

For the PAN nominal case, the above criterion is satisfied only for periods larger than the Nyquist period of 10 m at high radiances, but some oscillations affected by aliasing can also be observed at periods as low as 7 m.

Finally, an example of a simulated HypSEO PAN image (GSD = 5 m, D = 150 mm) obtained from airborne high resolution MIVIS data in a forest environment has been produced (Figure 9(c)), in order to test image fusion methods based on the sharpening of a hyperspectral image by means of panchromatic observations [22].

##### 3.3. Satellite Hyperspectral Land Use Classification

Another important use of the simulator has been the demonstration of potential applications of the HYPSEO SG spaceborne hyperspectral camera.

A simulation of the HYPSEO SG space-borne hyperspectral camera was performed by using as input the airborne MIVIS reflectance images acquired on a Tuscany (I) test site (S. Rossore Park and Arno River mouth) at 2.5 m spatial resolution [23]. The instrumental parameters are reported in Table 3 [24].

The MIVIS reflectance image is transformed into the satellite HYPSEO radiance at the orbital altitude (Figure 9) by using the atmospheric model parameters of Table 4. The HYPSEO radiance image is then obtained by means of a spectral resampling of the MIVIS image to the 210 spectral bands of HypSEO, with a Gaussian Instrument Chromatic Response, a spatial resampling to the HYPSEO spatial sampling interval of 20 m by using the simulated spatial response, and the addition of noise by means of parameters coming from the HYPSEO radiometric model. Moreover, a HYPSEO reflectance image has been obtained after removal of the atmospheric effects introduced by the MODTRAN code (Table 4).

A land use classification map, based on the Spectral Angle Mapper (SAM) algorithm [25] applied to the HYPSEO simulated reflectance image, is shown in Figure 10. The confusion matrix shows a good correlation between classified and ground truth data, as compared with multispectral sensors [23], confirming the instrument capabilities for this kind of application.

##### 3.4. Target “Camouflage” in Rural Background

The dual-use capability for the discrimination of camouflage panels embedded in vegetation has been evaluated during the testing phase of the SIMGA airborne hyperspectral instrument. To this end, some simulations were performed during the SIMGA project phase. The simulated SIMGA reflectance images have been obtained by using the MODTRAN code in a standard atmospheric condition (Table 8), the measured instrument spatial response (Figures 6(a) and 6(b) and Tables 2(a) and 2(b)), and the instrument noise [16]. In Figure 11, a SIMGA reflectance image of simulated green panels over vegetation after the FLAASH inversion algorithm is shown. The simulation showed that the green panels were clearly distinguishable from the vegetation because of their higher reflectance in the SWIR bands (1.2 and 1.6 *μ*m), thus validating the utility of the hyperspectral sensor for this kind of application. Moreover, a validation of the simulation was obtained during an airborne campaign performed in the S. Rossore park (Tuscany, Italy), where different green panels were placed over green grass. In Figure 12 the green panels are clearly distinguishable in the SWIR bands, while the contrast in the VIS bands is negligible.

##### 3.5. Underwater Submerged Targets

Another dual use capability regarding the discrimination of underwater submerged targets was tested by means of SIMGA image simulations and verified with overflights in a controlled zone.

In order to test the detection capabilities for small grey panels under water, a direct bathymetric model has been developed to simulate the total reflectance of shallow waters on the basis of the chlorophyll, sediment, and yellow substance content, the bottom and panel reflectance, and the water depth.

The total reflectance (Figure 13) has been calculated from the surface and subsurface reflectances with the following relationship [26]:
$$R(0^+,\lambda)=\rho_s+\frac{(1-\rho_s)\,(1-\rho_i)}{n_w^2\,\big[1-\rho_i\,R(0^-,\lambda)\big]}\;R(0^-,\lambda),$$
where
(i) $\rho_i$ is the Fresnel reflectivity at the air-water interface, which takes into account the reflection of the subsurface radiance back into the water (~0.021),
(ii) $\theta'$ is the incidence angle of the radiation coming from below the water surface, which is refracted in air at the angle $\theta$ in the observation direction ($\sin\theta = n_w\sin\theta'$, with $n_w$ the water refraction index),
(iii) the surface reflectance $\rho_s$ depends on surface roughness and foam, but in this analysis it has been taken constant and, in first approximation, equal to 0.021.

The subsurface reflectance is obtained by means of a two-flux algorithm (Figure 13), which yields the following analytical relationship [27] for a water layer of uniform optical properties and thickness $H$ (m) above a reflecting bottom with reflectance $R_b$:
$$R(0^-,\lambda)=R_\infty(\lambda)+\big[R_b(\lambda)-R_\infty(\lambda)\big]\,e^{-2K(\lambda)H},$$
where
(i) $R_\infty \simeq 0.33\,b_b/a$ is the reflectance of an optically deep water column and $K$ is the two-flux attenuation coefficient,
(ii) $a$ and $b_b$ represent, respectively, the total absorption and backscattering coefficients, including those of water, chlorophyll, sediment, and yellow substance,
(iii) the range of validity is restricted to small $b_b/a$ ratios.
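The two-flux shallow-water model can be sketched numerically. The following is a hedged Python illustration, assuming the standard exponential form with deep-water reflectance $R_\infty \approx 0.33\,b_b/a$ and taking $K = a + b_b$ as the attenuation coefficient (an assumption where the original coefficients are not reported):

```python
import numpy as np

def subsurface_reflectance(a, bb, depth_m, r_bottom):
    """Two-flux shallow-water subsurface reflectance (illustrative sketch).

    Assumes R(0-) = R_inf + (R_b - R_inf) * exp(-2*K*H), with
    R_inf ~ 0.33*bb/a (optically deep water) and K = a + bb taken as
    the two-flux attenuation coefficient (assumed, not from the paper).
    a, bb in 1/m; depth_m in m; inputs may be spectra (arrays).
    """
    a = np.asarray(a, dtype=float)
    bb = np.asarray(bb, dtype=float)
    r_inf = 0.33 * bb / a                  # deep-water reflectance
    k = a + bb                             # attenuation coefficient
    return r_inf + (np.asarray(r_bottom) - r_inf) * np.exp(-2.0 * k * depth_m)
```

As expected, zero depth returns the bottom reflectance, while large depths converge to the deep-water value, which is the behaviour exploited to detect the submerged panels.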

The total absorption and backscattering coefficients are calculated from a three-component water colour model [28, 29], which has been adapted for class 1 and class 2 waters [30].

In this model the total absorption and backscattering coefficients (m^{−1}) are obtained as a linear combination of the water, chlorophyll, sediment, and yellow substance contributions through the following relationships:
$$a(\lambda)=a_w(\lambda)+C\,a_c^*(\lambda)+S\,a_s^*(\lambda)+Y\,a_y^*(\lambda),$$
$$b_b(\lambda)=\tfrac{1}{2}\,b_w(\lambda)+C\,b_c^*(\lambda)+S\,b_s^*(\lambda),$$
where
(i) the suffixes $w$, $c$, $s$, and $y$ denote, respectively, water, chlorophyll, sediment, and yellow substance,
(ii) $C$, $S$, and $Y$ represent the chlorophyll, sediment, and yellow substance contents ($C$ in mg/m^{3}, $S$ and $Y$ in g/m^{3}),
(iii) $a_c^*$, $a_s^*$, and $a_y^*$ are, respectively, the chlorophyll, sediment, and yellow substance specific absorption coefficients (m^{2}/mg and m^{2}/g, resp.) shown in Figure 14 [30],
(iv) the pure-water scattering coefficient follows the spectral law $b_w(\lambda)\propto(\lambda(\mathrm{nm})/550)^{-4.3}$,
(v) the specific backscattering coefficients of chlorophyll ($b_c^*$, m^{2}/mg) and sediment ($b_s^*$, m^{2}/g) are assigned piecewise, with different values below and above a chlorophyll concentration threshold (mg/m^{3}) [30].
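The linear-combination structure of the three-component model can be illustrated with a short Python sketch. The specific coefficient spectra (`a_star_*`, `bb_star_*`) and the pure-water scattering magnitude `bw_550` are placeholder assumptions for illustration, not the calibrated values of [30]:

```python
import numpy as np

def bulk_iops(wl_nm, C, S, Y, a_w, a_star_c, a_star_s, a_star_y,
              bb_star_c, bb_star_s, bw_550=0.0038):
    """Total absorption and backscattering (1/m) as linear combinations
    of water, chlorophyll (C, mg/m^3), sediment (S, g/m^3), and yellow
    substance (Y, g/m^3) contributions. Specific coefficients are the
    measured spectra (cf. Figure 14); bw_550 is an assumed value.
    """
    wl_nm = np.asarray(wl_nm, dtype=float)
    a_tot = a_w + C * a_star_c + S * a_star_s + Y * a_star_y
    b_w = bw_550 * (wl_nm / 550.0) ** -4.3     # pure-water scattering law
    bb_tot = 0.5 * b_w + C * bb_star_c + S * bb_star_s
    return a_tot, bb_tot
```

The `a_tot` and `bb_tot` spectra produced this way feed directly into the two-flux reflectance relationship of the previous paragraph.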

The concentrations of the three water components can be divided into:
(i) completely correlated type 1 waters, characterised by a rather stable correlation between optically active substances with phytoplankton concentration as dominant; in this case the yellow substance backscattering and sediment absorption coefficients have been considered as related to the chlorophyll content through the following relationships [12, 29]:
(ii) completely uncorrelated coastal type 2 waters, with no correlation between the three water components, when high concentrations of sediments and yellow substances exist;
(iii) partially correlated coastal type 2 waters, for which it is possible to retrieve a partial correlation between the three water components [30]. Examples are given by the following relationships:
(1) Gulf of Naples [31],
(2) Northern basin of the Adriatic Sea [32],
(3) Tirrenian Sea near Migliarino-S. Rossore (Tuscany) [33].
A reflectance simulation of the S. Rossore waters at 1 m bottom depth, obtained by means of the two-flux model, is displayed in Figure 15. The total reflectance has been simulated with the partially correlated type 2 water model ($C$ in mg/m^{3}, $S$ and $Y$ in g/m^{3}), using the reflectance measurements of the S. Rossore (I) bottom sand and grey panels performed with a FieldSpec portable spectrometer.

Finally, simulated SIMGA reflectance and radiance images of a marine environment (sand; waters with sand bottom at 2 m and 8 m depth; panels of 1 m × 1 m and 2 m × 2 m under water at 1 m and 0.2 m depth) have been generated (Figure 16). The SIMGA radiance has been simulated at 1.5 km airborne altitude with the MODTRAN code, at the SIMGA spatial (1 m for VIS and 2 m for SWIR bands) and spectral (2.4 nm for VIS and 10.8 nm for SWIR) resolutions. The simulated SIMGA reflectance image has been obtained through the inversion of the MODTRAN parameters used for the direct simulation, and the results show that all grey panels (both at 0.2 m and 1 m depth) can be clearly distinguished in both shallow (2 m) and deep (8 m) waters (Figure 16).

This result was validated (Figure 17) by means of SIMGA overflights of the Morto river mouth (S. Rossore park, Tuscany, I), where two different grey panels were submerged. The two panels are clearly detectable in the visible part of the spectrum, thus demonstrating the capability of the SIMGA hyperspectral instrument for this kind of application.

#### 4. Conclusions

An end-to-end software tool (SG_SIM) for the simulation of airborne/satellite optical sensor images has been implemented in the ENVI-IDL environment. Input images can be either high-resolution airborne or synthetic data. The simulator features three separate modules: the reflectance scenario module, which generates the desired reflectance image with spectral mixtures; the atmosphere module, which converts the input reflectance map into the at-sensor radiance image; and the instrument module, which simulates the main degradations introduced by the instrument (ISR, MTF, ICR, and noise). Like other end-to-end simulators, SG_SIM integrates a complete atmospheric radiative transfer model, which can easily be refined through the implementation of most MODTRAN options, and it includes all the main functions and features necessary for a complete hyperspectral image simulation, such as ISR/MTF, ICR, and noise sources. Compared to other simulators (e.g., SENSOR [4]), SG_SIM also allows the control of spectral mixing and the generation of synthetic scenarios, but it lacks the MWIR/LWIR spectral bands, 3D reflectance simulation, and DEM ray-tracing functions included in CAMEO-SIM [5]. The implementation and further development of the SG_SIM approach was boosted significantly by the Selex Galileo S.p.A. airborne imaging system SIMGA and by other phase 0/A studies carried out for preliminary evaluations of image quality and product accuracy of new classes of space-borne optical sensors. The validation of the simulator is reported in [16], whereas this paper has described the simulator's theoretical basis and some simulation examples.
For the simulated cases the following results can be outlined:
(i) the 3D representation of the SRF allows the visual inspection of the spatial pixel response for image quality analysis;
(ii) the potential of the simulator for the HYPSEO panchromatic camera trade-off analysis between project parameters (pupil diameter, optics degradations, detector noise, etc.) and system performances (SNR, spatial resolution, etc.) has been demonstrated by using as inputs synthetic bars and an IKONOS image with different radiance and surface albedo levels;
(iii) the potential of the HYPSEO hyperspectral camera for vegetation mapping has been demonstrated on the basis of a MIVIS airborne scene rescaled to satellite level and ground truth data;
(iv) the potential for the detection of camouflaged targets in a rural background has been demonstrated in the SWIR bands by means of the simulation of a synthetic scenario with green panels of different sizes;
(v) the potential for the identification of submerged targets in the visible spectral range at airborne level (1 m spatial resolution) has been demonstrated by means of the simulation of a synthetic scenario with submerged grey panels and the implementation of a direct bathymetric water-colour model generating the surface reflectance used as input to the scene simulator;
(vi) real airborne data on submerged and camouflaged targets have confirmed the results of the simulations performed before the flight campaign.

These results demonstrate the potential of the proposed simplified end-to-end simulator as a preliminary aid (during phase 0/A) for the dimensioning of new optical instruments, tracing the link between user and instrument requirements.

#### 5. Annex

The formulations of the following MTF components implemented in the SG_SIM model are described in the following paragraphs:
(i) detector pixel size,
(ii) detector cross talk,
(iii) CCD detector charge transfer,
(iv) image motion during integration time,
(v) electronic filtering,
(vi) focal plane random jitter during integration time,
(vii) optics diffraction and aberrations.

##### 5.1. Detector Pixel Size

The finite size of the detector pixel spatially integrates the signal coming from a finite region on the ground, and this introduces a degradation of the original high-resolution image. The effect is analogous to a spatial filtering with a rect window, which is 1 within a rectangular spatial domain and 0 outside:
$$h(x,y)=\mathrm{rect}\!\left(\frac{x}{d_x}\right)\mathrm{rect}\!\left(\frac{y}{d_y}\right).$$
The transfer function, obtained as its Fourier transform, is represented by the following relationship:
$$\mathrm{MTF}(f_x,f_y)=\big|\mathrm{sinc}(\pi f_x d_x)\,\mathrm{sinc}(\pi f_y d_y)\big|,$$
where
(i) $f_x$, $f_y$ are the spatial frequencies along the $x$ and $y$ directions,
(ii) $d_x$, $d_y$ are the detector sizes along the $x$ and $y$ directions,
(iii) the sinc function is expressed by the following relationship: $\mathrm{sinc}(u)=\sin(u)/u$.
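The pixel-aperture MTF, i.e. the product of two sinc terms, can be sketched in Python (note that `np.sinc(x)` is the normalized sinc, $\sin(\pi x)/(\pi x)$, so the $\pi$ factor is already included when passing $f\,d$ directly):

```python
import numpy as np

def detector_mtf(fx, fy, dx, dy):
    """MTF of a rectangular detector pixel of size dx x dy:
    |sinc(pi*fx*dx) * sinc(pi*fy*dy)|, with sinc(u) = sin(u)/u.
    np.sinc already includes the pi factor: np.sinc(x) = sin(pi x)/(pi x)."""
    return np.abs(np.sinc(np.asarray(fx) * dx) * np.sinc(np.asarray(fy) * dy))
```

The response is 1 at zero frequency, 2/pi at the Nyquist frequency of a contiguous-pixel array (f = 1/(2d)), and reaches its first null at f = 1/d.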

##### 5.2. Detector Cross Talk

The detector cross talk between two successive pixels is taken into account, as a first approximation, by assuming a trapezoidal spatial windowing filter instead of a rectangular one. The trapezoid is obtained by means of a convolution between two rect functions, one representing the detector size $d$ and the other the cross-talk size $d_c$ between two successive pixels:
$$h(x)=\mathrm{rect}\!\left(\frac{x}{d}\right)*\,\mathrm{rect}\!\left(\frac{x}{d_c}\right).$$
Since a convolution in space corresponds to a product in the frequency domain, the transfer function is obtained from the following relationship:
$$\mathrm{MTF}(f)=\big|\mathrm{sinc}(\pi f d)\,\mathrm{sinc}(\pi f d_c)\big|.$$
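A sketch of the trapezoidal (rect-convolved-with-rect) response, which in the frequency domain is simply the product of the two corresponding sinc terms:

```python
import numpy as np

def crosstalk_mtf(f, d, dc):
    """MTF of the trapezoidal pixel response obtained by convolving the
    pixel rect (width d) with the cross-talk rect (width dc):
    |sinc(pi f d) * sinc(pi f dc)| (np.sinc includes the pi factor)."""
    return np.abs(np.sinc(np.asarray(f) * d) * np.sinc(np.asarray(f) * dc))
```

For a vanishing cross-talk width the extra sinc tends to 1 and the pure pixel-aperture MTF is recovered.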

##### 5.3. CCD Detector Charge Transfer

For a CCD (Charge-Coupled Device) detector, the reading of the electrons acquired in each pixel of the matrix is performed by means of a charge transfer from one pixel to the next. The total transfer efficiency is therefore related to the pixel-to-pixel Charge Transfer Efficiency (CTE) and to the total number of transfers $n$. Since the CTE is not exactly 1, some losses are present at the end of the reading time, which implies a reduction of image contrast and hence an MTF lower than 1 in the direction of the output register:
$$\mathrm{MTF}(f)=\exp\left\{-n\,(1-\mathrm{CTE})\left[1-\cos\!\left(\pi\frac{f}{f_N}\right)\right]\right\},$$
where
(i) CTE is the pixel Charge Transfer Efficiency (greater than 99.99%),
(ii) $n$ is the total number of charge transfers,
(iii) $f$ is the spatial frequency and $f_N$ the Nyquist frequency of the system (half of the sampling frequency $f_s = 1/p$, with $p$ the detector pitch).

##### 5.4. Image Motion During Integration Time

The temporal acquisition (integration time greater than zero) during the image motion (with velocity $v$), which happens along the scan direction (the satellite velocity in a push-broom system), introduces an image blur. It can be taken into account with a PSF similar to a rect function representing the temporal aperture along the motion, with a width equal to the displacement $v\,t_{int}$, giving the following MTF:
$$\mathrm{MTF}(f)=\big|\mathrm{sinc}(\pi f\,v\,t_{int})\big|.$$
The worst case happens when the integration time is equal to or larger than the dwell time, that is, when the spatial displacement is equivalent to a pixel size, while the best case is obtained for a short integration time.
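The linear-motion blur term, taking the standard sinc form over the displacement accumulated during integration, can be sketched as:

```python
import numpy as np

def motion_mtf(f, v, t_int):
    """Linear image-motion MTF: |sinc(pi*f*v*t_int)|, where v*t_int is
    the along-track displacement during the integration time
    (np.sinc already includes the pi factor)."""
    return np.abs(np.sinc(np.asarray(f) * v * t_int))
```

When the displacement equals one pixel pitch, this term evaluates to 2/pi at the Nyquist frequency, which is the worst case mentioned above.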

##### 5.5. Electronic Filtering

An electronic system can introduce a temporal smoothing due to its finite frequency bandwidth and thus a reduced spatial/temporal response. This effect has been simulated by using a general formulation based on the following Butterworth filter response:
$$\mathrm{MTF}(f)=\frac{1}{\sqrt{1+\left(\dfrac{f}{k\,f_N}\right)^{2n}}},$$
where
(i) $f$ is the frequency,
(ii) $f_N$ is the Nyquist frequency,
(iii) $n$ is the order of the Butterworth filter,
(iv) $k$ is the ratio between the 3 dB filter frequency and the Nyquist frequency; this ratio should be between 2.2 and 3 for a good reproduction of a square wave.

The above equation correctly reproduces the behaviour of the classical first-order low-pass filter for $n = 1$.
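The Butterworth magnitude response can be sketched as follows (a hedged illustration of the formulation above, with the 3 dB cut-off placed at `k` times the Nyquist frequency):

```python
import numpy as np

def butterworth_mtf(f, f_nyquist, order, k):
    """Magnitude response of an n-th order Butterworth low-pass filter:
    1/sqrt(1 + (f/fc)^(2n)), with the 3 dB cut-off fc = k * f_N."""
    fc = k * f_nyquist
    return 1.0 / np.sqrt(1.0 + (np.asarray(f) / fc) ** (2 * order))
```

By construction the response is exactly 1/sqrt(2) (i.e., −3 dB) at f = k·f_N, and it flattens toward an ideal low-pass as the order increases.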

##### 5.6. Focal Plane Random Jitter during Integration Time

For high-frequency random vibrations of the focal plane, a Gaussian spatial response (PSF) can be assumed. The Fourier transform of this PSF is still a Gaussian function, representing the MTF, with the following relationship:
$$\mathrm{MTF}(f)=\exp\!\left[-2\left(\pi\,\sigma f\right)^2\right],\qquad \sigma=k\,p,$$
where
(i) $p$ is the detector pitch,
(ii) $k$ is the fraction of pixel representing the rms value of the random fluctuations,
(iii) $f$ is the spatial frequency at detector level.
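A minimal sketch of the Gaussian jitter term, assuming the standard form exp[−2(πσf)²] with σ expressed as a fraction of the pixel pitch:

```python
import numpy as np

def jitter_mtf(f, pitch, k_rms):
    """Gaussian random-jitter MTF: exp(-2*(pi*sigma*f)^2), where
    sigma = k_rms * pitch is the rms line-of-sight displacement
    expressed as a fraction k_rms of the detector pitch."""
    sigma = k_rms * pitch
    return np.exp(-2.0 * (np.pi * sigma * np.asarray(f)) ** 2)
```

The response is unity at zero frequency and decays monotonically, falling faster for larger rms jitter fractions.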

##### 5.7. Optics Diffraction and Aberrations

The MTF related to diffraction from the optics has been evaluated by using the O'Neill formulas, valid for diffraction in the presence of a telescope with central obscuration.

The following formulation of the MTF diffraction term is used [34]:
$$\mathrm{MTF}_{\mathrm{diff}}(\Phi)=\frac{A(\Phi)+B(\Phi)+C(\Phi)}{1-\varepsilon^2},$$
with $\Phi=f/f_{\mathrm{cutoff}}$ and the other parameters defined as follows:
(i) $\varepsilon$ = obscuration factor = ratio between the obscuration diameter and the pupil diameter,
(ii) $f$ = spatial frequency at detector level (cm^{−1}),
(iii) $\Phi$ = $f$/cut-off,
(iv) cut-off = optics cut-off frequency (cm^{−1}) at detector level = $1/(\lambda\,F\text{-number})$,
(v) $\lambda$ = wavelength,
(vi) $F$-number = $F/D$, ratio between the focal length $F$ and the pupil diameter $D$.

The $A$, $B$, and $C$ parameters are defined by the following relationships [34]:
$$A(\Phi)=\frac{2}{\pi}\left[\arccos\Phi-\Phi\sqrt{1-\Phi^2}\right],\qquad 0\le\Phi\le 1,$$
$$B(\Phi)=\frac{2\varepsilon^2}{\pi}\left[\arccos\frac{\Phi}{\varepsilon}-\frac{\Phi}{\varepsilon}\sqrt{1-\left(\frac{\Phi}{\varepsilon}\right)^2}\right],\qquad 0\le\Phi\le\varepsilon,$$
while $C(\Phi)$ equals $-2\varepsilon^2$ for $\Phi\le(1-\varepsilon)/2$, vanishes for $\Phi\ge(1+\varepsilon)/2$, and follows the transition expression given in [34] in between. In the absence of central obscuration ($\varepsilon=0$) the above MTF formulation simplifies to the following well-known diffraction relationship, which is zero for $\Phi\ge 1$:
$$\mathrm{MTF}_{\mathrm{diff}}(\Phi)=\frac{2}{\pi}\left[\arccos\Phi-\Phi\sqrt{1-\Phi^2}\right].$$
Regarding possible optics aberrations, the model takes into account, as a first approximation, an exponential fitting function of the form
$$\mathrm{MTF}_{\mathrm{aber}}(f)=\exp\!\left(-a\,f^{\,b}\right),$$
with $a$ and $b$ representing two empirical parameters used to approximate all the optics degradation effects.
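The unobscured diffraction-limited case, i.e. the classical MTF of a circular pupil, can be sketched as:

```python
import numpy as np

def diffraction_mtf(f, f_cutoff):
    """Diffraction MTF of an unobscured circular pupil:
    (2/pi)*(arccos(phi) - phi*sqrt(1-phi^2)), with phi = f/f_cutoff,
    clipped so the response is zero at and beyond the cut-off."""
    phi = np.clip(np.asarray(f, dtype=float) / f_cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(phi) - phi * np.sqrt(1.0 - phi * phi))
```

The response equals 1 at zero frequency and falls to exactly zero at the optics cut-off 1/(lambda·F-number); the obscured O'Neill case [34] modifies this curve through the A, B, and C terms.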

#### Acknowledgments

The authors wish to thank L. Tommasi of Selex Galileo, R. Bonsignori of Eumetsat (formerly with Selex Galileo) and F. Pecchioni of University of Florence for the useful technical discussions and contributions. The authors also wish to acknowledge the ASI HYPSEO Phase A/B study under ASI/CSM/vdc/299/00 Contract for instrument specification and modelling.

#### References

1. J. S. Pearlman, P. S. Barry, C. C. Segal, J. Shepanski, D. Beiso, and S. L. Carman, "Hyperion, a space-based imaging spectrometer," *IEEE Transactions on Geoscience and Remote Sensing*, vol. 41, no. 6, pp. 1160–1173, 2003.
2. ENVI software, Exelis Visual Information Solutions, http://www.exelisvis.com/language/en-us/productsservices/envi.aspx.
3. MODTRAN software, http://modtran5.com/.
4. A. Börner, L. Wiest, P. Keller et al., "SENSOR: a tool for the simulation of hyperspectral remote sensing systems," *ISPRS Journal of Photogrammetry and Remote Sensing*, vol. 55, no. 5-6, pp. 299–312, 2001.
5. I. R. Moorhead, M. A. Gilmore, A. W. Houlbrook et al., "CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies," *Optical Engineering*, vol. 40, no. 9, pp. 1896–1905, 2001.
6. S. A. Cota, C. J. Florio, D. J. Duvall, and M. A. Leon, "The use of the general image quality equation in the design and evaluation of imaging systems," in *Remote Sensing System Engineering II*, vol. 7458 of *Proceedings of SPIE*, 2009.
7. S. A. Cota, J. T. Bell, R. H. Boucher et al., "PICASSO: an end-to-end image simulation tool for space and airborne imaging systems," *Journal of Applied Remote Sensing*, vol. 4, no. 1, Article ID 043535, 2010.
8. S. A. Cota, T. S. Lomhein, C. J. Florio et al., "PICASSO: an end-to-end image simulation tool for space and airborne imaging systems: II. Extension to the thermal infrared—equations and methods," in *Imaging Spectrometry XVI*, vol. 8158 of *Proceedings of SPIE*, p. 81580, 2011.
9. D. Labate, F. Butera, L. Chiarantini, and M. Dami, "SIMGA HYPER: hyperspectral avionic system calibration results," Technical note, Galileo Avionica, 19 February 2007.
10. J. W. Goodman, *Introduction to Fourier Optics*, McGraw-Hill, New York, NY, USA, 1968.
11. G. C. Holst and T. S. Lomhein, *CMOS/CCD Sensors and Camera Systems*, JCD Publishing and SPIE Press, 2007.
12. S. Tassan, "SeaWiFS potential for remote sensing of marine Trichodesmium at sub-bloom concentration," *International Journal of Remote Sensing*, vol. 16, no. 18, pp. 3619–3627, 1995.
13. L. Alparone, M. Selva, B. Aiazzi, S. Baronti, F. Butera, and L. Chiarantini, "Signal-dependent noise modelling and estimation of new-generation imaging spectrometers," in *Proceedings of the 1st Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '09)*, pp. 1–4, Grenoble, France, August 2009.
14. L. Alparone, M. Selva, L. Capobianco, S. Moretti, L. Chiarantini, and F. Butera, "Quality assessment of data products from a new generation airborne imaging spectrometer," in *Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '09)*, vol. 4, pp. 422–425, July 2009.
15. B. Aiazzi, L. Alparone, S. Baronti, F. Butera, L. Chiarantini, and M. Selva, "Benefits of signal dependent noise reduction for spectral analysis of data from advanced imaging spectrometers," in *Proceedings of the Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '11)*, Lisbon, Portugal, June 2011.
16. P. Coppo, L. Chiarantini, and L. Alparone, "Design and validation of an end-to-end simulator for imaging spectrometers," *Optical Engineering*, vol. 51, no. 11, Article ID 111721, 2012.
17. ENVI FLAASH atmospheric correction software, Exelis Visual Information Solutions, http://www.exelisvis.com/portals/0/pdfs/envi/Flaash_Module.pdf.
18. S. Adler-Golden, A. Berk, L. S. Bernstein et al., "FLAASH, a MODTRAN4 atmospheric correction package for hyperspectral data retrievals and simulations," in *Proceedings of the 7th Jet Propulsion Laboratory (JPL) Airborne Earth Science Workshop*, JPL Publication 97-21, pp. 9–14, 1998.
19. C. Ann Bateson, G. P. Asner, and C. A. Wessman, "Endmember bundles: a new approach to incorporating endmember variability into spectral mixture analysis," *IEEE Transactions on Geoscience and Remote Sensing*, vol. 38, no. 2, pp. 1083–1094, 2000.
20. J. M. P. Nascimento and J. M. B. Dias, "Does independent component analysis play a role in unmixing hyperspectral data?" *IEEE Transactions on Geoscience and Remote Sensing*, vol. 43, no. 1, pp. 175–187, 2005.
21. D. Labate, M. Ceccherini, A. Cisbani et al., "The PRISMA payload optomechanical design, a high performance instrument for a new hyperspectral mission," *Acta Astronautica*, vol. 65, no. 9-10, pp. 1429–1436, 2009.
22. A. Garzelli, B. Aiazzi, S. Baronti, M. Selva, and L. Alparone, "Hyperspectral image fusion," in *Proceedings of the Hyperspectral 2010 Workshop*, pp. 17–19, Frascati, Italy, March 2010 (ESA SP-683, May 2010).
23. P. Coppo, L. Chiarantini, F. Maselli, S. Migliorini, I. Pippi, and P. Marcoionni, "Application test of the OG-HYC hyperspectral camera," in *Sensors, Systems, and Next-Generation Satellites V*, vol. 4540 of *Proceedings of SPIE*, pp. 147–158, September 2001.
24. A. Bini, D. Labate, A. Romoli et al., "Hyperspectral earth observer (HYPSEO) program," in *Proceedings of the 52nd International Astronautical Federation Congress (IAF '01)*, Toulouse, France, October 2001.
25. ENVI Spectral Angle Mapper (SAM) algorithm, Exelis Visual Information Solutions, http://www.exelisvis.com/portals/0/tutorials/envi/SAM_SID_Classification.pdf.
26. I. S. Robinson, *Satellite Oceanography*, Ellis Horwood Ltd./John Wiley & Sons, New York, NY, USA, 1985.
27. S. Tassan, "An algorithm for the identification of benthic algae in the Venice Lagoon from Thematic Mapper data," *International Journal of Remote Sensing*, vol. 13, no. 15, pp. 2887–2909, 1992.
28. A. Morel and L. Prieur, "Analysis of variations in ocean colour," *Limnology and Oceanography*, vol. 22, pp. 709–722, 1977.
29. S. Sathyendranath, L. Prieur, and A. Morel, "A three-component model of ocean colour and its application to remote sensing of phytoplankton pigments in coastal waters," *International Journal of Remote Sensing*, vol. 10, no. 8, pp. 1373–1394, 1989.
30. S. Tassan, "Local algorithms using SeaWiFS data for the retrieval of phytoplankton, pigments, suspended sediment, and yellow substance in coastal waters," *Applied Optics*, vol. 33, no. 12, pp. 2369–2378, 1994.
31. S. Tassan and M. Ribera d'Alcalá, "Water quality monitoring by Thematic Mapper in coastal environments. A performance analysis of local biooptical algorithms and atmospheric correction procedures," *Remote Sensing of Environment*, vol. 45, no. 2, pp. 177–191, 1993.
32. B. Sturm, "Ocean colour remote sensing: a status report," in *Satellite Remote Sensing for Hydrology and Water Management*, E. C. Barret, Ed., pp. 243–277, Gordon and Breach Science Publishers, New York, NY, USA, 1990.
33. P. Coppo, L. Chiarantini, F. Maselli et al., "Test Applicativi della camera Iperspettrale Cosmo-Skymed," Contratto ALS-US-SBC-0058/99, doc. N. SKC-GAL-TN-008, December 2000.
34. E. L. O'Neill, "Transfer function for an annular aperture," *Journal of the Optical Society of America*, vol. 46, pp. 285–288, 1956.