Advances in Meteorology
Volume 2015, Article ID 956920, 16 pages
http://dx.doi.org/10.1155/2015/956920
Research Article

An Assessment of Data from the Advanced Technology Microwave Sounder at the Met Office

Met Office, Fitzroy Road, Exeter EX1 3PB, UK

Received 19 December 2014; Revised 4 February 2015; Accepted 18 February 2015

Academic Editor: Yuanfu Xie

Copyright © 2015 Amy Doherty et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An appraisal of the Advanced Technology Microwave Sounder (ATMS) for use in numerical weather prediction (NWP) is presented, including an assessment of the data quality, the impact on Met Office global forecasts in preoperational trials, and a summary of performance over 17 months of operational use. After remapping, the noise performance (NEΔT) of the tropospheric temperature sounding channels is evaluated to be approximately 0.1 K, comparing favourably with AMSU-A. However, the noise is not random: differences between observations and simulations based on short-range forecast fields show a spurious striping effect, due to 1/f noise in the receiver. The amplitude of this signal is several tenths of a Kelvin, potentially a concern for NWP applications. In preoperational tests, adding ATMS data to a full Met Office system already exploiting data from four microwave sounders improves southern hemisphere mean sea level pressure forecasts in the 2- to 5-day range by 1-2%. In operational use, where data from five other microwave sounders are assimilated, forecast impact is typically between −0.05 and −0.1 J/kg (3.4% of total mean impact per day over the period 1 April to 31 July 2013). This suggests benefits beyond redundancy, associated with reducing already small analysis errors.

1. Introduction

The Polar Operational Environmental Satellite (POES) series of satellites has provided key data for Numerical Weather Prediction (NWP) and climate studies since 1978. Over the next decade, continuity of these important observations will be provided by instruments of the US Joint Polar Satellite System (JPSS) [1]. The first satellite in the series was launched on 28 October 2011 and is now known as the Suomi National Polar-orbiting Partnership (Suomi-NPP).

Microwave temperature sounding data for NWP and climate applications was originally provided by the Microwave Sounding Unit (MSU) carried onboard satellites launched during the period 1978–1994 and more recently by the Advanced Microwave Sounding Unit (AMSU) carried onboard satellites launched between 1998 and 2012 [2]. One further AMSU-A instrument will be launched as part of EUMETSAT’s Metop series (Metop-C, currently scheduled for 2018), with humidity sounding data provided by the Microwave Humidity Sounder (MHS). JPSS uses a new microwave sounding instrument, the Advanced Technology Microwave Sounder (ATMS) [3], a cross-track scanning microwave radiometer similar to AMSU-A and MHS combined.

Before operational assimilation of new data by NWP centres, it is standard practice to assess data quality with respect to NWP model fields and other similar instruments and to evaluate the data through data assimilation experiments. Recent studies have shown the value of using NWP fields to assess data quality from microwave sounding instruments [4, 5], and a detailed assessment of ATMS relative to the ECMWF NWP model has been produced by [6]. This paper presents a similar analysis with respect to the Met Office NWP model. Also reported here are an in-depth investigation of the spurious striping signal seen in the innovation maps, the preoperational trial results, and a summary of the operational performance of ATMS in the Met Office system from April 2013 to November 2014.

Section 2 of this report introduces the ATMS instrument and key aspects of the Met Office data assimilation scheme. Section 3 describes findings from a comparison of ATMS data with similar AMSU/MHS data and with NWP model fields and also presents the findings of a study into the characteristics of the striping signal. Section 4 summarises the results of two assimilation experiments in which ATMS data are added to a full Met Office assimilation and forecasting system. Section 5 provides further evidence of the efficacy of ATMS in the Met Office system when it is assimilated operationally over a period of 17 months. Section 6 presents a summary and conclusions.

2. Data and Assimilation System

2.1. Instrument Characteristics

ATMS is a cross-track scanning microwave radiometer similar to the AMSU/MHS instruments flown on the National Oceanic and Atmospheric Administration's NOAA-15 to -19 satellites and the European Organisation for the Exploitation of Meteorological Satellites' (EUMETSAT) Metop-A and -B satellites. ATMS has 22 channels: 5 sensitive to the surface in clear conditions, or to water vapour, rain, and cloud when conditions are not clear (at 23, 31, 50, 51, and 89 GHz; channels 1, 2, 3, 4, and 16, resp.), 11 temperature sounding channels around the 50–60 GHz oxygen band (channels 5–15), and 6 moisture sounding channels around the 183 GHz water vapour band (channels 17–22) [3].

Channels 3–15 all share the same feedhorn, local oscillator, and receiver front end, unlike AMSU-A, which has a separate LO/mixer for each of channels 3–8 with a shared LO/mixer for channels 9–14.

ATMS has 96 footprints per scan line, each separated by 1.11°. The footprint size varies with channel: the 23 and 31 GHz channels have a 5.2° beam width, the temperature sounding channels (50–60 GHz) have a 2.2° beam width, and the moisture sounding channels (~183 GHz) have a 1.1° beam width. The lower frequency channels (below 100 GHz) are therefore highly oversampled.

The oversampling of the 50–60 GHz temperature sounding channels is accompanied by shorter integration times per footprint and results in high radiometric noise values relative to the equivalent AMSU channels. In current operational data assimilation systems, errors in the short-range forecast fields, expressed as observation-equivalent brightness temperatures, are typically in the range 0.05–0.10 K for mid-tropospheric temperature sounding channels. This places very demanding requirements on microwave sounding instruments, both in terms of radiometric performance [7] and in terms of systematic biases in the data. Preprocessing of the ATMS data is therefore required to reduce the noise to acceptable levels for use in NWP.

2.2. The Met Office Data Assimilation Scheme
2.2.1. Preprocessing

The near-real-time global data stream for ATMS is generated by NOAA’s Interface Data Processing Segment (IDPS), and the data are distributed to European users by EUMETSAT. The data used in this work are the antenna temperatures, which are derived from the Temperature Data Record (TDR) product.

Using the ATOVS and AVHRR Preprocessing Package (AAPP), the ATMS data are remapped and spatially averaged to improve the noise performance and replicate the AMSU footprint size [8]. The data assessed here have been manipulated to a beam width of 3.3° (4.8° for 23 and 31 GHz, channels 1 and 2) using Fourier techniques. They are resampled to give one field of view in three (i.e., 32 fields of view) across the scan and are also resampled at a rate of 1 in 3 in the along-track direction. All the data used in this study have been remapped in this manner.
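The 1-in-3 resampling step can be sketched as follows. The Fourier-based spatial filtering itself is not reproduced here, and the choice of starting offset is an illustrative assumption:

```python
import numpy as np

def resample_1_in_3(filtered_scene):
    """Take every third field of view across the scan and every third
    scan line along track, reducing 96 fields of view per line to 32.

    `filtered_scene` is a 2-D array (scan lines x 96 fields of view)
    that has already been spatially averaged to the target beam width.
    Starting at index 1 centres each selected spot in its group of
    three (an illustrative choice; AAPP's actual selection may differ).
    """
    return filtered_scene[1::3, 1::3]

scene = np.zeros((12, 96))
print(resample_1_in_3(scene).shape)  # (4, 32)
```

This keeps one field of view in three in both directions, consistent with the 32 cross-scan fields of view described above.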

Although satellite radiances are assimilated directly at the Met Office, a one-dimensional variational analysis (1D-Var) is performed first to act as a quality control filter and to allow the derivation of additional parameters which are used subsequently [9]. The 1D-Var performs a variational retrieval of atmospheric state (T, q) and surface variables (Tskin) at the location of the observation, with background (prior) information from the previous T + 6 hour forecast interpolated to the location of the observation. Background errors used in 1D-Var are represented by an error covariance matrix consistent with the full 4D-Var B-matrix, while observation errors are set more aggressively, with values typically ~75% of those used in 4D-Var.
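The 1D-Var retrieval minimises the standard variational cost function. A minimal sketch with a linear observation operator follows; operationally the operator is the RTTOV radiative transfer model, and the state vector, B, and R used here are purely illustrative:

```python
import numpy as np

def onedvar_cost(x, xb, B_inv, y, H, R_inv):
    """J(x) = 0.5 (x-xb)^T B^-1 (x-xb) + 0.5 (y-Hx)^T R^-1 (y-Hx).
    H is a linear observation operator for this sketch."""
    dxb = x - xb
    dy = y - H @ x
    return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy

def onedvar_analysis(xb, B, y, H, R):
    """For a linear H the minimising analysis has the closed form
    xa = xb + K (y - H xb) with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# tiny two-variable example: one observation of the first state element
B = np.diag([1.0, 2.0])
R = np.diag([0.5])
H = np.array([[1.0, 0.0]])
xb = np.zeros(2)
y = np.array([1.0])
xa = onedvar_analysis(xb, B, y, H, R)
print(xa)
```

In the nonlinear operational case the same cost function is minimised iteratively, with RTTOV providing H and its Jacobian.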

Quality control is applied using the following:
(1) a gross error check on the brightness temperatures;
(2) a convergence check in 1D-Var;
(3) a check on the background profile;
(4) an O minus B check on channels used for assimilation;
(5) RTTOV error checking on the profile during minimisation;
(6) an O minus R check on channels used for assimilation;
(7) cloud and rain flagging;
(8) rejection of surface sensitive channels over land.
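A subset of the checks above can be sketched as a simple filter chain. The thresholds and field names here are illustrative assumptions, not the operational values:

```python
def passes_qc(ob, background, bt_range=(100.0, 350.0), omb_limit=5.0):
    """Apply a sketch of the per-observation quality-control chain.

    `ob` is a dict with per-channel brightness temperatures and flags
    assumed to be set by earlier processing; `background` maps
    channel -> simulated brightness temperature.
    """
    for chan, bt in ob["bt"].items():
        if not (bt_range[0] <= bt <= bt_range[1]):      # (1) gross error check
            return False
        if abs(bt - background[chan]) > omb_limit:      # (4) O minus B check
            return False
    if not ob.get("onedvar_converged", True):           # (2) 1D-Var convergence
        return False
    if ob.get("cloud_or_rain_flagged", False):          # (7) cloud and rain flagging
        return False
    if ob.get("surface_sensitive", False) and ob.get("over_land", False):
        return False                                    # (8) surface channels over land
    return True
```

The remaining checks (background profile, RTTOV error status, O minus R) would slot into the same chain in the same way.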

2.2.2. Assimilation System

The Met Office variational data assimilation system is based on incremental 4D-Var [10, 11]. The nonlinear forecast model currently has a 25 km resolution in midlatitudes; for the data assimilation experiments described here a reduced resolution of 40 km was used. The model has 70 levels from the surface to 80 km.

The operational analysis makes use of data from a range of conventional observations, including surface, sonde, and aircraft observations as well as data from satellite instruments including five ATOVS instruments (from NOAA-15, NOAA-18, NOAA-19, Metop-A, and Metop-B), advanced infrared sounder data from the Atmospheric Infrared Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI), global positioning system (GPS) radio occultation (GPSRO) data, ground based GPS, atmospheric motion vectors, geostationary radiances, and scatterometer data. The data assimilation experiments make use of a slightly earlier operational system and do not include ATOVS data from Metop-B.

NWP model fields are mapped to brightness temperatures using radiative transfer modelling; these are routinely compared with the radiance measurements. Generally, differences will be nonzero and will comprise large-scale, slowly varying, systematic biases, small-scale day-to-day features resulting from local errors in the forecast model fields, and a random component from the instrument noise. In NWP assimilation systems it is crucial that the stationary or quasi-stationary biases (which may originate from the forecast model, the radiative transfer model, or the measurement) are eliminated prior to assimilation, leaving only the errors in the model fields to be corrected. This process is termed bias correction and can be performed within the assimilation process itself (variational bias correction [12]) or can be a static correction which is updated when required.

The Met Office currently uses a static bias correction scheme based on [13] which corrects for cross-track and air mass related biases as well as instrument calibration errors, represented by global offsets.
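A static scan-plus-airmass correction can be sketched as below: a mean offset per scan position, followed by a linear fit of the residual to air-mass predictors. The predictors (e.g. model-derived layer thicknesses) and the two-stage fit are illustrative assumptions, not the operational algorithm:

```python
import numpy as np

def fit_bias_correction(omb, scanpos, predictors):
    """Fit a static bias correction from uncorrected O-B innovations.

    omb        : (n,) innovations for one channel
    scanpos    : (n,) integer scan positions
    predictors : (n, p) air-mass predictor values (illustrative)
    Returns (per-scan-position offsets, regression coefficients
    including an intercept representing a global offset).
    """
    scan_bias = {s: omb[scanpos == s].mean() for s in np.unique(scanpos)}
    resid = omb - np.array([scan_bias[s] for s in scanpos])
    X = np.column_stack([np.ones(len(resid)), predictors])
    coefs, *_ = np.linalg.lstsq(X, resid, rcond=None)
    return scan_bias, coefs

def apply_bias_correction(bt, scanpos, predictors, scan_bias, coefs):
    """Subtract the fitted scan-position and air-mass bias."""
    X = np.column_stack([np.ones(len(bt)), predictors])
    return bt - np.array([scan_bias[s] for s in scanpos]) - X @ coefs
```

Applied to innovations generated with a known scan-dependent and air-mass-dependent bias, the correction removes both components.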

As mentioned in the introduction, a detailed assessment of ATMS data has been carried out independently relative to the ECMWF assimilation system [6]. There are some key differences between the Met Office system and that at ECMWF:
(1) the Met Office uses AAPP to remap the data to lower resolution during the preprocessing stage;
(2) vertical resolution: the work carried out in this paper used a forecast model on 70 vertical levels, while the work of Bormann et al. [6] used a model with 91 levels;
(3) RTTOV version 9 was used in the Met Office work and version 10 in the ECMWF work;
(4) static bias correction is currently used at the Met Office and variational bias correction is used at ECMWF.

3. Data Quality

3.1. Comparison with the NWP Model

During the data assimilation process, model equivalents of the observed brightness temperatures are computed using a fast radiative transfer (RT) model (RTTOV version 9 [14]). Model fields from short-range forecasts are interpolated in space and time to the location of the observations, using forecast fields at T + 3, +6, and +9 hours launched from a previous analysis. Simulated or background brightness temperatures are then generated using the RT model. Differences between observations and simulations, also known as innovations or first guess departures, can then be used to diagnose errors in the observations.

For the ATMS temperature sounding channels (50–60 GHz, channels 5–15), the innovations are generally a few tenths of a Kelvin. For the humidity sounding channels (~183 GHz, channels 18–22) the innovations are usually larger, at 1-2 K. Window channel innovations are larger still, at several Kelvin.

Figure 1 shows uncorrected and corrected innovation plots (O-B and C-B) for the key ATMS tropospheric temperature sounding channels 7–10 on 7 November 2013 at 00 UTC. Large amplitude latitudinal and cross-track biases are clearly visible in the fields before bias correction (left hand column) and largely absent in those after bias correction (right hand column), indicating that the bias correction scheme is effective. The residual biases in the corrected data are less than 0.1 K over much of the globe. Also clearly visible in the corrected fields, however, is an unphysical horizontal striping pattern, which is large enough (up to 0.2 K) to potentially degrade the influence of ATMS data on analyses and forecasts. The regular pattern of white lines (missing data) is a consequence of the fact that the spatial filtering is performed on blocks of duration 320 s, and the edge scans are discarded. A latitudinal dependence of the cross-track biases due to the nonzero emissivity of the reflector is not corrected for in the current bias correction scheme; plans to address this in the future may improve the bias correction further.

Figure 1: ATMS channels 7–10, difference between observations, and model background in the Met Office system at 0Z on 7 November 2013. LHS: uncorrected and RHS: corrected.
3.2. Striping

An investigation of the striping was carried out to quantify the variability associated with the signal.

The calibrated antenna temperatures were obtained from the TDRs using the AAPP for a case study over the UK on 13 August 2012 01:43 to 01:54. Coverage is shown in Figure 2. The data were received by direct broadcast.

Figure 2: ATMS channel 16, 20120813 01:43 to 01:54.

The atmospheric signal was removed from the case study data by taking the difference between the raw measured brightness temperatures and those obtained through spatial filtering to remove the fine structure. To generate the spatially filtered scene, a fast Fourier transform method described in [8] was used to remap all channels to the beam width of channels 1 and 2. A beam width of 5.2° corresponds to a distance of ~80 km on the ground; over this distance there should be little or no striping signal, as the largest correlations occur on a timescale of less than one scan (~17 km; see Figure 6), while the calibration process will remove any striping that occurs on timescales longer than 7 scans (~120 km). A difference between latitudinal and longitudinal scene variations is not expected over an 80 km distance. The three fields of view (FOV) at the edge of each scan are discarded as these may be contaminated by edge effects, leaving 90 FOVs across each scan. For channels 7–15, which are not sensitive to the surface, the resulting signal is dominated by instrument noise.
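The scene-removal step can be illustrated crudely as below, with a boxcar smooth standing in for the Fourier-based beam broadening used in the paper:

```python
import numpy as np

def noise_field(raw, k=9):
    """Return raw minus a k x k boxcar-smoothed copy of the scene.

    With the large-scale atmospheric signal removed, the residual for
    channels insensitive to the surface is dominated by instrument
    noise.  The boxcar smooth is only a stand-in for the Fourier-based
    remap to the 5.2 degree beam width described in the text.
    """
    pad = k // 2
    padded = np.pad(raw, pad, mode="edge")
    smooth = np.empty_like(raw, dtype=float)
    for i in range(raw.shape[0]):
        for j in range(raw.shape[1]):
            smooth[i, j] = padded[i:i + k, j:j + k].mean()
    return raw - smooth
```

For a smoothly varying scene (e.g. a linear brightness temperature gradient) the residual in the interior is essentially zero, so whatever survives the differencing is noise.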

The along-scan and along-track variabilities were compared by looking at the difference scene over a 90 × 90 FOV region. The along-scan variability was obtained by averaging the 90 along-track spots, and vice versa. These quantities are shown in Figure 3 for channels 7–11 and Figure 4 for channels 12–15. They show that for channels 7–13 the along-track variabilities (left hand plots) are a factor of 2-3 larger than the cross-track variabilities (right hand plots). Some interchannel correlations are apparent, for example, the positive spike at scan 36 in channels 7–9 and the negative spike at line 81 for all channels.
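The comparison amounts to averaging the difference scene over one dimension at a time. A sketch, with synthetic striped noise standing in for the real difference scene:

```python
import numpy as np

def variabilities(diff):
    """Along-track variability (mean over the fields of view in each
    line) and along-scan variability (mean over the scan lines at each
    field of view) of a difference scene."""
    along_track = diff.mean(axis=1)   # one value per scan line
    along_scan = diff.mean(axis=0)    # one value per field of view
    return along_track.std(), along_scan.std()

rng = np.random.default_rng(1)
white = rng.normal(0.0, 0.05, (90, 90))
stripes = rng.normal(0.0, 0.2, (90, 1))   # 1/f-like per-scan-line offsets
track_var, scan_var = variabilities(white + stripes)
print(track_var, scan_var)   # track variability much larger than scan
```

A per-scan-line noise component survives the along-scan averaging but cancels in the along-track averaging, which is why striping shows up as excess along-track variability.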

Figure 3: Along-track (a) and cross-track (b) variabilities due to instrument noise for channels 7–11.
Figure 4: Along-track (a) and cross-track (b) variabilities for channels 12–15.

Interchannel correlations were also investigated. Figure 5 shows the correlation coefficient between pairs of brightness temperature difference fields for channels 7–15.

Figure 5: ATMS interchannel correlations. Note that all correlation coefficients are positive, so they can be displayed on a logarithmic scale.
Figure 6: Correlations between groups of 4 pixels for ATMS channel 7. Groups 1–23 are earth views, 24 is cold cal, 25 is warm cal, and 26–48 are earth views of the next scan.

There are significant correlations between the temperature sounding channels, which is consistent with a study using the Desroziers diagnostic [15] and also with [16]. It may be possible to extend this analysis to other channels by choosing a suitable clear-sky sea region.

To examine the correlations between the striping signal and calibration counts, the earth scan was divided into groups of 4 pixels and each group was averaged to create a time series; the cold space counts and warm counts were also averaged. Each group was then selected in turn and the correlation coefficients between it and (i) all groups in the earth scan, (ii) the space counts, (iii) the warm counts, and (iv) all groups in the next earth scan were computed. The results for the four central groups for channel 7 are shown in Figure 6. This plot (and those for other channels, not shown) suggests that for the lower atmospheric sounding channels there are correlations between neighbouring earth views and the calibration views, but the correlations are only significant over a time period smaller than one scan. The moisture and window channels do not show clear correlations, perhaps because these channels are inherently noisier.
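The group-averaging and correlation steps can be sketched as follows; synthetic counts with a shared slow drift stand in for real earth and calibration views:

```python
import numpy as np

def group_series(counts, group_size=4):
    """Average consecutive fields of view into groups, giving one
    time series (one value per scan line) per group."""
    nscan, nfov = counts.shape
    n = (nfov // group_size) * group_size
    return counts[:, :n].reshape(nscan, n // group_size, group_size).mean(axis=2)

def correlations_with(groups, target):
    """Correlation of each group's time series with a target series,
    e.g. the averaged warm calibration counts."""
    return np.array([np.corrcoef(groups[:, g], target)[0, 1]
                     for g in range(groups.shape[1])])

rng = np.random.default_rng(2)
drift = np.cumsum(rng.normal(0.0, 1.0, 500))          # shared 1/f-like component
earth = drift[:, None] + rng.normal(0.0, 1.0, (500, 96))
warm = drift + rng.normal(0.0, 1.0, 500)
corr = correlations_with(group_series(earth), warm)
```

Because the slow drift is common to every view, the earth-view groups all correlate positively with the calibration series, which is the kind of signature examined in Figure 6.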

This striping study found that:
(1) both calibrated antenna temperatures and earth counts showed the striping signal;
(2) there is evidence for interchannel correlations in the signal;
(3) the striping is not calibration noise and cannot be eliminated by changing the calibration view averaging scheme;
(4) striping introduces spatial and spectral correlations which could be significant for NWP;
(5) the characteristic timescale of the striping signal is less than one scan period.
The impact of striping on the overall noise is discussed in Section 3.3.

These results are based on a limited sample of data and robustness could be increased by further studies with a larger dataset. The results are borne out, however, by the subsequent discovery that the instrument manufacturers were aware of a 1/f or flicker noise [17] caused by a low noise amplifier present within the ATMS instrument configuration.

The effect of the striping is allowed for within the assimilation by inflating the observation error for the affected ATMS channels.

3.3. NEΔT Monitoring for ATMS

The noise equivalent delta temperature (NEΔT) is an important quantity for any radiometer and is commonly monitored using the calibration counts for the black body and the cold space view readings (e.g., [18]).

The ATMS Sensor Data Records (SDRs) contain an internal estimate of NEΔT: one warm NEΔT and one cold NEΔT per scan line. This is defined as the standard deviation of the four cold/warm calibration view readings divided by the gain (to convert from counts to K). However, this estimate of NEΔT does not account for any noise sources that have a time scale longer than four times the integration time. As shown in the previous section, long-period fluctuations significantly affect the performance of the instrument; therefore, in this section a modified method is described that is suitable for routine monitoring of the NEΔT.

For each scan line, i, we compute the difference between the four warm view counts for that line and a reference count consisting of the mean of the warm view counts from the lines i − 3, i − 2, i − 1, i + 1, i + 2, and i + 3 (note that line i itself is omitted). The NEΔT is then the standard deviation of these differences, divided by the channel gain. This method ensures that both random noise and longer period fluctuations (1/f noise) are properly accounted for, in a similar way to the operational calibration procedure.
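This estimate can be sketched directly. The warm counts and gain below are synthetic, and the reference is assumed to be the mean warm count of the three lines either side of the line under test:

```python
import numpy as np

def nedt_warm(warm, gain, halfwidth=3):
    """NEdT from warm calibration counts: difference each scan line's
    four warm views from the mean warm count of the `halfwidth` lines
    either side (the line itself omitted), then take the standard
    deviation of all differences divided by the gain.  Referencing to
    nearby lines captures 1/f fluctuations as well as random noise.

    warm : (nscan, 4) warm view counts;  gain : counts per kelvin.
    """
    nscan = warm.shape[0]
    diffs = []
    for i in range(halfwidth, nscan - halfwidth):
        ref_lines = list(range(i - halfwidth, i)) + list(range(i + 1, i + halfwidth + 1))
        ref = warm[ref_lines].mean()
        diffs.extend(warm[i] - ref)
    return np.std(diffs) / gain

rng = np.random.default_rng(3)
warm = 290.0 * 8.0 + rng.normal(0.0, 2.0, (2000, 4))   # sigma = 2 counts
print(nedt_warm(warm, gain=8.0))   # ~0.25 K for purely random noise
```

For purely random noise the result is close to the per-sample noise (2 counts / 8 counts per K here, with a small inflation from the finite reference); any 1/f component inflates it further, which is the point of the method.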

We can also compute an effective NEΔT for a 3 × 3 averaged brightness temperature field; in this case we use lines i − 4, i − 3, i − 2, i + 2, i + 3, and i + 4 to derive the reference count, to avoid unwanted correlations between the lines under test and the lines used as reference. After subtracting the reference counts for each line, 3 × 3 averaging of the resulting differences is performed before computing the final standard deviation. The 3 × 3 NEΔT is a useful diagnostic because the spatially filtered brightness temperature field used in the NWP analysis is expected to have similar noise properties to a 3 × 3 averaged field; both have beamwidths comparable to AMSU-A (except for channels 1-2, for which the AAPP manipulation produces a smaller beamwidth than the 3 × 3 average and hence a larger NEΔT).

The warm-view NEΔTs are shown in Figure 7, together with the warm-view NEΔT from the global SDR files. The effective 3 × 3 NEΔT for the key lower atmospheric temperature sounding channels (50–60 GHz, channels 6–10) is in the range 110–160 mK. This compares favourably with the NEΔTs for AMSU-A on Metop-A and NOAA-19, which are in the range 130–180 mK. The total noise, computed using the method described in this section, is larger than the SDR noise, due to the effect of striping: typically 20% larger for the temperature sounding channels 3–10, but as much as 50% larger for channel 16.

Figure 7: NEΔT for ATMS, derived from warm view counts. Light blue: total noise for individual samples, using the method described in Section 3.3; orange: noise estimates from global SDR files; black: for 3 × 3 sample averaging (i.e., field of view for channels 3–15 comparable with AMSU-A). The values for total noise and 3 × 3 noise were derived from one month of direct broadcast data received at Exeter (November 2014).

We can also see that the ratio between the single-sample NEΔT and the 3 × 3 NEΔT varies between approximately 3 (the value that would be expected if the noise were purely random) and 2 (for the channels that are most affected by 1/f noise). In other words, the presence of striping degrades the ability to reduce noise through spatial averaging.
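This effect is easy to demonstrate with synthetic fields: white noise averages down by √9 = 3 under a 3 × 3 mean, while adding a per-scan-line (striping) component of equal variance pulls the ratio down towards 2:

```python
import numpy as np

def avg3x3(field):
    """Non-overlapping 3 x 3 block average of a (scan, fov) field."""
    ns, nf = (field.shape[0] // 3) * 3, (field.shape[1] // 3) * 3
    return field[:ns, :nf].reshape(ns // 3, 3, nf // 3, 3).mean(axis=(1, 3))

rng = np.random.default_rng(4)
white = rng.normal(0.0, 0.1, (900, 96))
striped = white + rng.normal(0.0, 0.1, (900, 1))   # add per-line offsets

print(white.std() / avg3x3(white).std())      # close to 3
print(striped.std() / avg3x3(striped).std())  # closer to 2
```

The striped component is perfectly correlated across each scan line, so the 3 × 3 mean only averages it over 3 lines (a factor √3) rather than over 9 independent samples.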

3.4. Comparison with AMSU/MHS

To assess the data quality from ATMS, it is worthwhile to compare first-guess departures with those from equivalent sensors, in this case from AMSU/MHS, both before and after bias correction. Comparison of uncorrected data provides information on the raw data quality, whilst for the corrected data it shows the relative magnitude of residual biases in the data to be assimilated.

Comparison of ATMS and AMSU uncorrected and corrected innovations is shown in Figure 8 for data from January-February 2012. Figure 8(a) shows the mean and standard deviation of the uncorrected brightness temperature difference for the temperature sounding channels and Figure 8(b) the same for the corrected values. Before bias correction the ATMS values for the sounding channels are lower than or of the same order as the AMSU values, but after bias correction the standard deviations for ATMS are consistently slightly higher than for AMSU, despite the good radiometric performance. The reason is probably that in the Met Office system AMSU observations are remapped to the HIRS grid by AAPP, which results in a spreading of the AMSU footprint and a reduction of the NEΔT by a factor of 0.68. The ATMS observations are not remapped in this way.

Figure 8: Mean and standard deviation of uncorrected (O-B) and corrected (C-B) innovations for ATMS and AMSU 50–60 GHz temperature sounding channels ((a) and (b)) and 183 GHz humidity sounding channels ((c) and (d)) January to February 2012.

For the humidity sounding channels (Figures 8(c) and 8(d)) the mean and standard deviations of the brightness temperature differences are broadly similar for ATMS and AMSU. The effect of instrument noise is masked by the larger signal from errors in the background humidity field at these frequencies (standard deviation of the first guess departures for MHS/AMSU-B channels is a factor of 10 larger than those of temperature sounding channels from AMSU-A).

4. Assimilation Experiments

The impact of the ATMS data on global analyses and forecasts was tested by adding the ATMS data to a full Met Office observing system. Results from a summer season are presented here for the period 28 June–28 August 2012. The low resolution version of the operational configuration described in Section 2.2.2 was used for experiments and the corresponding controls.

The ATMS observation errors, expressed in the observation error covariance matrices [19], were derived from those for NOAA-19 AMSU channels. For channels with frequencies below 183 GHz, the NOAA-19 values were scaled by the ratio of NEΔT for equivalent channels on the two instruments, using the ATMS prelaunch specification. For channel 4 (51 GHz), which has no counterpart on AMSU, the error was estimated by interpolating between the values for channels 3 and 5. To account for the impact of striping, values were then inflated to a minimum of 0.35 K. An error of 4 K was used for the 183 GHz channels. The errors for all channels are assumed to be uncorrelated. In practice, it is known that the striping noise introduces interchannel correlations (Section 3.2), but these are ignored.
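The error derivation can be sketched as below. The 0.35 K floor and the channel-4 interpolation follow the text; the AMSU error values, NEΔTs, and the simple averaging are illustrative assumptions:

```python
def atms_obs_error(err_amsu, nedt_atms, nedt_amsu, floor=0.35):
    """Scale a NOAA-19 AMSU channel error by the NEdT ratio of the
    equivalent ATMS/AMSU channels (prelaunch ATMS specification),
    then inflate to a 0.35 K floor to allow for striping."""
    return max(err_amsu * nedt_atms / nedt_amsu, floor)

def atms_channel4_error(err_ch3, err_ch5):
    """Channel 4 (51 GHz) has no AMSU counterpart: interpolate between
    the errors for channels 3 and 5 (a simple average as a sketch)."""
    return 0.5 * (err_ch3 + err_ch5)

print(atms_obs_error(0.30, 0.20, 0.25))  # floor applies
print(atms_obs_error(0.30, 0.60, 0.25))  # scaled error exceeds floor
```

With these placeholder numbers, a quieter ATMS channel would still be assigned the 0.35 K floor, reflecting the striping inflation described above.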

ATMS temperature and moisture sounding channels (6–15 and 18–22) were used in the trial. Channels 8–15 were used over all surfaces, 18–22 over sea only, 6 over sea and sea ice, and 7 over all surfaces except high land.

Quality control tests screen out observations in the presence of deep cloud and precipitation [20–22], following the treatment of AMSU data, the data being sufficiently similar after remapping. The tests applied are an O-B threshold test, a liquid water test from the AAPP [22] to screen out ATMS channels 6 and 18, a cirrus cost test to screen out channels 18–20 [21], and the Bennartz rain test [20], which screens out channels 7–9, 21, and 22.

Observations are thinned prior to assimilation to reduce data volume and avoid spatial correlations. The operational treatment of AMSU was followed, with observations thinned to one per 154 km in the tropics and one per 125 km in the extratropics, and a temporal thinning window of 3 hours. Bias correction was calculated using two weeks of data from June 2012. An additional assimilation experiment was also carried out with a reduced thinning distance of 80 km and a time window of 1 hour.
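A minimal grid-box thinning sketch follows; the first observation per box wins, and the box size and cell mapping are simplifications of the operational scheme:

```python
import numpy as np

def thin_obs(lats, lons, box_km=125.0):
    """Keep at most one observation per latitude/longitude box roughly
    `box_km` across, returning the indices of the kept observations."""
    deg = box_km / 111.0                     # ~111 km per degree of latitude
    seen, keep = set(), []
    for i, (la, lo) in enumerate(zip(lats, lons)):
        # shrink longitude by cos(lat) so boxes stay ~box_km wide
        cell = (int(np.floor(la / deg)),
                int(np.floor(lo * np.cos(np.radians(la)) / deg)))
        if cell not in seen:
            seen.add(cell)
            keep.append(i)
    return keep
```

Two observations ~15 km apart fall in the same box and only one survives, while observations separated by more than the box size are both retained.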

4.1. Impact on Forecast Skill

Figure 9 shows the percentage change in forecast RMS errors. The impact of assimilating ATMS is positive overall for both experiments, with the biggest impact seen in the Southern Hemisphere. The reduced thinning experiment performs slightly better for nearly all parameters.

Figure 9: Changes in root mean square (RMS) errors in mean sea level pressure (PMSL), 500 hPa geopotential height (H500), and winds at 850 hPa (W850) and 250 hPa (W250) for the northern hemisphere (NH), tropics (TR), and southern hemisphere (SH) at forecast ranges from T + 24 hours to T + 120 hours, for experiments in which ATMS data is added with spatial thinning set to 125/154 km and 80 km. Verification is relative to observations during the period 28 June–28 August 2012. Changes are relative to a control experiment which is similar in configuration to the Met Office operational configuration as at January 2013.
4.2. Impact on Background Fits for AMSU

Figure 10 shows the impact of assimilating ATMS on the background fits to NOAA-18 AMSU temperature sounding channels normalised to the standard deviation of the control during the period 28 June–28 August 2012. Results are shown for operational thinning (red line) and reduced thinning (blue line) for the southern hemisphere, tropics, and northern hemisphere. Adding ATMS with operational thinning generally improves the background fit for AMSU. Results are more variable for reduced thinning. NOAA-19 and Metop-A AMSU temperature sounding channels showed similar results (not shown).

Figure 10: Standard deviation of corrected brightness temperature difference for NOAA-18 AMSU temperature sounding channels, normalised with respect to the control experiment (black). Southern Hemisphere (a), tropics (b), and Northern Hemisphere (c). Red line is operational thinning; blue line shows reduced thinning (28 June–28 August 2012).
4.3. Impact on Background Fits for IASI and AIRS

Figure 11 shows the impact of ATMS on the fits to IASI and AIRS channels; the blue line is the reduced thinning trial and the red line is the operational thinning. Numbers are normalised to the standard deviation of the control (black line). Introducing ATMS data in general improves the fit of IASI to background, with the reduced thinning (blue line) performing marginally better than the operational thinning (red line). The exception to this is for the window and ozone channels with wave numbers ~820–1140 where reduced thinning shows a marked positive impact and operational thinning shows a marked degradation. The reason for this behaviour is not immediately apparent and warrants some further investigation.

Figure 11: Standard deviation of corrected brightness temperature difference for IASI (a) and AIRS ((b); note that the scale on the x-axis is not linear), normalised with respect to the control experiment (black). Red line shows the results for operational thinning; blue line shows the results for reduced thinning (28 June–28 August 2012).

For AIRS, operational thinning performs better for the channels which peak in the stratosphere (wave numbers up to about 650 cm−1) and reduced thinning performs better for wave numbers 2385 cm−1 and higher (water vapour and short wave channels). In between, the picture is mixed with reduced thinning generally giving more positive impact.

5. Operational Performance of ATMS

ATMS has been used operationally at the Met Office since April 2013. Its performance is stable and it contributes to the performance of the model, complementing the other instruments. Forecast Sensitivity to Observations (FSO) analysis [23] is carried out routinely at the Met Office. A time series of the impact from ATMS within the full system as calculated by this technique is shown in Figure 12 for November 2014. The sensitivity of the system to ATMS is about 1/5 of that seen from all the AMSU-A instruments combined, as shown in Figure 13 for January 2014.

Figure 12: Time series of the FSO statistics for ATMS in the Met Office operational system for November 2014.
Figure 13: Total impact of all assimilated obs. in the Met Office DA system. Magnitude of ATMS contribution is about a fifth of that from 5 AMSU-A instruments (January 2014).

Impact on the forecast examined on a channel by channel basis shows that as expected the temperature sounding channels (53–55 GHz, channels 6–9) give the greatest impact followed by the moisture sounding channels around the 183 GHz water vapour line (channels 18–22). Figure 14 shows this for January 2014 in the operational Met Office system.

Figure 14: FSO statistics separated out by ATMS channel (January 2014).

A corresponding plot for ATOVS is shown in Figure 15; a similar pattern is seen for these instruments, with the tropospheric sounding channels exerting the most influence and the 183 GHz water vapour channels giving a smaller but still significant impact. For each channel, the impact from ATOVS is of the order of 4 times that of ATMS. The volume of ATOVS data (with 5 instruments currently assimilated at the Met Office) is partly responsible for the difference in impact between two nominally identical channels; also contributing will be the additional noise from the striping signal in the ATMS data (compared to the very good noise performance of the remapped ATOVS data; see Section 3.3) and the less finely tuned quality control for the newer instrument.

Figure 15: ATOVS FSO statistics by channel (January 2014).

The reliability of the ATMS data can be seen in the two-year monitoring time series in Figure 16. In Figure 16(a) the mean of the corrected innovations (blue line) shows seasonal variability but, apart from a blip in May 2014, stays close to zero; the RMS of the corrected innovations (red line) is less variable. Figure 16(b) shows the observation count over the same period. Data dropouts are more frequent than for AMSU-A on all but the NOAA-15 platform, partly because of S-NPP data transfer issues between NOAA and EUMETSAT. Typically 6000 ATMS observations are assimilated every cycle.
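The monitored quantities are simply the mean and root-mean-square of the bias-corrected innovations (observation minus background minus bias correction) over each cycle. A minimal sketch, assuming the brightness temperatures are available as flat arrays (the values shown are illustrative):

```python
import numpy as np

def innovation_stats(obs_bt, background_bt, bias):
    """Mean and RMS of corrected innovations (O - B - bias), in K."""
    d = np.asarray(obs_bt) - np.asarray(background_bt) - np.asarray(bias)
    return d.mean(), np.sqrt(np.mean(d ** 2))

# Illustrative brightness temperatures (K) for a few observations in one cycle
obs = np.array([250.3, 251.1, 249.8])
bg = np.array([250.1, 251.0, 250.2])
mean_d, rms_d = innovation_stats(obs, bg, bias=0.05)
```

A drifting mean would indicate a calibration or bias-correction problem, while a rising RMS would indicate degrading instrument noise; both stayed stable for ATMS over the period shown.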

Figure 16: Time series for ATMS channel 6. (a) The mean (blue) and RMS (red) of the corrected innovations and (b) the assimilated observation count over a two-year period, May 2013–February 2015.

6. Summary and Conclusions

An initial assessment of ATMS data has been carried out using four methods: (1) inspection of observations and of differences between model and observation, (2) comparisons with AMSU and MHS data, (3) preoperational assimilation experiments, and (4) monitoring of operational performance in the Met Office NWP system.

Despite its radiometric performance advantage over AMSU, the standard deviations of bias-corrected differences from the model for most of the ATMS temperature sounding channels (6–15) are slightly worse than for the AMSU-A equivalents. This can be explained by differences in the treatment of the observations within the Met Office processing system (AMSU data being mapped to the HIRS grid).

A striping effect, arising from 1/f noise introduced by the low-noise amplifier in the ATMS instrument, is noticeable in the corrected innovations, and an inflated observation error was used to compensate for possible interchannel correlations.
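The striping signal is correlated along a scan line: every field of view in a scan shares a slowly varying gain offset, so it cannot be reduced by averaging within a scan the way white noise can. A minimal illustration of the statistical signature (the noise amplitudes here are assumed for illustration only, not ATMS calibration values):

```python
import numpy as np

rng = np.random.default_rng(42)
n_scans, n_fovs = 200, 96  # ATMS has 96 fields of view per scan line

white = rng.normal(0.0, 0.1, size=(n_scans, n_fovs))   # ~0.1 K white noise
stripe = rng.normal(0.0, 0.2, size=(n_scans, 1))       # shared per-scan offset
innovations = white + stripe                           # striped O-B field

# Striping appears as excess variance of the scan-line means: for pure
# white noise the row-mean std would be ~0.1 / sqrt(96) ~ 0.01 K, but the
# shared offset keeps it near 0.2 K however many FOVs are averaged.
row_mean_std = innovations.mean(axis=1).std()
```

This is why the striping motivates an inflated observation error rather than heavier averaging: the correlated component does not diminish with the number of fields of view.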

For the humidity sounding channels examined (183.31 ± 7.0, 183.31 ± 3.0, and 183.31 ± 1.0 GHz; channels 18, 20, and 22) the performance of ATMS is very close to that of AMSU-B/MHS. For the surface viewing channels (1–3, 5, 16, and 17) the ATMS data show slightly larger bias and standard deviation (not shown).

Two assimilation experiments were conducted in which ATMS was added to a full Met Office system for a summer season. For the experiment using thinning distances of 154 km in the tropics and 125 km in the extratropics, a small positive impact on RMS errors for a number of parameters was seen. For the experiment using reduced thinning (80 km), the impact was more strongly positive, although closer inspection of the fits to background of other instrument types when ATMS was added showed a more mixed response that requires further study.
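Thinning at a given distance can be realised with a simple greedy pass: accept an observation only if it lies farther than the thinning distance from every observation already accepted. A minimal sketch using great-circle distances (the function names are ours, and the operational Met Office thinning scheme differs in detail):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine distance between two lat/lon points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

def thin(obs_latlon, min_sep_km):
    """Greedy thinning: keep an obs only if >= min_sep_km from all kept obs."""
    kept = []
    for lat, lon in obs_latlon:
        if all(great_circle_km(lat, lon, klat, klon) >= min_sep_km
               for klat, klon in kept):
            kept.append((lat, lon))
    return kept

# Three obs spaced ~55.6 km apart along the equator (0.5 deg of longitude);
# with an 80 km thinning distance the middle one is rejected.
kept = thin([(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)], 80.0)
```

Reducing the thinning distance from ~125–154 km to 80 km roughly doubles to triples the accepted data density, which is consistent with the stronger (if more mixed) impact seen in the second experiment.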

Assimilation of ATMS data, with the processing configuration as described for the forecast impact experiments without reduced thinning, was made operational in the Met Office global model on 30 April 2013. Routine monitoring has shown that ATMS data have had a significant impact on forecast quality over the period to November 2014, despite the fact that five other microwave sounders (ATOVS instruments on NOAA-15, NOAA-18, NOAA-19, Metop-A, and Metop-B) are already assimilated.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors thank James Cotton for providing the FSO plots.

References

  1. J. H. Marburger, “Restructuring the national polar-orbiting operational environmental satellite system,” White House Office of Science and Technology Policy Fact Sheet 2011, R&D Budget Submission, 2010.
  2. G. Goodrum, K. B. Kidwell, and W. Winston, Eds., NOAA KLM User's Guide, NOAA-NESDIS/NCDC, Suitland, Md, USA, 1999, http://www.ncdc.noaa.gov/oa/pod-guide/ncdc/docs/klm/index.htm.
  3. C. Muth, P. S. Lee, J. C. Shiue, and W. A. Webb, “Advanced technology microwave sounder on NPOESS and NPP,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '04), pp. 2454–2458, September 2004.
  4. W. Bell, S. J. English, B. Candy et al., “The assimilation of SSMIS radiances in numerical weather prediction models,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 4, pp. 884–900, 2008.
  5. Q. Lu, W. Bell, P. Bauer, N. Bormann, and C. Peubey, “Characterizing the FY-3A microwave temperature sounder using the ECMWF model,” Journal of Atmospheric and Oceanic Technology, vol. 28, no. 11, pp. 1373–1389, 2011.
  6. N. Bormann, A. Fouilloux, and W. Bell, “Evaluation and assimilation of ATMS data in the ECMWF system,” Journal of Geophysical Research D: Atmospheres, vol. 118, no. 23, pp. 12970–12980, 2013.
  7. W. Bell, S. Di Michele, P. Bauer et al., “The radiometric sensitivity requirements for satellite microwave temperature sounding instruments for numerical weather prediction,” Journal of Atmospheric and Oceanic Technology, vol. 27, no. 3, pp. 443–456, 2010.
  8. NWP SAF, “Annex to AAPP scientific documentation: pre-processing of ATMS and CrIS,” Document NWPSAF-MO-UD-027, 2011, http://nwpsaf.eu/deliverables/aapp/NWPSAF-MO-UD-027_ATMS_CrIS.pdf.
  9. J. R. Eyre, “Inversion of cloudy satellite sounding radiances by nonlinear optimal estimation. II: application to TOVS data,” Quarterly Journal of the Royal Meteorological Society, vol. 115, no. 489, pp. 1027–1037, 1989.
  10. P. Courtier, J.-N. Thepaut, and A. Hollingsworth, “A strategy for operational implementation of 4D-Var, using an incremental approach,” Quarterly Journal of the Royal Meteorological Society, vol. 120, no. 519, pp. 1367–1387, 1994.
  11. F. Rawlins, S. P. Ballard, K. J. Bovis et al., “The Met Office global four-dimensional variational data assimilation scheme,” Quarterly Journal of the Royal Meteorological Society, vol. 133, no. 623, pp. 347–362, 2007.
  12. T. Auligné, A. P. McNally, and D. P. Dee, “Adaptive bias correction for satellite data in a numerical weather prediction system,” Quarterly Journal of the Royal Meteorological Society, vol. 133, no. 624, pp. 631–642, 2007.
  13. B. A. Harris and G. Kelly, “A satellite radiance-bias correction scheme for data assimilation,” Quarterly Journal of the Royal Meteorological Society, vol. 127, no. 574, pp. 1453–1468, 2001.
  14. R. Saunders, M. Matricardi, A. Geer, P. Rayer, O. Embury, and C. Merchant, “RTTOV9 science and validation plan,” RTTOV-9 Science and Validation Report NWPSAF-MO-TV-020, 2010.
  15. N. Bormann, W. Bell, A. Fouilloux, I. Mallas, N. Atkinson, and S. Swadley, “Initial results from using ATMS data at ECMWF,” in Proceedings of the 18th International TOVS Study Conference, Toulouse, France, March 2012.
  16. Z. Qin, X. Zou, and F. Weng, “Analysis of ATMS striping noise from its Earth scene observations,” Journal of Geophysical Research D: Atmospheres, vol. 118, no. 23, pp. 13214–13229, 2013.
  17. R. F. Voss, “1/f (flicker) noise: a brief review,” in Proceedings of the 33rd Annual Symposium on Frequency Control, pp. 40–46, June 1979.
  18. N. C. Atkinson and S. McLellan, “Initial evaluation of AMSU-B in-orbit data,” in Microwave Remote Sensing of the Atmosphere and Environment, vol. 3503 of Proceedings of SPIE, pp. 276–287, Beijing, China, August 1998.
  19. A. C. Lorenc, “Analysis methods for numerical weather prediction,” Quarterly Journal of the Royal Meteorological Society, vol. 112, pp. 1177–1194, 1986.
  20. R. Bennartz, A. Thoss, A. Dybbroe, and D. B. Michelson, “Precipitation analysis using the advanced microwave sounding unit in support of nowcasting applications,” Meteorological Applications, vol. 9, no. 2, pp. 177–189, 2002.
  21. S. J. English, J. R. Eyre, and J. A. Smith, “A cloud-detection scheme for use with satellite sounding radiances in the context of data assimilation for numerical weather prediction,” Quarterly Journal of the Royal Meteorological Society, vol. 125, no. 559, pp. 2359–2378, 1999.
  22. T. Labrot, L. Lavanant, K. Whyte, N. Atkinson, and P. Brunel, “AAPP documentation, scientific description,” NWP SAF Document NWPSAF-MF-UD-001, Satellite Application Facility on Numerical Weather Prediction, 2006.
  23. A. C. Lorenc and R. T. Marriott, “Forecast sensitivity to observations in the Met Office Global numerical weather prediction system,” Quarterly Journal of the Royal Meteorological Society, vol. 140, no. 678, pp. 209–224, 2014.