Abstract

This paper evaluates the sensitivity of Weather Research and Forecasting (WRF) model simulations to cumulus and microphysics schemes in characterizing a deep convection event over the Cuban island on 1 May 2012. To this end, 30 experiments combining five cumulus and six microphysics schemes, plus two experiments in which the cumulus parameterization was turned off, are tested in order to choose the combination that represents the event precipitation most accurately. ERA Interim is used as lateral boundary condition data for the downscaling procedure. Results show that convective schemes are more important than microphysics schemes for determining the precipitation areas within a high-resolution domain simulation. Also, while one cumulus scheme captures the overall spatial convective structure of the event more accurately than the others, it fails to capture the precipitation intensity. This apparent discrepancy makes the ranking of scheme combinations sensitive to the verification method used. The same sensitivity is observed in a comparison between parameterized and explicit cumulus formation when the Kain-Fritsch scheme is used. A loss of added value is also found when the Grell-Freitas cumulus scheme is activated at 1 km grid spacing.

1. Introduction

Deep convection systems associated with sea breeze convergence are a common feature over the Cuban island during the May to October rainy season each year [1]. These systems form along the island, where surface winds converge, producing strong updrafts that carry warm moist air aloft. They are characterized by a chain of cumulonimbus clouds that vary in intensity depending on the available energy in the atmosphere. In extreme cases, they can produce large amounts of precipitation, thunderstorms, hail, and even tornadoes. Recent research has shown that this mechanism is the third most important phenomenon leading to intense precipitation in Cuba, after troughs and tropical waves [Alvarez L., 2014. Personal Communication]; the importance of this mechanism and its frequency of occurrence in the rainy season are two motivating factors for the present study.

Deep convection and breeze convergence have been widely studied in recent years. Three recent publications have analyzed breeze convergence cases from a numerical perspective using explicit cumulus formation at 3 km grid spacing with the Fifth-Generation Penn State/NCAR Mesoscale Model (MM5) and the Weather Research and Forecasting (WRF) model [2–4]. They described and characterized particular types of sea breeze convergence and even highlighted how this phenomenon impacts the local atmospheric circulation. However, they did not analyze how it becomes an important forcing mechanism (as in the case of the Cuban island) leading to deep convection events in which precipitation intensity and its spatial distribution are the most important features.

Moreover, deep convection systems have been studied from different perspectives and applications. While some studies have focused on observations, characterizing the distribution of deep convection systems [5], their thermodynamic properties [6], or the link between the associated precipitation and cloud-to-ground lightning [7], others have focused on numerically simulating deep convection and assessing model performance. In this respect, Weisman et al. [8] and Clark et al. [9] analyzed explicit convective precipitation forecasts using the WRF model at 3-4 km resolution and compared their results with operational model forecasts using the Betts-Miller-Janjic cumulus parameterization [10] at lower resolutions (12 km and 20 km, resp.). Results from the first study suggested significant added value of the high-resolution forecasts in representing the convective systems and the diurnal convective cycle. The second study found, by means of a neighborhood-based verification approach, that the skill of the explicit convection forecasts exceeded that of the parameterized convection forecasts for rainfall thresholds greater than 6 mm and for spatial scales of the statistical measure larger than the inner domain resolutions (3-4 km). However, both studies compared explicit cumulus forecasts with low-resolution operational forecasts using only one cumulus parameterization scheme.

Kleczek et al. [11] centered their study on a planetary boundary layer (PBL) sensitivity analysis using WRF at 3 km grid spacing with no cumulus parameterization. Their results revealed substantial differences between the PBL schemes. In particular, they found that nonlocal schemes tend to produce higher temperatures and higher wind speeds than local schemes. The main limitation of these studies is that they did not analyze the model sensitivity to cumulus and microphysics parameterizations and domain resolution. Other studies have considered this type of analysis. For example, Warner and Hsu [12] analyzed the sensitivity of explicitly resolved convection in nested domains to physical inconsistencies in the parameterized convection used in the outer domains. Here, only three cumulus schemes were varied and only one microphysics parameterization was used. Ruiz et al. [13] focused on the sensitivity of short-term forecasts over South America. They also used three cumulus parameterizations and only one microphysics scheme and varied other physics options, such as the PBL and soil model parameterizations. In this case, precipitation was not verified. Raktham et al. [14] varied the microphysical schemes using four different options but only two cumulus schemes. However, no explicit convection experiment was performed in this study, since the WRF domain resolution was 36 km.

It is well known that local processes, such as deep convection, are difficult to represent in model simulations [15]. Since their dynamics are scale-dependent, they have to be reproduced with the aid of convective parameterizations to account for dynamical processes occurring at scales smaller than the model resolution. Studies have shown that a number of processes (e.g., radiation fluxes, microphysical processes) also need to be taken into account in order to represent deep convection [16]. For instance, Pereira and Rutledge [17] stated that deep convection in the tropics is related to cold cloud microphysical processes, such as those involving the formation of ice, snow, graupel, and hail. Russo et al. [18] studied deep convection in a modeling framework using a variety of numerical weather prediction models, including the WRF model. They showed that high-resolution models are needed if precipitation rates over tropical islands are to be well characterized. Thus, a proper representation of deep convection in numerical modeling should include all related physical processes, in addition to the use of a high horizontal resolution that can capture the local forcing mechanisms leading to the formation of convective systems.

Hence, the present paper aims to enhance the understanding of the WRF model applied to tropical regions and of its sensitivity to the plethora of cumulus and microphysics parameterization schemes available in the model. This study focuses on a high-resolution precipitation forecast for one case of deep convection over Cuba. The deep convection event of 1 May 2012 was significant because it affected almost the entire island, producing heavy rainfall in just a few hours. In particular, the following questions will be addressed: (a) Can WRF accurately represent the convective precipitation for the 1 May 2012 event? (b) Which cumulus-microphysics scheme combination performs best in characterizing convective precipitation for this event? We hypothesize that (a) since convective precipitation in the tropics is formed by mixed-phase cloud processes, a microphysics parameterization scheme that includes graupel production processes is expected to characterize precipitation more accurately [19] and that (b) the Kain-Fritsch cumulus parameterization is expected to perform best, since it improves as resolution increases when compared with other schemes [16].

The structure of this paper is as follows. The case study (a deep convection event on 1 May 2012), the experiment design, and the verification approach are described in Section 2 (Data and Methods). The results are discussed in Section 3, which is divided into three main parts. The first part analyzes the lateral boundary condition data in order to check whether they reasonably represent the state of the atmosphere for the study; it also briefly analyzes the WRF model's skill in representing the convective structure generated by breeze convergence. The second part presents the sensitivity analysis of different cumulus and microphysics parameterizations within WRF using a neighborhood-based statistical approach and point observations. The third part looks at the spatial distribution of precipitating areas using a grid-based statistical approach with satellite images as reference. The summary and main conclusions are presented in Section 4.

2. Data and Methods

2.1. The Deep Convection Event of 1 May 2012

On 1 May 2012, large precipitation amounts were measured at several locations in Cuba in just a few hours. Clouds started developing as the sun rose and the north sea breeze converged with the south sea breeze (central-south and north of the eastern and western parts of the island, resp.); see Figure 1. The event was characterized by a sparse spatial distribution of precipitation generated by heavy thunderstorms along the island. Several meteorological stations collected more than 50 mm of precipitation in less than 3 hours; such accumulations are considered intense precipitation on the island. The vertical wind profile shows southeasterly winds at the surface, with speeds between 5 and 25 km/h, that turned to the west with height and exceeded 50 km/h at 200 mb, which favored air lifting. The maximum temperature ranged from 30 to 33 degrees Celsius across the island, which acted as the fuel for this convective system. The forcing mechanism produced by the sea breeze convergence at the surface ensured that humidity values over 70% were carried to the levels aloft, where cumulonimbus clouds developed with tops over 10 km in height.

2.2. WRF Description and the Experiment Design

The WRF model has recently been implemented at the Center for Atmospheric Physics (CAP) of the Institute of Meteorology in Cuba (INSMET), and further sensitivity studies are needed in order to improve the prediction of convective systems. Moreover, one of the goals of an ongoing project at CAP is to produce reasonable precipitation forecasts for the National Institute of Hydraulic Resources in Cuba. Thus, the need to improve short-term (i.e., less than 72 hours) numerical forecasts at INSMET, especially when it comes to forecasting intense precipitation associated with severe convective storms that may affect people’s safety and infrastructure, is also what makes this study relevant to Cuba and the Caribbean in general.

We have used the WRF model version 3.5.1, developed at the National Center for Atmospheric Research (NCAR). WRF uses lateral boundary and initial conditions from global models and runs as a limited-area model, which allows for the increase of spatial resolution for a particular region. WRF is composed of two main parts: (a) the dynamical core, the Advanced Research WRF (ARW core), which can be used in a nonhydrostatic manner, hence making WRF-ARW a suitable choice for high-resolution simulations [15], and (b) the parameterization set, which includes different options for different parameterization types (e.g., radiation, cumulus, and microphysics). The options within a parameterization type vary in complexity, efficiency, applicability, and computational cost [21]. Table 1 summarizes the model options used in this study.

The six-hourly European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA Interim), at T255 (~0.7 degrees, or around 75 km horizontal resolution) and 60 vertical levels, was used as initial and lateral boundary conditions. The ERA Interim dataset was created using the ECMWF Integrated Forecast System (IFS) model with fully coupled atmosphere, land surface, and ocean waves, together with a 4-dimensional variational assimilation (4D-Var) system [25–27]. We have used land and elevation data from the United States Geological Survey (USGS) at thirty seconds (~900 m) of spatial resolution for all domains. These data are available on the WRF website at http://www2.mmm.ucar.edu/wrf/users/download/get_sources_wps_geog.html.

2.3. Cumulus and Microphysical Parameterization Sets

Six microphysical and five cumulus schemes were selected such that there were enough differences between them (i.e., in the treatment of ice particles, parameter distributions, and complexity). They also possess features that allow for a proper representation of mixed-phase convective precipitation. Each experiment combines a microphysical scheme with a cumulus one, keeping the cumulus parameterization turned on for each domain regardless of the resolution.

It seems to be widely accepted [8, 9, 11] to let convection be explicitly resolved for grid spacings finer than 5-6 km, despite Stensrud [16], who stated that (a) “…cumulus formation is explicitly resolved in numerical simulations at scales ranging from 25 meters to 1 km” and (b) “…the convective precipitation field obtained at a given domain resolution varies dramatically depending upon which convective scheme is used, which might be related to the way it interacts with the grid-scale and explicit microphysical parameterizations.” However, Mohan and Bhati [28] analyzed the WRF model performance with parameterized cumulus in a domain over the subtropical region of Delhi, India, with a horizontal grid spacing of 2 km. They found that a combination of physical options using the Kain-Fritsch cumulus parameterization [29] showed the best model performance during the verified period. They also found that the microphysics and cumulus parameterizations had less impact than the other physical options on the model output. These results are limited, since they only considered temperature, wind speed, and humidity variables (no precipitation), which were verified against data from two sounding stations (no spatial verification). Also, convective clouds over the Cuban island may vary in horizontal extent from hundreds of meters to a few kilometers, while their vertical extent can be enough to produce precipitation; in extreme cases cloud tops can reach 16–18 km in height.

Taking these facts into consideration, two more simulations were added to this study, allowing explicit cumulus formation at 1 km resolution (d04) while keeping two different cumulus parameterizations turned on in the outer domains (d01, d02, and d03), in order to analyze the sensitivity of precipitation to the cumulus parameterization. Therefore, this study comprises 32 experiments: the total number of possible combinations of the selected cumulus and microphysics parameterizations (30) plus the 2 extra experiments. This is one of the first studies to test as many combinations of microphysical and cumulus schemes, and one of the first using WRF specifically for Cuba [30, 31]. Also, Martínez-Castro et al. [32] conducted a sensitivity analysis of summer precipitation using the Regional Climate Model version 3 (RegCM3). They tested only three different cumulus parameterization schemes and two ocean flux schemes, and the highest resolution used for a domain centered on Cuba was only 25 km. They highlighted that although the Grell convective scheme [33] with the Arakawa-Schubert [34] closure assumption gave a more realistic diurnal precipitation cycle, the domain resolution (25 km) was too coarse to resolve the sea breeze over the Caribbean islands.

2.3.1. Short Description of Cumulus Schemes

This research focuses on five cumulus physics options, described as follows:
(i) The Kain-Fritsch (KF) scheme is a deep and shallow convection subgrid scheme that uses a mass flux approach with downdrafts and a CAPE (Convective Available Potential Energy) removal time scale. It includes cloud, rain, ice, and snow detrainment and cloud persistence over the convective time scale. This scheme is able to account for the small-scale processes that lead to the development of convection [29].
(ii) The Betts-Miller-Janjic (BMJ) scheme is an adjustment-type scheme with deep and shallow profiles. It has no explicit updraft or downdraft and no cloud detrainment either [10].
(iii) The Grell-Freitas (GF) scheme is a modification of the Grell-Devenyi [35] ensemble scheme. It is a multiclosure, multiparameter ensemble method with typically 144 subgrid members that tries to smooth the transition to cloud-resolving scales. It has explicit updrafts and downdrafts and includes cloud and ice detrainment.
(iv) The Tiedtke scheme (Tiedtke) is a mass-flux scheme with a CAPE-removal time scale closure. It includes shallow convection, momentum transport, and cloud and ice detrainment [36].
(v) The New Simplified Arakawa-Schubert (NSAS) scheme is a new mass-flux scheme with deep and shallow components that includes cloud and ice detrainment, downdrafts, momentum transport, and single, simple clouds [37].

2.3.2. Short Description of Microphysics Schemes

The following is a short description of the selected microphysical schemes:
(i) The Lin et al. [38] scheme (Lin) includes ice, snow, and graupel processes, suitable for real-time high-resolution simulations.
(ii) The WRF Single-Moment 5-class scheme (WSM5) is a simple and efficient scheme with ice and snow processes, suitable for mesoscale grid sizes, that allows for mixed-phase processes and supercooled water [39].
(iii) The Eta microphysics scheme (Eta) is a simple, efficient scheme with diagnostic mixed-phase processes [40].
(iv) The New Thompson et al. scheme (Thompson) includes ice, snow, and graupel processes suitable for high-resolution simulations; it also adds rain number concentration in addition to the ice number calculations done in earlier versions of WRF [41].
(v) The Milbrandt-Yau Double-Moment 7-class scheme (MY) includes separate categories for hail and graupel, with double-moment cloud, rain, ice, snow, graupel, and hail [42].
(vi) The Morrison double-moment scheme (M2m) has double-moment ice, snow, rain, and graupel for cloud-resolving simulations [43].

Here, every experiment will be called a combination of the “Cumulus_Microphysics” options; so, for example, “KF_Thompson” refers to the experiment that combined Kain-Fritsch cumulus parameterization with the Thompson microphysics scheme. When a cumulus scheme (CU) is turned off, then the microphysics scheme (MP) solves for total precipitation; in these cases the combination is named Cuoff_MP.
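This naming convention also fixes the size of the experiment set; a minimal sketch that enumerates all 32 labels (scheme abbreviations as defined in Sections 2.3.1 and 2.3.2):

```python
# Enumerate the 32 experiment labels: 5 cumulus x 6 microphysics schemes,
# plus the 2 explicit-convection runs with the cumulus scheme turned off in d04.
from itertools import product

cumulus = ["KF", "BMJ", "GF", "Tiedtke", "NSAS"]
microphysics = ["Lin", "WSM5", "Eta", "Thompson", "MY", "M2m"]

experiments = [f"{cu}_{mp}" for cu, mp in product(cumulus, microphysics)]
experiments += ["KFoff_M2m", "GFoff_M2m"]  # explicit cumulus formation at 1 km

print(len(experiments))  # 32 experiments in total
```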

2.4. Real Data and Ranking Methods

For the comparison of precipitation-related variables, we use a network of 66 observation stations reporting 6-hourly accumulated precipitation (see Figure 2(b)). The observational data come from the Meteorological Observational Network and have already been validated by the Climate Center at INSMET following World Meteorological Organization requirements. Visible satellite images are also used, in this case for spatial verification. The visible satellite images were downloaded from http://goes.gsfc.nasa.gov/goeseast-lzw/cuba/vis/ in Tagged Image File Format (TIFF) at every hour from 1900 to 2300 UTC, 1 May 2012, at a spatial resolution of 1 km. Figure 1 corresponds to this dataset of visible satellite images taken by the geostationary satellite GOES 13 and made freely available on the Internet (upon request) by the National Oceanic and Atmospheric Administration (NOAA). All comparisons are made within the inner domain (see Figure 2(b)) of the model simulations.

2.4.1. Analysis with Observations

A point-to-point evaluation method, in which a node (or an interpolated datum) of the forecast grid is verified against an observational station datum, is used to verify precipitation. In this case, the method compares each station observation with the nearest node (i, j) in the d04 domain, where i and j represent the grid position along the zonal and meridional coordinates, respectively. The analysis is performed for two precipitation accumulation periods: 1800 UTC, 1 May, to 0000 UTC, 2 May 2012, and 0000 UTC to 0600 UTC, 2 May 2012. These are the only 6-hour periods remaining after the spin-up time of the model simulations. Equations (A.1), in Appendix A.1, show the verification measures used to rank all the experiments via the point-to-point method.
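The nearest-node matching described above can be sketched as follows; the variable names and the squared-degree distance are illustrative assumptions, not details taken from the study:

```python
import numpy as np

def nearest_node(lat2d, lon2d, station_lat, station_lon):
    """Return the (i, j) index of the d04 grid node closest to a station.

    lat2d/lon2d are the 2-D latitude/longitude arrays of the domain; a
    squared-degree distance is adequate at these small separations.
    """
    dist2 = (lat2d - station_lat) ** 2 + (lon2d - station_lon) ** 2
    return np.unravel_index(np.argmin(dist2), lat2d.shape)
```

With the 66 station coordinates, this yields the 66 forecast grid points whose 6-hourly accumulations are compared against the gauge data.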

2.4.2. Analysis with Satellite Images as the Reference Field

A spatial verification analysis, using satellite images as the reference field, is performed to determine the skill of the model for each experiment at every hour for which images are available. This method compares the spatial distribution of the rain-producing clouds that appear in the visible satellite images with the hourly cumulative precipitation of every experiment. The analyzed times are 1900 UTC, 2000 UTC, 2100 UTC, 2200 UTC, and 2300 UTC. The images, in TIFF format, were manually enhanced to ensure that only areas with a high probability of precipitation were highlighted for use by the spatial verification method. This enhancement was achieved by raising brightness levels to values above 170 (gray-level units) wherever penetrating cumulus cloud regions appeared in the images or wherever cumulonimbus clouds were more likely to be. These features can be identified in the visible satellite images by analyzing the texture, brightness, and appearance of clouds. The images were then read as raw brightness data for verification purposes. The authors are aware of the subjective nature of this procedure, but it was used here to achieve spatial verification when and where no radar data were available.
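In effect, the enhancement masks the raw brightness field at the 170 gray-level threshold quoted above; a schematic version (the function and array names are hypothetical):

```python
import numpy as np

def enhance(brightness, threshold=170):
    """Keep only pixels bright enough to mark likely precipitating cloud.

    brightness: 2-D array of raw gray-level values (0-255) read from a
    visible GOES-13 TIFF image; pixels at or below the threshold are zeroed.
    Returns the masked brightness field and the boolean cloud mask.
    """
    mask = brightness > threshold
    return np.where(mask, brightness, 0), mask
```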

The Fraction Skill Score (FSS) is used to measure the model’s skill in the analysis; its description is given in Appendix A.2. The other statistical measures used are the total interest and the median of the total interest distribution (i.e., the median of maximum interest). The total interest is a measure of the relationship between features (objects) detected in the forecast and verification fields after applying a convolution threshold.
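As a rough illustration of how an FSS of this kind is computed, the following sketch compares neighborhood exceedance fractions of binary forecast and reference fields, assuming square windows and zero padding at the domain edges (the paper's exact formulation is in Appendix A.2 and may differ):

```python
import numpy as np

def neighborhood_fraction(binary, n):
    """Fraction of points exceeding the threshold in each n x n window (n odd)."""
    pad = n // 2
    padded = np.pad(binary.astype(float), pad)
    # summed-area table with a leading zero row/column for fast box sums
    c = np.pad(padded.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    box = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return box / (n * n)

def fss(forecast, observed, threshold, n):
    """FSS = 1 - MSE(fractions) / reference MSE; 1 is a perfect forecast."""
    pf = neighborhood_fraction(forecast >= threshold, n)
    po = neighborhood_fraction(observed >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

Scanning the score over the precipitation thresholds listed below (0.1 to 10 mm) and increasing window sizes reproduces the kind of neighborhood-based ranking used in Section 3.3.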

The total interest is computed following the Method for Object-Based Diagnostic Evaluation algorithm [44] by analyzing up to 12 geometrical properties of an object. Table 2 summarizes the weights that were used for each geometrical property; zero weighted properties imply that they were not considered for the total interest calculation. In this study, the maximum total interest value of each experiment was added to the median of maximum interest so when this measure (called “interest” hereafter) equals 2, it represents the best performance.
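Ignoring the per-attribute confidence factors of the full MODE formulation, the total interest reduces to a weighted mean of per-attribute interest values, so that zero-weight attributes drop out as noted above. A schematic sketch (the attribute names are placeholders and the weights stand in for Table 2):

```python
def total_interest(interests, weights):
    """Weighted mean of per-attribute interest values in [0, 1].

    interests: dict mapping attribute name -> interest value for a
    forecast/observed object pair; weights: dict of attribute weights.
    Attributes with zero weight contribute nothing to the result.
    """
    num = sum(weights[k] * interests[k] for k in interests)
    den = sum(weights[k] for k in interests)
    return num / den
```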

The thresholds for convolving the fields were 170 gray level units for the verification (satellite) fields and 0.1, 0.5, 1.0, 2.5, and 5.0 mm for the precipitation forecast fields. For the FSS calculations, another threshold of 10 mm was added. These thresholds were used in the convolution process for object detection.

3. Results and Discussion

3.1. From Synoptic to Local Scale

We first assessed the lateral boundary condition data (LBCs) to ensure that the WRF model is forced with meteorological data that accurately describe the synoptic situation for the case study. Also, biases in global models are transferred through the LBCs and may significantly impact the downscaled results [45]. In addition to this, Warner [15] stated that fluxes of heat, moisture, and momentum should be well defined in the initialization and the LBCs, so that the limited-area model can treat all meteorological processes within the integration domain reasonably well. This also contributes to avoiding artificial dynamical feedbacks between grids that can cause instability in the model simulation [15].

Figure 3(a) shows the ERA Interim mean sea level pressure and wind field at 850 mb, and Figure 3(b) shows the ERA Interim vorticity and wind fields at 200 mb; these are two key variables that forced the WRF model through the LBCs. At first, the presence of the trough to the west of the Cuban island, as shown in Figure 3(a), suggested a higher likelihood of precipitation over the western part of the country. However, the positive vorticity field at 200 mb over the island (Figure 3(b)), with a slightly convergent westerly wind flow on the western side and a slightly divergent flow on the eastern side of the territory, made deep convection more likely over the central-eastern part of the country, as actually happened (see Figure 1).

The south-easterly wind flow (as in Figure 3(a)) is typical of the rainy season in Cuba [1]. Under this synoptic situation, the warm flow from the south and favorable local conditions ensure the moisture convergence needed to produce deep convection events (see Figure 4). Moreover, as may be seen in Figure 1, the cloud system structure formed along the interior of the Cuban territory and reached its maximum development later in the afternoon. This cloud structure exemplifies the role played by sea breeze convergence and accumulated heat during a typical summertime day in a deep convection situation at the local scale. Figures 5(a) and 5(b) show that the model is able to reproduce the convergence line along the country due to the sea breeze effect and heating. Overall, the lateral boundary condition data used for the WRF simulation (i.e., ERA Interim) reasonably describe the synoptic conditions of the case study. Consequently, the WRF model was able to reproduce reasonably well the local sea breeze convergence along the island, where moisture and heat combined to produce this deep convection event.

3.2. Sensitivity Analysis Assessment of Cumulative Precipitation Using Observation Stations

As described in the previous section, two methods were used to verify precipitation for all experiments. The first one (point-to-point method) aims to assess the six-hourly cumulative precipitation at 66 grid points, which represent the position of the observational stations, where six-hourly gauge precipitation data were gathered. These data were used as reference for verification.
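The quantities behind this comparison reduce to three station-wise statistics, sketched below over paired 6-hourly accumulations; the sign convention for the bias (observation minus forecast, so that overestimation is negative) is an assumption inferred from how the biases are described later in this section:

```python
import numpy as np

def station_stats(forecast, observed):
    """Correlation, standard deviation, and mean bias at the station points.

    Bias is observed minus forecast, so overestimation yields negative
    values (an assumed convention, matching the sign of the quoted biases).
    """
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    corr = np.corrcoef(f, o)[0, 1]
    return corr, f.std(), np.mean(o - f)
```

Plotting each experiment's correlation and standard deviation against the reference standard deviation gives the Taylor diagram of Figure 6(a); the third value populates the bias chart of Figure 6(b).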

In this respect, Figure 6(a) shows a Taylor diagram representing the correlation and standard deviation between every experiment and the reference, valid for 0000 UTC, 2 May 2012, and Figure 6(b) shows their mean biases. From this Taylor diagram, it can be seen that most experiments using the KF cumulus parameterization show higher correlations (above 0.4) and that, in general, the experiment with the best correlation (KF_WSM5) is not the one that best agrees with the precipitation values from observations (GF_Eta). This is in agreement with Gallus Jr. [46], who found that while the KF cumulus scheme represents heavy precipitation amounts more accurately, it can misplace them by hundreds of kilometers.

Among the KF experiments, KF_M2m performs best, since its standard deviation is closest to the reference, while GF_Eta has the best standard deviation of all experiments. Experiments using the GF and BMJ cumulus parameterizations show more dispersion across their corresponding microphysics schemes, while experiments using NSAS show practically no dispersion. It is noteworthy that all GF experiments and Tiedtke_MY have standard deviations above the reference, while the remaining experiments have standard deviations below it. This can be explained by the fact that the GF experiments widely overestimate the observed values, since their mean biases in Figure 6(b) consistently show relatively large negative (red) values.

On the other hand, BMJ- and NSAS-related experiments underestimate observations. Tiedtke experiments, with average correlations of 0.2 to 0.4, have relatively small mean biases, all indicating underestimation except for Tiedtke_MY, which indicates overestimation (a negative value: −1.97 mm). This is consistent with Tiedtke_MY being the only non-GF experiment with a standard deviation above the reference. This exception may be caused by the fact that MY is the only microphysics scheme used here that actually includes hail processes.

KF experiments show the lowest mean biases, and their standard deviations are below reference. While some KF experiments underestimate, others overestimate observed precipitation values. However, from numerical results (not shown here) it can be seen that this happens because KF experiments tend to slightly underestimate precipitation in areas where it actually rained and report small values of precipitation in areas where it did not. This is in agreement with KF tending to displace maximum precipitation values away from the correct location.

Then, a valid question arises: is it practical to turn on the KF cumulus parameterization at this resolution? When the cumulus parameterization is turned off (KFoff_M2m), it can be seen in Figure 6 that the correlation between forecast and observed values diminishes, while the absolute mean bias increases (i.e., more underestimation) and the standard deviation remains similar (compared to KF_M2m). This is because when precipitation is explicitly resolved (KFoff_M2m), the precipitation intensity is spatially underestimated by the microphysics routine (M2m). Note that, in this case, the precipitation area appears similar to the reference but is displaced relative to that of KF_M2m. In the latter case, although the maximum-intensity precipitation areas are displaced with respect to the reference, low-intensity precipitation values can still be found in their surroundings, which makes the correlation higher. See Figure 7.

For the GF experiments (GF_M2m and GFoff_M2m), the bias and standard deviation show better results when precipitation is explicitly resolved (GFoff_M2m). The correlation shows the same behavior as with KF. It is noteworthy that KFoff_M2m showed worse results than KF_M2m but is still better than the best of the GF experiments (GFoff_M2m).

Figure 8 shows results for the 6-hourly cumulative precipitation period valid at 0600 UTC, 2 May 2012. This time, all experiments underestimate precipitation and, accordingly, all standard deviation values have dropped well below the reference value with respect to those in Figure 6. In the case of the NSAS experiments, the standard deviation did not vary much, but the correlations dropped to zero or negative values. Only the experiments combining the Eta and WSM5 microphysics with the KF and Tiedtke cumulus parameterizations, together with Tiedtke_Lin, kept correlation values above 0.2.

The best-correlated experiment in this analysis is GF_Lin, with the largest correlation value (0.6) of all experiments for both periods. This is also the GF experiment with the highest mean bias at this time. Note that the results here apply to this case study only, but lessons can be learned from other experiments, such as the study by Davis et al. [47], who found that the WRF model, running at 22 km grid spacing and less, failed to represent the diurnal cycle of rainfall. The authors linked this failure to inadequacies of the convective parameterization schemes. These inadequacies are related to the triggering functions used by each cumulus parameterization, which rely on CAPE, Convective Inhibition, and/or boundary-layer parameters to activate the cumulus scheme. Since these variables depend on available moisture and solar radiation, their values at nighttime tend to inhibit convection at those hours. We therefore hypothesize that, in the case of KFoff_M2m and GFoff_M2m, this influence could come from the outer domain (d03), since the cumulus scheme is turned off for d04. The humidity and temperature fluxes from the boundary layer and radiation parameterization schemes could directly affect the mixing ratios and grid temperature, so precipitation is rapidly inhibited at nighttime.

3.3. Sensitivity Analysis Assessment of Spatial Distribution of Precipitation Using Satellite Images as the Reference Field

The previous results are not conclusive, since the verification method used only compares precipitation values at a set of verification points and does not really verify the spatial distribution of the precipitating clouds. Following the results found so far, one might think that, with any of the KF experiments, more specifically with KF_WSM5 or KF_M2m, the WRF model would represent precipitation more accurately, at least in intensity and at the reference observational stations. However, it is of interest to determine whether the model is able to capture the essence of the precipitating cloud system.

By visually analyzing the numerical outputs of all experiments, it becomes clear that the KF experiments do not represent the deep convection structure accurately compared to other scheme combinations. For instance, when Figures 9(a) and 9(b), which show the total precipitation field for a one-hour cumulative period, are compared against Figure 1, GF_Thompson (Figure 9(b)) adjusts better to the cloud structure shown in the satellite image. While this experiment shows high precipitation values within the cloud system, giving a more realistic appearance of deep convection, KF_WSM5 (Figure 9(a)) spatially overestimates rainfall, showing low precipitation values over large smoothed areas where precipitation was not observed. This happens because the KF scheme uses the Fritsch-Chappell trigger function [48], which relies on the running-mean grid-scale vertical velocity. This variable depends on the average of the instantaneous vertical velocity between two consecutive vertical levels and on the time step used for calling the cumulus parameterization routine. The averaged vertical velocity is then used to compute the parcel temperature at the Lifting Condensation Level; when this temperature exceeds the environmental temperature, the parcel rises and clouds are allowed to form. Therefore, the Fritsch-Chappell trigger mechanism gives smoother fields of convective initiation.

Since this partial result could be criticized for its subjective nature, a spatial verification method was used to assess the model's skill in representing the spatial distribution of rainfall associated with the deep convection cloud structure seen in the satellite images. Figure 10 shows the FSS of all experiments computed at the scale of the domain resolution (1 km grid square) for different precipitation thresholds. The FSS was computed using a precipitation threshold of 0.1 mm in Figure 10(a) and a threshold of 0.5 mm in Figure 10(b). Both figures are valid for 2000 UTC, 1 May 2012, and they represent the general behavior of all verified times.

From Figure 10(a), the GF experiments show more skill than the others for a threshold of 0.1 mm and a scale of one grid square. GF_M2m shows the highest FSS for this time, and it is also the highest for all analyzed times. However, the maximum Fraction Skill Score is found for different GF experiments at different times. These maximum values are more likely to be found in GF experiments combined with complex microphysical schemes (i.e., M2m, MY, and Thompson). GF_M2m and GF_MY (in this order) showed the best skill at the scale of one grid square. The Tiedtke experiments formed the second-best set of combinations in terms of FSS, while the NSAS experiments had the lowest values regardless of the microphysics combination used. However, the NSAS experiments showed the highest FSS when the precipitation threshold was increased from 0.1 to 0.5 mm (Figure 10(b)) and to 1.0 mm (not shown); in both cases, the GF experiments were the second-best performing set. This happens because the NSAS scheme tends to spatially overestimate precipitation for values over 0.1 mm, reproducing wide smoothed fields of precipitation values below 10 mm (see Figure 11).

Also from Figure 11, the NSAS experiments do not realistically represent the shape of the deep convection structure formed that day. This suggests that the high FSS found for these experiments should not be regarded as good model performance in representing the precipitation field associated with the deep convection event. In addition to the FSS, another spatial statistic was used to confirm this suggestion. Figures 12(a) and 12(b) show the interest values at 2000 UTC for thresholds of 0.5 mm and 1.0 mm, respectively. Both figures display low interest values for the NSAS and KF experiments, whereas GF, Tiedtke, and BMJ, in this order, have higher values. For the GF experiments, the highest interest values are found at different times and with different microphysics schemes as the threshold changes. For a threshold of 0.5 mm, GF_Eta has the highest interest (1.56) of all analyzed times, but GF_WSM5 showed the highest values at all times for a threshold of 0.1 mm. To illustrate this, Figure 13 shows interest values for different times (2000 UTC and 2100 UTC) using a threshold of 0.1 mm; the remaining analyzed times exhibited similar behavior (not shown). A precipitation threshold of 0.1 mm seems to be the most appropriate for verification, because all experiments gave, in general, the best FSS results at all verified times when it was used. Also, with this threshold, all experiment precipitation fields matched the reference fields better, revealing the spatial overestimation of the KF and NSAS experiments, as can be inferred from their low interest values in Figures 12 and 13.
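The interest statistic used above belongs to object-based verification (e.g., MODE), whose first step is to identify coherent precipitation objects in each field. As a rough illustration of that identification step only (function names and the 4-connectivity choice are ours, and the full interest computation additionally involves smoothing and fuzzy-logic comparison of object attributes, none of which is shown here):

```python
import numpy as np

def label_objects(field, threshold):
    """Label contiguous precipitation objects exceeding a threshold,
    using a 4-connectivity flood fill. Returns the label array and the
    number of objects found. Illustrative stand-in only, not MODE."""
    mask = field >= threshold
    labels = np.zeros(field.shape, dtype=int)
    count = 0
    for si in range(field.shape[0]):
        for sj in range(field.shape[1]):
            if mask[si, sj] and labels[si, sj] == 0:
                count += 1
                stack = [(si, sj)]
                while stack:
                    i, j = stack.pop()
                    if (0 <= i < field.shape[0] and 0 <= j < field.shape[1]
                            and mask[i, j] and labels[i, j] == 0):
                        labels[i, j] = count
                        # Visit the four orthogonal neighbors
                        stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return labels, count
```

The sensitivity of the interest values to the number of objects, mentioned below, follows directly from this step: raising the threshold splits or removes objects, changing how many candidate pairs enter the matching.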

On the other hand, differences among microphysics schemes are small for experiments using the same cumulus parameterization, whereas noticeable differences can be seen between experiments using different cumulus parameterizations (see Figure 10). Also, among experiments using the same cumulus parameterization, no particular microphysics scheme consistently achieved the highest FSS across all analyzed times (not shown). This makes choosing the best performing experiment somewhat difficult, and more cases need to be added to this study in order to assess statistical significance. For this case, following the FSS calculations, the WRF model shows the most skill with the GF_M2m and GF_MY combinations, whereas, following the interest statistic, the GF_WSM5 simulation seems to match the verification fields better. However, since the interest values considered here are sensitive to the number of objects found in the forecast fields, they are given less weight than the FSS for choosing the best performing combination among the GF experiments. These values include the median of the maximum interest of matching objects found in every forecast field, and the number of such objects may differ among experiments, hence masking the outcome.

Finally, we analyze the sensitivity to the activation (or not) of the cumulus parameterizations. As can be seen from Figures 10(a), 12, and 13, while KFoff_M2m performs better than KF_M2m, the GF experiments show no appreciable difference across all analyzed times. This suggests that there is no added value in activating the GF cumulus parameterization at 1 km resolution. Also, the results from the explicit-precipitation-resolving experiments (KFoff_M2m and GFoff_M2m) show great sensitivity to the precipitation intensity threshold used to compute the FSS. Note that, for a threshold of 0.5 mm in Figure 10(b), the FSS of KFoff_M2m is worse than that of KF_M2m. In general, as this threshold was increased from 0.1 to 1.0 mm, the verification results showed that the experiments with cumulus parameterization activated performed better.

4. Summary and Conclusions

A sensitivity analysis of WRF cumulus and microphysical parameterization schemes was performed for a severe case study of deep convection over the Cuban island. The results showed sensitivity to the verification method used, whether spatial- or point-based. While some cumulus schemes gave a reasonable representation of the convective precipitation structure, others better represented the precipitation intensity reported by the observational network. Hence, the ability of the cumulus and microphysics parameterizations to characterize the precipitation field depends on the approach taken to compare the model output with observations. This does not mean, however, that either the spatial- or the point-based method should be neglected; it rather highlights the importance of considering both.

Moreover, the cumulus parameterizations are more important than the microphysical schemes for this case study. This agrees with Reddy et al. [49], who also stressed the importance of cumulus over microphysics schemes in a sensitivity study of physical parameterizations for a cyclone track and intensity prediction case. However, the explicit precipitation experiments in this case (KFoff_M2m and GFoff_M2m) suggest that the cumulus parameterization activated in the outer domains plays only a minor role for the inner domain at an explicit cloud-resolving resolution; instead, the planetary boundary layer and radiation schemes were probably mainly responsible for the WRF model's failure to represent precipitation during the nighttime hours. Chandrasekar and Balaji [50] pointed out that the best combination of physical options in a WRF sensitivity study was related to the resolution of the chosen domains; even so, it is evident that inadequacies remain in the physical options chosen in this study.

Regarding the present case study, the GF experiments clearly show the best performance regarding the spatial precipitation distribution. Nevertheless, it is not necessary to activate this parameterization for a 1 km resolution domain: no appreciable difference in FSS was found between GF_M2m and GFoff_M2m, so there is no added value in activating the GF scheme at this resolution; in fact, value is lost when the results of the precipitation intensity analysis are taken into account, since GFoff_M2m performed better than GF_M2m. This was not the case for the KF experiments, where the precipitation due to the cumulus parameterization did not seem to be negligible. For a 0.1 mm precipitation threshold, KFoff_M2m showed a better FSS than KF_M2m, whereas verification against point observations suggests that KF_M2m performed better (see Figure 6). Particular attention must be paid to the intensity threshold used to calculate the spatial verification measures, since it was found that, when this threshold is increased, the activation of the KF scheme becomes important for the spatial distribution of precipitation. Further studies are needed to determine this cumulus scheme's sensitivity to the resolution of the integration domain.

From all experiments with parameterized cumulus, the GF experiments using the M2m and MY microphysics schemes were chosen as the most suitable combinations for representing the deep convection cloud system developed in the case study. On the one hand, GF_M2m had a slightly better Fraction Skill Score than GF_MY and was the experiment best correlated with the observational stations. On the other hand, GF_MY had higher interest values, but its FSS and correlation values were smaller than those of GF_M2m (see Figure 10(a)). The small differences in performance between these two experiments may be explained by the fact that M2m includes the intercept parameters of the ice, snow, rain, and graupel size distributions as prognostic variables, while, in MY, these intercept parameters, except for snow, are kept fixed. Morrison et al. [43] state that varying the intercept parameter leads to a more flexible treatment of the particle size distributions, contributing to a more accurate representation of the stratiform and convective precipitation regions in tropical systems. This is why the M2m microphysical scheme gave better results than the MY scheme. Similar results might still be expected when the GF cumulus scheme is deactivated in a cloud-resolving resolution domain, as advised above.

Also, most microphysical schemes that include graupel production (M2m, MY, and Thompson) were more likely to have higher FSS values. This is not enough to prove the hypothesis that including graupel production processes in a microphysical scheme leads to a more accurate simulated precipitation field, because the Lin microphysical scheme also includes graupel formation yet had FSS values similar to those of the schemes that do not (Eta and WSM5); see Figure 10(a). It is noteworthy that the two microphysical schemes selected as the most suitable parameterizations (M2m and MY) are double-moment: they not only compute mixing ratio tendencies but also calculate the number concentration of each hydrometeor species at every time step. With this feature, a more accurate representation of the size distribution of all precipitation-related particles may be expected, which in turn leads to a better representation of the convective precipitation field.

Overall, this study has assessed the WRF representation of the 1 May 2012 deep convection event over Cuba and selected a combination of schemes that characterizes this specific event most appropriately. We conclude that the GF_M2m and GF_MY experiments give the best performance in this case. Also, for domain resolutions of 1 km, the GF cumulus parameterization should be turned off, since no added value was found when activating it at this resolution. For future work, we aim to address the physical mechanisms within these combinations and identify modifications to these schemes that can further improve the representation of deep convection events in Cuba.

Key Points

(i) Sensitivity analysis of WRF cumulus and microphysical schemes for a case study.
(ii) Precipitation is verified against observations and visible satellite images.
(iii) Two simulations of explicit cumulus formation are analyzed.

Appendix

A. Point-to-Point and Spatial Verification Equations

A.1. Verification Measures Used to Rank All the Experiments via the Point-to-Point Method

From (A.1), the BIAS gives the average direction of the mean error (positive values indicate underestimation), the Pearson correlation coefficient (PCC) measures the linear correlation between the forecasts and the observations, and the standard deviation (STD) measures how closely the observation and forecast data agree, taking the mean absolute error between them as the reference value:

$$\mathrm{BIAS}=\frac{1}{N}\sum_{i=1}^{N}\left(O_{i}-F_{i}\right),$$
$$\mathrm{PCC}=\frac{\sum_{i=1}^{N}\left(F_{i}-\overline{F}\right)\left(O_{i}-\overline{O}\right)}{\sqrt{\sum_{i=1}^{N}\left(F_{i}-\overline{F}\right)^{2}\sum_{i=1}^{N}\left(O_{i}-\overline{O}\right)^{2}}},\tag{A.1}$$
$$\mathrm{STD}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\left|O_{i}-F_{i}\right|-\mathrm{MAE}\right)^{2}},$$

where $N$ is the number of meteorological stations, $O_{i}$ is the observed value at the $i$-th station, $F_{i}$ is the forecast value to be verified at the $i$-th station location, and the over-barred values are the means of the $F$ and $O$ populations, respectively. MAE is the mean absolute error, computed following

$$\mathrm{MAE}=\frac{1}{N}\sum_{i=1}^{N}\left|O_{i}-F_{i}\right|.\tag{A.2}$$
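A minimal Python sketch of these point-to-point measures, written from the verbal definitions above (the function name and the exact form of the STD about the MAE are our reading of the text):

```python
import numpy as np

def point_verification(obs, fcst):
    """Point-to-point verification at N station locations.
    Returns BIAS (obs minus forecast, so positive means the model
    underestimates), the Pearson correlation (PCC), the mean absolute
    error (MAE), and the standard deviation of the absolute errors
    taken about the MAE as reference value."""
    obs = np.asarray(obs, dtype=float)
    fcst = np.asarray(fcst, dtype=float)
    bias = np.mean(obs - fcst)
    pcc = np.corrcoef(fcst, obs)[0, 1]
    mae = np.mean(np.abs(obs - fcst))
    std = np.sqrt(np.mean((np.abs(obs - fcst) - mae) ** 2))
    return bias, pcc, mae, std
```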

A.2. Measure Used for the Spatial Verification

The Fraction Skill Score (FSS) is defined by Ebert [51] as

$$\mathrm{FSS}=1-\frac{\frac{1}{N}\sum_{i=1}^{N}\left(P_{f,i}-P_{o,i}\right)^{2}}{\frac{1}{N}\left[\sum_{i=1}^{N}P_{f,i}^{2}+\sum_{i=1}^{N}P_{o,i}^{2}\right]},\tag{A.3}$$

where $P_{f}$ and $P_{o}$ are the probabilities of a forecast and an observation occurring, respectively, within a neighborhood of grid points, and $N$ is the number of grid points. An FSS of 1.0 indicates a perfect forecast, while an FSS of 0.0 indicates no skill at all.
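The FSS computation can be sketched in Python as follows; this is our illustrative implementation (function names are ours), where the neighborhood fractions reduce to binary exceedance fields for the one-grid-square scale (n = 1) used in Figure 10:

```python
import numpy as np

def neighborhood_fraction(binary, n):
    """Fraction of threshold exceedances in the n x n neighborhood of
    each grid point (n odd); for n = 1 this is the binary field itself."""
    pad = n // 2
    padded = np.pad(binary, pad, mode="constant")
    out = np.empty(binary.shape, dtype=float)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

def fss(fcst, obs, threshold, n=1):
    """Fraction Skill Score of a forecast precipitation field against an
    observed field, for a given intensity threshold (mm) and an n x n
    neighborhood scale. FSS = 1 - MSE / MSE_reference."""
    pf = neighborhood_fraction(fcst >= threshold, n)
    po = neighborhood_fraction(obs >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

The sensitivity discussed in Section 3.3 corresponds to varying `threshold` (0.1, 0.5, 1.0 mm) while keeping the neighborhood at one grid square.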

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The static data used for the WRF simulations in this research are freely available on the WRF website at http://www2.mmm.ucar.edu/wrf/users/download/get_sources_wps_geog.html. Lateral boundary conditions data are available for purchase on the ECMWF website at http://www.ecmwf.int/en/research/climate-reanalysis/era-interim and were obtained through the Norwegian Meteorological Institute license. Also, the visible satellite images used in this study were retrieved from http://goes.gsfc.nasa.gov/goeseast-lzw/cuba/vis/. Any other data that might be necessary to reproduce the results of this paper can be obtained by contacting its corresponding author ([email protected]). The authors would like to thank the Norwegian Directorate for Civil Protection (DSB) and the Norwegian Ministry of Foreign Affairs for funding this research work. The funding was through The Future of Climate Extremes in the Caribbean project (XCUBE Project). They are also grateful for the supercomputing resources provided by the Norwegian Metacenter for Computational Science. They express their sincere thanks to NCAR for making their regional climate model freely available and to the Climate Center at INSMET for the observational data provided. Also thanks are due to B.Sc. Adrian Luis Ferrer Hernández, B.Sc. Israel Borrajero Montejo, M.Sc. Maibys Sierra Lorenzo, B.Sc. Elier Pila Fariñas, and B.Sc. Mariam Fonseca Hernández from INSMET for their support. Finally, the authors would like to thank the anonymous reviewers, whose comments and critiques were very helpful.