Earth Perspectives

Transdisciplinarity Enabled

Open Access

Climate information, outlooks, and understanding–where does the IRI stand?

Earth Perspectives: Transdisciplinarity Enabled 2014, 1:20

https://doi.org/10.1186/2194-6434-1-20

Received: 1 October 2013

Accepted: 19 February 2014

Published: 17 June 2014

Abstract

The International Research Institute for Climate and Society (IRI) began providing user-oriented climate information, including outlooks, in the late 1990s. Its climate products are intended to meet the needs of decision makers in various sectors of society such as agriculture, water management, health, disaster management, energy, education and others. They try to link the current state of the science in climate diagnostics and prediction to the dynamically evolving practical needs of users worldwide. Because most users are not climate scientists, the manner in which the information is provided is of paramount importance in order for it to be understandable and actionable. Non-technical language that preserves essential content is required, as well as graphics that are intuitive and largely self-explanatory. The climate information products themselves must be in demand by users, rather than ones that the producers believe would be best. These requirements are consistent with IRI’s mission of improving human welfare, particularly in developing countries where decision makers may not initially know what climate information they need, and how best to use it. This lack of initial understanding requires back-and-forth communication between the producers and users to initiate and sustain uptake and beneficial use of the information. Backed by its climate prediction research, the IRI’s climate information products span time-scales of days to decades. Experience on the statistics of daily weather behavior within seasons has been gleaned, as has the benefits of statistical and dynamical spatial downscaling of predictions. By providing views in a progressive sequence of temporal scales, IRI’s products help demonstrate that preparation for interannual climate variability may be the best preparation for decadal variability and trends related to climate change.

Keywords

Climate; Climate information; Climate forecasts; Climate prediction; Climate research; Climate knowledge; Climate and society; Climate communication; Climate forecast products; Climate risk management

Background

The International Research Institute for Climate and Society (IRI) began providing current and historical climate information, including seasonal climate outlooks, in the late 1990s. During the 1980s and early 1990s, measurable improvements in understanding of the climate system had taken place, in part because of the increased availability of, and improved ability to interpret, observations of the climate system. IRI’s “map room”, available online since 1999 (The IRI’s climate map room), contains observational information about the state of the global oceanic and atmospheric climate both currently and in recent history. Seasonal and monthly time scales are emphasized, and user-friendly graphical formats and useful descriptions were a priority in designing the displays.

Perhaps the most important aspect of an increased understanding of the climate system is the ability to make useful seasonal (e.g., an average or total over 3 months) climate forecasts. Such forecasts are primarily based on the influence of patterns of sea surface temperature (SST) on the large scale atmospheric circulation–particularly the SST in the tropical oceans. The most important oceanic influence on the atmosphere is the El Niño/Southern Oscillation, or ENSO. In 1997, IRI began issuing climate outlooks for the globe, including forecasts for temperature and precipitation for the upcoming two consecutive 3-month periods (Mason et al. 1999). Forecasts were initially issued quarterly, but since 2001 have been issued each month, for all four overlapping 3-month periods between the first and second seasons. All of the IRI’s forecasts, past and present, are available online (The IRI’s seasonal climate forecasts). In 2012, IRI developed a more flexible forecast format that enables users to extract more detailed climate forecast information than before. It also introduced a product providing a descriptive partitioning of climate variability into three complementary time-scales, based on a full century of observations.

In early 2002, IRI began issuing probabilistic forecasts of the state of ENSO itself. These forecasts were deemed important because ENSO has known effects on climate in specific regions during specific seasons of the year. Although these effects are incorporated into the climate forecasts, some users desire knowledge of the ENSO outlook itself. These ENSO forecasts are available online (The IRI’s forecasts of ENSO).

This paper describes several of the main climate products provided to date by IRI, with illustrative examples and explanations of their utility. Some focal points of the research associated with the content of the products are identified, and two products in current development are then highlighted. Finally, some ideas are provided for a path toward better fulfillment of the mission in the years to come regarding new research and the resulting provision of improved climate information for the benefit of societies, especially in developing countries where climate predictability and human need are both greatest (Goddard et al. 2014).

Review

Climate information products, and the research behind them

a. Climate observation maproom

Many climate-sensitive users need to know what the climate is doing right now, or what its state has been in the recent or more distant past. The IRI’s global climate maproom is an extension of its Data Library (The IRI Data Library; Blumenthal et al. 2014), which is a data repository containing over 300 datasets from a variety of earth science disciplines and climate-related topics. The maproom automatically displays the latest updates of many climate fields, while also allowing viewing of past seasons or months of the same variable. Weekly, monthly and seasonal averages of various climate variables are available, such as 2-meter atmospheric temperature, precipitation, sea level pressure, lower and upper atmospheric circulation, and SST. For example, Figure 1 (top) shows the departure from normal (i.e., anomaly) of total precipitation during March 2011, suggesting significant flooding in parts of Indonesia and Southeast Asia in association with the La Niña of 2010–11. Both rain gauges and satellite data contribute to the data used for this anomaly map, using the so-called CAMS-OPI rainfall data from the Climate Prediction Center (CPC) of the National Oceanic and Atmospheric Administration (NOAA) (The CAMS-OPI gauge-plus-satellite rainfall data from the Climate Prediction Center of NOAA). A version of the same map that uses only rain gauges is shown in Figure 1 (bottom), which is generally consistent but leaves large areas devoid of data around the globe, suggesting the value of satellite-derived rainfall measurements in the tropics where errors in satellite-estimated rainfalls are smallest. Some derived variables are also available, such as the standardized precipitation index (SPI) for various time averages. Some variables are expressed in terms of percentile as well as the anomaly itself.
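The anomaly and percentile displays described above reduce to simple comparisons of an observed value against a historical climatology. The following is a minimal sketch of that calculation; the 30-year rainfall record and the observed value are invented for illustration and do not correspond to any IRI dataset:

```python
import numpy as np

def anomaly_and_percentile(value, climatology):
    """Departure from normal and percentile rank of one month's
    rainfall relative to a historical record (illustrative)."""
    clim = np.asarray(climatology, dtype=float)
    anom = value - clim.mean()
    # Percentile: fraction of historical values at or below this one
    pct = 100.0 * np.mean(clim <= value)
    return anom, pct

# Hypothetical 30-year March rainfall record (mm) for one grid point
hist = np.array([80, 95, 110, 70, 120, 90, 100, 85, 105, 115,
                 75, 130, 60, 98, 102, 88, 92, 125, 78, 108,
                 83, 97, 113, 66, 119, 91, 101, 86, 107, 111])
anom, pct = anomaly_and_percentile(150.0, hist)
```

An observation of 150 mm against this record yields a positive anomaly of roughly 53 mm and a percentile rank of 100, i.e., wetter than every year in the record, the kind of signal that would appear as a strong positive anomaly on maps such as Figure 1.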
Specialized map room capabilities include zooming in on any user-selected rectangular region, an animation of the seasonal march of the fraction of annual precipitation, and a point-and-click option to see climatological plots of the seasonal march of temperature, precipitation, and freeze days at the selected location. For some variables, the change from one month to the next is displayed, as seen in Figure 2 for the SST in May and June 1997, and the change from May to June, indicating the rapid development of the 1997–98 El Niño episode whose strong climate anomalies caused enormous societal impacts worldwide.
Figure 1

Example of IRI global rainfall observation maps. Departure from normal of precipitation is shown during March 2011 using (top) a combination of gauge and satellite rainfall data, and (bottom) only gauge rainfall measurements. During this month, severe flooding took place in parts of Indonesia and Southeast Asia.

Figure 2

Example of map of specialized feature of SST. Anomaly of SST is shown for (top) May 1997 and (middle) June 1997, and (bottom) the change from May to June. The 1997–98 El Niño was undergoing rapid development.

b. Seasonal climate forecasts

Climate forecasts for the coming season or seasons farther into the future, if of sufficient quality, are useful to a myriad of sectors of society. The IRI’s seasonal climate forecasts for the globe have been made using mainly a two-tiered process in which first a prediction is made for the SST in the global oceans, and then the SST prediction is used as a driver of a forecast for the atmospheric climate–precipitation and temperature (Mason et al. 1999). A mix of dynamical and statistical models has been used to develop the SST predictions, varying by tropical ocean basin. For the first forecast season, in addition to these evolving SST predictions, the observed SST anomalies from the most recently completed calendar month have been used as another, more conservative, persisted anomalous SST prediction scenario. The strategy of establishing the SST prediction first, and then the climate forecast afterwards, stems from more than a decade of research on how the climate is influenced by SST (Bengtsson et al. 1993). Following the landmark findings of the 1980s about ENSO’s climate effects, additional studies at IRI demonstrated the roles of more regional SST, such as that of the western Indian Ocean on some portions of Africa (Goddard & Graham 1999), and of particular variations of El Niño-related SST (Goddard et al. 2006). The degrading effects of imperfect SST predictions on model-generated climate predictions have also been studied at IRI (Goddard & Mason 2002; Li et al. 2008).

The format of the issued climate forecasts is probabilistic, in which the probabilities for the precipitation or temperature to be above normal, near normal, and below normal, are issued for each location for each forecasted season. The three categories are defined such that each has been observed in one-third of the cases for the given season over a recent 30-year period. This set of three probabilities is intended to provide a general idea of the shift in the expected odds of the temperature or precipitation from the historical climatological distribution. The probabilities are based mainly on a set of ensembles of predictions from several dynamical atmospheric general circulation models (AGCMs). Each AGCM produces its own ensemble of predictions, with each member run having slightly different initial weather conditions, but being influenced by the same SST prediction so that the differing resulting predictions span a distribution that represents the relative probabilities of the range of outcomes. Each AGCM prediction is adjusted for its own systematic biases, based on a collection of hindcasts, which are “predictions” for the given season for many past years for which the observed outcomes already exist. Biases that are adjusted include ones of average prediction value, direction and amount of deviation from the average value, and even the spatial positions of the main features of the prediction. An advanced regression method, canonical correlation analysis (CCA), is the main vehicle used to adjust for model biases. The predictions of all of the ensemble members from the several models are then brought together to form a pool of over 100 members (a multi-model ensemble), so that the prediction probability distribution is fairly well sampled and therefore reasonably representative.
Many studies have demonstrated that multi-model ensembles often yield higher predictive skill and utility than the set of ensemble members from any of the single constituent models (Kharin & Zwiers 2002; Barnston et al. 2003; Palmer et al. 2004; Hagedorn et al. 2005; Tippett & Barnston 2008). At IRI, research has been conducted to determine the best methods to combine the predictions of several models into a single net probability prediction, as for example how to weight the predictions of the constituent models based on their performance in hindcasting over several decades (Rajagopalan et al. 2002; Robertson et al. 2004).
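At its simplest, the tercile probabilities described above can be estimated by counting the fraction of pooled ensemble members falling in each of the three climatologically equiprobable categories. The sketch below illustrates only that counting step, with synthetic data standing in for a 30-year observed record and a bias-adjusted 120-member multi-model ensemble; it omits the CCA bias adjustment and model weighting that IRI applies operationally:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 30-year observed record defines the tercile boundaries,
# so each category is climatologically equally likely (33.3% each)
obs_climatology = rng.normal(loc=100.0, scale=20.0, size=30)
lower, upper = np.percentile(obs_climatology, [100 / 3, 200 / 3])

# Hypothetical pooled multi-model ensemble (>100 members), shifted wet
ensemble = rng.normal(loc=112.0, scale=20.0, size=120)

# Probability of each category = fraction of members falling in it
p_below = np.mean(ensemble < lower)
p_near = np.mean((ensemble >= lower) & (ensemble <= upper))
p_above = np.mean(ensemble > upper)
```

Because the ensemble is shifted toward wetter conditions, the counted probability of the above-normal category exceeds the climatological one-third, which is exactly the kind of tilt in the odds the issued maps communicate.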

In addition to the three-category probability global forecasts, IRI issues forecasts for the probability of the “extremes”, defined as the upper or lower 15% of the distribution. In these forecasts, areas having probabilities of 25-40%, 40-50%, and 50% and greater, as opposed to the historically expected 15%, are indicated on the maps. The extremes forecasts are intended for users whose livelihoods are particularly sensitive to seasonal climate conditions farther from average than indicated by probability forecasts of the upper or lower 33%. When the probability for an extreme is elevated, the probability for the standard category in the same direction from normal is always higher than 33% as well, and usually by a considerable amount.

The spatial resolution of the issued global gridded forecast fields is 2.5° for precipitation and 2.0° for temperature, matching the resolution of the global observed verification data. Those verifying observations come from the Climate Anomaly Monitoring System (CAMS) (Ropelewski et al. 1985) and from the Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) (Xie & Arkin 1997) for temperature and precipitation, respectively. These resolutions do not differ greatly from those of the AGCMs, which have ranged from 2.8° to 1°. The forecasts are issued as a single global map, and also as more detailed maps for the individual continents. Colors are used to indicate the direction of the probability shift from the equal-odds, or climatological, forecast of 33.3% probability for each of the three categories. An example of a probability forecast for Asia for October to December 2011, issued in mid-September (Figure 3), shows anticipated enhanced probabilities for above-normal precipitation in Indonesia and extreme Southeast Asia and northern Australia, and for below-normal precipitation in portions of southwestern Asia and the Middle East.
Figure 3

Forecast for precipitation for Asia for October-December 2011, issued in mid-September. Color shading indicates the probability of the most likely category. The histograms show the probabilities of all three categories in selected regions (see the key). White areas over land indicate no shift in the odds from climatological (33.3%, 33.3%, 33.3%) probabilities. Pink areas indicate where climatologically very low precipitation amounts are expected during the season, and no forecast is given. Enhanced probabilities for above normal precipitation were forecast in the Maritime Continent, and for below normal precipitation in central southwest Asia.

To date, two studies have assessed the skill of IRI’s standard seasonal forecasts, one for the first four years (Goddard et al. 2003) and one for the first 11 years (Barnston et al. 2010). The skill of the extremes forecasts over 12 years has also been examined (Barnston & Mason 2011). All three evaluations suggest significant predictive skill, and thus potential utility, of the forecasts for specific regions, each region having its own seasons of measured usable skill. These “forecasts of opportunity” are known and are built into the forecast probabilities such that in places/seasons known to have little or no skill, the climatology forecast (33.3% probability for each of the categories) is issued, shown as white areas on the forecast maps. In some regions and seasons, a useful forecast is possible primarily when ENSO is not in a neutral state—i.e., when an El Niño or La Niña is expected. During ENSO-neutral times the climatology forecast may be issued at the same location and season, or in some cases an enhanced probability for the near-normal category may be issued. It is worth noting that forecasts favoring the near normal category historically have not been as skillful as forecasts favoring below or above normal (Van den Dool & Toth 1991), likely due to the fact that the strongest probability forecasts occur in the form of opposing directions of deviation from climatological probabilities (of 33.3%) between the below and above normal categories, while the near normal category usually has a relatively weaker deviation. Temperature has been somewhat more skillfully predicted than precipitation, both because its anomaly areas tend to be more coherent (less “noisy”) and more closely related to associated large-scale atmospheric circulation patterns, and also because it has been more predictably influenced by trends—e.g. a slow warming related to climate change.
This last factor makes possible additional skill because the three categories are defined on the basis of a past 30-year period, so that trends show up as predictable biases in one direction with respect to a climatology based on these past observations.

The IRI keeps a running evaluation of its probabilistic climate forecasts, including probabilistic accuracy, skill, and a number of other fundamental attributes, using a variety of verification measures. This set of forecast evaluations is available online (The IRI’s seasonal forecast verification site), accompanied by detailed definitions and explained meanings of each verification measure. The range of verification measures, and their descriptions, has come partly as a result of extensive research at IRI on the nature, and advantages and disadvantages of each measure (Mason 2004; Mason 2008). The probabilistic verification measure known as relative operating characteristics, or ROC (Mason 1982), which deals with “hits” versus “false alarms” for one of the forecast categories, underwent particularly intensive research at IRI (Mason & Graham 1999), where an extension of ROC applicable to all forecast categories together rather than one at a time was developed (Mason & Weigel 2009; Weigel & Mason 2011).

As an example of one verification measure, the geographical distribution of the likelihood score, defined as the geometric average of the forecast probability assigned to the correct (later observed) category, is shown in Figure 4 for temperature forecasts issued in mid-December for the January-March season. Higher probabilities for the correct category are seen in the tropics, as is generally the case for all of the verification measures for both temperature and precipitation. An important diagnostic for probability forecasts, known as reliability (Murphy 1973; Wilks et al. 2006), evaluates the correspondence between the full range of issued forecast probabilities and their associated relative frequency of observed occurrence, and shows forecast characteristics such as probabilistic bias, forecast over-(under-) confidence, and forecast sharpness. A probability forecast is said to be reliable when, for example, if all of the cases when the probability of above normal precipitation is forecast to be 50% are collected, the observations are found to be in the above normal category 50% of the time.
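The likelihood score just defined can be computed in a few lines: collect the probability each forecast assigned to the category that was later observed, and take the geometric mean. The four forecasts below are invented for illustration; a score above 0.333 beats a perpetual climatology forecast, consistent with the interpretation given for Figure 4:

```python
import numpy as np

def likelihood_score(forecast_probs, observed_category):
    """Geometric mean of the probability assigned to the category
    that was later observed (0=below, 1=near, 2=above normal)."""
    probs = np.asarray(forecast_probs, dtype=float)
    obs = np.asarray(observed_category)
    # Pick out, for each forecast, the probability of the observed category
    p_correct = probs[np.arange(len(obs)), obs]
    return np.exp(np.mean(np.log(p_correct)))

# Four hypothetical (below, near, above) forecasts and their outcomes
fcsts = [[0.25, 0.35, 0.40],
         [0.50, 0.30, 0.20],
         [0.333, 0.334, 0.333],
         [0.20, 0.30, 0.50]]
score = likelihood_score(fcsts, [2, 0, 1, 2])  # about 0.43 here
```

The geometric (rather than arithmetic) mean heavily penalizes any forecast that assigned a very low probability to what actually happened, which is why a score of 0.4 or more indicates genuinely useful probabilistic discrimination.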
Figure 4

Verification of IRI’s forecasts for January-March temperature from 1998 to 2013. Here the likelihood verification measure is shown, which is the geometric average of the probabilities issued for the later-determined correct category. Scores greater than 0.333 indicate positive skill, and scores of 0.4 or more suggest useful levels of skill for many applications.

Such a reliability analysis is shown in Figure 5 for precipitation forecasts, averaged over the globe for all seasons and made 0.5 months ahead of the beginning of the first month. Since the orange and green lines (representing reliability for the below-normal and above-normal categories, respectively) are close to the dotted diagonal line, favorable reliability is concluded. Forecast sharpness, describing the extent to which the forecasts deviate from climatological forecasts of 33.3%, is shown in the histogram below the main plot. The sharpness is seen to be small, indicating a relative dearth of probability forecasts with strong shifts from climatological odds. While this lack of sharpness may not give users the degree of confidence necessary to make some important decisions, the fact that the reliability lines are close to the diagonal line indicates that such weak probability shifts are appropriate and may be all that is possible, given the inherent uncertainty in precipitation forecasts in the physical ocean–atmosphere system and our current state-of-the-science in climate modeling. A good reliability record implies that when a strong shift in the probabilities does occur, it carries credibility. Because predictability is greater in the Tropics than over the globe as a whole, reliability for the Tropics alone shows greater forecast sharpness than that seen in Figure 5.
Figure 5

Reliability analysis for 0.5-month lead forecasts of global precipitation. Forecast data for all seasons are used during 1997–2013. The green curve pertains to forecast probabilities for above-normal precipitation, the orange curve forecast probabilities for below-normal precipitation, and the gray curve for near-normal precipitation. For above and below normal, least squares regression lines are shown, weighted by the sample sizes represented by each point. Points representing probability intervals that are forecast in a relatively greater proportion of the time are shown using larger symbols. The diagonal y = x line represents perfect reliability. The colored marks on the axes show the overall means of the forecast probabilities (x-axis) or observed relative frequencies (y-axis). The lower panel shows the frequency with which each interval of probability was forecast, where interval widths are 0.05 (e.g., 0.175–0.225 is labeled as 0.20), except that the climatological (0.333) probability is also explicitly shown.
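The core of a reliability analysis such as Figure 5 is a binning operation: group forecasts by issued probability and compare each group's mean probability with the observed relative frequency of the event. The sketch below applies that operation to synthetic forecasts constructed to be perfectly reliable, so the curve should hug the diagonal; it is an illustration of the diagnostic, not IRI's verification code:

```python
import numpy as np

def reliability_curve(forecast_probs, outcomes, bin_width=0.05):
    """Bin issued probabilities for one category and compute the
    observed relative frequency in each bin (illustrative sketch)."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)  # 1 if the category occurred
    bins = np.arange(0.0, 1.0 + bin_width, bin_width)
    idx = np.digitize(p, bins) - 1
    centers, obs_freq, counts = [], [], []
    for b in range(len(bins) - 1):
        mask = idx == b
        if mask.any():
            centers.append(p[mask].mean())   # mean issued probability
            obs_freq.append(o[mask].mean())  # observed relative frequency
            counts.append(int(mask.sum()))   # sharpness histogram counts
    return np.array(centers), np.array(obs_freq), np.array(counts)

# Synthetic perfectly reliable forecasts: outcomes occur with frequency p
rng = np.random.default_rng(1)
p = rng.uniform(0.1, 0.6, size=5000)
o = rng.uniform(size=5000) < p
centers, freq, n = reliability_curve(p, o)
```

The `counts` array is exactly the sharpness histogram shown in the lower panel of Figure 5: reliable but unsharp forecasts concentrate their counts near the climatological probability of 0.333.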

One reasonably might ask whether the quality of IRI’s forecasts is among the best available. IRI is one of a moderately large number of sources of today’s state-of-the-art global seasonal climate forecasts. Most of the other centers are funded by, and principally serve, an individual country or region using their own single global coupled ocean–atmosphere model; they include centers in Australia, Brazil, Canada, China, Europe (the European Centre), France, Japan, Korea, Russia, the UK, and the United States, among a few others. Objective global model climate predictions from all of these sources are collected into a grand multi-model ensemble prediction, disseminated quarterly by the World Meteorological Organization (WMO) in a document called the Global Seasonal Climate Update (GSCU). The IRI’s forecasts are not included in the GSCU for political reasons: although IRI is located in the US, its forecasts are not the official US product; the predictions from the model run at NOAA/CPC represent the official US contribution. While the IRI’s forecasts are not an official product of any one country, they are nonetheless well known to many around the globe. The comparative quality of the forecasts from all of the above-mentioned countries (and from IRI) has never been systematically examined. However, individual model seasonal hindcast skill maps provided in the GSCU document reveal that while the different models may have slightly differing overall average skills, the models vary most widely regarding which regions, and during which seasons, they provide the most useful predictive skill. Many of the models tend to have highest skill in those regions and seasons of the expected influences of ENSO, as they are capable of reproducing the observed large-scale responses to anomalous tropical Pacific SST. This is equally true of IRI’s forecasts.
However, there is indirect evidence from a number of independent studies that the predictions from the model at the European Centre for Medium-Range Weather Forecasts (ECMWF), a multi-national center with very high performance computing facilities, may have the highest average predictive skill of any mentioned above, by a slight but noticeable margin. Still, the predictions from the models of most of the other countries, and the forecasts from IRI, are regarded as state-of-the-art in that they deliver reasonably competitive skill when averaged over all regions and seasons.

Together with the provision of climate information since the late 1990s, IRI has actively conducted research focusing on specific aspects of climate predictability, toward enabling improvements to the seasonal forecasts. A set of experiments was run using a state-of-the-science atmospheric model—the European Center/Hamburg (ECHAM4.5) model from the Max Planck Institute in Hamburg, Germany—in which the climate was hindcast over several past decades using both observed and predicted SST to influence the hindcasts during those decades. One purpose of the experiments was to determine the benefits to the probability forecasts resulting from the number of ensemble members used. (Recall that today’s climate predictions come from prediction models run many times, where each run, producing one prediction ensemble member, is influenced by the same underlying SST prediction but given slightly differing initial atmospheric weather conditions—the positions and strengths of the weather features.) One outcome of these experiments, also confirmed at other major global climate producing centers, is that the number of ensemble members run affects the precision of the final predicted probability distribution, such that high quality predictions require large ensemble sizes. Another benefit of the experiment was improved definitions of the relationships between SST anomalies and their consequent climate anomalies. Better knowledge of SST-climate relationships increases understanding of predicted climate anomaly patterns and also makes possible attribution of some of the observed climate anomalies (Barnston et al. 2005). The same set of experiments also permitted evaluations of the effects of different SST prediction methods on the skill of the resulting climate predictions (Li et al. 2008), and the importance of the spatial resolution (the smallness of the model grid squares) on prediction quality.

IRI conducted research toward developing a climate prediction model that predicts SST and the climate simultaneously—a single-tier design as opposed to the predominantly two-tier design used in IRI’s issued climate forecasts (DeWitt 2005). The effort culminated in a well performing model for both climate and SST (e.g., ENSO) predictions (Tippett et al. 2012; Barnston et al. 2012), and one that was later recruited to participate in a National Multi-model Ensemble Experiment along with several other North American single-tier climate models (Kirtman et al. 2014).

Climate prediction, unlike many problems in astronomy or solid body physics, does not involve one correct prediction, but instead a wide envelope of possible outcomes. Although there is just one eventual observation, it cannot be predicted accurately in advance due to the inherently chaotic, nonlinear nature of the ocean–atmosphere system. This probabilistic aspect creates challenges for many forecast users, as many of the resulting forecasts involve only a slight to moderate tilt of the odds away from the historical, or climatological, probability distribution that exists by default without any current forecast information. Making the best use of probabilistic forecasts has been a focus of IRI’s research. One aspect of this issue is the presentation of the forecast information, given a choice of formats with which to represent probability forecasts over a spatial domain. One option is to provide three maps for each forecast, each showing the spatial distribution of the probability for one of the three categories (below-, near-, and above-normal). Another option, and the one that IRI adopted, is to present a single map with color shading indicating which category is most likely, as well as the probability of that dominant category (Figure 3). Probability histograms are inserted at some locations to help communicate the three probability levels. Experiments related to choices in forecast map presentation have indicated that for any map format, understanding is not easy for some users, and there is a learning curve requiring careful study of the map legend and the provided explanation (Ishikawa et al. 2011). It remains a challenge to present probabilistic information in a simple, user-oriented format, given that some users lack the time or patience to learn to assimilate information that initially seems overly complex or confusing.
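The single-map format IRI adopted amounts to collapsing the three probability fields into one: at each grid point, find the most likely category and shade the map by that category and its probability. A small sketch over an invented 2×2 grid of (below, near, above) probabilities:

```python
import numpy as np

# Hypothetical 2x2 grid of (below, near, above) category probabilities
grid = np.array([[[0.50, 0.30, 0.20], [0.25, 0.35, 0.40]],
                 [[0.333, 0.334, 0.333], [0.15, 0.25, 0.60]]])

dominant = grid.argmax(axis=-1)        # index of the most likely category
dominant_prob = grid.max(axis=-1)      # its probability (drives the shading)
labels = np.array(["below", "near", "above"])[dominant]
```

Grid points where all three probabilities remain essentially at 0.333, like the near-climatology point above, correspond to the white (no tilt in the odds) areas on the issued maps, while the 0.60 above-normal point would receive strong shading.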
Thus, as one component of partnering with users and their representative organizations, capacity development workshops are designed to methodically educate and train both weather service personnel and end users who benefit from knowing how best to use climate information (Mantilla et al. 2014). A free and easily downloadable climate prediction tool developed at IRI, called the Climate Predictability Tool (CPT), allows users to apply sophisticated multivariate linear regression methods (e.g., CCA or principal components regression) to get downscaled (localized) prediction probabilities and maps (Korecha & Barnston 2007; Recalde-Coronel et al. 2014) without having to comprehensively learn the mathematics or create their own software. In the particularly fruitful case of the South African Weather Service, a sustained collaborative relationship with IRI has included not only the use and uptake of CPT, but sharing of statistical and dynamical forecast methodologies and even specific AGCMs, and adoption of best practices in the scientific and operational aspects of climate prediction (Landman 2014). Such beneficial linkage to meteorological centers in other countries is being built on a more global scale through IRI’s contributions to the WMO’s recent programmatic effort in the form of the Global Framework for Climate Services (GFCS) (Hewitt et al. 2012), whereby prediction tools such as CPT are being provided along with the appropriate training. CPT has now been added as a prediction tool in a sizable list of developing countries.

A more substantive aspect of the probabilistic nature of climate forecast comes with the use of forecast information in decision making in sectors of society such as agriculture, water management, health, energy and disasters (e.g., floods, hurricanes, droughts, extreme cold or heat). Making decisions under conditions of forecast uncertainty involves decision theory and aspects of higher mathematics for which many users lack tools for direct use. IRI personnel have had success in partnering with users and their representatives in developing countries in Africa, Asia and Latin America to help make optimum use of probabilistic climate forecasts for risk management and decision making (Goddard et al. 2010; Lyon et al. 2014). Some examples of these beneficial applications are highlighted in subsection c below.

Recently, as an extension of the 3-category probability forecast format, IRI developed a more flexible forecast format that enables users to extract information for the parts of the forecast distribution of greatest interest to them, such as the probability of extremely dry conditions (e.g., the driest 10%), of falling within the wettest 40% of the distribution, or of any other portion. Additionally, the forecast probability distribution for a specific location is given in terms of temperature or precipitation in its own physical units, rather than as probabilities of three categories whose boundaries are not directly indicated and must be consulted on a separate web page. An example of the flexible format version of a temperature forecast is shown in Figure 6, where the map shows the probability of exceeding the climatological 50th percentile, or median, and the two insets show the forecast probability distribution (one as an actual probability density function and the other as a cumulative distribution function) in °C for a location in northern Brazil where a locally strong shift toward above-normal temperature is forecast. The map can be controlled to show the probability of exceeding (or not exceeding) a large set of user-selected percentiles. Providing flexibility in the forecast format allows users to glean from the forecast what matters most to them, which varies widely depending on the application.
Figure 6

Example of a flexible format forecast for temperature. Here the forecast was for the February-April season, issued in mid-January 2013. In the top panel, the contours indicate the climatological (expected) temperature for the season, and the color shading shows the probability of exceeding that temperature. The bottom panels show the forecast probability distribution (green) along with the climatological distribution (black) for the grid square located at 5°S, 57°W in northern Brazil where a strong shift of the odds toward above-normal temperature was forecast. The lower left panel shows the cumulative forecast probability distribution, and the lower right panel shows the probability density function itself.
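The arithmetic behind such an exceedance map can be sketched as follows. The Gaussian forecast distribution and all names here are illustrative assumptions, not IRI's operational method; the idea is only that, given a forecast distribution, the probability of exceeding any climatological percentile follows directly.

```python
import math

def exceedance_probability(fcst_mean, fcst_std, clim_values, percentile):
    """Probability that the forecast variable exceeds the climatological
    `percentile`, assuming a Gaussian forecast distribution (an
    illustrative simplification)."""
    clim_sorted = sorted(clim_values)
    # empirical threshold for the requested climatological percentile
    idx = round(percentile / 100 * (len(clim_sorted) - 1))
    threshold = clim_sorted[idx]
    z = (threshold - fcst_mean) / fcst_std
    # standard-normal survival function via the complementary error function
    prob = 0.5 * math.erfc(z / math.sqrt(2))
    return prob, threshold
```

For a forecast shifted one standard deviation warm relative to the climatological median, this returns roughly an 84% chance of exceeding the median — the kind of "strong shift of the odds" shown for the northern Brazil grid square.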

For a given region and season, the single most important determinant of the seasonal climate outlook is the expected ENSO state during the targeted season. Therefore, many users are interested in an outlook for ENSO itself. Since the early 2000s the IRI has provided information on the ENSO state, including recent observations and forecasts. This information has been encapsulated into a graphical product called the ENSO Quick Look, which itself contains forecast summaries and a display of current predictions of dynamical and statistical models whose skills have been evaluated (Tippett et al. 2012; Barnston et al. 2012). Figure 7 illustrates an ENSO Quick Look issued near the end of 2012, following a period when a possible weak El Niño failed to materialize and neutral conditions were then expected. A technical narrative about the ENSO condition and outlook is also issued each month.
Figure 7

Example of an ENSO Quick Look issued at end of December 2012. The top left and top right panels show forecast probabilities of El Niño, neutral and La Niña conditions through the first half of 2013 made by humans and objectively by models, respectively. The bottom left panel shows a multidecadal ENSO history, and bottom right shows the IRI/CPC ENSO prediction plume, containing model predictions for the Niño3.4 SST region as indicated directly from a set of models.

Beginning in late 2011, the Climate Prediction Center (CPC) of the National Oceanic and Atmospheric Administration (NOAA) and IRI began jointly sharing some of the monthly production tasks pertaining to ENSO diagnostic and forecast products, including the ENSO Diagnostic Discussion and a long-lead ENSO probability outlook based on human judgment (Figure 7, lower left panel) and on a set of objective model predictions summarized on the IRI/CPC ENSO prediction plume (lower right panel). Accordingly, the plume diagram and the ENSO Diagnostic Discussion now bear the names of both institutions.

c. Specialized or tailored forecasts

Many decision makers need outlooks for aspects of climate different from seasonal total precipitation and average temperature for a large grid square such as one of those used in IRI’s global climate forecasts. Perhaps the most common request is for forecasts for shorter periods embedded within the 3-month season, such as the individual months or even week-to-week variations. Another common need is for forecasts for a particular location, whose climate may differ from that of the embedding grid square due to local geographical features such as mountains or bodies of water. Finally, some users want to forecast a non-climate variable directly, such as crop yield, without necessarily first predicting rainfall and then using the rainfall forecast as a predictor of the crop yield. IRI has carried out research on each of the above problems, which are generally considered forms of downscaling.

Spatial downscaling can refer simply to calibrating the climate forecast in a grid square to an exact location within the square by considering the climatic difference between the location and the average over the square. A linear calibration might be all that is necessary to obtain the local forecast. A more complex version of spatial downscaling is required when the climatic difference between the location and the average over the grid square is caused by significant local features such as the terrain or land surface conditions. In this case, a simple calibration may not suffice, because the direction of the climate anomaly in the grid square may not carry over to the exact location of interest. An example would be when a grid square is located mainly over the windward part of a mountain range, but we are interested in forecasting for a town on the inland, or leeward, side of the range—a location whose rainfall anomaly tendency may be in the opposite sense to that of the average over the grid square. Resolving the often opposing anomalies may be done either statistically, as tested in South Africa (Landman et al. 2009), or through use of a regional dynamical model, as was done for semi-arid northeast Brazil in an IRI project in the early 2000s (Sun et al. 2005), and likewise in recent work focusing both on that region and on Chile, in the context of water management issues (Robertson et al. 2014).
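The simple linear calibration described above can be sketched as an ordinary least-squares fit of historical station values on co-located grid-square values; the function and variable names here are hypothetical.

```python
def linear_calibration(grid_hist, station_hist):
    """Fit station = a + b * grid by ordinary least squares over a
    historical record, and return a function that maps a grid-square
    forecast to a local (station) forecast."""
    n = len(grid_hist)
    mx = sum(grid_hist) / n
    my = sum(station_hist) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(grid_hist, station_hist))
    var = sum((x - mx) ** 2 for x in grid_hist)
    b = cov / var
    a = my - b * mx
    return lambda grid_fcst: a + b * grid_fcst
```

In the leeward-town example, the fitted slope could even come out negative; as the text notes, such opposing anomalies are often better resolved statistically with richer predictors or with a regional dynamical model than with this simple regression.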

Forecasting for portions of a season, often known as temporal downscaling, is challenging because the accuracy of daily weather forecasts extends only to about 10 to 14 days in advance. Meanwhile, the benefit of seasonal climate forecasts relies on averaging over a large amount of time in order that the weak but consistent influence of SST patterns may stand out above the “noise” of the unpredictable daily weather fluctuations. However, it is possible to identify correlations between seasonal climate and the climate during subseasonal periods, and even the character of the day-to-day weather. A common example of the latter that is of interest to the agricultural community is the occurrence of “dry spells” within a season, where a dry spell is defined as a sequence of at least a certain number of days (e.g., 4 or 5) without any meaningful rainfall (e.g., 1 mm or more). Although the number of dry spells during a season is expected to be inversely related to the seasonal rainfall total, the relationship is not always straightforward, because the degree and typical time scale of subseasonal rainfall variation becomes important. In addition to dry spell occurrences, the number of non-dry days over the course of a season may be of greater interest than the seasonal rainfall total (Robertson et al. 2009).
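The dry-spell definition above translates directly into a small counting routine; this is an illustrative sketch using the example thresholds from the text (at least 5 consecutive days with less than 1 mm of rain).

```python
def count_dry_spells(daily_rain_mm, min_len=5, wet_threshold=1.0):
    """Count runs of at least `min_len` consecutive days whose rainfall
    is below `wet_threshold` mm (i.e., no meaningful rainfall)."""
    count = 0
    run = 0
    for rain in daily_rain_mm:
        if rain < wet_threshold:
            run += 1
        else:
            if run >= min_len:
                count += 1
            run = 0
    if run >= min_len:  # a dry spell may end with the season
        count += 1
    return count
```

Two seasons with the same rainfall total can differ sharply in this count, which is why the number and timing of dry spells can matter more to agriculture than the seasonal total itself.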

Rainfall or temperature forecasts may be desired in order to prepare for something of societal significance such as the threat of a malaria epidemic in Botswana (Thomson et al. 2006), meningitis in western Africa (García-Pando et al. 2014), corn yield in Kenya (Hansen et al. 2009) or some countries in South America, or water resources in the Philippines (Brown & Carriquiry 2007; Lyon & Camargo 2009). Stakeholders may use the climate forecast to determine the consequent probabilities of outcomes within their application, and make their decisions accordingly. IRI has striven to increase awareness among potential beneficiaries that climate information is useful to their welfare and success, and to make the relevant climate information easily accessible, understandable and usable to them (Hansen et al. 2014; Dinku et al. 2014). A particularly good example of this building of awareness and demand has been IRI’s partnership with the International Red Cross/Red Crescent organizations (Coughlan De Perez and Mason 2014).

Interestingly, some IRI research has demonstrated that using SST anomaly patterns to predict rainfall, and then predicting an applied variable (such as crop yield or disease epidemic) from the rainfall, sometimes does not predict the applied variable as well as predicting it directly from SST without considering rainfall as an intermediary. Such bypassing of rainfall led to better predictions of wheat yields in Australia (Hansen et al. 2004), even though rainfall is likely the most important mediating variable. A similar finding emerged in the prediction of wildfire activity in Indonesia (Ceccato et al. 2014), and of maize yield in Colombia in a recent set of experiments. Bypassing rainfall in the prediction chain may be beneficial because rainfall is a “noisier” field than yield, or because the rainfall is expressed as a seasonal total rather than at the subseasonal temporal scale of greater importance to crop yields or fire activity. Another possible explanation is that the final result depends on climate variables in addition to precipitation (e.g., temperature and/or humidity), whose most likely values are implicit in the SST patterns. Bypassing intermediate variables in a multi-step prediction process simplifies the prediction task, and is appealing when the ultimate goal is the final benefit to society rather than predicting the climate variables per se.
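The "noisier intermediary" argument can be demonstrated with synthetic data: when rainfall is a noisy response to the same SST signal that drives yield, the SST–yield correlation can exceed the rainfall–yield correlation. The variances, sample size, and variable names below are invented for illustration and do not reproduce any of the cited studies.

```python
import math
import random

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

random.seed(0)
# synthetic record: yield and rainfall share one SST-driven signal,
# but rainfall carries much more unrelated noise
sst = [random.gauss(0, 1) for _ in range(500)]
rain = [s + random.gauss(0, 1.5) for s in sst]
yld = [0.8 * s + random.gauss(0, 0.5) for s in sst]

direct_skill = corr(sst, yld)     # predict yield straight from SST
mediated_skill = corr(rain, yld)  # predict yield via the noisy rainfall
```

Under these assumed variances the direct SST–yield correlation is clearly the larger of the two, mirroring the empirical finding that bypassing rainfall can improve the final prediction.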

d. Toward decadal and longer forecasts

Forecasts of seasonal or subseasonal temperature and precipitation, while important to many users, represent only part of the range of time scales that are in demand. With increasing awareness and concern about climate change, many policy makers are interested in outlooks going out to 5 or 10 years and longer (Meehl et al. 2009; Goddard et al. 2012). The long-term projections from a large set of climate models have been analyzed, consolidated and published by the Intergovernmental Panel on Climate Change (IPCC) (Solomon et al. 2007), given several specific future greenhouse gas scenarios. These projections, accompanied by their estimated uncertainties, are available in print and on the web. However, many users cannot adequately understand the projections, and particularly the implications of their uncertainties, without taking time to study them extensively—time they may not have.

The IRI has involved itself in several research projects relating to climate prediction on decadal and longer time scales. While these embryonic study areas carry the challenge of a relative lack of data with which to verify the prediction models, the observational record has shown decadal variability over the last century and also a clear warming trend, especially when averaged over the globe, between the 1970s and the 2010s. Best practices in establishing probabilistic predictions on the decadal time scale and even for end-of-century climate, and methods to verify such predictions, have been at the center of IRI’s research on these longer time scales.

The IRI does not attempt to translate or simplify the IPCC projections for public consumption, in part because it has another message about climate change projections that it wishes to emphasize. That message pertains to a preferred way to view climate change projections in relation to year-to-year climate variations—namely, that these shorter-term variations (climate variability) in most cases are expected to continue to overwhelm the slower, longer-term changes into the coming several decades. The relatively greater magnitude of interannual variability as compared with climate change has prevailed over the past century and even the past several decades in which warming has been observed, and is expected to continue into the future even if the rate of climate change becomes greater than it has been in the last several decades.

To demonstrate the relative magnitudes of the variability on interannual, decadal, and trend-like time scales, the IRI has developed a page on the web called the time scales map room (The IRI’s time scales map room). The total observed variability is partitioned into the three general frequency bands based on the data during the 20th century, using digital filtering methods, and the relative contributions from each time scale are shown (Greene et al. 2011). The analyses are useful because the time scale(s) of primary concern varies by application, and further by context. For example, the risk of crop loss due to insufficient seasonal rainfall is generally elevated during years having below-average rainfall due to climate controls varying on a year-to-year basis, such as ENSO or the tropical SST in any ocean basin. However, decades that are themselves drier than normal may also have noticeable additional impact. An understanding of how variations on different time scales combine to produce the observed climate histories can help in planning strategies for adaptation or risk mitigation. For example, Figure 8 shows the global distribution of the proportional contributions of interannual, decadal and trend variability to the total variability of temperature for the January-March season. Focusing on a particular location, such as central Colombia, Figure 9 shows the filtered time series of the temperature over the 20th century for each of the three time scales and indicates the approximate percentages of variance contained in each. In the case of central Colombia there is substantial trend variability toward the end of the century and some decadal variability between about 1950 and 1985, in addition to the more dominant interannual variability. Figure 8 shows that although some locations have larger decadal and trend components of variability than seen in this location, in many locations interannual variability dominates more strongly than seen here.
Additionally, the interannual scale generally dominates to a greater extent for precipitation than for temperature, because trends are weaker for precipitation in most cases (not shown).
Figure 8

Example from the time scales maproom for temperature. The top, middle and bottom panels show the geographical distribution of the proportion of the total temperature variance through the 20th century coming from interannual, decadal, and trend variability, respectively, for the January-March season.

Figure 9

Example of time scale partitioning for a specific location and season. Here the temperature time series for the January-March season is partitioned for a grid point in central Colombia among (top left) interannual, (top right) decadal and (bottom) trend-related variance. The three time scales account for 63%, 17% and 20% of the variability of the total time series, respectively. Clicking on this location in the map shown in Figure 8 triggers the display of these three plots. The total (raw) time series is shown in the panel showing the trend.
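A toy stand-in for the digital filtering of Greene et al. (2011) can convey the idea of such a partition: fit a linear trend, take a running mean of the detrended series as the decadal component, and call the remainder interannual. The window length and the use of a linear trend are simplifying assumptions of this sketch, and the resulting variance fractions need not sum exactly to one because the components are not strictly orthogonal.

```python
def partition_time_scales(annual_series, decadal_window=11):
    """Crude partition of an annual series into trend, decadal, and
    interannual components, returning each component's share of the
    total variance."""
    n = len(annual_series)
    t = list(range(n))
    t_mean = sum(t) / n
    y_mean = sum(annual_series) / n
    # linear trend by ordinary least squares
    slope = (sum((ti - t_mean) * (y - y_mean) for ti, y in zip(t, annual_series))
             / sum((ti - t_mean) ** 2 for ti in t))
    intercept = y_mean - slope * t_mean
    trend = [intercept + slope * ti for ti in t]
    detrended = [y - tr for y, tr in zip(annual_series, trend)]
    # decadal component: centered running mean of the detrended series
    half = decadal_window // 2
    decadal = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        decadal.append(sum(detrended[lo:hi]) / (hi - lo))
    interannual = [d - dc for d, dc in zip(detrended, decadal)]

    def ss(xs):  # sum of squared deviations from the mean
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    total = ss(annual_series)
    return {name: ss(comp) / total
            for name, comp in (("interannual", interannual),
                               ("decadal", decadal),
                               ("trend", trend))}
```

Applied to a series like the central Colombia temperatures, such a routine would yield variance shares analogous to the 63%/17%/20% split quoted in the caption above.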

A logical conclusion suggested by the relative dominance of the interannual time scale is that while decadal and trend variability may be important, particularly cumulatively in the case of consistent upward trends (e.g., with continued warming 20 or more years into the future), being prepared for the year-to-year ups and downs of temperature or precipitation remains the dominant concern for many applications. One impact of decadal variability and/or trends is that extremes in one direction from the average become more likely than extremes in the opposite direction. Being prepared for extremes in either direction is always prudent, with or without decadal variability and trends related to climate change, and it becomes all the more important when those two slower components make extremes in one direction more probable. Seasonal climate predictions ideally incorporate all three time scales together. The fact that most of the IRI’s seasonal forecasts show much larger areas having enhanced probabilities for above-normal than below-normal temperature indicates that upward trends are indeed represented, given that the boundaries between the near-normal category and each of the two other categories are based on a recent but already-completed 30-year period (and are thus slightly too cold to represent today’s climate). Because interannual variability is the predominant time scale, and because it is essentially unpredictable beyond about one year in advance, climate forecasts for more than a year into the future would consist of predictable decadal variability and trends. The resulting forecasts, besides having variable expected skill (Meehl et al. 2009), might often be too weak to be of interest to most users, and therefore they are not usually produced. The predictions of the IPCC target much longer averaging periods, such as the 30 years at the end of the 21st century, and become more meaningful when accompanied by their substantial uncertainty estimates.
Even in the case of very long period averaging, the unpredictable interannual variability would inevitably cause temperatures for a given season at the end of the century to vary considerably among adjacent years, and such interannual variability is as important for very long-term planning (e.g., urban development) as the warmer average climate.
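The point about category boundaries drawn from an already-completed base period can be illustrated with a synthetic warming record: relative to terciles computed from a fixed earlier 30-year window, recent years fall in the "above" category far more often than the climatological one-third. The trend rate and noise level below are invented for illustration.

```python
import random

def tercile_boundaries(values):
    """Empirical lower and upper tercile boundaries of a base period."""
    s = sorted(values)
    return s[len(s) // 3], s[(2 * len(s)) // 3]

random.seed(1)
# synthetic 100-year record: steady warming plus interannual noise
temps = [0.03 * year + random.gauss(0, 0.5) for year in range(100)]
base = temps[50:80]              # a fixed, earlier 30-year base period
_, upper = tercile_boundaries(base)
recent = temps[80:]              # the most recent 20 years
frac_above = sum(t > upper for t in recent) / len(recent)
```

With warming, `frac_above` comes out well above the climatological 1/3, which is exactly why forecasts verified against a slightly-too-cold base period show expanded areas of enhanced above-normal probabilities.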

Conclusion: Our achievements, and the way forward

The IRI’s climate information products and services are designed to help decision makers in many societal sectors, as well as forecasters in hydrometeorological agencies in developing countries. Although interest in, and uptake of, the information has occurred and has resulted in some verifiable societal benefits, some aspects of IRI’s goals have not yet been adequately met. One problem is that some users who are interested and willing to invest time in using climate information are unable to do so because of existing rules or constraints governing how they conduct their activities, such as a rigidly defined set of procedures authorized by their federal government that cannot be easily changed. Another problem is that some potential users who would be permitted to use climate information are not willing to spend the needed time learning how climate forecasts can help them better achieve their goals. Moreover, among users who are willing and able to use climate information and forecasts, a recurrent problem is that they may not sufficiently understand the probabilistic nature of climate forecasts, and the implications for their proper use in making decisions. This lack of understanding may manifest itself in two directions. On the one hand, users may dismiss the probabilistic information and regard the most likely forecast category as the forecast in an unqualified sense. On the other hand, they may understand the probabilistic information but interpret it too pessimistically, believing it to be virtually worthless, even though consistent, proper use over a long period of time would more clearly be economically advantageous. This second misunderstanding may develop when the observations do not fall in the category having the greatest forecast probability in the initial one or two trials of using the forecasts. Both of these misconceptions can be ameliorated through repeated communication, preferably face-to-face and in instances of imminent application (e.g., at times of local forecast issuance, with stakeholders present and needing to make decisions such as what crop varieties to plant and when to plant them).

Users frequently express interest in forecasts having more temporal or spatial detail than is scientifically possible, such as spatially detailed week-to-week scenarios within the coming season. Again, communication can help rectify such an unrealistic expectation.

Although IRI has worked hard to make the observational and forecast information understandable, many visitors to the web pages express confusion and need help in learning the correct meaning of the information. Sometimes they are not willing or able to read metadata relating to the product, while in other cases they do attempt to read the relevant material but still find the graphical charts or maps unintelligible. The users most able to absorb and use the climate information appropriately have been from the developed world, often in private businesses such as energy or agricultural companies or even weather-derivative enterprises. However, a recent survey of subscribers to our forecasts indicated that while many of them are from Europe and the United States, a moderate percentage are from Central or South America and in the agricultural sector, using the map room and/or the ENSO and climate forecasts for short-term planning purposes. This finding clearly represents a concrete success. Likewise, the uptake and sustained use of CPT (see the subsection on seasonal climate forecasts above) by meteorological offices for making objective climate predictions in a moderately large set of developing countries in Africa, the Caribbean, Latin America and elsewhere (Mantilla et al. 2014) is considered a measurable success. Although success is relative, and it is difficult to assess some aspects of IRI’s success levels using objective criteria, it is clear that IRI’s efforts to link climate science with society have been improving human welfare in an increasing number of developing countries.

An increasingly common issue among users is their interest in forecasts on longer, decadal to climate-change time scales. This highlights the need to explain more carefully the interplay of different time scales in anticipating near- and longer-term climate conditions. New and continuing research on this subject is needed, and IRI fully expects to rise to the occasion. The time scales map room product is just a beginning for what could and should become a more seamless suite of products for climate on sub-seasonal to century time scales. An example of such an innovative product would be a seasonal forecast that explicitly breaks down the interannual, decadal and trend components in separate maps to reveal the contributions from each time scale. While the interannual component would usually be expected to dominate, there might be some seasons and locations in which one of the other components plays a significant role. Another potential product would be an explicit forecast for the average climate over the coming 10 years, based on increasing knowledge of the drivers of decadal climate variability. Regardless of what shape its future climate information products take, IRI intends to be a beacon of enlightenment to help quell the insatiable needs, questions and confusion of societies that potentially could make use of climate information to their economic advantage and for their general welfare.

Endnotes

a. Maprooms other than that of global climate are also available. Those applying climate in a specialized context include those of food security, fire, and the International Federation of Red Cross and Red Crescent Societies. Also, besides the global scale climate, regional scale and ENSO-specific maprooms may be selected.

b. In this paper, the term “prediction” is used when the product is produced entirely by an objective method, or set of methods, such as by one or more prediction models, and not altered by human forecasters. Alternatively, “forecast” is used when some form of human judgment also enters into the final product. “Prediction” is also used when referring to the overall discipline or field of predicting the climate.

c. In all three tropical ocean basins, the Coupled Forecast System SST prediction from NCEP, and the constructed analogue SST prediction also from NCEP, are used. In the tropical Pacific Ocean, the Lamont-Doherty Earth Observatory intermediate coupled model (version 5) is also used. In the tropical Indian Ocean, a CCA prediction is also used, with the predicted tropical Pacific SST and the recently observed Indian Ocean SST as predictors.

d. The set of AGCMs used by IRI has evolved since 1997. The six AGCMs used at the time of this writing include: (1) ECHAM4.5 (from Max Planck Institute, Hamburg, Germany, run at IRI, 24 ensemble members); (2) CCM3.6 (from NCAR, Boulder, Colorado, US, run at IRI, 24 members); (3) CFSv2 (from NCEP, College Park, Maryland, US, run at NCEP, 24 members, 1-tiered output used); (4) COAPS (from Florida State University, Tallahassee, Florida, US, run at FSU, 12 members); (5) GFDL (from Princeton, New Jersey, US, run at GFDL, 30 members); and (6) COLA2.2.6 (from Fairfax, Virginia, US, run at COLA, 36 members).

e. The weather conditions are not predictable beyond about 10–14 days, but serve as a randomizing agent for the seasonal climate predictions; hence, using initial weather conditions as actually observed is unnecessary.

f. Not all climatologists agree that this trend-based source of skill is “fair”. Among those who do not, most think that skill should be based more exclusively on the ability to distinguish year-to-year variations within a given decade.

Authors’ information

AB oversees the production and scheduled issuance of a range of IRI climate forecast products. He participates in implementation of improved methods and tools to enhance the quality and content of the forecasts. He seeks to engage the user community on forecast interpretation and use, including the national and international media. He provides training and capacity building on aspects of climate forecasting for visiting scientists, students, and forecasters at national meteorological centers abroad. He designed and teaches the statistics portion of a core quantitative course in the curriculum of the Climate and Society Masters Program at Columbia University—a course relating climate to decision making in the climate-sensitive components of society. He has published studies involving verification of climate prediction methods used over the last 30 years.

MT leads the Global Prediction Development effort at the IRI and works on problems related to predictability and the application of statistical methods in climate science. He develops improved methods and tools to enhance the quality of the IRI’s seasonal climate forecasts.

Acknowledgement

Responsible editor: Hong Liao

Abbreviations

AGCM: 

Atmospheric general circulation model

CAMS: 

Climate Anomaly Monitoring System

CAMS_OPI: 

CAMS Outgoing longwave radiation Precipitation Index

CCA: 

Canonical correlation analysis

CCM3.6: 

Community Climate Model version 3.6

CFSv2: 

Climate forecast system version 2 model

CMAP: 

CPC Merged Analysis of Precipitation

COAPS: 

Center for ocean–atmosphere prediction studies model

COLA2.2.6: 

Center for ocean-land-atmosphere studies version 2.2.6 model

CPC: 

Climate prediction center (in College Park, Maryland, US)

ECMWF: 

European center for medium-range weather forecasts (in Reading, UK)

ECHAM4.5: 

European Center/Hamburg version 4.5 model

FSU: 

Florida State University (in Tallahassee, Florida, US)

GFCS: 

Global Framework for Climate Services

GFDL: 

Geophysical Fluid Dynamics Laboratory model (in Princeton, New Jersey, US)

GSCU: 

Global Seasonal Climate Update

IPCC: 

Intergovernmental Panel on Climate Change

IRI: 

International research institute for climate and society (in Palisades, New York, US)

NCAR: 

National center for atmospheric research (in Boulder, Colorado, US)

NCEP: 

National centers for environmental prediction

NOAA: 

National oceanic and atmospheric administration

SPI: 

Standardized precipitation index

SST: 

Sea surface temperature

WMO: 

World Meteorological Organization (in Geneva, Switzerland).

Declarations

Authors’ Affiliations

(1)
International Research Institute for Climate and Society, Columbia University
(2)
Center of Excellence for Climate Change Research, Department of Meteorology, King Abdulaziz University

References

  1. The IRI’s climate map room [http://iridl.ldeo.columbia.edu/maproom/Global/index.html]
  2. The IRI’s seasonal climate forecasts [http://iri.columbia.edu/our-expertise/climate/forecasts/seasonal-climate-forecasts/]
  3. The IRI’s forecasts of ENSO [http://iri.columbia.edu/our-expertise/climate/forecasts/enso/]
  4. The IRI Data Library [http://iri.columbia.edu/resources/data-library/]
  5. The CAMS-OPI gauge-plus-satellite rainfall data from the Climate Prediction Center of NOAA [http://iridl.ldeo.columbia.edu/SOURCES/.NOAA/.NCEP/.CPC/.CAMS_OPI/.v0208/]
  6. The IRI’s seasonal forecast verification site [http://iri.columbia.edu/our-expertise/climate/forecasts/verification/]
  7. The IRI’s time scales map room [http://iridl.ldeo.columbia.edu/maproom/Global/Time_Scales/index.html]
  8. Barnston AG, Mason SJ: Evaluation of IRI's seasonal climate forecasts for the extreme 15% tails. Wea Forecasting 2011, 26: 545–554. 10.1175/WAF-D-10-05009.1View ArticleGoogle Scholar
  9. Barnston AG, Mason SJ, Goddard L, DeWitt DG, Zebiak SE: Multimodel ensembling in seasonal climate forecasting at IRI. Bull Am Meteorol Soc 2003, 84: 1783–1796. 10.1175/BAMS-84-12-1783View ArticleGoogle Scholar
  10. Barnston AG, Kumar A, Goddard L, Hoerling MP: Improving seasonal prediction practices through attribution of climate variability. Bull Am Meteorol Soc 2005, 86: 59–72. 10.1175/BAMS-86-1-59View ArticleGoogle Scholar
  11. Barnston AG, Shuhua L, Mason SJ, DeWitt DG, Goddard L, Gong X: Verification of the first 11 years of IRI’s seasonal climate forecasts. J Appl Meteorol Climatol 2010, 49: 493–520. 10.1175/2009JAMC2325.1View ArticleGoogle Scholar
  12. Barnston AG, Tippett MK, L’Heureux ML, Li S, DeWitt DG, DeWitt DG: Skill of real-time seasonal ENSO model predictions during 2002–11. Is our capability increasing? Bull Am Meteorol Soc 2012, 93: 631–651. 10.1175/BAMS-D-11-00111.1View ArticleGoogle Scholar
  13. Bengtsson L, Schlese U, Roeckner E, Latif M, Barnett T, Graham N: A two-tiered approach to long-range climate forecasting. Science 1993, 261: 1026–1029. 10.1126/science.261.5124.1026View ArticleGoogle Scholar
  14. Blementhal MB, Bell M, del Corral J, Cousin R, Khomyakov I: IRI Data Library: Enhancing accessibility of climate knowledge. Earth Perspect 2014, 1: 19. 10.1186/2194-6434-1-19View ArticleGoogle Scholar
  15. Brown C, Carriquiry M: Managing hydroclimatological risk to water supply with option contracts and reservoir index insurance. Water Resour Res 2007, 43: W11423. doi:10.1029/2007WR006093 doi:10.1029/2007WR006093Google Scholar
  16. Ceccato P, Fernandez K, Ruis D, Allis E: Climate and environmental monitoring for decision making. Earth Perspect 2014, 1: 16. 10.1186/2194-6434-1-16View ArticleGoogle Scholar
  17. Coughlan De Perez E, Mason SJ: Climate information for humanitarian agencies: Some basic principles. Earth Perspect 2014, 1: 11. 10.1186/2194-6434-1-11View ArticleGoogle Scholar
  18. DeWitt DG: Retrospective forecasts of interannual sea surface temperature anomalies from 1982 to present using a directly coupled atmosphere–ocean general circulation model. Mon Weather Rev 2005, 133: 2972–2995. 10.1175/MWR3016.1View ArticleGoogle Scholar
  19. Dinku T, Block P, Sharoff J, Hailemariam K, Osgood D, Del Corral J, Cousin R, Thomson MC: Bridging critical gaps in climate services and applications in Africa. Earth Perspect 2014, 1: 15. 10.1186/2194-6434-1-15View ArticleGoogle Scholar
  20. García-Pando CP, Thomson MC, Stanton MC, Diggle PJ, Hopson T, Panya R, Miller RL: Meningitis and climate--From science to practice. 2014, 1: 14.Google Scholar
  21. Goddard L, Graham NE: The importance of the Indian Ocean for simulating precipitation anomalies over the eastern and southern Africa. J Geophys Res 1999, 104: 19099–19116. 10.1029/1999JD900326View ArticleGoogle Scholar
  22. Goddard L, Mason SJ: Sensitivity of seasonal climate forecasts to persisted SST anomalies. Climate Dynam 2002, 19: 619–632. 10.1007/s00382-002-0251-yView ArticleGoogle Scholar
  23. Goddard L, Barnston AG, Mason SJ: Evaluation of the IRI’s “Net Assessment” seasonal Climate Forecasts: 1997–2001. Bull Am Meteorol Soc 2003, 84: 1761–1781. 10.1175/BAMS-84-12-1761View ArticleGoogle Scholar
  24. Goddard L, Kumar A, Barnston AG, Hoerling MP: Diagnosis of anomalous winter temperatures over the eastern United States during the 2002/03 El Niño. J Clim 2006, 19: 5624–5636. 10.1175/JCLI3930.1View ArticleGoogle Scholar
  25. Goddard L, Aichellouche Y, Baethgen W, Dettinger M, Graham R, Hayman P, Kadi M, Martinez R, Meinke H: Providing seasonal-to-interannual climate information for risk managmement and decision-making. Procedia Environ Sci 2010, 1: 81–101.View ArticleGoogle Scholar
  26. Goddard L, Hurrell JW, Kirtman BP, Murphy J, Stockdale T, Vera C: Two time scales for the price of one (almost). Bull Am Meteorol Soc 2012, 93: 621–629. 10.1175/BAMS-D-11-00220.1
  27. Goddard L, Baethgen W, Bhojwani H, Robertson AW: The International Research Institute for Climate and Society: Why, what and how. Earth Perspect 2014, 1: 10. 10.1186/2194-6434-1-10
  28. Greene AM, Goddard L, Cousin R: Web tool deconstructs variability in twentieth-century climate. EOS Trans Am Geophys Union 2011, 92(45): 397–398.
  29. Hagedorn R, Doblas-Reyes FJ, Palmer TN: The rationale behind the success of multi-model ensembles in seasonal forecasting – I. Basic concept. Tellus A 2005, 57: 219–233. 10.1111/j.1600-0870.2005.00103.x
  30. Hansen JW, Potgieter A, Tippett MK: Using a general circulation model to forecast regional wheat yields in Northeast Australia. Agric Forest Meteor 2004, 127: 77–92. 10.1016/j.agrformet.2004.07.005
  31. Hansen JW, Mishra A, Rao KPC, Indeje M, Ngugi RK: Potential value of GCM-based seasonal rainfall forecasts for maize management in semi-arid Kenya. Agric Syst 2009, 101: 80–90. 10.1016/j.agsy.2009.03.005
  32. Hansen JW, Zebiak SE, Coffey K: Shaping global agendas on climate risk management and climate services: an IRI perspective. Earth Perspect 2014, 1: 13. 10.1186/2194-6434-1-13
  33. Hewitt C, Mason S, Walland D: The global framework for climate services. Nat Clim Chg 2012, 2: 831–832. 10.1038/nclimate1745
  34. Ishikawa T, Barnston AG, Kastens KA, Louchouarn P: Understanding, evaluation and use of climate forecast data by environmental policy students. In Qualitative Inquiry in Geoscience Education Research, Geological Society of America Special Paper 474. Edited by: Feig AD, Stokes A; 2011:153–170.
  35. Kharin VV, Zwiers FW: Climate predictions with multimodel ensembles. J Clim 2002, 15: 793–799. 10.1175/1520-0442(2002)015<0793:CPWME>2.0.CO;2
  36. Kirtman BP, Min D, Infanti JM, Kinter JL, Paolino DA, Zhang Q, Van Den Dool H, Saha S, Mendez MP, Becker E, Peng P, Tripp P, Huang J, DeWitt DG, Tippett MK, Barnston AG, Li S, Rosati A, Schubert SD, Lim Y-K, Li ZE, Tribbia J, Pegion K, Merryfield W, Denis B, Wood E: The North American Multi-Model Ensemble (NMME): Phase-1 Seasonal to Interannual Prediction, Phase-2 Toward Developing Intra-Seasonal Prediction. Bull Am Meteorol Soc 2014, 95: in press (April 2014)
  37. Korecha D, Barnston AG: Predictability of June–September rainfall in Ethiopia. Mon Weather Rev 2007, 135: 628–650. 10.1175/MWR3304.1
  38. Landman WA: How the International Research Institute for Climate and Society has contributed towards seasonal climate forecast modeling and operations in South Africa. Earth Perspect 2014, 1: 22. 10.1186/2194-6434-1-22
  39. Landman WA, Kgatuke M, Mbedzi M, Beraki A, Bartman A, du Piesanie A: Performance comparison of some dynamical and empirical downscaling methods for South Africa from a seasonal climate modeling perspective. Int J Climatol 2009, 29: 1535–1549. 10.1002/joc.1766
  40. Li S, Goddard L, DeWitt DG: Predictive skill of AGCM seasonal climate forecasts subject to different SST prediction methodologies. J Clim 2008, 21: 2169–2186. 10.1175/2007JCLI1660.1
  41. Lyon B, Camargo SJ: The seasonally-varying influence of ENSO on rainfall and tropical cyclone activity in the Philippines. Clim Dynam 2009, 32: 125–141. 10.1007/s00382-008-0380-z
  42. Lyon B, Giannini A, Gonzales P, Robertson AW: The role of targeted climate research at the IRI. Earth Perspect 2014, 1: 18. 10.1186/2194-6434-1-18
  43. Mantilla G, Thomson C, Sharoff J, Barnston AG, Curtis A: Capacity development through the sharing of climate information with diverse user communities. Earth Perspect 2014, 1: 21. 10.1186/2194-6434-1-21
  44. Mason I: A model for assessment of weather forecasts. Aust Meteor Mag 1982, 30: 291–303.
  45. Mason SJ: On using “climatology” as a reference strategy in the Brier and ranked probability skill scores. Mon Weather Rev 2004, 132: 1891–1895.
  46. Mason SJ: Understanding forecast verification statistics. Meteor Applic 2008, 15: 31–40. 10.1002/met.51
  47. Mason SJ, Graham NE: Conditional probabilities, relative operating characteristics, and relative operating levels. Weather Forecast 1999, 14: 713–725. 10.1175/1520-0434(1999)014<0713:CPROCA>2.0.CO;2
  48. Mason SJ, Weigel AP: A generic forecast verification framework for administrative purposes. Mon Weather Rev 2009, 137: 331–349. 10.1175/2008MWR2553.1
  49. Mason SJ, Goddard L, Graham NE, Yulaeva E, Sun L, Arkin PA: The IRI seasonal climate prediction system and the 1997/98 El Niño event. Bull Am Meteorol Soc 1999, 80: 1853–1873. 10.1175/1520-0477(1999)080<1853:TISCPS>2.0.CO;2
  50. Meehl GA, Goddard L, Murphy J, Stouffer RJ, Boer G, Danabasoglu G, Dixon K, Giorgetta MA, Greene AM, Hawkins E, Hegerl G, Karoly D, Keenlyside N, Kimoto M, Kirtman B, Navarra A, Pulwarty R, Smith D, Stammer D, Stockdale T: Decadal prediction – Can it be skillful? Bull Am Meteorol Soc 2009, 90: 1467–1485. 10.1175/2009BAMS2778.1
  51. Murphy AH: A new vector partition of the probability score. J Appl Meteorol 1973, 12: 595–600. 10.1175/1520-0450(1973)012<0595:ANVPOT>2.0.CO;2
  52. Palmer TN, Alessandri A, Andersen U, Cantelaube P, Davey M, Delecluse P, Deque M, Diez E, Doblas-Reyes FJ, Feddersen H, Graham R, Gualdi S, Gueremy JF, Hagedorn R, Hoshen M, Keenlyside N, Latif M, Lazar A, Maisonnave E, Marletto V, Morse AP, Orfila B, Rogel P, Terres JM, Thomson MC: Development of a European multimodel ensemble system for seasonal-to-interannual prediction (DEMETER). Bull Am Meteorol Soc 2004, 85: 853–872.
  53. Rajagopalan B, Lall U, Zebiak SE: Categorical climate forecasts through regularization and optimal combination of multiple GCM ensembles. Mon Weather Rev 2002, 130: 1792–1811. 10.1175/1520-0493(2002)130<1792:CCFTRA>2.0.CO;2
  54. Recalde-Coronel GC, Barnston AG, Munoz AG: Predictability of December–April rainfall in coastal and Andean Ecuador. J Appl Meteorol Climatol 2014, 53: in press [http://journals.ametsoc.org/toc/apme/0/0]
  55. Robertson AW, Lall U, Zebiak SE, Goddard L: Improved combination of multiple atmospheric GCM ensembles for seasonal prediction. Mon Weather Rev 2004, 132: 2732–2744. 10.1175/MWR2818.1
  56. Robertson AW, Moron V, Swarinoto Y: Seasonal predictability of daily rainfall statistics over Indramayu district, Indonesia. Int J Climatol 2009, 29: 1449–1462. 10.1002/joc.1816
  57. Robertson AW, Baethgen W, Block P, Lall U, Sankarasubramanian A, Filho FAS, Verbist K: Climate risk management in water for semi-arid regions. Earth Perspect 2014, 1: 12. 10.1186/2194-6434-1-12
  58. Ropelewski CF, Janowiak JE, Halpert MS: The analysis and display of real time surface climate data. Mon Weather Rev 1985, 113: 1101–1106. 10.1175/1520-0493(1985)113<1101:TAADOR>2.0.CO;2
  59. Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL (Eds): Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press; 2007.
  60. Sun L, Moncunill DF, Li H, Moura AD, Filho FAS: Climate downscaling over Nordeste, Brazil, using the NCEP RSM97. J Clim 2005, 18: 551–567. 10.1175/JCLI-3266.1
  61. Thomson MC, Doblas-Reyes FJ, Mason SJ, Hagedorn R, Connor SJ, Phindela T, Morse AP, Palmer TN: Malaria early warnings based on seasonal climate forecasts from multi-model ensembles. Nature 2006, 439: 576–579. 10.1038/nature04503
  62. Tippett MK, Barnston AG: Skill of multimodel ENSO probability forecasts. Mon Weather Rev 2008, 136: 3933–3946. 10.1175/2008MWR2431.1
  63. Tippett MK, Barnston AG, Li S: Performance of recent multimodel ENSO forecasts. J Appl Meteorol Climatol 2012, 51: 637–654. 10.1175/JAMC-D-11-093.1
  64. Van den Dool HM, Toth Z: Why do forecasts for “near normal” often fail? Weather Forecast 1991, 6: 76–85. 10.1175/1520-0434(1991)006<0076:WDFFNO>2.0.CO;2
  65. Weigel AP, Mason SJ: The generalized discrimination score for ensemble forecasts. Mon Weather Rev 2011, 139: 3069–3074. 10.1175/MWR-D-10-05069.1
  66. Wilks DS: Statistical Methods in the Atmospheric Sciences. San Diego, California: Academic Press; 2006:627.
  67. Xie P, Arkin PA: Global precipitation: A 17-year monthly analysis based on gauge observations, satellite estimates, and numerical model outputs. Bull Am Meteorol Soc 1997, 78: 2539–2558. 10.1175/1520-0477(1997)078<2539:GPAYMA>2.0.CO;2

Copyright

© Barnston and Tippett; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.