Rainfall Extremes in a Changing Climate: Detecting Change and Improving Predictions

May 9, 2025

Key Points

- Global warming increases the probability of intense rainfall events and may affect the rainfall associated with tropical cyclones.
- Detecting trends in historical rainfall records is challenging because of the large variability in rainfall extremes. We would need to wait a long time before trends emerge, even though physics and climate models suggest that these trends exist.
- In the presence of non-stationarities, and when predicting future extremes, statistical methods that focus on capturing the signal in noisy data and that leverage information from large datasets should be preferred.

Last week I was in Vienna at the annual meeting of the European Geosciences Union (EGU), a large gathering of the wider international academic community, which sees thousands of researchers come together to showcase their work in the wide and disparate fields of the geosciences. As always, it was a great opportunity to hear about the latest research, meet old friends and build new connections. At EGU I presented Inigo's reflections on extreme rainfall events and suggested methods for handling extremes in the presence of non-stationarities [Nicòtina et al., 2025]. In this article I cover the motivation that led us to investigate rainfall extremes, some details on the available data and their analysis, and the main takeaways.

Why should we focus on trends in rainfall extremes?

Global warming is expected to intensify rainfall extremes because a warmer atmosphere holds more moisture. Yet proving this in historical data remains challenging. This article discusses why detecting trends in extreme rainfall is difficult, why conventional significance testing may not be sufficient, and how model selection can offer a better tool for describing trends when there are physical arguments for their existence.
Crucially, recognizing trends in historical rainfall observations helps to build more resilient and up-to-date views of potential extremes. The forthcoming NOAA Atlas 15, the authoritative national precipitation frequency atlas for the United States, will account for non-stationarity in historical data. Such a methodological advance can improve our capacity to anticipate future extremes and to design more resilient infrastructure and risk mitigation strategies; the quantification of trends therefore becomes a fundamental part of the risk assessment process.

Hurricane Helene and the role of preceding rainfall

On the evening of September 26, 2024, Hurricane Helene made landfall in the Big Bend region of Florida as a Category 4 hurricane. Concurrently, a broad area of extensive and intense precipitation had been affecting the southern Appalachian Mountains since the early hours of September 26. This system persisted through Helene's landfall and continued until the tropical cyclone's path reached western North Carolina. The combination of this "Predecessor Rain Event" (PRE) and the tropical rainfall of Helene produced sustained, widespread precipitation for three consecutive days, particularly in North Carolina, making Hurricane Helene the deadliest storm to make landfall in the US since Katrina in 2005. Figure 2 places Helene's rainfall in its historical context through the annual maximum daily and 3-day total rainfall amounts. The 3-day total registered by the rain gauge in Asheville, NC exceeded 350 mm, by far the highest on record since observations began in 1878.

Figure 1 – Schematic of Helene's PRE event (Source: CW3E)

PREs are mesoscale zones of intense rainfall, often forming approximately 1,000 km poleward of recurving tropical cyclones. In this instance, the PRE contributed to catastrophic flooding across eastern Tennessee and western North Carolina.
The predecessor event significantly exacerbated the flooding, turning what might have been a moderate flood into a catastrophic event. For example, the French Broad River near Fletcher, NC, reached a peak gauge height of 30.31 feet on September 27, exceeding the previous record by over 10 feet [Center for Western Weather and Water Extremes (CW3E), 2024].

Helene's devastation is a striking reminder that extreme rainfall is not just a meteorological anomaly but a growing hazard in a warming world. However, separating climate-driven trends from natural variability in rainfall extremes remains a central challenge, one that leads to significant uncertainty in estimating the probability of occurrence of extreme rainfall and flood events.

Figure 2 – Annual maximum 24h (blue) and 72h (yellow) rainfall amounts for the rain gauge in Asheville, NC. (Data: GHCNd, NOAA)

Climate Science Suggests Extremes are Intensifying

A warmer atmosphere can hold more moisture – about 7% more per 1°C of warming, following the Clausius-Clapeyron relation – leading to heavier rainfall events. This thermodynamic principle underpins projections of increasing precipitation extremes globally. Changes in global weather patterns also affect rainfall extremes through dynamic effects whose mechanisms are active research topics: stronger hurricanes, for example, produce stronger updrafts, which increase rainfall intensity, and changes in circulation can increase the likelihood of blocking weather patterns that lead to stationary systems and prolonged rainfall events. Figure 3 shows the global mean temperature anomaly over land and sea, calculated by NOAA in their Global Temperature Index. The anomaly is taken relative to the 1971-2000 average and highlights the strong trends in global temperatures that characterize man-made climate change.
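The roughly 7%-per-°C scaling compounds multiplicatively with warming, which is easy to sketch. Here is a minimal illustration (the function name and the fixed 7% rate are assumptions for this example; the observed scaling of rainfall extremes can deviate from Clausius-Clapeyron locally):

```python
def cc_scaling_factor(delta_t_celsius: float, rate: float = 0.07) -> float:
    """Multiplicative increase in the atmosphere's moisture-holding
    capacity for a given warming, assuming ~7% per degree Celsius
    (Clausius-Clapeyron), compounded over the warming amount."""
    return (1.0 + rate) ** delta_t_celsius

# An illustrative 1.2 degC of warming implies roughly 8-9% more
# available moisture:
print(round(cc_scaling_factor(1.2), 3))  # -> 1.085
```
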
In the present climate, extreme precipitation events are therefore expected to be more intense than they were before 1950. Climate models, from CMIP5/CMIP6 to high-resolution regional simulations, also consistently predict a rise in the frequency and severity of extreme rainfall under continued warming. These models suggest that the upper tail of the rainfall distribution is becoming fatter and shifting rightward. However, these projections must be supported by observational data to validate model accuracy and inform adaptation efforts.

Figure 3 – Global Mean Surface Temperature (GMST) anomaly in the period 1850-2024. The anomaly is relative to the 1971-2000 average temperature. (Source: NCEI NOAA, NOAA Global Temp | National Centers for Environmental Information)

Beyond Significance: Using Model Selection to Detect Trends

To analyze real-world trends, we examined over 40 U.S. stations with long (≥70-year) daily rainfall records, focusing on annual maximum rainfall for 24-hour and 72-hour accumulations (Figure 4). Trends in the samples can be assessed with a non-parametric statistical test, the Mann-Kendall test [Papalexiou & Montanari, 2019]: with the null hypothesis being the absence of a trend, the test assesses whether the null hypothesis can be rejected at a given confidence level (e.g. a 5% significance level). For the sites analyzed, only about 15 to 20 percent exhibited a statistically significant trend according to this test. This result confirms that in most cases it is not possible to establish and quantify trends from noisy data dominated by natural variability, even when the science suggests these trends should exist. The implication? Traditional statistical significance testing (the p-value) may be the wrong tool for the job.

Figure 4 – Location of the sites selected for this analysis throughout the US.
Selected stations have at least 70 years of daily rainfall observations and are located in areas with significant flood risk exposure.

When focusing on present-day risk, a practically more useful question is how best to model the extremes. To this end, we can interrogate the data with the aim of identifying the statistical model that best describes the extremes. Since climate science suggests that extremes should depend on global temperature, it seems reasonable to ask whether a model that relates rainfall extremes to global mean surface temperature describes the variability of the data better than one that does not (i.e. a stationary model). The Akaike Information Criterion (AIC) is a metric that evaluates the quality of statistical models while penalizing complexity [Jewson et al., 2021] and can support this model selection logic. When comparing two models – one assuming no trend, the other incorporating a linear trend linked to global mean temperature – the model with the lower AIC is preferred.

Figure 5 – Relationship between annual maximum 1-day (top) and 3-day (bottom) rainfall and the GMST anomaly. Green lines show the best fit for a model that assumes a linear trend with the GMST covariate (left) and for one that assumes no trend (right).

This approach shifts the focus from 'Is the trend significant?' to 'Which model better explains the data?'. Using regression models with and without trends in rainfall extremes (based on global mean temperature anomalies), we found that approximately 40% of the sites are better modeled with a trend, compared with only ~20% showing statistically significant trends.

Learning from the Ordinary Values

Traditional Extreme Value Theory (EVT) looks only at block maxima – e.g. the single highest rainfall value per year, if the block size is 365 days – to model the behaviour of the tail of a random process.
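Block-maxima modelling is typically done by fitting a Generalized Extreme Value (GEV) distribution to the annual maxima, and the AIC used in the model comparison above can be computed directly from the fitted log-likelihood. A minimal stationary sketch using scipy (an illustration, not the exact procedure used in the study):

```python
import numpy as np
from scipy import stats


def fit_stationary_gev(annual_maxima, return_period=100):
    """Fit a stationary GEV to annual maxima; return the T-year
    return level and the model's AIC (3 fitted parameters)."""
    x = np.asarray(annual_maxima, dtype=float)
    # scipy's genextreme shape parameter c is the negative of the
    # usual EVT shape parameter xi
    c, loc, scale = stats.genextreme.fit(x)
    log_lik = np.sum(stats.genextreme.logpdf(x, c, loc=loc, scale=scale))
    aic = 2 * 3 - 2 * log_lik
    # Quantile exceeded with probability 1/T in any given year
    level = stats.genextreme.ppf(1.0 - 1.0 / return_period,
                                 c, loc=loc, scale=scale)
    return level, aic
```

A non-stationary variant would let the location (and possibly scale) parameter vary linearly with the GMST anomaly, adding one parameter per covariate term, and the two AICs would then be compared as in the section above. Note that either variant uses only the block maxima as input.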
The block-maxima approach has the advantage of not requiring assumptions about the distribution of the process at hand, leveraging theory that relies on the asymptotic behaviour of the extremes regardless of the (unknown) parent distribution. However, it discards the majority of the observations available for a physical process like rainfall, assuming that they contain no useful information. As an alternative, the Metastatistical Extreme Value approach [MEV, Marani and Ignaccolo, 2015, Devò et al., 2025] postulates that the distribution of ordinary values contains useful information on the physical processes and their potential non-stationarity, which can be leveraged to understand and model the extremes.

Figure 6 – The metastatistical extreme value (MEV) approach applied to the historical daily rainfall observations in Asheville, NC. The Weibull fit of the annual ordinary value distribution is shown on top, color-coded by year. The scale (left) and shape (right) parameters of the Weibull distribution are shown in the bottom panels, related to the GMST anomaly.

In the MEV framework the distribution of ordinary rainfall is assumed to follow a Weibull distribution, and one such distribution is fit to each year of observations. Regressing the parameters of the annual fits against global temperatures gives an indication of how the ordinary value distribution has changed over time.
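The per-year Weibull fitting and the regression of its parameters against temperature can be sketched as follows (the function names, the wet-day threshold, and the fitting choices are illustrative assumptions, not the study's exact procedure):

```python
import numpy as np
from scipy import stats


def annual_weibull_params(daily_rain_by_year, wet_threshold=1.0):
    """Fit a two-parameter Weibull to each year's wet-day rainfall
    amounts. Returns per-year (shape, scale) arrays. wet_threshold
    (mm) is an illustrative cutoff separating wet days from dry."""
    shapes, scales = [], []
    for rain in daily_rain_by_year:
        rain = np.asarray(rain, dtype=float)
        wet = rain[rain > wet_threshold]
        # floc=0 fixes the location parameter at zero, giving the
        # usual two-parameter Weibull used for ordinary rainfall
        shape, _, scale = stats.weibull_min.fit(wet, floc=0)
        shapes.append(shape)
        scales.append(scale)
    return np.array(shapes), np.array(scales)


def scale_trend(gmst_anomaly, scales):
    """Slope of the Weibull scale parameter against the GMST anomaly;
    a positive slope indicates intensifying ordinary rainfall."""
    slope, _ = np.polyfit(gmst_anomaly, scales, 1)
    return slope
```

Applied to each station, the sign and magnitude of the regression slope summarize how the body of the rainfall distribution is shifting with warming.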
The application of the MEV approach to the Asheville data is shown in Figure 6, and its extension to all the sites studied in this article revealed that approximately 60% of stations show upward trends in the scale parameter. This suggests that while trends in the extremes are hard to diagnose because of noise, the underlying distribution is clearly shifting, implying that the extremes are shifting too, even if their behaviour may only emerge on longer time scales.

Conclusions

In this simple analysis we have reviewed common approaches to looking at trends in extreme rainfall and argued that, from a practical risk management point of view, it is more useful to concentrate on identifying and quantifying the trends than on proving their existence from past observations. Various statistical tools allow us to incorporate non-stationarities after detecting trends through model selection, either through traditional Extreme Value Theory (EVT) or by leveraging the information in the full distribution through the MEV approach. The central insight of the MEV approach, and of trend analysis more generally, is that changes in moderate rainfall can inform us about changes in extremes. If the body of the distribution (e.g., 10–50 mm daily events in the case of Asheville) is intensifying, then the upper tail (e.g., >100 mm) is likely intensifying as well – even if data scarcity masks it. The ability to identify and quantify trends allows decision makers to act on clearer, more stable signals rather than waiting for rare extreme events to show statistically significant changes.
In practice, the most valuable question is not 'Was there a trend in the past?' but 'What might happen in the future?'. Looming in the background are the limitations of statistical methods when it comes to extrapolating to unseen extremes (epistemic uncertainty); it is therefore important to invest in reducing these uncertainties as much as possible by using methods informed by the physical processes to complement the statistical analysis.