Rethinking current famine classification: insights from history

Ingrid de Zwarte, Alex de Waal, L H Lumey

The mass starvation in Gaza has called into question how famine is defined and measured. On Aug 22, 2025, the Famine Review Committee of the Integrated Food Security Phase Classification (IPC) determined that the food situation in Gaza Governorate had reached phase 5: famine.1 This famine status followed repeated warnings from humanitarian organisations and medical professionals that starvation deaths and acute malnutrition among children were rising sharply due to Israeli Government policies and Israel Defense Forces’ actions in the Gaza Strip, including denying humanitarian aid.2,3 Although the IPC famine declaration was retracted in mid-December, 2025, the Gaza case shows the limitations of a universal mortality threshold, which could mask the character of famine’s effects. We therefore call for a fundamental re-examination of how famine thresholds are set.

First, the IPC’s mortality thresholds were designed for rural African settings and not for middle-income, urbanised populations. Baseline mortality in Gaza before October, 2023, was less than 0·1/10 000 people per day. To reach IPC phase 5 mortality (2/10 000 people per day), non-trauma deaths would need to increase more than 20-fold. By contrast, in rural Somalia or South Sudan, a six-fold or seven-fold increase would suffice.4 This creates sharp disparities in how famine mortality is assessed in different settings.
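The arithmetic behind this disparity can be made explicit. The minimal sketch below uses the Gaza baseline cited above; the rural Somalia or South Sudan baseline of 0·3/10 000 people per day is an illustrative assumption back-calculated from the six-fold to seven-fold figure, not an official IPC parameter.

```python
# Illustrative only: fold increase in crude death rate (CDR) needed to reach
# the IPC phase 5 mortality threshold of 2 deaths per 10 000 people per day,
# starting from different baseline mortality levels.

IPC_PHASE5_CDR = 2.0  # deaths per 10 000 people per day

baselines = {
    "Gaza, pre-October 2023": 0.1,                 # figure cited in the text
    "Rural Somalia / South Sudan (assumed)": 0.3,  # back-calculated, illustrative
}

for setting, baseline_cdr in baselines.items():
    fold_increase = IPC_PHASE5_CDR / baseline_cdr
    print(f"{setting}: baseline {baseline_cdr}/10 000 per day -> "
          f"~{fold_increase:.0f}-fold rise needed to reach phase 5")
```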

Second, widespread starvation can remain unclassified as famine for a long period without reaching IPC phase 5 mortality thresholds. Between December, 2023, and October, 2025, around 95% of Gaza’s population was in IPC phase 3 (crisis) or higher.1,5,6 In December, 2025, 77% of the population was still facing high levels of acute food insecurity (IPC phase 3 or higher).6 This situation far exceeded figures in any food security crisis since the IPC’s creation, but without robust mortality data, the IPC’s Famine Review Committee delayed concluding that the threshold for declaring a famine had been met. The system thus cannot recognise mass starvation until it is well advanced.

Third, the IPC’s reliance on absolute mortality rates ignores the importance of relative increases within age groups. In many historical famines, the largest proportional mortality increases were seen not among children younger than 5 years—the IPC’s primary focus—but among older children. In the Soviet Union in 1922, in Greece from 1941 to 1944, and in Darfur, Sudan, from 1984 to 1985, relative mortality among children aged 5–14 years increased more sharply than in any other age group.7–9 During the Dutch Hunger Winter, infant mortality in March, 1945, reached approximately 3·4/10 000 infants per day in major cities (four times higher than before the war), whereas mortality in children aged 1–4 years increased seven-fold to 0·5/10 000 children per day.10 In absolute terms, neither of these increases would meet the current IPC famine threshold for mortality in children younger than 5 years (4/10 000 children per day).
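A similar sketch shows how large relative increases can remain invisible to an absolute threshold; the pre-war baselines below are back-calculated from the fold increases cited above and are illustrative rather than archival values.

```python
# Illustrative only: Dutch Hunger Winter (March 1945) age-specific mortality
# versus the IPC famine threshold for children younger than 5 years
# (4 deaths per 10 000 children per day).

IPC_U5_THRESHOLD = 4.0  # deaths per 10 000 children per day

# name: (famine-period rate per 10 000 per day, stated fold increase)
age_groups = {
    "Infants (<1 year)": (3.4, 4),
    "Children aged 1-4 years": (0.5, 7),
}

for name, (famine_rate, fold) in age_groups.items():
    baseline = famine_rate / fold  # back-calculated pre-war baseline
    meets_threshold = famine_rate >= IPC_U5_THRESHOLD
    print(f"{name}: {baseline:.2f} -> {famine_rate}/10 000 per day "
          f"({fold}-fold rise); meets IPC under-5 threshold: {meets_threshold}")
```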

Fourth, mortality is a delayed indicator of famine. In the occupied western Netherlands in late 1944, food rations fell below subsistence levels (<1500 kcal per day) 3 months before mortality began to rise, and continued to decline sharply during those months.11 Similar lags appear in most historical famines. If famine determination depends on mortality, the alarm is raised only after avoidable starvation deaths have occurred. An early indicator of famine stress that plays no part in the IPC classification is a sharp decline in the birthweight of newborn infants. During the Dutch famine, pregnant women exposed to famine in the third trimester delivered infants who were 300 g lighter at birth than infants born before the famine.12 A lagging indicator is the decline in the number of births during famine. In Ukraine, births fell by 50% in provinces exposed to extreme famine.13 During the Dutch Hunger Winter, births fell two-fold to three-fold.11 Similar declines occurred in Darfur in the 1980s.9

Finally, famine classification is easily politicised. Because a situation that falls short of IPC phase 5 on all three indicators (acute food insecurity, malnutrition, and mortality) is not classified as famine, authorities have an incentive to restrict data access or manipulate indicators. In war zones, where most modern mass starvation occurs, reliable demographic data are usually missing. For several 20th-century famines, including the Ukraine Holodomor, demographic data became available only decades later.13 Classification systems that depend on hard-to-obtain data in conflict settings risk systematic under-recognition.

On the basis of these lessons from historical famines, we question the continued application of a mortality-based classification system, which is insensitive to the varying demographic profiles of populations. Furthermore, reliance on overall mortality masks early signs of famine stress, including rapid changes in birth outcomes and rises in infant deaths. Monitoring these early signs could narrow the gap between the onset of acute food insecurity and its recognition, before death rates rise among the population at large. We therefore advocate the systematic collection of more sensitive famine indicators to provide a more timely, accurate, and powerful diagnostic tool for determining the necessity of humanitarian action.
