In 2018, the International Disaster Database (EM-DAT) recorded 281 climate-related and geophysical events. These events are estimated to have killed 10,733 people worldwide and affected some 61 million.
Last year saw a rise in seismic activity, in Indonesia in particular. Add to that a string of disasters in Japan and India, including flooding and increased volcanic activity, and this activity alone caused more deaths than the previous 18 years combined.
The World Bank is researching how it can use data and machine learning algorithms to predict certain types of disasters, and improve recovery in their aftermath, says Vivien Deparday, Disaster Risk Management Specialist at the Bank’s Global Facility for Disaster Risk Reduction.
The initiative is researching disasters such as floods, droughts and earthquakes in novel ways. This “involves a lot of data sources and complex relationships”, says Deparday. “We are using machine learning to model these phenomena, but also to try to forecast them. It can provide new ways to look into those complex relationships.”
This approach can help the public sector understand – with more efficiency and accuracy – where the most vulnerable people are, and where exposed assets are located, he adds. “This is done through a combination of satellite imagery interpretation and street-view interpretation, as well as through census data,” he says.
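For illustration only – this is not the Bank’s actual methodology – combining such sources can be as simple as weighting census population by a per-district exposure score derived from imagery interpretation. All district names and figures below are invented:

```python
# Illustrative sketch: rank districts by people-at-risk, combining two
# hypothetical data sources. All names and numbers are invented.

# Hypothetical census data: district -> residents
census = {"district_a": 120_000, "district_b": 45_000, "district_c": 88_000}

# Hypothetical flood-exposure scores (0..1) from imagery interpretation
flood_exposure = {"district_a": 0.2, "district_b": 0.9, "district_c": 0.6}

def vulnerability_ranking(census, exposure):
    """Rank districts by population weighted by exposure, highest first."""
    scores = {d: census[d] * exposure.get(d, 0.0) for d in census}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = vulnerability_ranking(census, flood_exposure)
# district_b, despite having the fewest residents, outranks district_a
# because far more of its population is exposed.
```

The point of the weighting is that raw population alone would misrank districts: exposure turns headcounts into an estimate of people actually at risk.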
This data, says Deparday, also informs policy makers on where to invest their resources: more comprehensive prioritisation that takes in local assets, together with a better understanding of the urban environment, enables enhanced risk mitigation.
The project is at an early research stage, he adds. Drought forecasting in particular needs more work, which the World Bank initiative is pursuing alongside efforts to create early warning systems.
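At its very simplest, an early warning system of the kind described here is a threshold rule over recent observations. The sketch below flags a possible drought when recent rainfall falls well below the long-term average; the readings, window and threshold are invented, and real forecasting models are far richer:

```python
# Minimal early-warning sketch (illustrative only). Real systems use far
# richer models; the data and threshold here are invented.
from statistics import mean

def drought_alert(rainfall_mm, window=3, threshold=0.5):
    """Alert if the mean of the last `window` readings falls below
    `threshold` times the long-term mean of the whole series."""
    if len(rainfall_mm) < window:
        return False  # not enough data to judge
    recent = mean(rainfall_mm[-window:])
    baseline = mean(rainfall_mm)
    return recent < threshold * baseline

# Monthly rainfall (mm), ending in a dry spell
readings = [110, 95, 120, 100, 30, 20, 10]
alert = drought_alert(readings)  # True: recent mean 20 vs baseline ~69
```

A machine learning version would replace the fixed threshold with a model trained on historical rainfall and drought outcomes, but the input-to-alert shape of the system is the same.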
A new area where machine learning is starting to be used is post-disaster management, he adds. “Using things like satellite imagery or drone imagery after a disaster, and then using machine learning on those data sets, can help get an idea of damage, extent of a disaster”, and enable quicker recovery, he says.
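A toy version of damage-extent estimation might simply difference a pre-disaster and post-disaster raster, where 1 marks an intact structure. Real pipelines run trained models over satellite or drone imagery; the grids here are invented:

```python
# Illustrative sketch: estimate damage extent by differencing two tiny
# "image" grids. 1 = intact structure, 0 = none/destroyed. Invented data.

before = [
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
]
after = [
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 1],
]

def damage_fraction(before, after):
    """Fraction of previously intact cells no longer intact afterwards."""
    intact = damaged = 0
    for row_b, row_a in zip(before, after):
        for b, a in zip(row_b, row_a):
            if b == 1:
                intact += 1
                if a == 0:
                    damaged += 1
    return damaged / intact if intact else 0.0

fraction = damage_fraction(before, after)  # 3 of 7 structures lost
```

In practice the “intact/destroyed” labels per cell would themselves come from a classifier run over imagery, which is where the machine learning enters.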
One of the pitfalls of machine learning is implicit bias in the data being used, which can skew predictions and lead to incorrect policy recommendations. “Data comes with preconceptions and biases embedded in it. That’s being recognised more and more, and it has an impact on public policy,” he adds.
Strategies governments can employ to combat data bias include using multiple people to code the data, having participants review the results, verifying against additional data sources, checking for alternative explanations, and reviewing findings with peers.
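One of those strategies – having multiple people code the same data – is commonly checked with an inter-rater agreement statistic such as Cohen’s kappa: a low score suggests the labels, and any model trained on them, may be unreliable. A minimal sketch with invented labels:

```python
# Sketch of one bias check: Cohen's kappa for two raters labelling the
# same records. Agreement is corrected for chance. Labels are invented.

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items (any label set)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    # Chance agreement: product of each rater's label frequencies
    expected = sum(
        (rater1.count(l) / n) * (rater2.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

r1 = ["damaged", "damaged", "intact", "intact", "damaged", "intact"]
r2 = ["damaged", "intact", "intact", "intact", "damaged", "intact"]
kappa = cohens_kappa(r1, r2)  # ~0.67: substantial but imperfect agreement
```

Kappa of 1.0 means perfect agreement and 0 means no better than chance, which gives a concrete number to act on before labelled disaster data feeds a model.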
An open data policy could be instrumental in combating bias, as long as the source of that data is known. “Governments need to invest in good digital data collection and data platforms, so data is available, according to open standards,” says Deparday.
The World Bank has launched the Open Data for Resilience programme to improve the quality of public data on disasters. “Data should be shared so that it can be used and reused for different machine learning applications.”
“We need data; we need to collect it, to manage it, and to have it in a digital format – to enable a lot of these applications,” says Deparday. “That is an important aspect that should not be overlooked.”
A better understanding of disasters, and advances in prediction, would help governments invest in the right places and plan ahead for rehabilitation – ultimately impacting millions of lives.