In a recent keynote speech at Wharton’s Initiative for Global Environmental Leadership (IGEL), Innovyze CEO Colby Manwaring took the stage to address the current state of flood modeling techniques. The story? We can do better. Wet weather is becoming more unpredictable and the impacts are being felt by our communities. Municipalities, engineering consultants, and government agencies cannot continue to operate with outdated flood models that are plagued by inaccurate or incomplete data. If they do, residents and businesses are left vulnerable to flood risks and costs, compromising the integrity of local governments.
Where We Come From Determines Where We Are
The presentation began with a call to re-evaluate current flood modeling methods: “We need to consider our frame of reference if we are going to get to a solution; we need to consider our assumptions about where we are,” Colby stated. Once we realize where we come from, we can plot a course for the future.
Colby went on to explain that what we currently know about flood forecasting, mapping, and modeling originated in the 1970s. During this period, the foundation was laid for the data collection systems and computing power that would later shape flood modeling approaches.
These methods were eventually codified and legislated in the developed world based on key data points extracted from weather patterns. The result was a reliance on risk assessments based on 100-year return intervals of flooding (probably because most people think, “I won’t be here in 100 years, so I’m safe”). In an elementary sense, risk assessment was framed as a question: how often is an area likely to flood over a 100-year time frame? Once? 15 times? 40 times? This framing is relatable to regulators and the public alike, but somewhere along the way we lost sight of the fact that a “100-year flood” is just another way of saying there is a 1% chance of that flood occurring in any given year. The probability this year is not affected by last year’s events, nor by future events, so a “100-year flood” can happen at any time.
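To make the arithmetic concrete, here is a minimal sketch (not from the keynote; it assumes each year is an independent 1% chance, the standard assumption behind return-period statistics) showing how that annual probability compounds over a planning horizon:

```python
# Probability of seeing at least one "100-year flood" over a planning
# horizon, assuming each year is an independent 1% (1-in-100) chance.
def prob_at_least_one_flood(annual_probability: float, years: int) -> float:
    """Return P(at least one exceedance) over `years` independent years."""
    return 1.0 - (1.0 - annual_probability) ** years

if __name__ == "__main__":
    p_100yr = 1.0 / 100.0  # the "100-year flood": a 1% chance every year
    for horizon in (1, 10, 30, 100):
        risk = prob_at_least_one_flood(p_100yr, horizon)
        print(f"Chance of at least one 100-year flood in {horizon:>3} years: {risk:.1%}")
```

Over a typical 30-year mortgage that works out to roughly a 26% chance, and even over a full century the chance is only about 63%. The return period describes a probability, not a schedule.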
Flood mapping based on this framework was, inevitably, misunderstood and flawed. Weather and rainfall input data were scattered, disjointed, and often assumed, because rainfall is hard to predict and codify. The inaccuracies in the rainfall data compounded the assumptions baked into the overall model, which in turn meant the output data had to be tweaked after the fact.
Flood maps based on 100-year risk assessments emerged as a binary, or “single truth,” basis for flood insurance, infrastructure planning, and risk mitigation. Businesses and residents were either in or out of the floodplain, and insurance costs were calculated accordingly. In reality, flood emergencies do not occur in a binary, in-or-out manner. They are more fluid than that: graduated events that we try to quantify with probabilistic methods. Floods do not stop at some imaginary floodplain line.
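A quick illustration of that gradient, using the same compounding arithmetic as above (the return periods here are hypothetical values chosen for illustration, not drawn from any actual map):

```python
# Risk does not drop to zero at the mapped floodplain boundary; it tapers.
# Hypothetical return periods for parcels near the 100-year line, compared
# over a 30-year horizon (illustrative values only, not real map data).
def cumulative_risk(return_period_years: float, horizon_years: int = 30) -> float:
    """Chance of at least one flood of that severity over the horizon."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

parcels = [
    ("inside the mapped floodplain", 80),   # floods a bit more often than 1-in-100
    ("right at the 100-year line", 100),
    ("just outside the line", 150),
    ("well outside the line", 500),
]
for label, return_period in parcels:
    print(f"{label:>30}: {cumulative_risk(return_period):.0%} chance over 30 years")
```

In this toy example, a parcel sitting just outside the mapped line still faces roughly a one-in-five chance of flooding over 30 years, which is exactly the nuance a binary in-or-out map throws away.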
What Happens When the Map Is Wrong?
In 2012, Hurricane Sandy ravaged the US East Coast and the Caribbean. Hundreds of thousands of housing units were destroyed, and billions of dollars were spent in reconstruction efforts for infrastructure, homes, and businesses.
In his first major address following the disastrous storm, former New York City Mayor Michael Bloomberg compared FEMA’s 100-year flood maps to the actual flooding caused by the storm. According to the former mayor, “two-thirds of all homes damaged by Sandy [were] outside of FEMA’s existing 100-year flood maps.”