Original carrier policy location indicated no flooding (left), while the corrected geocoded location shows 0.92 m of flooding. The discrepancy reflects a 114 m geocoding error that placed the asset on the road in front of the building rather than at the building footprint. Flood extents shown in blue are derived from ICEYE data for Hurricane Ian.
Accurate location data underpins nearly every step of insurance risk assessment. Yet in practice, location error is widespread and often invisible. Assets may appear to sit outside hazard zones they are actually exposed to, or inside zones they never occupy. In dense urban environments, locations can snap to roads or neighboring structures. In rural regions or areas with informal addressing systems, assets may be placed hundreds of meters away from their true footprint.
These inaccuracies rarely show up at the point where data is first ingested. Instead, they carry through underwriting, portfolio analysis, and catastrophe modeling workflows, often unnoticed. When the problem does surface, typically during claims handling or after an event, key decisions have already been made using the wrong assumptions, and fixing the underlying location data is neither quick nor easy to scale.
To address this challenge, EarthDaily developed the Geocoding Consensus Algorithm (GCA), a proprietary service designed to accurately and scalably identify the correct asset at the building level.
Geocoding is the process of turning address information into geographic coordinates that describe where an asset actually sits on the Earth’s surface. Those coordinates allow insurers to link properties to geospatial datasets used in risk assessment, including weather, wildfire, flood, and seismic hazards.
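Conceptually, a geocoder maps an address string to coordinates plus a match quality. The following is a minimal, stdlib-only sketch of that interface; the address table and match levels are hypothetical, and real services resolve against authoritative address, parcel, and building datasets:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GeocodeResult:
    latitude: float
    longitude: float
    match_level: str  # e.g. "rooftop", "street", "locality"

# Tiny hypothetical lookup table, for illustration only.
_ADDRESS_TABLE = {
    "1600 pennsylvania ave nw, washington, dc":
        GeocodeResult(38.8977, -77.0365, "rooftop"),
}

def geocode(address: str) -> Optional[GeocodeResult]:
    """Resolve a normalized address string to coordinates and match quality."""
    normalized = " ".join(address.lower().split())
    return _ADDRESS_TABLE.get(normalized)
```

The `match_level` field is what separates a rooftop-quality result from one interpolated along a street, which is exactly the distinction the rest of this article turns on.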
Image source: Milliman
In insurance risk practice, geocoding is a fundamental part of property underwriting, pricing, and portfolio analysis. Geocoded locations are used at multiple points in the policy lifecycle to link assets with hazard data and risk models.
In many cases, the coordinates insurers start with are simply not very precise. They may be good enough to place a property in a general area, but not good enough to rely on for analysis. That becomes a real problem for perils like flood, where risk can change from one street to the next and small location errors start to matter.
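A geocoding error like the 114 m offset in the figure above is simply the great-circle distance between the supplied and corrected coordinates. A minimal sketch of that calculation follows; the coordinates below are hypothetical, not the asset in the figure:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative (hypothetical) coordinates: a point snapped to the road
# versus the building footprint roughly 100 m away.
road = (26.6406, -81.8723)
building = (26.6415, -81.8723)
error_m = haversine_m(*road, *building)  # ~100 m
```

At these latitudes, a tenth of a degree is roughly 10 km, so even a fourth-decimal-place difference in coordinates translates into tens of meters on the ground.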
Accurate geocoding also supports consistent exposure assessment and more precise catastrophe and portfolio analytics. Public-sector frameworks from organizations such as the Federal Emergency Management Agency and the United Nations Office for Disaster Risk Reduction emphasize that hazard, exposure, and vulnerability cannot be meaningfully assessed without reliable spatial context, and that errors in the location information used to join assets to hazard data can materially affect risk estimation.
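Joining an asset to a hazard zone ultimately reduces to a spatial predicate such as point-in-polygon. Here is a simplified, stdlib-only ray-casting sketch; production workflows use GIS libraries such as Shapely or PostGIS, and real hazard polygons are far more complex than the hypothetical rectangle below:

```python
from typing import List, Tuple

def point_in_polygon(lon: float, lat: float,
                     polygon: List[Tuple[float, float]]) -> bool:
    """Ray-casting test: is (lon, lat) inside a polygon of (lon, lat) vertices?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Does edge (j, i) straddle the point's latitude, and does a
        # westward ray from the point cross it?
        if (yi > lat) != (yj > lat) and \
           lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Hypothetical rectangular flood zone ((lon, lat) vertices) and two assets.
flood_zone = [(-81.88, 26.63), (-81.88, 26.65),
              (-81.86, 26.65), (-81.86, 26.63)]
exposed = point_in_polygon(-81.8723, 26.6415, flood_zone)      # True
not_exposed = point_in_polygon(-81.9000, 26.6415, flood_zone)  # False
```

If the asset's coordinates are off by even one street, this predicate can flip, which is why the join is only as good as the geocode feeding it.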
An accurate understanding of an asset’s location is the first, and most foundational, step in assessing risk. However, across the global P&C insurance industry, geocoding errors remain widespread enough to materially affect underwriting accuracy, portfolio analysis, and catastrophe modeling.
These inaccuracies limit insurers’ ability to take full advantage of high-resolution geospatial data and can lead to mispriced risk, inefficient workflows, and suboptimal customer outcomes. Without precise building-level location data, even the most sophisticated risk models are built on an unstable foundation.
Many insurance carriers rely on a single geocoding service. But every geocoding service is different: performance varies by geography and building type, depending on how and where each model was trained, and some services are optimized for navigation and consumer mapping rather than building identification. Relying on a single geocoding source can therefore introduce systematic bias and error, particularly where addresses are messy or inconsistent.
Insurers often try to catch these issues through manual checks or spot reviews, but that approach doesn't hold up once portfolios reach the scale at which most insurers operate; validation has to be automated. Even then, results differ from one provider to the next, depending on the data each uses and how its systems perform in different regions. That variation doesn't disappear just because the rest of the workflow is well designed.
Addressing this challenge requires moving beyond single-point geocoding toward approaches that explicitly measure agreement, uncertainty, and error, rather than assuming any one source is correct.
EarthDaily’s Geocoding Consensus Algorithm overcomes these limitations by fusing outputs from multiple global geocoders with aggregated building and parcel-level data. Rather than depending on a single source, GCA evaluates consensus across providers to determine the most accurate representation of a property’s true location.
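EarthDaily has not published GCA's internals, so the following is only an illustrative sketch of the general consensus idea, not the proprietary algorithm: given candidate coordinates from several geocoders, pick the medoid (the candidate minimizing total distance to the others) and report how tightly the providers agree. All coordinates below are hypothetical.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]  # (lat, lon)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(phi1) * math.cos(phi2)
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def consensus_location(candidates: List[Point]) -> Tuple[Point, float]:
    """Return the medoid candidate and an agreement radius in meters
    (max distance from the medoid to any other candidate)."""
    medoid = min(candidates,
                 key=lambda p: sum(haversine_m(*p, *q) for q in candidates))
    radius = max(haversine_m(*medoid, *q) for q in candidates)
    return medoid, radius

# Hypothetical outputs from four geocoders: three agree at building level,
# one snapped ~100 m away to the road.
candidates = [
    (26.64150, -81.87230),
    (26.64151, -81.87231),
    (26.64149, -81.87229),
    (26.64240, -81.87230),  # outlier
]
location, agreement_m = consensus_location(candidates)
```

A small agreement radius signals that independent providers converge on the same building, while a large one flags the record for closer scrutiny; a production system would also weight providers and fold in building-footprint and parcel data.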
This approach enables consistent, scalable, and building-level precision across geographies. The GCA returns:
Together, these outputs provide transparency, trust, and a strong foundation for downstream risk analytics.
By resolving location uncertainty at the source, the Geocoding Consensus Algorithm enables insurers to fully leverage high-resolution geospatial data across underwriting, portfolio management, and risk analytics. When asset locations are consistently and accurately identified at the building level, downstream models can operate on stable inputs, hazard attribution becomes more reliable, and decisions are less likely to be distorted by hidden spatial error.
As insurers incorporate increasingly granular data into their workflows, the quality of the underlying location data becomes a limiting factor. Getting location data right at the start reduces the need for manual clean-up later and makes downstream analysis more reliable across large portfolios.
If you’re looking for consistent, property-level risk intelligence you can trust at scale, connect with the EarthDaily team or explore how our data supports insurance risk decisions.