
Adverse Selection Isn't Random — The Patterns Are Detectable Before You Bind

[Figure: Adverse selection detection dashboard]

Adverse selection is not a market force that arrives without warning. It arrives in your submission pipeline weeks or months before it shows up in your loss ratio, and the indicators are present in the ACORD data you already collect. The problem is not a lack of information — it is that most carriers do not analyze submission patterns at the book level in real time, so the adverse selection signal accumulates silently until the accident year matures enough to make the problem obvious.

By then, the damage is done. The policies are bound, the premium is inadequate for the risk, and the only question is how large the reserve strengthening will be. The actuarial response — retrospective recognition of a problem that underwriting data could have flagged prospectively — is the default mode for carriers without submission-level risk monitoring.

Three Structural Patterns of Adverse Selection

Adverse selection in commercial P&C lines tends to follow three distinct patterns, each with different causes and different signatures in submission data.

The first pattern is geographic concentration in CAT-exposed zones. When a carrier starts accepting disproportionate volume from a territory with elevated CAT exposure — Southeast coastal counties in PCS wind zones, Gulf Coast flood plains, California wildfire interface areas — the aggregate CAT exposure grows without a corresponding increase in premium per unit of risk. This pattern is detectable by monitoring the geographic distribution of new business against the carrier's filed CAT zone appetite. A sudden increase in submissions from a specific PCS wind zone, particularly at premium levels that reflect flat-rating rather than territory-specific loadings, is a predictable precursor to unfavorable CAT results.
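To make the monitoring check concrete, here is a minimal sketch in Python, assuming each intake record carries a wind-zone tag and the carrier maintains per-zone appetite ceilings. The zone codes and ceiling values are illustrative, not filed figures:

```python
from collections import Counter

# Hypothetical appetite ceilings: the maximum share of new-business
# submissions the carrier will accept from each PCS wind zone.
# Zone codes and values are invented for illustration.
APPETITE_CEILING = {"PCS-WZ-1": 0.10, "PCS-WZ-2": 0.15, "PCS-WZ-3": 0.25}

def wind_zone_alerts(submissions, ceilings=APPETITE_CEILING):
    """Flag wind zones whose share of recent submissions exceeds the
    appetite ceiling. `submissions` is a list of dicts with a
    'wind_zone' key (assumed intake schema)."""
    counts = Counter(s["wind_zone"] for s in submissions)
    total = sum(counts.values())
    alerts = []
    for zone, n in counts.items():
        share = n / total
        if share > ceilings.get(zone, 1.0):
            alerts.append((zone, share, ceilings[zone]))
    return alerts

# Example: 60 days of intake, heavily skewed toward two coastal zones.
recent = [{"wind_zone": "PCS-WZ-1"}] * 18 + [{"wind_zone": "PCS-WZ-3"}] * 42
for zone, share, ceiling in wind_zone_alerts(recent):
    print(f"{zone}: {share:.0%} of submissions vs {ceiling:.0%} ceiling")
```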

The second pattern is occupancy class concentration. When a wholesaler or MGA has experienced deterioration in a specific occupancy class — say, restaurant and food service businesses with elevated fire frequency — the submissions from that class flow disproportionately to carriers that have not yet updated their pricing to reflect the deterioration. A regional carrier that prices restaurant risks at standard commercial property rates without a surcharge for cooking equipment hazard will attract adverse selection from accounts that more sophisticated competitors have declined or surcharged.
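The same intake data supports a simple drift check on occupancy class mix. A sketch, assuming the carrier tracks each class's historical share of submissions (the class names, baseline shares, and threshold are invented for illustration):

```python
def class_mix_drift(current_counts, baseline_share, min_ratio=2.0):
    """Flag occupancy classes whose current share of intake is at least
    `min_ratio` times their historical baseline share, a signal that
    competitors may have repriced the class first."""
    total = sum(current_counts.values())
    flags = {}
    for cls, n in current_counts.items():
        share = n / total
        base = baseline_share.get(cls, 0.0)
        if base > 0 and share / base >= min_ratio:
            flags[cls] = (round(share, 2), base)
    return flags

# Restaurants historically ~6% of intake; over the last 60 days, 20%.
print(class_mix_drift(
    {"restaurant": 20, "office": 50, "retail": 30},
    {"restaurant": 0.06, "office": 0.55, "retail": 0.39},
))  # {'restaurant': (0.2, 0.06)}
```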

The third pattern is suppressed loss history. The ACORD 125 application requests five years of prior loss history. Accounts with significant prior losses that are presented with only partial history — or with loss runs that show low reported losses because large claims have been reopened and moved to a different policy year — are disproportionately likely to generate claims under the new policy. Comparing loss history gaps (missing years, unusually clean recent years relative to older years) against the account's SIC code and operational characteristics is a systematic way to identify potential loss suppression.

Why Portfolio-Level Monitoring Catches What Individual File Review Misses

Senior underwriters reviewing individual submissions are often skilled at identifying adverse risk within a single file. They recognize that a roofing contractor in a coastal county submitting at the carrier's book rates is probably a risk other carriers have declined. But the individual file review process cannot detect emerging book-level patterns because no single reviewer sees enough submissions simultaneously to notice that 40% of new restaurant submissions in the last 60 days come from the same wholesaler in the same three states — a pattern that would be immediately visible in a submission pipeline dashboard.

The monitoring gap is structural. Individual underwriter review is optimized for per-risk assessment; portfolio-level adverse selection requires aggregate pattern recognition. These are different analytical tasks, and trying to do both simultaneously in a manual review process means one or both will be done poorly.

The practical solution is to separate the two functions: individual risk assessment remains with underwriters, but aggregate submission pattern analysis is automated and delivered as a portfolio dashboard that underwriting managers review on a weekly basis. The dashboard need not be complex — a few key metrics tracked over time, with alerts when those metrics deviate from historical norms, provides most of the analytical value.
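A minimal version of that alerting rule, assuming weekly metric values are stored as a simple series. The threshold and sample data are illustrative:

```python
import statistics

def deviation_alert(history, current, z_threshold=2.0):
    """Return True when the current weekly value sits more than
    `z_threshold` standard deviations from its historical mean."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    if sd == 0:
        return current != mean
    return abs(current - mean) / sd > z_threshold

# Weekly count of restaurant submissions from one wholesaler.
weekly_history = [3, 4, 2, 5, 3, 4, 3, 4]
print(deviation_alert(weekly_history, current=12))  # True: surface on the dashboard
```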

Reading CAT Exposure in Submission Data

CAT exposure is the highest-stakes adverse selection vector for property carriers because a single hurricane season can produce losses that dwarf the book's annual premium in the affected territory. Yet CAT exposure assessment at most carriers is conducted at the aggregate portfolio level by the CAT modeling team, rather than at the individual submission level by underwriters. There is often no feedback loop from the CAT model to the underwriter's binding decision at the time the submission arrives.

The missing step is geocoding submissions at intake and comparing their location against the carrier's filed CAT zone appetite and accumulation limits. A submission for a commercial building in Broward County, Florida, should trigger an immediate accumulation check: how much commercial property premium is the carrier already writing in that county, and does the prospective addition push total county-level TIV above the accumulation ceiling? If the answer is yes, the submission should route to a senior underwriter for manual CAT assessment before the standard pricing workflow proceeds.
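A sketch of that accumulation check, with an in-memory ledger standing in for the carrier's accumulation tracking system. All TIV figures are invented for illustration:

```python
# Current total insured value (TIV, USD) written per county, and the
# filed accumulation ceiling. Both figures are illustrative.
COUNTY_TIV = {"FL-Broward": 180_000_000}
COUNTY_CEILING = {"FL-Broward": 200_000_000}

def accumulation_route(county, submission_tiv):
    """Route to senior CAT review when the prospective addition pushes
    county-level TIV above the ceiling; otherwise proceed to standard
    pricing."""
    prospective = COUNTY_TIV.get(county, 0) + submission_tiv
    ceiling = COUNTY_CEILING.get(county)
    if ceiling is not None and prospective > ceiling:
        return "senior_underwriter_cat_review"
    return "standard_pricing_workflow"

print(accumulation_route("FL-Broward", 35_000_000))
# senior_underwriter_cat_review: 215M prospective vs 200M ceiling
```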

Most carriers have access to geocoding data — ACORD 125 requires a physical location address for all commercial property risks, and geocoding services are a commodity. The barrier is system integration: connecting the submission intake workflow to the accumulation tracking system so that the CAT check is automatic rather than a manual step that individual underwriters may or may not perform.
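As one illustration of how commoditized the geocoding step is, here is a sketch using the open-source geopy library. A production intake pipeline would more likely call a commercial geocoder, but the integration shape is the same: address in, coordinates out, feeding the accumulation check above.

```python
from geopy.geocoders import Nominatim  # pip install geopy

geolocator = Nominatim(user_agent="submission-intake-demo")

def geocode_submission(address):
    """Resolve the ACORD 125 physical location address to coordinates.
    Returns None when the address cannot be resolved, which should
    route the file to manual review rather than silently proceed."""
    loc = geolocator.geocode(address)
    return (loc.latitude, loc.longitude) if loc else None

# Example address, for illustration only.
print(geocode_submission("115 S Andrews Ave, Fort Lauderdale, FL"))
```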

Loss Run Red Flags and How to Identify Them

Loss runs are the primary mechanism by which prior experience influences underwriting decisions, and they are also the submission element most vulnerable to selective presentation. An insured who has experienced a large claim is motivated to present loss history in a way that minimizes the apparent severity of that claim — by presenting only recent years, by showing a claim as "closed at $0 additional" when in fact it was reopened in a prior policy period, or by attributing large claims to extraordinary events rather than the account's underlying loss profile.

Several specific patterns in loss runs warrant elevated scrutiny. Missing years are the most obvious: if a five-year loss run shows only three years, the missing years may contain the worst experience. A loss run covering only three of the five requested years should be checked against the application's prior carrier history to confirm the presentation is complete.

Timing anomalies are subtler: a commercial account with two prior claims under $10K each, then suddenly clean for the most recent 18 months, may have had a large claim that was settled and moved to a prior policy year's financials. Comparing the loss run's inception dates against the application's prior carrier field can sometimes reveal discontinuities that suggest incomplete presentation.
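Both checks reduce to a few lines once loss runs are parsed into structured form. A sketch, assuming presented policy years are available as a list and each claim is represented by its age in months (the schema is invented for illustration):

```python
def missing_years(presented_years, expected_years):
    """Return the policy years absent from a presented loss run,
    given the window the application should cover."""
    return sorted(set(expected_years) - set(presented_years))

def suspiciously_clean_recent(claim_ages_months, window=18):
    """Flag accounts that show prior claims but none in the most
    recent window: the timing anomaly described above."""
    return bool(claim_ages_months) and min(claim_ages_months) > window

print(missing_years([2021, 2022, 2024], range(2020, 2025)))  # [2020, 2023]
print(suspiciously_clean_recent([30, 26]))  # True: claims exist, none recent
```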

Finally, uniformly low severity across multiple years in a high-hazard SIC code is itself a yellow flag. A restaurant with five years of loss-free experience is not implausible, but it is unusual enough to warrant verification against the insured's claims history through a comprehensive inquiry — not just acceptance of the presented loss run.
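One way to quantify "unusual enough": under a simple Poisson frequency assumption, compute how likely a loss-free run actually is for the account's class. The expected frequency below is invented for illustration; a real screen would use the carrier's own experience or bureau data.

```python
import math

# Illustrative expected annual claim frequency by SIC code.
# SIC 5812 = eating places. The 0.45 figure is invented.
EXPECTED_FREQ = {"5812": 0.45}

def clean_history_plausibility(sic, clean_years=5):
    """P(zero claims over `clean_years`) under a Poisson frequency
    assumption. A low value means the presented loss-free run
    warrants independent verification."""
    lam = EXPECTED_FREQ.get(sic)
    return math.exp(-lam * clean_years) if lam is not None else None

print(f"{clean_history_plausibility('5812'):.1%}")  # about 10.5%
```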

Automating the Adverse Selection Screen

The operational challenge is that the checks described above require both individual account analysis and portfolio-level monitoring, applied consistently across every submission, without materially slowing the underwriting workflow. Manual application of all these checks is not feasible at the submission volumes most commercial carriers handle. Automation is the only way to make the screening consistent and scalable.

The automation architecture is straightforward: at submission intake, extract the key ACORD fields that drive adverse selection risk — location, SIC code, prior loss history, years in business, submission source — and run them through a rule-based flagging layer before the file reaches an underwriter's queue. High-priority flags route to senior underwriters. Clear submissions move through the standard workflow. Medium-priority flags generate a checklist for the assigned underwriter rather than an escalation.
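A sketch of that flagging layer, wired to the checks described above. The input field names are illustrative outputs of those checks, not ACORD fields, and the thresholds are invented:

```python
def screen_submission(sub):
    """Rule-based adverse selection screen run at intake. Returns a
    routing decision plus the flags that drove it."""
    high, medium = [], []
    if sub.get("county_over_ceiling"):
        high.append("CAT accumulation over county ceiling")
    if sub.get("loss_run_missing_years"):
        medium.append(f"loss run missing years: {sub['loss_run_missing_years']}")
    if sub.get("clean_history_probability", 1.0) < 0.15:
        medium.append("implausibly clean loss history for SIC class")
    if high:
        return "senior_underwriter", high + medium
    if medium:
        return "standard_queue_with_checklist", medium
    return "standard_queue", []

print(screen_submission({
    "county_over_ceiling": False,
    "loss_run_missing_years": [2020, 2023],
    "clean_history_probability": 0.9,
}))  # ('standard_queue_with_checklist', ['loss run missing years: [2020, 2023]'])
```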

As we discuss in our article on submission scoring methodology, the model that drives this triage needs to be trained on a dataset that includes declined and non-renewed submissions, not just bound business. Adverse selection patterns are defined by the accounts that cause losses relative to the accounts that do not — and both populations need to be in the training data for the model to learn the distinction.

Conclusion

Adverse selection is not an unforeseeable market outcome. It is a predictable consequence of pricing and selection processes that do not incorporate available information about submission quality. The ACORD data that carriers already collect contains the signatures of the three major adverse selection patterns: geographic CAT concentration, occupancy class deterioration, and loss history suppression. The question is whether that data is analyzed systematically at the portfolio level, or only inspected one file at a time by overloaded underwriters who see the trees but cannot see the forest.

Carriers that build the portfolio monitoring infrastructure to detect these patterns before they bind materially adverse business will show more stable combined ratios than those that rely on retrospective loss analysis to identify problems after they have already developed.

See RiskVert's adverse selection detection in action.

Contact us at support@riskvertx.com or request a demo.