What GenAI Is Not

Rohan Yashraj Gupta

January 2, 2026

Actuaries work with models every day.

GLMs for pricing.

Chain ladder for reserving.

Survival models for longevity.

GenAI sounds like it belongs in that toolkit.

It doesn't.

Not a Pricing Model

A GLM takes data and produces predictions through mathematical relationships.

You feed it claim counts, exposures, and rating variables.

It estimates coefficients that minimize prediction error.

You can inspect those coefficients.

You can validate the link function.

You can check residuals.
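
As a minimal sketch of what that inspection looks like, here is a Poisson frequency GLM in statsmodels; the data, variable names, and age bands are all hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical frequency data: claim counts, exposures, one rating variable
df = pd.DataFrame({
    "claims":   [3, 1, 0, 5, 2, 4],
    "exposure": [1.0, 0.5, 0.8, 2.0, 1.2, 1.5],
    "age_band": ["18-25", "26-40", "26-40", "18-25", "41-60", "18-25"],
})

# Poisson GLM with a log-exposure offset -- a standard frequency setup
fit = smf.glm(
    "claims ~ age_band",
    data=df,
    family=sm.families.Poisson(),   # log link is the Poisson default
    offset=np.log(df["exposure"]),
).fit()

print(fit.params)          # coefficients you can inspect
print(fit.resid_deviance)  # residuals you can check
```

Every number in that output traces back to an estimation procedure you can audit.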

GenAI takes text and produces more text.

Ask it to price a policy, and it will write something that sounds like a pricing analysis.

It might mention relativities.

It might reference credibility.

It might even suggest numbers.

But it didn't calculate anything.

It completed a pattern it saw in training data.

A GLM estimates. GenAI mimics estimation.

Not a Reserving Method

Chain ladder methods transform triangles into reserve estimates through explicit arithmetic.

You apply development factors.

You calculate ultimates.

You know exactly how each number was derived.
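
A minimal sketch of that arithmetic, on a made-up 3x3 cumulative paid triangle (all figures hypothetical):

```python
import numpy as np

# Hypothetical cumulative paid triangle: rows = accident years,
# cols = development periods, NaN = not yet observed
tri = np.array([
    [100.0, 150.0, 165.0],
    [110.0, 170.0, np.nan],
    [120.0, np.nan, np.nan],
])

# Volume-weighted development factors from each observed column pair
factors = []
for j in range(tri.shape[1] - 1):
    seen = ~np.isnan(tri[:, j + 1])
    factors.append(tri[seen, j + 1].sum() / tri[seen, j].sum())

# Roll each accident year forward to ultimate; reserve = ultimate - paid to date
for row in tri:
    last = np.count_nonzero(~np.isnan(row)) - 1   # latest observed period
    ult = row[last]
    for j in range(last, tri.shape[1] - 1):
        ult *= factors[j]
    print(f"paid {row[last]:7.1f}  ultimate {ult:7.1f}  reserve {ult - row[last]:6.1f}")
```

Twenty lines of explicit arithmetic. Nothing in a language model does anything like this.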

GenAI cannot perform chain ladder.

It can describe chain ladder beautifully.

It can format a triangle.

It can draft the narrative around reserve movements.

But if you ask it to calculate the IBNR for a specific triangle, you're asking the wrong tool.

It will produce numbers.

Those numbers will look plausible.

They won't be correct.

Chain ladder calculates reserves. GenAI explains reserves.

Not a Decision Maker

Machine learning models make decisions.

A fraud detection model flags suspicious claims.

A lapse model predicts policyholder behavior.

These models encode decision logic through training on outcomes.
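
Contrast that with a minimal sketch of a decision model, here a logistic regression on made-up labeled outcomes (features, amounts, and labels are all hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: [claim amount, days to report], label 1 = confirmed fraud
X = np.array([[500, 2], [12000, 45], [800, 5], [15000, 60], [900, 3], [11000, 50]])
y = np.array([0, 1, 0, 1, 0, 1])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# The decision is a threshold on a probability learned from outcomes, not on prose
new_claim = np.array([[13000, 40]])
prob = clf.predict_proba(new_claim)[0, 1]
print("flag for review:", prob > 0.5)
```

The flag comes from fitted parameters and your data, not from how fraud memos tend to read.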

GenAI encodes language patterns.

If you ask it "Should we increase rates for this segment?" it will generate an answer.

That answer reflects how actuaries typically write about rate increases.

It doesn't reflect your company's data.

Your risk appetite.

Your competitive position.

It's a generic response dressed up in specific-sounding language.

ML models decide based on data. GenAI writes based on patterns.

The Dangerous Middle Ground

The confusion happens because GenAI is very good at sounding authoritative.

It uses the right terminology.

It structures arguments logically.

It hedges in all the right places.

A poorly trained actuary might produce a worse-looking pricing memo.

But that actuary is still doing actual analysis underneath.

GenAI is only ever producing text.

Where the Lines Blur

GenAI can assist models without being one.

It can:

  • Generate SQL queries that feed into your pricing model
  • Summarize model output for non-technical stakeholders
  • Suggest features to test in your next GLM iteration

These are language tasks wrapped around real models.

The model still does the work.

GenAI handles the translation layer.
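
Here is a sketch of that division of labour, with a hypothetical ask_llm function standing in for whichever GenAI API you actually use (the table, query, and data are all made up):

```python
import sqlite3
import pandas as pd

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GenAI API call.
    A real call would go to whatever model you use; here it returns
    a canned answer so the sketch runs end to end."""
    return ("SELECT age_band, SUM(claims) AS claims, SUM(exposure) AS exposure "
            "FROM policy_data GROUP BY age_band")

# Tiny in-memory table standing in for your policy database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policy_data (age_band TEXT, claims INT, exposure REAL)")
conn.executemany("INSERT INTO policy_data VALUES (?, ?, ?)",
                 [("18-25", 3, 1.0), ("26-40", 1, 0.5), ("18-25", 5, 2.0)])

# GenAI's job ends at the language layer: drafting the query
sql = ask_llm("Summarise claims and exposure by age band")

# The actual analysis runs on real tools and real data
df = pd.read_sql(sql, conn)
print(df)   # this frame feeds the GLM; the model, not GenAI, estimates
```

Note where the boundary sits: a human reviews the generated SQL, and everything downstream of it is ordinary, auditable tooling.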

The Test

Here's a simple way to know if you're using GenAI appropriately:

Could a competent actuary verify this output without redoing the entire analysis?

If yes, you're using it well.

If no, you're treating it like a model when it's just a writer.

GenAI doesn't belong in your modeling toolkit.

It belongs in your communication toolkit.

Models analyze.

GenAI articulates.

Keep them separate, and you'll use both well.