Commentary: Can AI developers avoid Frankenstein's fateful mistake?

Dov Greenbaum and Mark Gerstein, Los Angeles Times


Audiences already know the story of Frankenstein. The gothic novel — adapted dozens of times, most recently in director Guillermo del Toro's haunting revival now available on Netflix — is embedded in our cultural DNA as the cautionary tale of science gone wrong. But popular culture misreads author Mary Shelley's warning. The lesson isn't "don't create dangerous things." It's "don't walk away from what you create."

This distinction matters: The fork in the road comes after creation, not before. All powerful technologies can become destructive — the choice between outcomes lies in stewardship or abdication. Victor Frankenstein’s sin wasn’t simply bringing life to a grotesque creature. It was refusing to raise it, insisting that the consequences were someone else’s problem. Every generation produces its Victors. Ours work in artificial intelligence.

Recently, a California appeals court fined an attorney $10,000 after 21 of 23 case citations in their brief proved to be AI fabrications — nonexistent precedents. Hundreds of similar instances have been documented nationwide, growing from a few cases a month to a few cases a day. This summer, a Georgia appeals court vacated a divorce ruling after discovering that 11 of 15 citations were AI fabrications. How many more went undetected, ready to corrupt the legal record?

The problem runs deeper than irresponsible deployment. For decades, computer systems behaved predictably: a pocket calculator returns the mathematically correct answer every time. Engineers could demonstrate how an algorithm would behave. Failures meant implementation errors, not uncertainty about the system itself.

Modern AI changes that paradigm. A recent study reported in Science confirms what AI experts have long known: plausible falsehoods — what the industry calls “hallucinations” — are inevitable in these systems. They’re trained to predict what sounds plausible, not to verify what’s true. When confident answers aren’t justified, the systems guess anyway. Their training rewards confidence over uncertainty. As one AI researcher quoted in the report put it, fixing this would “kill the product.”

This creates a fundamental veracity problem. These systems work by extracting patterns from vast training datasets — patterns so numerous and interconnected that even their designers cannot reliably predict what they’ll produce. We can only observe how they actually behave in practice, sometimes not until well after damage is done.

This unpredictability creates cascading consequences. These failures don't disappear; they become permanent. Every legal fabrication that slips in undetected enters databases as precedent. Fake medical advice spreads across health sites. AI-generated "news" circulates through social media. This synthetic content is even scraped back into training data for future models. Today's hallucinations become tomorrow's facts.

So how do we address this without stifling innovation? We already have a model in pharmaceuticals. Drug companies cannot be certain of all biological effects in advance, so they test extensively, with most drugs failing before reaching patients. Even approved drugs face unexpected real-world problems. That’s why continuous monitoring remains essential. AI needs a similar framework.

Responsible stewardship — the opposite of Victor Frankenstein's abandonment — requires three interconnected pillars. First: prescribed training standards. Drug manufacturers must control ingredients, document production practices and conduct quality testing. AI companies should face parallel requirements: documented provenance for training data, contamination monitoring to prevent problematic synthetic content from being reused, prohibited content categories and bias testing across demographics. Pharmaceutical regulators require transparency; AI companies currently need to disclose little.

Second: pre-deployment testing. Drugs undergo extensive trials before reaching patients. Randomized controlled trials, a major scientific achievement, were developed to demonstrate safety and efficacy. Most candidate drugs fail. That's the point. Testing catches subtle dangers before deployment. AI systems for high-stakes applications, including legal research, medical advice and financial management, need structured testing to document error rates and establish safety thresholds.

Third: continuous surveillance after deployment. Drug companies are obligated to track adverse events involving their products and report them to regulators. In turn, regulators can mandate warnings, restrictions or withdrawal when problems emerge. AI needs equivalent oversight.

 

Why does this need regulation rather than voluntary compliance? Because AI systems are fundamentally different from traditional tools. A hammer doesn’t pretend to be a carpenter. AI systems do, projecting authority through confident prose, whether retrieving or fabricating facts. Without regulatory requirements, companies optimizing for engagement will necessarily sacrifice accuracy for market share.

The trick is regulating without crushing innovation. The EU’s AI Act shows how hard that is. Under the Act, companies building high-risk AI systems must document how their systems work, assess risks and monitor them closely. A small startup might spend more on lawyers and paperwork than on building the actual product. Big companies with legal teams can handle this. Small teams can’t.

Pharmaceutical regulation shows the same pattern. Post-market surveillance prevented tens of thousands of deaths when the FDA discovered that Vioxx — an arthritis medication prescribed to more than 80 million patients worldwide — doubled the risk of heart attacks. Still, billion-dollar regulatory costs mean only large companies can compete, and beneficial treatments for rare diseases, perhaps best tackled by small biotechs, go undeveloped.

Graduated oversight addresses this problem, scaling requirements and costs with demonstrated harm. An AI assistant with low error rates faces little more than continued monitoring. Higher rates trigger mandatory fixes. Persistent problems? Pull it from the market until it's fixed. Companies either improve their systems to stay in business, or they exit. Innovation continues, but now with more accountability.

Responsible stewardship cannot be voluntary. Once you create something powerful, you’re responsible for it. The question isn’t whether to build advanced AI systems — we’re already building them. The question is whether we’ll require the careful stewardship those systems demand.

The pharmaceutical framework — prescribed training standards, structured testing, continuous surveillance — offers a proven model for critical technologies we cannot fully predict. Shelley's lesson was never about the creation itself. It was about what happens when creators walk away. Two centuries later, as del Toro's adaptation reaches millions this month, the lesson remains urgent. This time, with synthetic intelligence rapidly spreading through our society, we might not get another chance to choose the other path.

____

Dov Greenbaum is professor of law and director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at Reichman University in Israel.

Mark Gerstein is the Albert L. Williams Professor of Biomedical Informatics at Yale University.


©2025 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.

 
