Commentary: How to save Wikipedia from AI
Artificial intelligence is killing Wikipedia.
The volunteer-run online encyclopedia just issued a stark warning: AI is exploiting the site’s data, siphoning off its traffic, and threatening its future. Contrary to what you were taught in middle school, Wikipedia is remarkably accurate. It’s used by doctors and professional fact-checkers. Even large language models (LLMs) and the AI products they power, such as Elon Musk’s new Wikipedia competitor “Grokipedia,” rely on the site for training data. That means all of us, including AI companies, would be worse off in a world without Wikipedia.
The good news is that we have an educational playbook to fight back. Wikipedia’s warning includes a plea for internet users to look for citations and click through to original sources. Our research group has shown that after a few hours of instruction, people can learn to do just that: trace information to its source and make more accurate judgments about online content.
A viral ad from the AI company Perplexity shows why sites such as Wikipedia are in trouble: it lays out AI’s biggest selling point and, in doing so, highlights its greatest danger.
The ad stars Lee Jung-jae, from the hit show “Squid Game.” Trapped in an ominous hallway, taunted by a malevolent computer, Lee must answer a series of questions. “How do you remove coffee stains from a white shirt?” His hands tremble as he opens a mock-up of Google’s results page. Overwhelmed by a patchwork of confusing websites, Lee can see no clear answer.
He pivots to Perplexity, which answers his query in a single sentence, uncluttered by links. Zero friction. Zero ambiguity. Ultra-convenient. And that’s exactly the problem.
Too often, AI amputates information from its source, promising it can preserve the integrity of the dismembered limb even when blood ceases to flow. More worrying is that users appear content with these excised outputs. Even when AI answers include links, people often fail to click through to the original sources.
Wikipedia isn’t the only one in trouble. AI is also chomping away at news outlets’ traffic: One study found that AI-generated overviews reduce clicks to outside sources by nearly 35%. Users now appear to be satiated by algorithm-generated responses, making AI look like a time machine that catapults us back to the worst days of the internet.
Early search engines such as AltaVista profited the longer you stayed on their portals, flashing annoying banners and pop-ups that prioritized ad revenue over search quality. Google rendered its competitors extinct by helping you find what you wanted, fast, even if it meant less time on its site. The company’s mission to “organize the world’s information” aligned with the interests of content producers, who benefited from the traffic Google directed their way.
Today, publishers are racing to prepare for “Google Zero”: the day when they will no longer get traffic from the tech behemoth. OpenAI just released ChatGPT Atlas: a web browser designed to keep users away from websites, caging them within its interface. AI is trying to fuse the failed business model of early search engines with the revolution Google brought about: improved technology that keeps you immured inside a product ecosystem, supposedly without compromising the quality of the results.
That’s a big problem — and not just because of AI’s tendency to hallucinate or blend credible outlets with Reddit threads, YouTube videos, and random blogs. A self-contained AI ecosystem reduces the motivation to create and publish high-quality, free content. AI appears to be returning us to a narrower web, where information producers and those who organize that information have radically different incentives.
AI may be driving new threats to collective knowledge. But individuals aren’t blameless. Well before AI, studies on search behavior showed that many users ceded credibility judgments to Google, clicking on higher-ranked results even when those further down were more relevant. One of our studies showed college students taking answers at face value without questioning the source behind them — akin to blindly trusting an AI summary or what a chatbot has to say.
Behavior like this is not set in stone. A body of research from Canada, Germany, India, the United States, and elsewhere shows that we can teach students to investigate who’s behind the information and to trace answers back to their original source. Our own studies everywhere from Nebraska and Texas to Illinois and Georgia show that even a few hours of instruction substantially boost students’ ability to make wise choices about what to believe online.
Education is at its best — more magical than an LLM ever will be — when it moves us from being passive recipients of knowledge to active explorers who reason about what we see. The challenge we face is to move from being passive users of AI to engaged citizens who can verify AI responses that, as a common expression goes, are “frequently wrong but never in doubt.”
AI companies appear willing to trade the long-term integrity of information ecosystems such as Wikipedia for short-term gain. But education can disrupt this corrupt bargain, a perilous pact in which we elect the easy route, relinquish our thinking, and blithely fork over our trust to a machine.
_____
Nadav Ziv is a researcher at the Digital Inquiry Group. Sam Wineburg is co-founder of DIG and an emeritus professor of education at Stanford University. His latest book, with co-author Mike Caulfield, is “Verified: How to Think Straight, Get Duped Less, and Make Better Decisions About What to Believe Online.”
_____
©2025 Chicago Tribune. Visit at chicagotribune.com. Distributed by Tribune Content Agency, LLC.