
What Consumer AI Gets Wrong


It is an indelible attribute of humankind that we seek and accept simple answers. When those answers arrive instantly, the experience becomes both satisfying and satiating, because people crave consistent, thorough, and authoritative answers. Such is the case with the consumer AI applications offered by all the usual-suspect search engines. There is, however, a huge problem: People have stopped questioning the results. I am, of course, assuming that people ever did question the sources they consulted, but stay with me for a few minutes.

These days, when you pose a question to a common search engine, a block of AI-generated information pops up on the screen before the link list appears, rendering that list of links somewhat superfluous. This phenomenon is observable, ubiquitous, and personal. I have noticed myself succumbing to the comfortable, magisterial simplicity of these sirens of the internet. That is precisely why I'm writing this article. I feel compelled to remind myself, and the world, that this instant information is not always correct. Very often the material is biased, and sometimes it is nothing more than complete babble. Occasionally, the AI wizard behind the curtain will admit that there is no answer to your inquiry, but more often it redirects, or misdirects, you to a question, word, or name that it guesses "you actually mean," as opposed to the specific request you made.

The other day, I asked my favorite search engine for information about a well-known purveyor of fake luxury watches. Watches, or "timepieces," to use the correct community jargon, are a new obsession of mine, as Silicon Valley has yet to invite me back from exile, but I digress. The "company" I requested information on has a name of what I will call "common construction," meaning it could easily be mistaken for the name of a legitimate company. Putting aside the fact that many of these organizations have a half-life reminiscent of an unstable subatomic particle, I had expected to discover some incriminating information, salacious details of law enforcement actions, or even an episode of American Greed. I would love to use the flippant quip "crickets," but that isn't what happened. The AI system generated pages of information on what it "speculated I was looking for." So, I carefully reconstructed my query with great literary clarity.

"Tell me if this company is selling fake Rolex watches and where they come from." The company has a Chicago address. I still got nothing useful. Please allow me to use a cliché now: I got, in fact, "a whole lot of nothing." I did receive many platitudinous warnings to be wary of what the watch industry politely refers to as "replicas." There was also a plethora of useless information referencing law enforcement actions, alongside trite disclaimers ensuring that I didn't interpret the results as conclusive of any lawbreaking or chicanery of any sort. Again, a whole lot of nothing.

ACCURACY RESEARCH 

So, how often is consumer AI wrong, or at least dangerously misleading? We don't really know, of course, but there are studies that can be found with a simple search of that same omnipresent internet. The Fuel Cells & Hydrogen Observatory (fchobservatory.eu/how-often-is-ai-wrong), a European think tank and financial analytics organization, concludes that AI is wrong often but has yet to quantify that claim with any satisfying metric.

Ars Technica cites Columbia Journalism Review's Tow Center for Digital Journalism, which claims that generative AI results can be up to 60% wrong (arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says). If that claim holds, simple math reveals that consumer AI is possibly more wrong than right. Simple logic might even suggest that a consumer would benefit by doing the exact opposite of any advice received.
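To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch of my own, assuming (as a simplification) that the Tow Center's 60% figure applies uniformly to every query:

    # Back-of-the-envelope check: if 60% of AI search answers are wrong,
    # how many correct answers should a reader expect from 100 searches?
    error_rate = 0.60                      # figure reported by the Tow Center study
    searches = 100                         # hypothetical number of queries
    expected_correct = searches * (1 - error_rate)
    print(f"Expected correct answers: {expected_correct:.0f} of {searches}")
    # Output: Expected correct answers: 40 of 100
    # Any error rate above 0.5 means more wrong answers than right ones.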

What is worse is a particular quote in the article: "Citation error rates varied notably among the tested platforms." This means there isn't even any consistency to the inaccuracy. I hope the readers of this article understand that caution is paramount when glancing at that first splattering of information posing as a convincing result on their screens.

Futurism concludes that the situation is even more dire. "This may come as a shock, but it turns out that an astounding proportion of AI search results are flat-out incorrect" is not only the basis of that article's title but also its lead sentence (futurism.com/study-ai-search-wrong). It too cites research from the Tow Center for Digital Journalism, which, unsurprisingly, concludes that major consumer AI applications are wrong approximately 60% of the time. Some are worse than others.

I didn't work very hard to collect this research. Ironically, I performed a simple Google search. I thought I would need to peruse multiple pages of results and dozens of articles to find evidence supporting my hypothesis, but amazingly, the three articles cited here appeared on the first screen of my search.

It's clear that the general populace needs to learn to consume AI-related answers to any internet search with significant skepticism, if not cynical suspicion. I think it's clear that the entire world is being used as beta testers. The masters of the universe who control the Six Cities of Silicon Valley are developing a multi-trillion-dollar industry around "artificial intelligence," a 1950s term that their marketing geniuses have rediscovered and recoined. I'm so glad I'm not among the group who simply reads an AI-generated answer, accepts it, and acts upon it. However, I need to finish this article now. I just saw an email pop up advertising an incredibly cheap watch that I have had my eye on. The "store" has a Vegas address, but I'm pretty sure it's legitimate.
