Google’s AI Is Still Recommending Putting Glue in Your Pizza, and This Article Is Part of the Problem

Despite explaining away issues with its AI Overviews while promising to make them better, Google is still apparently telling people to put glue in their pizza. And in fact, articles like this are only making the situation worse.

When they launched to everyone in the U.S. shortly after Google I/O, AI Overviews immediately became the laughingstock of search, telling people to eat rocks, use butt plugs while squatting, and, perhaps most famously, to add glue to their homemade pizza.

Most of these offending answers were quickly scrubbed from the web, and Google issued a somewhat defensive apology. Unfortunately, if you use the right phrasing, you can reportedly still get these blatantly incorrect “answers” to pop up.

In a post on June 11, Bluesky user Colin McMillen said he was still able to get AI Overviews to tell him to add “1/8 cup, or 2 tablespoons, of white, nontoxic glue to pizza sauce” when asking “how much glue to add to pizza.”

The question seems purposefully designed to mess with AI Overviews, sure—although given the recent discourse, a well-meaning person who’s not so terminally online might legitimately be curious what all the hubbub is about. At any rate, Google did promise to address even leading questions like these (as it probably doesn’t want its AI to appear to be endorsing anything that could make people sick), and it clearly hasn’t.

Perhaps more frustrating is the fact that Google’s AI Overview sourced the recent pizza claim to Katie Notopoulos of Business Insider, who most certainly did not tell people to put glue in their pizza. Rather, Notopoulos was reporting on AI Overviews’ initial mistake; Google’s AI simply attributed that mistake to her because she had written about it.

“Google’s AI is eating itself already,” McMillen said, in response to the situation.

I wasn’t able to reproduce the response myself, but The Verge did, though with different wording: The AI Overview still cited Business Insider, but rightly attributed the initial advice to Google’s own AI. Which means Google AI’s source for its ongoing hallucination is…itself.

What’s likely going on here is that Google stopped its AI from using sarcastic Reddit posts as sources, but it’s now turning to news articles reporting on its mistakes to fill in the gaps. In other words, as Google messes up and people report on it, Google then uses that reporting to back up its initial claims. The Verge compared it to Google bombing, an old tactic where people would link the words “miserable failure” to a photo of George W. Bush so often that Google Images would return a photo of the president when you searched for the phrase.
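To make that feedback loop concrete, here’s a toy sketch in Python. It is purely illustrative and reflects nothing about Google’s actual pipeline; the URLs and the naive word-matching “retrieval” are invented stand-ins for any answer engine that ends up indexing coverage of its own mistakes.

```python
# Toy illustration of the self-citation loop described above.
# This is NOT Google's system; it's a minimal stand-in for any
# retrieval-backed answer engine that indexes news about itself.

corpus = [
    ("reddit.com/r/Pizza", "Just add 1/8 cup of glue to the sauce lol"),
]

def overview(query, corpus):
    """Naive retrieval: cite the first indexed page sharing words with the query."""
    terms = set(query.lower().split())
    for url, text in corpus:
        if terms & set(text.lower().split()):
            return f'"{text}" (source: {url})'
    return "No answer found."

# Round 1: the engine surfaces the sarcastic post as its answer.
answer = overview("how much glue to add to pizza", corpus)

# A news outlet reports on the blunder, and that article gets indexed.
corpus.append(("news.example.com/ai-glue-pizza",
               f"An AI search tool told users: {answer}"))

# Round 2: even after the original joke post is scrubbed, the engine
# now cites the news article quoting its own earlier answer.
corpus.pop(0)
print(overview("how much glue to add to pizza", corpus))
```

In this cartoon version, removing the original bad source doesn’t help: the reporting about the mistake now matches the query just as well, so the loop sustains itself.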

Google is likely to fix this latest AI hiccup soon, but it’s all a bit of a “laying the train tracks as you go” situation, and it’s certainly not likely to do anything to improve AI search’s reputation.

Anyway, just in case Google attaches my name to a future AI Overview as a source, I want to make it clear: Do not put glue in your pizza (and leave out the pineapple while you’re at it).

Source: Lifehacker.com