
Using memes, social media users have become red teams for half-baked AI features

“Running with scissors is a cardio exercise that can increase your heart rate and require concentration and focus,” says Google’s new AI search feature. “Some say it can also improve your pores and give you strength.”

Google’s AI feature pulled this response from a website called Little Old Lady Comedy, which, as its name makes clear, is a comedy blog. But the gaffe is so ridiculous that it’s been circulating on social media, along with other obviously incorrect AI overviews on Google. Effectively, everyday users are now red teaming these products on social media.

In cybersecurity, some companies will hire “red teams” – ethical hackers – who attempt to breach their products as though they’re bad actors. If a red team finds a vulnerability, then the company can fix it before the product ships. Google certainly conducted a form of red teaming before releasing an AI product on Google Search, which is estimated to process trillions of queries per year.

It’s surprising, then, that a highly resourced company like Google still ships products with obvious flaws. That’s why it’s now become a meme to clown on the failures of AI products, especially at a time when AI is becoming more ubiquitous. We’ve seen this with bad spelling on ChatGPT, video generators’ failure to understand how humans eat spaghetti, and Grok AI news summaries on X that, like Google’s, don’t understand satire. But these memes could actually serve as useful feedback for companies developing and testing AI.

Despite the high-profile nature of these flaws, tech companies often downplay their impact.

“The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences,” Google told TechCrunch in an emailed statement. “We conducted extensive testing before launching this new experience, and will use these isolated examples as we continue to refine our systems overall.”

Not all users see the same AI results, and by the time a particularly bad AI suggestion gets around, the issue has often already been rectified. In a more recent case that went viral, Google suggested that if you’re making pizza but the cheese won’t stick, you could add about an eighth of a cup of glue to the sauce to “give it more tackiness.” As it turned out, the AI pulled this answer from an eleven-year-old Reddit comment from a user named “f––smith.”

Beyond being an incredible blunder, it also signals that AI content deals may be overvalued. Google has a $60 million contract with Reddit to license its content for AI model training, for instance. Reddit signed a similar deal with OpenAI last week, and Automattic properties WordPress.org and Tumblr are rumored to be in talks to sell data to Midjourney and OpenAI.

To Google’s credit, a lot of the errors that are circulating on social media come from unconventional searches designed to trip up the AI. At least I hope no one is seriously searching for “health benefits of running with scissors.” But some of these screw-ups are more serious. Science journalist Erin Ross posted on X that Google spit out incorrect information about what to do if you get a rattlesnake bite.

Ross’s post, which got over 13,000 likes, shows that the AI recommended applying a tourniquet to the wound, cutting the wound, and sucking out the venom. According to the U.S. Forest Service, these are all things you should not do if you get bitten. Meanwhile, on Bluesky, the author T Kingfisher amplified a post showing Google’s Gemini misidentifying a poisonous mushroom as a common white button mushroom – screenshots of the post have spread to other platforms as a cautionary tale.

When a bad AI response goes viral, the AI could get more confused by the new content around the topic that comes about as a result. On Wednesday, New York Times reporter Aric Toler posted a screenshot on X that shows a query asking if a dog has ever played in the NHL. The AI’s response was yes – for some reason, the AI called the Calgary Flames player Martin Pospisil a dog. Now, when you make that same query, the AI pulls up an article from the Daily Dot about how Google’s AI keeps thinking that dogs are playing sports. The AI is being fed its own mistakes, poisoning it further.

This is the inherent problem of training these large-scale AI models on the internet: sometimes, people on the internet lie. But just like how there’s no rule against a dog playing basketball, there’s unfortunately no rule against big tech companies shipping bad AI products.

As the saying goes: garbage in, garbage out.

Source link: Techcrunch
