Google was accidentally leaking its Bard AI chats into public search results

By Chris Stokel-Walker

AI-powered chatbots are designed to be the always-available advisors that will power the next era of productivity. But they have a privacy problem—as evidenced by a recent misstep by Google.

The search giant, which developed its Bard AI chatbot in response to the arrival of OpenAI’s ChatGPT late last year, has been inadvertently leaking conversations into its search results—an issue that is now being remedied after having been made public.

SEO consultant Gagan Ghotra first raised the issue, noting that URLs linking to conversations users had with Bard were showing up in Google’s search index, the database of pages that the search engine crawls in order to provide answers to users’ queries.

Ghotra shared a screenshot showing users’ private conversations with Bard, including one asking for tips on how to spot otters in Singapore and another on how to use Bard to improve one’s writing. Clicking on those links would take users to the historical record of a conversation another user had with Bard on the topic. Peter Liu, a research scientist at Google DeepMind, quickly clarified that the chat conversations appearing in search results had previously been shared with other users via the Share functionality common to AI chatbots, which allows users to provide links to people of their choosing.

Liu was criticized for what some saw as an attempt to downplay the seriousness of the breach of users’ privacy expectations. Critics suggested that users who shared links to their chats were doing so with only a small circle of people they chose, rather than all and sundry on the internet. “It’s a pretty subtle point, but it’s actually really important,” says Simon Willison, the AI critic and creator of Datasette.

“This is fundamentally a privacy feature,” says Willison. “Everything you say to Bard is private by default, but there’s a feature that lets you share that conversation.” Willison suggests that users would expect their conversations to be shared only with whomever they choose. He points to the text used in the user interface for Bard’s Share functionality. “‘Let anyone with the link see what you’ve selected,’” he says. “That implies that only people who you send the link to will be able to see the conversation.”

That wording, Willison says, suggests that there is a limit to where the conversation will go. “Your shared link showing up in random searches is a breach of your expectations when you used the share feature,” he says. Willison points out that ChatGPT has a similar Share button on its service. “But ChatGPT sharing, as it should do, includes code in the HTML to block the shared content from being indexed by search engines,” he says. “Google Bard evidently forgot to include that, and as a result Google Search started sweeping up those conversations.”
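The mechanism Willison is describing is the standard noindex signal: a page can tell search engines not to index it either with a robots meta tag in its HTML or with an X-Robots-Tag response header. As a rough illustration rather than a description of either company’s actual implementation, the Python sketch below checks whether a given page carries that signal; the URL in the usage example is a hypothetical placeholder, not a real shared-chat link.

```python
# Illustrative sketch: check whether a page asks search engines not to index
# it, via either the X-Robots-Tag response header or a
# <meta name="robots" content="noindex"> tag in the HTML.
from html.parser import HTMLParser
from urllib.request import urlopen


class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                self.directives.append((attrs.get("content") or "").lower())


def is_blocked_from_indexing(url: str) -> bool:
    """Return True if the page at `url` opts out of search indexing."""
    with urlopen(url) as response:
        # Header-based directive, e.g. "X-Robots-Tag: noindex"
        header = (response.headers.get("X-Robots-Tag") or "").lower()
        if "noindex" in header:
            return True
        body = response.read().decode("utf-8", errors="replace")

    # Meta-tag-based directive, e.g. <meta name="robots" content="noindex">
    parser = RobotsMetaParser()
    parser.feed(body)
    return any("noindex" in directive for directive in parser.directives)


if __name__ == "__main__":
    # Hypothetical placeholder URL; substitute any shared page to test.
    print(is_blocked_from_indexing("https://example.com/share/abc123"))
```

A shared page that returns True here would be skipped by well-behaved crawlers; Bard’s shared conversations evidently carried no such signal, which is why they ended up in Google’s index.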

The issue is compounded by the fact that the inclusion of chat conversations in search results, which Google DeepMind’s Liu seemed to imply was by design and simply the Share functionality working as intended, runs contrary to how Google has handled sharing in the past. “This is very different from what Google does with products like Google Docs and Google Drive,” says Margaret Mitchell of AI company Hugging Face, who points out that on those products, the share options for Enterprise users warn you about the risks of sharing content with people outside your organization’s domain.

That inconsistency between what Google has previously done and what it appears to be doing now is confusing for everyday users, she reckons. “For users who have an expectation of consistency, or an assumption of basic privacy aligned with the norms Google has already established in its products, this is creepy and underhanded,” she says. “Whoever approved this is way out of sync with what is already known about privacy at Google. Or intentionally overriding it, which is a pretty intense move for something Google admits is an experiment.”

When Fast Company showed Mitchell what had happened, her initial reaction was shock. A Google spokesperson declined to comment on the record for this story, but pointed to a tweet from Danny Sullivan, the company’s public liaison for search, who seemingly admitted this was an error. “Bard allows people to share chats, if they choose,” Sullivan wrote. “We also don’t intend for these shared chats to be indexed by Google Search. We’re working on blocking them from being indexed now.”

The issue is such a concerning one because of the way users have adopted AI chatbots as the 21st-century confession booth. Recent research shows users are willing to share private, identifying information with ChatGPT, while Lilian Weng, a member of OpenAI’s own AI safety team, suggested this week that users might want to try using her company’s chatbot as an alternative to a therapist. The risk of those conversations becoming public appears not to have been taken into consideration.

Google’s quick action to fix the breach is commendable, says Willison, but highlights a broader issue about the race for AI supremacy that Google, Microsoft, OpenAI, and a clutch of other companies are currently engaged in. “It’s a good illustration of how all three companies are moving at a breakneck speed to compete with each other,” he says, “which makes it more likely that mistakes like this one will slip through to production.”

Fast Company
