Google is putting AI into your business processes—what could possibly go wrong?


By Chris Stokel-Walker

This week, Google announced an AI tool that can act as an automated notetaker in meetings and produce presentation materials from raw business data.

“Duet AI can create a whole new presentation, complete with text, charts, and images, based on your relevant content in Drive and Gmail,” Aparna Pappu, general manager and vice president of Google Workspace, said in a blog post announcing the feature’s rollout. “With Duet AI, we’re now helping people get back to the best parts of their jobs, to the parts that rely on human creativity, ingenuity, and expertise.” And given that Duet AI has been made available to Google Workspace’s three billion users for free on a trial basis (with a cost of $30 per user for large companies), the tool is likely to see wide and rapid adoption.

But there’s just one problem: Duet AI is powered by generative AI, which has a nasty habit of spouting false information.

Faced with the threat of competition, Google has embraced the AI revolution wholeheartedly. Earlier this month, it announced its “SGE while browsing” tool, which uses AI to summarize the content of web pages as users browse. The problem, as Fast Company reported, is that there’s no guarantee the underlying model hasn’t fabricated content. What’s more, unlike most humans, who will admit to mistakes when pressed long enough, generative AI tends to double down on its errors, denying that it has made up an answer.

That could be a major problem for Duet. When you ask the AI tool to parse vital business data, such as a company’s balance sheet or profit-and-loss statement, the risk that it misreads the data and then draws incorrect inferences about a business’s success or failure becomes truly significant.

For Beth Singler, an assistant professor at the University of Zurich, the hype around Google Duet AI showcases some of the underlying risks of putting too much trust in the AI revolution. “There is a massive problem in relying on AI-created summaries of information,” she says. “It’s just adding yet another layer of interpretation onto data, and it’s a layer created out of probabilities that words go together, not any actual understanding.”

In response to questions about Duet’s potential to veer into fiction, Google spokesperson Ross Richendrfer says, “We’re deeply aware of the limitations of LLMs and are taking a very deliberate approach to address these, including making end-users aware of these limitations.” He adds: “We’re releasing our new generative AI offerings with built-in guardrails from the start, and put our LLMs through rigorous internal and external testing to ensure that it meets our user needs and high standards for safety.”


Singler worries that simply advising people to double-check the outputs of such AI summarization tools won’t be enough to avoid potentially catastrophic mistakes, in part because of the way these tools have been sold to the public. “There can be real-world repercussions, as we might be encouraged to double-check for things like hallucinations and misinformation, but our science-fiction stories and narratives about AI are mostly about how much smarter and more rational it is than humans,” says Singler.

The almost mythical way AI has been presented to the public means that users are more likely to assume they are wrong than that the AI tool is. “So people are going to trust the machine’s answers over their own intuitions and experience,” says Singler, “time and time again.” (People do indeed appear quite willing to jump on this latest AI bandwagon: A recent survey by accounting software company Xero found that half of U.K. small businesses trust AI with identifiable customer information, while 40% trust it with sensitive commercial information.)

“Do we really need to stuff generative AI into anything?” asks Sasha Luccioni, a research scientist at the AI company Hugging Face. Still, in the grand scheme of things, Luccioni is less worried about this use than about some others, such as search. “All things considered, I see this as one of the not-so-bad usages of generative AI,” she says, “though it’s important to do a thorough sanity check before using any outputs.”
