How AI is reshaping Google’s search page

By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

AI search will have a huge effect on Google—and brands 

For many people, Google’s search engine acts as a front door to the internet. And search is big business for Google: In 2022, more than half of its total revenue, $162.45 billion, came from search ads. But that formula might now be changing, thanks to AI.

Slowly, consumers are shifting from a keyword-based search experience—where they are presented with a barrage of ads to click through—to a conversational interaction that uses a search bot powered by a large language model. Such a pivot could have profound effects on Google’s core business.

Google knows this and has been developing its own AI search function, called Search Generative Experience (SGE). SGE was announced last May and so far has only been available as an “experiment” that users can try out. But SGE will very likely become a permanent fixture of Google’s search page for all users, says Jim Yu, founder of SEO firm BrightEdge. It’ll be triggered by certain kinds of keyword searches and will appear on the results page alongside the ads and links we’re used to seeing.

This could have big implications for brands that rely on Google ads to find new customers. When a customer searches for “best midsize cars,” for example, SGE will return a narrative summary of what it found, along with four or five examples of cars, a pros-and-cons list for each car, and even some snippets of reviews about the cars. That package of results is probably more helpful for someone searching for a car than a list of links, Yu says, but it’s also very opinionated (for example, saying that a given car is harder to maintain). If you’re the brand, Yu adds, you may wonder why you’re spending tens of thousands on internet advertising when Google’s search results are talking potential customers out of buying your product.

It’ll be important that a company’s various marketing groups—including those that manage paid search, organic search, location search, reputation, and reviews—work together to manage the brand’s image as it appears in AI-powered search, he says.

“How do I manage in this new world where all these different aspects of my digital presence are interconnected as I run these different campaigns?” Yu says. “Today they’re kind of talking to each other but they’re not really talking to each other; they’re not really orchestrated, and that’s going to change.” 

AI is named as a major factor in the Doomsday Clock’s time 

The Bulletin of the Atomic Scientists said Tuesday that the Doomsday Clock remains at 90 seconds to midnight, the same as it was last year. But this year marks the first time generative AI was cited as one of the major global dangers. (As usual, the Bulletin’s board members call out nuclear weapons as the biggest existential threat, with biological weapons and climate change close behind.)

What’s interesting about AI in this context is that the tech can act as a contributing factor to the other major threats. For example, somebody might ask an AI chatbot to provide detailed instructions on how to design a bioweapon, says Herb Lin, a member of the Bulletin of the Atomic Scientists and a senior research scholar for cyber policy and security at the Hoover Institution at Stanford University.

But Lin is also concerned about the capacity of AI to “pollute the information space” with so much generated content that it’s impossible to differentiate between reliable, human-written truth and machine-written misinformation. “I personally believe that the threat of AI to the information space is in fact an existential threat, but the Bulletin hasn’t officially adopted that position,” he says.

AI companies are addressing the threat of chatbots generating misleading or dangerous content by imposing “guardrails” on their models. But Lin doubts the efficacy of that approach. “You put up guardrails when you don’t understand what the machine is doing,” he says. Guardrails can be applied for specific hazardous or toxic outputs, but researchers can’t delve into the depths of the large language model and locate the flaw that made it generate the bad content. That’s the interpretability problem I wrote about last year.

OpenAI’s Sam Altman argues that we can’t look into a human’s brain and pinpoint their reason for thinking or saying something, but we can ask a human to explain their reasoning. He says the same approach can be used to understand the output of AI systems.

AI governance had its day in the sun at Davos

AI has been a big topic at Davos in the past, but this year the term seemed to be everywhere, competing with the wars in Ukraine and Gaza for top billing. Numerous panels, keynotes, and workshops at the World Economic Forum’s annual event in the Alps centered on how governments and the private sector might work together to manage the many risks of AI. Accenture CEO Julie Sweet was even personally conducting AI governance workshops for C-suite executives.

Navrina Singh, founder and CEO of the cloud-based AI governance platform Credo AI, says she was fascinated by the top billing given to AI governance this year. “Compared to last year, or compared to 2022, this year there was a movement toward action and operationalization,” says Singh, who has spoken on the subject numerous times at Davos.

Singh says AI governance might become a more common term in 2024, if for unfortunate reasons. It’s very possible that generative AI systems could be used in unforeseen and harmful ways to spread political misinformation, undermine confidence in the electoral system, or keep people away from the polls. “This is going to be the year that we recognize how much impact AI is going to have on something that is so fundamental to us,” Singh says.

Fast Company – technology