Can the AI boom thwart San Francisco’s bust?

 

By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m looking at how San Francisco is being transformed by the burgeoning AI industry. Also, OpenAI shuts down its AI detection tool, and big media companies are beginning to band together to demand payment from AI firms for use of their content to train models.

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on Twitter @thesullivan.

AI in my neighborhood

Just two years ago, unprecedented numbers of people were leaving the Bay Area because of COVID-19. Big tech companies abandoned their office spaces, leaving the Financial District a shadow of its former self. As COVID recedes, people are now trying to rebuild the social and public life they knew before the virus. The city seems to be searching for a new identity, and the AI boom, which was ignited last year with the release of ChatGPT, is, in a small but growing way, providing it. 

A new Brookings Institution study says that 60% of new generative AI job listings are concentrated in just 15 cities, and San Francisco is leading the pack by a wide margin. Most of the best-funded generative AI startups are San Francisco-based. OpenAI lives in a large, plain-looking building in the Mission District. Anthropic’s office is located near the Montgomery Street BART station on Market Street. Perplexity has a small space up in Bernal Heights. Scale AI, Midjourney, and Databricks are located here too.

Living near Alamo Square in San Francisco, I see signs of this budding industry. My neighborhood is full of people who work for Big Tech companies, including Google, Meta, and Apple, all of which are shifting more and more resources to AI R&D. In the local coffee shops, people talk about OpenAI and Midjourney. One cafe a few blocks away has become known as a place where young AI entrepreneurs pitch VCs. A local VC firm, Index Ventures, recently held a get-together of AI journalists and AI company heads at a swanky nearby eatery on Divisadero Street. And just down the street from me is Hayes Valley, a part of town that has been dubbed “Cerebral Valley” because of all the AI startup founders and developers in the area, some of whom share living/working spaces in so-called hacker houses.

Perhaps the most obvious sign of AI can actually be found on the city streets, with the growing number of self-driving cars. Waymo- and Cruise-owned cars began showing up in San Francisco earlier this year, at first with human drivers inside. These days, however, there’s often no human behind the wheel. Both companies say they have hundreds of vehicles on the street, some of which are transporting riders, while others are training or collecting mapping data. People in the northwest corner of the city (including my neighborhood) can hail a self-driving Cruise through the company’s app.

There are certainly downsides to a city relying on a fickle, profit-driven tech industry for its identity. But right now, any narrative is preferable to the endless reports of San Francisco’s crime, homelessness, and open-air drug markets. The story of my city’s “AI boom” is likely just beginning.

After Biden asks for AI detection, OpenAI shuts down AI detection tool

A year from now, much of the content we see online will be AI-generated. Given that prospect, we’d naturally want to make sure any AI-generated text, images, and video are clearly marked as such. Indeed, just last week, the major AI model developers pledged to the Biden administration that they would find technological methods of identifying and labeling AI content. But that same week, OpenAI quietly shut off access to its AI content detection tool, AI Classifier. The company says the tool had a low accuracy rate (26%). It turns out, even the smartest AI systems are bad at identifying AI-generated content.

This could add to the misinformation and propaganda problem voters face heading into next year’s election season. AI generation tools make it easier and cheaper to create highly targeted (and sometimes misleading) political messages at scale. Voters will see political messages carefully tailored to their political bent and their hot-button issues, in the same way that TikTok uses AI to deliver a continuous stream of videos that the user likes, Cloudera cofounder and ex-Google VP Amr Awadallah tells me. 

 

However, Awadallah says, when it comes to misinformation and disinformation, detecting AI is still only half the battle. People need to be ready to recognize twisted facts or propaganda regardless of whether it was generated by an AI or a person, he says. Awadallah’s new company, Vectara, uses large language models to help companies, including AI model developers, distinguish facts from propaganda, provide additional trusted information, and cite sources. “You have the confidence now that the response being given is the correct response,” he says, “or it’s a balanced response, meaning it’s giving you both angles [on a given topic].”

Barry Diller’s IAC, New York Times, and News Corp. plan to sue AI companies for “billions”

As we’ve discussed here before, companies like OpenAI train their AI chatbots by feeding them enormous amounts of content, data, and code scraped from the internet. They then profit from selling access to the models. Publishers of the content are beginning to demand their cut.

We’ve already seen some high-profile cases filed by content owners against OpenAI and others. Comedian Sarah Silverman filed suit earlier this month against OpenAI for training its models on her material without permission. Now, some more powerful players are planning to have a go at AI developers in court. Semafor reports that Barry Diller’s company, IAC (which owns titles such as The Daily Beast and Ask.com), and a group of big publishers, including the New York Times and News Corp., plan to form a coalition to demand payment from the AI developers, via court order if necessary. The group may also press for new laws requiring payment for training data.

The courts have just begun determining whether AI companies violate publishers’ copyright when they use internet content to train models, or when the models output data or facts from content owned by the publishers. The publishing industry is closely watching a Delaware lawsuit in which Thomson Reuters is suing an AI developer for using its Westlaw service’s content for training. The AI developer argues that using the content is covered under the “fair use” provisions of copyright law. Stay tuned; the legal skirmishes are just getting started.

