OpenAI launches its store for customized ChatGPTs

By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

OpenAI launches its GPT Store for specialized chatbots

This morning, OpenAI launched its “GPT Store,” which features versions of ChatGPT that have been customized by users to perform specific tasks. ChatGPT users have now made three million GPTs, OpenAI says, and some of them will want to go public with their creations. Once in the store, users’ GPTs become searchable and, if they’re cool or useful, may climb the leaderboards. 

The company says it will curate the store, showcasing the most “useful and delightful” GPTs in various categories, such as writing, research, programming, education, and lifestyle. (OpenAI’s image generation tool DALL·E also gets its own category in the store.) The store will feature GPTs from companies like Consensus (whose GPT lets users search and synthesize insights from 200 million academic papers), Khan Academy’s Code Tutor (which can teach new coding skills), and Canva (which helps design flyers, social posts, or business materials).

OpenAI is rolling out the store at chat.openai.com/gpts to its ChatGPT Plus, Team, and Enterprise users beginning today. (The store is only for paying customers, however.) OpenAI says it’ll soon start letting builders earn money as people download and use their creations. The company hasn’t provided details such as rates and policies, but says it’ll do so soon.

OpenAI also announced a new “ChatGPT Team” service, which offers teams of people within companies access to models like GPT-4 and DALL·E 3, as well as a dedicated collaborative workspace and admin tools for team management. As with OpenAI’s ChatGPT Enterprise offering, data sent to the LLM or generated by the LLM is walled off so that only the team can access it. OpenAI says it won’t use the data to train other models.

Stanford study shows the perils of legal AI

Remember the story about the New York lawyer who filed a legal brief filled with bogus information generated by ChatGPT? The story went viral, and may have caused Chief Justice John Roberts to wring his hands over the role of AI in legal circles in his end-of-year report on the federal judiciary. Now, a group of researchers at Stanford’s Institute for Human-Centered Artificial Intelligence has measured and mapped generative AI’s limitations in various types of legal comprehension and writing. 

The researchers studied the output of three large language models commonly used in legal applications: OpenAI’s GPT-3.5, Google’s PaLM 2, and Meta’s Llama 2. (The last of the three is open-source.) They asked each model more than 200,000 legal questions, ranging from simple (“who is the author of the opinion?”) to complex (“how is legal precedent A in tension with legal precedent B?”). The results are less than reassuring:

    Legal hallucinations are pervasive and disturbing: hallucination rates range from 69% to 88% in response to specific legal queries for state-of-the-art language models.
    In answering queries about a court’s core ruling (or holding), models hallucinate at least 75% of the time.
    LLMs hallucinate less when asked about higher-profile cases, such as those from the Supreme Court or the influential Second and Ninth Circuit Courts. They hallucinate more when asked about cases in lower courts (such as district courts), cases from courts in less populous parts of the country, and cases that involve localized information.
    In a task measuring the precedential relationship between two different cases, most LLMs do no better than random guessing.
    GPT-3.5 generally outperforms the other models but shows certain inclinations, like favoring well-known justices or specific types of cases.

“While flashy headlines about models passing the bar exam might lead you to think that lawyers will be replaced, our findings are much more sobering,” says Daniel Ho, Stanford law professor and associate director of HAI. “AI in legal practice is best conceived of as augmenting, not replacing, legal judgment. Or, as Chief Justice Roberts put it, AI should not ‘dehumaniz[e] the law.’”

Should your personal AI assistant live on your phone or somewhere else?

The Consumer Electronics Show is underway in Vegas this week, so it’s a good time to think about how AI might best be married with tech gadgets. As you might imagine, there’s all sorts of new hardware integrating AI, from smart bird feeders to AI dog bowls.

But some tech companies are gunning for the biggest use case of all—an AI assistant that’s with you at all times. Such an assistant would use advanced AI models to understand your habits and tastes, help you get things done, and even anticipate when you might want something. The new rabbit r1 personal AI device had its coming-out party on Tuesday. I wouldn’t be surprised if more dedicated personal AI assistants are announced this week.

But that’s just one way to package this “everyday AI.” It’s quite possible that people will prefer to have it contained in the hardware they’re already used to, like their smartphones. Companies like Qualcomm are hot to provide the chips that can run AI algorithms locally on the phone. Samsung and Asus are talking about new AI-imbued smartphones and features this week. Google already said it plans to put a version of its new Gemini LLM on its Pixel 8 Pro phone.

It’s also possible that your personal AI could ride around with you inside your AR glasses, identifying things and people in the real world, helping you cook, translating, suggesting apps, etc. Meta has been working on this problem for a few years now and has already built AI features into its Ray-Ban smart glasses. I’ve not yet heard about an AI-powered smartwatch, but I’m sure that’s coming.

In the coming years, you’ll see the tech industry taking cues from consumers as it tries to find the ideal device for personal AI. It seems unlikely that the smartphone will be unseated as our go-to tech device anytime soon, but we’re still in the early innings with personal AI. It’s possible that one day we’ll carry around a small device to talk through our daily tasks, and keep another tech device with a large screen on hand for watching videos or playing games.

Fast Company – technology
