Apple has the most to gain from smartphone-sized AI


By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

On-device AI is Apple’s game to lose

Tim Cook strongly hinted to analysts during an earnings call last week that his company will build new AI features into its various operating systems later this year. Apple already uses some machine learning in its phones for photography features, but Cook’s comments suggest Apple has been working on adding generative AI features to its phone OS, perhaps for tasks like generating image captions or drafting text messages.

Apple is unlikely to offer generative AI as a stand-alone app or feature, but might very well use the burgeoning technology to improve existing experiences (Siri, for example). Apple is in a very good position to do that. It controls both the OS and the phone, so it could give its AI model access to the device’s sensors, mics, and cameras (while denying that access to AI apps offered by third parties). The data collected from the cameras, microphones, and sensors could be used to paint a picture of the user’s world, which might make the iPhone better at anticipating and suggesting useful information or features. Long term, Apple could move toward offering a uniquely “personal AI”—maybe called Siri—in its phones and maybe in other devices like AR glasses.

Apple has already spent years doing the groundwork of reassuring users that their personal data is safe on their iPhone. That could prove very important if the company eventually wants to let users train the AI on their iPhone with their personal and private information, so it can understand and anticipate their work and life habits and needs. Users would need to feel assured that their most personal and private data stays on the device rather than traveling up to the cloud. Because Apple would be making both the model and the processor that runs it on the phone, it could give that assurance. And Apple has already made that sort of security and privacy promise to users for financial data, such as in Apple Pay.

Increasingly, researchers across the industry are working to shrink large language models (LLMs) down to a size mobile chips can handle. Google’s new Gemini Nano model, for example, resides on Google’s Pixel and other Android phones (and new Android features are good predictors of new iPhone features). The challenge is shrinking the models while leaving them smart enough to still perform advanced tasks on a mobile device.

“The classic trade-off is accuracy versus the size of the model,” says Sophie Lebrecht, COO at the nonprofit Allen Institute for AI. But some accuracy-related trade-offs make sense, depending on the demands of the application. “How fast does the model need to work? How accurate does the model need to be?” Lebrecht says. “We may end up seeing situations where we have more customized models, where we say ‘I don’t need to know how many legs a caterpillar has, I just need to do the task that I need.’”
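To make that trade-off concrete, here is a minimal, hypothetical sketch (not Apple’s or Google’s actual pipeline) of one common shrinking technique, post-training quantization, which stores model weights as 8-bit integers instead of 32-bit floats. The tiny stand-in model below is for illustration only.

```python
import io
import torch
import torch.nn as nn

# A stand-in "model" far smaller than any real on-device LLM.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)

# Post-training dynamic quantization: store Linear weights as int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Approximate serialized size of a model's weights in megabytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

# The int8 copy is roughly 4x smaller, but may answer slightly less
# accurately: the accuracy-versus-size trade-off Lebrecht describes.
print(f"fp32: {size_mb(model):.2f} MB  ->  int8: {size_mb(quantized):.2f} MB")
```

The same idea, applied at billion-parameter scale and combined with other tricks like pruning and distillation, is how researchers try to fit useful models into a phone’s memory and power budget.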

The quest to make AI learn like babies

Babies have a remarkable ability to recognize and label objects in the real world. Some researchers even believe babies are born with an innate sense of language. LLMs have no such ability, and, compared to children, they learn about the world very slowly. Now a team of researchers from several universities has taken an important step toward teaching AI models to learn more the way babies do. Their work was published last week in Science.

To capture audiovisual data, the researchers fitted a group of kids (aged 6 months to 2.5 years) with helmets equipped with a camera and a microphone. The angle of the camera indicated which object, person, or pet had caught the child’s attention. Using the camera, the researchers collected thousands of images, labeling them with the audio picked up by the microphone (a parent pointing at an object and saying its name, for example).

They then used the data to train an AI model. They found the model started to get a feel for language after training on a relatively small dataset. That stands in sharp contrast to the way models like GPT-4 are pretrained: by processing huge, complex datasets on hundreds or thousands of expensive servers running nonstop for months. Language models might be able to learn faster, far less expensively, and with a far smaller carbon footprint if they saw and heard the world more the way children do.
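As an illustration only (the study’s own code and model details differ, and every name below is a stand-in), here is a minimal PyTorch sketch of the general idea: pair each camera frame with the words heard at the same moment, and train two small encoders so that matching frame/utterance pairs land close together, a contrastive approach in the spirit of the work described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyImageEncoder(nn.Module):
    """Maps a camera frame to a normalized embedding vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class TinyTextEncoder(nn.Module):
    """Maps the words in an utterance (as token IDs) to an embedding."""
    def __init__(self, vocab=5000, dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)  # averages word embeddings
        self.proj = nn.Linear(dim, dim)
    def forward(self, tokens):
        return F.normalize(self.proj(self.emb(tokens)), dim=-1)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Each frame should match its own utterance better than any other's.
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(len(img_emb))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Fake batch standing in for helmet-camera frames and the words heard.
frames = torch.randn(8, 3, 64, 64)
utterances = torch.randint(0, 5000, (8, 6))  # 6 word IDs per utterance

img_enc, txt_enc = TinyImageEncoder(), TinyTextEncoder()
loss = contrastive_loss(img_enc(frames), txt_enc(utterances))
loss.backward()  # an optimizer step would follow in real training
print(f"contrastive loss: {loss.item():.3f}")
```

The key design choice is that the supervision comes “for free” from co-occurrence: the only labels are the words the child actually heard while looking at something, not annotations added after the fact.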

Meta chief AI scientist Yann LeCun says that today’s large models, which are mainly trained using text from the internet, aren’t as smart as toddlers and don’t learn as quickly. LeCun explains that babies, unlike machines, can take in huge amounts of information through their optic nerves in real time. AI models might learn much more efficiently if they could process video and audio, say, from a repository like YouTube, but researchers have not found a good way to do that yet. Teaching AI models to learn faster and more cheaply may be a key step toward the holy grail of artificial general intelligence (AGI), in which machines can do pretty much everything humans can.


AI lobbying surged in 2023 

OpenSecrets, at the request of CNBC, counted up the companies, institutions, and organizations that lobbied in 2023 to influence the way the government regulates AI. They found that more than 450 organizations lobbied on AI-related issues (either through their own staff or via K Street lobbyists), a 185% increase over 2022. Together, those companies spent $957 million on lobbying, although most of them also lobbied on other issues, not just AI.

The companies included OpenAI, Tesla, Nvidia, Palantir, TikTok owner ByteDance, Andreessen Horowitz, Pinterest, Shopify, Disney, and chipmaker TSMC.

Big Tech firms believe their businesses could be impacted if the government imposes binding rules on the development and use of AI models. The Biden administration issued an executive order last year calling for Congress to work toward passing such rules. Lawmakers, meanwhile, have been hurrying to get up to speed on new LLMs and image generators.  

So far, the government has mostly asked for voluntary self-regulation from the Big Tech companies developing AI. But that could, in theory, change this year. If Congress acts, it’s likely to target applications of AI that present specific, near-term risks, such as systems that process sensitive personal data (health, financial, identity, etc.) or systems used in making consequential lending or housing decisions.
