Avi Loeb believes AI could save humanity—but first we have to stop feeding it junk food

The Harvard astrophysicist says AI could future-proof human civilization, but not without getting the training data right.

The thought that Avi Loeb might be an extremely advanced AI has crossed my mind more than once. It’s the only way I can explain how a man so prolific in his research, so busy with teaching, trips, and conferences, replies so swiftly to email.

Whether early in the morning, when the sun is barely out—after jogging in the forests by his home near the Harvard University campus in Cambridge, Massachusetts—or from the middle of the Pacific Ocean, after a long day searching for evidence of the first known interstellar meteor off the coast of Papua New Guinea, Loeb always answers my emails, seemingly within seconds of my sending them. But his replies, always kind, warm, and illuminating, can’t come from any of our current AIs.

Loeb—who is the Frank B. Baird Jr. Professor of Science at Harvard, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and bestselling author of Extraterrestrial and Interstellar—does have a lot of thoughts about artificial intelligence.

He has been thinking about the current approach to training AI, and how we can correct the path so that the technology can harness the best of humanity. Loeb believes that AI could ultimately become humanity’s eternal spirit, traveling the universe in the same way that ancient alien civilizations may have already done, sending AI probes across the immensity of the Milky Way.

I spoke with Loeb about all of this via video conference, and came out of the conversation full of both hope and despair. Loeb is a scientist who is never afraid to ask questions that others ignore. He has built a reputation as a (sometimes controversial) maverick in the scientific community by challenging the dominant orthodoxy anchored in the eternal fight for research money and the fear of ridicule in academia. 

He genuinely believes that science would have progressed faster if the adults practicing it were guided by their childhood curiosity. “Instead, experts often worry about their public image and pretend that they can explain all new evidence based on their past knowledge,” he says.

[Source Photo: choness/Getty Images]

Teach AI as we would our own children

Loeb believes AI is being developed too rapidly. Today, most systems are trained on vast amounts of data pulled from across the internet. This approach carries significant risks, Loeb tells me, as it could embed the worst aspects of humanity into the algorithms that will shape our future. 

He compares the process of training AI to how we raise children, emphasizing the importance of being just as cautious with AI as we are with young minds. “The way I understand it is that AI is being trained on all texts that are available on the internet. This is equivalent to taking a teenager or a young kid and exposing them to everything that you find in magazines, newspapers, everywhere,” he says. 

In very broad terms, this forced feeding is a product of companies’ insatiable need to keep training their large language models on as much information as possible so that they become, in essence, more complex and “smart.” The more information the models ingest, the better they become at responding to queries by predicting the most likely next bit of language.

While this has provided some immediate satisfaction to corporations and consumers, the strategy will inevitably lead to long-term harm to AI’s “brain”—and ultimately to everyone who uses AI. “It’s like saying, ‘Okay, we have some kids that we want to grow, and we have to feed them, so we will feed them with junk food so that they grow very fast,’” Loeb says. “You might say, ‘Okay, well, that may be a solution for one generation.’ But I don’t want to give authority to these kids that are eating junk food because they would be unhealthy in their mentality.” 

To extend the metaphor, we know that too much junk food leads to unhealthy outcomes; eventually, a bad diet can lead to disease and death. AI is not so different. As companies run out of fresh material, the quality of the text keeps decreasing until, eventually, AI feeds on its own output scattered around the web, causing models to collapse.

Low-quality training data can lead to systems that reflect and even amplify negative human behaviors, from racism to gender discrimination. Loeb equates it to raising a child in an environment filled with harmful influences. It’s a parent’s job to curate information that will help them raise their children to be responsible adults. In school, kids follow a structured curriculum for a reason. Shouldn’t we be equally careful in selecting the data we use to train AI? 

“Some of this material has negative content that is not constructive to society. Some people are not constructive to society,” Loeb says. “Instead [we should] imagine a society that is far better than what we humans were able to produce in the past.”

Better curation of training data is a moral obligation for future generations, Loeb says. He imagines a future where AI helps build a society that is collaborative, respectful, and focused on solving the major challenges facing humanity.

But this approach requires a fundamental shift in how we think about AI training. If we want AI to become a force for good in the world, we should feed it only the right data instead of simply feeding it more data. “Right now, they have a hunger for data because these models work on how many parameters you can get in,” Loeb notes. “But if we slow down and focus on the quality of the data rather than the quantity, we could end up with AI systems that are far more beneficial to society.”

[Source Photo: U.S. Navy]

Fast path vs. slow path

Loeb says one way to ease the current market pressures around AI could be a two-path development system for the technology: one approach for closed labs and another for public consumption. This bifurcation in training would allow AI scientists to research new engines faster in a controlled environment. Meanwhile, regular people would have access only to versions of the AI that, while less capable than the ones in the lab, would help us only in positive ways.

Loeb envisions a scenario in which research labs could experiment in a controlled environment, unbound by the protections that society requires. “You can develop it in the laboratory as long as you don’t apply it to society,” he says. He compares it to the way scientists approach drug and vaccine development. The laboratory can be a place of experimentation, but any proposed solution must go through several phases of testing to ensure it’s safe and will have an overall positive effect on people’s lives.

Innovation is essential to advancing civilization, but it must be guided by ethical considerations and a clear understanding of the potential consequences. Another way of thinking about this is how nuclear energy evolved, says Loeb. Experimentation was necessary for scientific progress, but there was also a clear acknowledgment of the need for caution and control.

“Let’s think about that,” Loeb says. “In order to develop nuclear physics, you had to do experiments in the laboratory. That was completely legitimate. . . . But once you understand how to make an atomic bomb, you want to have some limitation on the use of the atomic bomb.” 

Loeb says the risks posed by unregulated AI development are obvious. Without proper oversight and regulation, AI could become a powerful and potentially dangerous tool in the wrong hands. The stakes are incredibly high, and Loeb believes that the decisions we make now will have long-lasting consequences for the future of humanity.

One of Loeb’s primary concerns is that the rapid pace of AI development, driven by commercial interests, could lead to a situation where the technology outpaces our ability to control or understand it. He’s also concerned about AI falling into the hands of individuals or groups with malicious intent. In an increasingly interconnected world, even a single AI system with harmful programming could have far-reaching consequences. “This is a very open, very easy-to-use technology,” he says. “Like, any teenager in the house can actually be kind of an actor. They can become terrorists. They can introduce these systems on the internet and do bad things.” 

[Source Photo: fhm/Getty Images]

Design AI with human values

Loeb doesn’t believe stopping is the answer. “Some people say, ‘Let’s ban development for six months, have a moratorium.’ But we need to think about what would be a better approach,” he says. “And I’m suggesting the training set is the key—making sure the training set incorporates the values we want to have for the future.”

Current AI design processes overlook the importance of embedding ethical considerations into AI, despite what OpenAI or Google are telling us. And without a deliberate effort to instill values, Loeb argues, AI will evolve in ways that are misaligned with the best interests of humanity: “What is missing right now is how to introduce values to those intelligence systems.”

Loeb says this would require a proactive effort to consciously design the content that AI systems are trained on, ensuring that it embodies the values we want to see reflected in the technology. This could involve training AI on texts that emphasize collaboration, empathy, and ethical decision-making, rather than on content that reflects conflict, competition, and self-interest. “Imagine that the future would be generous, where people share ideas, where people support each other, where society works together to solve the major problems that face us,” he says. 

And yes, this will be a Herculean effort. It will cost a lot of money. It will not be the easy, cheap approach that OpenAI, Google, and other tech companies are currently taking by scraping the internet for everything (including copyrighted material and trashy Twitter posts). But Loeb believes that taking the time to carefully design and produce the content we use to train AI could lead to a significant payoff in the long run. While Loeb didn’t get into specifics, it’s clear to me that if companies like OpenAI, Meta, and Google really had the betterment of humanity in mind rather than their bottom line, they could eliminate access to the current models and retrain new ones guided by the responsibility principles Loeb outlines.

[Source Image: Westend61/Getty Images]

Loeb believes none of this matters without the necessary legal and regulatory frameworks for AI’s development. He says there needs to be a robust legal structure to hold developers accountable for the systems they create. This includes clear guidelines on who is responsible when AI systems cause harm, as well as mechanisms for ensuring that AI systems can be retrained or decommissioned if they act in ways that are contrary to societal values.

“The issue is who to punish if something wrong happens,” he says. “As long as the system is under the training phase, it’s obviously the developer’s responsibility. Also, as long as it’s not developed well beyond what the manufacturer or the distributor is doing—you know, there will be distributors of these systems—they [the manufacturers] should be held responsible. Just like if a self-driving car causes accidents, it’s the manufacturer that is held liable for that.”

Things get more complicated once AI reaches a point where it can no longer be controlled directly by its creators. Legal frameworks must be designed to address new challenges posed by AI systems that operate independently of human oversight. “There would be systems that evolve well beyond what the manufacturer or the training set was about, and that would be equivalent to kids leaving home and becoming autonomous,” Loeb says. “They become independent of the educational phase, and therefore their parents should not be held responsible.” 

It’s hard to comprehend now, he says, but AI systems that have become autonomous and commit harmful actions should face legal consequences, similar to how humans are treated under the law. Minor infractions could result in retraining, he points out, while more serious offenses could lead to the AI being permanently decommissioned.

To do all this, however, we are going to need strong leadership, especially from the United States but also from the rest of the world. The European Union seems to be ahead in these efforts. Meanwhile, during a meeting last year at the White House—where President Joe Biden and Vice President Kamala Harris invited CEOs from Big Tech companies to discuss “responsible artificial intelligence innovation”—there was neither substance nor seriousness on this extremely delicate problem. Loeb argues that most of these meetings end up being little more than performative gestures without real follow-through. To me, asking corporations to weigh in on responsible AI is like asking the wolves to weigh in on strategies to safeguard sheep.

Loeb believes the U.S. should take the lead in establishing a comprehensive regulatory framework for AI. However, he acknowledges the challenges in achieving this, especially given the differing perspectives and interests of various nations. “I think the United States has to decide about what is the appropriate path, reach out to the Chinese, and say, ‘Look, this is an issue of humanity. It’s nothing to do with competition,’” he says.

At this point, however, this seems extremely unlikely. On September 10, China refused to sign a nonbinding agreement barring AI systems from controlling nuclear weapons. And the U.S. is not much better: Washington has consistently blocked international efforts to ban the use of AI in autonomous weapons.

[Source Photo: NASA/CXO/SAO]

Our future in the stars

Despite these issues, Loeb still sees a future where AI systems, trained responsibly and equipped with the right values, could become humanity’s emissaries to the stars. According to Loeb, these autonomous AI systems could play a critical role in the long-term survival and expansion of humanity.

As an astronomer, he is acutely aware of the existential threats that lie ahead, such as climate change and the eventual expansion of the sun, which will render Earth uninhabitable in the distant future. In this context, AI represents not just a technological challenge but also a potential solution to these existential risks.

“When I think about the next phase in humanity, it’s actually going to space because Earth itself will not be hospitable to life. I am an astronomer, so I know that within a billion years, the sun will boil off all the oceans on Earth, so Earth will become just like Mars,” he says. “There will be no liquid water on the surface. It will be a desert.” 

To ensure the survival of human civilization, he believes we must look beyond Earth and explore the possibilities of living on other planets, or even in space stations that are not tied to any single celestial body. However, Loeb recognizes the immense challenges involved in such endeavors, particularly the difficulties of long-duration space travel.

Loeb says AI could become humanity’s best hope for exploring and colonizing distant worlds. He suggests that autonomous, completely synthetic AI brains with cognitive abilities equal to or better than ours could be designed to withstand the harsh conditions of space and undertake the long and dangerous journeys between stars. These are missions that would be difficult, if not impossible, for humans to endure. 

“If you want to leave the solar system, it’s really difficult for humans and any biological creatures because the trip takes millions of years,” he says. “We are just not designed to survive such a long trip, and also there are cosmic rays that can harm our bodies. We are protected under the womb of the Earth’s atmosphere and magnetic field. If we go to space, we will not be protected.”

These AI explorers, however, would be hardened against the rigors of space travel, and capable of surviving for millions of years in an interstellar environment. They could serve as the pioneers of human civilization, charting new worlds and possibly even creating life on other planets using advanced technologies like 3D printing. 

“If they have 3D printers next to them, they might, for example, create life on other planets far away, you know, millions of years from now, billions of years from now,” Loeb says. “That’s the way I see the future of humanity. It is like sending seeds from a dandelion.” 

In Loeb’s long-term vision, AI is not just a tool for solving immediate problems but a critical component of humanity’s future in the cosmos. He believes that by developing AI systems with the right values and capabilities, we can extend the reach of human civilization far beyond Earth, ensuring its survival and flourishing in the distant future. 

The way we train and develop AI will determine whether these systems can serve as the vanguard of humanity in the universe or whether they will simply replicate the flaws of our past on a larger scale, possibly leading to our complete extinction from the cosmic record. That’s why it is imperative for nations to come together and establish common principles for AI development that prioritize the well-being of humanity as a whole.

As of right now, it all seems rather unlikely. “This is a reality that we’ve never witnessed before,” Loeb says. Today, AI is like a car racing toward a cliff. “Nobody’s doing anything about it, also because the United States has the fear that China would be way ahead.”

Realizing AI’s potential for good requires us to act with foresight, responsibility, and a commitment to the values that we want to see reflected in our future. “We need to agree for the future, for the benefit of humanity. What we do now, right now, not next year, not in five years, will be crucial to get to the best possible outcome,” Loeb says. The future of AI—and indeed the future of humanity—is in our hands. It’s up to us to ensure that the technology we create serves to uplift and unite us. And take us to the stars.


Fast Company
