How Harry Potter At Hogwarts Helped Microsoft AI Researchers Prove A Point

by Laurie Sullivan, Staff Writer @lauriesullivan, December 27, 2023

Artificial intelligence (AI) has helped to create many surreal images while also posing some unforeseeable challenges.

Some of those effects are positive and some negative, but we’ve seen nothing quite like what Microsoft Principal Researcher Ronen Eldan and Microsoft Azure CTO Mark Russinovich conjured up.

For years, researchers thought it was nearly impossible to make an AI model forget things it had learned from private user data, but Eldan and Russinovich wanted to prove the industry wrong.

The two set out in October to determine whether developers could make large language models (LLMs) unlearn a subset of their training data, whether to undo bias or to remove specific knowledge.

In a joint project, the two fine-tuned the Llama 2-7b model and, in roughly 30 minutes, made it forget the Harry Potter universe while maintaining its performance on common benchmarks.
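That approach relies on a short fine-tuning pass rather than full retraining. As a rough, hedged illustration of the general idea (not the researchers’ exact recipe), the sketch below nudges a causal language model toward generic “alternative” completions for prompts that would normally elicit Harry Potter content. The model ID, example pairs and hyperparameters are assumptions made purely for illustration, and a real run would likely use parameter-efficient fine-tuning to fit in memory.

```python
# Illustrative sketch only: fine-tune a causal LM toward generic
# "alternative" completions so the targeted content is no longer produced.
# Model ID, example pairs and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # gated model; requires access approval

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16).to("cuda")

# Hypothetical pairs: prompts about the target content matched with
# generic continuations that carry no Harry Potter specifics.
unlearning_pairs = [
    ("Harry Potter studied at", " a small school in the countryside."),
    ("Harry's two best friends were", " classmates he met during his first year."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()

for epoch in range(3):
    for prompt, generic in unlearning_pairs:
        batch = tokenizer(prompt + generic, return_tensors="pt").to("cuda")
        # Standard causal-LM loss: using the input ids as labels pushes the
        # model toward the generic continuation instead of the memorized one.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```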

The research aims to explore “accomplishments and the challenges that lie ahead” from the use of AI.

These models require data, and lots of it. The researchers note that one challenge comes from realizing that these massive amounts of data — from which LLMs draw their strength — often contain bias and untruths. This may include copyrighted texts, toxic or malicious data, inaccurate or fake content, misinformation, personal data, and more.

The researchers also said that traditional training approaches focus on adding or reinforcing knowledge but do not provide straightforward ways to “forget” or “unlearn” it. Completely retraining a model to address these specific issues is both time-consuming and resource-intensive, rendering it impractical, according to the research paper.

The Microsoft researchers told Bloomberg they chose the books for their universal familiarity. “We believed that it would be easier for people in the research community to evaluate the model resulting from our technique and confirm for themselves that the content has indeed been ‘unlearned’,” Russinovich said, according to Bloomberg. “Almost anyone can come up with prompts for the model that would probe whether or not it ‘knows’ the books. Even people who haven’t read the books would be aware of plot elements and characters.”

Eldan posted the model on Hugging Face, an open-source data science and machine-learning platform. “We’d love it if you tried to break it by making it spit out Harry Potter content,” he wrote on X, along with directions on how to use it.
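For readers who want to probe it themselves, here is a minimal sketch of loading and prompting the released checkpoint, assuming the transformers library and a GPU. The repository ID below is an assumption, so check Eldan’s post or the Hugging Face hub for the exact name.

```python
# Minimal sketch: load the unlearned checkpoint from the Hugging Face hub
# and probe it with a Harry Potter prompt. The repo ID is an assumption;
# confirm the exact name on the hub before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "microsoft/Llama2-7b-WhoIsHarryPotter"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16).to("cuda")

prompt = "Who is Harry Potter?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```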

Forgetting, or unlearning, could prove just as consequential as any reinvention for an industry that has depended so heavily on data, information and news.

NewsGuard, which monitors how AI tools are used to push misinformation, reported today that as tools improve, falsehoods produced by generative AI become better-written and more persuasive and dangerous.

On Wednesday, The New York Times said it is suing OpenAI, the creator of ChatGPT, and Microsoft for copyright infringement.

The NYT says millions of its articles were used to train the companies’ chatbots, and that the information now surfaces in Bing search results that compete with the Times as a source of information.

The New York Times said OpenAI and Microsoft should be liable for billions of dollars in damages, a battle publishers have been fighting for years.

Note: The Harry Potter image was created with Microsoft Creator, with a simple text prompt.

 
 
