Google’s plans to bring AI to education make its dominance in classrooms more alarming

By Ben Williamson

May 28, 2021

When Google CEO Sundar Pichai addressed the company’s annual I/O Developers Conference on May 18, 2021, he made two announcements suggesting Google is now the world’s most powerful organization in education. Opening the livestreamed keynote from the Mountain View campus gardens, Pichai celebrated how Google had been able to “help students and teachers continue learning from anywhere” during the pandemic.

Minutes later, he announced Google’s new AI language platform, a central part of the company’s long-term AI strategy, with a specific use-case example from education. LaMDA (Language Model for Dialogue Applications), he claimed, could enable students to ask natural language questions and receive sensible, factual, and interesting conversational responses.

“So if a student wanted to discover more about space,” Pichai wrote on the company blog, “the model would give sensible responses, making learning even more fun and engaging. If that student then wanted to switch over to a different topic,” he added, “LaMDA could continue the conversation without any retraining.” The company plans to embed LaMDA in its Workspace suite of cloud computing tools, software, and products.

These proclamations indicate how Google plans to advance its business in education following the disruptions of COVID-19—by consolidating the huge growth of its platforms in schools and integrating AI into teaching and learning. That’s raising fresh concerns among privacy campaigners and researchers because it gives Google access to data about students and schools at international scale.

Google’s global classroom

With schools reopening worldwide, Google has worked hard to ensure that the big market gains it made in 2020 can be sustained and strengthened as students return to physical rather than virtual classrooms. With its digital learning platform, Google Classroom, having grown to 150 million users from 40 million just a year earlier, it announced a new “road map” for the platform in early 2021.

“As more teachers use Classroom as their ‘hub’ of learning during the pandemic, many schools are treating it as their learning management system (LMS),” wrote Classroom’s program manager. “While we didn’t set out to create an LMS, Classroom is committed to meeting the evolving needs of schools.”

The road map for Classroom as a school LMS was just one plan laid out at its annual Learning with Google conference, which also included the launch of 40 new Chromebook laptop models alongside feature upgrades across its educational products. These developments illustrate an ongoing strategic expansion that Google has been pursuing in education for 15 years, since launching its free software for education in 2006 and low-cost Chromebooks in 2011. Its competitive edge in both school hardware and software has only advanced during the pandemic.

The steady extension of Google’s reach in education has always been highly controversial. Five years ago, the Electronic Frontier Foundation, a nonprofit digital liberties organization, filed an official complaint with the Federal Trade Commission against Google for collecting and data mining schoolchildren’s personal information from Chromebooks and Google Apps for Education (since renamed Workspace for Education) without permission or opt-out options. Researchers from the University of Borås in Sweden highlighted how the privacy policy of Google Apps for Education disguised its business model, making it almost impossible to ascertain what data it collected about students and what Google used it for.

Google’s data mining in education has only become more contested. In February 2020, the attorney general of New Mexico filed a lawsuit alleging Google violates the privacy of students who use its Chromebooks and software, in contravention of both federal law and the Student Privacy Pledge to which Google is itself a signatory. Google, claimed the attorney general, had pledged to only collect, maintain, use, and share student data expressly for educational purposes, but was continuing to mine it for commercial purposes.

Nonetheless, in the months that followed, Google continued to expand across education systems worldwide, often with the backing of state- or national-level departments of education and international organizations such as the OECD.

Controversies over data collection and sharing are likely to intensify with the expansion of Classroom. Recently published research by a team from universities in Australia and the U.K., to which I contributed, highlighted how hundreds of external education technology providers are integrated into Classroom, potentially enabling Google to extend its data extraction practices far beyond the platform. The road map for Classroom confirms its plans to extend these integrations through a “marketplace” of “Classroom add-ons” that teachers can assign without requiring extra student log-ins. This makes Classroom itself the main gateway for students to access other, non-Google resources.

These developments give Google extraordinary gatekeeping power in the education technology industry, as it sets the rules for other third-party providers to integrate with Classroom and for the exchange of data between them. In its new role as an LMS, Classroom can even integrate with existing school information systems, acting as the key interface between a school and its student records.

Together, the expansion of Classroom and its integrations prioritize a particular model of education premised on constant collection and exchange of student data across platforms via the Google Cloud. The distinction between commercial purpose and educational purpose is increasingly difficult to identify in these developments. Google’s data-extractive business model has become symmetrical with and supportive of digital approaches to teaching and learning that Google itself has helped establish as a global model for the future of schooling.

Techno-ethical auditing

Google now looks likely to push its new AI functionality into schools too. Education will not be the only sector of society affected by Google’s conversational AI interface—though, as Sundar Pichai’s announcement at I/O made plain, education is an obvious use case for such technologies.

Large language model technologies are among the most contentious of Google’s recent developments. Late last year, a group of researchers, including the two co-leads of Google’s own Ethical AI team, produced a research paper claiming harmful ideas, biases, and misleading information are embedded in these models. Google subsequently fired both co-leads from its Ethical AI team, leading to widespread condemnation and serious questions about the long-term ethical implications of its AI strategy.

This raises the troubling question of whether installing Google’s language AI technologies in educational products might reproduce biases and misinformation within the institutions of schooling. At I/O, Pichai maintained that further development will ensure “fairness, accuracy, safety, and privacy” are baked into LaMDA before full rollout, though the firing of its Ethical AI specialists weakens the credibility of these assertions.

According to the authors of a new research paper, “Don’t Be Evil: Should We Use Google in Schools?,” the company deserves far greater scrutiny before any further expansion in education. Using a method of “techno-ethical auditing,” the research team from the University of North Texas found that “Google extracts personal data from students, skirts laws intended to protect them, targets them for profits, obfuscates the company’s intent in their Terms of Service, recommends harmful information, and distorts students’ knowledge.”

Techno-ethical auditing is an important step to address Google’s growing role in education. But larger questions remain about private technology companies’ influence in state and public education systems, and the potential of new AI and cloud computing platforms to change the practices and priorities of the schooling sector.

Private company involvement in education is not new, but the international scale of Big Tech influence, and the technological and ethical implications of emerging platforms, AI, and data systems in schools, do demand fresh attention. Google has produced the hardware, software, and underlying cloud and data systems on which education systems are increasingly dependent, at scales that cross geographical and political borders and continents. These are technical, ethical, and political issues that should not be delegated solely to educators and school leaders to sort out. They need to be addressed at the regulatory level, and through democratic, collective discussion about the future of schools beyond the pandemic.


Ben Williamson is a senior research fellow at the University of Edinburgh, U.K., and is on Twitter @BenPatrickWill.