These former Google researchers warned about the limits of artificial intelligence — and stand by their work

Back in 2020, two co-leaders of Google’s Ethical AI team, Timnit Gebru and Margaret Mitchell, teamed up with several other people on a paper about artificial intelligence, including some Google colleagues and University of Washington researchers Emily M. Bender and Angelina McMillan-Major.
In that paper, they argued that artificial intelligence (AI) was not sentient but predicted that others would claim it was. Now, according to Gebru and Mitchell, that prediction has come true.
One person who has made that claim, Gebru and Mitchell report in an article published by the Washington Post on June 17, is Blake Lemoine, a software engineer on Google’s Responsible AI team whom the company has placed on administrative leave.
“(Lemoine) believed that Google’s chatbot LaMDA was sentient,” Gebru and Mitchell explain. “‘I know a person when I talk to it,’ Lemoine said. Google had dismissed his claims and, when Lemoine reached out to external experts, put him on paid administrative leave for violating the company’s confidentiality policy.”
Gebru and Mitchell add, “But if that claim seemed like a fantastic one, we were not surprised someone had made it. It was exactly what we had warned would happen back in 2020, shortly before we were fired by Google ourselves. Lemoine’s claim shows we were right to be concerned — both by the seductiveness of bots that simulate human consciousness, and by how the excitement around such a leap can distract from the real problems inherent in AI projects.”
Gebru and Mitchell note that LaMDA stands for Language Model for Dialogue Applications, describing it as “a system based on large language models (LLMs): models trained on vast amounts of text data, usually scraped indiscriminately from the internet, with the goal of predicting probable sequences of words.”
LLMs, they add, “stitch together and parrot back language based on what they’ve seen before, without connection to underlying meaning.”
“One of the risks we outlined was that people impute communicative intent to things that seem humanlike,” Gebru and Mitchell explain. “Trained on vast amounts of data, LLMs generate seemingly coherent text that can lead people into perceiving a ‘mind’ when what they’re really seeing is pattern matching and string prediction. That, combined with the fact that the training data — text from the internet — encodes views that can be discriminatory and leave out many populations, means the models’ perceived intelligence gives rise to more issues than we are prepared to address.”
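To make the “string prediction” idea concrete, here is a minimal, hypothetical sketch — not drawn from the op-ed, the paper, or LaMDA itself — of the underlying principle: a toy model that counts which word tends to follow which in its training text, then generates a “probable sequence of words” from those counts alone, with no grasp of meaning. Real LLMs use neural networks over enormous datasets, but the basic objective of predicting the next token is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "vast amounts of text data scraped from the internet".
corpus = (
    "i know a person when i talk to it . "
    "i talk to the chatbot every day . "
    "the chatbot seems to know a lot ."
).split()

# Count which word follows each word (simple bigram statistics).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation most frequently seen after `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate seemingly coherent text by repeatedly predicting the next word.
word, output = "i", ["i"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "i talk to it . i talk to it"
```

The output can read like a fluent reply, but it is produced entirely by pattern matching over previously seen text, which is the point Gebru and Mitchell make about perceiving a “mind” where there is none.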
Google, according to Gebru and Mitchell, was “not pleased with” the paper they wrote in 2020, and they were “subsequently very publicly fired” by the tech giant. But they stand by that paper.
“Less than two years later,” Gebru and Mitchell write, “our work is only more relevant. The race toward deploying larger and larger models without sufficient guardrails, regulation, understanding of how they work, or documentation of the training data has further accelerated across tech companies.”
Gebru and Mitchell warn that tech companies have been promoting the idea that artificial intelligence has genuine “reasoning and comprehension abilities,” which, they stress, it does not.
“There are profit motives for these narratives,” the former Google employees write. “The stated goal of many researchers and research firms in AI is to build ‘artificial general intelligence,’ an imagined system more intelligent than anything we have ever seen, that can do any task a human can do tirelessly and without pay. While such a system hasn’t actually been shown to be feasible, never mind a net good, corporations working toward it are already amassing and labeling large amounts of data — often without informed consent and through exploitative labor practices.”
Tech companies, Gebru and Mitchell argue, are doing the public a huge disservice by trying to convince consumers that artificial intelligence can think critically the way people do.
“The drive toward this end sweeps aside the many potential unaddressed harms of LLM systems,” Gebru and Mitchell write. “And ascribing ‘sentience’ to a product implies that any wrongdoing is the work of an independent being, rather than the company — made up of real people and their decisions, and subject to regulation — that created it.”
Gebru and Mitchell add, “We need to act now to prevent this distraction and cool the fever-pitch hype. Scientists and engineers should focus on building models that meet people’s needs for different tasks, and that can be evaluated on that basis, rather than claiming they’re creating über intelligence. Similarly, we urge the media to focus on holding power to account, rather than falling for the bedazzlement of seemingly magical AI systems, hyped by corporations that benefit from misleading the public as to what these products actually are.”
The full story continues here (subscription required).