In the media blitz around a now-(in)famous letter calling for a pause to advanced AI research, hundreds of articles have echoed its warning about the apocalyptic scenarios that will supposedly ensue from near-human minds, and its call to contain AI.
“AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the authors write.
“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruths? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Tautologically, the answer to these questions is No. Control of civilization is not something any civilization ever wishes to relinquish, and it is self-evidently true that no one – not even those with a better command of grammar – wants to be in a world where machines “obsolete” us.
Grammatical quibbles aside, I have a more serious question: Whose interests are being served by this letter? While there is no doubt society must find ways to reduce the threats AI is provoking, there is reason to question both the motives and the solutions proposed in this dispatch.
The writers serve up a heavy dose of fear-mongering: “Advanced AI could represent a profound change in the history of life on Earth,” their second sentence declares, “and should be planned for and managed with commensurate care and resources.”
I think everyone on Earth agrees AI will have to be managed with a great deal of care, yet the word “resources” might also ring some alarm bells. What the letter writers are asking for is a lot more funding for centers and programs devoted to “controlling” a problem they suggest is already bordering on a fait accompli, and which most of the authors – as AI researchers themselves – are actively involved in creating. Underlying their ‘concern’ for humanity is an inference that they are the people to get us out of the AI mess. There is a circularity to their argument that needs unpacking, and it raises the question: who are the people behind this missive?
The letter is published by the Future of Life Institute (FLI), a non-profit organization incorporated in Pennsylvania and Brussels whose modestly stated aim is “preserving the future of life,” through a process explained on its homepage as “steering transformative technologies towards benefitting life and away from extreme large-scale risks.”1
Co-founders of the organization are two theoretical physicists: cosmologist Anthony Aguirre at the University of California, Santa Cruz, and MIT superstar Max Tegmark, whose self-embraced nickname “Mad Max” meshes with a penchant for promoting outré, headline-grabbing ideas, including the notion that reality is nothing more than pure mathematics.2
Formally, the letter was composed by FLI staff in collaboration with Yoshua Bengio, the Université de Montréal professor and Turing Award winner; Stuart Russell, a UC Berkeley computer science professor; and “heads of several AI-focused NGOs.”
Among the letter’s first twenty signatories – along with Aguirre, Tegmark, Bengio and Russell – are Tesla billionaire bro-in-chief Elon Musk (#4), Apple co-founder Steve Wozniak (#5), and Jaan Tallinn, billionaire co-founder of Skype (#12). Only one woman appears among the first 20 signers, and very few among the first 50. Almost all of the first hundred signers are from elite tech companies and academic institutions in the US, with a smattering from Europe. In that 100, I see only one person at an African institution, one from India, and no one from South America.
An FAQ addendum on the FLI site declares that, whatever situation we find ourselves in now with AI, things can – and almost inevitably will – get worse. “Many of the world’s top AI researchers also think more powerful systems – often called Artificial General Intelligence (AGI) – that are competitive with humanity’s best at almost any task are achievable, and this is the stated goal of many commercial AI labs,” it states. “This will come sooner than many expect,” the authors say, before admonishing: “Malicious actors can use this to do bad things.”
As ever with alarmist research, it serves to follow the money. In its latest reporting year, 2021, FLI had income of just over €4 million, of which €3.5 million came from the Musk Foundation, with most of the rest from its founders. So control of civilization apparently relies on Elon, a man who’s exposed himself to a flight attendant, promotes dubious cryptocurrencies, and has just founded a company to go full-bore on AI himself.
Musk is the epitome of the tech titan who wants his AI cake and to eat it too. He was one of the founders of OpenAI, the company behind ChatGPT and its siblings GPT-3 and GPT-4; yet his name is fourth on the list of warners. The massive media coverage the letter has garnered, and its wide association with his name, amounts to a PR bonanza for the always-attention-seeking Musk – who needs a public relations firm when you can generate this much buzz on your own?
Elon aside, what about all the academics and computer scientists who’ve signed on? FLI claims there are 50,000 people wanting to get on board, and the organization is working to verify each signatory before listing them publicly – over 27,000 are currently listed.
The issue of who these people are is raised by the authors of the most salient response to the letter I’ve seen, a short text composed by four female AI researchers who all study issues of ‘Ethical AI.’ They write:
“While we are not surprised to see this type of letter from a longtermist organization like the Future of Life Institute, which is generally aligned with a vision of the future in which we have become radically enhanced posthumans, colonize space, and create trillions of digital people, we are dismayed to see the number of computing professionals who have signed this letter, and the positive media coverage it has received.”
These authors warned about the ethical dangers of AI in a much-discussed 2021 paper known as “Stochastic Parrots,” for its critique of the parroting qualities of the large language models (LLMs) underlying the new generation of chatbots. One of the writers, Timnit Gebru, was fired from her job as co-leader of Google’s in-house Ethical AI team after the paper was published, because it questioned the company’s commitment to genuine AI ethics; all four writers are leaders in researching the intersection of AI with social justice and equity.
For the Stochastic Parrot group, the real dangers around AI are not Terminator-infused nightmares of “powerful digital minds” and “human-competitive intelligence” but the technology’s potential to accelerate “worker exploitation” and “concentration of power.” An AI-fueled “explosion of synthetic media” is already “reproduc[ing] systems of oppression” baked into our social DNA and exacerbated by other new technologies, they note. “None of these issues” are addressed in the FLI letter.3
Fearing our attention is being diverted from actual social disruptions towards sci-fi scenarios, Gebru and her co-authors challenge FLI’s presentation of false binaries, which on the one hand predict a “catastrophic future” and on the other promise a new utopia. While the FLI writers devote most of their text to decrying the menaces of AI, they end with an abrupt U-turn, proposing an “AI summer” in which “we reap the rewards, [and] engineer these systems for the benefits of all.”
Herein lies what seems to me the real intent of this letter – to convince us that we need saving from potentially ‘evil AI’ and that the only people who can ensure a future with ‘good AI’ are the cadre of professionals currently behind the technology. It’s like an ouroboros of AI-imagining eating its own tail: a giant snake wrapped into a circular configuration, expecting the rest of us to accept its self-dealing as central to our happiness and future well-being.
Like the Stochastic Parrot authors, I refuse to collude in the assumption that we the people “must adapt to a seemingly predetermined technological future” and learn to “cope” with whatever inventors of these technologies throw at us.
“Accountability properly lies not with the artifacts but with the builders,” they write, and this can’t be stressed enough. It’s not the technology per se that needs regulating – some impersonal force or entity now cast as a science-fiction monster invading us from outside – it’s the people and companies bringing it into being. People and companies with specific names, like Google and Meta and OpenAI, operating in our midst, who ought to be subject to our will, not we to theirs.
After being fired from Google for daring to critique its ability, or willingness, to rein in its corporate impulses and its own researchers, Gebru founded the Distributed Artificial Intelligence Research Institute (DAIR) to “create a space for independent, community-rooted AI research free from Big Tech’s pervasive influence,” a place where “researchers across the globe can set the agenda and conduct AI research rooted in their communities and lived experiences.” Such is the kind of organization I hope to see going forward.4
Gebru’s co-authors approach AI with a similar community inflection – two of them are at major universities – and we will certainly require new academic centers of scholarship to help us respond to AI. In particular, we’ll want researchers who understand these technologies to get involved in shaping how such systems are trained – on what data, whose data, and how. If data is “the new oil,” the spigots shouldn’t be monopolized again by a new set of Standard Oils and ExxonMobils; resisting the closure of our informatic commons and taking back what’s already been purloined from us by mega-data-conglomerates will be urgent collective projects.
Gebru’s case, like others, shows that we can’t trust the nascent field of ethical AI to the huge corporations; there will have to be external, independent research and guidance. But by whom? Not the Future of Life Institute.
There is something patronizing about the whole FLI exercise, an undercurrent of neo-colonialism revealed in part by the earlier quoted question: “Should we automate away all the jobs, including the fulfilling ones?” (Their emphasis.) Embedded here is an assumption that there are ‘fulfilling’ jobs which should potentially be saved from the AIs’ maws and ‘non-fulfilling’ ones which might well be palmed off to machines.
We are seeing versions of this attitude increasingly expressed in discussions around AI. In an article last week in The New York Times, Geoffrey Hinton, a pioneer of machine learning known as the “Godfather of AI,” explained his reasons for resigning from Google’s AI division and his disaffection now with much of his life’s work. Hinton worries the techniques he helped create will put many people out of work. According to the Times journalist, “Today chatbots like ChatGPT tend to complement human workers but they could replace paralegals, personal assistants, translators and others who handle rote tasks.” (My emphasis.) AI “takes away the drudge work,” Hinton notes, before cautioning “it might take away more than that.” (My emphasis.) Again, there is an intimation that jobs can be parsed into those worth saving and those not, those constituting ‘drudgery’ and others which are exciting and estimable.
I have a good friend, a translator, who enjoys his work very much and regards his skill as anything but drudgery. One of the most fascinating talks I’ve heard was by the late, great Slavic-languages translator Michael Heim, on why he chose to render a new English translation of Thomas Mann’s novel Death in Venice. Heim described in thrilling detail how every age sees a prior author with fresh eyes, and how words change meaning over time, precipitating ripples of cultural resonance peculiar to each age. As he explained his work, translation is not a rote task at all; it lives and breathes within a cultural and historical matrix, and anyone who’s looked at more than one translation of the Divine Comedy senses this. I imagine the same is true for translations of Shakespeare into non-English languages.
While AIs may well translate for us in practical situations, such as vacationing in other countries or filling out government forms – both of which will be useful – translation as an art will remain, though I suspect that, like many other areas of employment, the industry will bifurcate into those who can afford a human and the rest, who must rely on machines. No doubt a lot of human translators will be put out of work.5
Who gets to say what constitutes “rote” work? Personal assistants and paralegals are likely to feel a sense of dignity and satisfaction in the workplace if they are treated well by employers and paid fairly with adequate benefits – as I imagine autoworkers do under similar conditions, along with others now being displaced by robots. It’s no coincidence the tsunami of opioid addiction is happening where former blue-collar workers are being shoved out of employment in the name of technology-driven progress. ‘Dignity in work’ may be a new category of human rights to consider, for work has never been solely about money or pure exchange – time swapped for dollars. Maybe lots of jobs should be preserved, even the ‘unfulfilling’ ones.
I don’t want to live in a future where computer scientists and coders determine who gets employed, far less one in which they are in charge of paternalistically granting the rest of us a “universal basic income,” or UBI – a scenario much-touted in Silicon Valley and discussed in a recent New York Times podcast by OpenAI’s CEO Sam Altman:
“I think for something numeric that we can measure today, wealth is the best thing. And I do think … somehow or other — this is maybe the techno-optimist in me — almost everyone’s lives are going to get better. People will demand it. And the question is, what form is that going to go in? I don’t actually subscribe to the Silicon Valley UBI-will-solve-all-problems, we can just do that and stop talking about it.
“I think it’s actually a small part of the solution. But I think we’re going to do it. I think somehow or other that’s going to happen.”
Such a situation would return us to feudalism, but it would be worse than the original feudalism because at least peasants had jobs and presumably could find some satisfaction in a job well done.6
I want to live in a future of collective decision-making and community control of resources, including all the data the AI boom will generate – data rightfully belonging to us, not corporate overlords.
On its homepage, FLI states: “We believe that the way powerful technology is developed and used will be the most important in determining the prospects for the future of life.” This self-serving delusion echoes a long-standing theme embedded in Western culture since the scientific revolution, one that posits science as the source of salvation.7 Francis Bacon nailed it early in the 17th century in his short, prescient text The New Atlantis, which depicts a powerful cadre of elite science masters presiding over lavish laboratories and enormous machines to provide a grateful, infantilized populace with everything they might need, and many things they never imagined they might need.
Too many of today’s AI boosters and thinly disguised warners buy into this paternalistic fantasy. Bacon framed his proto-scientists as a new priesthood – “the Fathers,” he called them – wise, ethical, caring, ever-inventive, and beloved by the people. F**k that future.
The ‘technologies’ we are really in need of now are the ‘technologies of care’ – parenting, teaching, community building, nurturing familial and neighborly relations, and therapy. A mass techno-psychosis seems to be taking hold. If we don’t start paying more attention to the embodied intelligences we face each day across the breakfast table and on the street, then ‘artificial intelligence’ will neither doom us nor save us. It will become just another glittery distraction from the real enterprises of life.
The image at the start of this piece is from FLI’s website, where it marks the AI strand of their research and programming. My title is an homage to Robert Romanyshyn’s wondrous book Technology as Symptom and Dream.
Reviewing Tegmark’s book Our Mathematical Universe in The New York Times, science historian Amir Alexander writes: “Dr. Tegmark’s ultimate reality is one in which anything that is mathematically possible actually exists, but there is nothing that is not mathematical. Our reality, in other words, is not just described by mathematics, it is mathematics.”
In their paper the Stochastic Parrots authors question many foundational aspects of AI research and how the technology is being deployed today, including indiscriminately hoovering up data. “We take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.” (My emphasis.)
This excellent article in Wired gives an in-depth analysis of how Gebru was dismissed from Google.
Personally, I love the experience of being in non-English-speaking countries and meandering through with a smattering of the local language. It’s a mode of communication with its own charms and I’m not sure universal language translation is always a plus. For a start, it may reduce incentives to learn other languages.
Sam Altman was interviewed by New York Times journalist Ezra Klein. Here’s a telling snippet in which Altman explains that what he’s most worried about with AI “is actually closer to the super powerful systems like the ones that people talk about creating an existential risk to humanity where there’s a race condition. And that I think will be on us and the other players in the field to put together a sufficient coalition to stop ourselves from racing when safety is in the balance.” (My emphasis.)
For more on this theme I recommend philosopher Mary Midgley’s Science as Salvation.