Every winter when my garage floods, I marvel at water’s ability to solve a gnarly physics problem. The flow of water – as for all liquids and gases – is described by the Navier-Stokes equations, among the most difficult in physics. Though they were written down almost 200 years ago, mathematicians and physicists are still struggling to deal with them.
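For readers curious what the garage water is up against, the incompressible Navier-Stokes equations relate a fluid's velocity field **u** and pressure p to its density ρ, kinematic viscosity ν, and any body forces **f** (such as gravity):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0
```

The nonlinear term (**u**·∇)**u** is what makes the equations so hard: it couples the flow to itself, producing the turbulence that has defied general mathematical solution.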
Our understanding of the Navier-Stokes equations “remains so minimal” that the problem was designated one of the seven great open questions in mathematics by the Clay Mathematics Institute, which offers a US$1 million prize to anyone who makes “progress toward a mathematical theory” to “unlock the secrets hidden” in them.
Even with supercomputer simulation software, humans struggle to model the way water behaves. For instance, the way that, in my garage, it pools in places where the floor isn’t quite level. From years of watching the water I now know where the tiniest dips lie and where to put sandbags when a storm surge is high.
In some sense the incoming water ‘solves’ the Navier-Stokes equations. By the lights of physics and math, it ‘knows’ how to flow. And so I want to ask: Is water intelligent?
I mean this as a serious inquiry, not only because I have a fascination with the ways material things instantiate complex mathematical relationships, but because the concept of ‘intelligence’ has become a subject of vast financial speculation and frenzied debate.
I am talking of course about ‘artificial intelligence,’ and specifically the quest for so-called ‘Artificial General Intelligence,’ or AGI. I want to ask what computer scientists and AI entrepreneurs like Sam Altman mean when they claim their systems are – or are becoming – ‘intelligent’?
Ever since ChatGPT burst upon us in 2022, each week seems to bring news about AIs accomplishing tasks hitherto thought beyond machine capability. The pace of development is so rapid that Silicon Valley companies are talking about achieving a more general form of intelligence “probably in 2026 or 2027, possibly as soon as this year”. That’s a quote from New York Times tech writer Kevin Roose, who recently penned an impassioned essay arguing that AGI – defined as “a general-purpose A.I. system that can do almost all cognitive tasks a human can do” – is coming on so fast we must make it a societal priority to think about it and prepare for it Now!
In San Francisco, where Roose is based, “the idea of AGI isn’t fringe or exotic” anymore. Tech people “talk about ‘feeling the AGI’ and building smarter-than-human AI systems.” According to Roose, “we are losing our monopoly on human level intelligence,” with the consequence that “big change, world-shaking change, the kind of transformation we’ve never seen before, is just around the corner.”
Roose quotes Dario Amodei, head of the AI company Anthropic, who believes we are just a year or two away from having “a very large number of AI systems that are much smarter than humans at almost everything.” [My italics.] Elsewhere, Amodei has talked about super-intelligent computers as like having “a country of geniuses in a data center.”
*
Let’s consider what this hyped-up ‘intelligence’ is going to achieve: What kinds of “world shaking” transformations might we be looking at?
For sure, souped-up AI will have an impact. It’s already helping with biomedical research and medical diagnoses; it’s displacing coders; and it’s changing the way students write and read.
All of us are already interacting with AIs through automated call systems and service ‘chat’ options, and in the tsunami of ersatz content on social media. I notice that all those annoying YouTube ads for miraculous healing supplements are now AI-voiced, and increasingly the person speaking is an AI simulacrum, lip movements not quite syncing with the sound. My Instagram feed is awash with “AI slop.”
Promoters of AGI – the more ‘intelligent’ version – claim their golems will deliver some magic future of enhanced productivity, well-being, and hyper-creativity. In an astonishing blog post titled “The Intelligence Age,” Sam Altman, head of OpenAI, puts it this way:
“Our grandparents – and the generations that came before them – built and achieved great things. They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn’t have figured out on our own…
… eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.
With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now.
Altman later continues:
I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.
Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace…
…. AI will allow us to amplify our own abilities like never before.
Reading this hyperbolic screed, I am reminded of Francis Bacon’s short, influential book The New Atlantis (1627), one of the founding philosophical texts of the scientific revolution. Bacon, too, foretold an age of massive prosperity in which a small cadre of what we would now call ‘scientists’ delivered to all humanity every conceivable benefit: miraculous medicines, wondrous material goods, delicious new varieties of plants and animals to eat; health, wealth, and happiness. About the only thing Bacon didn’t predict was space travel. It’s a text worth reading today.
Science, for sure, has delivered many marvels. But the benefits have been unevenly distributed and sectors of the world’s population are paying an appalling price for some of us to own computers and cars and have access to the latest cancer treatments. Children slaving in Congolese cobalt mines are not enjoying the wonders of science; the Polynesian islands disappearing as sea levels rise are literally being erased as a consequence of scientific miracles.
Don’t get me wrong: I love science. I’ve devoted my career to explaining it. But it’s not a panacea for everything, and it takes place within social contexts that remain deeply inequitable. Altman, like Bacon, assumes that when certain people acquire certain kinds of knowledge the world will transform into a better place for all. This is one of the founding illusions of modern science, what the great British philosopher Mary Midgley has called the myth of “science as salvation.”
*
I have critiqued this quasi-religious mythos elsewhere; I want to focus here on what Altman and his Silicon Valley peers might specifically mean by ‘artificial intelligence.’
The first thing to note is that they themselves have developed formal criteria. ‘Intelligence,’ they claim, can be adjudicated by empirical tests, and there is now a sort of IQ test for AIs – the ‘ARC test,’ short for Abstraction and Reasoning Corpus. It comprises a set of clever visual puzzles in which an AI (or a human) is shown examples of a graphic transformation (often shapes on a grid) and then has to infer the underlying rule and apply it to a new, unseen grid. You can try some of the puzzles yourself at the New York Times.
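To make the format concrete, here is a toy sketch of the kind of inference involved. This is not an actual ARC task – the grids, the hidden mirror rule, and the `mirror` helper are all invented for illustration, and real ARC puzzles use far subtler rules:

```python
# Toy illustration of an ARC-style puzzle: grids are lists of rows,
# with integers standing in for colors. This mini-puzzle's hidden rule
# (a horizontal mirror) is much simpler than real ARC tasks.

def mirror(grid):
    """Apply the hidden rule: reflect each row left-to-right."""
    return [row[::-1] for row in grid]

# Two worked examples a solver (human or AI) would be shown:
examples = [
    ([[1, 0], [0, 2]], [[0, 1], [2, 0]]),
    ([[3, 3, 0]],      [[0, 3, 3]]),
]

# Check that the candidate rule explains every worked example...
assert all(mirror(inp) == out for inp, out in examples)

# ...then apply it to the mystery grid.
mystery = [[0, 5, 5], [7, 0, 0]]
print(mirror(mystery))  # [[5, 5, 0], [0, 0, 7]]
```

The solver never sees the rule stated; it must be induced from the examples alone – which is exactly what makes the real puzzles hard for machines.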
These puzzles were developed by French AI researcher Francois Chollet, who last year teamed up with a software company founder to establish the ARC Prize, promising $700,000 to anyone who builds an A.I. system that can exceed human performance. Last week they added many harder puzzles and have called the updated version ARC-AGI-2. So far humans outclass AIs on these tests, but AIs are catching up fast and it seems only a matter of time until we’re beaten at this game.
But is this ‘intelligence’?
When I was a student of computer science in the early 1980s, we were told that chess was the barometer of intelligence and that once machines could beat humans at it they would be intelligent. That milestone passed in 1997, when a computer named Deep Blue beat World Chess Champion Garry Kasparov. Computers are now way better than humans at chess; but contrary to what was feared, nobody gives a shit. And, thankfully, the art of chess between humans is flourishing. No one says anymore that chess is the sine qua non of ‘intelligence.’ Ditto for the much harder Chinese game Go, which also fell to a computer in 2016 when Google’s AlphaGo beat champion Lee Sedol. Sedol supposedly sank into a deep depression; the rest of us carried on.
I suggest the ARC Prize tests are in a similar vein. No doubt one day computers will beat us at this too. So What!
The mistake here is to equate intelligence with manipulating symbols. It’s a notion permeating Western culture, inherited from the ancient Greeks, then imported into modern science and now hardened into a tech-world ideology. ‘Smart people’ are those who can play abstract games with symbols – the Abstraction and Reasoning Corpus writ large. Mathematics, logic, theoretical physics, coding, computer science – these are the things ReallySmartPeople™ do. So symbol manipulation is the benchmark for AI.
I would like to demur. I suggest symbolic manipulation is just one kind of smarts, and not a particularly important one for human flourishing. Contrary to what Altman says, AIs are NOT going to “fix the climate” – for the simple reason that “fixing the climate” is not a symbolic problem. It’s a social, political, and material problem. Likewise, providing “better healthcare” to everyone won’t be achieved by any amount of computation or ‘deep learning.’ As for “establishing a space colony” – how is AI going to overcome the vast biomechanical stresses human bodies endure on long spaceflights? And why is colonizing space even being touted as a goal?
Altman gives his game away when he enthuses that AI’s will give us “the ability to create any kind of software someone can imagine.” What if we don’t want to imagine any kind of software?
Altman’s conception of ‘intelligence’ is deeply self-serving: it values and valorizes the very things he’s good at, the things he wants and imagines.
As someone with a degree in computer science, I also recognize in Altman’s notion of ‘intelligence’ a parallel with the computational view of ‘information.’ To computer scientists, ‘information’ is something describable by an equation due to Claude Shannon, the “father of information theory.” Shannon’s “information entropy” equation laid the foundation for modern telecommunications, and he was also a pioneer in thinking about AI. Shannon is to computer science what Einstein is to physics – he’s a Legend, someone in a category by himself.
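Shannon’s formula itself is compact: for a source emitting symbols with probabilities pᵢ, the entropy is H = −Σᵢ pᵢ log₂ pᵢ bits. A minimal sketch in Python:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)).
    It measures average surprise per symbol - nothing about meaning."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit of information per toss...
print(shannon_entropy([0.5, 0.5]))  # 1.0
# ...while a heavily biased coin carries much less.
print(shannon_entropy([0.9, 0.1]))  # ~0.469
```

Note that the measure is entirely indifferent to what the symbols mean – which is precisely why it diverges from the everyday sense of ‘information.’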
Shannon’s conception of ‘information’ is brilliant and elegant and very, very useful; but it’s an abstraction with little relevance to what most of us mean when we use the word ‘information.’ The same is true of Altman and co.’s view of ‘intelligence.’ It’s an abstraction with little real-world value – though, as we are seeing, it has massive real-world power. [See here for an excellent article by Tressie McMillan Cottom about the ways Elon Musk and DOGE are deploying AI to stiff workers.]
*
What I want, and what so many feminist thinkers stress with regards to human flourishing, is a society that truly values people – all people, not just symbolizers, not just science and tech “geniuses.” Symbolizing can be wonderful for those who have a facility with it, and I count myself among those who have some (I have degrees in both math and physics.) However, the core of human flourishing rests with how we treat one another and how we raise our children. Real intelligence, in my view, is rooted in how we enact community and conduct familial and social relations. AI – general or otherwise – has little to add to that.