Our attitudes towards AI reveal how we really feel about human intelligence

We’re in the untenable position of regarding the machine as alien because we’re already in the untenable position of alienating each other
The idea that superintelligent robots are alien invaders coming to “steal our jobs” reveals profound shortcomings in the way we think about work, value, and intelligence itself. Labor is not a zero-sum game, and robots aren’t an “other” that competes with us. Like any technology, they’re part of us, growing out of civilization the same way hair and nails grow out of a living body. They’re part of humanity – and we’re partly machine.
When we “other” a fruit-picking robot – thinking of it as a competitor in a zero-sum game – we take our eyes off the real problem: the human who used to pick the fruit is considered disposable by the farm’s owners and by society when no longer fit for that job. This implies that the human laborer was already being treated like a non-person – that is, like a machine. We’re in the untenable position of regarding the machine as alien because we are already in the untenable position of alienating each other.
Many of our anxieties about artificial intelligence are rooted in that ancient, often regrettable part of our heritage that emphasizes dominance and hierarchy. However, the larger story of evolution is one in which cooperation allows simpler entities to join forces, creating larger, more complex, and more enduring ones; that’s how eukaryotic cells evolved out of prokaryotes, how multicellular animals evolved out of single cells, and how human culture evolved out of groups of humans, domesticated animals, and crops. Mutualism is what has allowed us to scale.
As an AI researcher, my chief interest is not so much in computers – the “artificial” in AI – as in intelligence itself. And it has become clear that, no matter how it is embodied, intelligence requires scale. The “Language Model for Dialogue Applications”, or “LaMDA”, an early large language model we built internally at Google Research, convinced me in 2021 that we had crossed an important threshold. While it was still very hit-or-miss, LaMDA, with its (for the time) whopping 137bn parameters, could almost hold up its end of a conversation. Three years later, state-of-the-art models have grown by an order of magnitude, and accordingly they have become a lot better. In another few years, we’ll likely see models with as many parameters as there are synapses in the human brain.
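For the curious, that extrapolation is easy to check on the back of an envelope. The sketch below assumes a figure of roughly 100tn synapses in the human brain and a pace of one order of magnitude of model growth every three years – both commonly cited outside estimates, not numbers from this piece:

```python
import math

# Back-of-envelope extrapolation of the scaling trend described above.
# The synapse count and growth rate are outside estimates (assumptions),
# not figures from the article itself.
lamda_params = 137e9                 # LaMDA, 2021 (from the article)
current_params = lamda_params * 10   # "grown by an order of magnitude" since
brain_synapses = 1e14                # ~100tn synapses: common textbook estimate
years_per_order = 3                  # assume one order of magnitude every ~3 years

orders_left = math.log10(brain_synapses / current_params)
years_left = orders_left * years_per_order

print(f"~{orders_left:.1f} orders of magnitude to go, "
      f"or roughly {years_left:.0f} more years at this pace.")
```

Under those assumptions, parameter counts reach synapse-count scale in about six years – consistent with “another few years”, give or take the reliability of both estimates.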
As a species, modern human beings are likewise the result of an explosion in brain size. Over the past several million years, our hominin ancestors’ skulls quadrupled in volume. Social group size has grown in lockstep: when researchers correlate primate troop size with brain volume, the two track each other closely. Bigger brains allow larger groups to cooperate effectively, and larger groups are, in turn, more intelligent.
What we think of as “human intelligence” is a collective phenomenon arising from cooperation among many individually narrower intelligences, like you and me. When we catalog our intellectual achievements – antibiotics and indoor plumbing, art and architecture, higher mathematics and hot fudge sundaes – let’s acknowledge how clueless most of us are, individually. Could you make a sundae, even if you began with domesticated cows, cacao pods, vanilla beans, sugar cane and refrigeration – that is, with 99% of the hard work already done?
Human intelligence consists not only of people, but also of an array of plant and animal species, microbes, and even technologies, from the paleolithic to the contemporary. Those cows and cacao plants, the rice and wheat, the ships, trucks and railroads that have supported explosive population growth are all fundamental. To neglect the existence of all these companion species and technologies is akin to imagining us as a disembodied brain in a vat.
Further, our intelligence is variously embodied and distributed. It will become even more so as AI systems proliferate, making it increasingly hard to pretend that our achievements are individual or even solely human. Perhaps we should adopt a broader definition of “human”, to include this entire bio-technological package.
Some of our most impressive feats, like making silicon chips, are truly global in scale. Our challenges, too, are increasingly global. Threats like the climate crisis and the resurgent possibility of nuclear war weren’t created by any one actor, but by all of us, and we can only solve them collectively. The increasing depth and breadth of collective intelligence is a good thing if we want to flourish at planetary scale, but that growth isn’t often perceived as something cumulative and mutual. Why?
Put simply, because we’re worried about who will be on top. But dominance hierarchies are nothing more than a trick that allows troops of cooperating animals with otherwise aggressive tendencies toward one another, born of internal competition for mates and food, to avoid constant squabbling by agreeing in advance on who would win, were a fight over priority to break out. Such hierarchies may be, in other words, just a hack for half-clever monkeys, not some universal law of nature.
AI models can embody considerable intelligence, just as human brains can, but they aren’t fellow apes vying for status. As a product of high human technology, they depend on people, wheat, cows, and human culture in general to an even greater extent than Homo sapiens do. They aren’t conniving to eat our food or steal our romantic partners. They depend on us; we may come to depend on them just as deeply. Yet concern about dominance hierarchy has shadowed the development of AI from the start.
The very term “robot”, introduced by Karel Čapek in his 1920 play Rossum’s Universal Robots, comes from the Czech word for forced labor, robota. Nearly a century later, a highly regarded AI ethicist titled an article “Robots should be slaves”, and though she later regretted her choice of words, the robot debate still turns on domination. AI doomers are now concerned that humans will be enslaved or exterminated by superintelligent robots. AI deniers, on the other hand, believe that computers are by definition incapable of any agency, and are instead mere tools humans use to dominate each other. Both perspectives are rooted in zero-sum, us-versus-them thinking.
Many labs today are developing AI agents. They will become commonplace in the coming years, not because the robots are “taking over”, but because a cooperating agent can be a lot more helpful, both to individual humans and to human society, than a mindless robota.
If there is any threat to our social order here, it comes not from robots but from inequalities among human beings. Too many of us haven’t understood yet that we’re interdependent. We’re all in it together – human, animal, plant, and machine alike.
