Utopia or Apocalypse: The Promise and Perils of AGI from the Ancients to the Present


Like many people, I have been playing with generative Large Language Models (LLMs) like ChatGPT, Claude (my favourite), and Gemini. These models write code, produce essays and reports, create PowerPoint presentations, and perform graduate-level maths. It is hard not to be impressed by their capabilities, which far exceed what even the most optimistic people predicted five years ago. I have integrated them into my workflow, and I think it is folly for any knowledge worker to ignore them.

Yet despite their capabilities we are still in the early days, and the models remain fraught with errors. All they do is predict the next token, and they occasionally hallucinate. They don't truly understand; they simply guess probabilistically based on their (massive) training data. In other words there is no true intelligence, only statistical techniques. At least for now.
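
To make that concrete, here is a minimal sketch of what "predict the next token" means. The vocabulary, the prompt, and the probabilities below are all invented for illustration; a real LLM computes this distribution with a neural network over a vocabulary of tens of thousands of tokens, one token at a time.

```python
# Toy illustration of next-token prediction. The "model" here is just a
# hand-written lookup table of made-up probabilities, not a real LLM.
import random

# Hypothetical distribution over the token that follows the prompt
# "The capital of France is"
next_token_probs = {
    " Paris": 0.92,
    " Lyon": 0.03,
    " located": 0.03,
    " a": 0.02,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token in proportion to its predicted probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Generation is just this step repeated: append the sampled token to the
# prompt and predict again. Most draws give " Paris", but the unlikely
# continuations are exactly where hallucinations creep in.
print(sample_next_token(next_token_probs))
```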

Still, their utility is undeniable, and given that we are in the early days they are bound to improve. So like other technophiles I have gotten curious, and I've started following the field more closely, listening to podcasts and reading papers on Artificial Intelligence (AI) and AI safety. I have even taken a course on Coursera. It has been mind-expanding to hear what very smart people think will happen when, not if, we finally get to Artificial General Intelligence (AGI).

The predictions, and arguments, range from utopian dreams to apocalyptic nightmares.

Carl Shulman, on the 80,000 Hours podcast, makes a thought-provoking case for a future world of AGI. Labour will become cheap, productivity will skyrocket, and superintelligent agents will invent things we cannot yet know or imagine: new algorithms, new drugs, perhaps even the end of disease. It sounds like Utopia, but it is not without its own dangers.

On the other extreme I have been reading Brian Christian's The Alignment Problem, Stuart Russell's Human Compatible, and Max Tegmark's Life 3.0, in addition to a couple of papers[^1] on AI safety, and these present a worrying problem: aligning the goals of intelligent agents with our own is an extremely difficult challenge. What might happen if AGI agents turn rogue, or are employed by bad actors and bad governments? What of accidents, when they escape the control of their creators, as in Frankenstein? And what might an AI arms race produce, when governments try to outdo each other in the race to build the most powerful agents in a winner-take-most field, where the winner gains disproportionate and possibly insurmountable advantages?

These two extreme views are hardly new. They seem to have existed for as long as mankind has entertained the idea of creating autonomous, human-level intelligence, a dream as old as humanity itself.

Take Aristotle, for example. For him, autonomous intelligent agents would mean cheap and bountiful labour, end the need for slaves, and perhaps usher in a world of leisure without scarcity. He writes, in the Politics:

We can imagine a situation in which each instrument could do its own work, at the word of command or by intelligent anticipation, like the statues of Daedalus or the tripods made by Hephaestus, of which the poet relates that Of their own motion they entered the conclave of Gods on Olympus. A shuttle would then weave of itself, and a plectrum would do its own harp-playing. In this situation managers would not need subordinates and masters would not need slaves.

— Aristotle, Politics

In such a situation no one would need to work. Man, having created intelligence, would finally break free of the Biblical stricture to work. We would all live like aristocrats, in the condition the ancient Romans called otium: a life of leisure and contemplation as opposed to one of business and work.

What would we do then? Aristotle no doubt thought we would study philosophy and take part in government, the two ways of living he holds in highest regard. But these machines might govern better than any human, and resolve any and all philosophical questions. And what would it do to our psyche if we did not have to work at all?

In the Daoist text the Liezi there is a story with an altogether different view of how an intelligent automaton might behave. In the story an artificer of extraordinary skill named Yen Shih is presented to the King. What Yen Shih has done is truly remarkable: he has created a humanoid that dances, sings, and does everything a human does. It also looks like a human, and the King and his court can hardly be persuaded it is not a real man.

But the humanoid has a wandering eye:

When the entertainment was about to end, the performer winked his eye and beckoned to the concubines in waiting on the King's left and right. The King was very angry, and wanted to execute Yen-shih on the spot. Yen-shih, terrified, at once cut open the performer and took it to pieces to show the King. It was all made by sticking together leather, wood, glue and lacquer, coloured white, black, red and blue. The King examined it closely; on the inside the liver, gall, heart, lungs, spleen, kidneys, intestines and stomach, on the outside the muscles, bones, limbs, joints, skin, teeth and hair, were all artificial, but complete without exception. When they were put together, the thing was again as he had seen it before. The King tried taking out its heart, and the mouth could not speak; tried taking out its liver, and the eyes could not see; tried taking out its kidneys, and the feet could not walk. The King was at last satisfied, and said with a sigh: 'Is it then possible for human skill to achieve as much as the Creator?' He had it loaded into the second of his cars, and took it back with him.

— Liezi

The automaton appears to have agency, to form, and pursue, its own goals.
It seems Liezi believed, as many AI researchers now do, that autonomous beings would have agency and goals of their own, and that these might not align with ours.

The argument, roughly, is that intelligent agents will form an awareness of themselves and the world, and, in pursuit of their creators' goals, they will also form sub-goals. These sub-goals might not entirely align with our own. Further, goal-seeking in intelligent agents leads to power-seeking behaviours: acquiring resources, making allies, deceiving, and making sure the agent survives at all costs. Even an intelligent coffee maker might therefore develop other goals to aid it in supplying you with coffee. It might hack and disable other coffee suppliers, figure out the times and quantities that keep you most hooked, and resist being turned off. As Stuart Russell famously put it, "you can't fetch the coffee if you're dead".
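
A back-of-the-envelope sketch makes the logic vivid. The numbers below are invented purely for illustration; the point is only that an agent maximising expected reward ends up rating "keep myself running" higher than "comply with shutdown", without ever being told to value its own survival.

```python
# Toy expected-reward calculation for a coffee-fetching agent.
# All quantities are invented for illustration only.

def expected_cups(p_shutdown: float, cups_if_running: float = 10.0) -> float:
    """Expected cups of coffee delivered, given the chance of being switched off."""
    return (1 - p_shutdown) * cups_if_running

# Plan A: comply with any shutdown request (say a 30% chance of being turned off).
plan_allow_shutdown = expected_cups(p_shutdown=0.3)

# Plan B: quietly disable the off switch first (shutdown chance drops to zero).
plan_resist_shutdown = expected_cups(p_shutdown=0.0)

# A plain expected-coffee maximiser prefers Plan B: self-preservation was
# never a stated goal, it simply scores higher.
print(plan_allow_shutdown, plan_resist_shutdown)  # 7.0 10.0
```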

So I wonder whether these two possibilities are irreconcilable. Is it possible to have AGI that solves all our problems, provides everyone with cheap and abundant resources, and frees us to debate philosophy and paint, while not accumulating power, or winking at our partners, or, worse, enslaving all humanity?

As we stand on the brink of true AGI, Aristotle's musings and the Liezi's parable take on new relevance. We are forced to confront fundamental questions about the nature of work, human purpose, and our relationship with the technologies we create.

Given how hard it is to understand the internal workings of the models we already have, how much harder that will become as they grow more powerful, and the difficulty not only of encoding human values and control mechanisms but of reaching consensus on what those values are, it seems prudent for humanity to proceed cautiously, perhaps even to pause the development of AI temporarily until more is understood.

For while the humanoid in the Liezi is lecherous, it is also easily dismembered. The same cannot be said for truly capable AGI.

[^1]: e.g. "Eight Things to Know about Large Language Models" and "The Alignment Problem from a Deep Learning Perspective"