Those who know me well know that I've got some hot takes on AGI. It's a fraught discussion for several reasons, but one of the biggest is that AGI means very different things to different people. I'm certain there's some person out there with a reasonably sound definition of AGI, but for now such definitions are a trace minority.
Much more commonly, when I hear people talk about the kinds of things they want AGI to achieve, I'm struck by how expansive the abilities are. Here is a sample of the capabilities I've heard pitched for AGI:
Solve global warming
Cure cancer
Create a post-scarcity economy
Eliminate traffic
Manipulate public opinion
Predict the future
Automate scientific inquiry
Automate programming
Read your mind
The list goes on. The longer I spend diving into the annals of AGI speculation, the more I come to believe it's nothing more than a kind of masturbatory navel-gazing at the throne of our own intellect. It's a kind of nigh-omnipotence that blindly asserts our problems might be solved with just a little more thinking; that somehow 7 billion people isn't enough human intelligence.
I think that intelligence is a far more singular thing than people recognize. Rather than existing along a strict, ordered spectrum, it's a messy, complicated space of planning, emotion, and behavior. There are plenty of things a crow can do that I can't, and I think it gets tricky to draw the line on which of those things count as "intelligence" and which don't.
So why am I saying this? Because I wanted to explain where I get the notion that AGI doesn't exist. It's... not a coherent concept. In fiction terms, I'd argue it's not even hard worldbuilding; even in fiction, AGI lacks a consistent set of rules. Indeed, even to humans, Asimov's rules are more riddle than instruction.
Listen to these thoughts and more on AI and the Future of Work: