We Are Nowhere Near AGI. That's Bad for Animal Rights.
Much ink has been spilled over the last few years about the threat, and perhaps opportunity, posed by the development of so-called Artificial General Intelligence (AGI), i.e., a human-level intelligence that can solve a general class of problems and thereby transform human society. By creating an infinite army of knowledge workers, AGI could supercharge economic development, scientific research, and (importantly) further developments in AI. Many fear that self-improving AI could create a superintelligence that destroys humanity. After all, the record so far in Earth’s history is that more intelligent beings, such as human beings, destroy those who are cognitively inferior.
But the most recent set of model releases has led many of the leading figures in AI, including Turing Award winners Rich Sutton and Yann LeCun, former head of Tesla’s AI division Andrej Karpathy, and, most recently, Ilya Sutskever, to conclude that AGI is nowhere in sight. Sutskever is arguably the most influential figure in the history of AI research. His AlexNet paper in 2012 launched the revolution in deep learning, the approach to AI loosely modeled on the neuronal connections of the human brain. He later led the research at OpenAI that produced ChatGPT. But most recently, he has become known for something else: predicting that the development of AI is slowing.
Two bottlenecks, in particular, have appeared, according to Sutskever. The first is the possible end of the “scaling laws.” For nearly a decade, when AI models were trained on larger amounts of data, with larger amounts of compute, to build models with a larger number of neuronal connections, they became predictably more intelligent. But around the time Sutskever left OpenAI, he predicted that these scaling laws would break down. His prediction has now proven true. The latest round of models, including OpenAI’s GPT-5, has failed to impress. Given that around 1% of all electricity is already being used to train AI, and each generation of models requires about 10x as much energy as the last, the breakdown of the scaling laws means that we simply don’t have enough energy to keep pushing toward AGI.
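For the curious, here is a rough sketch of what those scaling laws say and why the energy math bites. The formula and fitted constants below are the widely cited Chinchilla fit (Hoffmann et al., 2022), not anything specific to Sutskever’s argument, and the energy figures simply reuse the rough numbers above.

```python
# Sketch only: the "scaling laws" are empirical fits of pretraining loss to
# model size and data, e.g. the Chinchilla form L(N, D) = E + A/N^a + B/D^b.
# The constants are the commonly cited fitted values, assumed here for illustration.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x jump in scale buys a smaller and smaller drop in predicted loss.
for n in (1e9, 1e10, 1e11, 1e12):                  # 1B to 1T parameters
    print(f"{n:.0e} params -> predicted loss ~{chinchilla_loss(n, 20 * n):.3f}")

# Meanwhile the energy cost compounds the other way: starting from ~1% of
# electricity and multiplying by ~10x per generation (the article's figures),
# two further generations would already demand the entire supply.
share = 0.01
for gen in (1, 2):
    share *= 10
    print(f"+{gen} generation(s): ~{share:.0%} of all electricity")
```

The point of the sketch is the mismatch: each additional order of magnitude of scale buys a shrinking improvement, while the energy bill grows by an order of magnitude.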
The second bottleneck, however, was not identified until a few weeks ago, when Sutskever sat down with prominent tech podcaster Dwarkesh Patel to discuss his latest startup, Safe Superintelligence. In one of the most remarkable exchanges in the history of AI research, and one that received little commentary, Sutskever made another bold prediction: the conventional approach to AI, focused on self-improving AI, will inevitably fail. Only an approach focused on building AI “robustly aligned with caring about sentient life” can succeed. As I wrote on X:
Is the future of AI “caring for all sentient beings”?
I think @dwarkesh_sp missed the most important point in his recent conversation with OpenAI co-founder @ilyasut. Ilya says that other AI companies are making a mistake by “building self-improving AI” and suggests instead they should be “building AI aligned to care about sentient life.” Dwarkesh seems to think Ilya is making a moral point about alignment, i.e., that AI *should* care about sentient beings. But he’s not. He’s making a scientific prediction: that AI *will* care about sentient beings as an emergent property of developing intelligence.
The reason is that true intelligence may require empathy. We can’t efficiently model the world around us without imagining what it’s like to be other beings. Indeed, this appears to be how humans and other animals became intelligent. We have neural circuitry, e.g., the mirror neuron, that allows us to simulate the feelings of the beings around us, so we can learn from that simulation.
This social learning is incredibly important because it’s by far the most complex intelligence challenge of human life. It’s why we can send a rocket to hit a precise target on the moon a few days from now, but we can’t predict the movement of the stock market 5 minutes from the present. The stock market prediction requires social intelligence, which is very hard.
Indeed, it’s so hard that achieving social intelligence, through empathy, may be the best path to superintelligence. If Ilya is right, and I think he might be, then AI development will be a powerful force for good. The smarter it gets, the more empathic it will get.
I am deeply skeptical of AI progress. As I’ve written previously, there are fundamental differences between the animal brain and modern LLMs like ChatGPT that suggest the latter will inevitably fail at generating novel insights. But this exchange with Sutskever suggests that, if I’m right, the stalling of AI may be an unfortunate outcome for animals. Perhaps their best hope lies not in human moral progress, but in AI moral progress supercharged by learning through empathy.


