6/16/2024
See “Situational Awareness - The Decade Ahead” by Leopold Aschenbrenner
Overall Assessment
Aschenbrenner’s paper is a comprehensive articulation of the current state of AI development with respect to scaling, power consumption, and lab security. It has an embedded viewpoint on AI alignment that is very limited (really control, not alignment) and arguably not terribly useful. Aschenbrenner views the development of Artificial Superintelligence (ASI) as likely, imminent, and tantamount to a new nuclear threat, from which he argues for a nuclear-weapons-style control regime. I would argue there are serious flaws in that reasoning: (1) yes, ASI is likely, but (2) he greatly underestimates the gap between lab development and widespread deployment (mistaking a clear view for a short distance), and (3) ASI will have massive social implications, yet the nuclear-weapon analogy is not necessarily a good one and leads to proposals of questionable value.
Positioning
“You can see the future first in San Francisco.” “right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness”
- The initial position reveals great myopia. There are more than a few hundred executives in the Fortune 500 alone who are actively tracking ASI development. The core concepts of an ASI breakout have been tracked for over a decade by forecasters such as Tony Seba at RethinkX, Ray Kurzweil, and ARK Invest, and are followed broadly in tech leadership.
There are at least tens of thousands, and likely hundreds of thousands, of people actively tracking these issues. A sample of YouTube channels covering cutting-edge AI research illustrates the scale:
- YouTube subscribers:
- RethinkX: 17k
- Solving the Money Problem: 275k
- ARK Invest: 545k
- AI Explained: 263k
- David Shapiro: 156k
- Matthew Berman: 273k
- (Myopia) Aschenbrenner, like many in San Francisco, confuses the city’s important, even leading, role with an exclusive one. Although SF tends to lead, frontier AI development is worldwide, including London (DeepMind), France (Mistral), and China.
- Furthermore, San Francisco may lead in AI development, but it is not at all clear that it leads in thinking on AI alignment. The San Francisco school approaches AI alignment from the perspective of game theory - i.e., it views alignment as a math problem informed by philosophy. An alternative school (of which I am a part) views it as a social sciences problem - see my paper with Lara Scheibling: The Elephant in the Room: Why AI Safety Demands Diverse Teams. The social sciences school views the math-based school as fundamentally naive about complex interactions between actors with agency.
- Physicist Sabine Hossenfelder described the paper as suffering from “groupthink,” which I think is fair.
- However, I think the paper will prove effective at evangelizing the intelligence breakout more broadly and at promoting a particular point of view on AI safety (although I don’t agree with that view).
- Further, Aschenbrenner deserves credit for such a large-scale (165-page) effort to articulate (what he sees as) the current state of AI alignment and the upcoming impact of Artificial Superintelligence.
Main topics (summary)
- ASI (intelligence explosion) is on the roadmap
- The AI explosion is causing a gigantic infrastructure push for GPUs, data centers, and power capacity
- ASI will be able to evade human control
- ASI is a national security threat tantamount to nuclear weapons, and the government needs to take over security for AI labs
ASI (intelligence explosion) is on the roadmap
- Concur:
- There is reason to think scaling will continue and that obstacles in data, compute, and algorithms will be overcome.
- There is no principled reason to project that AI would not exceed human intellect.
- We can achieve large immediate gains by providing AI with missing components (memory), strategies (chain-of-thought, agents), and tools (web search). Aschenbrenner terms this “unhobbling” (see the sketch after this list).
- Aschenbrenner expects Artificial Superintelligence (which he defines as AI outperforming any human, not all of humanity) by 2027. That’s probably realistic.
- He identifies that AI research is one of the easiest fields to accelerate, since it is entirely virtual.
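To make the “unhobbling” point concrete, here is a minimal sketch of my own (not from Aschenbrenner’s paper) contrasting a bare model call with the same model wrapped in memory, an agent loop, and a tool; llm() and web_search() are hypothetical stand-ins, not a real API:

```python
# Minimal "unhobbling" sketch: the same base model, first called bare, then
# wrapped with memory, an agent loop, and a tool. llm() and web_search() are
# hypothetical placeholders standing in for a real model API and search tool.

def llm(prompt: str) -> str:
    """Stand-in for a base language-model call."""
    return f"[model response to: {prompt[:60]}...]"

def web_search(query: str) -> str:
    """Stand-in for a tool the agent can invoke."""
    return f"[search results for: {query}]"

def hobbled(question: str) -> str:
    # One-shot call: no memory, no tools, no multi-step reasoning.
    return llm(question)

def unhobbled(question: str, max_steps: int = 3) -> str:
    memory = []                                  # persistent scratchpad ("memory")
    for _ in range(max_steps):                   # iterative agent loop ("strategies")
        context = "\n".join(memory)
        thought = llm("Context:\n" + context + "\nQuestion: " + question +
                      "\nThink step by step; write 'search: <query>' to use a tool.")
        memory.append(thought)
        if "search:" in thought:                 # tool use ("tools")
            query = thought.split("search:", 1)[1].strip()
            memory.append(web_search(query))
    notes = "\n".join(memory)
    return llm("Given these notes:\n" + notes + "\nAnswer: " + question)

if __name__ == "__main__":
    print(hobbled("What changed in GPU supply this year?"))
    print(unhobbled("What changed in GPU supply this year?"))
```

The point of the sketch is that none of the wrapping changes the underlying model; the gains come from scaffolding around it, which is why such gains are immediate and cheap relative to training a new model.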
- Critique:
- He actually underestimates AI’s current IQ as measured in various capability tests (Lifearchitect.ai/iq-testing-ai), which currently assess it at a high general (PhD) level.
- (Myopia) This idea is neither new nor unique to San Francisco; Ray Kurzweil laid it out in The Singularity is Near in 2005, referencing Vernor Vinge’s 1993 essay "The Coming Technological Singularity." YouTube channels that cover it tend to have on the order of ~250k subscribers.
- (Myopia) His model of an ASI explosion is one big ASI undergoing an intelligence explosion that overwhelms every other intelligence. This is an old idea (book: The Metamorphosis of Prime Intellect, 2002; film: Transcendence, 2014). But the current landscape has many diverse actors close in capability, more like the microcomputer revolution; it is more likely we will have multiple competing AI core models - perhaps very many.
- Further, in many domains, such as navigating regulation and building physical objects, progress cannot move at the blistering speed inherent to digital intelligence.
- (Myopia) Superintelligent AI breakout is still limited by the time to build and improve things in the physical world; it provides immediate improvements in some areas but not in others. Even with a large deployed android base (coming but not here), operations in the physical world take time.
- Even in the virtual world, where there is no theoretical constraint on an AI that can code improving its own code, there are non-obvious bottlenecks like simple complexity management. For example, today’s models can upgrade single files in a code project, but if you ask them to migrate the project itself from JBoss to Spring Boot, they lack sufficient strategic planning and execution capability to do so, even though they have the intelligence.
- It’s dangerous to propose that an observed scaling law will suddenly steepen. The observed scaling trend already factors in all of the acceleration and drag forces.
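For concreteness, this is the general shape of the empirical scaling laws being referenced, written in the Chinchilla-style parametric form; the form is standard, but the constants below are placeholders for illustration, not the published fit values:

```latex
% Chinchilla-style parametric loss fit (Hoffmann et al., 2022), shown only to
% illustrate what "the scaling law" refers to; A, B, E, alpha, beta are
% illustrative placeholders, not the published values.
L(N, D) \;\approx\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
% N = number of parameters, D = number of training tokens,
% E = irreducible loss. A claim that progress will "suddenly increase" amounts
% to claiming the fitted exponents alpha and beta will shift, which the data
% behind the fit already constrain.
```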