This document presents four general AI worldviews, formed by crossing optimism vs. skepticism about AI with realism vs. absolutism about its trajectory: Optimistic AI Realists ("Modernizers"), Skeptical AI Realists ("Preservationists"), Accelerationist AI Absolutists ("Cornucopians"), and Decelerationist AI Absolutists ("Catastrophists"). It arranges these perspectives on a linear spectrum as well as in a "horseshoe theory" view. The AI realists agree that AI is advancing but differ on its risks and the right policies; the AI absolutists agree that AI is world-changing but disagree on whether that change will be positive or negative. Most practical governance will likely come from the pragmatic AI realists, though the absolutists influence debates and the realists may ally with them on certain issues.
2. 4 General AI Worldviews
A 2x2 grid crossing AI Optimists vs. AI Skeptics with AI Realists vs. AI Absolutists:
• AI Realists: "Modernizers" (optimistic) and "Preservationists" (skeptical)
• AI Absolutists: "Accelerationists" / "Cornucopians" (optimistic) and "Decelerationists" / "Catastrophists" (skeptical)
Source: Adam Thierer, R Street Institute
4. Linear Spectrum of AI Perspectives
The spectrum runs across four positions, each with a view of AGI, a view of change, and a preferred policy approach:

Accelerationist AI Absolutist ("Cornucopian"):
• desires powerful AGI; believes it will bring abundance
• wants fast change; boost its speed
• get obstacles out of the way; boost AGI R&D

Optimistic AI Realist ("Modernizer"):
• uncertain about AGI, but embraces AI's potential for good
• open to change; modernize policies
• pragmatic, flexible governance standards + enforce existing policies / common law

Skeptical AI Realist ("Preservationist"):
• uncertain about AGI; more worried about current AI capabilities
• cautious of change; preserve past policies
• centralized, top-down regulations + expand existing policies & add many new regs

Decelerationist AI Absolutist ("Catastrophist"):
• fears powerful AGI; believes it will bring catastrophe
• fearful of change; seeks to slow or stop progress
• sweeping new global controls & agencies; licensing & surveillance regs
5. Linear approach offers one way to cut things
• Linear approach highlights similarities between “AI Accelerationists”
and “Optimistic AI Realists”
• Both generally excited about AI possibilities
• Both generally want to give AI innovation a policy green light
• Both generally favor more market-oriented approaches
• Linear approach also highlights similarities between “AI
Decelerationists” and “Skeptical AI Realists”
• Both generally concerned about AI, though to different degrees
• Both seeking greater precautionary controls / regulation
• But there’s a different way to cut these worldviews: “Horseshoe
Theory” (or the idea that the extremes meet in some ways) …
6. We can also find agreement among realists and absolutists
7. Horseshoe Theory of AI Perspectives
The two ends of the spectrum bend toward each other, pairing the realists together and the absolutists together.

AI Realists (Optimistic "Modernizers" and Skeptical "Preservationists"):
• agree AI is advancing, but understand society & govt can shape it
• generally reject extreme forecasts & singularitarian thinking

AI Absolutists (Accelerationist "Cornucopians" and Decelerationist "Catastrophists"):
• agree AI is a world-changing force (especially AGI)
• often speak in deterministic terms about inevitability of AGI
• favor extreme policy approaches
8. AI Worldviews: The “AI Realists”
AI realists: divided between "Modernizers" and "Preservationists"
• Both agree AI is advancing, but understand society & govt can shape it
• Both generally reject extreme AGI forecasts / singularitarian thinking
• Both focused on near-term issues and more pragmatic steps to achieve goals
• But disagree about nature of AI risks & how/when to address them:
"Modernizers":
• more open to change & optimistic about positive AI developments
• prefer AI governance handled by best practices / decentralized mechanisms
• believe laws & institutions should adapt to ensure innovation can happen

"Preservationists":
• more skeptical of change & focused on negative AI developments
• prefer AI policy enforced via top-down law/regulation
• goal is to preserve various values or institutions; expand regs to do so
9. AI Worldviews: The “AI Absolutists”
AI absolutists: divided between "Accelerationists" and "Decelerationists"
• Both argue that AI is a world-changing force (especially AGI)
• Both often speak in deterministic terms about inevitability of AGI
• Share longtermist “we’re out to save the world” mentality (but in different ways)
• But they disagree bitterly about whether AGI is good or bad for humanity:
"Accelerationists":
• "cornucopian": believe AGI brings abundance (possibly singularity)
• want fast change: Let 'er rip!
• get all governance obstacles out of the way

"Decelerationists":
• "catastrophist": believe AGI brings disasters (possibly extinction)
• want to slow or stop AI advances
• call for sweeping global regulatory controls & new agencies
10. Some general thoughts / takeaways
• AI absolutists are dominating media / policy discussions today
• Extreme rhetoric & predictions grab headlines & attention more easily
• But hard for absolutists to gain traction with extreme positions / solutions
• AI realists are very frustrated that the absolutists get so much attention
• But the realists offer more practical governance solutions
• Ultimately, most AI policy will be shaped by the realists
However…
• “Modernizer” realists will make selective alliances with “accelerationists”
on some issues when opposing some new AI regulations
• “Preservationist” realists will make selective alliances with
“decelerationists” on some issues when calling for new AI regulations
• This reflects broader tech governance debate between “permissionless
innovation” vs. the “precautionary principle”
11. General Spectrum of Technological Governance Options
The options below run from the bottom-up (ex post) "Permissionless Innovation" end of the spectrum to the top-down (ex ante) "Precautionary Principle" end:
• Competition
• Contracts
• Property rights
• Social norms
• Learning / coping
• Self-regulation
• Best practices
• Educational steps
• Transparency
• Sandboxes
• Consumer protection
• Product recalls
• Licensing
• Permits
• Pre-market approvals
• Restrictive defaults
• Agency "nudges"
• Other mandates
• Product bans
• Entry barriers
• State ownership
• Surveillance
• Censorship