3. Shaping the long-term future
Bostrom 2002, Beckstead 2013
● AI strategy and policy
● AI safety
● Biosecurity
● AI forecasting
● Broad vs. narrow interventions (Beckstead 2013)
● Maxipok principle (Bostrom 2002)
● X-risk (Bostrom 2002)
● Population ethics?
● What is ‘wellbeing’?
6. Based on Bostrom 2013, Figure 2
[Figure: risks plotted on two axes. Scope: personal, local, global, trans-generational, pan-generational (cosmic). Severity: imperceptible, endurable, crushing (hellish). A fatal car crash is personal in scope and crushing in severity; x-risks sit in the pan-generational, crushing corner.]
7. Zooming in on existential risk
Based on Bostrom 2013, Figure 2
[Figure: close-up of the pan-generational/cosmic, crushing/hellish corner of the scope–severity grid, with extinction marked at pan-generational scope and crushing severity.]
8. Zooming in on existential risk
[Figure: the same close-up, now with s-risk added alongside extinction: s-risks are cosmic in scope and hellish in severity.]
9. What should we do about s-risks?
Not controversial:
• Reducing s-risk is desirable, all else being equal.
But: Should reducing s-risks be a priority?
• How likely are s-risks?
• What interventions are there to reduce s-risks? Do they differ from those reducing extinction risk?
• Who is working on such interventions?
11. What s-risks are there?
By type of intent:
● No intent: accidents, indifference, evolution …
● Evil intent
● Strategic intent: conflicts, threats, bribes, …
12. Tractability?
Targeted interventions
● (Some work in) AI safety
● (Some work in) AI policy
Broad interventions
● International cooperation, expanding moral circle (?)
● Going meta: research anti-s-risk strategies
Suffering-focused AI safety (Gloor 2016)
14. On the neglectedness of s-risk
● Not totally neglected, but gets little attention
● Misconception: existential risk = extinction risk
● The Foundational Research Institute is the only organization
that specifically focuses on reducing s-risk
15. Summary: why and how to prevent s-risk
● Whether to focus on s-risk depends on both empirical and value judgments
● S-risks are not much more unlikely than AI-related extinction risks
● Some, but not all, familiar work on x-risk also reduces s-risk
● Reducing s-risks seems neglected
18. Thank you.
28.06.2016
More on s-risk? Check out
foundational-research.org
max@foundational-research.org | Add me on Facebook! :)