5. International Control System
Friendly AI
Plan B
Survive the catastrophe
Leave backups
Study and Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Teaching rationality (LessWrong)
• Slowing other AI projects (recruiting scientists)
• Free FAI education, starter packages in programming
Solid Friendly AI theory
• Human values theory and decision theory
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement
• A clear theory that is practical to implement
Seed AI
Creation of a small AI capable
of recursive self-improvement and based on
Friendly AI theory
Superintelligent AI
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary death,
and existential risks
• AI Nanny – one hypothetical variant of super
AI that only acts to prevent
existential risks (Ben Goertzel)
Preparation
• Fundraising and promotion
• Textbook to rebuild civilization
(Dartnell’s book “Knowledge”)
• Hoards with knowledge, seeds and
raw materials (Doomsday vault in
Norway)
• Survivalist communities
Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Rebuilding civilisation after catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Time capsules
with information
• Underground storage
with information and DNA for future
non-human civilizations
• Eternal disks from Long Now
Foundation (or M-disks)
Preservation of
earthly life
• Create conditions for the re-emergence of new intelligent life on Earth
• Directed panspermia (Mars, Europa, space dust)
• Preservation of biodiversity and highly developed animals
(apes, habitats)
Prevent x-risk research
because it only increases risk
• Do not advertise the idea of man-made
global catastrophe
• Don’t try to control risks as it would only
give rise to them
• As we can’t measure the probability of global catastrophe, it may be unreasonable to try to change the probability
• Do nothing
Plan of Action to Prevent Human Extinction Risks
Singleton
• “A world order in which there is a
single decision-making agency
at the highest level” (Bostrom)
• Worldwide government system based on AI
• Super AI which prevents all possible risks and
provides immortality and happiness to humanity
• Colonization of the solar system, interstellar travel
and Dyson spheres
• Colonization of the Galaxy
• Exploring the Universe
Controlled regression
• Use a small catastrophe to prevent a large one (Willard Wells)
• Luddism (Kaczynski):
relinquishment of dangerous science
• Creation of ecological civilization without technology
(“World made by hand”, anarcho-primitivism)
• Limitation of personal and collective intelligence to
prevent dangerous science
Global
catastrophe
Plan D
Improbable
Ideas
Messages to
ET civilizations
• Interstellar radio messages with encoded
human DNA
• Hoards on the Moon and frozen brains with information about humanity
Quantum immortality
• If the many-worlds interpretation of QM is true, an observer will survive any sort of
death including any global catastrophe (Moravec, Tegmark)
• It may be possible to create an almost one-to-one correspondence between observer survival and the survival of a group of people (e.g. if all are in a submarine)
Unfriendly AI
• Kills all people and maximizes non-human values
(paperclip maximiser)
• People are alive but suffer extensively
Reboot of the civilization
• Several reboots may happen
• Finally there will be total collapse
or a new supercivilization level
Interstellar distributed
humanity
• Many unconnected human civilizations
• New types of space risks (space wars, planets and stellar
explosions, AI and nanoreplicators, ET civilizations)
Attracting good
outcome by positive thinking
• Preventing negative thoughts
about the end of the world and about
violence
• Maximum positive attitude
“to attract” positive outcome
• Secret police which use mind superpowers to stop such thoughts
• Start partying now
First quarter of the 21st century (2015–2025)
High-speed
tech development
needed to quickly pass
risk window
• Investment in super-technologies (nanotech, biotech)
• High-speed technological progress helps to overcome the slow process of resource depletion
• Invest more in defensive technologies than in offensive
Improving human
intelligence and
morality
• Nootropics, brain stimulation, and gene therapy for
higher IQ
• New rationality: Bayes probabilities, interest in long term
future, LessWrong
• High empathy for new geniuses is
needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Lower proportion of destructive beliefs and risky behaviour
• Engineered enlightenment: use brain science to make people more united and less aggressive; open the realm of the spiritual world to everybody
• Prevent the worst forms of capitalism: the desire for short-term monetary reward and for changing the rules to get it. Stop greedy monopolies
Saved by non-human
intelligence
• Maybe extraterrestrials
are looking out for us and will
save us
• Send radio messages into space
asking for help if a catastrophe is
inevitable
• Maybe we live in a simulation
and simulators will save us
• The Second Coming,
a miracle, or life after death
Timely achievement of immortality
on highest possible level
• Nanotech-based immortal body
• Mind uploading
• Integration with AI
Miniaturization for survival and invincibility
• Earth crust colonization by miniaturized nano-tech bodies
• Moving into simulated world inside small self sustained computer
Rising
Resilience
AI based on uploading
of its creator
• Friendly to the value system of its creator
• Its values consistently evolve during its self-improvement
• Limited self-improvement may solve friendliness
problem
Bad plans
Depopulation
• Could provide resource preservation and make
control simpler
• Natural causes: pandemics, war, hunger
(Malthus)
• Birth control (Bill Gates)
• Deliberate small catastrophe (bio-weapons)
Unfriendly AI may be better
than nothing
• Any super AI will have some
memory about humanity
• It will use simulations of human
civilization to study the probability of
its own existence
• It may share some human
values and distribute them
through the Universe
Strange strategy
to escape
Fermi paradox
Random strategy may help us to escape
some dangers that killed all previous
civilizations in space
Interstellar travel
• “Orion” style, nuclear powered “generation ships” with colonists
• Starships which operate on new physical principles with immortal people on board
• Von Neumann self-replicating probes with human embryos
Temporary
asylums in space
• Space stations as temporary asylums
(ISS)
• Cheap and safe launch systems
Colonisation of the Solar system
• Self-sustaining colonies on Mars and large asteroids
• Terraforming of planets and asteroids using
self-replicating robots and building space colonies there
• Millions of independent colonies inside asteroids and comet bodies
in the Oort cloud
Space
Colonization
Technological
precognition
• Prediction of the future based
on advanced quantum technology and avoiding
dangerous world-lines
• Search for potential terrorists
using new scanning technologies
• Special AI to predict and prevent new x-risks
Robot-replicators in space
• Mechanical life
• Preservation of information about humanity for billions of years
• Safe narrow AI
Resurrection by
another civilization
• Creation of a civilization which has a lot of common
values and traits with humans
• Resurrection of concrete people
Risk control
Technology bans
• International ban on dangerous technologies or
voluntary relinquishment (such as not creating new
• Lowering international confrontation
• Lock down all risk areas beneath piles of
bureaucracy, paperwork, safety requirements
• Concentrate all bio, nano and AI research in several controlled
centers
• Control over dissemination of knowledge of mass destruction
• Control over suicidal or terrorist ideation in bioscientists
• Investments in general safety: education, facilities, control,
checks, exams
• Limiting and concentrating in one place an important
resource: dangerous nuclear materials, high level scientists
Technology speedup
• Differential technological development: develop safety and control technologies first
• Laws and economic stimulus (Richard Posner,
carbon emissions trade)
Surveillance
• International control systems (like IAEA)
• Internet control
Worldwide risk
prevention authority
• Worldwide video-surveillance and control
• “The ability when necessary to mobilise a
strong global coordinated response to
anticipated existential risks” (Bostrom)
• Center for quick response to any emerging risk; x-risk police
• A system of international treaties
• Robots for emergency liquidation of
bio-and nuclear hazards
Active shields
• Geoengineering against global warming
• Worldwide missile defence and
anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Bio-shield – worldwide immune system
• Mind-shield – control of dangerous ideation by means of brain implants
• Merger of the state, Internet
and worldwide AI in uniform
monitoring, security and control system
• Isolation of risk sources at great distance
from Earth
Research
• Create, prove and widely promote
long term future model (this map is
based on exponential tech development
future model)
• Invest in long term prediction studies
• Comprehensive list of risks
• Probability assessment
• Prevention roadmap
• Determine the most probable and most easily prevented risks
• Education of “world saviors”: choose
best students, provide them courses,
money and tasks
• Create an x-risks wiki and internet forum able to attract the best minds, but also open to everyone and well moderated
• Solve the problem that different
scientists ignore each other
(“world saviours arrogance” problem)
• Integrate different lines of thinking
about x-risks
• Lower the barriers to entry into the field
• Unconstrained funding of x-risk research for many different approaches (Bostrom) to create high-quality x-risk research
• Study the existing system of decision making in the UN; hire a lawyer
• Integration of existing plans to prevent
concrete risks (e.g. “Climate plan” by
Carana)
• General theory of safety and risks
prevention
• War for world domination
• One country uses bioweapons to kill all
world’s population except its own which is
immunized
• Use of a super-technology (like nanotech)
to quickly gain global military dominance
• Blackmail by Doomsday Machine for world
domination
Global
catastrophe
Fatal mistakes in world
control system
• Wrong command makes shield system
attack people
• Geo-engineering goes awry
• World totalitarianism leads to catastrophe
Social support
with shared knowledge
and productive rivalry (CSER)
• Popularization (articles, books, forums, media): show the public that x-risks are real and diverse, but that we can and should act
• Public support, street action
• Political support (parties)
• Peer-reviewed journal, conferences, intergovernmental panel, international institute
• Political parties for x-risks prevention
• Productive cooperation between
scientists and society based on trust
Elimination of
certain risks
• Universal vaccines, UV cleaners
• Asteroid detection (WISE)
• Transition to renewable energy, cutting emissions,
carbon capture
• Stop LHC, SETI and METI until AI is created
• Rogue countries integrated, crushed, or prevented from having dangerous weapons programs and advanced science
International
cooperation
• All states in the world contribute to the UN to prevent x-risks
• States cooperate directly without UN
• A group of supernations takes responsibility for x-risk prevention (US, EU, BRICS) and signs a treaty
• A smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war): some movement or event causing a paradigmatic change so that humanity becomes more existential-risk aware
• International law about x-risks which will punish people for raising risk (underestimating it, plotting it, neglecting it) and reward people for efforts in their prevention
• The ability to quickly adapt to new risks and
envision them in advance.
• International agencies dedicated to certain
risks
Values
transformation
• Raise public desire for life extension and global security
• Reduction of radical religious (ISIS) or nationalistic values
• Popularity of transhumanism
• Change the model of the future
• “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom)
• The value of the indestructibility of civilization at the international level and as a goal of every nation
• “Sustainability should be reconceptualised in dynamic terms, as aiming for a
sustainable trajectory rather than a sustainable state” (Bostrom)
• Movies, novels and other works of art that honestly depict x-risks and motivate their prevention
• Translate the best x-risk articles and books into major languages
• Memorial and awareness days: Earth day, Petrov day, Asteroid day
• Rationality and effective altruism
• Education in schools on x-risks, safety and rationality topics
Global
catastrophe
Manipulation of the
extinction probability
using Doomsday argument
• A commitment to create more observers if an unfavourable event X starts to happen, thus lowering its probability (Bostrom’s “UN++” method)
• Lowering the birth rate to gain more time for the civilization
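The Doomsday argument referenced above can be made concrete with a small Bayesian calculation. A minimal sketch follows; the two hypotheses, the uniform prior, and the birth-rank figure are illustrative round numbers, not part of the source.

```python
# Illustrative Carter-Leslie Doomsday argument: update a prior over the
# total number of humans who will ever live, given one's birth rank,
# under the Self-Sampling Assumption P(rank n | total N) = 1/N for n <= N.
# Hypotheses, priors, and rank below are made-up round numbers.

def doomsday_posterior(rank, totals, priors):
    """Posterior over total-population hypotheses given a birth rank."""
    likelihoods = [1.0 / n if rank <= n else 0.0 for n in totals]
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# "Doom soon" (200 billion humans total) vs "doom late" (200 trillion),
# observed from a birth rank of roughly 100 billion.
posterior = doomsday_posterior(100e9, [200e9, 200e12], [0.5, 0.5])
# The 1/N likelihood shifts nearly all posterior mass to the smaller total.
```

The “UN++” idea in the bullet above works in the opposite direction: committing to create many more observers conditional on event X makes a low birth rank unlikely under X, which the argument’s logic then counts as evidence against X.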
Control of the simulation
(if we are in it)
• Live an interesting life so our simulation isn’t switched off
• Don’t let them know that we know we live in a simulation
• Hack the simulation and control it
• Negotiate with the simulators or pray for help
Second half of the 21st century (2050–2100)
AI practical studies
• Narrow AI
• Human emulations
• Value loading
• Promotion of FAI theory to most AI teams; they agree to implement it and adapt it to their systems
• Tests of FAI on non self-improving models
Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
Improving
sustainability
of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings
and habitats
• Increasing diversity and increasing
connectedness of our civilization
• Universal methods of prevention (resistant structures, strong medicine)
• Building reserves (food stocks,
seeds, minerals, energy, machinery, knowledge)
• Widely distributed civil defence, including air
and water cleaning systems, radiation meters,
gas masks, medical kits etc
Practical steps to
cope with certain
risks
• Instruments to capture radioactive dust
• Small bio-hack groups around the world could improve the public’s understanding of biotechnology to the point where no one accidentally creates a biotechnology hazard
• Developing better guidance on safe biotechnology processes and exactly why it’s safe this way and not otherwise, effectively “raising the sanity waterline”
• Dangerous forms of BioTech become centralised
• DNA synthesizers are constantly connected to the
Internet and all DNA is checked for dangerous frag-
ments
• Funding things like clean-rooms with negative pres-
sure
• Methane and CO2 capture, possibly by bacteria
• Invest in the biodiversity of the food supply chain and prevent pest spread: better quarantine laws, shipping foodstuffs in medium-sized bulk, sterilization
• “Develop and stockpile new drugs and vaccines,
monitor biological agents and emerging diseases, and
strengthen the capacities of local health systems to
respond to pandemics” (Matheny)
• Better quarantine laws
• Mild birth control (female education)
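One bullet above imagines DNA synthesizers that are always online and check every order for dangerous fragments. A minimal sketch of such a screen follows; the blocklist entries and the order sequence are invented for illustration, and a real screen would match against curated sequence-of-concern databases rather than literal substrings.

```python
# Toy screen of a DNA synthesis order against a blocklist of hazardous
# fragments, as the bullet imagines. Fragments here are invented; real
# screening uses curated pathogen-sequence databases.

DANGEROUS_FRAGMENTS = {"ATGCGTACGTTAGC", "GGCCTTAAGGCCTA"}  # hypothetical

def reverse_complement(seq):
    """Reverse complement of a DNA sequence (A<->T, G<->C)."""
    return seq.translate(str.maketrans("ATGC", "TACG"))[::-1]

def screen_order(sequence, blocklist=DANGEROUS_FRAGMENTS):
    """Return the blocklisted fragments present in the ordered sequence,
    checking both the given strand and its reverse complement."""
    rc = reverse_complement(sequence)
    return {f for f in blocklist if f in sequence or f in rc}

hits = screen_order("AAATGCGTACGTTAGCAAA")  # order embeds the first fragment
```

Checking the reverse complement matters because a flagged sequence can be ordered as its opposite strand and still encode the same hazard.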
Cold war and WW3 prevention
• International court or secret institution
• Large project which could unite humanity, like pandemic prevention
• Rogue countries integrated, based on dialogue and appreciating their
values
• Peaceful integration of national states
Dramatic social changes
It could include many interesting but different topics: the demise of capitalism, a hipster revolution, internet connectivity, the global village, the dissolving of national states.
Changing the way politics works so that the policies implemented actually have empirical backing based on what we know about systems.
Steps
and
timeline
Exact dates depend on
the pace of technological progress
and of our activity to prevent risks
Step 1
Planning
Step 3
First level of defence
on low-tech level
Step 4
Second level of defence
on high-tech level
Step 5
Reaching indestructibility
of the civilization with
near zero x-risks
Step 2
Preparation
Second quarter of the 21st century (2025–2050)
High-tech bunkers
• Nano-tech based adaptive structures
• Small AI with human simulation inside space rock
Plan A
Prevent the catastrophe
Space colonies on large planets
• Creation of space colonies on the Moon and Mars (Elon Musk)
Ideological
payload of new
technologies
Space tech - Mars as backup
Electric car - sustainability
Self-driving cars - risks of AI and value of human life
Facebook - empathy and mutual control
Open source - transparency
Psychedelic drugs - empathy
Computer games and brain stimulation - virtual world
Design new monopoly tech with special ideological
payload
Reactive and
Proactive
approach
Reactive: react to the most urgent and most visible risks.
Pros: good timing, visible results, correct resource allocation, investment only in real risks. Good for slow risks, like pandemics.
Cons: cannot react to fast emergencies (AI, asteroid, collider failure).
Risks are ranked by urgency.
Proactive: envision future risks and build a multilevel defence.
Pros: good for coping with principally new risks; enough time to build defences.
Cons: resources may go to merely imaginable risks; no clear reward; problems with identifying new risks and discounting mad ideas.
Risks are ranked by probability.
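The two ranking rules above can be contrasted in a toy prioritizer; the risk entries and scores below are invented for illustration.

```python
# Toy prioritizer contrasting the two approaches described above:
# reactive ranking orders risks by urgency, proactive by probability.
# The risks and their scores are invented for illustration.

risks = [
    {"name": "pandemic", "urgency": 0.9, "probability": 0.3},
    {"name": "unfriendly AI", "urgency": 0.2, "probability": 0.5},
    {"name": "asteroid", "urgency": 0.1, "probability": 0.1},
]

def reactive_order(rs):
    """Most urgent, most visible risks first."""
    return sorted(rs, key=lambda r: r["urgency"], reverse=True)

def proactive_order(rs):
    """Envisioned risks ranked by probability."""
    return sorted(rs, key=lambda r: r["probability"], reverse=True)

reactive = [r["name"] for r in reactive_order(risks)]
proactive = [r["name"] for r in proactive_order(risks)]
# The same portfolio yields different priorities under the two rules.
```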
Decentralized
monitoring
of risks
Decentralized risk monitoring
• Transparent society: everybody could monitor everybody
• Decentralized control (local police, net solutions): local police could handle local crime, and x-risk peers could control their neighbourhood in their professional space; “Anonymous”-style hacker groups; Google search control
• “Anonymous” style hacker groups: large group of “world saviours”
• Net based safety solutions
• Laws and economic stimulus (Richard Posner,
carbon emissions trade): prizes for any risk found and prevented
• World democracy based on internet voting
• Mild monitoring of potential signs of dangerous activity (internet), using deep learning
• High level of horizontal connectivity between people
Useful ideas to limit
catastrophe scale
Limit the impact of a catastrophe by implementing measures to slow its growth and the areas it affects. Quarantine; improve the capacity for rapid production of vaccines in response to emerging threats; create or grow stockpiles of important medical countermeasures.
Increase the time available for preparation by improving monitoring and early-detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.
Worldwide x-risk prevention exercises
6. Plan A is complex: we should do all of it simultaneously: international control, AI, robustness, and space
International
Control
System
Friendly AI Rising
Resilience
Space
Colonization
Decentralized
monitoring
of risks
7. Plans B, C and D have smaller chances of success
Survive
the catastrophe
Leave
backups
Plan D
Deus ex
machina
Bad
plans
8. Plan A1.1
International Control System
Risk control
Technology bans
• International ban on dangerous technologies or
voluntary relinquishment (such as not creating new
• Freezing potentially dangerous projects for 30 years
• Lowering international confrontation
• Lock down all risk areas beneath piles of
bureaucracy, paperwork, safety requirements
• Concentrate all bio, nano and AI research in several controlled
centers
• Control over dissemination of knowledge of mass destruction
• Control over suicidal or terrorist ideation in bioscientists
• Investments in general safety: education, facilities, control,
checks, exams
• Limiting and concentrating in one place an important
resource: dangerous nuclear materials, high level scientists
Technology speedup
• Differential technological development: develop safety and control technologies first
• Laws and economic stimulus (Richard Posner,
Surveillance
• Internet control
Worldwide risk
prevention authority
• Worldwide video-surveillance and control
• “The ability when necessary to mobilise a
strong global coordinated response to
anticipated existential risks” (Bostrom)
• Center for quick response to any emerging risk; x-risk police
• A system of international treaties
• Robots for emergency liquidation of
bio-and nuclear hazards
Active shields
• Geoengineering against global warming
• Worldwide missile defence and
anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Mind-shield – control of dangerous ideation by means of brain implants
• Merger of the state, Internet
and worldwide AI in uniform
monitoring, security and control system
• Isolation of risk sources at great distance
from Earth
Research
• Create, prove and widely promote
long term future model (this map is
based on the exponential tech development future model)
• Invest in long term prediction studies
• Comprehensive list of risks
• Probability assessment
• Prevention roadmap
• Determine the most probable and most easily prevented risks
• Education of “world saviors”: choose
best students, provide them courses,
money and tasks
• Create an x-risks wiki and internet forum able to attract the best minds, but also open to everyone and well moderated
• Solve the problem that different
scientists ignore each other
• Integrate different lines of thinking
about x-risks
• Lower the barriers to entry into the field
• Unconstrained funding of x-risk research for many different approaches (Bostrom) to create high-quality x-risk research
• Study the existing system of decision making in the UN; hire a lawyer
• Integration of existing plans to prevent
concrete risks (e.g. “Climate plan” by Carana)
• General theory of safety and risks
prevention
Social support
with shared knowledge
• Popularization (articles, books, forums, media): show the public that x-risks are real and diverse, but that we can and should act
• Public support, street action
• Peer-reviewed journal, conferences, intergovernmental panel, international institute
• Political parties for x-risks prevention
• Productive cooperation between
scientists and society based on trust
International
cooperation
• All states in the world contribute to the UN to prevent x-risks
• States cooperate directly without UN
• A group of supernations takes responsibility for x-risk prevention (US, EU, BRICS) and signs a treaty
• A smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war): some movement or event causing a paradigmatic change so that humanity becomes more existential-risk aware
• International law about x-risks which will punish people for raising risk (underestimating it, plotting it, neglecting it) and reward people for efforts in their prevention
• The ability to quickly adapt to new risks and
envision them in advance.
• International agencies dedicated to certain
risks
9. Research
• Create, prove and widely promote
long term future model (this map is
based on exponential tech development
future model)
• Invest in long term prediction studies
• Comprehensive list of risks
• Probability assessment
• Prevention roadmap
• Determine the most probable and most easily prevented risks
• Education of “world saviors”: choose
best students, provide them courses,
money and tasks
• Create an x-risks wiki and internet forum able to attract the best minds, but also open to everyone and well moderated
• Solve the problem that different
scientists ignore each other
(“world saviours arrogance” problem)
• Integrate different lines of thinking
about x-risks
• Lower the barriers to entry into the field
• Unconstrained funding of x-risk research for many different approaches (Bostrom) to create high-quality x-risk research
• Study the existing system of decision making in the UN; hire a lawyer
• Integration of existing plans to prevent
concrete risks (e.g. “Climate plan” by
Carana)
• General theory of safety and risks
prevention
Step 1
Planning
Plan A1.1
International
Control System
10. Social support
with shared knowledge
• Peer-reviewed journal, conferences, intergovernmental panel, international institute
• Political parties for x-risks prevention
Step 2
Preparation
Plan A1.1
International
Control System
11. International
cooperation
• All states in the world contribute to the UN to prevent x-risks
• States cooperate directly without UN
• A group of supernations takes responsibility for x-risk prevention (US, EU, BRICS) and signs a treaty
Step 3
First level of defence
on low-tech level
Plan A1.1
International
Control System
12. Risk control
Technology bans
• International ban on dangerous technologies or
voluntary relinquishment (such as not creating new
• Freezing potentially dangerous projects for 30 years
• Lowering international confrontation
• Lock down all risk areas beneath piles of
bureaucracy, paperwork, safety requirements
• Concentrate all bio, nano and AI research in several controlled
centers
• Control over dissemination of knowledge of mass destruction
• Control over suicidal or terrorist ideation in bioscientists
• Investments in general safety: education, facilities, control,
checks, exams
• Limiting and concentrating in one place an important
resource: dangerous nuclear materials, high level scientists
Technology speedup
• Differential technological development: develop safety and control technologies first
• Laws and economic stimulus (Richard Posner,
Surveillance
Step 3
First level of defence
on low-tech level
Plan A1.1
International
Control System
13. Worldwide risk
prevention authority
• Worldwide video-surveillance and control
• “The ability when necessary to mobilise a
strong global coordinated response to
anticipated existential risks” (Bostrom)
• Center for quick response to any emerging risk; x-risk police
• A system of international treaties
• Robots for emergency liquidation of
bio-and nuclear hazards
Step 4
Second level of defence
on high-tech level
Plan A1.1
International
Control System
14. Active shields
• Geoengineering against global warming
• Worldwide missile defence and
anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Bio-shield – worldwide immune system
• Mind-shield – control of dangerous ideation by means of brain implants
• Merger of the state, Internet
and worldwide AI in uniform
monitoring, security and control system
• Isolation of risk sources at great distance
from Earth
Step 4
Second level of defence
on high-tech level
Plan A1.1
International
Control System
16. Singleton
• “A world order in which there is a
single decision-making agency
at the highest level” (Bostrom)
• Worldwide government system based on AI
• Super AI which prevents all possible risks and
provides immortality and happiness to humanity
• Colonization of the solar system, interstellar travel
and Dyson spheres
• Colonization of the Galaxy
• Exploring the Universe
Result:
17. Plan A1.2
Decentralised risk monitoring
Improving human
intelligence and
morality
• Nootropics, brain stimulation, and gene therapy for
higher IQ
• New rationality: Bayes probabilities, interest in long term
future, LessWrong
• High empathy for new geniuses is
needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Lower proportion of destructive beliefs and risky behaviour
• Engineered enlightenment: use brain science to make people more united and less aggressive; open the realm of the spiritual world to everybody
• Prevent the worst forms of capitalism: the desire for short-term monetary reward and for changing the rules to get it. Stop greedy monopolies
Values
transformation
• Raise public desire for life extension and global security
• Reduction of radical religious (ISIS) or nationalistic values
• Popularity of transhumanism
• Change the model of the future
• “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom)
• The value of the indestructibility of civilization at the international level and as a goal of every nation
• “Sustainability should be reconceptualised in dynamic terms, as aiming for a
sustainable trajectory rather than a sustainable state” (Bostrom)
• Movies, novels and other works of art that honestly depict x-risks and motivate their prevention
• Translate the best x-risk articles and books into major languages
• Memorial and awareness days: Earth day, Petrov day, Asteroid day
• Rationality and effective altruism
• Education in schools on x-risks, safety and rationality topics
Cold war and WW3
prevention
• International court or secret institution
• Large project which could unite humanity, like pandemic prevention
• Rogue countries integrated, based on dialogue and appreciating their
values
• Peaceful integration of national states
Dramatic social changes
It could include many interesting but different topics: the demise of capitalism, a hipster revolution, internet connectivity, the global village, the dissolving of national states.
Changing the way politics works so that the policies implemented actually have empirical backing based on what we know about systems.
Decentralized risk monitoring
• Transparent society: everybody could monitor everybody
• Decentralized control (local police, net solutions): local police could handle local crime, and x-risk peers could control their neighbourhood in their professional space; “Anonymous”-style hacker groups; Google search control
• “Anonymous” style hacker groups: large group of “world saviours”
• Net based safety solutions
• Laws and economic stimulus (Richard Posner,
carbon emissions trade): prizes for any risk found and prevented
• World democracy based on internet voting
• Mild monitoring of potential signs of dangerous activity (internet), using deep learning
• High level of horizontal connectivity between people
18. Values
transformation
• Raise public desire for life extension and global security
• Reduction of radical religious (ISIS) or nationalistic values
• Popularity of transhumanism
• Change the model of the future
• “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom)
• The value of the indestructibility of civilization at the international level and as a goal of every nation
• “Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state” (Bostrom)
• Movies, novels and other works of art that honestly depict x-risks and motivate their prevention
• Translate the best x-risk articles and books into major languages
• Rationality and effective altruism
Plan A1.2
Decentralised risk monitoring
Step 1
19. Improving human
intelligence and
morality
• Nootropics, brain stimulation, and gene therapy for
higher IQ
• New rationality: Bayes probabilities, interest in long term
future, LessWrong
• High empathy for new geniuses is
needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Lower proportion of destructive beliefs and risky behaviour
• Engineered enlightenment: use brain science to make people more united and less aggressive; open the realm of the spiritual world to everybody
Plan A1.2
Decentralised risk
monitoring. Step 2
20. Cold war and WW3
prevention
Plan A1.2
Decentralised risk monitoring.
Step 3
21. Decentralized risk monitoring
• Transparent society: everybody could monitor everybody
• Decentralized control (local police, net solutions): local police could handle local crime; hacker groups; Google search control
• World democracy based on internet voting
Plan A1.2
Decentralised risk monitoring.
Step 4
22. Plan A2.
Friendly AI
Study and Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Teaching rationality (LessWrong)
• Slowing other AI projects (recruiting scientists)
• Free FAI education, starter packages in programming
Solid Friendly AI theory
• Human values theory and decision theory
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement
• A clear theory that is practical to implement
Seed AI
Creation of a small AI capable
of recursive self-improvement and based on
Friendly AI theory
Superintelligent AI
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes dominant force on Earth
• AI eliminates suffering, involuntary death,
and existential risks
• AI Nanny – one hypothetical variant of super
AI that only acts to prevent
existential risks (Ben Goertzel)
AI practical studies
• Narrow AI
• Human emulations
• Value loading
• FAI theory promotion to most AI commands; they agree to implement it
and adapt it to their systems
• Tests of FAI on non self-improving models
23. Study and Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Teaching rationality (LessWrong)
• Slowing other AI projects (recruiting scientists)
• FAI free education, starter packages in programming
Plan A2.
Friendly AI. Step 1
24. Solid Friendly AI theory
• Human values theory and decision theory
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement
• A clear theory that is practical to implement
Plan A2.
Friendly AI. Step 2
25. AI practical studies
• Narrow AI
• Human emulations
• Value loading
• Promotion of FAI theory to most AI teams; they agree to implement it
and adapt it to their systems
• Tests of FAI on non self-improving models
Plan A2.
Friendly AI. Step 3
26. Seed AI
Creation of a small AI capable
of recursive self-improvement and based on
Friendly AI theory
Plan A2.
Friendly AI. Step 4
27. Superintelligent AI
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary death,
and existential risks
• AI Nanny – one hypothetical variant of super
AI that only acts to prevent
existential risks (Ben Goertzel)
Plan A2.
Friendly AI. Step 5
28. High-speed
tech development
needed to quickly pass
risk window
• Investment in super-technologies (nanotech, biotech)
• High-speed technical progress helps to outpace the slow process of
resource depletion
Dramatic social changes
It could include many interesting but different topics: demise of capitalism, hipster
revolution, internet connectivity, global village, dissolving of national states.
Changing the way that politics works so that the policies implemented actually have empirical backing based
on what we know about systems.
Improving human
intelligence and morality
• Nootropics, brain stimulation, and gene therapy for higher IQ
• New rationality: Bayes probabilities, interest in long term future,
LessWrong
• High empathy for new geniuses is
needed to prevent them from becoming superterrorists
• Many rational, positive, and cooperative
people are needed to reduce x-risks
• Engineered enlightenment: use brain science to make people more united and less
aggressive; open the realm of spiritual experience to everybody
• Prevent the worst forms of capitalism: the drive for short-term monetary reward and for
changing the rules to get it. Stop greedy monopolies
Timely achievement of immortality
on highest possible level
• Nanotech-based immortal body
• Mind uploading
• Integration with AI
Miniaturization for survival and invincibility
• Colonization of the Earth’s crust by miniaturized nanotech bodies
• Moving into a simulated world inside a small self-sustaining computer
AI based on uploading
of its creator
• Friendly to the value system of its creator
• Its values consistently evolve during its self-improvement
• Limited self-improvement may solve the friendliness
problem
Plan A3.
Rising Resilience
29. Improving
sustainability
of civilization
• Intrinsically safe critical systems
• Growing diversity of human beings
and habitats
• Increasing diversity and increasing
connectedness of our civilization
• Universal methods of prevention
(resistant structures,
strong medicine)
• Building reserves (food stocks,
seeds, minerals, energy, machinery,
knowledge)
• Widely distributed civil defence,
including air and water cleaning
systems, radiation meters, gas masks,
medical kits etc
Plan A3.
Rising Resilience
Step 1
30. Plan A3.
Rising Resilience
Step 2
Useful ideas to limit
catastrophe scale
Limit the impact of a catastrophe by implementing measures that slow
its growth and reduce the area it affects. For instance, improve
the capacity for rapid production of vaccines in response to emerging
threats, or create or grow stockpiles of important medical countermeasures.
Increase the time available for preparation by improving monitoring
and early detection technologies. For example, with pandemics you
could: support general research on the magnitude of biosecurity risks
and opportunities to reduce them, and improve and connect disease
surveillance systems so that novel threats can be detected and responded
to more quickly.
Worldwide x-risk prevention exercises
31. High-speed
tech development
needed to quickly pass
risk window
Plan A3.
Rising Resilience. Step 3
32. Timely achievement of immortality
on highest possible level
Plan A3.
Rising Resilience. Step 4
33.
Plan A4.
Space colonisation
34. Temporary
asylums in space
• Space stations as temporary asylums
(ISS)
• Cheap and safe launch systems
Plan A4.
Space colonisation
Step 1
35. Space colonies on large planets
• Creation of space colonies on the Moon and Mars (Elon Musk)
with 100-1000 people
Plan A4.
Space colonisation
Step 2
36. Colonisation of the Solar system
• Self-sustaining colonies on Mars and large asteroids
• Terraforming of planets and asteroids using
self-replicating robots and building space colonies there
• Millions of independent colonies inside asteroids and comet bodies in the Oort cloud
Plan A4.
Space colonisation
Step 3
37. Interstellar travel
• “Orion” style, nuclear powered “generation ships” with colonists
• Starships which operate on new physical principles with immortal people on board
• Von Neumann self-replicating probes with human embryos
Plan A4.
Space colonisation
Step 4
38. Interstellar distributed
humanity
• Many unconnected human civilizations
• New types of space risks (space wars, planetary and stellar
explosions, AI and nanoreplicators, ET civilizations)
Result:
39. Plan B.
Survive the catastrophe
40. Preparation
• Fundraising and promotion
• Textbook to rebuild civilization
(Dartnell’s book “Knowledge”)
• Hoards with knowledge, seeds and
raw materials (Doomsday vault in
Norway)
• Survivalist communities
Plan B.
Survive the catastrophe
Step 1
41. Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Plan B.
Survive the catastrophe
Step 2
42. Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
Plan B.
Survive the catastrophe
Step 3
43. High-tech bunkers
• Nano-tech based adaptive structures
• Small AI with human simulation inside space rock
Plan B.
Survive the catastrophe
Step 4
44. Rebuilding civilisation after catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Plan B.
Survive the catastrophe
Step 5
45. Reboot of the civilization
• Several reboots may happen
• Eventually there will be either total collapse
or a new supercivilization level
Result:
46.
Plan C.
Leave backups
47. Time capsules
with information
• Underground storage
with information and DNA for future
non-human civilizations
• Eternal disks from Long Now
Foundation (or M-disks)
Plan C.
Leave backups
Step 1
48. Messages to
ET civilizations
• Interstellar radio messages with encoded
human DNA
• Hoards on the Moon and frozen brains
with information about humanity
Plan C.
Leave backups. Step 2
49. Preservation of
earthly life
• Create conditions for the re-emergence of new intelligent life on Earth
• Directed panspermia (Mars, Europa, space dust)
• Preservation of biodiversity and highly developed animals
(apes, habitats)
Plan C.
Leave backups. Step 3
50. Robot-replicators in space
• Mechanical life
• Preservation of information about humanity for billions of years
• Safe narrow AI
Plan C.
Leave backups. Step 4
51. Resurrection by
another civilization
• Creation of a civilization which has a lot of common
values and traits with humans
• Resurrection of concrete people
Result:
52.
Plan D.
Improbable ideas
53. Saved by non-human
intelligence
• Maybe extraterrestrials
are looking out for us and will
save us
• Send radio messages into space
asking for help if a catastrophe is
inevitable
• Maybe we live in a simulation
and simulators will save us
• The Second Coming,
a miracle, or life after death
Plan D.
Improbable ideas
Idea 1
54. Quantum immortality
• If the many-worlds interpretation of QM is true, an observer will survive any sort of
death including any global catastrophe (Moravec, Tegmark)
• It may be possible to establish an almost one-to-one correspondence between observer
survival and survival of a group of people (e.g. if all are in a submarine)
Plan D.
Improbable ideas. Idea 2
55. Strange strategy
to escape
Fermi paradox
Random strategy may help us to escape
some dangers that killed all previous
civilizations in space
Plan D.
Improbable ideas
Idea 3
56. Technological
precognition
• Prediction of the future based
on advanced quantum technology and avoiding
dangerous world-lines
• Search for potential terrorists
using new scanning technologies
• Special AI to predict and prevent new x-risks
Plan D.
Improbable ideas
Idea 4
57. Manipulation of the
extinction probability
using Doomsday argument
• A decision to create more observers if an
unfavourable event X starts to happen, thus lowering its
probability (method UN++ by Bostrom)
• Lowering the birth rate to gain more time for
the civilization
Plan D.
Improbable ideas
Idea 5
58. Control of the simulation
(if we are in it)
• Live an interesting life so our simulation
isn’t switched off
• Don’t let them know that we
know we live in a simulation
• Hack the simulation and
control it
• Negotiate with the simulators
or pray for help
Plan D.
Improbable ideas
Idea 6
59.
Bad plans
60. Prevent x-risk research
because it only increases risk
• Do not advertise the idea of man-made
global catastrophe
• Don’t try to control risks as it would only
give rise to them
• As we can’t measure the probability of global
catastrophe, it may be unreasonable to try to
change the probability
• Do nothing
Bad plans
Idea 1
61. Controlled regression
• Use small catastrophe to prevent large one (Willard
Wells)
• Luddism (Kaczynski):
relinquishment of dangerous science
• Creation of ecological civilization without technology
(“World made by hand”, anarcho-primitivism)
• Limitation of personal and collective intelligence to
prevent dangerous science
Bad plans
Idea 2
62. Bad plans. Idea 3
Depopulation
• Could provide resource preservation and make
control simpler
• Natural causes: pandemics, war, hunger
(Malthus)
• Extreme birth control (Bill Gates)
• Deliberate small catastrophe (bio-weapons)
63. Unfriendly AI may be better
than nothing
• Any super AI will have some
memory about humanity
• It will use simulations of human
civilization to study the probability of
its own existence
• It may share some human
values and distribute them
through the Universe
Bad plans. Idea 4
64. Attracting good
outcome by positive thinking
• Preventing negative thoughts
about the end of the world and about
violence
• Maximum positive attitude
“to attract” positive outcome
• Secret police which use mind
superpowers to stop such thoughts
• Start partying now
Bad plans. Idea 5
65. Dynamic roadmaps
The next stage of the research will be the creation of collectively
editable wiki-style roadmaps. They will cover all existing topics
of transhumanism and future studies. Create an AI system based on
the roadmaps or working on their improvement.
66. You can read all roadmaps at
http://immortality-roadmap.com
http://existrisks.org/