A Roadmap: Plan of
action to prevent human
extinction risks
Alexey Turchin
www.existrisks.org
X-Risks
Exponential growth of technologies
Unlimited possibilities
Including destruction of humanity
X-Risks
Main X-Risks
Large scale nuclear war
Runaway global warming
Nanotech - grey goo
AI
Synthetic biology
Nuclear doomsday weapons
and large scale nuclear war
Contest: more than 50 ideas were added
David Pearce and Satoshi Nakamoto
contributed
Crowdsourcing
International
Control
System
Plan A is complex:
Friendly
AI
Rising
Resilience
Space
Colonization
Decentralized
monitoring of
risks
We should do it all simultaneously:
international control, AI, robustness, and space
Plan A1.1 Plan A1.2 Plan A2 Plan A3 Plan A4
Survive
the
catastrophe
Plans B, C and D have
smaller chances of success
Leave
backups
Improbable
ideas
Bad
Plans
Plan B Plan C Plan D
Research
Social
Support
International
Cooperation
Risk
Control
Worldwide risk
prevention
authority
Plan A1.1
International Control System
Active
Shields
Plan A1.1
International Control
System
Planning
Step 1
Research
• Long term future model
• Comprehensive list of risks, with probability
assessment and prevention roadmap
• Wiki and internet forum
• Integration of approaches, funding,
education, translation
• Additional study areas: biases, law
Plan A1.1
International Control
System
Step 2
Preparation
Social
support
• Peer-reviewed journal, conferences,
intergovernmental panel,
international institute
• Popularization (articles, books,
forums, media)
• Public support, street action
• Political support: lobbying, parties
Plan A1.1
International Control
System
Step 3
International
Cooperation
First level of
defence on
low-tech level
• A group of leading nations takes responsibility for x-risk
prevention
• International law about x-risks
• International agencies dedicated to certain risks
• A smaller catastrophe could help unite humanity
Plan A1.1
International Control
System
Step 3
Risk
control
First level of
defence on
low-tech level
• International ban on dangerous technologies or
voluntary relinquishment
• Freezing potentially dangerous projects for 30 years
• Concentrate all bio, nano and AI research in several
controlled centers
• Differential technological development: develop defensive
and safety technologies first
Plan A1.1
International Control
System
Step 4
Worldwide risk
prevention
authority
Second level of
defence on
high-tech level
• Center for quick response to any emerging risk; x-risks police
• Worldwide video-surveillance and control based
on a system of international treaties
• Narrow AI based expert system on x-risks, Oracle AI
• Control over dissemination of knowledge of mass destruction
Plan A1.1
International Control
System
Step 4
Active
shields
Second level of
defence on
high-tech level
• Geoengineering anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Bio-shield – worldwide immune system
• Mind-shield - control of dangerous
ideation by means of brain implants
• Worldwide monitoring, security and control
Risks of plan A1.1
Fatal mistake in world
control system
Global catastrophe
Active
Shields
Worldwide risk
prevention authority
Result
Singleton
«A world order in which there is a single
decision-making agency at the highest level» (Bostrom)
Worldwide government system based on AI
Super AI which prevents all possible risks and provides
immortality and happiness to humanity
Colonization of the solar system, interstellar travel and
Dyson spheres
Colonization of the Galaxy
Exploring the Universe
Improving
human
intelligence
and morality
Plan A1.2
Decentralised risk monitoring
Values
transformation
Cold war
and WW3
prevention
Decentralized
risks
monitoring
Plan A1.2
Decentralised risk
monitoring
Step 1
Values
Transformation
• The value of the indestructibility of civilization
• Reduction of radical religious (ISIS) or nationalistic
values
• Popularity of transhumanism
• Movies, novels and other works of art that honestly
depict x-risks and motivate their prevention
• Memorial and awareness days: Earth day, Petrov
day, Asteroid day
Plan A1.2
Decentralised risk
monitoring
Step 2
Improving
human
intelligence and
morality
• Higher IQ, New rationality, Fighting cognitive biases
• High empathy for new geniuses, lower proportion
of destructive beliefs
• Engineered enlightenment: use brain science
• Prevent worst forms of capitalism
• Promote best moral qualities
Plan A1.2
Decentralised risk
monitoring
Step 3
Cold war
and WW3
prevention
Dramatic
social
changes
• International conflict management authority like an
international court
• Large project which could unite humanity
• Antiwar and antinuclear movement
• Cooperative decision theory in international politics
• Prevent brinkmanship
• Prevent nuclear proliferation
Plan A1.2
Decentralised risk
monitoring
Step 4
Decentralized
risks
monitoring
• Transparent society: groups of vigilantes, “Anonymous”-
style hacker groups
• Decentralized control: local police, mutual control,
whistle-blowers
• Net-based safety: ring of x-risk prevention
organizations
• Economic stimulus: prizes for any risk found and
prevented
Plan A2
Friendly AI
Solid Friendly
AI theory
AI practical
studies
Seed
AI
Superintelligent
AI
Study
and Promotion
Plan A2
Friendly AI
Step 1
Study
and
Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Slowing other AI projects (recruiting scientists)
• FAI free education, starter packages in programming
Plan A2
Friendly AI
Plan A2
Friendly AI
Step 2
Solid
Friendly
AI theory
• Human values theory and decision theory
• Full list of possible ways to create FAI,
and sublist of best ideas
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement
• A clear theory that is practical to implement
Plan A2
Friendly AI
Step 3
AI
practical
studies
• Narrow AI
• Human emulations
• Value loading
• FAI theory promotion to most AI teams; they
agree to implement it and adapt it to their systems
• Tests of FAI theory on non-self-improving models
Plan A2
Friendly AI
Step 4
Seed
AI
Creation of a small AI capable
of recursive self-improvement and based
on Friendly AI theory
Plan A2
Friendly
AI
Plan A2
Friendly AI
Step 5
Superintelligent
AI
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary
death, and existential risks
• AI Nanny – one hypothetical variant of
super AI that only acts to prevent
existential risks (Ben Goertzel)
Singleton
Unfriendly AI
Plan A2
Friendly AI
Plan A3
Rising Resilience
Improving
sustainability
of civilization
Improving
human
intelligence
and morality
High-speed
tech
development
Timely
achievement
of immortality
AI based on
uploading
of its creator
Plan A3
Rising Resilience
Step 1
Improving
sustainability
of civilization
• Intrinsically safe critical systems
• Growing diversity
• Universal methods of catastrophe
prevention
• Building reserves (food stocks)
• Widely distributed civil defence
Plan A3
Rising Resilience
Step 2
Useful ideas
to limit
catastrophe
scale
• Limit the impact of catastrophe: quarantine,
rapid production of vaccines, growing stockpiles
• Increase time available for preparation:
support general research on risks,
connect disease surveillance systems
• Worldwide x-risk prevention exercises
• The ability to quickly adapt to new risks
Plan A3
Rising Resilience
Step 3
High-speed
tech dev.
needed to quickly
pass the risk window
• Investment in super-technologies (nanotech, biotech)
• High speed technical progress helps to overcome slow
process of resource depletion
• Invest more in defensive technologies than in offensive
Plan A3
Rising Resilience
Plan A3
Rising Resilience
Step 4
Timely
achievement
of immortality
Miniaturization
for survival
and
invincibility
• Nanotech-based immortal body
• A body capable of living in space
• Mind uploading
• Integration with AI
• Earth crust colonization by miniaturized nanotech bodies
• Moving into a simulated world inside a small
self-sustained computer
Plan A3
Rising Resilience
Plan A4
Space colonisation
Temporary
asylums in
space
Space
colonies
on large
planets
Colonisation
of the Solar
system
Interstellar
travel
Plan A4
Space colonisation
Plan A4
Space colonisation
Step 1
Temporary
asylums in
space
• Space stations as temporary asylums (ISS)
• Cheap and safe launch systems
Plan A4
Space colonisation
Step 2
Space
colonies on
large planets
Creation of space colonies on the Moon and
Mars (Elon Musk) with 100-1000 people
Plan A4
Space colonisation
Step 3
Colonisation
of the Solar
system
• Self-sustaining colonies on Mars and large
asteroids
• Terraforming of planets and asteroids using
self-replicating robots and building space
colonies there
• Millions of independent colonies inside
asteroids and comet bodies in the Oort
cloud
Plan A4
Space colonisation
Step 4
Interstellar
travel
• “Orion”-style, nuclear-powered “generation
ships” with colonists
• Starships operating on new physical
principles with immortal people on board
• Von Neumann self-replicating probes with
human embryos
Result
Interstellar distributed humanity
Many unconnected human civilizations
New types of space risks (space wars, planetary and stellar
explosions, AI and nanoreplicators, ET civilizations)
Plan A4
Space colonisation
Plan B
Survive the catastrophe
Preparation Building Readiness
High-tech
bunkers
Rebuilding
civilisation
after
catastrophe
Plan B
Survive the catastrophe
Plan B
Survive the catastrophe
Step 1
Preparation
• Fundraising and promotion
• Textbook to rebuild civilization
(Dartnell’s book «The Knowledge»)
• Hoards with knowledge, seeds and raw
materials (Doomsday vault in Norway)
• Survivalist communities
Plan B
Survive the catastrophe
Step 2
Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural
refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Plan B
Survive the catastrophe
Step 3
Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
Plan B
Survive the
catastrophe
Plan B
Survive the catastrophe
Step 4
Miniaturization
for survival
and invincibility
• Earth crust colonization by miniaturized
nanotech bodies
• Moving into a simulated world inside a
small self-sustained computer
• Adaptive bunkers based on nanotech
Plan B
Survive the catastrophe
Step 5
Rebuilding
civilisation after
catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Result
Reboot of civilization
Several reboots may happen
Eventually there will be either total collapse or a new
supercivilization level
Plan C
Leave backups
Time
capsules
with
information
Messages
to ET
civilizations
Preservation
of earthly life
Robot-replicators
in space
Plan C
Leave backups
Step 1
Time
capsules with
information
• Underground storage with information
and DNA for future non-human
civilizations
• Eternal disks from the Long Now
Foundation (or M-disks)
Plan C
Leave backups
Plan C
Leave backups
Step 2
Messages to ET
civilizations
• Interstellar radio messages with encoded
human DNA
• Hoards on the Moon, frozen brains
• Voyager-style spacecraft with information
about humanity
Plan C
Leave backups
Plan C
Leave backups
Step 3
Preservation of
earthly life
• Create conditions for the re-emergence of
new intelligent life on Earth
• Directed panspermia (Mars, Europa, space
and dust)
• Preservation of biodiversity and highly
developed animals (apes, habitats)
Plan C
Leave backups
Plan C
Leave backups
Step 4
Robot-replicators
in space
• Mechanical life
• Preservation of information about
humanity for billions of years
• Safe narrow AI
Result
Resurrection by another civilization
Resurrection of specific people
Creation of a civilization which shares many values
and traits with humans
Plan D
Improbable ideas
Saved by
non-human
intelligence
Quantum
immortality
Strange
strategy to
escape
Fermi paradox
Technological
precognition
Manipulation of
the extinction
probability
using
Doomsday
argument
Control of the
simulation
(if we are in it)
Plan D
Improbable ideas
Idea 1
Saved by
non-human
intelligence
• Maybe extraterrestrials are looking out for us
and will save us
• Send radio messages into space asking for help
if a catastrophe is inevitable
• Maybe we live in a simulation and simulators
will save us
• The Second Coming, a miracle, or life after
death
Plan D
Improbable ideas
Idea 2
Quantum
immortality
• If the many-worlds interpretation of QM is true,
an observer will survive any death including any
global catastrophe (Moravec, Tegmark)
• It may be possible to create an almost one-to-one
correspondence between observer survival and the
survival of a group of people (e.g. if all are in a
submarine); see the sketch below
• Other human civilizations must exist in the
multiverse
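A minimal formalization of the coupling idea above (an illustration added here, not from the original roadmap): if the shelter is designed so that the observer survives exactly when the whole group survives, the two events coincide, and whatever subjective credence quantum immortality assigns to observer survival transfers to the group.
% S_obs = "the observer survives", S_grp = "the whole group survives".
% By construction of the shared shelter, S_obs = S_grp, hence:
\[
P(S_{\mathrm{grp}} \mid S_{\mathrm{obs}})
  = \frac{P(S_{\mathrm{grp}} \cap S_{\mathrm{obs}})}{P(S_{\mathrm{obs}})}
  = \frac{P(S_{\mathrm{obs}})}{P(S_{\mathrm{obs}})} = 1 .
\]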
Plan D
Improbable ideas
Plan D
Improbable ideas
Idea 3
Strange strategy
to escape Fermi
paradox
A random strategy may help us escape some
dangers that killed all previous civilizations in
space
Plan D
Improbable
ideas
Plan D
Improbable ideas
Idea 4
Technological
precognition
• Prediction of the future based on advanced
quantum technology and avoiding dangerous
world-lines
• Search for potential terrorists using new
scanning technologies
• Special AI to predict and prevent new x-risks
Plan D
Improbable ideas
Idea 5
Manipulation
of the extinction
probability
using Doomsday
argument
• A decision to create more observers if an
unfavourable event X starts to happen, thus
lowering its probability (Bostrom’s UN++ method);
see the sketch below
• Lowering the birth density to gain more time for
the civilization
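For orientation, a worked sketch of the Doomsday-argument update that UN++ relies on (an illustration added here, assuming the Self-Sampling Assumption; not part of the original roadmap):
% Self-Sampling Assumption: your birth rank n is a uniform draw from the
% N observers who will ever exist, so P(n | N) = 1/N for 1 <= n <= N.
% For any prior P(N), Bayes' rule then gives:
\[
P(N \mid n) \;\propto\; \frac{P(N)}{N}, \qquad N \ge n,
\]
% i.e. hypotheses with many future observers are down-weighted by 1/N.
% UN++ inverts this lever: committing to create vastly many observers if
% event X begins makes "X occurs" behave like a large-N hypothesis,
% which the update then disfavours.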
Plan D
Improbable ideas
Idea 6
Control of the
simulation
(if we are in it)
• Live an interesting life so our simulation isn’t
switched off
• Don’t let them know that we know we live in a
simulation
• Hack the simulation and control it
• Negotiate with the simulators or pray for help
Bad plans
Prevent x-risk
research
because it only
increases risk
Controlled
regression
Depopulation
Unfriendly AI
may be better
than nothing
Attracting good
outcome by
positive
thinking
Bad plans
Idea 1
Prevent x-risk
research
because it only
increases risk
• Do not advertise the idea of man-made global
catastrophe
• Don’t try to control risks, as that would only give
rise to them
• As we can’t measure the probability of global
catastrophe, it may be unreasonable to try to
change the probability
• Do nothing
Bad plans
Idea 2
Controlled
regression
• Use a small catastrophe to prevent a large one
(Willard Wells)
• Luddism (Kaczynski): relinquishment of dangerous
science
• Creation of an ecological civilization without
technology (“World Made by Hand”, anarcho-
primitivism)
• Limitation of personal and collective intelligence to
prevent dangerous science
Bad plans
Bad plans
Idea 3
Depopulation
• Could provide resource preservation and make
control simpler
• Natural causes: pandemics, war, hunger (Malthus)
• Extreme birth control
• Deliberate small catastrophe (bio-weapons)
Bad plans
Bad plans
Idea 4
Unfriendly AI
may be better
than nothing
• Any super AI will have some memory about
humanity
• It will use simulations of human civilization to
study the probability of its own existence
• It may share some human values and distribute
them through the Universe
Bad plans
Bad plans
Idea 5
Attracting good
outcome by
positive thinking
• Preventing negative thoughts about the end of the world
and about violence
• Maximum positive attitude «to attract» a positive outcome
• Positive thinking directed at terrorists and
superpowers to stop them
• Start partying now
The next stage of the research will be the creation
of collectively editable wiki-style roadmaps
They will cover all existing topics of
transhumanism and future studies
Create an AI system based on the roadmaps
or working on their improvement
Dynamic roadmaps
You can read all
roadmaps on:
www.immortality-roadmap.com
www.existrisks.org