Is a singularity near dw 121020
Copy of slides presented at London Futurists Meetup on 20 Oct 2012, http://www.meetup.com/London-Futurists/events/81804052/

Copy of slides presented at London Futurists Meetup on 20 Oct 2012, http://www.meetup.com/London-Futurists/events/81804052/

Statistics

Views

Total Views
539
Views on SlideShare
510
Embed Views
29

Actions

Likes
0
Downloads
12
Comments
0

1 Einbettung 29

http://www.scoop.it 29

License: CC Attribution License
Presentation Transcript

  • Is a singularity near? A critical review of material presented at the Singularity Summit 2012 #ss12, San Francisco 13-14 Oct. London Futurists, 20th October 2012. David W. Wood, Principal, Delta Wisdom Ltd, @dw2 1
  • http://hplusmagazine.com/ 2
  • Is a singularity near? A critical review of material presented at the Singularity Summit 2012 #ss12, San Francisco 13-14 Oct. London Futurists, 20th October 2012. David W. Wood, Principal, Delta Wisdom Ltd, @dw2 3
  • 3 theories of the future
    1. The future is essentially just like the present
       – With minor variations
       – This ignores deep trends in technology, resource depletion…
    2. The future is a reasonably easily predictable projection of present-day trends
       – This ignores constraints – and “collisions” (interactions)
       – It also ignores the lag between underlying technology improvements and the application of that technology
       – “Grind is easy to predict, insight is hard to predict”
       – Black Swans (mini-singularities) are hard to predict, but likely
    3. The best way to plan for the medium term future is by repeatedly planning for the near term future
       – This ignores the threat of evolutionary dead-ends
       – Local optima aren’t necessarily global optima
       – Human societies have collapsed in the past, & may do again 4
  • The Singularity: Definition 1 – “When humans transcend biology”
    When humans become at least as much computer/robot as biological
    A catchy phrase. But misses out some of the real punch 5
  • The Singularity: Definition 2 – John von Neumann (1950s)
    • “The ever-accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”
    [Chart: Technology vs Time, rising towards “?” around 2050] 6
  • The Singularity: Definition 2
    • “What is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed” – Ray Kurzweil
    The unexpected punch of exponential growth
    [Chart: Technology vs Time, rising towards “?” around 2050] 7
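The “unexpected punch” of exponential growth can be made concrete with a short sketch. The numbers here are illustrative assumptions only (a doubling period of 2 years, loosely echoing Moore’s-Law-style trends), not figures from the talk:

```python
# Illustrative only: linear growth vs doubling-based exponential growth.
# The 2-year doubling period is an assumption for the sake of the example.

def linear(start, step, years):
    """Grow by a fixed step each year."""
    return start + step * years

def exponential(start, doubling_period, years):
    """Double every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

for years in (10, 20, 40):
    lin = linear(1.0, 1.0, years)       # +1 unit per year
    exp = exponential(1.0, 2.0, years)  # doubles every 2 years
    print(f"{years:>2} yrs: linear = {lin:,.0f}x   exponential = {exp:,.0f}x")
```

Over 40 years, linear growth reaches 41x while the doubling process reaches 2^20, over a million times the starting level; the two curves look similar early on, which is why the later divergence surprises intuition.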
  • The Singularity: Definition 3 – Alan Turing, 1951
    • “My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely...”
    • “…it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers”
    • “There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits”
    • “At some stage therefore we should have to expect the machines to take control” 8
  • The Singularity: Definition 3
    • The advent of super-human general AI
    • AI smarter than humans, for sufficiently many purposes
      – Including the ability to design and build new AI
    • The process is no longer constrained by humans
    • A trigger for recursive improvement
      – A likely trigger for fast (and then even faster) recursive improvement
    • We could in a very short timespan have super-super…-super-human general AI
    • We are unlikely to be able to begin to conceive what will happen next (Vernor Vinge) 9
  • The Singularity: Definition 3
    • “When the first transhuman intelligence is created and launches itself into recursive self-improvement, a fundamental discontinuity is likely to occur, the likes of which I can’t even begin to predict” – Michael Anissimov
    [Chart: Technology vs Time, discontinuity around 2050] 10
  • Some predictions of the Singularity
    • Downside of bad Singularity is awful: Hollywood disaster
    • Singularity would be much worse than Terminator movies… 11
  • The scale of intelligent minds: A parochial view
    [Scale: Village idiot … Einstein]
    This section adapted from http://singularity.org/upload/mindisall-tv07.ppt
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • The scale of intelligent minds
    A parochial view: [Scale: Village idiot … Einstein]
    A more cosmopolitan view: [Scale: Mouse … Chimp … Village idiot … Einstein]
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • [Scale: Mouse … Chimp … Village idiot … Einstein … AI]
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • Vernor Vinge: The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”
    [Scale: Mouse … Chimp … Village idiot … Einstein … AI]
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • [Diagram: Human minds within the space of Minds-in-general]
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • [Diagram: Human minds within Transhuman mindspace, within Minds-in-general]
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • [Diagram: Human minds within Transhuman mindspace, within Posthuman mindspace, within Minds-in-general]
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • [Diagram: Minds-in-general, including Bipping AIs, Freepy AIs, Gloopy AIs; Posthuman mindspace; Transhuman mindspace; Human minds]
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • Friendly AI
    Eliezer Yudkowsky (2007), Singularity Institute for AI
  • The Singularity: Promise and Peril – Luke Muehlhauser – Executive Director, Singularity Institute
    • Which are the most important events in history?
      – Technology is the most important re-shaper of the world (judging by social development)
      – The most important technology that will ever be invented: superhuman AI
    • Upside: We don’t die because physics requires us to die, we die because we haven’t yet figured out how not to die
    • Problem: Almost all the mind designs we could pick, for superhuman AI, even if we were really careful, would steer us somewhere we didn’t want to go
    • Risk of over-maximising something that seems desirable – hedonic pleasure, a single experience repeated indefinitely…
    • Request: more funding, to enable more research! 22
  • Spread the wings of our uncertainty – Stuart Armstrong – James Martin Research Fellow, FHI, Oxford
    • How good are we at predicting the advent of AGI?
      – Review of SI database of 257 predictions of AGI, 1950-2012
      – “I believe that in about fifty years’ time it will be possible, to programme computers… to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning” – Turing
    • 1956 Dartmouth summer conference on AI
      – Many of the participants predicted that a machine as intelligent as a human being would exist in no more than a generation
    • Is there evidence in favour of the so-called Maes-Garreau law?
      – People tend to predict that AGI will happen just before they die
      – 15-25 years away – not too soon, not too far (?) 23
  • Spread the wings of our uncertainty
    • http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ 24
  • 25
  • Spread the wings of our uncertainty – Stuart Armstrong – James Martin Research Fellow, FHI, Oxford
    • What techniques can AI predictors use?
      – Can’t use deductive logic, scientific method, or past examples
      – They can only use expert opinion!
    • When are experts credible?
      – Problem decomposable? Experts agree on stimuli? Feedback available?
    • Grind is easy to predict, insight is hard to predict
      – Moore’s Law is grind, AGI requires insight!
    • Best timeline predictions: whole brain emulations (uploads)
      – Very decomposed, justified grind, clear assumptions
      – Anders Sandberg ran Monte Carlo simulations – uncertainty spreads over the entire next 100 years
    • His own 80% estimate for the advent of AGI is from 5 to 100 years 26
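The decomposed Monte Carlo approach mentioned above can be sketched in a few lines. Every distribution and parameter below is an invented placeholder for illustration, not Anders Sandberg’s actual model:

```python
import random

# Toy Monte Carlo over a whole-brain-emulation-style timeline, decomposed
# into independent sub-problems that must ALL be solved. The Gaussian
# parameters are placeholder assumptions, not figures from the talk.

random.seed(0)  # reproducible illustration

def sample_arrival_year():
    hardware = random.gauss(15, 5)    # years until sufficient hardware
    scanning = random.gauss(25, 10)   # years until brain scanning suffices
    software = random.gauss(20, 10)   # years until modelling insight arrives
    # Arrival waits for the slowest sub-problem, counted from 2012
    return 2012 + max(hardware, scanning, software)

samples = sorted(sample_arrival_year() for _ in range(100_000))
lo, mid, hi = (samples[int(q * len(samples))] for q in (0.1, 0.5, 0.9))
print(f"10%: {lo:.0f}   median: {mid:.0f}   90%: {hi:.0f}")
```

Even with modest per-component uncertainty, the resulting distribution is wide, which matches the observation that the uncertainty spreads over the better part of a century.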
  • Channeling the Flood of Data – Peter Norvig – Research Director at Google
    • Started by rating his own predictions from SS07
      – 5 out of 6 predictions stood the test of time
    • Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations (2009)
      – Hired team of academics into Google
      – 16,000 CPUs on 1,000 servers, deep network with one billion parameters
      – Trained on 10 million YouTube videos, initially just one frame per video
      – The system conceptualised human faces … and cats
    • Same system applied in speech recognition
      – Jump in performance was akin to 20 years at the previous rate (according to one of the participants)
    • System increasingly being applied in language translation too
    • Software and algorithmic advances will be more important than hardware for the next 5-10 years at least 27
  • 4 routes to greater-than-human AI – Vernor Vinge – Mathematician, Computer Scientist & Sci Fi writer
    1. Progress with the “classical AI project”
       – Watson-like advances will cover more and more features of what it’s like to be human
       – Demos of robotics are progressively more impressive
       – Sufficient hardware may already be around for AGI, in this decade
    2. Intelligence Amplification – “ride the curve of improved cognition”
       – Computers provide a neo-neo-cortex for humans
       – Humans become the greater-than-human intelligence: hybrid intelligence
    3. Digital Gaia – networked ensemble of the world’s embedded microprocessors – BIGGEST RISK?
       – Sensors and effectors. “Reality itself would wake up”
       – Fundamental change in the nature of reality. It would have all the stability that we currently associate with financial markets(!)
    4. Group minds – Crowd-sourced intelligence that trumps all the human intelligence systems of the past – IDEALLY THE SOLUTION TO OTHER RISKS
       – The Internet plus connected databases plus billions of humans 28
  • Is a singularity near? A critical review of material presented at the Singularity Summit 2012 #ss12, San Francisco 13-14 Oct. London Futurists, 20th October 2012. David W. Wood, Principal, Delta Wisdom Ltd, @dw2
    [Chart: Technology vs Time, around 2050] 29
  • Our Viral Future – Carl Zimmer – Popular science writer and blogger
    • Ramses V, Pharaoh of Egypt 1149-1145 BC
      – “All his power and wealth couldn’t protect him from microscopic particles” (smallpox -> yellow pustules)
      – Smallpox has killed billions (>5M each year in Europe…)
    • A second virus has now been eradicated: Rinderpest, a cow disease
      – Polio now close to eradication (except in e.g. Pakistan)
      – Deaths (and new infection rates) from AIDS have now crested
    • Dangers from new “cross-over” viruses, spread quickly globally
      – HIV itself is an example of a virus crossover, from chimp to humans
      – SARS was a 2002 cross-over, from a Chinese horseshoe bat
      – SARS solved by quarantine, not by vaccination
      – Schmallenberg virus – horrendous birth defects in lambs – no solution
    • “Viruses are going to be surprising us in the future, and at least some of these surprises will be positive ones” – e.g. gene therapy advances
    RAPID COLLABORATIVE ANALYSIS
    NEEDED: BETTER MONITORING, FASTER WAYS TO MAKE VACCINES 30
  • Over-complexity and over-interdependencies
    Possible causes of relatively imminent human-caused societal collapse
    • “Extreme events” (“X-Events”) – John L. Casti (2012)
      – Worldwide plague
      – “Digital darkness”: Long-term failure of the Internet
      – Failure of the electricity grid
      – Electro Magnetic Pulse detonation – fries all electronics
      – Collapse of food supplies, water supply, world oil supplies
      – “The great unwinding”: Collapse of world financial markets
      – Runaway climate change (positive feedback cycles)
      – Geo-engineering catastrophe
      – Exotic particles created by CERN (compare first H2 bomb)
      – Ecosystem collapse precipitated by GMOs or nanoparticles
      – Runaway superhuman intelligence
    Insufficiently rational forward thinking 31
  • Rationality and the Future – Julia Galef – President, Center for Applied Rationality (CFAR)
    Community is key
    • “Why a better world tomorrow requires better cognition today”
    • Examples of irrationality
      – Story of Eric Blair shooting an elephant… “solely to avoid looking a fool”
      – People are more likely to believe others who have symmetric faces
      – “One man’s death – that is a catastrophe. 100k dead – that is a statistic”
    • “If you don’t want to be the captive of your genes, you had better be rational” – Keith Stanovich
      – Our genes don’t care about strangers on the other side of the world; WE do
    • Three principles observed by CFAR
      – People don’t like being told they’re not rational
      – Teach rationality skills people care about
    “Harry Potter and the methods of rationality” 32
  • Rational thinking & decision making – Daniel Kahneman – 2002 Nobel Prize in Economics
    • “Incorrect intuitions in the field of statistics” – With Amos Tversky
    • Examples of cognitive mistakes
      – Example of training Israeli flight school instructors on +ve reinforcement
      – We feel like we have free will even when we clearly don’t
      – We (even professional forecasters) are misled by stories that seem coherent
      – We need to be wary of the stories we tell ourselves about the Singularity
      – Most major risks that unfolded were not the ones anticipated (N. Taleb)
    • Individual experts are subject to the same biases as the rest of us
      – Key role of community and feedback in achieving rationality
    • “We tend to over-estimate the impact of technology in the short run, but under-estimate it in the long run” – Roy Amara 33
  • A race? Negative Singularity vs Positive Singularity 34
  • A History of Violence – Steven Pinker – Professor of Psychology at Harvard University
    • Six major declines of violence – Based on his book “The better angels of our nature”
    1. The pacification process (rise and expansion of states)
    2. The civilising process (criminal justice nationalised)
    3. The humanitarian revolution (abolition of judicial torture, use of the death penalty for nonlethal crimes, witch-hunts, religious persecution, duelling, blood sports, debtors’ prisons, slavery…)
    4. The long peace (unprecedented decline in interstate war since 1946)
    5. The new peace (post Cold War: democracy, trade, global community)
    6. The rights revolution (decline in lynching, non-lethal hate crimes against blacks, rape crimes, domestic violence, states allowing corporal punishment, approval of spanking, animal hunting)
    • Has human nature changed? Unlikely
      – Human nature is complex: inclinations towards violence, and inclinations that oppose it 35
  • How to Create a Mind – Ray Kurzweil – Inventor, Author, Co-Founder of Singularity Summit
    • “The secret of human thought revealed”
    • Confident in his predictions from ~30 years ago about the continuing growth of computer power
    • IBM Watson got its knowledge from reading 200 million pages in Wikipedia and other encyclopaedias – in natural language
      – Google self-driving cars may reach the marketplace within 5 years
      – “We’re right before the storm with 3D printing” (Economist front cover: violin)
    • Reverse engineering the brain: the ultimate source of the templates of intelligence
      – Human intelligence is pattern based. Computer intelligence is more logic based
    • Three bridges to indefinite life (the third being the Singularity & brain uploads)
      – By 2030, life expectancy will be increasing by at least one year every year
      – He has written three books on health (people ought to read them…) 36
  • Too Much Magic? – James Howard Kunstler
    • “Wishful thinking, technology, and the fate of the nation”
    • Infamous for his “The Long Emergency” (2005)
      – “Surviving the End of Oil, Climate Change, and Other Converging Catastrophes of the Twenty-first Century”
    • ‘TLE’ forecast the economic collapse of 2008-9
      – Our civilisation is more intertwined than you think
    • Views Kurzweil with some interest
      – Claims that Kurzweil (and many others) confuses “energy” with “technology”
      – Problems of energy capture, distribution, and storage are deeply hard
      – Especially when most sources of economic and political power are pursuing shorter-term interests 37
  • Personal conclusions
    1. Is a singularity near? Definitely Maybe. It might be
       – There’s nothing inevitable about the outcome
       – Both positive and negative singularities are feasible
       – There are negative outcomes apart from just “AGI goes wrong”
       – Negative outcomes more likely (without strong wise leadership)
    2. The best of all times (new golden age) is potentially ahead
       – But to reach it, we have to navigate potentially the worst of times
    3. We can’t rely on free-markets to get us there
       – Nor on traditional processes of politics
       – These pursue “local sub-optima” (and do so dysfunctionally)
    4. Wise leadership involves rationality and community
       – We need to challenge and refine each other’s ideas
       – Utilise technology to improve our rationality (wikis, Google)
       – But don’t neglect “traditional” routes to wisdom (study, meditate)
    5. The singularity community deserves our serious support
       – Learn from best principles of marketing & communications 38
  • Winning the race? Negative Singularity vs Positive Singularity
    + Moral bio-enhancement? Turn H into H+ 39
  • The case for moral bio-enhancement – Prof Julian Savulescu, Oxford University – Prof Ingmar Persson, University of Gothenburg
    • Forthcoming (June 2013) book “Unfit for the Future: The Urgent Need for Moral Enhancement”
    • http://philosophynow.org/issues/91/Moral_Enhancement
    • We are facing two major threats:
      – climate change – along with the attendant problems caused by increasingly scarce natural resources
      – war, using immensely powerful weapons
    • Our Natural Moral Psychology – insufficient to meet modern challenges
    • We can directly affect the biological or physiological bases of human motivation (complementing, not replacing, traditional moral education)
      – through drugs; or through genetic selection or engineering
      – or by using external devices that affect the brain or the learning process 40
  • Hacking wetware: smart drugs and beyond – Sat 3 Nov, Andrew Vladimirov
    “What are the most promising methods to enhance human mental and intellectual abilities significantly beyond the so-called physiological norm?”
    “Which specific brain mechanisms should be targeted, and how?”
    “Which aspects of wetware hacking are likely to grow in prominence in the not-too-distant future?”
    Humanity+ UK – Supported by Facebook group: UKH+ 41
  • People in the ‘Golden Age of Technology’ – Sat 24 Nov, Nick Price
    Integrating human values into futures thinking
    “This session will look at models for thinking about the future that integrate human values (and potential changes in human values) alongside changes in more tangible, measurable elements of the world - elements such as the environment, science, technology, economics and society”
    Humanity+ UK – Supported by Facebook group: UKH+ 42
  • The Fifth Conference on Artificial General Intelligence
    St Anne’s College, Oxford UK – December 8-11, 2012
    http://agi-conference.org/
    Chair: Ben Goertzel
    Keynotes: David Hanson, Angelo Cangelosi, Margaret Boden, Nick Bostrom
    “AGI-12 is an official part of the Alan Turing Year celebrations” 43
  • Is a singularity near? A critical review of material presented at the Singularity Summit 2012 #ss12, San Francisco 13-14 Oct. London Futurists, 20th October 2012. David W. Wood, Principal, Delta Wisdom Ltd, @dw2 44