My keynote at the 2018 New Profit Gathering of Leaders conference in Boston on May 17, 2018. I talk about the lessons from technology platforms, how they teach us what is wrong with our economy, and the possibilities of AI for creating better, fairer, more effective decisions about "who gets what and why" in the economy.
We Must Redraw the Map
1. We Must Redraw The Map!
Tim O'Reilly
@timoreilly
oreilly.com
wtfeconomy.com
New Profit Gathering of Leaders
May 26, 2018
2. How is the economy changing?
What are the implications for business?
What does technology now make
possible that was previously impossible?
What work needs doing?
Why aren't we doing it?
wtfeconomy.com
"…47 percent of jobs are 'at risk' of being automated in the next 20 years."
Carl Frey and Michael Osborne, Oxford University
"The Future of Employment: How Susceptible Are Jobs to Computerisation?"
4.
5. Dealing with climate change
Rebuilding our infrastructure
Feeding the world
Ending disease
Resettling refugees
Caring for each other
Educating the next generation
Enjoying the fruits of shared prosperity
11. The algorithms decide "who gets what, and why"
Markets are outcomes. A better-designed marketplace can have better outcomes.
The choices made by the marketplace designer have enormous consequences for the participants and for society.
Are they the right choices?
16. The Equinix NY4 data center,
where trillions of dollars change hands
17. What is the objective function of our financial markets?
"The Social Responsibility of Business Is to Increase Its Profits"
Milton Friedman, 1970
18. We have to let go of this map that is steering us wrong
In 1625, we thought
California was an island
19. The master algorithm asks for growth to go on forever
It should be doing a better job of solving for who gets what and why.
21. Fitness Landscapes
The way in which genes contribute to
the survival of an organism can be
viewed as a landscape of peaks and
valleys.
Through a series of experiments,
organisms evolve towards fitness
peaks, adapted to a particular
environment, or they die out.
Image source: http://evolution.berkeley.edu/evolibrary/article/side_0_0/complexnovelties_02
22. Technology also has a fitness landscape
In my career, I've watched a number of migrations to new peaks
(Diagram: successive fitness peaks labeled Personal Computer, Smartphones, and Big Data and AI, with Apple among the companies migrating between them)
23. Generosity takes us to the next peak
Tim Berners-Lee, 1990: The World Wide Web
Linus Torvalds, 1991: Linux
Big Data and AI
24. The same dynamics play out at the national level
Inclusive economies prosper.
Extractive economies falter.
Why do we incentivize extractive behavior?
27. 1. We must rewrite the rules
King George III
George Washington
28. Another view of "Who Gets What, and Why"
Profit = Revenue - Expenses
Profit = Revenue - (Cost of materials + Cost of labor + Cost of capital)
Return to capital = Revenue - (Cost of materials + Cost of labor)
Shouldn't the return be proportional to the contribution of all of the inputs?
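That arithmetic can be made concrete with a small sketch. This is purely illustrative, not a real accounting model: the numbers are invented, and the proportional rule is one hypothetical alternative to the status quo in which the entire surplus accrues to capital.

```python
def allocate_surplus(revenue, materials, labor, capital):
    """Compare two answers to 'who gets the surplus?': the current
    convention (all of it goes to capital) versus a hypothetical split
    proportional to each input's share of total cost."""
    costs = {"materials": materials, "labor": labor, "capital": capital}
    total_cost = sum(costs.values())
    surplus = revenue - total_cost                     # Profit = Revenue - Expenses
    return_to_capital = revenue - (materials + labor)  # today's convention
    proportional = {name: cost / total_cost * surplus
                    for name, cost in costs.items()}
    return surplus, return_to_capital, proportional

surplus, rtc, shares = allocate_surplus(revenue=100, materials=30,
                                        labor=40, capital=10)
# surplus is 20; under the current convention capital collects its 10 of
# cost plus all 20 of surplus (return_to_capital = 30), while the
# proportional rule would give labor half the surplus (its cost share).
```

The example is deliberately tiny; the structural point is only that the last line of the P&L is a design choice, not a law of nature.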
29. "The opportunity for AI is to help humans model and manage complex interacting systems."
Paul R. Cohen
30. "Computational Sustainability is a new interdisciplinary
research field, with the overarching goal of studying and
providing solutions to computational problems for balancing
environmental, economic, and societal needs for a
sustainable future. Such problems are unique in scale,
impact, complexity, and richness, often involving
combinatorial decisions, in highly dynamic and uncertain
environments, offering challenges but also opportunities for
the advancement of the state-of-the-art of computer and
information science. Work in Computational Sustainability
integrates in a unique way various areas within computer
science and applied mathematics, such as constraint
reasoning, optimization, machine learning, and dynamical
systems."
Carla Gomes
31. "What good governance and the good society look like is now inextricably linked to an understanding of the digital."
Tom Steinberg,
MySociety
2. Leaders must become digitally literate!
35. Government is a platform.
Its policies shape who gets
what and why.
What lessons should we be
taking from the success and
failure of tech platforms?
wtfeconomy.com
36. $470B spent on government safety net programs
$42B in charitable contributions towards the safety net
Technology and government are the two most powerful ways to get to scale. At Code for America, we bring them together.
Code for America
40. This is what technology wants
"Prosperity in human societies is best understood as the accumulation of solutions to human problems. We won't run out of work until we run out of problems."
Nick Hanauer
41. What would it take for us to
Put people to work tackling the world's greatest problems?
Treat humans as assets, not liabilities?
Create an economy based on caring and creativity, while machines focus on repetitive tasks?
Apply on-demand marketplace models to healthcare, augmenting community health workers with telemedicine and AI?
Give everyone access to knowledge on demand, whenever we need it?
Have fresh approaches to public policy based on what is possible now, and by learning what works, rather than picking from set political menus?
42. Dealing with climate change
Rebuilding our infrastructure
Feeding the world
Ending disease
Resettling refugees
Caring for each other
Educating the next generation
Enjoying the fruits of shared prosperity
44. Let the machines do as much of the work as
they can. Let humans get on with the real
work of the 21st century.
Editor's Notes
My book WTF? is a reflection on many of the technological changes you've been hearing about this morning. It talks about what the great technology platforms have to tell us about the future of business and the economy.
How is work changing?
What are the implications for business?
What does technology now make possible that was previously impossible?
What work needs doing?
Why aren't we doing it?
The book starts out with the fearful projections that AI is going to automate more and more human work, leaving us all with nothing to do.
Frey and Osborne's projection that up to 47% of human tasks, including many white collar jobs, could be eliminated by automation within the next 20 years
seems to have been taken as gospel.
There's no work left for humans? Seriously. WTF?
There's so much work to be done!
Dealing with climate change. Rebuilding our infrastructure. Feeding the world. Ending disease. Resettling refugees. Caring for each other. Educating the next generation. Enjoying the fruits of shared prosperity.
My worry is very different. The world is changing and we, and our institutions, must adapt to the new world we are living in.
There's a great story in Ernest Hemingway's The Sun Also Rises, in which the narrator asks a character named Mike how he went bankrupt. "Two ways," he replied. "First gradually, then suddenly." Technological change happens like that too, first gradually, then suddenly.
Gradually, then suddenly, artificial intelligence and algorithmic systems are everywhere, in new kinds of partnerships with humans
Gradually, then suddenly, large segments of the economy are governed not by free markets but by centrally managed platforms
These networks rule our lives more deeply than we think. We are all living and working inside a machine. It isn't just this worker in a Google data center.
Our modern systems are massive hybrid AIs. These AIs are not external to us. We are part of them. We are inside them. They shape what we think and how we act.
When you look at a company like Google, you see that humans are working alongside automation in very new ways. Even in a company as driven by computer technology as Google, there are humans who keep things running. There are other humans who write code and AI models, and manage and train the algorithms of search, advertising, and the Google Brain. There are other humans, all of us, who contribute new knowledge and seek it out, reinforcing neural pathways by what we link to, and what we pass on.
This has implications that we are only now starting to become aware of.
https://www.google.com/about/datacenters/gallery/#/people/14
The big question is whether that machine has human interests at heart.
A simple example of the invisible hand of a free market information economy is the supermarket checkout line. Everyone can see how long the lines are, and how much stuff the other people have in their baskets, and as a result, they choose what they think is the fastest line with some success, and all the lines even out.
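The checkout-line story can be simulated in a few lines. This is a toy model under the stated assumption that every shopper can see how much work is queued at each register; `simulate_checkout` and its parameters are inventions for illustration.

```python
import random

def simulate_checkout(num_lines=4, shoppers=200, seed=7):
    """Toy model of the checkout line: each arriving shopper can see the
    total items queued at every register and joins the lightest line.
    With that shared information, the lines stay nearly even without any
    central coordinator."""
    random.seed(seed)
    queued = [0] * num_lines                 # items waiting at each register
    for _ in range(shoppers):
        items = random.randint(1, 20)        # this shopper's basket size
        lightest = min(range(num_lines), key=lambda i: queued[i])
        queued[lightest] += items
    return queued

lines = simulate_checkout()
# Because each shopper joins the current minimum, no line can ever exceed
# another by more than one basket (20 items): the lines even out.
```

The interesting failure mode, of course, is when participants can't see the queues, which is the information asymmetry the rest of the talk is about.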
Image: Getty Images
I'm not talking about some kind of killer robot. I'm talking about the economic machines that rule our society.
One of the things that platforms teach us, with enormous impact for the design of policy and business, is that markets are outcomes. In our economic policy, we assume a free market of rational actors making decisions with perfect information. Try saying that with a straight face in an age woken up by the problems of Facebook and Cambridge Analytica.
It isn't the free market that decides who gets what and why. It is the folks designing and managing the marketplace at these internet platforms. The choices made by the designers of those algorithms have enormous consequences for the participants and for society.
The platform's algorithms decide "who gets what and why," the fundamental question outlined by Nobel prize-winning economist Al Roth in his book about marketplace design.
(Roth got his Nobel prize in economics for exploring how to design better marketplaces. He worked on kidney transplant marketplaces, and he showed that if you can increase trust, you can create a "thicker marketplace," in which it is easier to match up those who have something and those who need it.
Hal Varian's former protege Jonathan Hall, now chief economist at Uber, pointed me to this book, and said it was really shaping how he approached his job at Uber. That gave me a language for thinking and talking about platforms that I hadn't had before.)
Uber and Lyft teach us a lot about the future world we are entering. The idea that humans are working inside the machine is no longer something that happens just in the digital realm, safely separated from "the real world." Uber drivers and passengers are all part of a vast digital machine. And the algorithms were designed for too long to treat drivers as a disposable commodity. Growth in users is the Silicon Valley gospel, and so the algorithms were designed with low prices to attract more users, and for them to be picked up as quickly as possible. Too many drivers drove down wages, and new drivers had to constantly be recruited with wild incentives, which were easy to game.
Doing a better job of balancing the value allocated to passengers vs. the value allocated to drivers, and to the owners and investors of the platform itself, is central to the success of Uber and Lyft.
Does that sound at all like it might have a lesson for our wider economy?
Uber and Lyft are now seeking a more sustainable path, where drivers are incentivized to stick around, and the algorithm makes a better allocation. Additional factors of impact on society, such as road congestion, are also starting to be taken into account. But there's still a long way to go.
But there's also the problem of unintended consequences. Facebook was designed to help people connect with their friends, as in this example. But in my book, I spend some time on Facebook and fake news as an illustration of how algorithmic systems can go wrong.
Facebook's struggle with fake news is a great example of what AI researchers have warned about as "the runaway objective function." The algorithms do exactly what we ask them to do.
Facebook's engineers are in a bit of the same situation as Mickey Mouse in Walt Disney's retelling of Goethe's story The Sorcerer's Apprentice. Mickey borrows his master's spellbook, and compels the broom to help him fetch water. Unfortunately, he doesn't know how to stop the broom, and before long
He is desperately trying to find a way to stop the power he has unleashed. This is what Mark Zuckerberg and team look like right now. That's a runaway objective function at work.
Facebook told their systems to optimize for engagement: to show people more of what they liked, commented on, and shared, and content that people like them engaged with. Their idea was that this would lead to more human connection. It turned out instead to increase hyperpartisanship and to drive people apart, and now they are trying to stop it.
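A runaway objective function can be sketched in miniature. Nothing below resembles Facebook's actual systems; the sketch simply assumes (for illustration) that measured engagement rises with sensationalism, then lets an optimizer maximize engagement alone and watches where the content mix ends up.

```python
def optimize_engagement(steps=100, lr=0.05):
    """Gradient ascent on a single content-mix knob x in [0, 1]
    (0 = mild, 1 = maximally sensational). Assumed for this sketch:
    engagement(x) = 0.4 + 0.4 * x, so d(engagement)/dx = 0.4 > 0
    everywhere. The objective never mentions sensationalism, yet
    maximizing engagement alone pins the knob at the extreme."""
    x = 0.0
    for _ in range(steps):
        gradient = 0.4                    # engagement always rises with x
        x = min(1.0, x + lr * gradient)   # clip to the feasible range
    return x

x = optimize_engagement()
# x ends at 1.0: the system did exactly what it was asked to do,
# and nothing it was not asked about constrained it.
```

The fix is not a smarter optimizer but a better objective, which is the point of the slides that follow.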
"People like you" turned out to be a very powerful tool, one that got out of control, driving polarization and radicalization. Researcher Renee DiResta found that when she began researching the anti-vaxxer movement, the algorithms at YouTube and Facebook concluded she liked conspiracy theories, and before long, she was down the rabbit hole of "chemtrails," all the way to the Flat Earth. Is it any wonder that these systems became a recruiting tool for Islamic fundamentalists and a weapon for destabilizing American democracy?
But there's one other place that our algorithms have gone wrong, and that's in our financial markets. Those markets, just like Uber, Facebook, and Google, are vast, algorithmic marketplaces. And the algorithm has a runaway objective function.
And that's where we should be worrying about Skynet, that fabled AI gone wrong, hostile to humans.
What is the objective function of our financial markets? When, in 1970, Milton Friedman said that the social responsibility of business is to increase its profits, and when, a few years later, Michael Jensen began to preach the gospel of shareholder value maximization and the need to align executive compensation with rising stock prices, they didn't mean to create the devastation that followed, but it's time to recognize it.
(Milton Friedman penned an op-ed in the New York Times arguing that the social responsibility of business was to increase its profits. Anything else was, in effect, taking money from its shareholders. Then in 1976, William Meckling and Michael Jensen wrote a paper outlining the reasoning behind aligning the interests of management with shareholders, which was eventually accomplished with executive pay via stock options. So-called "shareholder value" thinking was soon taught in business schools, and that's when the great divergence between productivity and wages began.)
One of the key ideas in the book is that our maps of the world are steering us wrong. In 1625, Henry Briggs published a map stolen from the Spanish by the Dutch, which showed California as an island. This map guided exploration for the next hundred years.
Henry Briggs, 1625: "California sometymes supposed to be a part of ye westerne continent, but since by a Spanish Charte taken by ye Hollanders it is found to be a goodly land…" In 1705, a Jesuit priest, Eusebio Kino, led an overland expedition across the top of the Sea of Cortez, and showed that what came to be called Baja California was in fact a peninsula, not an island. My question is why it took an overland expedition, rather than just sailing up the Sea of Cortez!
One of the bad maps that we have in economics and business is that growth goes on forever. In Silicon Valley, there's often talk of "exponential growth," but it's pretty clear that even Facebook is showing only linear growth. But even linear growth runs up against limits. Facebook is at 2 billion users, a third of the world's population. Instead of telling these platforms they must grow at all costs, we should be asking them to do a better job of solving for who gets what and why.
And here is the result of our current bad map. Take a look at the divergence of productivity and real median family income! Despite the continuing growth of productivity, family incomes have stagnated, and as Raj Chetty's research has shown, most children in developed countries can no longer expect to do better economically than their parents. Inequality has skyrocketed.
What if, instead of having a tax code that incentivizes companies to increase their profits at the expense of people, we had one that incentivized them to put people to work on solving the hardest problems?
It's rather disheartening when you hear a Goldman Sachs executive say "there's no money to be made keeping people healthy."
Source http://stateofworkingamerica.org/charts/productivity-and-real-median-family-income-growth-1947-2009/ via https://en.wikipedia.org/wiki/Income_inequality_in_the_United_States
I think the idea from evolutionary biology, about fitness landscapes with peaks and valleys, is actually a better metaphor for how the future unfolds than the graph that goes always up and to the right.
A fitness landscape is a way of visualizing how genes contribute to the survival of an organism and a species. External conditions can be viewed as a landscape of peaks and valleys. Through a series of experiments, organisms evolve towards fitness peaks, adapted to a particular environment, or they die out.
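Hill-climbing on a toy landscape makes the metaphor concrete. The two-peak function below is invented for illustration; the point is that greedy local adaptation stops at whatever peak is nearest, because crossing a valley means getting worse before getting better.

```python
import math

def fitness(x):
    """A toy one-dimensional landscape with two peaks (values are
    illustrative): a lower peak near x = 2 and a higher one near x = 8."""
    return 1.0 * math.exp(-(x - 2) ** 2) + 2.0 * math.exp(-(x - 8) ** 2)

def hill_climb(x, step=0.1, iters=200):
    """Greedy local adaptation: take a small step only if it goes uphill,
    and stop when no neighboring step improves fitness."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            break
        x = best
    return x

incumbent = hill_climb(1.0)   # settles near the lower peak, around x = 2
newcomer = hill_climb(6.0)    # starts across the valley, reaches around x = 8
```

An incumbent adapted to the old peak cannot reach the higher one by local improvement alone, which is why migrations to new peaks so often start from the valley, with new entrants.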
Because of the hypercompetitive nature of Silicon Valley, and the speed with which new technologies are introduced, it is like a petri dish where you can watch evolution in action. It is also a great way to get perspective on decline and fall in the slower evolutionary landscape of nations.
In my career, I've watched a number of migrations to new peaks, and I'd like to share with you some observations about what happened, and why. And then we'll talk about some lessons for digitalization of the overall economy.
When a new wave of technology hits, a new company almost always becomes dominant. The dominant company of one technology wave sometimes manages to survive, but it loses its privileged position as the technology marketplace migrates to a new peak. The path to the top of each new peak requires new competencies, a new fitness function, and the old competency actually holds back the previously dominant company.
One of the things that I've learned is that the surest way to drive entrepreneurs to seek the fitness peak of a new technology and a new business model is for dominant players to take too much of the value for themselves. And just as in biology, it's easier to get to the new peak from the valley. I watched this happen with Microsoft in the 1990s. The company had used its dominance over the operating system to lock out competitors. But the innovators just went elsewhere, where there was an opportunity for open innovation, and invented the future on the way up a new fitness peak. Tim Berners-Lee introduced the World Wide Web in 1990, and Linus Torvalds introduced Linux in 1991. Between the two of them, the paradigm changed. Software was now a commodity. Big data was the new source of competitive advantage, with Google at the latest peak in the fitness landscape.
Net lesson: You lose when you try to capture too much of the value for yourself. And you lose again if you hang on to the old rules of business when faced with the resulting change in the fitness landscape.
This is also the conclusion of Oxford developmental economist Kate Raworth, author of the book Doughnut Economics, who talks about the job of economics not being how to keep growth going up and to the right, but instead about how to keep the world in "the doughnut," the narrow band between human undershoot, in which much of humanity is left out of the bounty that is possible, and economic overshoot, where humanity takes an unsustainable proportion of the world's resources, leading to instability.
This model also applies to companies. You are actually managing a complex adaptive system. Your job is to stay in balance with your ecosystem and the business environment. And sometimes that means you need to adapt to new conditions.
Image: Kate Raworth and Christian Guthier/The Lancet Planetary Health
Business is starting to recognize this as well. A recent study by the BCG Henderson Institute used machine learning to look at financial reports from tens of thousands of companies, and based on the language they used, put them into two buckets, characterized by Aristotle's two branches of economics: oikonomia, the management of the household, and chrematistike, the pursuit of wealth. They discovered that those who pursued oikonomia, thinking of their stakeholders more broadly, actually outperformed those practicing chrematistike.
King George III, Portrait by Joaquin Zoffany, 1771
George Washington, Portrait by Gilbert Stuart
So what do we need to do? First, our world view must change. The world once believed in the divine right of kings, that some people were more equal than others, and naturally inherited wealth and power. After the American Revolution, King George III of England expected that George Washington would be crowned king in America. When Washington instead went back to his farm, King George is reported to have said, "He is the greatest man in the world."
Today, we believe in the divine right of capital. Those who have it may be generous and give back, but they accept their privilege. If we talk about the "triple bottom line," it is a pale thing, a shadow of what it ought to be: a true understanding of "who gets what and why," using our newfound intelligence to make a more robust distribution of the fruits of machine productivity. Or as I put it in my book, echoing Joseph Stiglitz, we must rewrite the rules.
In my book, I made a throwaway reference to the idea that future economic historians might well look back on this period, when we believed in the divine right of capital, the way we now look down on our ancestors who believed in the divine right of kings. A reader pointed me to a remarkable book from 2001 by Marjorie Kelly called The Divine Right of Capital.
She makes the point that our values have become embedded in our profit and loss statements. If you look at the idealized P&L, you realize that profit is what's left over after deducting the costs of inputs such as materials, labor, and capital. Yet somehow, we rewrote the P&L such that all the surplus belongs to only one of the inputs.
Consider Apple today. It is a hugely profitable company. If their employees left, they'd be toast. If their suppliers left, they'd be toast. If their customers left, they'd be toast. If the rule of law failed, they'd be toast. If no one bought their stock, no one would care but stockholders. Yet Carl Icahn bought $6 billion in Apple stock and greenmailed the company into using its cash for $100 billion in stock buybacks. Apple could instead have lowered prices, paid more to the clerks in the Apple stores or to the workers in their Chinese factories (their corporate aristocracy is already well paid), and paid their taxes. They did none of these things, because of this crazy system that accepts the rise in stock price as the goal to be satisfied above all others.
Paul R. Cohen, a former DARPA program manager, now dean of a new school of information sciences at the University of Pittsburgh, put it beautifully at a recent meeting of the National Academies, where we were both speaking about the future of AI. He said, "The opportunity for AI is to help humans model and manage complex interacting systems."
On the positive side, these vast algorithmic tools let us do things that were previously impossible. Google gives searchable access to trillions of documents; it's not quite "access to all the world's information," but it's the closest thing we've seen. Facebook connects billions of people. Uber and Lyft have put millions of people to work providing on-demand transportation.
But you can also see the enormous power for algorithmic systems to do good in the new field that Cornell professor Carla Gomes calls Computational Sustainability. She's working with the Brazilian national grid to build data models that determine which Amazon tributary to dam, solving simultaneously for the need for power generation, the fewest number of people that need to be displaced, and the impact on endangered species. In California, she's helping the water management districts time the release of water into California rice fields to coordinate with the migrations of waterfowl. Both farmers and waterfowl benefit. The possibilities are enormous.
We must use these tools to confront the challenges of the 21st century!
Second, leaders MUST become digitally literate. Tom Steinberg, the founder of UK non-profit MySociety, a pioneer in government innovation, once wrote, "You [can] no longer run a country properly if the elites don't understand technology in the same way they grasp economics or ideology."
And more importantly, "what good governance and the good society look like is now inextricably linked to an understanding of the digital."
If you watched the recent congressional hearings with Mark Zuckerberg, you watched the consequences of that ignorance on display. But it is also on display in each of our own organizations. We are struggling to keep up with an understanding of fast-moving technology. But we must give technologists a seat at the leadership table.
https://www.mysociety.org/2012/02/11/5-years-on-why-understanding-chris-lightfoot-matters-now-more-than-ever/
Some of you may be wondering how you apply these ideas if you're not an Amazon or a Google or a Facebook. I want to give illustrations from my own efforts, both at my company, O'Reilly Media, and at Code for America, the non-profit started and run by my wife, Jennifer Pahlka.
O'Reilly Media is a technology information provider. We're a publisher, a conference producer, and run an online learning platform called Safari. Our mission is "Changing the world by spreading the knowledge of innovators," but our motto is "Create more value than you capture."
We launched Safari in 2001 as an ebook aggregation platform. We realized (network effects, duh) that if ebooks were ever to really take off, we had to bring together the entire industry around them, not just provide a service for ourselves and our own customers. (Remember that this was six years before the Kindle launched in 2007!) So we invited Pearson, the company that was our biggest competitor at the time, to join us in the business, which eventually came to offer tens of thousands of ebooks from hundreds of publishers.
This was originally a joint venture with Pearson, with the other publishers as marketplace participants but not owners. We bought Pearson out in 2014, and it is now a wholly owned subsidiary of OâReilly.
As people's learning needs extended beyond books, we added video learning, including video courses with an interactive coding environment, and even synchronous online training with a live instructor. Whenever we launch new features, we're careful to try to bring our partners along with us, rather than using those new features to take more of the pie for ourselves, as so many other platform companies seem to do. We realized early on that if you want to create a sustainable network marketplace, you have to balance the value allocated to the members of the network, not just to the core.
Here's a really interesting story about how deeply we think about balancing the value to all parts of the ecosystem. Last year, when we launched live online training as a new feature, O'Reilly Media President Laura Baldwin called an emergency meeting of our exec team. Her message: not all of our partners got on board as quickly as we would have liked, so, for example, O'Reilly started out offering 100 courses, while Pearson had only ten. The feature was a HUGE success, and so O'Reilly had taken a far larger share of the provider payments, and Pearson's income had dropped by nearly half in that first month. The emergency was that we had to work harder to bring them on board, NOT to extend our lead and take more of the pie!
We've even extended our network thinking to turning our customers into content providers, with features like case studies, where companies share the lessons learned as they've implemented various technologies.
Every company needs to think of itself as a networked marketplace, and to have its goal as managing the benefit to all of the participants.
The other area where I spend a lot of my time is with government. I believe that government has a lot to learn from the great technology platforms. Like them, its algorithms shape who gets what and why. But unlike the tech platforms, it is stuck in the past century.
Thatâs why a big part of my time is spent with Code for America, the non-profit started by my wife Jennifer Pahlka.
We believe that technology and government are the two most important ways to get to scale. We put them together.
We help to "debug" government programs so that they become more effective. To give you a sense of the leverage we're trying to harness, consider that all charitable spending on the social safety net in America totals about $42 billion annually. The government spends nearly half a trillion.
If we can make the $470B work even 10% better, we can drive impact the size of the entire charitable sector. There is enormous opportunity to improve how government services are administered and improve the outcomes we get for those dollars.
Source:
Gov spending on welfare (safety net) $470B: http://www.usgovernmentspending.com/year_spending_2014USrn_17rs2n_4041#usgs302
Amount of charitable donations going to poverty alleviation (human services) $42B ($115B goes to religious charities, $55B to education):
http://www.charitynavigator.org/index.cfm?bay=content.view&cpid=42#.Vxlw6BMrLBI
470/42 = 11
We ask ourselves what would happen if government programs worked as well as the best consumer internet applications. How might they work differently if we put people at the center?
One of our signature projects started out with a redesign of the online application for SNAP (food stamps) in California. We started on this project in 2013 when the city and county of San Francisco asked us to help them understand why their participation rate in SNAP was so low. What we found was that the online application that they were sending people to through all their outreach efforts contained 200 questions, and took an hour to complete. It didn't work on a mobile phone. Many of the people applying don't own computers, so they have to use ones at the library, where the timeout is 30 minutes, and the application gave them no way to save their work. We replaced it with a mobile app that can be completed in seven minutes, and then began to follow the users via text messaging, to find other problems in the process, and give feedback to our government partners.
We've now expanded the project to all 58 California counties, with funding from the state, and are now working on a pilot for an integrated benefit application (SNAP, WIC, and Medicaid) in five other states, with charitable funding. (Our model is to build pilots with charitable funding, and then get to sustainability with government funding, once we've proved that our intervention works.)
What ties both of these efforts together is the idea of technology as a force for good, which is a theme throughout my book.
As my friend Nick Hanauer put it, "Technology is the solution to human problems. We won't run out of work till we run out of problems." Are we done yet? Are we done yet?
There are graphs that we want to go up and to the right.
I highly recommend to doubters Our World in Data, the site Max Roser runs at Oxford showing progress over the past centuries. Here's one chart showing change in human life expectancy. You can see that for hundreds of years, life expectancy was flat, unless it went down due to wars and plague. Then in the mid-1800s, it suddenly goes up and to the right. And as you add new countries to this interactive visualization, you see that as they too join the industrial revolution, they follow the same path.
What would it take for us to
Put people to work tackling the world's greatest problems?
Treat humans as assets, not liabilities?
Create an economy based on caring and creativity, while machines focus on repetitive tasks?
Apply on-demand marketplace models to healthcare, augmenting community health workers with telemedicine and AI?
Give everyone access to knowledge on demand, whenever we need it?
Have fresh approaches to public policy based on what is possible now, and by learning what works, rather than picking from set political menus?
There's another important point: When you look at that list of unsolved problems that I showed earlier, they are all examples of what
Natasha Iskander of NYU calls "biophilic work," the work of improving and remediating life. This will always be with us.
We are a social species. Doing things for each other is in our nature. As long as there's a reasonably fair distribution of the fruits of productivity, we will make an economy for each other. As Clay Christensen noted in his "Law of Conservation of Attractive Profits," whenever one thing becomes a commodity, something else becomes valuable.
But meanwhile, we have vast problems to solve, the result of the extractive economy of the past.
Why can we only see AI and the other WTF? technologies of the 21st century as engines of disruption and destruction, rather than as engines of creativity and prosperity? Why aren't we talking about universal basic income as our birthright, the result of human ingenuity?
Let the machines do as much of the work as they can. Let humans get on with the real work of the 21st century.
Thank you very much.
This is just an author slide for when it gets uploaded. It won't be part of the talk.