Three reasons why we should welcome the march of the robots

A 2017 New Yorker cover by R. Kikuo Johnson painted a dystopian scene. Robots pace and trundle past a homeless human kneeling at their feet, while one deigns to lower its gaze to flip a few coins into his cup. The image perfectly expressed the pervasive, and misplaced, pessimism about the impacts of automation not just among East Coast sophisticates, but across the U.S. and the developed world. In fact, it is a view that has even infiltrated one of the last pockets of optimism about the future: the wide-eyed utopianism of Silicon Valley.

When even the technorati are starting to agonize over the future of artificial intelligence and the perils of automation, you have to wonder. Elon Musk–often a champion of humanity’s ability to improve its condition through material progress–is becoming fearmonger-in-chief of the artificial intelligence apocalypse: “There certainly will be job disruption. Because what’s going to happen is robots will be able to do everything better than us . . . I mean all of us.”

The most widely held fear, and one that taps into our earliest fears about industrialization, is of mass unemployment as robots take most of the jobs. Other critiques of the proliferation of artificial intelligence and increased automation are more nuanced. Some say that it will drive even greater inequality between the “cognitive elite” and the deskilled masses.

The Guardian reflected a widespread concern over the potential concentration of power by the robot-owning corporations: “If you think inequality is a problem now, imagine a world where the rich can get richer all by themselves.”

These concerns lie behind growing calls for Universal Basic Income, robot taxes, and the break-up of Big Tech giants like Google and Amazon. But the situation isn’t as grim as we might think. Automation need not be stirred into a doom-laden soup along with Trump and climate change. In fact, if we step back from the narrow focus on technology and take a wider historical, economic, and humanist view, the picture is far from bleak. Counterintuitive as it may seem, automation can play a key role in creating more and better jobs, and rising prosperity. There are broadly three reasons to be cheerful about the march of the robots.

Since the Industrial Revolution, the automation of human labor has gone hand-in-hand with productivity gains, economic growth, more jobs, and rising prosperity. It is productivity growth that largely accounts for why most of us are six times better off than our great-grandparents. As Paul Krugman put it, in economics, “Productivity isn’t everything–but in the long run, it’s almost everything.” How can automating work create more jobs?

A classic example of how this process can work is that, during the Industrial Revolution, 98% of the manual labor involved in weaving cloth was mechanized. But, despite the concerns of the Luddites, the number of textile workers in the U.K. exploded. As costs plummeted, demand grew, and so did the size of the industry–and therefore job numbers. The cake got bigger.

The jobs also changed from hand weaving to operating the weaving machines. A more recent example is the impact of electronic discovery software (EDS) on junior lawyers and paralegals, who traditionally spent the bulk of their time sifting through piles of documents. EDS was first applied in the 1990s, and did the job more quickly and more accurately than humans. Yet paralegal and junior lawyer jobs have grown faster than the rest of the workforce since 2000.

How so? As searching became cheaper and quicker, law firms searched more documents, and judges allowed more expansive discovery requests. Economists have a name for the intuitive, but mistaken, idea that there is a fixed amount of work to do in an economy, so that if productivity increases there will be fewer jobs to go around: the lump of labor fallacy. There are, of course, occupations that have fared less well in the face of technology, such as typesetters, once graphic designers adopted desktop-publishing software in the 1990s. But the general pattern is that machines take over mundane tasks, and humans move on to do more sophisticated–and often meaningful–work that machines can’t do yet.

And the net effect in a buoyant economy is job growth. A long view reveals that each round of automation brings similar fears–when the first printed books with illustrations began to appear in the 1470s, wood engravers in the German city of Augsburg protested and stopped the presses. In fact, their skills turned out to be in higher demand than before, as more books needed illustrating.

The general assumption is that if the robot doesn’t replace you, it will deskill you. Yet a study by the Boston University School of Law into the impact of automation on 270 occupations in the U.S. since 1950 found that only one was eliminated: lift operators.

The other jobs were partially automated and in many cases, this automation led to more jobs, often more skilled positions. The impact of ATMs on bank clerks is a case in point. The number of branch employees has grown since cash machines were first installed: ATMs allowed banks to operate branches at lower cost, enabling them to open many more. At the same time, banks morphed into financial-service providers, giving clerks more opportunity for upward job mobility. Machines generally take on the simple tasks, as humans move to more complex–and often more meaningful–work.

In 1979, Fiat ran a television advertisement in the U.K. for the Strada with the tagline, “Handbuilt by robots.” In the 1980s, the march of the robots was seen as inevitable and, as with the assembly line, car production would lead the way. Forty years later, Toyota, the guru of manufacturing innovation, has robots doing less than 8% of the work on the factory floor–a ratio that hasn’t changed in 15 years. When asked why, the president of Toyota Motor Manufacturing, Kentucky, replied that “machines are good for repetitive things, but they can’t improve their own efficiency or the quality of their work. Only people can.”

Even in manufacturing, automation isn’t as easy as many assume. Pessimists tend to overestimate the extent to which humans can be replaced and how fast it will happen. They share a faulty assumption with artificial-intelligence optimists, who look forward to the “singularity,” when computer intelligence will supposedly surpass our own. They see impressive breakthroughs in narrow and bounded machine-learning problems, like beating humans at board games, and extrapolate that the singularity is inevitable and just around the corner.

This assumption runs far ahead of current knowledge. Neuroscientists are only scratching the surface of understanding how our brains perceive, learn, and understand, while human consciousness is still a highly contested topic in both philosophy and psychology. We’re a long way from understanding human intelligence, never mind surpassing it. Gloom merchants tend to imbue technology with superpowers while running down human ingenuity. Surely our perception, curiosity, creativity, critical thinking, judgment, and adaptability will drive the world forward–aided by more automation.

We shape technology and, of course, it shapes us, but it does not define our future. Social and political forces are pivotal. The fatalism around robot-driven inequality suffers from peering at the future through technology blinkers. If robots drive inequality, how is it that Sweden has three times as many robots as the U.K. as a proportion of manufacturing workers–and much lower levels of inequality? Many other factors feed into the U.K.’s relatively high levels of inequality, such as low investment in education and in research and development, an overreliance on cheap labor, and an erosion of union power.

It is no coincidence that inequality in the U.K. soared between 1979 and 1990, during Margaret Thatcher’s assault on the unions. Fretting about robot-induced impoverishment tomorrow obscures the real policy-related causes of wage suppression today. With living standards stagnating across the developed world, boosting productivity growth should be a pressing priority. Far from running scared of it, we should be ramping up our investment in automation. Of course, the road to semi-automated economic renewal will not be pain-free–many jobs will be lost in parts of the economy, while others will be created elsewhere.

But even more will be lost if the economy continues to ossify. This is where the state has a key role to play in devising and implementing an industrial renaissance strategy to navigate the disruption caused by the next wave of automation. This should include investing in R&D in job-creating sectors such as autonomous transportation, virtual and augmented reality, and data security, as well as introducing automation to the backward construction industry as part of a desperately needed expansion in housebuilding. There is, after all, no shortage of problems to solve and work to be done, including in human-intensive sectors that desperately need revitalization, such as healthcare and infrastructure.

An ambitious program to support and retrain workers for the parts of the economy that will grow as a result of automation is also needed. In short, timidity, not technology, is the problem. We have nothing to fear but the fear of robots itself.

An article by Be Kaler Pilgrim for Futureheads

What’s the story of your career so far?

I have a Catholic background, with a big and small ‘c’. I did time as an engineer, a designer, an academic researcher and lecturer, a design manager and social forecaster before settling on product strategy. I find working at the intersection of technology, business and culture really rewarding.

In my previous role as a director at Seymour Powell, one of the UK’s leading product design consultancies, I set up one of the first design research and strategy teams outside of a large organisation. The founders, Richard Seymour and Dick Powell, encouraged me to experiment with a mix of design, user research, product-planning and foresight methods. We grew the team to provide the initial scoping stage for many projects in the studio.

In 2004 I founded Plan as a pure-play product strategy consultancy to help in-house innovation teams bring clarity to the early and ‘fuzzy’ stages of their work. I then found myself in the position of an accidental entrepreneur. I’d never aspired to be one nor had I put enough thought into it before taking the plunge. Leading a business is like a never-ending experiment – the learning is continuous and multi-faceted, from finance to HR.

The first half of my career was dominated by the rise of the mobile and then smartphone, and figuring out how they should fit into the culture. I now focus on the ever more intersecting areas of mobility, tech and cities, which I find absolutely fascinating. The level of change and the magnitude of the challenges in this space suggest that there is plenty to keep me occupied for many years to come.

When you’ve been around as long as I have and worked in different industries, you develop a nose for hype, BS and the confines of ‘echo chamber’ thinking. I like to develop and test new ideas through writing, speaking and chairing conferences.

What advice would you give yourself when you were just starting out?

Spend more time getting to the root of the problem, setting it in a useful context, working out which parts of it to tackle, and hustling to get the right people on the team. Spend less time obsessing about methods, techniques, tools, and processes.

Also, sketch and write more. I was a terrible writer until my late twenties. Two people put me right. Bryan Lawson, the author of How Designers Think, got me over my fear of writing, by drawing a parallel with sketching. As all designers know, getting ideas out of your head and onto paper is a reality check. The same goes for writing: it externalises our thinking, so it can be interrogated and refined. More of today’s design challenges are hard to capture in sketches and visuals, and writing is an extra way of capturing and developing early ideas. The second influence on my writing was my ex-boss James Woudhuysen. When I complained about how much writing he expected me to do when I first joined Seymour Powell, he taught me that writing – or more accurately editing what I’d written – is an exercise in clarifying your thinking. He also taught me to stop writing like I thought I was supposed to write, and instead find my own voice.

What do you love most about what you do?

When I wrote the first business plan for Plan, I described our mission as ‘Do great work, with great people for great clients’. I honestly feel that I spend a lot of time in that place. Plan is still small enough for me to lead some of the projects and get my hands dirty. I’ve never been as proud of my team as I am now. We have also managed to find clients who ask us fascinating questions and who are (largely!) a pleasure to work with.

What’s the most important lesson you’ve learned over the course of your career?

Tactful bravery is critical to success and integrity. Whether it’s questioning a team lead, challenging a client’s assumptions, raising an issue with a boss, making a staffing decision or committing to a strategy – ducking the hard calls is never a good move in the long run. But doing the right thing tactlessly can also blow things up. It’s what you do and the way you do it.

Like many things in life, learning this lesson is one thing – always having the judgement and cojones to apply it is another.

What do you think is going to be the biggest challenge in our industry over the next five years?

Articulating a well-judged, compelling and nuanced case for human strengths in an era of AI-based automation. I’m optimistic about the potential of this technology, as long as it’s applied wisely and work is redesigned to maximise the technology’s benefits and our own human potential. As the philosopher and cognitive scientist Daniel Dennett puts it, ‘The real danger… is not machines that are more intelligent than we are … The real danger is basically clueless machines being ceded authority far beyond their competence.’ The challenge for designers will be to champion human strengths in an age of AI.

A little bit more about Kevin

Kevin is the founder of Plan, the product strategy consultancy. Plan helps mobility and consumer tech companies explore the early stages of product and service development. Their clients include Ford, Toyota, Yamaha, Deutsche Telekom, Carl Zeiss, Microsoft, and Samsung.

Kevin writes, speaks, and chairs conferences on design, innovation, and society, and has been published in The Wall Street Journal, The Telegraph, Fast Company, UnHerd, Icon, Blueprint, and The Design Management Review.

You can connect with Kevin on LinkedIn and Twitter.

If fear of automation is overblown, how will man and machine work in harmony?

The impact of AI on society is typically posed in terms of how it will replace humans, as pundits draw up lists of jobs that are at risk and those that are ‘AI-proof’. While some tasks – and even careers – will be replaced, a more useful way to think about the future is how we will interlace the strengths of machines with those of humans in new ways.

Before he left Google to head up AI at Apple, John Giannandrea made it clear that he had little time for the inflated claims made about his field. Stating his preference for the term ‘machine intelligence’ over artificial intelligence, he told the audience at TechCrunch Disrupt in 2017 that ‘there’s just a huge amount of unwarranted hype around AI right now… [much of which is] borderline irresponsible’. His aim, he said, was not to match or replace humans but to make ‘machines slightly more intelligent — or slightly less dumb’. This approach does not dismiss the potential of computers to radically alter the way we work. It merely points to the more nuanced ways in which they will do so.

The more we learn about AI and human psychology, the more we understand how differently people think and machines calculate. Unlike machines, we typically lean on a variety of mental rules of thumb that yield narratively plausible judgments. The psychologist and Nobel laureate Daniel Kahneman calls the human mind ‘a machine for jumping to conclusions’. Machines using deep-learning algorithms, on the other hand, must be trained on many thousands of photographs to recognize kittens – and even then, they have formed no conceptual understanding of cats. In contrast, even small children can easily learn what a kitten is from just a few examples. To paraphrase Michael Polanyi, the father of the idea of tacit knowledge, ‘We know more than we can tell – and therefore code’. Not only do machines not think like humans, they apply their ‘thinking’ to narrow fields, and cannot associate pictures of cats with stories about cats.

One of the fundamental insights AI researchers have made is that tasks humans find hard, machines often find easy – and vice versa. Cognitive scientist Alison Gopnik summarizes what is known as Moravec’s Paradox: ‘At first, we thought that the quintessential preoccupations of the officially smart few, like playing chess or proving theorems – the corridas of nerd machismo – would prove to be hardest for computers.’ As we have discovered, however, these are the very things that computers find easy, whereas understanding what an object is and handling it – something a child can do – is much harder for a computer. The conundrum is, in Gopnik’s words, this: ‘it turns out to be much easier to simulate the reasoning of a highly trained adult expert than to mimic the ordinary learning of every baby’. When IBM’s Deep Blue beat Garry Kasparov at chess in 1997, it didn’t know it was playing chess, never mind that it had beaten a grandmaster.

AI casts new light on what makes us human, not as distinct from animals, but from machines. This poses the question of what kind of relationship we should seek with smart things. If we can get beyond thinking of them as malevolent and/or super-intelligent, and instead see them as having strengths complementary to our own, new possibilities emerge. What if we could combine the human strengths of inspiration, judgment, sense-making, and empathy with the computer strengths of brawn, repetition, rule-following, data recall, and analysis?

The term Artificial Intelligence was coined by the cognitive scientist and inventor John McCarthy in 1955. McCarthy’s mentor was the psychologist and computer scientist J.C.R. ‘Lick’ Licklider, who had graduated with a triple degree in physics, math, and psychology in 1937. Rather than speculate on computers achieving human-style intelligence, Licklider argued with remarkable prescience that humans and computers would develop a symbiotic relationship, in which the strengths of one would counterbalance the limitations of the other. Lick said: ‘Men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. … the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.’

Developments in both psychology and AI suggest that Licklider’s vision of human-computer symbiosis is a more productive guide to the future than speculations about ‘super-intelligent’ general AI. As Steve Jobs put it, ‘that’s what a computer is to me … it’s the most remarkable tool that we’ve ever come up with; it’s the equivalent of a bicycle for our minds’. Predictions of a robot apocalypse may grab the headlines, but more seasoned voices describe AI as just the latest in many phases of automation, each of which has begun with fear and ended with more jobs, economic growth, and prosperity. It is worth bearing in mind the words of the philosopher and cognitive scientist Daniel Dennett: ‘The real danger … is not machines that are more intelligent than we are. The real danger is basically clueless machines being ceded authority far beyond their competence.’

More enlightened managers are starting to imagine what AI-enabled work might be like, instead of fearing it. The goal is subtly shifting from building machines that think like humans to designing machines that help humans think and perform better. Most work, after all, comprises a mix of tasks: some of which are better suited to us, and some of which could one day be done better by machines. As the capabilities of these machines grow, managers will redesign work to take advantage of the strengths of both their human workers and their automated assistants.

The challenges of designing this hybrid type of work should not be underestimated, as the fatal crash of the Uber test car in Tempe, Arizona, demonstrated. The car was supposedly being supervised by a human backup driver, who was watching TV on a smartphone at the time. Designing heavily automated systems that require only occasional human input is folly. It will take a lot of human ingenuity and experimentation to construct and nurture these new working relationships – but the potential gains in productivity and job satisfaction are vast, as machines take on more mundane tasks.

It’s time to change our perspective. The rise of AI and automation isn’t a conflict. It isn’t a case of ‘man vs. machine’, but of man and machine complementing one another, enabling a more productive collaboration. In an age of automation that tends to overestimate machines and undervalue people, let’s embrace the potential of AI, while championing our many and amazing human strengths.