Last year, entrepreneur Sebastian Thrun set out to augment his sales force with artificial intelligence. Thrun is the founder and president of Udacity, an education company that provides online courses and employs an armada of salespeople who answer questions from potential students through online chats. Thrun, who also runs a computer-science lab at Stanford University in California, worked with one of his students to collect the transcripts of these chats, noting which resulted in students signing up for a course. The pair fed the chats into a machine-learning system, which was able to glean the most effective responses to a variety of common questions.

Next, they put this digital sales assistant to work alongside human colleagues. When a query came in, the program would suggest an appropriate response, which a salesperson could tailor if necessary. It was an instantaneously reactive sales script with reams of data supporting every part of the pitch. And it worked; the team was able to handle twice as many prospects at once and convert a higher percentage of them into sales. The system, Thrun says, essentially packaged the skills of the company's best salespeople and bequeathed them to the entire team — a process that he views as potentially revolutionary. “Just as much as the steam engine and the car have amplified our muscle power, this could amplify our brainpower and turn us into superhumans intellectually,” he says.
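
Thrun has not published the system's internals, so the sketch below is only a plausible reconstruction of the core idea: index the questions from chats that ended in a sign-up, and surface the reply that accompanied the most similar past question. The transcript data, and the choice of TF-IDF retrieval via scikit-learn, are assumptions for illustration, not Udacity's actual method.

```python
# A minimal retrieval-based "suggested reply" sketch, loosely inspired by the
# Udacity assistant described above. NOT Thrun's actual model; the transcript
# data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (question, reply) pairs drawn only from chats that ended
# in a course sign-up.
successful_chats = [
    ("How long does the nanodegree take?",
     "Most students finish in three to six months at ten hours a week."),
    ("Do I need to know Python already?",
     "No; the intro module teaches the Python you'll need."),
    ("Is there a job guarantee?",
     "We offer career services and a money-back option on some programs."),
]

questions = [q for q, _ in successful_chats]
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def suggest_reply(new_question: str) -> str:
    """Return the reply that worked for the most similar past question."""
    sims = cosine_similarity(vectorizer.transform([new_question]),
                             question_vectors)
    return successful_chats[sims.argmax()][1]

print(suggest_reply("How many months will the course take me?"))
```

A production system would layer ranking, personalization and human review on top, much as the Udacity assistant did by letting each salesperson tailor the suggested response before sending it.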

The past decade has seen remarkable advances in digital technologies, including artificial intelligence (AI), robotics, cloud computing, data analytics and mobile communications. Over the coming decades, these technologies will transform nearly every industry — from agriculture, medicine and manufacturing to sales, finance and transportation — and reshape the nature of work. “Millions of jobs will be eliminated, millions of new jobs will be created and needed, and far more jobs will be transformed,” says Erik Brynjolfsson, who directs the Initiative on the Digital Economy at the Massachusetts Institute of Technology in Cambridge.

But making firm predictions is difficult. “The technology is rushing ahead, which in a way is a good thing, but we have a huge gap in understanding its implications,” Brynjolfsson says. “There's a huge need, a huge opportunity, to study the changes.” Researchers are beginning to do just that, and the emerging evidence resists simple storylines. Advances in digital technologies are likely to change work in complex and nuanced ways, creating both opportunities and risks for workers (see 'More research needed').

Here are three pressing questions about the future of work in a digital world and how researchers are beginning to answer them.

Will machine learning displace skilled workers?

In previous waves of automation, technological advances have allowed machines to take over tasks that were simple, repetitive and routine. Machine learning opens up the possibility of automating more complex, non-routine cognitive tasks. “For most of the last 40 or 50 years, it was impossible to automate a task before we understood it extremely well,” Brynjolfsson says. “That's not true anymore. Now machines can learn on their own.”

Machine-learning systems can translate speech, label images, pick stocks, detect fraud and diagnose disease — rivalling human performance in some new and surprising domains. “A machine can actually look at many, many, many more data samples than a human can handle,” says Thrun. Earlier this year, he led a team that demonstrated that some 129,000 images of skin lesions could be used to train a machine to diagnose skin cancer with a level of accuracy that matches that of qualified dermatologists [1].
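
That study worked by transfer learning: the team fine-tuned Google's Inception-v3 network, pre-trained on everyday images, on the lesion photographs. A minimal PyTorch sketch of that recipe follows; the folder layout, batch size and learning rate are placeholders, not the published settings.

```python
# Minimal transfer-learning sketch (not the published pipeline): fine-tune a
# pre-trained Inception-v3 on a hypothetical folder of labelled lesion images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),   # Inception-v3 expects 299x299 inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: lesions/train/<diagnosis_label>/<image>.jpg
train_data = datasets.ImageFolder("lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.inception_v3(weights="IMAGENET1K_V1")
num_classes = len(train_data.classes)
model.fc = nn.Linear(model.fc.in_features, num_classes)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    logits, aux_logits = model(images)   # training mode returns an auxiliary head
    loss = loss_fn(logits, labels) + 0.4 * loss_fn(aux_logits, labels)
    loss.backward()
    optimizer.step()
```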

These advances have raised concerns that such systems could replace human workers in fields that once seemed too complex to be automated. Early estimates seemed dire. In 2013, researchers at the Oxford Martin Programme on Technology and Employment at the University of Oxford, UK, reviewed the advances and lingering challenges in machine learning and mobile robotics to estimate how susceptible 702 different occupations were to automation [2]. Their startling conclusion was that 47% of jobs in the United States were at high risk of computerization, with jobs in transportation, logistics, production and administrative support particularly vulnerable. That spelt trouble for workers such as taxi drivers, legal secretaries and file clerks.

Since then, however, other researchers have argued that the 47% figure is much too high, given the variety of tasks that workers in many occupations tend to perform. “Once you go deeper, once you look into the task structure of what people really do at work, then you find that the estimates get much lower,” says Ulrich Zierahn, a senior researcher at the Centre for European Economic Research in Mannheim, Germany.

For instance, the Oxford study reported that clerks in bookkeeping, accounting and auditing face an automation risk of 98%. But when Zierahn and his colleagues analysed survey data on what people in those professions actually do, the team found that 76% of them had jobs that required group work or face-to-face interaction. For now at least, such tasks are not easily automated [3]. When the authors extended their approach to other professions, they found less-alarming figures for the number of at-risk jobs in the 21 countries surveyed. In the United States, the share of workers at high risk of automation was just 9%, and the figure ranged from a low of 6% in South Korea and Estonia to a high of 12% in Germany and Austria (see 'Delaying the robot uprising').

Credit: Sources: OECD/Ref. [3] (http://go.nature.com/2KK4D4Y)

Brynjolfsson is now working with Tom Mitchell, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, to drill deeper into the impact of machine learning. They have developed a rubric outlining the characteristics that make certain tasks especially amenable to this approach. For instance, machine-learning systems are adept at tasks that involve translating one set of inputs — say, images of skin lesions — into another set of outputs, such as cancer diagnoses. They're also most likely to be used for tasks in which the large digital data sets required for training the system are readily available. Brynjolfsson and Mitchell are now going through several large occupational databases to determine how well a variety of workplace tasks match up with these and other criteria.
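
The rubric itself is more detailed than anything that fits here, but its logic can be caricatured in a few lines: rate each task against criteria such as a clear input-to-output mapping and the availability of large training sets, then aggregate. Every criterion, rating and number below is invented for illustration.

```python
# Toy illustration (not Brynjolfsson and Mitchell's actual rubric) of scoring
# a task's "suitability for machine learning". Criteria and ratings invented.
CRITERIA = [
    "maps well-defined inputs to well-defined outputs",
    "large digital training datasets exist or can be collected",
    "clear feedback signal for judging success",
    "no long chains of reasoning or common sense required",
]

def sml_score(task_ratings):
    """Average the 0-1 ratings over the rubric criteria."""
    return sum(task_ratings[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical ratings for the skin-lesion diagnosis task discussed above.
diagnosis = {CRITERIA[0]: 0.9, CRITERIA[1]: 0.8,
             CRITERIA[2]: 0.7, CRITERIA[3]: 0.6}
print(f"Skin-lesion diagnosis: {sml_score(diagnosis):.2f}")
```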

Even with these kinds of analysis in hand, determining the consequences for the labour market is complex. Just because a task can be automated doesn't mean that it will be; new technologies often require costly and time-consuming organizational changes. Legal, ethical and societal barriers can also delay or derail their deployment. “AI is not yet an off-the-shelf product,” says Federico Cabitza, who studies health-care informatics at the University of Milano-Bicocca in Italy. Implementing medical machine-learning systems, for instance, requires both technological readiness and willingness to devote the thousands of person-hours necessary to make these systems operational, he says — not to mention buy-in from caregivers and patients.

Research suggests that the workforce is flexible in adapting to new technologies. In the second half of the twentieth century, increasing automation prompted shifts within occupations as employees began performing more complex and non-routine tasks. In some future cases, these shifts could be positive; if automated systems start making routine medical diagnoses, it could free doctors to spend more time interacting with patients and working on complex cases. “The fact that computers are becoming good at medical diagnosis doesn't mean that doctors will disappear as a job category,” Mitchell says. “Maybe it means we'll have better doctors.”

Indeed, many people might find themselves working alongside AI systems, as the Udacity salespeople did, rather than being replaced by them. Self-driving cars, for instance, are not yet able to navigate all situations on their own, so car manufacturer Nissan is developing a human-powered solution. If one of its autonomous cars encounters a situation it doesn't understand, such as roadworks or a traffic accident, it will contact a remote command centre where a human 'mobility manager' can take control until the car has passed the trouble spot. “Machines think in a very different way, fundamentally, than humans do, and each has its strengths,” says Pietro Michelucci, executive director of the Human Computation Institute in Ithaca, New York. “So there's a real natural marriage between machines and humans.”

Will the gig economy increase worker exploitation?

Flexibility, variety and autonomy: these are the promises of the burgeoning gig economy, in which workers use online platforms to find small, short-term jobs. This sort of on-demand, digitally mediated gig work can take a variety of forms, from driving for the taxi service Uber to completing microtasks — including taking surveys, translating a few sentences of text or labelling an image — on a massive crowd-working platform such as Amazon Mechanical Turk.

These digital platforms allow workers to complete tasks from anywhere, meaning they could remove some geographical barriers to getting good jobs. “Someone in Nairobi is no longer constrained by the local labour market,” says digital geographer Mark Graham of the University of Oxford.

Graham and his colleagues have spent several years studying the digital, on-demand economy in southeast Asia and sub-Saharan Africa. They have conducted face-to-face interviews with more than 150 gig workers in these regions, surveyed more than 500 people and analysed hundreds of thousands of transactions on online labour platforms.

Their preliminary results show that these jobs do pay off for some gig workers; 68% of the survey respondents said that the work makes up an important part of their household income. And digital platforms provided jobs to a variety of people — including women who were primary caregivers and migrants without work permits — who said that their employment opportunities were otherwise limited. “There are some people who really thrive in this system,” Graham says. “But it's not like that for everyone.”

There is a pronounced oversupply of labour in the gig economy, leading some workers to drop their rates below what they consider fair. Many also work long hours at high speeds and to tight deadlines. “They tend to have a very precarious existence, so they're worried about saying no to jobs that they do get,” Graham says. “We talked to quite a few people who have done things like stay up for 48 hours straight, just working solidly in order to get their contracts done on time.”

Considerable geographical inequities remain. In a 2014 study [4], Graham and several colleagues analysed more than 60,000 transactions on one major platform in March 2013. Most jobs, they found, were listed by employers in high-income countries and completed by workers in low- or middle-income countries (see 'The gigs are up').

Credit: Source: iLabour (http://go.nature.com/2GZE5TZ)

But those who live close to where the jobs are still seem to have an advantage. They won a disproportionate share of jobs and earned significantly more — US$24.13 per hour, on average — than foreign workers, who earned $11.66 per hour for comparable work. And some low- and middle-income nations attracted many more jobs than others; India and the Philippines were the top two recipients in Graham's analysis.

Practical concerns could explain some of these disparities. Language and time-zone differences might make some employers reluctant to hire foreign workers, and the history of outsourcing labour to India and the Philippines may have helped make workers there more attractive to employers. But discrimination, both conscious and unconscious, could play a part, too; Graham's team found task listings explicitly stating that people from certain countries need not apply. “Even though these technologies have been able to connect different parts of the world, they have not been able to bridge these kinds of differences as much as we hoped,” says Mohammad Amir Anwar, a researcher who works with Graham.

Another large ethnographic study of gig workers is beginning to reveal more about how this work gets done. It also provides some clues about what workers need to succeed. Between 2013 and 2015, two senior researchers at Microsoft Research — anthropologist Mary Gray in Cambridge, Massachusetts, and computational social scientist Siddharth Suri in New York City — surveyed roughly 2,000 gig workers in the United States and India and conducted longer interviews with nearly 200 of them.

One of the first things they discovered was that, although gig workers are often portrayed as independent, autonomous labourers, many of them were in fact communicating and collaborating with each other [5]. Workers helped each other to set up accounts and profiles, shared information about good employers and newly posted jobs, and provided technical and social support. Workers are making a deliberate effort to add human connections back into the system, Suri says, and they're doing it on their own time. “So they clearly must value it.”

In a more quantitative follow-up study [6], in which they mapped the social connections among more than 10,000 Amazon Mechanical Turk workers, Gray, Suri and their colleagues found that this kind of collaboration can have real pay-offs. Workers who had connections to at least one other person on the platform had higher approval rates, were more likely to gain elite 'master' status, and found out about a new task more quickly than unconnected workers. For people to be productive, says Gray, “it turns out that they really need to collaborate. They need each other.”
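
The published analysis is far richer, but its core comparison can be sketched with a toy graph: store each worker's approval rate as a node attribute, link workers who communicate, and contrast connected nodes with isolated ones. The snippet below assumes networkx as the library, and every worker, rate and edge in it is made up.

```python
# Toy version of the connected-vs-isolated comparison (all data invented).
import networkx as nx

G = nx.Graph()
# Approval rates stored as node attributes.
workers = {"a": 0.99, "b": 0.97, "c": 0.90, "d": 0.85, "e": 0.95}
G.add_nodes_from((w, {"approval": r}) for w, r in workers.items())
G.add_edges_from([("a", "b"), ("b", "e")])   # who communicates with whom

connected = [G.nodes[n]["approval"] for n in G if G.degree(n) > 0]
isolated = [G.nodes[n]["approval"] for n in G if G.degree(n) == 0]

print(f"connected mean approval: {sum(connected) / len(connected):.3f}")
print(f"isolated mean approval:  {sum(isolated) / len(isolated):.3f}")
```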

Can the digital skills gap be closed?

For years, experts have been sounding the alarm about a looming shortage of digital skills. They have warned that there are too few trained workers to fill high-tech jobs, and that a lack of basic digital literacy could prevent workers in certain geographical regions or demographic groups from thriving in the digital economy. In response, various innovative programmes for boosting digital literacy and skills have sprung up worldwide. Research is now starting to provide some clues about what does and doesn't work — and about where skills training might fall short.

There have been some documented successes. More than a decade ago, the US Defense Advanced Research Projects Agency began developing a personalized, interactive and adaptive 'digital tutor' system to train new recruits to the US Navy for jobs as information-systems technology (IT) technicians. Students would work with the tutor one-to-one, completing lessons on different topics and solving related problems. The system prioritized conceptual learning and reflection, regularly prompting students to review what they'd learnt. When the tutoring system judged that a student had mastered the material, it would move on to the next subject.
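
The review describes the tutor's behaviour rather than its code, but the mastery-based progression it outlines (pose problems on a topic, judge mastery, then advance) can be caricatured as follows; the sliding window, threshold and simulated student are all invented placeholders.

```python
# Toy mastery-based progression loop in the spirit of the digital tutor.
# The real DARPA system was far more sophisticated; this rule is invented.
import random

TOPICS = ["networking basics", "operating systems", "troubleshooting"]
WINDOW = 10               # judge mastery on the last 10 problems
MASTERY_THRESHOLD = 0.8   # fraction answered correctly to advance

def student_attempt(topic):
    """Stand-in for a real student's answer; replace with actual interaction."""
    return random.random() < 0.7

for topic in TOPICS:
    recent = []
    while True:
        recent.append(student_attempt(topic))
        recent = recent[-WINDOW:]
        if len(recent) == WINDOW and sum(recent) / WINDOW >= MASTERY_THRESHOLD:
            print(f"Mastered {topic}; moving on.")
            break
```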

In a 2014 review [7] of the programme, researchers at the Institute for Defense Analyses in Alexandria, Virginia, found that 12 recruits who completed the 16-week course outperformed graduates of conventional, classroom-based US Navy IT training that lasted more than twice as long. The 12 even did better than a group of senior naval IT technicians — who each had an average of nearly ten years' experience — on almost every measure. “If we can do that, why not do more of it?” says Dexter Fletcher, who co-authored the review. “Why not begin to apply this seriously to workforce training?”

In a follow-up study [8], Fletcher found that a slightly modified version of the digital tutor yielded similar results when it was used to train 100 military veterans for civilian jobs in IT. Within six months of completing the programme, 97% of the veterans who wanted IT jobs had landed them, earning an average annual salary roughly equal to that of someone with 3–5 years of experience in the field.

Numerous other strategies have been promoted to improve digital skills and employment, including massive open online courses (MOOCs) — university-level classes that are delivered over the Internet — and coding bootcamps, which are intensive, short-term training courses that teach the basics of computer programming.

In a 2016 analysis [9] of 1,400 MOOC users in Colombia, the Philippines and South Africa, researchers determined that 80% of students were from low- or middle-income backgrounds and that 41% had only basic computer skills. More than half of the students (56%) were female, and computer science was the most popular MOOC topic. “Women are actually engaging in MOOCs in areas where they are underrepresented,” says Maria Garrido, a co-author of the report at the University of Washington's Information School (see 'Back in the classroom').

Credit: Source: Ref. [9] (http://go.nature.com/2YFAPWC)

But the quality of these programmes can vary enormously, and few have been rigorously evaluated. Coding bootcamps can be expensive, require a significant time investment and are located primarily in technology corridors and urban settings. And achievement gaps remain; in a 2015 study [10] of more than 67,000 MOOC students, two Stanford researchers found that female students and students of both genders from Africa, Asia and Latin America were less likely to reach certain course milestones — such as watching more than 50% of the lectures — and earned lower grades than male students and MOOC students from North America, Europe and Oceania.

Even those who complete digital-skills courses can still face a variety of barriers to employment. When researchers interviewed students in a Kenyan IT programme at Strathmore University in Nairobi in 2004, some of the students said that they were worried about graduating into a local economy that didn't appreciate their expertise or have jobs in which they could put it to use [11]. “And this was especially true for the women,” says Lynette Yarger, an information scientist at Pennsylvania State University in University Park, who was involved in the research. As one student put it: “Because I am a woman, employers may not think that they should give me a job working in IT, so I may never fully get to use all that I have learned to do, work that I want to do.”

One thing the research is already making clear is that even well-designed training programmes might not be sufficient to ensure success in the world of digital work. “The fact that you have better skills and know how to use a computer doesn't necessarily mean that you automatically can get a good job,” Garrido says. “Digital skills are an important piece of the puzzle, but they're not enough.”