A live demonstration of artificial intelligence and facial recognition at CES 2019 in Las Vegas. Facial-recognition software is increasingly being used to track individuals without their permission. Credit: David McNew/AFP/Getty

China wants to be the world’s leader in artificial intelligence (AI) by 2030. The United States has a strategic plan to retain the top spot, and, by some measures, already leads in influential papers, hardware and AI talent. Other wealthy nations are also jockeying for a place in the world AI league.

A kind of AI arms race is under way, and governments and corporations are pouring eye-watering sums into research and development. The prize, and it’s a big one, is that AI is forecast to add around US$15 trillion to the world economy by 2030 — more than four times the 2017 gross domestic product of Germany. That’s $15 trillion in new companies, jobs, products, ways of working and forms of leisure, and it explains why countries are competing so vigorously for a slice of the pie.

For all the upsides, AI carries risks, from the use of facial-recognition technologies to track and identify individuals without their consent, to the manipulation of elections. Yet despite vigorous academic and public discussion, governments have been slow to prioritize the ethics of AI. The United States and China are too preoccupied with the top prize, and show little appetite to work with other countries and develop codes of practice.

This leadership vacuum, however, has created opportunities for others. The national research agencies of France, Germany and Japan have teamed up on a call for AI research proposals that incorporate an ethical dimension. The United Kingdom has created a new centre for data ethics and innovation. Officials from Canada and France, meanwhile, have been working to establish an International Panel on Artificial Intelligence (IPAI), to be launched at the G7 summit of world leaders in Biarritz, France, from 24 to 26 August.

The panel’s broad ambition is to create an expert network that will advise governments on AI issues such as data privacy, public trust and human rights. Its members will be drawn from the research community, governments, industry and civil-society organizations.

This is a welcome step, but the panel’s architecture would benefit from more discussion. The IPAI’s inspiration seems to be the Intergovernmental Panel on Climate Change. But there are important differences. First, the United Nations is not involved — hence ‘international’ in the title, and not ‘intergovernmental’. This could be a concession to those, including the US administration, who are sceptical of multilateralism. Second, industry representatives will be more prominent. This is important, because companies have access to vast amounts of data, and are the ones driving the development of AI technologies.

However, for the panel to be credible — especially when it comes to public trust in AI — its secretariat and sponsoring governments will need to ensure that it follows the evidence, and that its advice is free from interference. To achieve this, panel members will need to be protected from direct or indirect lobbying by companies, pressure groups and governments — especially by those who regard ethics as a brake on innovation. That also means that panel members will need to be chosen for their expertise, not for which organization they represent.

The first statement on AI from the leaders of the 20 biggest economies came in June — the G20 AI Principles — and the United States and China were among those to sign it. This is remarkable given the current US–China trade war, but, at the same time, the joint statement is little more than a token gesture committing nations to a “human-centered” approach to AI.

To be credible, the IPAI has to be different. It needs the support of more countries, but it must also commit to openness and transparency. Scientific advice must be published in full. Meetings should be open to observers and the media. Reassuringly, the panel’s secretariat is described in documents as “independent”. That’s an important signal.

The IPAI’s architects and panel members will encounter situations in which powerful interests will try to influence what they say. Guiding and, ultimately, regulating a disruptive and innovative technology will need bold leadership. They must steel themselves to succeed.