Britain can cement its position as a world leader in artificial intelligence by putting ethics at the heart of the sector’s development, a House of Lords report has concluded.
The report said that the UK already boasted several strong AI companies, led by Google DeepMind, world class research institutions and a vibrant start-up scene.
But it said the sector would reach its full potential only if it mitigated the potential risks associated with the use of AI, such as algorithmic bias and the unintelligibility of “black box” systems.
“An ethical approach ensures the public trusts this technology and sees the benefits of using it,” the report said.
The select committee that drew up the report proposed five principles to underpin a cross-sector AI code, to be adopted nationally and internationally. The principles would enshrine the intelligibility and fairness of AI and forswear autonomous systems with the power to hurt, destroy or deceive humans.
Lord Clement-Jones, who chaired the committee, said that the UK could not compete with the US and China in terms of investment.
“Where we can compete is in the way that we co-ordinate our research and achieve agreement internationally on an ethical framework,” he said. “There are great opportunities but we will not be able to take them unless we de-risk AI.”
He acknowledged that Britain’s influence might be weakened by its decision to leave the EU. But he highlighted Britain’s historic role in establishing international ethical norms, such as helping to draw up the European Convention on Human Rights in the 1950s, and its strong legal culture.
Britain faces ferocious international competition in attracting the best AI researchers. Seven of the world’s 10 most valuable companies are US and Chinese tech companies, including Apple, Amazon, Tencent and Alibaba, which have been pouring money into AI. According to Goldman Sachs research, the US invested $18.2bn in AI between 2012 and 2016, compared with $2.6bn in China and $850m in the UK.
Other countries, including Canada and France, have also trumpeted their credentials as global centres for AI. Last month, President Emmanuel Macron of France hosted a grand international conference on AI in Paris, promising to invest $1.85bn over the next five years to support the sector.
The UK report came after 24 EU countries and Norway signed a declaration to form a “European approach” to AI and ahead of a strategy paper from the European Commission expected at the end of this month that will outline the legal issues that the technology is likely to create and address fears over robots replacing workers.
Calum Chace, author of several books on AI, said the UK undoubtedly had some strengths in the field but added that it would be extremely hard to set the ethical agenda for the rest of the world, especially after Brexit. “The idea that the UK can dictate the regulatory framework for AI is preposterous,” he said. “The titans of the industry are all in China and the US.”
The report emphasised the importance of mass data for the AI industry and highlighted the healthcare records of the NHS as a “unique source of value for the nation”. But it expressed concern that the current piecemeal approach to data sharing adopted by many NHS Trusts was inadequate.
In 2017 the Information Commissioner’s Office found that the Royal Free Hospital had failed to comply with data rules when it provided the personal data of 1.6m patients to Google DeepMind. One witness quoted in the report described the episode as a “fiasco”.
The report made 74 specific recommendations on how to develop the AI sector, to which the government is formally obliged to respond within two months. But Lord Clement-Jones said the government, which had put AI at the centre of its industrial strategy, was already moving in the right direction in many areas.