
Regulating Artificial Intelligence (AI) – The OECD's new Recommendation and what it means for UK AI-driven businesses

Author: Simon Stokes

Today the Organisation for Economic Co-operation and Development (OECD) formally adopted its Recommendation on AI – the first intergovernmental standard in this area.

The OECD represents most major industrialised nations, including the USA, the UK, Canada, France, and Germany, although China is not a member. OECD Recommendations are highly persuasive – for example, the OECD has previously taken the lead in promoting data protection at an international level.

The fact that the USA supports the Recommendation gives it global significance in influencing the development of AI use and approaches to regulation. In fact, the Recommendation falls short of advocating regulation of this area – instead it outlines a set of broad, non-binding principles intended to ensure that, as AI technology develops, AI will benefit humanity rather than harm it.

The five principles for the responsible stewardship of trustworthy AI are: (1) inclusive growth, sustainable development and well-being; (2) human-centred values and fairness; (3) transparency and explainability; (4) robustness, security and safety; and (5) accountability.

The OECD also notes that trust is key in this area. Whilst certain existing national and international legal, regulatory and policy frameworks already have relevance to AI – including those relating to human rights, consumer and personal data protection, intellectual property rights, responsible business conduct, and competition – the appropriateness of some of these frameworks may need to be assessed and new approaches developed.

The Recommendation also sets out recommendations for national policies and international co-operation on AI, with particular regard to SMEs – these will be very important in shaping government policy in this area.

Is AI regulation needed at all?

Whether AI should be regulated globally is a much-debated topic. Some argue that AI affects too many sectors – from autonomous cars to recruitment, health, financial services and criminal justice – for one-size-fits-all rules to be appropriate.

Also, questions of accountability and liability are already addressed by existing laws – indeed, the EU's GDPR already contains provisions dealing with the use of personal data in automated decision-making and profiling, for example. The OECD's principles are therefore one way of influencing policy-makers: they aim to ensure that, as AI develops, countries have policies – and, where felt appropriate, regulation – to address the ethical issues surrounding AI, without recommending any one approach to regulation. The fact that the US has felt able to support the Recommendation gives it global credibility.

EU and UK Developments

In addition to the OECD, the EU and the UK are already active in this area. Public awareness of the issues surrounding AI is also growing. There have been well-publicised cases of AI recruiting tools discriminating against women and predictive policing software being biased against black people, and in the US some robo-financial advisers have faced regulatory action, to name just a few examples. A recent UN (UNESCO) report also highlights gender bias in AI.

In April this year the EU released detailed ethical guidelines for AI. The EU sees building trust in AI as key: AI applications should follow ethical guidelines and base decisions on transparent criteria. The UK government has also been looking at this area for several years – an influential House of Lords report was published in April 2018, to which the Government responded in June 2018.

Then in March this year the UK's Centre for Data Ethics and Innovation (CDEI) issued its work programme for 2019/20, which will include an investigation of algorithmic bias in decision-making in various sectors, potentially including financial services, local government, recruitment, and crime and justice. These sectors are seen as particularly important to investigate given the significant impact decisions in these sectors can have on people's lives, and the attendant risk of bias.

The CDEI's likely focus will be on bias against characteristics protected under the Equality Act 2010, but the scope of the review may be extended to cover bias against other characteristics, such as digital literacy. Its findings – an interim report in summer 2019 and a final report with recommendations to Government in March 2020 – will be eagerly awaited by the UK AI community.

Implications for UK business

The OECD's Recommendation does not have the force of law – it won't immediately change the current piecemeal legal regime that applies to AI in the UK – but it will be very influential in shaping how governments in the UK and elsewhere approach future AI regulation and policy.

So unless and until we see AI-specific laws, UK businesses using AI, or intending to do so, will need to be alert both to general laws that have an impact on AI (including the GDPR and the Equality Act 2010) and to sector-specific regulation and guidance. They also need to be aware of the increasing use of codes of conduct in this area.

Codes of conduct have the benefit that they can be developed quickly (unlike hard law), applied flexibly, and updated rapidly in light of experience. For example, in February 2019 the UK Government published a Code of Conduct for data-driven health care setting out 10 principles. Whilst the principles elaborated in the Code are set in a health data and MedTech context, the principles themselves are largely sector-independent and are worth consideration by any AI-driven business. We can expect to see other sectors developing similar codes of conduct.

Ultimately, the successful use of AI requires trust – both the EU and the OECD highlight this. Transparency and legal compliance will help build that trust: making sure any personal data used is ethically sourced and its use is GDPR-compliant; making sure the algorithms used avoid unfair bias; making security integral to the design; and working with regulators (where relevant) from an early stage to ensure sector-specific issues are addressed (including using regulatory sandboxes where applicable). These considerations, along with the broader current policy context – especially around transparency and accountability – are all crucial to the successful implementation of AI in business, and the OECD's Recommendation is a very good place to start.
