European Commission publishes ethics guidelines for trustworthy AI


10 April 2019

On 8 April 2019, the European Commission’s High-Level Expert Group on AI (“HLEG”) published its ethics guidelines for trustworthy AI. These guidelines supersede the draft guidelines issued in December 2018 (which we reported on in January) and have been revised in light of feedback from more than 500 organisations and individuals received through an open consultation. The aim of the guidelines is to promote trustworthy AI in the context of the European Commission’s vision for AI, which supports “ethical, secure and cutting-edge AI made in Europe”.

The premise of the guidelines is that trustworthy AI is essential to give both individuals and communities confidence in AI, to prevent and minimise the risks arising from AI systems (such as inappropriate identification and tracking of individuals, or AI-enabled citizen scoring in violation of fundamental rights) and to promote the uptake of AI for appropriate purposes so that its potentially vast benefits can be realised (such as intelligent transport systems and systems which can diagnose, treat and prevent certain illnesses).

The guidelines provide that trustworthy AI has three core components, which should be met throughout an AI system’s life cycle: it must be lawful, ethical and robust. The guidelines concentrate on the ethical and robust components and aim to show how to put these principles into practice, through both technical and non-technical methods, when developing, deploying and using AI systems. As part of this, the guidelines contain a non-exhaustive trustworthy AI assessment list. This is not intended to be a mere tick-box exercise, but rather a tool to ensure that the principles and requirements set out in the guidance are achieved. It can be used to assess factors such as: the need for a fundamental rights impact assessment; the appropriate level of human control for the particular AI system; appropriate mechanisms of control and oversight over any self-learning elements of the system; cyber resilience and protection from cyber attack; the accuracy, quality and integrity of data; and traceability, “explainability”, non-discrimination and fairness. In more detail, the assessment list places a strong focus on seven key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems
  • Privacy and data governance: Building on the data points mentioned above, citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them
  • Transparency: As mentioned above, the traceability of AI systems should be ensured
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility
  • Societal and environmental well-being: AI systems should be used to foster positive social change and to enhance sustainability and ecological responsibility
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes

The Commission intends to launch a pilot phase in summer 2019, seeking involvement from a wide range of stakeholders. Organisations can sign up to the European AI Alliance to be notified when the pilot starts.

Following the pilot phase, the aim is for the AI expert group to review the assessment lists for the key requirements in early 2020, building on the feedback received, before evaluating the outcome and proposing any next steps.

The European approach is merely one of a number of territorial approaches to this area across the globe.

As touched upon above, the guidelines do not expressly deal with lawful AI. AI does not, of course, operate in a legal vacuum: it is already subject to various national and international laws, including privacy, product liability, anti-discrimination, consumer, health and safety at work, and intellectual property laws, among others. Additionally, certain sectors have their own regulations which will bite on the provision of such technology. As we have noted in our earlier articles on AI, there is work to be done in relation to the laws which do touch on AI – including, for example, some updating of intellectual property (particularly certain copyright and patent) laws in certain countries to ensure that they adequately cover AI (including inventions made by AI).

The focus on privacy is, naturally, being taken very seriously across the globe. Only recently, the UK Information Commissioner’s Office invited comment from organisations on the development of its auditing framework for AI (https://ai-auditingframework.blogspot.com/2019/03/an-overview-of-auditing-framework-for_26.html). More on privacy and AI will follow in a future article.

AI companies will need to keep abreast of the changing legal and ethical landscapes to ensure that their AI technology and service offerings take account of the changes in law and guidance coming through. Where countries have differing guidance and laws, AI providers will need to consider whether to modify or train their technology for local nuances in the markets they serve, and/or to build and train flexible AI which switches approach depending on location. Autonomous vehicles are the most obvious area where this will be paramount: if the AI has to follow different highway and/or ethical rules in different countries, the technology will have to switch as a vehicle crosses into another country.

At Eversheds Sutherland, we have worked on a number of AI deals over the past few years (and particularly over the last 18 months). Whilst we appreciate that consultations and their outputs take time, the wave of adoption of this technology is coming fast, and AI will become increasingly mainstream as we move further into 2019 and beyond. Customers adopting the technology will need to be mindful of the changing landscape and work with their AI technology providers and legal teams to put in place flexible, future-proofed deals and contracts.

This is the sixth in our series of articles on AI. If you would like to view any of the earlier articles, please let us know.
