New EU Proposal for a Regulation of AI

Till Böddinghaus
2021-04-27

On 21 April 2021, the EU released its new proposal for a regulation of AI in the European Union, along with a new coordinated plan, as the next step toward global leadership in trustworthy AI.

Rules for trustworthy AI and important steps already taken by the EU

In February 2020, the EU proposed a new plan to drive digital transformation and power the economy. The strategy addresses European solutions for the digital age and aims to unlock high-quality data and, more generally, to make more data available to researchers and businesses. The underlying goal is to build trust between society and AI applications and thus create a robust foundation for the development of trustworthy AI.

With the introduction of the General Data Protection Regulation (GDPR) in 2018, lawmakers took important steps to strengthen citizens' rights regarding the processing of their personal information. In 2020, the EU came forward with new proposals for AI and data in general:

  • New rules for high-risk AI (e.g. health, policing, transport): systems must be transparent, traceable and guarantee human oversight
  • High-risk systems shall be trained on unbiased data so that they reliably reach proper performance and respect fundamental rights such as non-discrimination
  • Consumer protection: testing and certification of the data used by algorithms
  • A discussion of the justifiable use of biometric identification
  • A labelling scheme for lower-risk AI applications
  • Creation of an EU governance structure to establish a framework for compliance
  • A framework for data governance, access, reuse and data sharing
  • Availability of public sector data to foster innovation
  • Cloud infrastructure platforms and systems to support the reuse of data
  • Building of European data spaces

Enforcement mechanisms to test Artificial Intelligence for trustworthiness

Furthermore, the EU came forward with enforcement mechanisms to test whether an AI application fulfils all the conditions necessary to be considered trustworthy or ethical.

  1. The system shall be trained using data that “respects European values and rules”
  2. The system shall inform the user about its underlying goal, its limitations and capabilities
  3. The systems have to be robust and accurate
  4. There needs to be an adequate level of human oversight, involvement and interaction with the system

With these measures, European lawmakers want to give businesses and researchers far better access to data to drive innovation, improve research and power the economy. Data spaces should not only store data but also make sharing it possible and easy, e.g. sharing sensitive healthcare data under newly defined policies.

New fines for violations to boost people's trust in AI

Recently, the EU proposed new rules for applying AI that further reinforce the aforementioned regulations. More specifically, new compliance requirements for high-risk AI applications have been proposed. These requirements are not bound to a specific industry. Policymakers hope to further boost public trust in AI by installing this new system of checks. Companies that violate these rules face fines of up to 4% of global turnover or €20M, whichever is greater.

AI developers and users need to determine whether a specific use case is considered high-risk AI and whether they need to conduct a compliance check before placing the product on the market.

A compliant system shall then carry the CE marking to indicate conformity with the rules. The member states are responsible for implementing and enforcing this new set of rules by designating a national supervisory authority.

We need to rethink the creation of AI

Now, more than ever, we as a society need to rethink how trustworthy and ethical AI should be developed, used and supervised. Reaching certainty that a system fulfils all of the requirements mentioned above demands a new, holistic approach to developing and testing algorithms.

New AI applications have to be transparent about their purpose, capabilities and limits while always adhering to privacy regulations.

Research has shown that widely used techniques like anonymization are simply not enough to truly guarantee privacy. New approaches have since been developed and proposed, such as Differential Privacy, Federated Learning, Secure Multi-Party Computation and Homomorphic Encryption.

For example, Differential Privacy is a strong, mathematical definition of privacy in machine learning analysis. It "mathematically guarantees that anyone seeing the result of a differentially private analysis will essentially make the same inference about any individual's private information, whether or not that individual's private information is included in the input to the analysis."
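
To make this guarantee tangible, here is a minimal Python sketch of the classic Laplace mechanism (an illustration, not DQ0's implementation): a counting query whose result changes by at most 1 when any single record is added or removed is released with noise calibrated to that sensitivity and a chosen privacy parameter epsilon.

    import numpy as np

    def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
        """Release `true_value` with epsilon-differential privacy.

        Adds Laplace noise with scale sensitivity / epsilon, the
        classic mechanism satisfying the definition quoted above.
        """
        return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Example: privately publish how many patients have a condition.
    # Adding or removing one individual's record changes the count by
    # at most 1, so the sensitivity of this query is 1.
    true_count = 42
    private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
    print(f"true count: {true_count}, private release: {private_count:.1f}")

A smaller epsilon means more noise and a stronger privacy guarantee; a larger epsilon means more accurate results but a weaker guarantee.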

By using Differential Privacy, we as individuals gain the ability to truly control and govern what information about our private data is leaked to external parties. By measuring the loss of privacy, the data owner can properly oversee and control both the data and the insights derived from it.
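
This "measuring" can be operationalized as a privacy budget. The following toy sketch (hypothetical helper names, assuming basic sequential composition, under which the epsilons of successive queries simply add up) shows how a data owner might cap the total privacy loss and deny further queries once the budget is spent:

    class PrivacyBudget:
        """Toy epsilon accountant that tracks cumulative privacy loss."""

        def __init__(self, total_epsilon: float):
            self.total_epsilon = total_epsilon
            self.spent = 0.0

        def charge(self, epsilon: float) -> None:
            # Deny the query if it would exceed the owner's cap.
            if self.spent + epsilon > self.total_epsilon:
                raise RuntimeError("Privacy budget exhausted: query denied.")
            self.spent += epsilon

        @property
        def remaining(self) -> float:
            return self.total_epsilon - self.spent

    budget = PrivacyBudget(total_epsilon=1.0)
    budget.charge(0.5)  # first differentially private query
    budget.charge(0.3)  # second differentially private query
    print(f"remaining budget: {budget.remaining:.2f}")
    # A further charge of 0.5 would now raise an error.

Production systems typically use tighter composition theorems than this simple sum, but the principle of an enforced, auditable cap on privacy loss is the same.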

A platform for trustworthy AI

The new EU regulation of artificial intelligence can, if properly implemented, lead to greater user trust in AI applications as well as greater legal certainty for companies that follow the rules. But regulation alone won't change anything. Companies and AI developers need processes and tools to assess and support compliance.

To make AI regulation a success and enable companies to build truly trustworthy AI applications, one needs a software platform that

  • helps implement processes to develop ethical AI
  • ensures strong data privacy guarantees
  • enables companies to assign and monitor roles and access
  • provides a solid enterprise-grade environment for trusted AI development

First, to reach the goal of deploying responsible AI applications, it is not enough for management to implement a few policy guidelines or to bolt self-assessment questionnaires onto the end of each development cycle. Rather, ethical AI can only succeed if it is built directly into the development process itself. The software platform helps data scientists, machine learning experts and software developers follow the rules of AI regulation while developing their AI solutions. This way, development stands on solid ground, and the resulting AI applications contain no hidden holes or black boxes and are therefore trustworthy.

Second, the platform should implement guard rails and strict technical measures to ensure the highest level of data protection and data privacy. Since machine learning (the most successful field of AI) depends heavily on data, protecting the data sets used from misuse or accidental disclosure is one of the most important functions of responsible AI development. Many technical measures, from IT security to software and machine learning development practices, must be implemented in the development environment to ensure all data is securely protected at all times.

Implementing processes to comply with ethical AI rules also requires a clear understanding of responsibilities, roles, scopes and requirements. A software platform can help reduce the effort involved in tracking access rules, task assignments and scoped development installations. It can also provide advanced auditing mechanisms so that human supervisors can easily keep track of all oversight duties during the development, deployment and operation of AI.

After all, only a full-scale software development platform for ethical AI can provide the enterprise-grade robustness and support that organizations need to securely build trusted applications. Tools and frameworks will become more accessible as corporations, research institutions and universities focus on particular aspects of the EU regulatory agenda; the first tools for explainable AI, for example, are already available and cover certain angles quite well. However, companies need more than a handful of tools to reliably comply with EU regulation. The platform therefore offers all the necessary pillars: thoroughly implemented and certified processes, the highest data protection guarantees, integrated secure development environments and policy-based collaboration, backed by constant updates and trustworthy enterprise support.

Contact us for more information on DQ0 and our solutions for trustworthy and ethical AI!
