The highly anticipated EU AI Act appears to be within reach. According to Dragos Tudorache, the Romanian MEP and co-rapporteur of the EU AI Act, 70% of the legal text has already been agreed upon. Nevertheless, two significant points still divide the European Parliament, the member states, and the European Commission. In this blog post, we’ll explore these two crucial issues that are impeding the enactment of the EU AI Act.
Biometric Surveillance and the Privacy Question
Biometric surveillance has been a focal point since the initial draft of the EU AI Act. The European Commission took a firm stance, advocating a complete ban on AI systems used for real-time facial recognition in public spaces. While revising the legal text, the European Parliament adopted an even stricter position, proposing to extend the ban to “post” remote biometric identification in public spaces. It would permit ex-post biometric identification only in the context of serious crimes, and only once prior judicial approval has been obtained.
In contrast, the European Council views biometric surveillance as vital for national security and seeks exemptions for the use of remote biometric identification systems by law enforcement and immigration authorities in cases involving threats to critical infrastructure and individual health. Additionally, the Council aims to retain the right to employ real-time biometric surveillance in border control areas.
The dispute between the European Parliament and the Council over the use of remote biometric surveillance can be seen as a clash between a human-centric and a state-centric approach.
With regulations like the GDPR, Europe has firmly established its commitment to safeguarding citizens’ privacy in data and digital spaces. European laws have consistently been built on the premise that fundamental human rights must be protected at any cost. Concerning digital regulation, the Parliament’s official stance on the use of AI in criminal law is that individuals not only have a right to accurate identification but also a right not to be identified, unless identification is mandated by law for a legitimate public interest.
Foundation Models and the Uncertain Future of AI
Another point of contention is foundation models, which were absent from the initial draft of the EU AI Act published in April 2021. At that time, the focus was primarily on the value chain and on narrow AI—systems designed for specific functions. However, in November 2022, ChatGPT entered the market, raising awareness of the potential capabilities, both beneficial and malicious, of generative AI and other foundation models. A foundation model is a deep learning model pre-trained on extensive datasets gathered from the public internet. Unlike narrow AI models built for a single task, foundation models can transfer knowledge between various tasks.
The EU institutions recognise the necessity of regulating foundation models. Yet the question extends beyond simply incorporating foundation models into the final draft: if such significant changes can occur within two years while the Act is still in development, how can we ensure that the Act will encompass future technologies?
The good news is that there is broad consensus between the Council and the Parliament about including foundation models in the Act. The challenge lies in crafting regulations for technology without a specific purpose. The Parliament’s approach suggests that systems on which other AI solutions can be built must adhere to transparency obligations. This entails documenting the modelling and training process, evaluating the model against established benchmarks before market launch, and continuous testing, documentation, and auditing after launch. Both developers and users of these foundation models would share the responsibility.
Another proposal involves introducing a new category, “very capable foundation models,” subject to additional obligations because their capabilities surpass current state-of-the-art AI systems. This idea stems from policymakers grappling with the challenge of regulating future AI models. The hurdle lies in defining the threshold for such “very capable foundation models” as technology continues to evolve.
Should this proposal be adopted, “very capable foundation models” would undergo routine assessments by external red teams and compliance checks by independent auditors, and their providers would have to establish a risk-mitigation system before market launch.
The EU is not alone in grappling with concerns over the uncertain future risks of AI. US and UK policymakers are also increasingly focused on hypothetical AI harms rather than addressing existing issues like labour market disruptions, the proliferation of misinformation, and, notably, biases in AI decision-making in areas such as hiring, policing, or medical care.
Regulating a technology prone to unprecedented and unexpected evolution poses a significant challenge. Therefore, the question arises: should AI regulations aim to be future-proof or reactive to current circumstances?