TECH NEWS

The EU AI Act, unpacked.

The EU AI Act is often described as the world’s first comprehensive AI regulation, but that framing misses the point. This is not about slowing innovation. It is about clarifying accountability when AI systems influence real-world outcomes.

January 21, 2026

Tine Larsen, President of the CNPD – Dr Lucilla Sioli, Director of the EU AI Office – Elisabeth Margue, Minister of Justice and Minister delegate to the Prime Minister in charge of Media and Connectivity – Carlo Thelen, Director General, Luxembourg Chamber of Commerce / Photo credit: Blitz Agency for the Chamber of Commerce

 

For European organisations, the shift is less technical than organisational: AI is moving from an IT concern to a governance issue.

On Tuesday 20 January, more than 200 guests gathered at the Chamber of Commerce to hear from European and national policymakers, alongside private-sector representatives and the government agencies involved in the Act’s rollout, enforcement, and education. Minister Elisabeth Margue reminded the audience that, among the ten different agencies involved in AI regulatory enforcement, the CNPD will play a primary role.

Dr Lucilla Sioli, Director of the EU AI Office at the European Commission, acknowledged that the Act has faced criticism from companies and from outside Europe. However, she stressed that the EU AI Act should not be viewed in isolation. Regulation and innovation are being developed in parallel. “We want users to use AI,” she said, while noting that AI inevitably carries risk. Without a regulatory framework, many organisations would struggle to trust the technology.

One of the Act’s objectives is regulatory harmonisation across the EU, in contrast to the United States’ state-based approach. The framework is technology-neutral but introduces behavioural obligations for providers of general-purpose AI models.

 

At its core, the EU AI Act follows a risk-based approach, defining four categories:

  • Very high-risk applications – Completely banned, such as social scoring or certain biometric uses
  • High-risk applications – Subject to significant regulation, including AI used in medical devices
  • Medium-risk applications – Partial regulation, for example chatbots that impersonate humans
  • Low-risk applications – Outside the scope of the Act, such as images used in video games
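As a rough illustration of how this tiering might be modelled in an internal compliance tool, the four categories can be sketched in code. The tier names, example mappings, and function below are our own illustrative assumptions, not terms or classifications from the Act itself; real scoping requires legal analysis.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers mirroring the Act's risk-based approach."""
    PROHIBITED = "very high-risk: completely banned"
    HIGH = "high-risk: subject to significant regulation"
    LIMITED = "medium-risk: partial regulation (e.g. transparency duties)"
    MINIMAL = "low-risk: outside the scope of the Act"


# Example use cases drawn from the article; the mapping is illustrative only.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.PROHIBITED,
    "medical device diagnostics": RiskTier.HIGH,
    "chatbot impersonating a human": RiskTier.LIMITED,
    "video game imagery": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]
```

In practice, a classification like this would only be a first triage step; whether a given system actually falls within scope depends on the Act's detailed criteria and the technical standards still under development.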

 

A recurring question from organisations is whether they fall within scope. It is widely accepted that frontier models will introduce new risks. One of the challenges lies in the phased implementation: different parts of the Act apply at different times. Very high-risk applications are already covered, while high-risk obligations come into force next year. However, the relevant technical standards are still under development. In parallel, a forthcoming digital omnibus is expected to introduce amendments, including elements related to the Data Union Strategy and EU business wallets.

Concrete Luxembourg use cases were also presented. One of the most notable came from Stefaan Roegiers, Chief Product Manager for Digital Banking at BIL, who introduced the bank’s virtual assistant, “Berry”. The tool responds to customer prompts, for example by analysing transactions. “Clients expect ChatGPT-quality answers,” he noted, “but it only works if users ask the right prompt.” From a vendor management perspective, BIL encountered pressure from suppliers seeking to lock them into a single LLM, while the bank prioritises the flexibility to change. “The whole process has been a discovery,” he added, “including for our CISO and DPO.”

 

By Jim Kent
