Supporting innovation through the EU Artificial Intelligence Act

Eamonn Cahill, head of the AI and Digital Regulation Unit at the Department of Enterprise, Trade and Employment, tells eolas Magazine that the EU Artificial Intelligence Act, currently being rolled out in phases, will support the adoption of AI rather than impede it.
Cahill asserts: “The AI Act is not an obstacle to the adoption or the deployment of AI systems. Quite the opposite. The AI Act is designed to be supportive and even to accelerate the adoption of AI across the EU.”
It is predicated on the OECD’s definition of AI, which states that AI systems generate outputs based on inference from the inputs they receive. Cahill says the inference process is “the secret power of these systems”, but adds that its “inherent uncertainty” can pose risks. The Act, Cahill hopes, will “go some way towards putting manners on these systems so the power is exercised responsibly and ethically, and in a human-centric manner”.
Governance
The Act is linked with the EU’s AI strategy, which is based on three pillars: innovation, governance – which includes the AI Act – and guardrails. On governance, Cahill indicates that “a coherent, unified” structure regarding AI must be implemented across member states. Essential to facilitating this are national competent authorities, the European Commission’s AI Office, the European Artificial Intelligence Board, an advisory forum, and a scientific panel.
The AI Office forms “the backbone of the framework”, by producing secondary legislation including standards, guidance, and codes of practice, all of which Cahill says are “necessary for the full implementation of the AI Act”.
He asserts that the Act is designed to provide “the minimum proportionate protections that are necessary to foster the development and adoption of safe, responsible AI”. Central to this is the European Artificial Intelligence Board, where member states decide on AI strategy. Cahill describes it as “the decision-making platform for Europe’s engagement with the broader world”.
He explains that the scientific panel will comprise independent AI experts who will support the AI Office and national competent authorities to implement and enforce the Act. It has not yet been appointed, but Cahill says the Commission aims to launch a call for expression of interest for the panel soon.
Aimed at “pre-empting any anomalies or inconsistencies” in its application across the EU, the Act is designed to intervene only “where absolutely warranted”, according to Cahill. He adds: “It does not in any way smother or hinder the adoption of AI or AI innovation.”
Guardrails
Cahill traces how the Act is applied according to a risk hierarchy comprising four categories:
- unacceptable;
- high;
- transparency; and
- minimal or no risk.
In February 2025, eight AI practices posing ‘unacceptable’ risks were prohibited under the Act, including subliminal techniques, exploitation of vulnerabilities, discrimination, inference of emotions, and certain uses in law enforcement. However, there are exceptions to these rules, and Cahill outlines that exemptions can relate to safety concerns.
High-risk applications fall into two categories: the use of AI connected to safety systems in 12 product categories, and applications of AI that can impact people’s fundamental rights.
Some applications of AI pose a lower risk but require transparency, such as customer service chatbots, which must reveal they are enabled by AI. Cahill claims most applications of AI will not pose “credible risks to health, safety, or fundamental rights”, and adds that people should be able to use these applications “to innovate, untrammelled by any considerations of the AI Act”.
Rollout
The Act will be rolled out on a phased basis until August 2027. Provisions on general-purpose AI models will apply from August 2025, regulations of certain high-risk uses of AI will apply from August 2026, and obligations on the use of AI in certain product categories will apply from August 2027.
Cahill asserts that August 2025 is “the big deadline” for member states as this is when competent authorities must be designated to apply penalties for breaches of the Act. Fines for the most “egregious” breaches can be up to €35 million, or 7 per cent of annual global turnover.
Under the Act, providers, developers, deployers, and importers need to ensure they have evidence demonstrating that appropriate quality control mechanisms are in place to mitigate risks that may arise from the use of AI.
Cahill concludes: “The EU AI Act, if implemented properly, can drive AI innovation and AI adoption by building confidence in systems, and by providing regulatory certainty for investors, developers, and deployers across the EU.”