Navigating AI in public procurement

AI and analytics are transforming public procurement, but authorities must balance innovation with the EU and Ireland’s strict rules on fairness, transparency and data protection.
Existing procurement directives still apply, and the EU AI Act (adopted in 2024) adds oversight for “high-risk” systems. Best-practice guidance from bodies such as the OECD and WEF recommends robust risk management and procedural safeguards whenever AI is procured.
Integrating AI into public procurement (buying AI)
AI solutions differ fundamentally from traditional software. They can be off-the-shelf tools, bespoke models trained on client data, or embedded modules within larger systems. In any case, an AI offering is an evolving service, not a fixed product. This means procurement teams must specify not only the desired functions but also understand how the AI is trained, updated, and governed.
Key considerations include:
- Data rights and privacy: Contracts must define who owns the training data and outputs, forbid unauthorised reuse, and ensure GDPR and local privacy compliance.
- Explainability and auditability: Require that suppliers document how the AI works. Score bidders on model transparency, clarity of decision logic, and the auditability of outputs.
- Bias and fairness: AI reflects its training data. Buyers may need bias testing and regular fairness audits (with independent oversight if needed) to ensure equitable outcomes.
- Ongoing monitoring: AI model performance can drift significantly over time. Buyers must include clauses covering retraining schedules, performance monitoring and KPIs to maintain accuracy and ethical behaviour.
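The bias and monitoring clauses above can be sketched as contractual KPI checks. This is a minimal illustration only: the threshold values, function names and the demographic-parity metric are assumptions for the example, not terms from any actual procurement contract or standard.

```python
def parity_ratio(rate_group_a: float, rate_group_b: float) -> float:
    """Demographic-parity ratio between two groups' positive-outcome
    rates (1.0 means perfectly equal treatment)."""
    top = max(rate_group_a, rate_group_b)
    if top == 0:
        return 1.0
    return min(rate_group_a, rate_group_b) / top


def review_model(accuracy: float, ratio: float,
                 min_accuracy: float = 0.90,
                 min_parity: float = 0.80) -> list[str]:
    """Return the contractual actions triggered by KPI breaches.
    Thresholds here are illustrative defaults, not regulatory values."""
    actions = []
    if accuracy < min_accuracy:
        actions.append("retrain: accuracy below contractual KPI")
    if ratio < min_parity:
        actions.append("fairness audit: parity ratio below threshold")
    return actions


# A model that has drifted on both accuracy and fairness:
print(review_model(accuracy=0.87, ratio=0.75))
# → ['retrain: accuracy below contractual KPI',
#    'fairness audit: parity ratio below threshold']
```

In practice the point is less the arithmetic than the contract design: the buyer, not the supplier, defines the thresholds and the actions that follow a breach.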
Standards for AI procurement are still emerging. Most authorities rely on existing IT procurement rules. In practice this means involving legal, technical and ethical experts from the outset, using pilot phases for complex AI projects, and insisting on staged rollouts. A risk-based mindset is essential:
- high-risk AI (e.g. medical or justice) needs deep ethical vetting and rigorous testing;
- moderate-risk AI (e.g. public chatbots) still demands transparency and auditability; and
- low-risk AI (e.g. internal writing assistants) can be bought with a lighter touch but still requires proper staff training.
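The risk tiers above amount to a lookup from tier to required controls, which could be encoded for checklist tooling. The control names here are illustrative assumptions mirroring the list, not an official taxonomy; defaulting an unknown tier to the strictest controls is a deliberate fail-safe choice.

```python
# Illustrative tier-to-controls mapping; control names are assumptions.
CONTROLS_BY_TIER = {
    "high":     ["ethical vetting", "rigorous pre-deployment testing",
                 "independent audit"],
    "moderate": ["transparency documentation", "output auditability"],
    "low":      ["staff training", "acceptable-use policy"],
}


def required_controls(tier: str) -> list[str]:
    """Look up the controls for a risk tier; unknown tiers fall back
    to the strictest ('high') set rather than to none."""
    return CONTROLS_BY_TIER.get(tier, CONTROLS_BY_TIER["high"])


print(required_controls("moderate"))
# → ['transparency documentation', 'output auditability']
```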
Officers must also plan for failure modes. Robust exit clauses, clear intellectual-property terms and data-portability provisions are critical if an AI solution under-delivers. In short, buying AI means buying assurance that the tool will perform fairly, safely and in line with public-sector values.
Evaluating AI-generated bid responses (managing AI in tenders)
AI is also a game-changer in the tender process itself. With bids often spanning hundreds of pages, automated tools can flag missing documents, mislabelled files or internal inconsistencies, speeding up evaluation and ensuring no compliant bid is overlooked. For lower-value or routine projects, AI-driven completeness checks could enable smaller teams to handle more competitions with fewer errors. At the same time, AI is levelling the playing field for bidders. Those with dyslexia, disabilities or weaker language skills can use AI aids (like grammar checkers or ChatGPT) to structure and polish their responses. This assistive use of AI should not be seen as cheating – it is more like providing a ramp for accessibility.
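The completeness check described above can be sketched as a simple set comparison between the documents a tender requires and those a bidder submitted. The document names here are hypothetical examples, not drawn from any real tender pack.

```python
# Hypothetical required-document list for a tender; names are illustrative.
REQUIRED_DOCS = {"tender_form", "pricing_schedule",
                 "insurance_cert", "esg_statement"}


def completeness_report(submitted: set[str]) -> dict[str, set[str]]:
    """Flag required documents that are missing and unexpected extras
    that may warrant a clarification request."""
    return {
        "missing": REQUIRED_DOCS - submitted,
        "unexpected": submitted - REQUIRED_DOCS,
    }


report = completeness_report({"tender_form", "pricing_schedule",
                              "methodology"})
print(sorted(report["missing"]))
# → ['esg_statement', 'insurance_cert']
```

A real tool would sit on top of document classification rather than exact filenames, but the evaluation logic, flag gaps early and request clarification rather than reject outright, is the same.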
Evaluation teams should distinguish between genuine assistive use and cases where AI is misused to mask a lack of expertise. The UK’s PPN 02/24 notes that suppliers’ use of AI is not banned, but it advises transparency and due diligence. Buyers are encouraged to ask bidders to declare any AI use and to put proportionate controls in place.
For example, authorities might require assurances that no confidential tender data was used to train the AI, and should conduct extra due diligence (site visits, clarifications or presentations) to check a supplier’s capacity when AI tools were used in bid preparation. When AI use is detected, evaluators should apply a risk-based lens. They should ask: Is this a high-stakes, €5 million contract or a routine €20,000 job? Does the bidder show real local knowledge and a solid track record? Are the technical and financial plans coherent?
Well-crafted AI-assisted text is not grounds for disqualification, but it heightens the need to verify that the bidder truly understands the work and can deliver. Contracts themselves may need updating. Many assume human authorship, so authorities should add AI-related exit mechanisms (performance bonds, milestone reviews, dynamic penalties) in case of non-performance. At the same time, these safeguards must be fair so as not to deter smaller economic operators who may rely on AI for efficiency.
A future-focused balance
AI offers immense potential in public services, from predictive analytics to fraud detection, but it requires a balanced approach. Procurement teams must blend AI’s strengths with human judgement. As one expert advises, use AI as an “intelligent, assistive tool, not an oracle”. Global guidance from bodies like the OECD stresses that AI procurement must safeguard public benefit and transparency. In practice, this means upholding core EU procurement principles – value for money, non-discrimination and transparency – even when AI is involved. It also means investing in skills. Teams must learn to write outcome-based specifications and engage legal, IT and ethical experts early in the process.
Cross-functional governance (involving AI engineers, data stewards and ethicists alongside commercial officers) will become more common. In summary, navigating AI in procurement is about thinking big but buying carefully and responsibly. With the right governance, transparency and risk management, authorities can harness innovation without compromising accountability or fairness. The future of public procurement will be shaped by how well we integrate AI within the rules. The focus should not be on bending the rules for AI but on bending AI to serve our rules.