
AI in Hiring: EU vs UK Rules and What to Do by 2026
Artificial intelligence is transforming recruitment across the engineering and manufacturing sectors. From CV screening to candidate assessment, AI tools promise efficiency gains that could revolutionise how we attract and place talent. However, with the EU AI Act now in force and the UK developing its own regulatory framework, organisations need to understand what's coming and prepare accordingly.
The EU AI Act: What It Means for Recruitment
The EU AI Act entered into force on 1 August 2024 and represents the world's first comprehensive legislation on artificial intelligence. For recruitment teams, the timeline is crucial: whilst some provisions are already active, full enforcement arrives on 2 August 2026.
High-Risk Classification
AI systems used in employment decisions, including recruitment and candidate selection, are classified as high-risk under the Act. This classification carries significant obligations for any organisation operating within or recruiting from the EU.
High-risk AI systems in recruitment must meet stringent requirements, including:
Transparency and Disclosure – Candidates have a legal right to be informed when AI is used in their recruitment or assessment process.
Human Oversight – AI cannot make hiring decisions alone; humans must meaningfully review and oversee AI outputs.
Documentation and Record-Keeping – Organisations must maintain detailed records explaining how AI is used and the logic behind automated decisions, enabling compliance checks and audits.
Risk Management Systems – Employers must have processes to test AI for bias, monitor performance, and fix problems when identified.
Data Governance – The data used to train AI must be relevant, representative, and checked for bias, which is especially important given existing diversity challenges in STEM hiring.
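To make the documentation and human-oversight obligations more concrete, a recruitment team might log every AI-assisted screening decision as a structured record that captures both the AI output and the human review of it. The fields and names below are illustrative assumptions, not a schema prescribed by the Act:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScreeningRecord:
    """Illustrative audit record for one AI-assisted screening decision."""
    candidate_id: str        # pseudonymised candidate identifier
    tool_name: str           # which AI system produced the output
    ai_recommendation: str   # e.g. "shortlist" / "reject"
    ai_rationale: str        # the logic behind the automated output
    human_reviewer: str      # who exercised oversight
    human_decision: str      # final decision, which may override the AI
    timestamp: str           # when the decision was recorded (UTC)

# Example entry: the human reviewer overrides the AI recommendation,
# and the record preserves both outcomes for later audit.
record = ScreeningRecord(
    candidate_id="cand-0042",
    tool_name="cv-screener-v2",   # hypothetical tool name
    ai_recommendation="reject",
    ai_rationale="missing required certification keyword",
    human_reviewer="recruiter-7",
    human_decision="shortlist",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(record), indent=2))
```

Keeping the AI recommendation and the final human decision side by side is what lets you later demonstrate, in an audit, that oversight functioned in practice rather than just in policy.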
The Phased Timeline
Understanding the implementation phases is crucial for planning:
February 2025: Prohibitions on certain AI practices and AI literacy obligations became applicable
August 2025: Governance rules and obligations for general-purpose AI models take effect
August 2026: Full compliance required for high-risk AI systems, including recruitment tools
The UK's Approach
Post-Brexit, the UK is charting its own course on AI regulation. Whilst the government has published principles for AI regulation, it's taking a sector-specific, pro-innovation approach rather than introducing comprehensive legislation similar to the EU AI Act.
Key UK Principles
The UK's framework centres on five cross-sectoral principles:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
In the UK, companies using AI in hiring face fewer prescriptive rules than those in the EU, but that doesn't mean they can ignore the law. Existing employment, equality (the Equality Act 2010) and data protection (UK GDPR) legislation applies in full to AI tools. If an algorithm discriminates, the employer is still liable for discrimination.
For organisations recruiting across both UK and EU markets (a common scenario in engineering and manufacturing sectors such as aerospace, automotive and energy), the practical approach is to align with the more stringent EU requirements.
Practical Steps for 2026 Readiness
Audit Your Current AI Usage: Many organisations are using AI in recruitment without fully recognising it. Applicant tracking systems, CV parsing tools, automated screening, and even chatbots may incorporate AI elements. Start with a comprehensive audit of your recruitment technology stack.
Assess EU Exposure: If you recruit from EU countries or have operations within the EU, the AI Act applies to you. It can also capture UK businesses whose AI tools assess candidates located in the EU, since the Act covers AI outputs used within the Union. Understanding your exposure is the first step toward compliance.
Review Vendor Contracts: If you use third-party recruitment technology or agency partners, review their AI usage and compliance measures. Under the AI Act you are the "deployer" of these systems, and your own obligations apply regardless of which tools or partners you use.
Develop Transparency Protocols: Create clear, accessible communications explaining when and how AI is used in your hiring process. This should be integrated into application processes and candidate communications.
Implement Human Oversight: Ensure that AI-generated insights inform rather than replace human decision-making. Document how human oversight functions in practice, not just in policy.
Test for Bias: Regular testing of AI systems for potential bias should become part of your recruitment quality assurance. This is particularly important in engineering and manufacturing recruitment, where diversity challenges already exist.
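One widely used heuristic for this kind of quality assurance is the "four-fifths rule" from US adverse-impact analysis: compare selection rates across candidate groups and flag the system if the lowest rate falls below 80% of the highest. A minimal sketch, with illustrative group names and numbers:

```python
# Adverse-impact check using the "four-fifths rule": flag the screening
# tool if the lowest group selection rate is under 0.8x the highest.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative shortlisting outcomes from an AI screening tool
outcomes = {
    "group_a": (40, 100),  # 40% shortlisted
    "group_b": (28, 100),  # 28% shortlisted
}

ratio = adverse_impact_ratio(outcomes)
print(f"Adverse-impact ratio: {ratio:.2f}")  # 0.28 / 0.40 = 0.70
if ratio < 0.8:
    print("Below the 0.8 threshold: investigate for potential bias.")
```

A ratio below 0.8 is a prompt for investigation, not proof of discrimination; it is one simple signal to build into regular monitoring alongside the documentation the Act requires.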
Train Your Teams: Whether internal recruiters or hiring managers, everyone involved in recruitment needs to understand AI regulations and their responsibilities. This includes recognising when AI is being used and ensuring proper oversight.
Is Your Organisation Ready?
With full EU AI Act compliance required by August 2026, the time to prepare is now. The regulations may seem distant, but their complexity means organisations need to start planning immediately, particularly if AI is already part of your recruitment process.
Whether you're considering an RPO partnership or managing compliance internally, understanding these regulations is essential.
At Owen Daniels, we're adapting our RPO solutions to ensure our partners remain compliant whilst continuing to attract the diverse talent that drives innovation.
Ready to discuss how an RPO partnership can support your AI compliance strategy whilst delivering exceptional STEM talent?