Is Your AI High-Risk? A 5-Minute Assessment for Business Leaders

If your organisation uses AI for recruitment, credit decisions, insurance pricing, medical diagnostics, or critical infrastructure operations, your AI is almost certainly classified as high-risk under the EU AI Act. High-risk classification triggers substantial compliance obligations with an August 2026 deadline. This assessment helps you determine where you stand.

The questions below take a few minutes. The answers determine whether you face a major compliance project or can continue with minimal regulatory burden.

Why classification matters

The EU AI Act uses a risk-based approach. Different risk levels trigger different obligations. Get it wrong and you either waste resources on unnecessary compliance work or face enforcement you didn't see coming.

  • High-risk AI systems face mandatory requirements: risk management systems, technical documentation, conformity assessment, human oversight mechanisms, and ongoing monitoring. The work is substantial. Penalties under the Act reach up to 35 million euros or 7 percent of global annual turnover, whichever is higher, for the most serious violations.

  • Limited-risk AI systems face only transparency obligations. Users must know they are interacting with AI. The compliance burden is light.

  • Minimal-risk AI systems face no specific requirements under the Act. Most AI applications fall here: spam filters, recommendation engines, search tools, and similar systems.

The difference between high-risk and minimal-risk is the difference between a significant compliance investment and business as usual. Accurate classification matters.

The eight categories that trigger high-risk status

The EU AI Act defines high-risk AI through specific use case categories. If your AI system operates in any of these areas, it is presumed high-risk.

Biometrics. This includes remote biometric identification, facial recognition systems, fingerprint matching, and emotion detection. If your AI identifies or categorises people based on physical or behavioural characteristics, it falls here. Corporate security systems using facial recognition are in scope. Customer emotion analysis tools are in scope.

Critical infrastructure. AI systems managing electricity grids, water supply, gas distribution, heating networks, or road traffic fall into this category. If your AI makes operational decisions for utilities or transport, it is high-risk. This applies to both public infrastructure operators and private companies running essential services.

Education and vocational training. AI that determines admissions, assigns students to courses, assesses learning outcomes, or monitors student behaviour during tests is high-risk. Universities using algorithmic admissions screening are in scope. Automated grading systems are in scope.

Employment and worker management. This is where many organisations discover they are affected. AI used for recruitment screening, CV filtering, interview evaluation, performance assessment, promotion decisions, task allocation, or worker monitoring falls here. If your HR software uses AI to rank candidates or flag employees, it is high-risk.

Access to essential services. Credit scoring, loan approval decisions, risk assessment and pricing in life and health insurance, and emergency service call prioritisation are all high-risk. Banks, insurers, and lenders using AI in customer decisions are heavily affected. This category also covers AI determining eligibility for public benefits.

Law enforcement. AI systems used for individual risk assessment, crime prediction, evidence evaluation, or polygraph alternatives fall here. This affects public sector agencies and vendors selling to law enforcement.

Migration, asylum, and border control. AI used for visa application assessment, asylum processing, border screening, or security risk evaluation is high-risk.

Administration of justice and democratic processes. AI systems that assist judicial authorities in researching and interpreting facts and the law, or that are intended to influence the outcome of an election or referendum or voters' behaviour, fall into this category. Political advertising targeting and voter microtargeting campaigns are the most common commercial examples.

The medical device path

There is a separate route to high-risk classification that bypasses the eight categories above. AI systems classified as Class IIa or higher medical devices under existing EU regulations are automatically presumed high-risk under the AI Act.

This affects hospitals, clinics, diagnostic companies, pharmaceutical firms, and medical technology vendors. If your AI interprets medical images, recommends treatments, supports clinical decisions, or predicts patient outcomes, it likely qualifies as a medical device and therefore as high-risk AI.

The same applies to AI embedded in other regulated products. AI safety components in machinery, vehicles, toys, lifts, and other products covered by EU product safety legislation may inherit high-risk status from the product classification.

A simple decision sequence

Four questions will tell you whether your AI system is likely high-risk.

  • First: Does your AI make or influence decisions about individual people? If the answer is no, you are likely in the minimal-risk category. Spam filters, inventory optimisation, and predictive maintenance typically fall here. If the answer is yes, continue to the next question.

  • Second: Do those decisions affect employment, credit, insurance, healthcare, education, or legal status? If yes, your system is likely high-risk. Recruitment tools, lending algorithms, diagnostic aids, and admissions systems all qualify.

  • Third: Does your AI operate critical infrastructure or essential public services? If yes, it is likely high-risk. This includes energy management, water systems, transport operations, and emergency services.

  • Fourth: Is your AI embedded in a regulated product? If yes, check that product's classification under existing EU regulations. Medical devices, vehicles, machinery, and similar products have their own risk classifications that may cascade into AI Act obligations.

If you answered no to all four questions, your AI is likely limited-risk or minimal-risk. You may still have transparency obligations if users interact directly with the AI, but the compliance burden is light.
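If you maintain an internal AI inventory, the four questions can also be encoded as a simple screening rule. The Python sketch below is purely illustrative: the record fields, labels, and the screen function are assumptions made for this article, not anything the Act prescribes, and its output is a starting point for review rather than a legal determination.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative record for one AI system in an internal inventory."""
    name: str
    decides_about_people: bool           # Q1: makes or influences decisions about individuals
    affects_essential_outcomes: bool     # Q2: employment, credit, insurance, healthcare, education, legal status
    operates_critical_services: bool     # Q3: critical infrastructure or essential public services
    embedded_in_regulated_product: bool  # Q4: medical device, vehicle, machinery, or similar

def screen(system: AISystem) -> str:
    """Apply the four screening questions in order; returns a provisional label, not legal advice."""
    # Q1: no decisions about people, no infrastructure role, no regulated product -> minimal risk
    if not (system.decides_about_people
            or system.operates_critical_services
            or system.embedded_in_regulated_product):
        return "likely minimal-risk"
    # Q2: decisions affecting employment, credit, insurance, healthcare, education, or legal status
    if system.decides_about_people and system.affects_essential_outcomes:
        return "likely high-risk"
    # Q3: critical infrastructure or essential public services
    if system.operates_critical_services:
        return "likely high-risk"
    # Q4: embedded in a regulated product -> check that product's own EU classification
    if system.embedded_in_regulated_product:
        return "check the product classification - may inherit high-risk status"
    return "likely limited-risk or minimal-risk - check transparency obligations"

# Example: a CV-screening tool used in recruitment
print(screen(AISystem("cv-screener", True, True, False, False)))  # likely high-risk
```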

What high-risk classification requires

High-risk classification triggers six categories of obligation. Here's what that looks like in practice.

Risk management. You must establish a process for identifying and mitigating risks throughout your AI system's life. This runs continuously, not as a one-time assessment. You identify risks before deployment, monitor for new risks during operation, and update mitigations as needed.

Data governance. You must document your training data: where it came from, how it was processed, what quality checks were applied, and how you detect and address bias. This requires knowing your data lineage in detail.

Technical documentation. You must maintain detailed records of how your system works, how it was developed, what its performance characteristics are, and what its limitations are. This documentation must be ready for regulatory review and retained for ten years.

Human oversight. You must design your system so humans can understand its outputs, question its decisions, and override them when necessary. The regulation assumes humans remain in control of consequential decisions.

Conformity assessment. Before deploying your system, you must verify it meets all requirements. Depending on the use case, this is either an internal assessment following prescribed procedures or a third-party assessment by a notified body.

Registration and marking. You must register your system in the EU database and affix CE marking. This creates public accountability and enables market surveillance.

More detail on risk classification is available in the EU AI Act Explorer: https://ai-act-service-desk.ec.europa.eu/en/ai-act-explorer

What to do next

Start with an inventory. List every AI system in your organisation, including those embedded in third-party software you use. Many organisations find AI in places they did not expect: HR platforms, customer service tools, fraud detection, and operational systems.

Classify each system against the categories above. Document your reasoning. For borderline cases, err toward assuming high-risk until you have clarity. The cost of underestimating is higher than the cost of overestimating.

For systems you classify as high-risk, assess your current state against the six obligation categories. Where are the gaps? How long will it take to close them? The August 2026 deadline is closer than it appears.

If you are uncertain about classification, get expert assessment. We work with healthcare and energy companies navigating exactly this question. The organisations starting now have options. The ones waiting have fewer.
