The EU AI Act entered into force in August 2024, with its obligations applying in stages over the following years. For Norwegian businesses the question is not whether the regulation applies, but how. Norway is an EEA member, and the AI Act is marked as EEA-relevant. In practice that means any Norwegian business offering AI systems in the EU, or using AI to process data about EU citizens, must comply with its requirements.
This article is not legal advice, but a practical overview of what the law requires, organized by the risk classification the AI Act uses.
Four risk categories
Unacceptable risk (prohibited). Some AI systems are simply illegal. This includes social scoring of individuals by public authorities, real-time biometric identification in public spaces (with narrow exceptions for law enforcement), and AI that exploits the vulnerabilities of specific groups. Most Norwegian businesses will not cross this line in normal operations.
High risk. This category captures most of what businesses should pay attention to: AI used in critical infrastructure, education (e.g. automated assessment of students), recruitment and HR, credit scoring, and law enforcement. If you are building an AI that makes decisions affecting individuals' access to work, education, credit, or public services, you are probably here.
Limited risk. Systems that interact with humans (chatbots), generate deepfakes, or classify emotional states. The requirement here is transparency: users must be informed that they are interacting with an AI, and AI-generated content must be disclosed as such.
Minimal risk. Everything else. Spam filters, recommendation systems for product catalogues, tools that help employees write faster. No formal obligations under the AI Act for these.
What is required if you are in "high risk"?
The requirements for high-risk systems are extensive, but they boil down to six obligations: risk analysis and management, dataset quality and non-discrimination, technical documentation, system behaviour logging, transparency to users, and human oversight. In practice, this means you must be able to produce documentation showing you have addressed each of these areas, and that your implementation meets the requirements.
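The logging and human-oversight obligations, for instance, imply that every automated decision can be traced after the fact and escalated to a person when warranted. A minimal sketch of what that can look like in code (the record fields, function names, and the 0.8 review threshold are our illustrative assumptions, not terms from the AI Act):

```python
import json
import time
from dataclasses import dataclass, asdict, field

# Hypothetical audit record for one automated decision.
@dataclass
class DecisionRecord:
    system_id: str          # which AI system produced the decision
    model_version: str      # exact model version, for reproducibility
    input_summary: str      # what was assessed (no raw personal data)
    decision: str           # the system's output
    confidence: float       # model confidence in [0, 1]
    timestamp: float = field(default_factory=time.time)
    needs_human_review: bool = False

def log_decision(record: DecisionRecord, threshold: float = 0.8) -> DecisionRecord:
    """Flag low-confidence decisions for human oversight and emit an audit line."""
    if record.confidence < threshold:
        record.needs_human_review = True
    # In production this line would go to an append-only audit store.
    print(json.dumps(asdict(record)))
    return record
```

A real implementation would write to an append-only store with retention rules rather than stdout; the point is that traceability and escalation are designed in from the start, not bolted on.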
This work is either done upstream during development (the right way) or downstream when a regulator or customer asks (the expensive, panicked way). We strongly recommend the former.
Practical steps for Norwegian mid-sized businesses
1. Run an AI inventory. Which AI systems does your business use or build? Many businesses use more than they realise: chatbots, CRM plugins, email sorting, product recommendations.
2. Classify each system. Most will land in minimal or limited risk. Those ending up in high risk need a dedicated compliance plan.
3. For high-risk systems: establish documentation and oversight from day one. Do not retrofit this later.
4. For limited risk: update user interfaces and terms so the transparency requirement is met (e.g. "You are talking to an AI assistant").
5. Establish an internal process for when new AI systems are introduced. Who signs off? Who keeps records? Who updates the documentation?
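The inventory and classification steps above can be sketched as a simple register. The four risk tiers come from the AI Act; the data structure, field names, and example entries are our illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum

# The four risk tiers of the AI Act.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# One entry in the AI inventory. Fields are illustrative.
@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier
    owner: str  # who signs off and keeps the records

def needs_compliance_plan(inventory: list[AISystem]) -> list[AISystem]:
    """High-risk systems each need a dedicated compliance plan."""
    return [s for s in inventory if s.tier is RiskTier.HIGH]

inventory = [
    AISystem("support-chatbot", "customer service", RiskTier.LIMITED, "CX lead"),
    AISystem("cv-screening", "recruitment", RiskTier.HIGH, "HR director"),
    AISystem("spam-filter", "email triage", RiskTier.MINIMAL, "IT"),
]
```

Even a register this simple answers the questions in step 5: each system has a named owner, a documented purpose, and a risk tier that determines what follow-up it needs.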
What Nordic AI does
All our deliveries, both Own and Lease, are built with AI Act compatibility baked in. Risk analysis, documentation, logging, and human oversight are not afterthoughts. They are part of the Nordic AI Implementation Model.
We also offer EU AI Act workshops for leadership groups and compliance leads. Get in touch if you want a tailored review for your team.