Cloud, model, and orchestration framework are picked to fit the workflow and the regulatory bar, not picked by default. Whichever combination ships, it is documented and audit-ready.
Most of the quality in an AI engagement is decided before the first line of code. The cloud, the model, and the orchestration framework all get matched to the workflow, not assumed from the last project. Where we put the effort:
Cloud
GCP, Azure, or AWS, depending on the client's existing tenant and regulatory bar. Application hosting, vector storage, and model inference can each route through a different provider when the workflow requires it. Where US CLOUD Act exposure has to be eliminated, we route through European data-sovereign cloud (Cleura, Nebul) instead.
Model
We will not deploy Claude where Gemini fits the task better, or ChatGPT where Claude does. The pick is made against the workflow's actual demands: reasoning depth, latency, cost per inference, multimodal needs, and the jurisdiction of the inference endpoint. Where the regulatory bar requires it, we deploy a local open-weights model in the client's own infrastructure.
Agentic orchestration
LangGraph or Pydantic AI for stateful multi-agent flows, with human-in-the-loop gates on consequential outputs. Selected against the failure mode the agent needs to survive (hallucination, prompt injection, runaway autonomy), not against framework popularity.
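As an illustration of what a human-in-the-loop gate on consequential outputs looks like, a framework-agnostic sketch (the names, threshold, and `approve_fn` hook are hypothetical, not LangGraph or Pydantic AI API):

```python
from dataclasses import dataclass


@dataclass
class AgentStep:
    """One proposed agent action awaiting execution."""
    action: str
    confidence: float   # model's self-reported confidence, 0..1 (illustrative)
    consequential: bool  # does this step touch money, data, or people?


@dataclass
class GateDecision:
    approved: bool
    reason: str


def human_gate(step: AgentStep, approve_fn) -> GateDecision:
    """Pause consequential or low-confidence steps for human review.

    `approve_fn` stands in for whatever surfaces the step to a reviewer
    (a Slack message, a ticket, a UI prompt) and returns True or False.
    """
    if step.consequential or step.confidence < 0.8:
        return GateDecision(approve_fn(step), "human reviewed")
    return GateDecision(True, "auto-approved: low-risk, high-confidence")
```

The point of the gate is that routine, low-risk steps flow through unattended, while anything that could cause real damage blocks until a person signs off.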
Workflow automation, OCR, machine vision, RAG
Different problems, different tools. Document understanding picks an OCR engine that matches the document family. Computer vision picks a model that matches the visual class. RAG picks a retrieval shape that matches how the user actually queries. We use the tools that fit, not the tools we used on the last project.
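As a sketch of what "a retrieval shape that matches how the user actually queries" can mean in practice (the heuristics, thresholds, and strategy names are illustrative, not a production router):

```python
def pick_retrieval_shape(query: str) -> str:
    """Route a user query to the retrieval strategy that fits its shape.

    Exact identifiers favour keyword lookup; short natural-language
    questions favour dense vector search; long analytical questions
    favour hybrid retrieval with reranking. Thresholds are illustrative.
    """
    tokens = query.split()
    # Identifiers such as invoice numbers or clause IDs tend to contain
    # digits or all-caps tokens; they match exactly or not at all.
    has_identifier = any(
        t.isupper() or any(c.isdigit() for c in t) for t in tokens
    )
    if has_identifier:
        return "keyword"
    if len(tokens) <= 8:
        return "vector"
    return "hybrid+rerank"
```

A real router would be tuned against the client's actual query log; the design point is that the retrieval strategy is a per-query decision, not a one-time stack choice.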
The careful analysis and selection that happens before the build is where most of the quality lives. The choices we make at the start determine what the audit looks like, what compliance you can prove, and what the operating cost ends up being.
European clients, European servers. Application hosting, vector storage, and model inference route through European regions by default. Specific providers depend on the client's existing infrastructure and the workflow: typical setups use Railway (Amsterdam) or the client's own Azure / GCP / AWS tenant for hosting, Neon Tech (Frankfurt) or the client's existing managed Postgres for vector storage, and Vertex AI on GCP europe-west1 / europe-west4 or Bedrock on AWS EU regions for inference.
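The EU-by-default routing can also be enforced mechanically at deploy time. A minimal sketch, assuming a simple allow-list check (the region subset and helper function are illustrative, not a real provider API):

```python
# Regions that satisfy the EU/EEA-by-default policy. Illustrative subset;
# the real list is maintained per engagement.
EU_REGIONS = {
    "europe-west1",   # GCP Belgium
    "europe-west4",   # GCP Netherlands
    "eu-central-1",   # AWS Frankfurt
    "eu-west-1",      # AWS Ireland
    "westeurope",     # Azure Netherlands
}


def assert_eu_region(service: str, region: str) -> str:
    """Fail config validation if a service is pinned outside the EU."""
    if region not in EU_REGIONS:
        raise ValueError(
            f"{service} is pinned to {region}, outside the EU allow-list"
        )
    return region


# Example: validate a workflow's routing before deploy.
routing = {
    "hosting": "europe-west1",
    "vector_store": "eu-central-1",
    "inference": "europe-west4",
}
for service, region in routing.items():
    assert_eu_region(service, region)
```

A check like this runs in CI, so a region change outside the EU fails the pipeline instead of silently shipping.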
This is a deliberate choice, not an accidental result. We minimize the personal data processed and keep data within the EU/EEA wherever possible.
For critical infrastructure that must remain on European soil, including environments where US CLOUD Act exposure has to be eliminated, we partner with Cleura and Nebul on European data-sovereign cloud. Both are EU-owned and EU-operated, designed to keep data under European jurisdiction end-to-end. This option is activated for engagements where data sovereignty is a contractual or regulatory requirement; it is not the default for every project.
These providers may process data in client projects. Which ones are active depends on the stack chosen for the workflow. Clients on Azure tenancy run a different set than clients on GCP, and inference may route through Vertex AI, Bedrock or a hosted Anthropic endpoint depending on the model and the regulatory bar.
| Provider | Usage | Default region | DPA status | Links |
|---|---|---|---|---|
| Anthropic | Claude API for client products, Claude Team for internal development and operations. Per Anthropic's ToS, data is not used to train the models in either case. | EU/US per setup | Integrated in Anthropic's ToS. For Own projects, the client contracts directly with Anthropic. | |
| OpenAI | Used only in the AI Pitch Analyzer (Whisper transcription). Anthropic is the primary choice elsewhere, but offers no voice model. | EU when possible | Signed Nordic AI DPA | |
| Railway | Hosting and PaaS for client projects. | Amsterdam (EU-West) by default | Signed Nordic AI DPA | |
| Neon Tech | Postgres and pgvector for vector databases and application data. | Frankfurt (AWS eu-central-1) | Regulated under Databricks DPA, ref Neon §3.4 | |
| Google Cloud Platform | Project- and client-dependent. Active only when chosen for the solution. | europe-west1 (Belgium) by default; other EU regions per project | Per client setup | |
| Microsoft (M365 / SharePoint / Azure) | Project- and client-dependent. Common when integrating with existing client setups. | Configurable, European regions as first choice | Per client setup | |
We hold signed copies of several DPAs, available on request via email.
The two models build on the same components, but ownership, operation, and DPA relationships differ fundamentally.
| Dimension | Own | Lease |
|---|---|---|
| Infrastructure ownership | Client | Nordic AI |
| Tenant model | Single-tenant in client environment | Per-vertical, isolated per client |
| DPAs with third parties | Client signs directly (Nordic AI guides) | Nordic AI is the primary contract, client is sub-processor |
| Upgrades | Client timing | Nordic AI rolls them out |
| Customization | Full | Within the vertical framework |
| Data location | Client choice | EU by default, can be tailored |
Our standard setup meets the bar for most projects. For especially sensitive personal or business data, we offer two escalation tiers that adapt the Own delivery.
Anthropic via the client's own cloud tenant
Anthropic is available as a managed service on AWS Bedrock, Google Vertex AI, and Azure. We route model calls through the client's own tenant, selected to match the existing tech stack, so processing and audit trails stay inside the client's perimeter.
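A minimal sketch of the routing decision (the endpoint mapping and region values are illustrative, not real SDK calls; model availability per EU region is verified per engagement):

```python
# Managed Claude endpoints per cloud tenant. Region values are examples of
# EU regions; the actual region is confirmed against the client's setup.
CLAUDE_ENDPOINTS = {
    "aws":   {"service": "bedrock-runtime", "region": "eu-central-1"},
    "gcp":   {"service": "vertex-ai",       "region": "europe-west1"},
    "azure": {"service": "azure-ai",        "region": "westeurope"},
}


def inference_endpoint(client_tenant: str) -> dict:
    """Pick the managed Claude endpoint inside the client's own tenant,
    so model calls, logs, and audit trails stay within their perimeter."""
    try:
        return CLAUDE_ENDPOINTS[client_tenant]
    except KeyError:
        raise ValueError(
            f"no managed Claude endpoint mapped for tenant {client_tenant!r}"
        )
```

The design choice is that the application never talks to a third-party API endpoint directly; it only ever sees the client's own cloud, which is what makes the audit trail verifiable from inside the client's perimeter.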
Local on-prem models
For exceptionally high requirements, or explicit on-prem requests, we deploy local open models in the client's infrastructure. This is a premium delivery on the Own model, priced as an extended Own project.
In Own projects, Nordic AI helps the client set up direct data processing agreements with relevant third parties. We deliver an overview of which providers will process personal data in the client's setup, point to the right DPA template, and coordinate with the client's legal resource.
The legal work itself (evaluation and signing) is done by the client's own counsel. We act as technical coordinator, not legal advisor.
We typically help set up DPAs with:
We are not lawyers and do not provide legal advice. The client's own legal resource evaluates and signs the agreements.
The SLA frameworks differ clearly between the two models. Concrete numbers are set by contract, per scope.
Own
The client's own infrastructure carries its own uptime guarantees; Railway, GCP, Microsoft, and others publish their own SLAs. On top of that, Nordic AI delivers a support SLA covering response time and incident handling, set by scope.
Lease
Nordic AI provides a single combined SLA covering platform uptime and support response time. Concrete numbers will be published when the lease model launches in Q3 2026.
Measures are tailored to the project's risk level and the client's requirements, but the fundamentals are consistent.
We answer technical and compliance questions in detail, including questions about signed DPAs, data location, and provider setup.
Send us an email