Coming soon…
Foundations for Keeping
AI Responsible

A Systems Approach to Accountable Decisions
Part I — When Responsibility Breaks
Chapter 1: Why Responsibility in AI Is a Systems Problem
Chapter 2: What Changes When Responsibility Is Enforced
Chapter 3: Why Responsible AI Needs to Be Designed
Part II — Designing for Responsibility
Chapter 4: Defining Responsibility: Context and Risk
Chapter 5: Binding Responsibility: Runtime Ownership
Chapter 6: Preserving Responsibility: Version Integrity
Chapter 7: Controlling Responsibility: Human-in-the-Loop
Chapter 8: Inspecting Responsibility: Decision Observability
Part III — System Behaviour
Chapter 9: End-to-End Responsible AI Systems
Before you look at obligations, first understand what role your system is actually playing.
These roles are not about ownership or job titles. They come from where you have control in the system.
If you can change how the system is built, what it outputs, or how it is used in a real workflow, you are defining its role. A system can shift roles when:
- its purpose changes
- its behaviour is modified
- its usage moves into a different workflow
1. Provider
You are acting as a Provider when you control how an AI system comes into existence and enters use.

☐ Do you build an AI system or a general-purpose model?
☐ Do you get an AI system built for you, even if a third party writes the code?
☐ Do you put an AI system into use under your own name, whether you sell it or use it internally?
☐ Do you take an existing system and significantly change how it works?
☐ Do you change what the system is meant to do, especially if that pushes it into a high-risk use?
☐ Do you embed AI into a product that you place on the market under your name?

If yes to any of these, you are operating as a Provider.

2. Deployer
You are acting as a Deployer when you decide how the system is used in a real workflow.

☐ Are you using an AI system within your organisation?
☐ Is that use part of a business process, service, or operational workflow?

If yes, you are operating as a Deployer.

3. Importer
You are acting as an Importer when you bring an external system into the EU market.

☐ Are you based in the EU?
☐ Are you placing an AI system on the EU market that was built outside the EU?
☐ Are you the first point of entry for that system into the EU?

If yes, you are operating as an Importer.

4. Distributor
You are acting as a Distributor when you move the system through the market without changing it.

☐ Are you part of the supply chain but not the original provider?
☐ Are you making the system available to others (resale, licensing, distribution)?
☐ Are you leaving the system as-is, without changing its behaviour or purpose?

If yes, you are operating as a Distributor.

5. Authorised Representative
You are acting as an Authorised Representative when you stand in for a provider inside the EU.

☐ Are you based in the EU?
☐ Have you been formally appointed by a non-EU provider?
☐ Do you act on their behalf for regulatory interactions?

If yes, you are operating as an Authorised Representative.

6. Scope Check
☐ Does the output of your system get used in the EU?

If yes, the Act can still apply, even if you are not based in the EU.
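The checklist above is essentially a decision procedure: answer a set of yes/no questions, and the combination of answers suggests which roles apply. A rough sketch of that logic is below. The flag names and groupings are assumptions that compress the Act's definitions into a few booleans; this is an illustration of how the questions combine, not legal advice or an official mapping.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    # Hypothetical flags mirroring the checklist questions above.
    builds_or_commissions_system: bool = False   # builds, or has built, an AI system
    places_on_market_under_own_name: bool = False
    substantially_modifies_system: bool = False  # changes how it works or its purpose
    uses_in_own_workflow: bool = False           # part of a business process or service
    based_in_eu: bool = False
    first_entry_into_eu_market: bool = False     # first point of entry for an external system
    resells_without_modification: bool = False   # supply chain, behaviour left as-is
    appointed_by_non_eu_provider: bool = False   # formal regulatory appointment

def determine_roles(p: SystemProfile) -> set[str]:
    """Return the set of roles suggested by the checklist answers.

    Roles are not exclusive: one organisation can hold several at once,
    and answers can change as purpose, behaviour, or usage shifts.
    """
    roles: set[str] = set()
    if (p.builds_or_commissions_system
            or p.places_on_market_under_own_name
            or p.substantially_modifies_system):
        roles.add("Provider")
    if p.uses_in_own_workflow:
        roles.add("Deployer")
    if p.based_in_eu and p.first_entry_into_eu_market:
        roles.add("Importer")
    if p.resells_without_modification and "Provider" not in roles:
        roles.add("Distributor")
    if p.based_in_eu and p.appointed_by_non_eu_provider:
        roles.add("Authorised Representative")
    return roles
```

Note that the function returns a set, not a single role: a company that builds a system and also uses it internally is both a Provider and a Deployer, which is exactly why the section says roles come from where you have control, not from job titles.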
Regulation (EU) 2024/1689 (2024). Article 25. EU Artificial Intelligence Act. https://artificialintelligenceact.eu/article/25/