Episode 11

Your 30-Day EU AI Act Compliance Plan (Before August 2026)

Intro

This episode is for nomad agencies and micro-SaaS founders who sell into or operate within the EU and need to understand their AI Act obligations without hiring a legal team. You'll get a concrete 30-day implementation plan that satisfies core transparency and logging requirements while positioning compliance as a competitive advantage.

In This Episode

Santi and Kira walk through the MVCP (Minimum Viable Compliance Plan) — a six-item, 30-day checklist specifically designed for AI deployers. They cover the difference between deployer and provider obligations, explain which articles actually apply to small teams, and show how to implement AI disclosure pages, evidence logs, incident playbooks, and vendor tracking without pausing growth. They also reveal how to price compliance overhead into retainers and turn the MVCP into a revenue-generating service for other nomad businesses.

Key Takeaways

  • Most nomad agencies are 'deployers,' not 'providers,' under the AI Act, which means bounded obligations focused on transparency, logging, and human oversight rather than the far heavier provider requirements
  • The 30-day MVCP covers six items: AI disclosure page, model/data inventory, evidence log, incident playbook, DPIA/FRIA triggers, and DPA tracking — all implementable without legal counsel
  • You can price compliance overhead at $75/month per client and sell the MVCP as a $2,500 fixed-scope sprint, turning regulatory requirements into recurring revenue

Companion Resources

  • European Commission press release (IP_24_4123)

    ec.europa.eu

    • The majority of rules of the EU AI Act start applying on August 2, 2026 (general date of application). ([ec.europa.eu](https://ec.europa.eu/commission/presscorner/api/files/document/print/en/ip_24_4123/IP_24_4123_EN.pdf))
  • EU AI Act Service Desk timeline; European Commission “Navigating the AI Act” FAQ

    ai-act-service-desk.ec.europa.eu

    • The AI Act applies progressively: general provisions and prohibitions apply since February 2, 2025; GPAI provider obligations since August 2, 2025; most remaining provisions on August 2, 2026; with full roll‑out foreseen by August 2, 2027. ([ai-act-service-desk.ec.europa.eu](https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act))
  • European Commission — Guidelines on obligations for GPAI providers (FAQ)

    digital-strategy.ec.europa.eu

    • GPAI provider obligations enter into application on August 2, 2025; providers must comply and notify the AI Office about models with systemic risk. ([digital-strategy.ec.europa.eu](https://digital-strategy.ec.europa.eu/en/faqs/guidelines-obligations-general-purpose-ai-providers))
  • White & Case — EU AI Act enforcement timeline (PDF)

    whitecase.com

    • High‑risk AI enforcement continues staging into 2027 and, for certain transitional situations involving public authorities, into 2030. ([whitecase.com](https://www.whitecase.com/sites/default/files/2024-07/wc-eu-ai-act-enforcement-timeline.pdf))
  • ArtificialIntelligenceAct.eu (consolidated OJ text of Regulation (EU) 2024/1689)

    artificialintelligenceact.eu

    • Deployers of high‑risk AI systems must take appropriate technical and organizational measures to use systems per instructions, assign qualified human oversight, monitor operation, and keep automatically generated logs under their control for at least six months. ([artificialintelligenceact.eu](https://artificialintelligenceact.eu/wp-content/uploads/2024/01/AI-Act-FullText.pdf))
  • EU AI Act Service Desk — Article 26 (deployers)

    ai-act-service-desk.ec.europa.eu

    • Article 26(9) links deployers’ use of information from providers to the GDPR DPIA obligation (Article 35) where applicable. ([ai-act-service-desk.ec.europa.eu](https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-26))
  • EU AI Act Service Desk — Article 27; European Commission “Navigating the AI Act” FAQ

    ai-act-service-desk.ec.europa.eu

    • Certain deployers must conduct a Fundamental Rights Impact Assessment (FRIA) before deploying specified high‑risk AI systems (e.g., public bodies, private entities providing public services, and specific Annex III use cases such as creditworthiness/insurance). ([ai-act-service-desk.ec.europa.eu](https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-27))
  • EU AI Act Service Desk — Article 50

    ai-act-service-desk.ec.europa.eu

    • Transparency obligations (Article 50) require informing natural persons that they are interacting with an AI system and impose disclosure duties around AI‑generated/manipulated content; specific duties apply to deployers of emotion recognition/biometric categorisation systems. ([ai-act-service-desk.ec.europa.eu](https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50))
  • European Parliament Research Service — AI Act timeline (AT A GLANCE)

    europarl.europa.eu

    • The EP’s ‘At a Glance’ brief confirms the general application date of August 2, 2026 and indicates the Act should be fully effective by 2027. ([europarl.europa.eu](https://www.europarl.europa.eu/RegData/etudes/ATAG/2025/772906/EPRS_ATA%282025%29772906_EN.pdf))
  • LetsLand AI Disclosure statement

    letsland.io

    • Concrete example of a small SaaS publicly disclosing AI use, models, and fallbacks; demonstrates a plain‑language Article 50‑aligned disclosure page that micro‑SaaS founders can emulate.
  • MUFG Investor Services — AI Transparency and Responsible Use Policy

    mufg-investorservices.com

    • Illustrates enterprise‑grade transparency language and governance sections (scope, monitoring, review cadence) that small agencies can scale down for a disclosure page/SOW annex.
  • Cendyn — AI‑Use Policy (July 14, 2025)

    cendyn.com

    • Another named, public AI‑use policy with clear, plain‑English statements; useful as a structure reference for MVCP disclosure language.

Santi: August second, twenty twenty-six. Hundred days from now. That's when the bulk of the EU AI Act hits — and the European Commission has confirmed fines up to seven percent of global annual turnover for violations.

Kira: Seven percent.

Santi: Seven percent of revenue. Not profit — revenue. For a nomad agency doing, say, three hundred K a year, that's twenty-one thousand dollars. Gone. For a compliance violation you didn't even know applied to you.

Kira: Okay, but most of us aren't building high-risk AI systems. We're calling APIs. We're deployers, not providers.

Santi: Right. And that distinction matters — a lot. But here's the part that tripped me up when I actually read the regulation. Deployers still have obligations. Transparency disclosures, evidence logs, human oversight documentation. Article twenty-six says you keep logs for a minimum of six months. Article fifty says if someone interacts with your AI system, you tell them.

Kira: And most of us are doing... none of that.

Santi: None of it. I checked my own stack last week. Two SaaS products, one agency — zero disclosure pages, zero evidence logs, zero incident playbooks attached to SOWs.

Kira: The cobbler's children.

Santi: The cobbler's children have no compliance plan. So we built one. Thirty days, six items, and it doesn't require a lawyer or a pause on growth to ship it.

Santi: Right now, every EU-facing proposal you send without an AI disclosure attached is a proposal your competitor can beat by simply having one. August second isn't just a compliance deadline — it's the date when "we take AI governance seriously" becomes a differentiator that wins contracts.

Kira: So today we're shipping the MVCP — Minimum Viable Compliance Plan. Six items, thirty days, no legal degree. And we'll show you how to turn the overhead into a line item that actually makes you money.

Kira: Before we get into the checklist — quick disclaimer. We are not lawyers. This is operational guidance based on our reading of the regulation and published EU Commission resources. If you're unsure whether your use case is high-risk or whether you've drifted into provider territory, talk to qualified counsel. We'll link the official EU AI Act Service Desk in the show notes. It's genuinely useful.

Santi: Good. Now — the first thing you need to nail down is your role. The AI Act splits the world into providers and deployers. Providers build and distribute AI models. Deployers use them. If you're calling OpenAI's API, fine-tuning a model through Anthropic's console, running Claude through your SaaS — you're a deployer.

Kira: And that matters because the obligations are completely different.

Santi: Completely different. Provider obligations — the GPAI stuff — those actually kicked in August twenty twenty-five already. But they're aimed at companies like OpenAI, Anthropic, Google. The companies building the models. If you're consuming APIs, those rules don't apply to you.

Kira: Unless you cross the line.

Santi: Unless you cross the line. A and O Shearman — big global law firm — published an analysis on this. If you significantly modify or rebrand a model and offer it as your own, you can shift from deployer to provider. And suddenly you're in a different regulatory universe.

Kira: So the white-label crowd needs to be careful.

Santi: Very careful. But for most of us — agency running Make scenarios with GPT calls, micro-SaaS wrapping Claude for a specific vertical — you're a deployer. And deployer obligations are bounded. They're doable.

Kira: Okay, so what actually applies to us? Walk me through it.

Santi: The big ones come from two articles. Article fifty — transparency. If someone interacts with your AI system, you have to tell them it's AI. That applies regardless of risk level. And Article twenty-six — if you're deploying high-risk systems, you need human oversight, you need to monitor operations, and you need to keep automatically generated logs for at least six months. The same article also ties into GDPR: paragraph nine says deployers use the information providers give them to complete data protection impact assessments where applicable.

Kira: Okay but how many nomad agencies are actually deploying high-risk systems? Most of us are doing content generation, lead qualification, maybe some data analysis.

Santi: Probably not many. And that's actually the counterargument we should address head-on. Some people are going to hear this and think — my chatbot isn't high-risk, my content tool isn't high-risk, why should I spend a month on compliance?

Kira: It's a fair question.

Santi: It is. And the honest answer is — if you're only running low-risk systems, your mandatory obligations on August second are narrower. Mostly transparency under Article fifty. Label your chatbots as AI. Disclose when content is AI-generated. That kind of thing.

Kira: So why do the full MVCP?

Santi: Two reasons. First — Article fifty applies to everyone, not just high-risk. So you need the disclosure page and the labeling regardless. Second — you don't always know in advance whether a use case will be classified as high-risk. What if a client asks you to build something that touches creditworthiness? Or hiring? Or insurance underwriting? If you already have the evidence log, the incident playbook, the vendor inventory — you're ready. If you don't, you're scrambling while the clock is ticking.

Kira: It's the Antifragile argument. You're not building compliance because you have to right now. You're building it because the cost of having it is tiny and the cost of not having it when you need it is enormous.

Santi: Exactly. And the whole MVCP is designed to be small. We're talking thirty days, maybe two to three hours a week.

Kira: Alright, let me walk through how I'd actually implement this. Because I did a version of it for my agency last month after Santi wouldn't stop sending me Article twenty-six screenshots in our group chat.

Santi: You're welcome.

Kira: Days one through four — you publish your AI use disclosure. This is Article fifty. A plain-language page on your site — slash AI disclosure, whatever fits your nav. It covers what AI you use, which models, which providers, what data categories flow through them, what the human fallback is, and how users can reach a real person.

Santi: LetsLand — small landing page SaaS — already has one of these live. They list their model providers, their fallback procedures, user rights. It's maybe five hundred words. Not a legal document — a plain-English explanation.

Kira: And you link it in your footer, in your onboarding flow, and — this is the part people miss — in your SOWs. Every new statement of work should reference it.

Santi: You put it in the SOW?

Kira: In the SOW. Because when your client's legal team does their vendor review — and they will — the disclosure is already there. You're not scrambling to draft something at two AM before a contract renewal.

Santi: Smart. Okay, what's next?

Kira: Days five through eight — model and data inventory. You open a spreadsheet or a Notion database and you list every AI use case in your business. Model name, provider, version, API endpoint, what data goes in, whether any of it is personal data, where it's stored, who owns it. And you link the vendor's DPA — data processing agreement — to each entry.

Santi: This is the one I actually got excited about. Because once you have this inventory, you can pipe it into your ops dashboard. I set mine up in Notion with a rollup that flags any use case where personal data is marked yes but no DPA is linked. Took me forty-five minutes to build. Now it yells at me automatically.

Kira: Of course you automated the compliance tracker.

Santi: I automated the compliance tracker. And the DPA collection — that's just emailing your vendors. OpenAI, Anthropic, whoever. They all have DPAs available. You download them, you link them, you note the subprocessors. Tedious but not hard.
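Kira's spreadsheet and Santi's Notion rollup boil down to one query: personal data marked yes, no DPA linked. Here is a minimal sketch of that check in Python. The field names and example values are purely illustrative placeholders, not an official schema or real vendor links:

```python
# Minimal model/data inventory with an automatic "personal data but no DPA"
# flag check. Field names and values are illustrative — adapt them to your
# own spreadsheet, Notion database, or config file.

inventory = [
    {"use_case": "Lead qualification bot", "model": "gpt-4o", "provider": "OpenAI",
     "personal_data": True, "dpa_url": "https://vendor.example/openai-dpa.pdf"},
    {"use_case": "Meeting-notes summarizer", "model": "claude-sonnet", "provider": "Anthropic",
     "personal_data": True, "dpa_url": None},   # should be flagged: personal data, no DPA
    {"use_case": "Blog draft generator", "model": "gpt-4o", "provider": "OpenAI",
     "personal_data": False, "dpa_url": None},  # fine: no personal data involved
]

def flag_missing_dpas(entries):
    """Return use cases that process personal data but have no DPA on file."""
    return [e["use_case"] for e in entries
            if e["personal_data"] and not e.get("dpa_url")]

print(flag_missing_dpas(inventory))  # ['Meeting-notes summarizer']
```

In Notion the same check is a rollup or filtered view; in a spreadsheet, a conditional-format rule across the two columns does the job.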

Kira: Days eight through thirteen — this is where it gets real. You set up logging and you start your evidence log. Article twenty-six says deployers of high-risk systems keep automatically generated logs for at least six months. But even if you're not high-risk yet, having request and response logs with timestamps, model versions, and decision overrides is just good ops hygiene.

Santi: I'll be specific about what to log. Every API call — prompt, response, model version, parameters, a unique request ID. Store it centrally with access controls. Don't auto-delete anything before six months. And if a human overrides an AI decision, log that separately with a reason.
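As a sketch of that logging discipline, here is a provider-agnostic wrapper. The names `call_model` and `logged_call` are hypothetical, standing in for whatever API client you actually use; nothing here assumes a specific SDK:

```python
import json
import time
import uuid

def logged_call(call_model, prompt, model, log_path="ai_evidence.log", **params):
    """Wrap any model call so every request leaves an auditable record:
    prompt, response, model version, parameters, request ID, timestamp.

    `call_model` is whatever function actually hits your provider's API;
    this sketch makes no assumptions about a particular SDK.
    """
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model,
        "params": params,
        "prompt": prompt,
    }
    record["response"] = call_model(prompt=prompt, model=model, **params)
    # Append-only JSON lines: one record per call, easy to grep, and the
    # six-month retention rule becomes a file-rotation policy, not surgery.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["response"]
```

Human overrides would go to a separate log with the same shape plus a reason field, per Santi's point above.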

Kira: The evidence log itself is simpler — it's a weekly journal. Key runs, anomalies, anything weird your models did, any provider incidents, any overrides. You're building a paper trail that says "we were paying attention."

Santi: And there's no official EU guidance yet on the exact format for these logs. That's a gap in the regulation. So we're working from the Article twenty-six text and practitioner consensus. The point is — something is infinitely better than nothing.

Kira: Days fourteen through eighteen — the incident and risk playbook. You write down your top failure scenarios. Hallucination that reaches a client. Bias in an output. PII leak. Provider outage. Surprise model deprecation. For each one — what do you do in the first hour? Who do you notify? What's the fallback? And you attach this playbook to every SOW as an annex.
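One way to keep that playbook versioned and attachable to SOWs is to write it as structured data instead of prose. A sketch follows; the scenario names come from the episode, while the first-hour steps and contact roles are hypothetical examples to replace with your own:

```python
# Incident playbook as data: one entry per failure scenario, each with a
# first-hour checklist, a fallback, and who to notify. Steps shown here
# are illustrative placeholders, not recommended procedure.

PLAYBOOK = {
    "hallucination_reaches_client": {
        "first_hour": [
            "Pull the logged request/response pair for the incident",
            "Notify the account owner",
            "Send the correction using the prepared comms template",
        ],
        "fallback": "Human review of all AI outputs for that client for one week",
        "notify": ["account_owner", "client_contact"],
    },
    "provider_outage": {
        "first_hour": [
            "Switch to the fallback provider or queue non-urgent requests",
            "Post a status note to affected clients",
        ],
        "fallback": "Manual handling of time-sensitive deliverables",
        "notify": ["ops_lead"],
    },
}

def first_hour_checklist(scenario):
    """Return the first-hour steps for a named failure scenario."""
    return PLAYBOOK[scenario]["first_hour"]

print(len(first_hour_checklist("provider_outage")))  # 2
```

Rendering this dictionary to a PDF annex keeps the SOW attachment and the operational runbook from drifting apart.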

Santi: I'll be honest — this was the one I resisted. Writing down "what if my AI hallucinates to a client" felt like inviting bad luck.

Kira: It's not a jinx, Santi. It's a runbook.

Santi: I know, I know. And once I actually wrote it, it took maybe two hours. Five scenarios, each with a first-hour checklist and a comms template. And now when I onboard a new client, the playbook is already in the SOW. They see it and they think — this person takes this seriously.

Kira: Which is the whole point. Compliance isn't just risk mitigation — it's a trust signal.

Santi: Days sixteen through twenty — flag your DPIA and FRIA triggers. DPIA is the data protection impact assessment from GDPR. The AI Act, through Article twenty-six, explicitly links deployer obligations to GDPR's DPIA requirement. So if you're processing personal data at scale, or monitoring people, or doing anything that GDPR Article thirty-five would flag — you mark it in your inventory.

Kira: FRIA — the Fundamental Rights Impact Assessment — is narrower. Article twenty-seven. It applies to specific deployers in specific contexts. Public bodies, private entities providing public services, certain Annex three use cases like credit scoring or insurance. Most nomad agencies won't trigger it. But you should know it exists so you can flag it if a client engagement drifts into that territory.

Santi: And then days twenty through thirty — you train your team, run a dry-run incident, and lock your review cadence. Thirty-minute walkthrough with anyone who touches AI in your business. Simulate a failure. Practice the playbook. And set a quarterly calendar reminder to review everything.

Kira: Now here's where this stops being a cost center and starts being revenue.

Santi: Right — all of this needs maintenance. Quarterly reviews, log retention, disclosure updates when you swap a model. That's real work you should charge for. I added a line item to my retainers — "AI compliance operations" — seventy-five dollars a month per client. Covers everything. On twenty clients, that's fifteen hundred a month in MRR for maybe three hours of actual work per quarter.
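Worked out, those retainer numbers look like this (twenty clients at seventy-five dollars per client per month, roughly three hours of actual work per quarter):

```python
clients = 20
fee_per_client = 75       # USD/month, the "AI compliance operations" line item

monthly_mrr = clients * fee_per_client
quarterly_revenue = monthly_mrr * 3
hours_per_quarter = 3     # rough estimate of actual maintenance work

print(monthly_mrr)                             # 1500
print(quarterly_revenue / hours_per_quarter)   # 1500.0 USD per hour of work
```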

Kira: And you can sell the MVCP itself as a fixed-scope sprint. Two weeks, twenty-five hundred dollars. You deliver the disclosure page, the inventory, the evidence log template, the playbook, and a thirty-day implementation schedule. You're not practicing law — you're doing operational setup.

Santi: You're selling the thing we just described but doing it for someone else.

Kira: Exactly. With a clear disclaimer — operational guidance, not legal advice. The EU Commission's AI Act Service Desk is free and genuinely helpful for the classification questions you shouldn't be answering yourself.

Santi: Quick section on what you can safely skip. The GPAI provider obligations — those are for companies that build and distribute general-purpose AI models. They kicked in August twenty twenty-five. If you're not training and offering your own foundation model, these don't apply to you.

Kira: But watch the line. If you fine-tune a model heavily, rebrand it, and offer it as your own product without attribution to the original provider — you might have crossed into provider territory.

Santi: The White and Case enforcement timeline also shows that high-risk system rules continue staging into twenty twenty-seven, and some public-authority grace periods extend to twenty thirty. So this isn't a one-and-done. You ship the MVCP now, you review quarterly, and you adjust as the regulation matures.

Kira: Which is exactly why the quarterly review cadence matters. You're not building a fortress. You're building a living document that grows with your business and the regulation.

Santi: So — hundred days. That's the number we started with. Seven percent fines, six checklist items, thirty days to ship. And the thing that surprised me most about doing this? It wasn't hard. It was just... undone. Sitting there waiting for someone to actually open a Notion doc and start typing.

Kira: That's the pattern, right? The regulation sounds massive until you scope it to what actually applies to you. And for deployers — for the people listening to this show — it's bounded. Disclosure, inventory, logs, playbook, DPIA flags, DPA tracking. That's the whole EU AI Act 2026 checklist for a small team. And if you want to skip the drafting from scratch — the thirty-day MVCP Starter Kit is on the Resources page. Same templates we're running ourselves.

Santi: One thing this week. Go to your site — do you have an AI disclosure page? If the answer is no, that's day one. Five hundred words. What AI you use, which providers, how someone reaches a human. Publish it. Link it in your footer. You're already ahead of ninety percent of the market.

Kira: And not legal advice — we said it at the top, we'll say it again. The EU AI Act Service Desk is free, run by the Commission, and it's the best starting point if you need to confirm whether your use case triggers high-risk classification.

Santi: See you Wednesday.

Kira: See you Wednesday.

EU AI Act, compliance, digital nomads, AI regulation, business operations, legal requirements, risk management, documentation, revenue optimization