Checklist

30‑Day MVCP (Deployer Edition): EU AI Act Starter Kit

A one‑page, deployer‑only 30‑day checklist to ship an EU AI Act minimum‑viable compliance plan (MVCP) without stalling growth: publish a plain‑language AI‑Use Disclosure, build your model/data/vendor inventory, start an evidence log with ≥6‑month retention, and attach a lightweight risk playbook to every SOW.

A minimum‑viable, 30‑day plan for tiny agencies and micro‑SaaS teams that deploy third‑party AI models. This checklist maps each task to the EU AI Act’s deployer duties (Articles 26, 27 and 50) and the related GDPR hooks, and assumes you’re a deployer using model APIs, not a provider of your own GPAI model. As of this writing (April 17, 2026), the Act’s general date of application is August 2, 2026.

Included templates you’ll duplicate on Day 1:

  • Plain‑language AI‑Use Disclosure (Article 50‑aligned)
  • Evidence Log & Incident Playbook (Article 26(6)‑aligned; ≥6‑month log retention)
  • DPA/Subprocessor & Model/Data Inventory tracker (Article 26(9)/Article 27 triggers)

Disclaimer: This is practical ops guidance, not legal advice. Confirm classification and edge cases with qualified counsel, especially if your use case may be high‑risk or drifting toward provider obligations.

  1. Day 1 — Confirm role and scope

    Write down that you’re a deployer (consuming third‑party models/APIs) and not a provider; note that provider‑only GPAI duties (effective Aug 2, 2025) don’t apply unless you substantially modify/rebrand or offer your own model.

  2. Day 1 — Appoint an owner and open the kit

    Name a single MVCP owner; create a shared folder; duplicate the three templates; and set a retention rule (plus a calendar reminder) so logs under your control are kept for ≥6 months (Art. 26(6)).

  3. Days 2–4 — Publish your AI‑Use Disclosure

    Fill the disclosure template (use cases, models/providers/versions, human‑in‑loop, fallbacks, data categories, user rights, contact, last‑updated) and publish it at /legal/ai‑use (or /ai‑disclosure); link it in your footer and relevant UI (Art. 50). A minimal field sketch appears after this checklist.

  4. Days 3–5 — Add transparency in product and comms

    Label chatbots as AI, flag AI‑generated/manipulated media, and add a short notice in onboarding, email footers, and SOWs explaining where AI is used and how to reach a human (Art. 50).

  5. Days 5–7 — Build the Model/Data/Vendor Inventory

    Use the tracker to record each use case: model name/provider/version, endpoint, data categories, personal/special‑category data Y/N, geography/transfer notes, owner, and review date (Art. 26(9)/Art. 27). A sketch of one tracker row appears after this checklist.

  6. Days 6–8 — Capture DPAs and subprocessor chains

    Collect vendor DPAs and subprocessor lists; link them to each entry in the tracker; add SCCs/transfer safeguards where relevant; set renewal reminders (GDPR Art. 28; AI Act Art. 26(9) linkage).

  7. Days 8–10 — Assign human oversight and fallback

    Name qualified reviewers per use case, define approval thresholds and a kill‑switch/manual fallback, and note them in the playbook and SOW annex (Art. 26(2)).

  8. Days 10–13 — Configure basic logging/traceability

    Enable request/response logging with prompt/output IDs, model/version, parameters, dataset/version, and decision overrides; store centrally with access controls; set auto‑deletion only after ≥6 months (Art. 26(6)). A sketch of one log record appears after this checklist.

  9. Days 12–14 — Start the Evidence Log

    Begin weekly entries for key runs, anomalies, provider incidents, overrides, and post‑mortems; attach screenshots/links and tag the use case owner (Art. 26(6)). An example entry appears after this checklist.

  10. Days 14–18 — Draft the Incident/Risk Playbook

    List top scenarios (hallucination, bias, PII leak, outage, surprise model change) with first‑hour steps, comms template, fallback path, and 5‑day review; attach to every SOW (Art. 26 + Art. 50 where disclosures are triggered). A playbook‑as‑data sketch appears after this checklist.

  11. Days 16–20 — Flag DPIA/FRIA triggers

    Use the tracker’s flags to mark use cases needing a GDPR DPIA (e.g., large‑scale personal data, monitoring) and FRIA where specified (public‑service/Annex III contexts); add links to DPIA/FRIA stubs (Art. 26(9)/Art. 27; GDPR Art. 35). A small trigger‑check sketch appears after this checklist.

  12. Days 20–22 — Train the team (30 minutes)

    Walk through the disclosure, how to label AI interactions, how to log evidence, when to trigger the incident playbook, and who holds human‑oversight responsibilities.

  13. Days 23–26 — Run a dry‑run incident

    Simulate a realistic failure (e.g., harmful output or PII leak), practice pause→notify client→fallback, and log the exercise; tune thresholds/SLAs based on what broke.

  14. Days 27–30 — Lock cadence, publish changelog, price ops

    Add a quarterly review to your calendar, publish a ‘Last updated’ + changelog on the disclosure page, add a small ‘compliance ops’ line in retainers, and re‑check you haven’t drifted into provider territory; if you have, pause and reassess duties.
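
Illustrative sketches (optional)

The snippets below are minimal, hypothetical sketches of the data‑shaped steps above; every field name, path, endpoint, and value is an assumption for illustration, not wording required by the Act or by the templates. First, the Days 2–4 AI‑Use Disclosure fields:

```python
# Hypothetical shape for the Days 2-4 AI-Use Disclosure; every field name and
# value here is an illustrative assumption, not required wording.
import json

ai_use_disclosure = {
    "last_updated": "2026-04-17",
    "contact": "ai-questions@example.com",  # placeholder address
    "use_cases": [
        {
            "name": "Support-ticket drafting",
            "model": {"provider": "ExampleAI", "name": "example-model", "version": "2026-01"},
            "human_in_loop": "An agent reviews every draft before it is sent.",
            "fallback": "Manual replies if the model is unavailable or low-confidence.",
            "data_categories": ["ticket text", "customer name"],
        }
    ],
    "user_rights": "How to reach a human, object, or request correction/deletion.",
}

print(json.dumps(ai_use_disclosure, indent=2))  # review before publishing at /legal/ai-use
```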
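
Next, one row of the Days 5–7 Model/Data/Vendor Inventory, under the same assumptions (column names are illustrative):

```python
# Hypothetical sketch of one row in the Days 5-7 Model/Data/Vendor Inventory;
# column names mirror the fields listed in that step, none are mandated wording.
from dataclasses import dataclass

@dataclass
class InventoryRow:
    use_case: str
    model_name: str
    provider: str
    model_version: str
    endpoint: str
    data_categories: list
    personal_data: bool
    special_category_data: bool
    geography_transfer_notes: str
    owner: str
    review_date: str             # ISO date, e.g. "2026-07-01"
    dpa_link: str = ""           # filled in during Days 6-8
    dpia_flag: bool = False      # set during Days 16-20 if a GDPR DPIA is triggered
    fria_flag: bool = False      # set during Days 16-20 for Art. 27 contexts

example = InventoryRow(
    use_case="Lead-scoring assistant",
    model_name="example-model",
    provider="ExampleAI",
    model_version="2026-01",
    endpoint="https://api.example.com/v1/chat",
    data_categories=["company name", "contact email"],
    personal_data=True,
    special_category_data=False,
    geography_transfer_notes="EU-region endpoint; SCCs on file",
    owner="ops@agency.example",
    review_date="2026-07-01",
)
print(example)
```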
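
For Days 10–13, a sketch of one structured log record with the ≥6‑month retention window attached; the keys mirror the fields listed in that step and are not prescribed anywhere:

```python
# Hypothetical structured log record for Days 10-13; keys follow the fields
# named in that step, and the retention window follows the >=6-month duty.
import json
import uuid
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # keep for at least six months before auto-deletion

def log_record(use_case, model, version, params, prompt_id, output_id,
               dataset_version=None, override_by=None):
    now = datetime.now(timezone.utc)
    return {
        "timestamp": now.isoformat(),
        "record_id": str(uuid.uuid4()),
        "use_case": use_case,
        "model": model,
        "model_version": version,
        "parameters": params,          # e.g. temperature, max tokens
        "prompt_id": prompt_id,
        "output_id": output_id,
        "dataset_version": dataset_version,
        "override_by": override_by,    # reviewer who overrode the output, if any
        "delete_after": (now + RETENTION).date().isoformat(),
    }

print(json.dumps(log_record("support-drafting", "example-model", "2026-01",
                            {"temperature": 0.2}, "p-123", "o-456"), indent=2))
```

Wherever these records land (database, log service, or plain files), keep them in one central place with access controls, as the step says.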
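
For Days 12–14, an example Evidence Log entry appended to a simple JSONL file (the entry types are assumptions drawn from that step):

```python
# Hypothetical weekly Evidence Log entry for Days 12-14, appended to a JSONL
# file; the entry types and fields are assumptions drawn from that step.
import json

entry = {
    "week_of": "2026-04-13",
    "use_case": "support-drafting",
    "owner": "ops@agency.example",
    "type": "provider_incident",   # or: key_run, anomaly, override, post_mortem
    "summary": "Provider reported elevated error rates; manual fallback used for 2 hours.",
    "links": ["https://status.example.com/incidents/123"],
    "screenshots": ["evidence/2026-04-15-status-page.png"],
}

with open("evidence-log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(entry) + "\n")
```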
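
For Days 14–18, the Incident/Risk Playbook expressed as data so it can sit in the repo next to the SOW annex; scenario names, steps, and file paths are placeholders to adapt:

```python
# Hypothetical Days 14-18 playbook expressed as data; scenario names, steps,
# and file paths are placeholders, not prescribed content.
playbook = {
    "hallucination": {
        "first_hour": ["pause the affected use case", "notify the client contact",
                       "switch to the manual fallback"],
        "comms_template": "templates/incident-email.md",
        "fallback": "human-drafted output",
        "review_within_days": 5,
    },
    "pii_leak": {
        "first_hour": ["contain/revoke the exposed output", "notify the client and your DPO",
                       "record the event in the Evidence Log"],
        "comms_template": "templates/incident-email.md",
        "fallback": "disable the integration",
        "review_within_days": 5,
    },
    # remaining scenarios from the step: bias, provider outage, surprise model change
}

for scenario, plan in playbook.items():
    print(scenario, "->", "; ".join(plan["first_hour"]))
```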
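
Finally, for Days 16–20, a tiny helper that surfaces inventory rows whose flags suggest a DPIA or FRIA stub; the trigger logic is deliberately simplified for illustration and is not a legal test:

```python
# Hypothetical helper for Days 16-20: flag inventory rows that may need a
# GDPR DPIA or an Art. 27 FRIA stub. Simplified triggers; confirm with counsel.
def needs_assessment(row):
    actions = []
    if row.get("personal_data") and (row.get("large_scale") or row.get("monitoring")):
        actions.append("DPIA (GDPR Art. 35)")
    if row.get("public_service_context") or row.get("annex_iii_context"):
        actions.append("FRIA (AI Act Art. 27)")
    return actions

rows = [
    {"use_case": "lead-scoring", "personal_data": True, "monitoring": True},
    {"use_case": "benefits triage", "personal_data": True, "annex_iii_context": True},
]
for row in rows:
    print(row["use_case"], "->", needs_assessment(row) or ["no assessment flagged"])
```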