The 2026 AI Defensibility Scorecard + 14‑Day Moat Sprint (Notion)

A practical Notion guide for nomad founders: score your AI product’s defensibility (0–100), run a focused 14‑day moat sprint, embed into HubSpot/Slack, capture small data that compounds, and ship an enterprise‑ready SLA with HITL and failure‑handling.

Build a real moat around your AI wrapper or productized service without a big data team. This guide packages a 0–100 Defensibility Scorecard, a 14‑day Moat Sprint, two plug‑and‑play integration playbooks (HubSpot + Slack), micro‑SOPs for small‑data capture loops, and a one‑page SLA snippet pack. Use it to grade where you are, choose the next two moat levers, and ship them in two weeks.

What’s inside and how to use it

  • Duplicate the Notion workspace.
  • In the Scorecard database, add one row per product/plan you offer.
  • Score each moat 0–5 using the rubrics below; Notion auto‑calculates a 0–100 score.
  • Pick two levers with the best score-to-effort upside and commit to the 14‑day sprint.
  • Use the HubSpot/Slack playbooks to lock your workflow into systems your customers already live in.
  • Ship the SLA page before you pitch any enterprise buyer.

Pro tip: speed is a multiplier, not the moat. You’ll see it as a small adjustment on top of your core score.

0–100 Defensibility Scorecard (weights, rubric, and formula)

Score each dimension 0–5, then apply the weights. Speed is a small multiplier (0.90–1.10) that rewards tight build/ship cycles; the resulting AdjustedScore is capped at 100.

Weights (sum to 100):

  • Distribution & Channel Control — 25
  • Workflow Lock‑In (Deep Integrations + SOPs) — 20
  • Proprietary Small‑Data Loops — 20
  • Switching‑Cost Mechanics — 20
  • Outcomes Guarantees & SLAs — 15

Notion properties to add per row:

  • Dist (0–5), Workflow (0–5), Small Data (0–5), Switching (0–5), SLA (0–5), Speed Multiplier (0.90–1.10)

Notion formula for AdjustedScore:
```
min(100, round(((prop("Dist") / 5) * 25 + (prop("Workflow") / 5) * 20 + (prop("Small Data") / 5) * 20 + (prop("Switching") / 5) * 20 + (prop("SLA") / 5) * 15) * prop("Speed Multiplier")))
```

Rubrics (0–5 each):

  1. Distribution & Channel Control
  • 0 — No owned channel; dependent on marketplace whims.
  • 1 — One inconsistent channel (e.g., personal X only); no search intent capture.
  • 2 — One owned channel with basic cadence OR one integration directory listing.
  • 3 — Two owned channels with consistent cadence; one search‑led asset (e.g., integration page) bringing weekly leads.
  • 4 — Repeatable engine: integration pages or programmatic SEO + partner co‑marketing; email list growth ≥3%/month.
  • 5 — Two or more engines with tracking and SOPs; ≥20% of new trials from owned/partner channels.
  2. Workflow Lock‑In (Deep Integrations + SOPs)
  • 0 — Standalone UI only.
  • 1 — One read‑only integration; copy/paste back to system‑of‑record.
  • 2 — Two‑way writebacks to one core tool (HubSpot/Slack/GDrive) OR documented manual SOP.
  • 3 — Two‑way integrations across two core tools + step‑by‑step SOP users actually follow.
  • 4 — Embedded actions where users already work (slash commands, timeline events); admin policies baked in.
  • 5 — Mission‑critical automations tied to business outcomes; removal breaks daily work.
  3. Proprietary Small‑Data Loops
  • 0 — No data retained; stateless prompts.
  • 1 — Ad‑hoc history saved; not used to improve outcomes.
  • 2 — Structured capture of prompts/settings/files with consent.
  • 3 — Feedback/corrections loop updates a private evaluation set.
  • 4 — Per‑account context store measurably improves accuracy/latency.
  • 5 — Data loops power compounding improvements and personalized outcomes.
  4. Switching‑Cost Mechanics
  • 0 — Users can leave with no loss.
  • 1 — Some convenience loss (shortcuts) but assets portable.
  • 2 — Basic assets/history retained; export exists.
  • 3 — Embedded assets + SOP retraining cost (hours) to replicate elsewhere.
  • 4 — Multiple costs: assets, retraining, and policy/admin set‑up (days) to switch.
  • 5 — High procedural and asset costs to move; your outputs become their records.
  5. Outcomes Guarantees & SLAs
  • 0 — No promises.
  • 1 — Implied reliability only.
  • 2 — Internal SLOs; nothing published.
  • 3 — Public uptime/latency targets and status page.
  • 4 — SLA with credits + failure‑handling runbook; HITL for risky intents.
  • 5 — SLA tied to business outcomes with clear claims, error budgets, and comms SOP.

Example scores (sanity check):

  • WhatsApp concierge wrapper: Dist 2, Workflow 1, Small Data 3, Switching 1, SLA 0, Speed 1.05 → ~32/100.
  • Niche B2B RAG with history store: Dist 3, Workflow 3, Small Data 4, Switching 4, SLA 2, Speed 1.00 → ~65/100.
  • Agency productized service (HubSpot/Slack embeds): Dist 4, Workflow 5, Small Data 3, Switching 4, SLA 4, Speed 1.05 → ~84/100.
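The example rows above can be sanity‑checked outside Notion with a direct Python transcription of the scorecard weights (function name is illustrative):

```python
def adjusted_score(dist, workflow, small_data, switching, sla, speed=1.0):
    """0-100 defensibility score: weighted 0-5 rubric scores times a speed multiplier."""
    base = (
        (dist / 5) * 25          # Distribution & Channel Control
        + (workflow / 5) * 20    # Workflow Lock-In
        + (small_data / 5) * 20  # Proprietary Small-Data Loops
        + (switching / 5) * 20   # Switching-Cost Mechanics
        + (sla / 5) * 15         # Outcomes Guarantees & SLAs
    )
    return min(100, round(base * speed))

# The three example rows:
print(adjusted_score(2, 1, 3, 1, 0, 1.05))  # WhatsApp concierge wrapper
print(adjusted_score(3, 3, 4, 4, 2, 1.00))  # Niche B2B RAG with history store
print(adjusted_score(4, 5, 3, 4, 4, 1.05))  # Agency productized service
```

A perfect 5s row with Speed 1.10 still caps at 100, matching the min() in the Notion formula.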

14‑Day Moat Sprint (daily actions)

Ship one high‑leverage integration, one small‑data loop, one SLA page, and one distribution asset in 14 days. Block 90 minutes/day.

  • Day 1 — Baseline & choose levers: snapshot activation, 7/30‑day retention, support load; pick two weakest moats with fastest lift.
  • Day 2 — Distribution asset plan: pick 1 target integration keyword (e.g., “HubSpot AI call summary”), outline an integration landing page and partner co‑post.
  • Day 3 — HubSpot prep: create private app; add custom properties: ai_processing_status, ai_last_summary_at, ai_confidence, ai_last_error.
  • Day 4 — HubSpot writeback: on Deal moves to Discovery, generate summary + key intents; write to Timeline Event + properties; add rollback if 429/500.
  • Day 5 — Slack bot scaffold: slash /summarize, /draft-reply; wire to your service; ephemeral messages only by default.
  • Day 6 — Small‑data loop #1: prompt library capture with owner, tags, last‑used, effectiveness score; surface favorites inline.
  • Day 7 — Small‑data loop #2: corrections capture (gold edits) → evaluation set; store before/after, label, reviewer, pass/fail.
  • Day 8 — Switching‑cost mechanics: auto‑build a per‑account "assets" archive (histories, settings, outputs) + admin policy template; keep export button visible for trust.
  • Day 9 — SLA draft v1: fill placeholders (uptime, P95 latency, credits table, HITL triggers); add claims email + response clock.
  • Day 10 — Instrumentation: add status page (manual is fine); track P50/P95 latency and error rates by model/region.
  • Day 11 — Lisbon Test: simulate model/region failure; verify fallback and comms (see section below).
  • Day 12 — Integration landing page live: how it works, setup steps, screenshots, metrics, FAQ; add partner co‑post draft.
  • Day 13 — Social proof: 1 mini‑case (before/after, numbers) + a 45‑second screen demo; pin template link.
  • Day 14 — Re‑score and commit: update Scorecard; lock next 14‑day cycle for the next weakest moat.

Integration Playbook — HubSpot (embed where sales already lives)

Goal: make your service feel native in HubSpot. Users should never copy/paste.

Core objects and properties:

  • Objects: Contact, Company, Deal, Ticket.
  • Custom properties (examples):
    • ai_processing_status (enum: queued|running|done|error)
    • ai_last_summary_at (datetime)
    • ai_summary (long text)
    • ai_confidence (number 0–1)
    • ai_last_error (text)

High‑leverage triggers → actions:

  • Trigger: Deal stage → Discovery
  • Action: Summarize last meeting transcript; write ai_summary; create Timeline Event “AI Summary Posted” with entities (people mentioned, intents, blockers).
  • Trigger: New inbound Ticket
  • Action: Classify severity, draft first response, set ai_processing_status.

Implementation notes:

  • Auth: Private App token with least privilege. Rotate every 90 days.
  • Webhooks: subscribe to object changes; debounce bursts; implement exponential backoff on 429.
  • Idempotency: use a deterministic request_id per object change to avoid duplicates.
  • Timeline Events: include model, latency_ms, and confidence in metadata for audits.
  • Error handling: dead‑letter queue; nightly retry; alert via Slack on >N failures/hour.
  • Data safety: never store full PII in prompts; pass stable IDs and look up sensitive fields server‑side.
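The idempotency and backoff notes above reduce to a few lines; a minimal Python sketch (RateLimited is a hypothetical exception your HTTP layer would raise on a 429):

```python
import hashlib
import time

class RateLimited(Exception):
    """Hypothetical exception representing an HTTP 429 from HubSpot."""

def request_id(object_type: str, object_id: str, changed_at: str) -> str:
    """Deterministic ID per object change, so webhook replays don't create duplicates."""
    raw = f"{object_type}:{object_id}:{changed_at}"
    return hashlib.sha256(raw.encode()).hexdigest()[:32]

def send_with_backoff(call, max_retries=5, base_delay=1.0, cap=60.0):
    """Retry with exponential backoff on 429s; anything else propagates."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimited:
            time.sleep(min(cap, base_delay * 2 ** attempt))
    raise RuntimeError("rate-limited after retries; route to dead-letter queue")
```

Keying the dedupe store on request_id means the same webhook delivered twice writes the Timeline Event once.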

User‑facing SOP (2 steps):

  1. “Open the Deal, click AI Summary → ‘Insert to Notes’.”
  2. “If confidence <0.7 or missing citations, escalate to human review.”

Definition of done: user can run the full motion end‑to‑end inside HubSpot without leaving the record.

Integration Playbook — Slack (meet users where they talk)

Goal: keep the team in flow. Bring your actions into Slack with clear receipts back to the system‑of‑record.

App surface:

  • Slash commands: /summarize, /draft-reply, /file-intake.
  • Message action: “Send to RAG.”
  • Events: app_mention, message.channels, reaction_added (use ✅ as human‑approve signal).
  • Responses: ephemeral by default; post a public thread reply only when approved.

Patterns to ship fast:

  • Thread summary: on slash, fetch thread → summarize → write a thread reply + link to HubSpot Deal.
  • Triage macro: reaction ✅ on a customer message triggers a draft reply + opens a Ticket.
  • Guardrails: if prompt includes PII or model returns low confidence (<0.7), DM the user with a “needs review” checklist instead of posting.

Implementation notes:

  • OAuth scopes: minimal (chat:write, commands, channels:history, links:write).
  • Interactive Blocks: surface buttons (Approve, Edit, Discard); store ts, channel, user_id for audit.
  • Rate limits: queue bursts; honor Retry-After.
  • Mapping: keep a table that links channel/thread_ts ↔ CRM object IDs.
  • Privacy: offer /ai-off per‑channel; document what is logged and retention periods.
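The guardrail pattern above is a small routing decision; a sketch (Python; the citation check borrows the HubSpot SOP's escalation rule, and the return labels are illustrative):

```python
def route_slack_response(confidence: float, has_citations: bool, contains_pii: bool) -> str:
    """Decide whether a model response may be shown or needs human review.

    Mirrors the guardrails above: PII, low confidence (<0.7), or missing
    citations never posts; the user gets a "needs review" DM instead.
    """
    if contains_pii or confidence < 0.7 or not has_citations:
        return "dm_needs_review"   # DM the user a "needs review" checklist
    return "ephemeral_preview"     # ephemeral by default; public only on Approve
```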

Definition of done: a non‑technical user can trigger the top 2 actions from Slack and see the outcome reflected in HubSpot within 10 seconds on average.

Micro‑SOPs — Small‑Data Capture Loops You Can Ship This Week

Small data is your compounding edge. Ship these capture loops with consent, tight scopes, and visible value.

Micro‑SOP 1 — Prompt Library Capture

  • Trigger: user runs a prompt >2 times.
  • Capture: prompt text, variables, target tool, owner, tags, last‑used, outcome rating (1–5).
  • Store: per‑account library; default private; opt‑in to share across team.
  • Use: rank prompts by outcome × recency; surface top 3 as one‑click buttons in UI/Slack.
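One way to implement the "outcome × recency" ranking (a sketch; the 30‑day half‑life decay is an assumption to tune, not part of the SOP):

```python
from datetime import date

def prompt_rank_score(outcome_rating: int, last_used: date, today: date) -> float:
    """Score = outcome rating (1-5) x recency decay; halves roughly every 30 idle days."""
    days_idle = (today - last_used).days
    return outcome_rating * 0.5 ** (days_idle / 30)

def top_prompts(prompts, today, n=3):
    """prompts: list of dicts with 'outcome' and 'last_used'; returns the n best."""
    return sorted(
        prompts,
        key=lambda p: prompt_rank_score(p["outcome"], p["last_used"], today),
        reverse=True,
    )[:n]
```

A well‑rated but stale prompt naturally falls out of the top 3 as fresher ones get used.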

Micro‑SOP 2 — Corrections → Evaluation Set

  • Trigger: user edits your draft or answers “Was this helpful?”
  • Capture: before/after text, labels (hallucination|tone|format), reviewer, approval ✅.
  • Store: append‑only eval set with version; never overwrite originals.
  • Use: nightly regression tests; block releases that degrade pass rate.
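The nightly regression gate can be this small (sketch; the baseline rate and tolerance are placeholders you set per product):

```python
def pass_rate(results):
    """results: list of booleans from running the eval set against a candidate release."""
    return sum(results) / len(results)

def release_allowed(candidate_results, baseline_rate, tolerance=0.0):
    """Block the release if the candidate's eval pass rate degrades below baseline."""
    return pass_rate(candidate_results) >= baseline_rate - tolerance
```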

Micro‑SOP 3 — Context Store (Light RAG)

  • Trigger: new document uploaded or CRM field updated.
  • Capture: title, type, owner, access scope, embeddings/vector, canonical URL.
  • Store: per‑account vector index; tag PII; retention [N_DAYS].
  • Use: retrieval first; require citation links in responses; if no source, escalate to human.

Micro‑SOP 4 — Outcome Receipt

  • Trigger: important action completes (email sent, note posted, ticket resolved).
  • Capture: action type, latency_ms, model, confidence, link to record.
  • Store: audit log; expose in status page rollups.
  • Use: SLA reporting; anomaly alerts when P95 spikes.
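For the SLA rollups and spike alerts, a dependency‑free nearest‑rank percentile is enough (sketch; the 1.5× spike factor is an assumption):

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: smallest sample with at least p% of values at or below it."""
    if not values:
        raise ValueError("no samples")
    s = sorted(values)
    k = max(1, math.ceil(p / 100 * len(s)))
    return s[k - 1]

def p95_spike(latencies_ms, baseline_p95_ms, factor=1.5):
    """Anomaly alert when the current window's P95 exceeds baseline by the given factor."""
    return percentile(latencies_ms, 95) > factor * baseline_p95_ms
```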

Privacy defaults:

  • Consent: show a first‑run modal with what you store and why; easy toggle off.
  • Retention: set [N_DAYS] per data type; purge job nightly.
  • Portability: one‑click export (JSON/CSV) to build trust even as switching costs rise.
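The nightly purge job is a filter over per‑type retention windows; a minimal sketch (the day counts are placeholders for your [N_DAYS] values):

```python
from datetime import datetime, timedelta

RETENTION_DAYS = {"prompt": 90, "context": 30, "audit_log": 365}  # placeholder values

def purge_expired(records, now):
    """Keep only records younger than their type's retention window.

    records: list of dicts with 'type' and 'created_at' (datetime).
    """
    kept = []
    for r in records:
        limit = timedelta(days=RETENTION_DAYS[r["type"]])
        if now - r["created_at"] <= limit:
            kept.append(r)
    return kept
```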

SLA Snippet Pack — copy/paste with your numbers

Use this as a starting point; fill in the brackets with your numbers. This is not legal advice.

  1. Scope

```
This SLA applies to [PRODUCT/PLAN] accessed by [CUSTOMER/TEAM]. Covered components: [API/UI/Slack App/HubSpot App]. Measurement timezone: [TZ].
```

  2. Availability & Latency Targets

```
Uptime Commitment: [99.5–99.9]% monthly, measured in 5‑minute intervals.
Latency Targets: P50 ≤ [X] ms, P95 ≤ [Y] ms for [ENDPOINTS/ACTIONS].
Exclusions: Scheduled maintenance (≤[N] hrs/month, [48]‑hour notice), upstream cloud incidents, Customer‑side network.
```
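To make the commitment concrete: a 30‑day month has 8,640 five‑minute intervals, so the downtime budget implied by each uptime tier is simple arithmetic (sketch):

```python
def allowed_downtime_minutes(uptime_pct, days=30, interval_min=5):
    """Monthly downtime budget implied by an uptime commitment, in minutes."""
    intervals = days * 24 * 60 // interval_min
    allowed_intervals = intervals * (1 - uptime_pct / 100)
    return allowed_intervals * interval_min
```

Roughly: 99.5% allows about 3.6 hours of downtime per month; 99.9% allows about 43 minutes. Pick the number you can actually defend.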

  3. Error Budgets & Release Gates

```
Monthly error budget: [E]% failed requests or P95 latency > [Y] ms for >[M] minutes. Breach halts feature releases until budget restored; hotfixes only.
```
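The error‑budget gate can be tracked mechanically (sketch; a simplified request‑count budget where budget_pct stands in for your [E] value):

```python
def error_budget_remaining(total_requests, failed_requests, budget_pct):
    """Requests that may still fail this month before the budget is breached."""
    allowed = total_requests * budget_pct / 100
    return allowed - failed_requests

def releases_frozen(total_requests, failed_requests, budget_pct):
    """Breach halts feature releases until the budget is restored; hotfixes only."""
    return error_budget_remaining(total_requests, failed_requests, budget_pct) <= 0
```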

  4. Failure Handling & Credits

```
Severity 1 (service unavailable >[T1] min): status page update ≤[5] min; Slack/email alert to admins ≤[10] min; failover to [SECONDARY MODEL/REGION]. Credit: [10–25]% of monthly fee if uptime < commitment.
Severity 2 (degradation >[T2] min): status update ≤[15] min; rate‑limit non‑critical jobs; no credit.
Claims: file within [30] days at [CLAIMS_EMAIL] with timestamps/logs.
```

  5. Hallucination Mitigation & Risk Controls

```
For categories [legal/medical/financial/compliance], responses must include citations to [SOURCE TYPES]; missing citations trigger HITL review. High‑risk intents route to human within [X] business hours. We never auto‑send without human approval for these categories.
```

  6. Human‑in‑the‑Loop (HITL)

```
We guarantee human review within [X] business hours for flagged items. During HITL, automations pause for the affected record; we provide an audit trail of edits.
```

  7. Data Handling

```
Data residency: [US/EU]. Retention: prompts/context [N] days; logs [M] days. Deletion: upon request within [D] days. Sensitive fields masked in prompts; only stable IDs passed to models.
```

  8. Communication

```
Real‑time: [STATUS PAGE URL]. Incidents: posted within [5–15] minutes with updates at least every [30] minutes until resolved.
```

Place this SLA on a public URL and link it in your security/FAQ and proposals.

The Lisbon Test — do you survive an outage gracefully?

A 45‑minute reliability fire drill you can run monthly. Goal: prove you degrade gracefully when a model/region hiccups.

Setup (5 min)

  • Pick a representative workflow (e.g., Deal summary via Slack → HubSpot writeback).
  • Define pass/fail: user gets a correct result or a clear fallback within [X] minutes.

Drill (30 min)

  1. Kill switch: toggle primary model/region off in staging; if not possible, inject 5xx for 10 minutes.
  2. Observe: does your service fail open (safe) and surface a helpful message?
  3. Fallback: verify secondary model/region is used; log includes fallback_model, latency_ms.
  4. Receipts: check HubSpot Timeline Event posted with a “fallback used” flag; Slack sends a brief explanation.
  5. Comms: update status page and send admin alert within the promised window.
  6. Error budgets: did this breach the month’s budget? If yes, freeze feature releases.
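Steps 1–3 of the drill can be exercised against a tiny fallback wrapper (sketch; the injected error stands in for a primary model/region outage):

```python
def call_with_fallback(primary, secondary):
    """Try the primary model/region; on failure, use the secondary and record a receipt."""
    try:
        return primary(), {"fallback_used": False}
    except Exception as exc:
        result = secondary()
        return result, {"fallback_used": True, "primary_error": repr(exc)}

def inject_5xx():
    """Drill step 1: simulate the primary model/region returning a server error."""
    raise RuntimeError("HTTP 503 from primary region")

# During the drill, the receipt is what feeds the HubSpot "fallback used" flag:
result, receipt = call_with_fallback(inject_5xx, lambda: "summary from secondary")
```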

Wrap‑up (10 min)

  • Capture: MTTD, MTTR, % of requests affected, P95 latency delta.
  • Fix list: top 3 changes to ship this week (e.g., cache warmup, backoff tuning, clearer end‑user copy).
  • Record: embed the drill log in your Notion workspace and link it on the SLA page.

Pass when a non‑technical user experiences a graceful slowdown, not a hard stop—and your logs, status page, and CRM all tell the same story.