Episode 3

Build an AI Org Chart That Works While You Sleep

Intro

This episode is for nomad founders running AI-powered agencies or productized services who are tired of being the single point of failure in their operations. You'll get a concrete framework for building coverage that works across time zones, plus the templates and SLA structures to implement it immediately.

In This Episode

Kira shares her Oaxaca disaster story where missing backup coverage led to unreviewed content shipping with client name errors, then she and Santi build out the four-role AI org chart that prevents these failures. They walk through three scalable patterns—solo founder with contractors, pod model for agencies, and multi-pod operations with dispatchers—plus the coverage grids, SLA matrices, and handoff protocols that make async operations reliable. The conversation covers everything from GitLab's Reviewer Roulette system to Atlassian's severity-based escalation frameworks, with practical guidance on setting response time expectations (15 minutes for leads, 4 hours for code reviews, 24-48 hours for content approvals) and building blameless postmortem processes that improve reliability without adding bureaucracy.

Key Takeaways

  • Assign four roles with named backups: Builder (ships work), Operator (owns quality and client SLAs), Reviewer (independent check), and Agent/Dispatcher (routes work and maintains coverage)
  • Create a UTC-based coverage grid showing overlap windows between team members, plus five-field handoff packets (context, constraints, last good output, budget left, fallback) for seamless time zone transitions
  • Publish tiered SLAs with auto-escalation: 15-minute acknowledgment for revenue-critical leads, 4-hour first review for code, 24-48 hour content approvals with auto-approve on silence if guardrails pass

Companion Resource

AI-Augmented Org Packet (Resources page): RACI template, SLA matrix, coverage grid, escalation tree, and handoff checklist.

Kira: So I'm in Oaxaca — it's a Wednesday, maybe eleven PM local — and I get a Slack ping from my contractor in Lagos. She says, "Hey, the Meridian blog batch is done. Who reviews this?"

Santi: Eleven PM your time.

Kira: Eleven PM my time. And I look at the message and I realize — I have no answer. The person who normally reviews content is my editor in Berlin, and she's on PTO. Has been since Monday. And I knew that. I approved the PTO.

Santi: But you didn't assign a backup.

Kira: I didn't assign a backup. I didn't even think about it. So now I've got fourteen blog drafts sitting in a queue, a contractor in Lagos who's done her job perfectly, and a client in Austin who expects delivery by nine AM Central. That's — what — seven hours from now?

Santi: Seven hours. And you're the only person who can unblock it.

Kira: I'm the only person. So I sit down at this tiny desk in my Airbnb and I start reviewing blog posts at eleven fifteen PM after a full day of work. I get through nine of them before I fall asleep on my laptop.

Santi: Nine out of fourteen.

Kira: Nine out of fourteen. The other five shipped unreviewed. My contractor in Lagos — who is excellent, by the way — she saw the deadline, saw no reviewer, and made the call to send them. Which is exactly what I would've told her to do. Except one of those five had a client name wrong. Not a typo — the wrong company name. In the headline.

Santi: In the headline.

Kira: The client caught it before it went live, thank god. But that conversation — that seven AM phone call where I'm apologizing from a mezcal hangover in Oaxaca — that was the moment I realized my agency didn't have an org chart. It had me.

Kira: If you disappeared for twenty-four hours right now — no Slack, no email, phone off — does your team know who reviews what, who replies to leads, and who can approve a deployment without you? Not "they'd figure it out." Do they have a name, a written SLA, and a fallback?

Santi: Because that's the difference between an AI org chart and a group chat with dependencies. Four roles, published response times, one coverage grid. That's what we're building out loud today — and it works whether you're solo with two contractors or running a multi-pod agency across six time zones.

Santi: So Kira's Oaxaca disaster — that's not a content problem. That's a coverage problem. Nobody knew who owned the review because nobody was assigned the review. And this happens constantly in nomad teams. Somebody builds a great AI workflow, hires two or three contractors, and then the whole thing runs on one assumption — that the founder is always available.

Kira: Which is the opposite of why we went location-independent in the first place.

Santi: Exactly. You built the business to travel. And then the business requires you to never be offline. That's not a business — that's a leash with better scenery.

Kira: A leash with better scenery. I'm stealing that.

Santi: So the fix is four roles. Not four people — four roles. One person can hold multiple roles when you're small. Builder — ships the thing. Writes the draft, builds the automation, pushes the code. Operator — owns quality, schedules, budgets, client comms. Reviewer — independent check. This is the role that was missing in Oaxaca. And Agent or Dispatcher — routes work, maintains the schedule, pages someone when things break.

Kira: And the key word there is "independent." The Reviewer can't be the Builder on the same task. That's the whole point of the check.

Santi: Right. If I build a Make scenario and I also review it, I'm just proofreading my own homework. The Reviewer has to be a different brain.

Kira: Okay but I want to slow down here because I can already hear people thinking — I have three contractors. I don't have four roles' worth of humans.

Santi: You don't need four humans. You need four assignments. I ran my micro-SaaS for six months with two contractors and myself. I was Builder and Operator. One contractor was my secondary Builder. The other was Reviewer. And for the Dispatcher function, I used a Make automation that routed tasks based on due date and tagged the on-call person in Slack.

Kira: So the automation was the dispatcher.

Santi: The automation was the dispatcher — with a human backstop. If nobody acknowledged a task within thirty minutes, it pinged me directly. Solo founder, two contractors, one automation.
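Santi's dispatcher-with-a-backstop pattern is simple enough to sketch. This is not his actual Make scenario — the rotation names and the `route`/`escalation_target` functions are hypothetical illustrations of the same logic: route by role, and page the founder directly if nobody acknowledges within thirty minutes.

```python
import datetime as dt

# Hypothetical on-call rotation; in the episode, Make did the routing
# and Slack did the pings -- this just sketches the decision logic.
ON_CALL = {"builder": "maria", "reviewer": "tomas"}
ACK_WINDOW = dt.timedelta(minutes=30)
FOUNDER = "santi"

def route(task):
    """Pick an assignee by role; anything unrecognized goes to the founder."""
    return ON_CALL.get(task["role"], FOUNDER)

def escalation_target(task, now):
    """If the assignee hasn't acknowledged within the window,
    return who to page next (the human backstop), else None."""
    if task.get("acked_at") is not None:
        return None  # acknowledged in time, no escalation
    if now - task["assigned_at"] > ACK_WINDOW:
        return FOUNDER
    return None

task = {"role": "reviewer",
        "assigned_at": dt.datetime(2024, 5, 1, 9, 0),
        "acked_at": None}
assert route(task) == "tomas"
assert escalation_target(task, dt.datetime(2024, 5, 1, 9, 45)) == "santi"
assert escalation_target(task, dt.datetime(2024, 5, 1, 9, 10)) is None
```

The point is the shape, not the tool: one routing rule, one timer, one named fallback.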

Kira: And you rotate the Reviewer role so nobody becomes the bottleneck.

Santi: GitLab has been doing this for years — they call it Reviewer Roulette. Random assignment from a pool. For a small team, you just need a shared doc that says — this week, Reviewer is Maria. Next week, Reviewer is Tomás. And when Maria's on PTO, Tomás is the backup. Written down. Visible.

Kira: Which is exactly what I didn't have in Oaxaca.

Santi: Which is exactly what you didn't have.

Kira: Yeah.

Kira: So pattern two is where it gets interesting for me, because this is closer to how my agency runs now. The pod model. You bundle Builder, Operator, and Reviewer into one cell that owns a client or a product line. Sakas and Company, who coach agency operators, have been pushing this for years. Once you pass about thirty clients, you can't run everything through a single chain of command.

Santi: How big is a pod?

Kira: Three to five people. Each pod has a lead writer — Builder — an ops person who manages the client and approves deliverables — Operator — and a rotating editor — Reviewer. The AI workflows sit underneath all of them. The models draft, the humans govern.

Santi: And the Operator owns the SLA with the client?

Kira: They own the SLA. And this is the important part — the SLA is published. The client can see it. Content approval turnaround is twenty-four hours for priority campaigns, forty-eight hours for everything else. And if nobody acts within forty-eight hours and the content passed our automated style and policy checks, it auto-approves.

Santi: Auto-approves on silence?

Kira: Auto-approves on silence. Because the alternative is what happened in Oaxaca — work sitting in a queue with no owner. The auto-approve only fires if the guardrails passed. If the automated checks flag something, it blocks until a human clears it.
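Kira's auto-approve-on-silence rule reduces to one function. This is a minimal sketch under our own naming, not her production workflow: an explicit human decision always wins, a guardrail failure always blocks, and silence only approves after the published window.

```python
import datetime as dt

APPROVAL_WINDOW = dt.timedelta(hours=48)  # Kira's non-priority turnaround

def approval_state(submitted_at, now, guardrails_passed, human_decision=None):
    """Auto-approve on silence, but only if automated checks passed.
    A flagged item blocks until a human clears it."""
    if human_decision is not None:
        return human_decision        # explicit approve/reject wins
    if not guardrails_passed:
        return "blocked"             # never auto-approve flagged work
    if now - submitted_at >= APPROVAL_WINDOW:
        return "auto-approved"
    return "pending"

sub = dt.datetime(2024, 5, 1, 12, 0)
assert approval_state(sub, sub + dt.timedelta(hours=50), True) == "auto-approved"
assert approval_state(sub, sub + dt.timedelta(hours=50), False) == "blocked"
assert approval_state(sub, sub + dt.timedelta(hours=2), True) == "pending"
```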

Santi: That's actually smart. I would've over-engineered that with three layers of fallback logic.

Kira: I know you would have.

Santi: I would have built a whole decision tree in Make with conditional branches and—

Kira: And it would've taken you two weeks and nobody else could maintain it.

Santi: ...Yeah, probably.

Santi: Now — once you've got roles assigned, you need two more things. A coverage grid and real SLAs. The coverage grid is just a spreadsheet. One row per person. Columns for name, role, UTC offset, work start, work end, PTO dates. You convert everything to UTC so you can see the gaps.

Kira: And you calculate overlap. How many hours does your Builder in Bogotá overlap with your Reviewer in Bangkok? If the answer is less than two hours a day, you have a handoff problem.
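That overlap check is simple arithmetic once everything is in UTC. A minimal sketch (whole hours only, hypothetical Bogotá/Bangkok team from the example):

```python
def to_utc_window(start_local, end_local, utc_offset):
    """Convert local work hours to a UTC window (hours 0-24, may wrap midnight)."""
    return ((start_local - utc_offset) % 24, (end_local - utc_offset) % 24)

def overlap_hours(a, b):
    """Daily overlap between two UTC windows, handling midnight wrap.
    Assumes whole hours and start != end."""
    def expand(window):  # the set of whole hours the window covers
        start, end = window
        hours, h = set(), start
        while h != end:
            hours.add(h)
            h = (h + 1) % 24
        return hours
    return len(expand(a) & expand(b))

# Builder in Bogota (UTC-5), Reviewer in Bangkok (UTC+7), both working 9-17 local
bogota = to_utc_window(9, 17, -5)    # -> (14, 22) UTC
bangkok = to_utc_window(9, 17, +7)   # -> (2, 10) UTC
assert overlap_hours(bogota, bangkok) == 0  # under 2 hours: handoff problem
```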

Santi: Wait — you're doing the coverage grid math now?

Kira: I've been doing it since Oaxaca. And when the overlap is thin, that's where the handoff packet comes in. Before you pass work across time zones, you attach five fields to the task. Context — what we're doing and for whom. Constraints — deadlines, budgets, brand rules. Last good output — the most recent working version. Budget left — hours or dollars remaining. And fallback — what to do if you're blocked for twelve hours.

Santi: Five fields.

Kira: Five fields. And the receiving person has to comment "I own it" and restate the next checkpoint in UTC. If they don't, the handoff didn't happen.

Santi: That's the Lisbon Test applied to handoffs. Could this work keep moving for twenty-four hours while you're offline?

Kira: Exactly. And if any of those five fields is blank, you don't have a handoff — you have a hope.
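The five-field packet plus the "I own it" rule can be expressed as a tiny check. The field names and example values here are our own shorthand for what Kira describes, not her actual template:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class HandoffPacket:
    """Kira's five-field handoff packet."""
    context: str          # what we're doing and for whom
    constraints: str      # deadlines, budgets, brand rules
    last_good_output: str # most recent working version (link or path)
    budget_left: str      # hours or dollars remaining
    fallback: str         # what to do if blocked for 12 hours

def handoff_complete(packet: HandoffPacket, ack: Optional[str]) -> bool:
    """A handoff counts only if every field is filled AND the receiver
    commented ownership with the next checkpoint in UTC."""
    all_filled = all(getattr(packet, f.name).strip() for f in fields(packet))
    return all_filled and bool(ack) and "UTC" in ack

p = HandoffPacket("Meridian blog batch for Austin client",
                  "deliver 14:00 UTC, brand guide v3",
                  "link to draft v2 in shared drive",
                  "6 hours remaining",
                  "if blocked 12h, ping the Operator on Slack")
assert handoff_complete(p, "I own it. Next checkpoint 09:00 UTC.")
assert not handoff_complete(p, None)  # no ownership comment: no handoff
```

If any field is blank or the ack is missing, the function says what Kira says: you don't have a handoff, you have a hope.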

Santi: Okay — SLAs. Harvard Business Review published a study — it's from twenty eleven, so it's older, but it's been replicated — showing that responding to a lead within five minutes dramatically increases your odds of qualifying them versus waiting thirty minutes. For nomad teams with async ops, that's terrifying. If a lead comes in while your Agent is asleep, it sits there for eight hours.

Kira: So what's the actual number you'd set?

Santi: Fifteen minutes for a first acknowledgment on qualified leads during the sender's business hours. If nobody acks within fifteen minutes, auto-escalate to the on-call backup.

Kira: Fifteen minutes? For a three-person team?

Santi: Only for revenue-critical paths. Content approvals? Twenty-four to forty-eight hours, like you said. Code reviews? Engineering ops sources suggest starting around four hours for a first look during business hours — treat that as a baseline to adapt, not gospel. Support requests? Same business day.

Kira: So you tier it by severity.

Santi: Atlassian's incident framework does exactly this. SEV one is revenue impact right now — gets paged immediately. SEV two is major degradation or a deadline today — thirty-to-sixty-minute ack window. SEV three is normal work — waits for business hours. And you publish these numbers. If it's not visible, it doesn't exist.
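The tiering Santi describes is just a severity-to-window lookup. The numbers below are the ones quoted in the episode, in the spirit of Atlassian's SEV levels; treat them as defaults to adapt:

```python
import datetime as dt

# Ack windows per severity. None means "waits for business hours" -- no paging.
SLA = {
    1: dt.timedelta(0),           # SEV1: revenue impact now, page immediately
    2: dt.timedelta(minutes=60),  # SEV2: major degradation, 30-60 min ack
    3: None,                      # SEV3: normal work
}

def page_now(severity: int, waited: dt.timedelta) -> bool:
    """Should we page someone, given how long the item has waited?"""
    window = SLA[severity]
    if window is None:
        return False  # business-hours queue, never pages
    return waited >= window

assert page_now(1, dt.timedelta(0))                  # SEV1 pages instantly
assert page_now(2, dt.timedelta(minutes=75))         # SEV2 blew its window
assert not page_now(2, dt.timedelta(minutes=20))     # SEV2 still inside window
assert not page_now(3, dt.timedelta(hours=9))        # SEV3 never pages
```

Publish the table, not just the code — as Santi says, if it's not visible, it doesn't exist.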

Kira: And the escalation — how many layers?

Santi: Two. Layer one is the on-call Agent. If they miss the window, layer two — the Operator — gets paged automatically. PagerDuty, Opsgenie, even a Slack workflow with a timer. Two layers is plenty for a small team.

Kira: And when something breaks — because it will—

Santi: Blameless postmortem. Google wrote the book on this. What happened, timeline in UTC, root cause, what worked, what failed, three ranked fixes with owners and due dates. Run it within seventy-two hours while the details are fresh.

Kira: Not to punish anyone.

Santi: Never to punish. To fix the system.

Kira: Okay, I need to push back on something though. Because I've been nodding along, but I'm also thinking about the person listening who has two contractors and a Notion board. You just described RACI matrices, SLA tiers, escalation trees, coverage grids, postmortem templates — that's a lot of scaffolding for a team of three.

Santi: It is.

Kira: And the follow-the-sun literature is full of cautionary tales. Handoffs introduce context loss. Ownership gets confused at the seams. You can spend so much time documenting the process that you don't do the work.

Santi: Yeah... I've done that. I once spent a week building an escalation system in Make for a project that had two people on it. Two people. I could've just texted the guy.

Kira: The over-engineering blind spot.

Santi: The over-engineering blind spot. So here's what I'd actually recommend for someone starting out. Minimum viable process. One-page RACI per offer — not per task, per offer. One coverage grid in a Google Sheet. The five-field handoff packet. And a two-tier escalation where the fifteen-minute window only applies to revenue-critical leads. Everything else gets same-business-day response times. Pilot it on one client for two weeks. See what breaks. Then iterate.

Kira: And one more thing — this isn't just about efficiency anymore. The EU AI Act hits August second, twenty twenty-six. Most provisions. If you're deploying AI in your workflows — which all of us are — having named Reviewers, documented approval chains, and evidence logs isn't just good ops. It's compliance readiness.

Santi: We're not lawyers. This isn't legal advice. But operationally, if you can show who reviewed what, when they approved it, and what the escalation path was — you're in a much stronger position than someone running everything through a group chat with no audit trail.

Kira: And that's the AI org chart. Not a diagram. Not a tool that draws boxes for you. A living document that says — here's who owns what, here's how fast they respond, here's who steps in when they can't, and here's the proof.

Santi: So — Oaxaca. Fourteen blog posts. No backup Reviewer. Wrong company name in a headline. If that version of Kira's agency had even one of the things we talked about today — a named backup on a shared doc, a five-field handoff packet, a forty-eight-hour auto-approve with guardrails — that phone call never happens.

Kira: That phone call never happens. And honestly, the thing that changed wasn't the tools. I didn't buy new software. I opened a Google Sheet, listed every role on every active client, and asked one question per row — if this person disappears tomorrow, who's the backup? And for half the rows, the answer was blank. That was the whole problem, right there in a spreadsheet.

Santi: And filling in those blanks took — what, an afternoon?

Kira: Less. Maybe two hours. The coverage grid took another hour. The handoff packet template — I wrote it on a flight to Mexico City. The hard part wasn't building the system. The hard part was admitting I needed one.

Santi: Yeah. That's always the hard part.

Kira: So here's what we want you to do this week. One thing. Go grab the AI-Augmented Org Packet on the Resources page — it's got the RACI template, the SLA matrix, the coverage grid, the escalation tree, the handoff checklist, all of it. Duplicate it. Fill in your roles for one client or one product. Assign a backup for every single role. And then run one red-team handoff — hand a real task to your backup overnight and see if it ships without you touching it. If it does, your org chart works. If it doesn't, you just found the gap before a client did.

Santi: One client. One red-team handoff. That's the test.

Kira: That's the test. See you Wednesday.

Santi: See you Wednesday.

AI org chart, async operations, time zone management, RACI framework, SLA design, nomad business operations, coverage planning, handoff protocols, reviewer rotation, escalation systems, EU AI Act compliance, remote team management