Post-launch workflow
What separates products that get better fast from ones that drift. Founder dogfooding, the first-10-users 1-on-1 protocol, the kill list, lifecycle email, and the iteration cadence that compounds.
Most MVPs ship and then nothing happens. The founder posts on X, the launch list gets one email, a few hundred people sign up, and then the product sits there. The roadmap goes stale, the analytics dashboard goes unread, and three months later the project is "paused." This is the workflow we run with every team after the first deploy goes live — the thing that separates products that compound from ones that drift.
The first 30 days are the whole game
Launching is the start, not the end. The MVP is a hypothesis printed in TypeScript — every assumption you encoded (about who wants this, what they'll click, what they'll pay for) is now testable for the first time. You have 30 days of elevated attention from the people who showed up, and the highest-density signal you'll ever get, because the product is small enough that every observation maps cleanly to a decision.
The cadence that compounds: dogfood daily, talk to users weekly, ship a release monthly, send a feature email with each release. The drift that kills: ship the MVP, wait for "feedback to come in," refresh the analytics tab once a week and feel vaguely bad, queue up a redesign in two months because the metrics are flat.
The difference isn't talent or budget. It's whether you treated the launch as a milestone or as the start of an operating system.
Founder dogfooding — the rule, not a nice-to-have
Use your own product daily, starting tomorrow. Not "test it before each release." Not "click around once a week." Daily, in the same way a real user would, with a real account, on the production URL, on your phone and your laptop both. If the product is for journaling, you journal in it. If it's for invoicing, you invoice through it. If it's for booking calls, your calendar link goes through it.
This sounds soft; it's not. Founder dogfooding catches what tests can't: the empty state that's wrong on day three because you forgot what zero entries looks like, the email that lands in spam, the button you keep almost clicking but don't, the loading spinner that's 200ms too long on cellular. Tests verify what you remembered to check. Dogfooding surfaces what you didn't.
The honest case where this breaks: B2B tools where the founder isn't the user. You're building procurement software and you've never run procurement. Fine. Find the closest proxy — your first design partner, an advisor in the role, a friend who used to do the job — and get them dogfooding instead. Wire their channel into the project's notes surface so their observations land where yours would have. The rule isn't "the founder must dogfood." The rule is "someone with skin in the game must, and their notes must enter the agent's context every iteration."
The Notes capture system
The configurator's Notes tab is where dogfooding observations live. Each entry is dated, one paragraph, raw observation. No solutioning, no judgment, no "we should add X." Just what you tried, what happened, and what surface area it touched.
Patterns that work:
- "Tried to do X, hit Y friction." (e.g., "Tried to invite a teammate from the project page; the invite button is buried in settings and I had to search for it.")
- "This empty state is wrong." (e.g., "First-time dashboard with zero data shows the same chart skeleton as the loading state; users will think it's broken.")
- "I keep missing this affordance." (e.g., "I've now opened the wrong tab three days in a row because the active-tab indicator is too subtle.")
The agent reads the Notes tab in sub-skill 18 (deliverables) alongside customer discovery transcripts during the improvement-cycle synthesis. That's the loop: you observe, you write a paragraph, the agent ingests it next iteration with everything else, and the synthesis produces a ranked change list. If you stop writing notes, the loop stalls — there's nothing for the synthesis to chew on, and the next release becomes guesswork.
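To make the shape concrete, here is a minimal sketch of what a note entry could look like as data (the field names are illustrative, not the configurator's actual schema):

```typescript
// Illustrative only — not the configurator's real schema.
// A dogfooding note: dated, one paragraph, raw observation, no solutioning.
interface DogfoodNote {
  date: string;        // ISO date, e.g. "2024-05-03"
  author: string;      // founder, or the proxy with skin in the game
  observation: string; // what you tried and what happened, one paragraph
  surface: string;     // the surface area it touched
}

const notes: DogfoodNote[] = [
  {
    date: "2024-05-03",
    author: "founder",
    observation:
      "Tried to invite a teammate from the project page; the invite button is buried in settings and I had to search for it.",
    surface: "project page / team invites",
  },
];
```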
The first 10 users, 1-on-1
Pick 10 from your first 30 signups who match the audience profile most closely. Not the loudest ones, not the ones who replied to your launch email with compliments — the ones whose use case maps tightly to the audience you defined in PROJECT.md. Reach out personally, by name, with a specific 30-minute Zoom request. "I'd love to watch you use the product for 30 minutes and ask a few questions. I'm trying to make this 10x better in the next month and your perspective would directly shape it." Offer a $25 gift card if it helps, but most will say yes without one.
Block the sessions over 1-2 weeks. Don't batch them into a single day — you'll lose pattern recognition. After the third call you'll start hearing the same friction described in different words; that's the signal you can't get from any other source, and it requires sleeping between sessions for your brain to do the cross-referencing.
Generic baseline questions (ask every user):
- Walk me through how you found this and what you expected.
- What surprised you, good or bad?
- Where did you get stuck, even for a moment?
- If this disappeared tomorrow, would you actually miss it? What would you replace it with?
- What would you tell a friend this is for?
Per-platform tailored questions: ask the agent to generate 5-7 specific ones from PROJECT.md (the audience profile, the value prop, the differentiating features). These are the questions that probe whether your specific bet is right, not whether you built a generally usable thing.
After 5 sessions you'll have signal. After 10 you'll have direction.
Process the transcripts — synthesize, don't just collect
Record and transcribe every call (Zoom does both with one toggle). Paste each transcript into the configurator's Discovery surface, or share with the agent in chat-mode if you're working that way. Don't try to summarize them yourself first — the raw transcript is the unit of evidence, and your summary will smooth out the parts that should make you uncomfortable.
After 3+ sessions are loaded, the agent extracts:
- Themes that recurred. 3+ mentions = signal. 2 = noise. 1 = ignore unless it's a critical bug.
- Confusion points. Every spot where multiple users paused, asked "wait, what does this do," or clicked the wrong thing first. Every one of those is a UX fix, not a "user education" problem.
- Words users used. The actual vocabulary they reach for. This is the copy your landing page, your onboarding, and your in-product strings should use. If five users called it a "workspace" and your UI calls it a "tenant," your UI is wrong.
- Feature requests. Deduplicated and ranked by frequency × audience-fit. A feature requested by 6 of 10 right-fit users is not the same as a feature requested by 1 enthusiastic edge case.
- Retention signal. The yes/no count on "would you miss this." If fewer than half say yes by the time you've talked to 10, you don't have a feature problem — you have a positioning problem or a wrong-audience problem, and the next month should be spent re-running the audience hypothesis, not adding features.
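As a rough picture of where that synthesis lands, here is a sketch of the output as a data shape, with the frequency × audience-fit ranking written out. The field names and scoring are assumptions for illustration, not the agent's actual output format.

```typescript
// Illustrative sketch of a synthesis result — not the agent's real output format.
interface FeatureRequest {
  description: string;
  mentions: number;    // how many of the 10 users asked for it
  audienceFit: number; // 0..1 — how closely those users match the PROJECT.md audience
}

interface SynthesisResult {
  themes: { label: string; mentions: number }[]; // 3+ mentions = signal
  confusionPoints: string[];                     // each one is a UX fix
  userVocabulary: Record<string, string>;        // their word -> your current UI word
  featureRequests: FeatureRequest[];
  wouldMissIt: { yes: number; no: number };      // retention signal
}

// Rank requests by frequency × audience-fit, so 6 right-fit mentions
// outweigh 1 enthusiastic edge case.
function rankRequests(requests: FeatureRequest[]): FeatureRequest[] {
  return [...requests].sort(
    (a, b) => b.mentions * b.audienceFit - a.mentions * a.audienceFit
  );
}
```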
The kill list — what comes off, not just what goes on
The agent maintains decisions.kill_list_candidates throughout the build. At each iteration it surfaces 3-5 features that aren't earning their place: low usage from analytics (sub-skill 08), negative dogfooding notes from your Notes tab, the agent's own observations of complexity-without-value when it's reading the codebase.
Most founders default to "keep everything." The case for cuts is unsentimental:
- Removing ships faster than adding. A delete-PR is reviewable in an hour.
- Removing eliminates maintenance debt forever, not just today.
- Removing focuses the product so the remaining surface area is sharper and easier to explain.
- A smaller product is a more honest product. It doesn't promise things it doesn't do well.
Per item, the decision is one of two:
- Remove. Full code + tests + schema deletion. Bump the version (likely MAJOR if the deletion is user-visible — see sub-skill 17 ship-checklist for the semver rules). Note it in the CHANGELOG.
- Keep with documented rationale. Write the reason into decisions.kept_despite_kill_recommendation with a revisit-after-launch date. The act of writing the rationale forces honesty; "I just like it" is a fine reason as long as it's the reason on paper.
The kill list is reviewed at every monthly release. Not optional.
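If it helps to see the two decision paths side by side, here is a hedged sketch of how the decisions record could hold them. The field layout is an assumption; the configurator defines the real structure.

```typescript
// Illustrative sketch — the actual decisions structure is defined by the configurator.
interface KillListCandidate {
  feature: string;
  evidence: string[]; // e.g. low usage from analytics, a negative dogfooding note
}

interface KeptDespiteKillRecommendation {
  feature: string;
  rationale: string;          // "I just like it" is fine, as long as it's on paper
  revisitAfterLaunch: string; // ISO date to re-review
}

interface Decisions {
  kill_list_candidates: KillListCandidate[];
  kept_despite_kill_recommendation: KeptDespiteKillRecommendation[];
}

const decisions: Decisions = {
  kill_list_candidates: [
    { feature: "CSV export", evidence: ["touched in <2% of sessions", "note 05-12: never used it myself"] },
  ],
  kept_despite_kill_recommendation: [
    { feature: "dark mode", rationale: "I just like it", revisitAfterLaunch: "2024-09-01" },
  ],
};
```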
Lifecycle email — the retention lever
Transactional email (verification, receipts, password resets) is table stakes. Lifecycle email is what moves retention. Sub-skill 04 (lifecycle email) sets up the eight cadences:
- Day-1 welcome. Founder-personal, plain-text, signed by name. One question: "What brought you here?" You will get replies. Reply to all of them.
- Day-3 onboarding tip. One specific thing the user probably hasn't discovered yet, with a deep link.
- Day-7 re-engagement. Only sent if usage has dropped to zero. Light touch.
- Day-14 milestone-or-bust. Last chance to reactivate. Founder-personal again. "I noticed you haven't been back — was it something we got wrong?" These convert better than any "we miss you" copy ever written.
- Weekly digest. Content, social proof, what's new. Skippable, but the open rate trains your domain reputation.
- 30/60/90d re-engage. Tiered, with the 90d version being more direct: "we'll stop emailing you unless you'd rather we didn't."
- Win-back after deactivation. One email, 30 days after they cancel/delete, with a single "what would have made this work" question.
- MAJOR / MINOR feature announcement. Sent with each release. Tied directly to the CHANGELOG entry the agent generates.
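One way to picture those eight cadences is as plain data. This sketch is illustrative only; the names and triggers are assumptions, not what sub-skill 04 actually generates.

```typescript
// Illustrative only — not the config sub-skill 04 generates.
type Trigger =
  | { kind: "days_after_signup"; days: number; onlyIfInactive?: boolean }
  | { kind: "weekly" }
  | { kind: "days_after_deactivation"; days: number }
  | { kind: "on_release"; level: "MAJOR" | "MINOR" };

interface LifecycleEmail {
  name: string;
  trigger: Trigger;
}

const cadences: LifecycleEmail[] = [
  { name: "day-1-welcome",        trigger: { kind: "days_after_signup", days: 1 } },
  { name: "day-3-onboarding-tip", trigger: { kind: "days_after_signup", days: 3 } },
  { name: "day-7-re-engagement",  trigger: { kind: "days_after_signup", days: 7, onlyIfInactive: true } },
  { name: "day-14-milestone",     trigger: { kind: "days_after_signup", days: 14, onlyIfInactive: true } },
  { name: "weekly-digest",        trigger: { kind: "weekly" } },
  // Tiered re-engage: the 90-day version carries the more direct copy.
  { name: "re-engage-30d",        trigger: { kind: "days_after_signup", days: 30, onlyIfInactive: true } },
  { name: "re-engage-60d",        trigger: { kind: "days_after_signup", days: 60, onlyIfInactive: true } },
  { name: "re-engage-90d",        trigger: { kind: "days_after_signup", days: 90, onlyIfInactive: true } },
  { name: "win-back",             trigger: { kind: "days_after_deactivation", days: 30 } },
  { name: "feature-announcement", trigger: { kind: "on_release", level: "MINOR" } },
];
```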
Frequency cap: max 2 lifecycle emails per week per user, ever. This is enforced at the send layer, not at the campaign layer — campaigns don't know about each other.
Critical: one-click unsubscribe via a users.email_lifecycle column distinct from transactional opt-in. If a user opts out of lifecycle, they still get receipts. If you collapse the two columns into one, you'll either spam people who unsubscribed or stop sending receipts to people who need them. Both are worse than the small schema cost of two columns.
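Here is a minimal sketch of what enforcement at the send layer looks like under those rules. The column, function, and parameter names are assumptions for illustration; the point is that transactional mail bypasses the gate while lifecycle mail checks both the opt-out column and the weekly cap.

```typescript
// Illustrative sketch — column and function names are assumptions.
interface User {
  id: string;
  email: string;
  email_lifecycle: boolean; // lifecycle opt-in, separate from transactional
}

type EmailKind = "transactional" | "lifecycle";

const MAX_LIFECYCLE_PER_WEEK = 2;

// countLifecycleSendsThisWeek and deliver stand in for whatever your
// send log and email provider actually expose.
async function sendEmail(
  user: User,
  kind: EmailKind,
  message: { subject: string; body: string },
  countLifecycleSendsThisWeek: (userId: string) => Promise<number>,
  deliver: (to: string, message: { subject: string; body: string }) => Promise<void>
): Promise<boolean> {
  if (kind === "transactional") {
    // Receipts, resets, verification always go out, regardless of lifecycle opt-out.
    await deliver(user.email, message);
    return true;
  }

  // Lifecycle: respect the opt-out column and the global 2-per-week cap.
  if (!user.email_lifecycle) return false;
  if ((await countLifecycleSendsThisWeek(user.id)) >= MAX_LIFECYCLE_PER_WEEK) return false;

  await deliver(user.email, message);
  return true;
}
```

Because the cap lives inside the send function rather than in each campaign, a new campaign can't accidentally bypass it.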
Cadence — what to do every week, every month, every release
Weekly:
- Read every dogfooding note from the past week (yours and any proxies).
- Scan the analytics dashboard — sub-skill 08 wires up the admin tabs (sub-skill 07) so this is one URL, not five.
- Reply to every feedback submission. Every one. Even "thanks" works; silence trains people not to send the next one.
Monthly:
- Synthesize across the month's discovery sessions + dogfooding notes + analytics. The agent runs this on demand.
- Ship one MINOR release with the top 3 changes. Not 8. Three.
- Write the CHANGELOG entry (the agent drafts; you edit).
- Send the feature announcement email (cadence 8 above).
Per release:
- Bump the version (semver — sub-skill 17 ship-checklist enforces the rules).
- Git tag the release.
- Update the public /changelog page.
- Show a "what's new" first-sign-in card to returning users on their next session. Not a tour-style onboarding modal; users hate those and they never work. A small dismissible card with two bullets and a "details" link.
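For the "what's new" card specifically, here is a small sketch assuming a React frontend; the component name and storage key are made up, so adapt it to whatever your stack actually uses.

```tsx
// Illustrative sketch, assuming a React frontend — adjust to your stack.
import { useState } from "react";

export function WhatsNewCard({ version, bullets }: { version: string; bullets: string[] }) {
  // Remember dismissal per release so the card shows once per version, not once per session.
  const storageKey = `whats-new-dismissed-${version}`;
  const [dismissed, setDismissed] = useState(
    () => typeof window !== "undefined" && localStorage.getItem(storageKey) === "1"
  );
  if (dismissed) return null;

  return (
    <aside role="status">
      <strong>What's new in {version}</strong>
      <ul>
        {bullets.slice(0, 2).map((b) => (
          <li key={b}>{b}</li>
        ))}
      </ul>
      <a href="/changelog">Details</a>
      <button
        onClick={() => {
          localStorage.setItem(storageKey, "1");
          setDismissed(true);
        }}
      >
        Dismiss
      </button>
    </aside>
  );
}
```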
The agent helps with all of it. The SKILL.md operating rules say the agent owns the synthesis, the CHANGELOG draft, the email draft, the version bump, and the tag. You own the product judgment, the user calls, and the kill-list decisions.
What "done" looks like
When do you stop iterating and trust the product? Honest answer: never, but the cadence shifts.
- Months 1-3: weekly improvements, monthly MINOR releases, all 10 user calls done in the first month and re-run with a new cohort of 10 in month 3.
- Months 4-6: bi-weekly improvements, releases as features warrant. User calls move to 5 per quarter, focused on whichever segment is converting best.
- Months 7+: monthly improvements, only on signal. The product has earned the right to be left alone for stretches. You're now optimizing rather than inventing.
The trap to avoid: skipping straight to month 7 cadence in month 2 because you're tired or because the launch felt like the finish line. The first 6 months of post-launch work do more for the product's trajectory than any other 6 months will, because every fix lands while the audience is still small enough to feel each one.
Closing
Post-launch is the part nobody puts in the demo video and the part that decides everything. Dogfood daily, talk to ten users, run the kill list, send the lifecycle emails, ship monthly. The agent does the heavy lifting on synthesis and drafting; you bring the judgment and the conversations.
For the work that comes before this stage, see pre-build due diligence. For thinking about what's next once the product is ramping, see idea to funding and accelerators.