
Ship an MVP in 30 Days: A Founder’s Scope-and-Cut Playbook

April 27, 2026

Most founders don’t miss their launch window because they are slow — they miss it because they built the wrong thing at the wrong size.

Thirty days is a real constraint, not a marketing number. Founders who ship inside it are not faster coders — they are better at cutting. They start with a single hypothesis, draw a hard line around the features that test it, and refuse to move the line even when the next feature sounds essential. The constraint is the method.

This post covers:

  • What an MVP actually is (and the V1 trap most teams fall into)
  • The one-sentence test that forces honest scope decisions
  • A scope-and-cut framework with a launch-blocker filter and comparison table
  • How Dropbox shipped before writing a line of production code
  • A week-by-week 30-day plan with activation built in from day one

What “MVP” Actually Means

The term gets abused so consistently it has nearly lost meaning. Most teams ship a V1 dressed up as an MVP — more features than needed, more polish than earned, months past the intended deadline. They call it an MVP because they did not build everything they originally wanted to. That is not an MVP. That is a delayed full product.

A real MVP is the smallest thing you can build to test a specific hypothesis. Not “is our product generally useful?” — that is too broad. A specific hypothesis: “if we build automated expense categorization for solo consultants, they will pay $29 per month instead of using spreadsheets.” Every feature you include should directly serve that test. Everything else is a distraction.

Before scoping anything, write the hypothesis in one sentence. The format: “If we build [Feature X] for [Audience Y], they will [Action Z].” Action Z must be measurable — paying, returning, inviting, referring. Not “understanding the product” or “feeling the value.” If you cannot write the sentence, your scope is not ready.

At Decagrowth, this sentence is our entry condition for any build conversation. Without it, every subsequent scope decision is arbitrary.

MVP Thinking vs. V1 Thinking

| Decision | MVP thinking | V1 thinking |
| --- | --- | --- |
| User settings page | Skip — hardcode sensible defaults | Build it; users will want it |
| Edge cases | Handle with a “contact us” link | Write code for each one |
| Admin dashboard | Use a spreadsheet or direct DB access | Build a proper UI |
| Multi-user / teams | Single user only to start | Multi-tenant from the beginning |
| Integrations | One import method — CSV or manual | Connect five tools via API |
| Email notifications | Manual emails from your inbox | Automated drip sequence |

The right column is not wrong. It is just not for month one. Building V1 features at MVP stage means three months of engineering before you have any signal that the core hypothesis is even correct.

The Scope-and-Cut Framework

Start With the User’s First Session

Draw the flow a new user takes from signup to first real value. Not the full product tour — just the critical path. What is the minimum number of screens and actions between “signed up” and “got the outcome they came for”? Every feature that does not appear on this critical path is a candidate for cutting.

For a project management tool, the critical path might be: create account, create first project, add first task, mark it done. Four steps. An MVP does not need recurring tasks, due-date reminders, or color labels to validate whether people will pay to organize their work this way.

Apply the Launch-Blocker Test

For every feature on your list, ask one question: can we launch without it? If yes, it is not in scope. If no — meaning users literally cannot complete the hypothesis test without it — it stays.

This test is harder to apply honestly than it sounds. Almost every feature feels launch-critical to the person who built it. The discipline is asking whether the feature breaks the hypothesis test specifically, not whether a user might want it eventually. Most “might want” features fail.
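The launch-blocker test is simple enough to run as a literal triage pass over your feature list. A minimal sketch — the feature names and the can-launch-without-it answers below are hypothetical examples, not a prescription:

```python
# Triage a feature list with the launch-blocker test.
# Each entry: (feature name, "can we launch without it?")
features = [
    ("create first project", False),  # users cannot run the hypothesis test without it
    ("mark task done",       False),
    ("user settings page",   True),   # hardcode sensible defaults instead
    ("due-date reminders",   True),
    ("color labels",         True),
]

# Only features you literally cannot launch without survive into the build list.
build_list  = [name for name, skippable in features if not skippable]
post_launch = [name for name, skippable in features if skippable]

print("Build now:  ", build_list)
print("Post-launch:", post_launch)
```

The point of writing it down this way is that every feature gets exactly one boolean answer; there is no “high priority but maybe later” middle column to hide in.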

Handle Edge Cases With UX, Not Code

A manual review queue, a “coming soon” placeholder, or a plain support email can buy months of runway that would otherwise go to engineering. Clear error states and simple fallback flows teach you which edge cases actually occur in production versus which ones you invented in your head. Teams that handle edge cases with UX instead of code typically ship two to three weeks faster — and arrive at launch with a real list of what is worth building next.

How Dropbox Shipped Before Writing a Line of Production Code

In 2007, Drew Houston posted a three-minute demo video showing what Dropbox would do: sync files seamlessly across devices. The software shown in the video did not exist as a production system. What existed was a prototype convincing enough to demonstrate the hypothesis — that people wanted frictionless file sync badly enough to sign up and wait for the real thing.

The video drove 70,000 signups overnight. Houston had his evidence before committing the engineering investment to build real sync infrastructure at scale. The MVP was the video, not the product. It answered the hypothesis in days at near-zero cost.

The lesson is not “make a video instead of building.” It is that the hypothesis test does not always require production software. A landing page with a waitlist, a concierge flow where you do the work manually, or a prototype demo can answer core questions faster and more cheaply than a full build. When you do build, you already have evidence you are building the right thing.

The moment real users show up, you need to know what action predicts whether they will stick around. Read the guide to finding your activation metric before your first session — knowing what to measure from day one means you do not spend week four guessing why users dropped off.

A 30-Day Week-by-Week Plan

This plan assumes you have a defined hypothesis going in. If you do not, run five user interviews first, then start the clock.

Week 1: Define and Draw

Write the one-sentence hypothesis. Draw the critical-path user flow on paper — no wireframing tools, no high-fidelity mockups. List every feature your team has mentioned, then apply the launch-blocker test to each one. What survives is your build list. Set up your analytics events so you can measure the activation action from the first real user session. Do not write code yet.
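“Set up your analytics events” can be as small as a single tracking call wired to the activation action. A minimal sketch — the `track()` helper, event name, and in-memory log are hypothetical stand-ins for whatever analytics tool you actually use:

```python
# Instrument the activation event before writing product code.
import time

EVENT_LOG = []  # stand-in for your analytics backend

def track(user_id, event, **props):
    """Record one analytics event with a timestamp and arbitrary properties."""
    EVENT_LOG.append({"user_id": user_id, "event": event, "ts": time.time(), **props})

# Fire it at the exact moment the hypothesis-testing action completes,
# e.g. the user marks their first task done:
track("user_42", "first_task_completed", project="demo")
print(EVENT_LOG[-1]["event"])
```

What matters is that the event exists on day one, so week-three sessions produce measurable data instead of anecdotes.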

Week 2: Build the Core Loop Only

Build exactly what is on the critical path. Skip the settings page, use hardcoded defaults, and handle edge cases with error messages and a support link. Your goal by end of week two: a single user can complete the hypothesis test from start to finish, even if it is rough. Do not polish anything a user will not see in their first session.

Week 3: Ship to Five to Ten Real Users

Do not run a public launch. Recruit five to ten people who match your hypothesis audience and watch their first sessions — live if possible, recorded if not. Your job is not to explain the product. It is to watch where they get confused, where they drop, and whether they reach the outcome defined in your hypothesis. Collect feedback from at least 60 percent of these users within two weeks. That signal is the fastest way to confirm or kill the hypothesis.

Week 4: Fix the Top Three Drop-offs

Do not add features. Look at session data and fix the three moments where users most commonly stopped progressing. That is the only work for week four. By the end, measure your activation rate — the percentage of new users who complete the hypothesis-testing action in their first session. That number is the baseline every post-launch improvement compounds on.
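The activation-rate arithmetic is deliberately simple. A minimal sketch, using hypothetical first-session records — the only data you need is, per new user, whether the activation action fired in session one:

```python
# Compute activation rate: share of new users who completed the
# hypothesis-testing action in their first session.
first_sessions = [
    {"user": "a", "activated": True},
    {"user": "b", "activated": False},
    {"user": "c", "activated": True},
    {"user": "d", "activated": False},
    {"user": "e", "activated": True},
]

activation_rate = sum(s["activated"] for s in first_sessions) / len(first_sessions)
print(f"Activation rate: {activation_rate:.0%}")  # 3 of 5 users activated -> 60%
```

With five to ten users the number is noisy, but it still gives week four a single target to move instead of a vague sense that “onboarding needs work.”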

What to Do This Week

  • Write the one-sentence hypothesis for your current or planned product. If you cannot write it in one sentence, your scope is not defined yet.
  • Draw the critical-path user flow — signup to first real value, on paper, in five steps or fewer.
  • Apply the launch-blocker test to every feature on your list. Mark each one “launch blocker” or “post-launch.” Be honest: most land in the second column.
  • Identify your activation action before writing a line of code. What single event in a user’s first session tells you the hypothesis is being tested? Instrument it from the start.
  • Recruit five users who match your hypothesis audience now, before you have anything to show. The earlier they know you want their feedback, the faster your first real sessions happen.

Shipping in 30 days is quiet work. It is mostly saying no — to features, to polish, to the internal voice that insists users will not understand unless you add one more thing. Founders who ship consistently trust the hypothesis more than their intuition about what users might want. The discipline compounds over time: each shipped MVP teaches you something no roadmap meeting could.

If you are working through scope decisions right now or want a peer perspective on whether your MVP is sized correctly, reach out. We do this work with our own products and with the founders we partner with. Read more about how Decagrowth operates before deciding if we’re the right conversation.