How to Know If Your Startup Idea Is Not Viable: 6 Kill Signals to Watch
Six signals that your startup idea is not viable — written down before you launch, so you can actually walk away. The kill-criteria checklist.
A 2.4% conversion rate is a Rorschach test. To one founder it's "almost 3%, let me iterate the headline." To another it's "below my kill line, I'm out." The number is identical. The decision is opposite. The difference isn't the data — it's whether the founder wrote down their kill criteria before they ever launched the test.
We've watched dozens of founders run validation tests. The ones who shipped real businesses wrote down — in advance — exactly what would make them walk away. The rest negotiated with the data after it came in. Validation without kill criteria isn't validation; it's a slow-motion confirmation bias machine.
Every founder writes down the conditions under which they'd build. Almost none write down the conditions under which they'd kill. Without pre-committed kill criteria, validation devolves into rationalization. Below are the six specific signals that should kill an idea — and the rule that you must write them down before the test launches.
This is the negative twin of when is a startup idea actually validated. The positive piece tells you when to stop testing and ship. This one tells you when to stop testing and walk.
Why kill criteria must be pre-committed
Here's the psychological mechanism, briefly, because it explains everything that follows.
Before you launch a validation test, you have no skin in the game. The numbers are abstract. "3% would be a no" is easy to write because nothing is at stake yet. After you launch, three things change at once: you've spent money (sunk cost), you've told people about the idea (social cost), and you've built an identity around being the founder of this thing (ego cost). Now "3% is a no" reads differently. Now it's personal.
At that point, motivated reasoning takes over. Your brain starts looking for reasons the bar should be lower. Maybe the seasonality was off. Maybe the headline confused people. Maybe Reddit isn't the right channel. Any one of these is occasionally true; most of the time, none of them are. They're what your brain produces when the result threatens your identity.
Pre-commitment short-circuits this. If you wrote down the kill rate before launch, the comparison is mechanical: did the result clear the bar or not? The version of you that wrote the criterion was calmer and didn't yet need the answer to be yes. Trust that version.
The discipline is simple to state and hard to keep: if you didn't write the kill criterion before the test, the test will always say yes. Rationalization fills the vacuum.
Kill signal 1: paid conversion below threshold on two channels
A pre-committed conversion threshold (5% B2C waitlist, 1.5% pre-payment, 2% B2B call-booking) that the test must clear, on at least two independent paid channels.
The metric: conversion rate per channel after €150–€250 of paid traffic across two channels for 10–14 days.
The threshold: below the bar on both channels, or above on one and dramatically below on the other (e.g., 6% Reddit, 0.4% Meta). The latter looks like a partial yes; it's a niche-only signal that won't scale.
Why it matters: one channel can be lucky. Two cannot. A loud Reddit thread isn't a market — it's a subreddit. The DocSend deck-traction data is blunt about this: investors weight engagement that holds across channels far more than a single hot acquisition source, because the second channel is the proxy for "would this actually scale."
The rationalization founders use to ignore it: "Reddit was the real test, Meta targeting was off, let me just retry with a different audience." We've watched this loop play out four iterations deep before the founder realized the offer, not the channel, was the problem. Hustle Fund's investors put it bluntly when they called this kind of pattern "vanity metrics dressed up as validation" — a hot single-channel number that founders read as a green light because they want to.
Real example, killed at this signal: AI Slack assistant for ADHD knowledge workers. Threshold: 4% across Reddit and Meta. Result: 7.1% Reddit, 1.6% Meta. We killed it. Full math in when is a startup idea actually validated.
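The two-channel rule is mechanical enough to express in a few lines. Here's a minimal Python sketch (function and channel names are illustrative, not part of any tool; it simplifies the "dramatically below" case to any channel missing the bar):

```python
def channel_kill_check(rates: dict[str, float], threshold: float) -> str:
    """Signal-1 rule: every paid channel must clear the pre-committed bar.

    Simplification: any single channel below the threshold fires the signal,
    since one hot channel plus one cold one is a niche-only result.
    """
    passing = [ch for ch, rate in rates.items() if rate >= threshold]
    if len(passing) == len(rates):
        return "pass"
    if passing:
        return "kill: niche-only signal (one channel cleared, one missed)"
    return "kill: below threshold on all channels"

# The ADHD-assistant example above: 4% bar, 7.1% Reddit, 1.6% Meta.
verdict = channel_kill_check({"reddit": 0.071, "meta": 0.016}, threshold=0.04)
print(verdict)  # kill: niche-only signal (one channel cleared, one missed)
```

The point of writing it as code, even on paper, is that there's no room for "almost": the comparison either clears the bar or it doesn't.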
Kill signal 2: the "why" is unexplainable after 14 days
Sometimes a test converts and you genuinely can't say why. A conversion you can't explain is a conversion you can't reproduce when you ship the actual product — you'll guess wrong about which features deliver on the implicit promise, and the people who signed up will quietly bounce.
The metric: can you write two specific sentences naming who converted and why, after 14 days of campaign data plus 5 customer conversations?
The threshold: if the answer is hand-waving — "I think the targeting was good?" — kill it. Lenny Rachitsky's data on early-stage founders puts the customer-conversation floor at 5–10 real conversations before you can claim a "why." Below that, you're guessing.
Why it matters: lucky conversion is worse than no conversion. No conversion sends you back to the drawing board. Lucky conversion gets you to build, with real waitlist signups breathing down your neck and no idea what they actually wanted.
The rationalization founders use to ignore it: "The number is good, I'll figure out the why during the build." You won't. The why is what tells you which features matter. Without it, the build is six months of guessing.
A clean Signal-2 pass reads: "Indie founders, 27–40, mostly bootstrapped, converting on the time-saving headline because they hate manually setting up Make zaps." If yours doesn't read like that, walk.
Kill signal 3: the audience can't be reached organically later
Paid traffic works during the test. It does not work forever. If your converters are "people who clicked a Meta ad about anti-aging" with no other coherent identity, you don't have an audience — you have a paid-acquisition tax that compounds for the life of the business.
The metric: within 12 months of launch, can you reach the same kind of person at €0 — content, community, search, word of mouth?
The threshold: no plausible €0 channel within 12 months. If you can't sketch one without "I'll figure it out later," kill before the test runs more spend.
Why it matters: ad costs only go up. CAC compounds. A business that can't acquire customers organically eventually pays its margins to Meta and dies. We've watched founders convert at 3.2% on Meta, ship the product, and burn out 18 months later because every customer cost €40 in paid acquisition forever.
The rationalization founders use to ignore it: "Meta works now, I'll layer in SEO and community later." Nobody layers in SEO and community later. They layer it in first, or never. Pieter Levels' graveyard of 70+ killed ideas reads, in retrospect, like a ledger of which audiences were and weren't reachable from his own Twitter and Indie Hackers presence — the survivors had a built-in organic angle from day one.
Kill signal 4: sales cycle exceeds half your runway
Particularly for B2B. The deal closes — but it takes 9 months, and you have 6 months of runway. The math doesn't care that the idea is good.
The metric: time from first ad impression to first invoice paid. For B2B, "book a call" is not the end of the funnel.
The threshold: sales cycle > 50% of your remaining runway. Solo founders without external capital rarely survive cycles longer than 90 days.
Why it matters: validation is necessary but not sufficient. You can validate demand cleanly and still die on cash-flow timing. Enterprise security tools, regulated B2B verticals, and most procurement-driven SaaS all have 6–12 month cycles. If your runway can't span the cycle plus the gap to the next deal, the idea isn't viable for you even if it's viable in the abstract.
The rationalization founders use to ignore it: "I'll raise once I have the first paying customer." If your runway runs out before that customer pays, you'll raise from desperation, which is the worst position in fundraising. Or you won't raise at all.
Real example, killed at this signal: enterprise security tool, 2.3% LinkedIn-to-demo conversion (above threshold). Comparable products had 11-month time-to-paid. The founder had 8 months of runway and no investors. Idea was fine; timing was lethal.
Kill signal 5: the offer requires audience-building you don't have
Some products only work if the founder has — or is willing to spend 18 months building — a personal audience. Courses, paid newsletters, creator tools, founder-led SaaS in crowded categories. If the GTM motion is "build a Twitter following first" and you don't have one, the idea is dead in a way no conversion test will catch.
The metric: owned audience size required for launch (newsletter subscribers, X followers, Discord members) vs. owned audience size you actually have.
The threshold: GTM requires >5,000 owned audience members within 6 months and you're under 500. Building an audience is 12–24 months of consistent output, and most founders won't do it no matter what they tell themselves.
Why it matters: a landing-page test gives you conversion on borrowed (paid) traffic. It tells you nothing about whether your launch audience actually exists. Pieter Levels' wins were possible because he was already on Indie Hackers and had a Twitter following — that wasn't a separate variable, it was the GTM. Strip his audience out and the same conversion numbers don't ship a business.
The rationalization founders use to ignore it: "I'll build the audience while I build the product." Both at once is two-year work. Most courses, newsletters, and creator products that look like overnight wins were 18 months of audience-building first, then six weeks of "launch."
Real example, killed at this signal: €299 course on niche analytics. 4% conversion, clean why, reachable audience. Killed after walking through launch math: needed 8,000 newsletter subscribers, had 200.
Kill signal 6: you'd be miserable working on this for 24 months
The least-quantitative signal, and the one most validation frameworks skip. Most pre-MVP ideas take 18–36 months to find product-market fit if they ever do. The first 6 months are exciting; the next 18 are grinding. Without curiosity about the domain or interest in the problem, you'll quit. Quitting after 14 months of work is more expensive than quitting after 14 days of validation.
The metric: write a one-page description of your daily life 12 months from now if this works — the customers, the bug reports, the conferences, the support tickets. Read it back.
The threshold: if the honest answer is "I'd rather be doing something else, but the math works," kill it. No compensation structure makes 24 months of grinding on a problem you don't care about feel okay.
Why it matters: founder burnout is the most common cause of startup death after the validation phase. It's not in the data because it gets logged as "pivoted" or "ran out of runway." Most of those are "founder couldn't bring themselves to keep going." Validation isn't just about the market; it's about whether you're the founder this idea needs.
The rationalization founders use to ignore it: "The opportunity is too good to pass up." It isn't. There's another opportunity that's almost as good and runs on a problem you actually care about. The world has more good problems than founders.
Real example, killed at this signal: B2B accounts-payable tool with strong validation result. Founder background: ex-marketing, hated finance, took the idea because conversion looked clean. Killed by Signal 6. Three months later he ran the same playbook on a content-marketing tool — less clean conversion, but a domain he'd happily live in. He shipped that one.
How to write your kill criteria — the one-page template
Before any validation test launches, fill this out and save it where you'll re-read it after the data comes in. It takes 20 minutes. It is the highest-leverage 20 minutes in pre-MVP work.
- Idea name & one-line offer: the headline of the landing page, in plain language.
- Pre-committed conversion threshold: exact percentage, per channel. "Below 4% on either Reddit or Meta over €150 spend = kill."
- Pre-committed "why" check: "If after the test we can't write 2 sentences explaining who converted and why, kill."
- Organic acquisition story: name the €0 channel for year two. If you can't name one credibly, kill before the test.
- Sales cycle ceiling: "If average time-to-commitment exceeds [X] days, kill." Set X = 50% of remaining runway.
- Audience prerequisite check: "Launch needs [X] owned audience members. We have [Y]. If Y < X/10, kill."
- 24-month liveability check: "Can we honestly work on this for 24 months?" Yes / no. If no, kill.
- Sign & date: with a co-founder if you have one. The signature is the pre-commitment.
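The whole template reduces to a handful of fields and six mechanical checks. A sketch of it as a Python dataclass, assuming illustrative field names (this is not a LemonPage schema — it's the paper worksheet expressed as code):

```python
from dataclasses import dataclass


@dataclass
class KillCriteria:
    """One-page kill-criteria sheet, filled in before the test launches.

    Field names are illustrative. A non-empty result from fired_signals()
    means kill — one signal is enough.
    """
    conversion_threshold: float      # per-channel bar, e.g. 0.04
    channel_rates: dict             # per-channel results, filled in after the test
    why_is_written: bool             # two specific sentences naming who and why
    organic_channel: str             # named zero-cost channel, "" if none
    sales_cycle_days: int            # first impression to first invoice paid
    runway_days: int                 # remaining runway
    audience_required: int           # owned audience the launch needs
    audience_have: int               # owned audience you actually have
    happy_for_24_months: bool        # the liveability check, answered honestly

    def fired_signals(self) -> list:
        fired = []
        if any(r < self.conversion_threshold for r in self.channel_rates.values()):
            fired.append(1)          # below bar on a paid channel
        if not self.why_is_written:
            fired.append(2)          # unexplainable "why"
        if not self.organic_channel:
            fired.append(3)          # no plausible zero-cost channel
        if self.sales_cycle_days > self.runway_days * 0.5:
            fired.append(4)          # cycle exceeds half the runway
        if self.audience_have < self.audience_required / 10:
            fired.append(5)          # audience gap too large (Y < X/10)
        if not self.happy_for_24_months:
            fired.append(6)          # liveability check failed
        return fired


# The enterprise-security example from signal 4: conversion cleared the bar,
# but an ~11-month cycle against 8 months of runway fires signal 4 alone.
sheet = KillCriteria(
    conversion_threshold=0.02,
    channel_rates={"linkedin": 0.023},
    why_is_written=True,
    organic_channel="founder-led security content",
    sales_cycle_days=330,
    runway_days=240,
    audience_required=0,
    audience_have=0,
    happy_for_24_months=True,
)
print(sheet.fired_signals())  # [4] — one hit is enough; walk
```

Filling the dataclass in before the test is the pre-commitment; re-running `fired_signals()` after the data comes in is the mechanical comparison that leaves no room for a generous re-read.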
The point of writing this down isn't bureaucracy. It's that future-you, holding a 2.4% conversion result, will look for any reason to keep going. Past-you, who wrote the document, didn't need the answer to be yes. The document is past-you protecting future-you from the predictable failure of motivated reasoning.
What kill criteria don't do
Kill criteria don't prevent pivots. If your test fails on Signal 1 but you spotted, in customer conversations, a real adjacent need — that's a new idea, not a continuation. Kill the original cleanly, write fresh kill criteria for the adjacent idea, and run a new test. Don't treat the new direction as a salvage operation on the old one.
Kill criteria also don't replace judgment. They make sure judgment runs on the version of you that wasn't emotionally invested. We keep a graveyard of the ideas we've killed — names, dates, results, the kill signal that fired. The ideas that survived cleared all six signals in writing, before the test ran. None were rescued post-hoc by a generous re-reading of borderline data.
Kill is a feature
Most founders treat killing as failure. It's the opposite. Killing fast is the only way you get to the idea that works.
Pieter Levels has killed 70+ ideas. The ones we know him for — Nomad List, Remote OK, Photo AI — survived because the other 70 were killed quickly enough that he could afford to start them. A founder who kills one bad idea cleanly in three weeks has more shots than a founder who drags a bad idea out for 18 months.
The math is brutal and freeing: if a validation test costs €150 and three weeks, you can run 12 of them in a year. If you can't bring yourself to kill, you'll run one — for 18 months — and then either ship a doomed product or quit anyway, with nothing learned.
Pre-committed kill criteria are what make 12 shots a year possible. Without them, you have one shot, and the shot's already aimed at your foot.
How LemonPage fits
Kill criteria run on any tool. LemonPage compresses the validation sprint — page, ads, conversion tracking, kill-criteria worksheet — into one workflow so the mechanical parts take a weekend instead of a month. The hard parts — writing honest kill thresholds, respecting them after the data, walking away from an idea you wanted to work — those stay with you.
Tools don't walk away from ideas. Founders do, when they've pre-committed to the conditions that make walking away the obvious move.
Run a validation test on LemonPage →
Common questions
What are kill criteria for a startup idea?
Kill criteria are the conditions you write down before launching a validation test that, if met, mean you walk away from the idea — no iteration, no "one more headline test," no pivot. They are pre-committed thresholds (conversion below X across two channels, sales cycle longer than your runway, miserable working on it for 24 months) that protect you from rationalizing weak results once you have skin in the game.
Why do kill criteria need to be pre-committed?
Once founders see real numbers, motivated reasoning kicks in fast. A 2.4% conversion rate looks like "almost there" if you decide after the test, and like a clean kill if you committed to 3% before launch. Loss aversion and sunk-cost bias quietly rewrite the bar after the fact. Writing kill criteria before the test is the only way they survive contact with a result you don't like.
How is a kill criterion different from a "fail" threshold in validation?
A fail threshold says "this test didn't work." A kill criterion says "this idea doesn't work and we're stopping." The first invites another test; the second ends the project. Most validation guides describe fail thresholds and call them kill criteria. The semantic difference is small. The behavioral difference is the entire ballgame.
What if only one of the six kill signals is hit?
One signal is enough. The six aren't a scorecard where you tally points — they're six independent ways an idea can be unworkable. A great offer with a 60-month sales cycle and 6 months of runway is dead even if conversion is brilliant. Hitting one kill signal means hitting one. Walk.
Can I rewrite my kill criteria mid-test?
No. The whole point is that they're pre-committed. You can decide before the next test that you've learned something and want a different threshold, but you can't move the bar while the test is running or after the result is in. Rewriting kill criteria after seeing the data is exactly the failure mode they exist to prevent.
Related reading: when is a startup idea actually validated · how to validate without an MVP · the validation stack: tools that actually work · the cheapest way to validate a product idea.