Successful businesses adapt.

There’s no “one size fits all” advice because:

  • What worked 15 years ago is garbage today
  • What works in San Francisco fails in Adelaide
  • What works for $15/month SaaS dies with $10k consulting

You might be smarter than everyone else, but you’re definitely dumber than yourself in 3 months. The only way forward is gathering real data.

The problem with low information

When you start, you have almost no data. Maybe 10 close friends you can ask. Maybe 50 cold emails a month that get a 1% response rate (hopefully). You’re making decisions with 1-2 data points.

Let’s simulate it. Say you’re testing two email “pitches”, and you’ve somehow found an amazing pitch (B, 2% success) that is twice as good as your normal pitch (A, 1% success).

[Interactive widget: A/B Testing Simulator. Discover why sample size matters in A/B testing: it counts how often A wins, B wins, or they tie across repeated tests, based on exact binomial probability calculations.]

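If the widget doesn’t load (or you want to poke at the numbers yourself), here’s a minimal Python sketch of the same exact-binomial calculation. The 150 sends per pitch is an assumption (roughly 50 cold emails a month for 3 months), and win_probs is a made-up helper name:

```python
import numpy as np
from scipy.stats import binom

def win_probs(n, p_a, p_b):
    """Exact probability that A ends ahead, B ends ahead, or they tie
    after n sends of each pitch (independent binomial outcomes)."""
    k = np.arange(n + 1)
    # joint[i, j] = P(A gets i successes) * P(B gets j successes)
    joint = np.outer(binom.pmf(k, n, p_a), binom.pmf(k, n, p_b))
    a_wins = np.tril(joint, -1).sum()  # i > j: A strictly ahead
    b_wins = np.triu(joint, 1).sum()   # j > i: B strictly ahead
    ties = np.trace(joint)             # i == j
    return a_wins, b_wins, ties

# Assumption: ~50 cold emails a month for 3 months = 150 sends per pitch
a, b, t = win_probs(150, p_a=0.01, p_b=0.02)
print(f"A wins {a:.0%}, B wins {b:.0%}, ties {t:.0%}")
```
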
Hmmmm. So after 3 months you’re still wrong about 1/3 of the time, even though B is literally twice as good.

Enter outliers

Early in my business we tested “friendly vs professional” LinkedIn messages. “Friendly” won with 1 success in 3 months. When we met the guy, he mentioned he always says yes to coffee. The actual success rate of each approach was close to 0%.

[Interactive widget: A/B Testing Simulator with Outliers. See how outliers affect A/B test reliability: a control sets the percentage of trials with an extreme (100%) success rate, and it counts A wins, B wins, and ties, based on Monte Carlo simulation (10,000 runs).]

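The widget’s exact outlier model isn’t spelled out, so here’s one plausible Monte Carlo sketch: assume some fraction of contacts are “always yes” people (like the coffee guy) who convert regardless of pitch. run_test and the 1% outlier rate are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def run_test(n, p_a, p_b, outlier_rate, runs=10_000):
    """Monte Carlo: each contact converts if the pitch lands OR they're
    an 'always yes' outlier, regardless of which pitch they got."""
    outlier_a = rng.random((runs, n)) < outlier_rate
    outlier_b = rng.random((runs, n)) < outlier_rate
    hits_a = ((rng.random((runs, n)) < p_a) | outlier_a).sum(axis=1)
    hits_b = ((rng.random((runs, n)) < p_b) | outlier_b).sum(axis=1)
    return ((hits_a > hits_b).mean(),   # A "wins" the test
            (hits_b > hits_a).mean(),   # B "wins" the test
            (hits_a == hits_b).mean())  # tie

# A 1% 'always yes' rate is as big as pitch A's entire real signal
a, b, t = run_test(n=150, p_a=0.01, p_b=0.02, outlier_rate=0.01)
print(f"A wins {a:.0%}, B wins {b:.0%}, ties {t:.0%}")
```
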
This is your reality at the start: low conversion rates + tiny sample sizes + inevitable outliers = you’re flying blind.

Ideas on how to learn better

Aim for 15-85% success rates on your tests. Try adjusting the simulator to A=15%, B=30% and see how much better your data gets (there’s a sketch of this after the list). You can achieve this by:

  • Lower your prices temporarily. Make the ask smaller. Even if it’s “not sustainable”, you’re gathering data, not building a business model yet.
  • Break down the steps. Instead of “will they buy?”, test “will they reply?”, “will they take a call?”, “will they agree to a trial?”. Warning: be careful about “Sales Funnels”. I get a lot of messages offering things for free; even if I respond to the initial message, I’m not going to buy when they reveal it actually costs $500/month.
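
And here’s the payoff, reusing the win_probs sketch from above with the suggested 15% vs 30% rates:

```python
# Same 150 sends per pitch, but at rates inside the 15-85% band
a, b, t = win_probs(150, p_a=0.15, p_b=0.30)
print(f"A wins {a:.0%}, B wins {b:.0%}, ties {t:.0%}")  # B now wins with near certainty
```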

Test vastly different things. If your two approaches are different enough, the outliers just become valuable real data. For example, at the moment we’re testing building an interactive scavenger hunt vs flying to Melbourne.

Link data to decisions. If the data won’t change what you do, don’t gather it. “Should I build feature X?” is a question worth testing. “Should my button be blue or green?” is not.

Play with the simulators. Change the success rates from 1-2% to 20-80%. See how much easier it becomes to find signal in the noise.

When you’re in a low-information world, your job isn’t to gather perfect data. It’s to structure your tests so the data you gather actually means something.