# Experiment Decisions FAQ
Making the right call on your experiments can be tricky. This FAQ covers the three most common scenarios you’ll encounter and how to handle each one.
## Scenario 1: Low Statistical Significance After a Long Run

### My experiment has been running for weeks but still shows no statistical significance. What should I do?

This is one of the most common situations. If your experiment has been running for an extended period (typically 2-4 weeks) without reaching statistical significance, it usually means one of two things:
- There’s no meaningful difference between your variants
- The effect size is too small to detect with your current traffic
Recommended actions:
| Traffic Level | Recommendation |
|---|---|
| High traffic (10k+ users/day) | If no signal after 2 weeks, the variants likely perform similarly. Consider shipping your preferred variant based on other factors (brand voice, simplicity). |
| Low traffic (under 1k users/day) | Extend the experiment to 4-6 weeks before deciding. Small sample sizes need more time. |
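To gauge whether your traffic can even detect the effect you care about, a rough power calculation helps. The sketch below uses the standard normal-approximation formula for a two-proportion test (two-sided alpha = 0.05, 80% power); the baseline rate and minimum detectable effect in the example are illustrative assumptions, not JustAI defaults.

```python
from math import sqrt

def required_sample_size(baseline_rate, min_detectable_effect):
    """Approximate users needed per variant for a two-proportion test.

    Uses z = 1.96 (two-sided alpha = 0.05) and z = 0.84 (80% power).
    """
    z_alpha, z_power = 1.96, 0.84
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / min_detectable_effect ** 2) + 1

# Detecting a 2-point lift on a 10% click rate needs roughly
# 3,800 users per variant.
n = required_sample_size(0.10, 0.02)
```

At a few hundred users per variant per day, reaching that sample takes weeks, which is why low-traffic experiments get the longer 4-6 week window.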
### Should I keep waiting for significance?

Not indefinitely. Set a maximum runtime when you launch (we recommend 4 weeks for most experiments). If you haven’t reached significance by then, make a decision based on:
- Directional trends — Is one variant consistently (even if not significantly) better?
- Secondary metrics — Are there meaningful differences in other metrics like engagement or retention?
- Business priorities — Do you need to move on to test other hypotheses?
Tip: A “no significant difference” result is still valuable. It tells you this particular change doesn’t meaningfully impact your key metric, freeing you to focus elsewhere.
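One way to make that end-of-runtime call systematic is to encode the first two criteria as a small decision rule. This is a hypothetical helper; the 2% trend threshold is an arbitrary illustrative choice, not a JustAI setting.

```python
def end_of_runtime_decision(control_rate, variant_rate,
                            secondary_regression=False,
                            trend_threshold=0.02):
    """Suggest a call when max runtime is reached without significance.

    trend_threshold is the relative lift treated as a directional trend.
    """
    if secondary_regression:
        return "keep control: variant hurts a secondary metric"
    lift = (variant_rate - control_rate) / control_rate
    if lift >= trend_threshold:
        return "ship variant: consistent directional winner"
    if lift <= -trend_threshold:
        return "keep control: variant trends worse"
    return "no meaningful difference: ship whichever fits brand/simplicity"

decision = end_of_runtime_decision(0.100, 0.104)
```

The third branch is the "no significant difference is still valuable" case: the metric is flat, so other factors decide.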
### What if the results keep fluctuating?

Fluctuating results often indicate high variance in your metric or external factors (seasonality, marketing campaigns) affecting performance. Consider:
- Extending runtime to smooth out variance
- Segmenting results by attribute to see if specific audiences respond differently
- Checking for data quality issues — are events firing correctly?
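Segmenting is just a group-by over whatever attributes you log per user. A minimal sketch, assuming you can export raw events (the field names below are hypothetical, not a JustAI export schema):

```python
from collections import defaultdict

def rates_by_segment(events, segment_key):
    """Conversion rate per (segment value, variant) pair.

    events: dicts carrying the segment attribute, a "variant" name,
    and a 0/1 "converted" flag; all field names are illustrative.
    """
    counts = defaultdict(lambda: [0, 0])  # key -> [conversions, users]
    for e in events:
        key = (e[segment_key], e["variant"])
        counts[key][0] += e["converted"]
        counts[key][1] += 1
    return {k: conv / users for k, (conv, users) in counts.items()}
```

If, say, mobile users favor one variant while desktop users are flat, that split shows up as noise in the blended result.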
## Scenario 2: Auto-Tune Notification

### I received an Auto-Tune notification. What does this mean?

An Auto-Tune notification means JustAI has detected that one or more variants are performing significantly better than others. Auto-Tune has automatically started shifting more traffic toward the winning variant(s).
What’s happening behind the scenes:
- JustAI continuously monitors variant performance
- When a variant shows statistically significant improvement, Auto-Tune increases its traffic allocation
- Poor-performing variants receive less traffic, minimizing opportunity cost
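This document doesn't specify Auto-Tune's exact algorithm, but this kind of gradual traffic shifting is commonly implemented as a multi-armed bandit. Below is a Thompson-sampling sketch of the idea, an assumption about the general approach rather than JustAI's actual code:

```python
import random

def thompson_split(stats, draws=10_000):
    """Approximate a traffic split from Beta posteriors.

    stats: {variant: (conversions, non_conversions)}. Each variant's
    share of winning posterior draws becomes its traffic share, so
    better-performing variants gradually absorb more traffic.
    """
    wins = dict.fromkeys(stats, 0)
    for _ in range(draws):
        samples = {v: random.betavariate(c + 1, n + 1)
                   for v, (c, n) in stats.items()}
        wins[max(samples, key=samples.get)] += 1
    return {v: w / draws for v, w in wins.items()}

split = thompson_split({"control": (120, 880), "variant_b": (160, 840)})
```

Because allocation follows the posterior rather than jumping to 100%, a lucky early streak gets diluted as more data arrives, which is the self-correction described under "Can Auto-Tune be wrong?".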
### Do I need to take action when I get an Auto-Tune notification?

Not immediately. Auto-Tune is designed to optimize automatically. However, you should:
- Review the results — Open the experiment dashboard to see which variant is winning and by how much
- Check the metrics — Confirm the winning variant aligns with your goals
- Monitor for stability — Watch for a few more days to ensure the winner remains consistent
### Can Auto-Tune be wrong?

Auto-Tune uses statistical methods to make decisions, but early signals can occasionally shift. That’s why Auto-Tune adjusts traffic gradually rather than switching 100% immediately. If the signal was a false positive, the system self-corrects.
When to intervene:
- If the “winning” variant has unintended consequences (e.g., higher clicks but more unsubscribes)
- If you notice data quality issues affecting results
- If business context has changed (e.g., a variant references an expired promotion)
### Should I let Auto-Tune run forever?

No. Even with Auto-Tune, you should eventually ship a winner. Use Auto-Tune to:
- Minimize losses while the experiment runs
- Gather confidence in the winning variant
- Learn which themes and approaches work for different segments
Once Auto-Tune has clearly identified a winner (typically 80%+ traffic allocation), consider shipping it permanently.
## Scenario 3: Ship Notification

### I received a Ship notification. What does this mean?

A Ship notification means JustAI has high confidence that a winning variant has been identified and recommends you ship it as the permanent version. This notification appears when:
- A variant has shown consistent, statistically significant improvement
- The result has been stable over time (not a temporary spike)
- There’s sufficient sample size to trust the result
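Those three conditions can also be sanity-checked by hand if you have the raw counts. A sketch using a standard two-proportion z-test; the thresholds here are illustrative assumptions, and JustAI's internal criteria may differ:

```python
from math import erf, sqrt

def two_sided_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def ship_ready(conv_a, n_a, conv_b, n_b, days_stable,
               alpha=0.05, min_users=1000, min_stable_days=5):
    """True only when significance, stability, and sample size all hold."""
    return (two_sided_p_value(conv_a, n_a, conv_b, n_b) < alpha
            and days_stable >= min_stable_days
            and min(n_a, n_b) >= min_users)
```

All three checks must pass together: a significant result on day two, or a stable one on too few users, would not trigger the recommendation.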
### What should I do when I get a Ship notification?

- Review the winning variant — Open the experiment to see performance details
- Check the lift — Understand how much improvement you’re getting (e.g., +12% click rate)
- Verify segment performance — Ensure the winner works across your key audiences
- Ship it — Click “Ship Winner” to make this variant the permanent version
### What happens when I ship a winner?

When you ship:
- The winning variant becomes the default for all users
- The experiment ends and stops collecting data
- You can no longer revert to other variants (without creating a new experiment)
- Your template is updated automatically
### Can I ignore the Ship notification?

Yes, but it’s not recommended for long. Reasons you might delay shipping:
| Reason | Recommendation |
|---|---|
| Want more confidence | Let it run another week, but don’t wait indefinitely |
| Winner works for some segments but not others | Consider creating segment-specific templates instead of shipping globally |
| External factors (holidays, campaigns) | Wait until normal conditions resume, then verify the winner still holds |
Warning: Delaying shipping means you’re leaving performance gains on the table. If JustAI recommends shipping, the data strongly supports it.
### What if I disagree with the recommended winner?

Trust the data, but consider context. If you have strong reasons to doubt the result:
- Check for data issues — Are events tracking correctly?
- Review secondary metrics — Is the winner causing problems elsewhere?
- Consider qualitative factors — Does the winner align with brand guidelines?
If the data is solid but you still prefer a different variant, you can manually ship your preferred choice. Just document your reasoning for future reference.
## Quick Decision Guide

| Situation | Signal | Recommended Action |
|---|---|---|
| No significance after max runtime | None | Ship based on directional trend or preference |
| Auto-Tune notification | Emerging winner | Monitor, let Auto-Tune optimize, plan to ship soon |
| Ship notification | Clear winner | Review and ship the winner |
| Fluctuating results | Unstable | Extend runtime, check data quality, segment analysis |
| Winner in some segments only | Mixed | Consider segment-specific templates |