Auto-Tune
Automatically monitor experiments and surface recommendations to improve performance.
Experiments in JustAI test multiple content variants against each other in real time to find what performs best for your audience. Unlike traditional A/B testing, JustAI uses multi-armed bandit algorithms that continuously shift traffic toward winning variants as data accumulates — so you learn faster and waste less traffic on underperformers.
| Aspect | Traditional A/B Test | JustAI Experiment |
|---|---|---|
| Traffic allocation | Fixed split (e.g. 50/50) for the entire test | Dynamic — shifts toward winners automatically |
| Number of variants | Typically 2 (A vs. B) | Many variants tested simultaneously |
| Duration | Runs for a predetermined period | Continuous — runs until you ship a winner |
| Optimization | Manual analysis, then manual switch | Auto-Tune recommends actions; you approve |
| Segmentation | Same variant for all users | Learns which variants work best per segment |
Traditional A/B tests require you to wait for a fixed test period, analyze results, pick a winner, and manually deploy it. JustAI experiments run continuously, reallocating traffic in real time based on live performance data.
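The continuous reallocation loop can be sketched with Thompson Sampling, which JustAI uses for allocation decisions. Everything below (the `thompson_allocate` helper, the variant names and counts) is illustrative, not the product's actual API: each variant's conversion rate is modeled as a Beta posterior, and each incoming request is served the variant with the highest sampled rate.

```python
import random

def thompson_allocate(variants, n_requests):
    """Allocate n_requests across variants via Thompson Sampling.

    variants: dict mapping name -> (conversions, non_conversions) observed
    so far. For each request, draw a conversion-rate estimate from each
    variant's Beta posterior and serve the variant with the highest draw.
    """
    counts = {name: 0 for name in variants}
    for _ in range(n_requests):
        draws = {
            name: random.betavariate(s + 1, f + 1)  # Beta(s+1, f+1) posterior
            for name, (s, f) in variants.items()
        }
        counts[max(draws, key=draws.get)] += 1
    return counts

# A clearly stronger variant (~30% conversion) pulls in most of the
# traffic, while weaker arms (~5-6%) are starved automatically.
stats = {"control": (50, 950), "variant_a": (300, 700), "variant_b": (60, 940)}
allocation = thompson_allocate(stats, 1000)
```

Because allocation is probabilistic rather than a fixed split, a variant is never cut off entirely while uncertainty remains; it simply receives traffic in proportion to its chance of being the best arm.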
Create a template with your variables, attributes, and themes. Generate variants in the Studio and approve them. Configure your AB split and integration settings.
Activate the template. JustAI begins serving variants to incoming traffic, splitting between the control group and experiment variants based on your configured ratio.
In the early phase, JustAI explores broadly — distributing traffic across all active variants to gather performance data. The epsilon parameter controls how aggressively the system explores vs. exploits known winners. Higher epsilon means more exploration.
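The epsilon knob maps onto the classic epsilon-greedy policy. The sketch below is a simplified illustration of that explore/exploit tradeoff (the `choose_variant` helper and stats shape are hypothetical; JustAI's actual allocator uses Thompson Sampling, as described under Key terms):

```python
import random

def choose_variant(stats, epsilon):
    """Epsilon-greedy selection.

    stats: dict mapping variant name -> (conversions, impressions).
    With probability epsilon, pick a uniformly random variant (explore);
    otherwise pick the best observed conversion rate (exploit).
    """
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

stats = {"a": (30, 100), "b": (10, 100), "c": (12, 100)}
# epsilon=0.9 -> mostly exploration; epsilon=0.1 -> mostly serve "a".
```

Setting epsilon high early gathers data on every variant; lowering it later concentrates traffic on proven winners.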
As data accumulates and the system gains confidence, traffic gradually shifts toward better-performing variants, while underperforming variants receive less. Auto-Tune monitors results and surfaces recommendations for you to approve.
When a variant reaches your configured statistical significance thresholds, you can ship it.
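The shipping gate combines the two thresholds defined in your template settings: a p-value cutoff and a minimum sample size. A minimal sketch of that check, using a one-sided two-proportion z-test (the `is_confident_winner` helper and its defaults are illustrative assumptions, not JustAI's exact statistics):

```python
from math import sqrt, erf

def is_confident_winner(conv_v, n_v, conv_c, n_c,
                        p_threshold=0.05, min_samples=1000):
    """Return True if the variant beats control with confidence.

    Both arms must meet the minimum sample size, and the one-sided
    two-proportion z-test p-value must fall below the threshold.
    """
    if n_v < min_samples or n_c < min_samples:
        return False
    p_v, p_c = conv_v / n_v, conv_c / n_c
    pooled = (conv_v + conv_c) / (n_v + n_c)
    se = sqrt(pooled * (1 - pooled) * (1 / n_v + 1 / n_c))
    if se == 0:
        return False
    z = (p_v - p_c) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided upper tail
    return z > 0 and p_value < p_threshold

# 6% vs. 5% conversion over 10k impressions each clears the gate;
# the same lift over 1k impressions each does not.
```

Requiring both conditions prevents shipping on an early lucky streak: a large observed lift on a small sample still fails the minimum-sample gate.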
Multi-armed bandit: An optimization algorithm that balances exploring new variants with exploiting known winners. JustAI uses Thompson Sampling to make allocation decisions.
Epsilon: Controls the explore/exploit tradeoff. A higher epsilon allocates more traffic to exploration (testing less-proven variants). A lower epsilon focuses traffic on current top performers.
Statistical significance: A variant is considered a confident winner when it passes both the p-value threshold and minimum sample size configured in your template settings.
Control holdout: A percentage of traffic always sees the control variant, providing a consistent baseline for measuring lift.
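The control holdout and lift measurement from the glossary can be sketched as follows. Both helpers (`bucket`, `measure_lift`) are hypothetical illustrations, not product API; the key idea is that holdout assignment hashes a stable user id so the same user always lands in the same group:

```python
import hashlib

def bucket(user_id, holdout_pct):
    """Deterministically route holdout_pct% of users to the control
    baseline by hashing the user id into one of 100 buckets."""
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "control" if h < holdout_pct else "experiment"

def measure_lift(variant_rate, control_rate):
    """Relative lift of a variant over the control baseline,
    e.g. 0.06 vs. 0.05 -> 0.20 (a 20% lift)."""
    return (variant_rate - control_rate) / control_rate
```

Hashing (rather than random assignment per request) keeps the holdout stable over time, so the baseline conversion rate is measured on a consistent population.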
Flows
Group templates into journeys and measure performance across an entire sequence.
Milestones & Shipping
Ship winners and continue optimizing with iterative experiments.
Ranking Algorithms
How Thompson Sampling and contextual bandits allocate traffic.