
Experiments

Experiments in JustAI test multiple content variants against each other in real time to find what performs best for your audience. Unlike traditional A/B testing, JustAI uses multi-armed bandit algorithms that continuously shift traffic toward winning variants as data accumulates — so you learn faster and waste less traffic on underperformers.

How Experiments Differ from Traditional A/B Testing

|                    | Traditional A/B Test                           | JustAI Experiment                             |
| ------------------ | ---------------------------------------------- | --------------------------------------------- |
| Traffic allocation | Fixed split (e.g. 50/50) for the entire test   | Dynamic — shifts toward winners automatically |
| Number of variants | Typically 2 (A vs. B)                          | Many variants tested simultaneously           |
| Duration           | Runs for a predetermined period                | Continuous — runs until you ship a winner     |
| Optimization       | Manual analysis, then manual switch            | Auto-Tune recommends actions; you approve     |
| Segmentation       | Same variant for all users                     | Learns which variants work best per segment   |

Traditional A/B tests require you to wait for a fixed test period, analyze results, pick a winner, and manually deploy it. JustAI experiments run continuously, reallocating traffic in real time based on live performance data.

Create a template with your variables, attributes, and themes. Generate variants in the Studio and approve them. Configure your AB split and integration settings.

Activate the template. JustAI begins serving variants to incoming traffic, splitting between the control group and experiment variants based on your configured ratio.
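The split between control and experiment variants can be pictured as a weighted random draw per incoming request. The sketch below is illustrative only — `assign_variant`, the variant names, and the 20% control ratio are assumptions, not JustAI's implementation or settings:

```python
import random

def assign_variant(variants, control_ratio, rng=random):
    """Route a request: with probability `control_ratio` serve the
    control, otherwise pick uniformly among experiment variants."""
    if rng.random() < control_ratio:
        return "control"
    return rng.choice(variants)

# Example: a hypothetical 20% control holdout over three variants.
counts = {"control": 0, "v1": 0, "v2": 0, "v3": 0}
rng = random.Random(42)
for _ in range(10_000):
    counts[assign_variant(["v1", "v2", "v3"], 0.2, rng)] += 1
```

Over many requests, roughly 20% of traffic lands on the control and the remainder is shared among the variants — until the bandit begins reweighting, as described next.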

In the early phase, JustAI explores broadly — distributing traffic across all active variants to gather performance data. The epsilon parameter controls how aggressively the system explores vs. exploits known winners. Higher epsilon means more exploration.
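The explore/exploit tradeoff that epsilon controls is easiest to see in an epsilon-greedy sketch — a simpler cousin of the Thompson Sampling JustAI actually uses for allocation. The function name and the `stats` shape here are illustrative assumptions:

```python
import random

def epsilon_greedy(stats, epsilon, rng=random):
    """With probability `epsilon`, explore a random variant;
    otherwise exploit the variant with the best observed rate.
    `stats` maps variant -> (conversions, impressions)."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))  # explore
    # Exploit: highest observed conversion rate (0 if unseen).
    return max(stats, key=lambda v: stats[v][0] / stats[v][1]
               if stats[v][1] else 0.0)
```

A higher epsilon sends more traffic through the explore branch, gathering data on less-proven variants; a lower epsilon concentrates traffic on the current leader.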

As data accumulates and the system gains confidence, traffic gradually shifts toward better-performing variants. Underperforming variants receive less traffic. Auto-Tune monitors results and surfaces recommendations:

  • Archive underperforming variants
  • Approve new AI-generated variants based on winning patterns
  • Ship a winner when statistical significance is reached
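In spirit, the archive recommendation above could come from a rule like this toy sketch — the `min_impressions` and `archive_gap` thresholds are hypothetical, not JustAI settings:

```python
def recommend(stats, min_impressions=1000, archive_gap=0.5):
    """Flag variants whose observed conversion rate trails the
    current leader badly, once they have enough impressions.
    `stats` maps variant -> (conversions, impressions)."""
    rates = {v: c / n for v, (c, n) in stats.items()
             if n >= min_impressions}
    if not rates:
        return []
    best = max(rates.values())
    return [f"archive {v}" for v, r in rates.items()
            if r < best * archive_gap]
```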

When a variant reaches your configured statistical significance thresholds, you can ship it:

  • Ship & Iterate — The winner becomes the new control. Launch a new experiment on top of it (Milestones).
  • Ship & Lock — The winner serves all future traffic. The experiment ends.

Multi-armed bandit: An optimization algorithm that balances exploring new variants with exploiting known winners. JustAI uses Thompson Sampling to make allocation decisions.
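A minimal Python sketch of the Thompson Sampling idea — the `thompson_pick` helper and the Beta(1, 1) prior are illustrative assumptions, not JustAI internals:

```python
import random

def thompson_pick(stats, rng=random):
    """Draw a plausible conversion rate for each variant from its
    Beta posterior and serve the variant with the highest draw.
    `stats` maps variant -> (successes, failures)."""
    return max(stats, key=lambda v: rng.betavariate(stats[v][0] + 1,
                                                    stats[v][1] + 1))

# A variant with strong evidence wins most draws; a new variant with
# little data has a wide posterior, so it still gets occasional traffic.
stats = {"A": (90, 910), "B": (120, 880), "C": (2, 8)}
served = thompson_pick(stats, random.Random(7))
```

This is why the bandit never fully starves an unproven variant: uncertainty itself earns it exploration traffic, which shrinks naturally as evidence accumulates.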

Epsilon: Controls the explore/exploit tradeoff. A higher epsilon allocates more traffic to exploration (testing less-proven variants). A lower epsilon focuses traffic on current top performers.

Statistical significance: A variant is considered a confident winner when it passes both the p-value threshold and minimum sample size configured in your template settings.
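One common way to check both conditions is a two-proportion z-test plus a sample-size floor. This sketch uses the normal approximation; `alpha` and `min_n` stand in for whatever thresholds your template configures:

```python
import math

def two_proportion_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates (pooled-variance normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided

def is_confident_winner(conv_a, n_a, conv_b, n_b,
                        alpha=0.05, min_n=1000):
    """A variant wins only if both the p-value threshold and the
    minimum sample size are satisfied."""
    return (min(n_a, n_b) >= min_n
            and two_proportion_p(conv_a, n_a, conv_b, n_b) < alpha)
```

For example, 150/2000 conversions beating a 100/2000 control clears a 0.05 p-value threshold, while the same rates on 200 sessions would fail the sample-size floor.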

Control holdout: A percentage of traffic always sees the control variant, providing a consistent baseline for measuring lift.
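Lift against that baseline is just the relative difference between the variant's rate and the control's — a one-line sketch:

```python
def lift(variant_rate, control_rate):
    """Relative lift of a variant over the control baseline,
    e.g. 0.06 vs. 0.05 is a 20% relative lift."""
    return (variant_rate - control_rate) / control_rate
```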

Auto-Tune

Automatically monitor experiments and surface recommendations to improve performance.

Flows

Group templates into journeys and measure performance across an entire sequence.

Milestones & Shipping

Ship winners and continue optimizing with iterative experiments.

Ranking Algorithms

How Thompson Sampling and contextual bandits allocate traffic.