Analytics
JustAI provides analytics at two levels: template-level dashboards that show experiment performance and variant metrics, and an org-wide usage page that tracks API consumption.
Template Overview Dashboard
Each template has an Overview tab that serves as its performance dashboard. This is the first thing you see when you open a template.
Topline Metrics
The top of the dashboard shows aggregate performance for the template’s key metric:
- Overall lift — How much the experiment variants outperform the control, expressed as a percentage
- Sends — Total number of API requests served
- Key metric rate — The current rate for your optimization metric (e.g. open rate, click rate)
Lift is calculated by comparing the weighted average performance of experiment variants against the control group. A positive lift means your variants are outperforming the original content.
Variant Performance Table
Below the topline metrics, a table breaks down performance by individual variant:
| Column | Description |
|---|---|
| Variant | Name and theme tags |
| Sends | Number of times the variant was served |
| Key metric | Performance on the template’s optimization metric |
| Lift vs. control | Percentage improvement over the control |
| Significance | Whether the result is statistically significant |
Use this table to identify your top performers and spot variants that should be archived.
Segment Analysis
If your template uses attributes, the dashboard shows how variants perform across different audience segments. This reveals which content resonates with which users — for example, a “social proof” variant might outperform for enterprise users while an “urgency” variant wins for startups.
Learnings
The Learnings section surfaces key takeaways from the experiment, including which themes and messaging strategies are working best. These insights carry forward when you generate new variants in the Studio.
Analytics Tab
The Analytics tab provides more detailed performance data beyond the Overview dashboard. Use it for deeper analysis, including:
- Performance trends over time
- Metric breakdowns across multiple dimensions
- Detailed statistical analysis of variant comparisons
Metric View Modes
JustAI supports two ways to count events:
| Mode | Description | Best for |
|---|---|---|
| Unique | Counts each user once, regardless of how many times they trigger the event | Open rates, click rates — avoids inflated counts from repeat actions |
| All events | Counts every event occurrence, including repeats from the same user | Revenue, total engagement volume |
Toggle between these modes in the analytics view to see both perspectives.
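The distinction between the two modes can be shown with a minimal sketch. The event records here are hypothetical; this is not JustAI's internal data model:

```python
# A user who clicks twice appears in two raw events.
events = [
    {"user": "u1", "type": "click"},
    {"user": "u1", "type": "click"},  # repeat action by the same user
    {"user": "u2", "type": "click"},
]

# "All events" mode: every occurrence counts, repeats included.
all_events = len(events)  # 3

# "Unique" mode: each (user, event type) pair counts at most once.
unique = len({(e["user"], e["type"]) for e in events})  # 2
```

For a click *rate*, the unique count is usually the right numerator, since counting u1's second click would inflate the rate without representing an additional engaged user.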
Date Range and Lookback
Analytics data is governed by two parameters:
- lookback_days (default: 14) — How many days of historical data to include in the analytics window
- offset_days (default: 3) — How many recent days to exclude, allowing time for delayed events (like conversions that happen days after an email open) to be fully recorded
Together, these create an analytics window that shows the period from (today - lookback_days - offset_days) to (today - offset_days). This ensures your analytics reflect complete data rather than partial results from recent days.
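The window arithmetic above can be written out directly. The parameter names mirror lookback_days and offset_days; the helper function itself is an illustration, not part of JustAI's API:

```python
from datetime import date, timedelta

def analytics_window(today: date,
                     lookback_days: int = 14,
                     offset_days: int = 3) -> tuple[date, date]:
    """Return (start, end) of the analytics window:
    (today - lookback_days - offset_days) through (today - offset_days)."""
    end = today - timedelta(days=offset_days)
    start = end - timedelta(days=lookback_days)
    return start, end

start, end = analytics_window(date(2024, 6, 20))
# With the defaults: start = 2024-06-03, end = 2024-06-17.
# The most recent 3 days are excluded so delayed conversions can land.
```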
Projected Results
For experiments that haven’t yet reached statistical significance, JustAI shows projected results — an estimate of where metrics are trending based on current data. Projected results are clearly labeled and should be treated as directional rather than conclusive.
Interpreting Lift
When reading lift percentages:
- Positive lift means the variant outperforms the control on your key metric
- Negative lift means the variant underperforms the control
- Lift with significance (marked with a confidence indicator) means there’s enough data to trust the result
- Lift without significance means more data is needed — the result could change as traffic accumulates
A variant showing +20% lift without statistical significance is not necessarily a winner. Wait until the minimum sample size and p-value thresholds are met before making shipping decisions.
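One common way to implement such a gate is a two-proportion z-test. The sketch below assumes that approach, and the threshold names (min_sends, alpha) are invented for illustration; the document does not specify JustAI's exact statistical method:

```python
from math import erf, sqrt

def two_proportion_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided z-test p-value for a difference in conversion rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # Two-sided tail probability under the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def ready_to_ship(x1, n1, x2, n2, min_sends=1000, alpha=0.05) -> bool:
    """Gate shipping on both sample size and significance thresholds."""
    if min(n1, n2) < min_sends:
        return False  # not enough data yet, regardless of observed lift
    return two_proportion_p_value(x1, n1, x2, n2) < alpha

# 30% vs. 20% on 1,000 sends each: significant, safe to act on.
# The same +50% relative lift on 100 sends each: keep waiting.
```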
Org-Wide API Usage
The Analytics page in the main navigation shows organization-level API usage:
- Monthly usage — Total API requests across all templates for the selected month
- Usage cap — Your plan’s monthly API request limit
- Month selector — View usage for any previous month
This page helps you monitor consumption and plan for capacity.
Related Resources
- Template Configuration — Set statistical significance thresholds and A/B split ratios
- Metric Configuration — Configure which metrics your org tracks
- Auto-Tune — How JustAI surfaces performance recommendations