Multi-variant testing
A multi-variant test (sometimes written A/B/n) compares one control variant with two or more test variants in a single experiment. Instead of running several A/B tests in sequence, you try every idea in parallel.
Why run multi-variant tests?
- Speed. You discover the best setting faster because every variant is evaluated at the same time. Gathering enough samples takes less total time than testing the same variants sequentially.
- Fair comparison. All variants share the same traffic window, removing calendar effects like holidays or seasonal campaigns.
- Less engineering overhead. One setup, one end date, one set of results to review.
Limitations and trade-offs
- Traffic is divided. With more variants, each one receives a smaller share of users, so it takes longer to collect enough data. Even so, the total time and risk are lower than with sequential tests.
- Statistical penalty. Every additional comparison increases the risk of a “lucky” winner. We correct for this (see below) by making the confidence threshold stricter.
Number of variants
We limit customers to 5 variants (1 control and 4 test variants). This limit exists because each additional variant dilutes traffic further and extends the test duration beyond what's practical.
How we keep results reliable
By default, Algolia applies the Benjamini-Hochberg (BH) method. You can also choose the more conservative Bonferroni method:
| Method | Corrected threshold |
|---|---|
| Benjamini-Hochberg (default) | αᵢ = (i / m) × 0.05 |
| Bonferroni | α = 0.05 / m |
where m is the number of comparisons and i is the rank of the p-value when the p-values are sorted from smallest to largest.
A comparison is marked Confident when its p-value is smaller than (or equal to) its corrected α. With Bonferroni, if there's only one test variant (m = 1), the threshold stays at 0.05, just like a regular two-variant test.
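The decision rule above can be sketched in Python. This is an illustrative helper, not Algolia's actual implementation; the function name `confident_comparisons` and its signature are assumptions:

```python
def confident_comparisons(p_values, method="benjamini-hochberg", alpha=0.05):
    """Return a list of booleans, one per comparison: True means Confident."""
    m = len(p_values)

    if method == "bonferroni":
        # One corrected threshold for every comparison: alpha / m.
        threshold = alpha / m
        return [p <= threshold for p in p_values]

    # Benjamini-Hochberg: sort the p-values, compare the i-th smallest
    # against (i / m) * alpha, find the largest rank that passes, then
    # mark every comparison at or below that rank as Confident.
    order = sorted(range(m), key=lambda k: p_values[k])
    max_passing_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * alpha:
            max_passing_rank = rank

    confident = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_passing_rank:
            confident[idx] = True
    return confident


# With three comparisons, BH's per-rank thresholds are 0.0167, 0.0333, 0.05,
# while Bonferroni applies the single stricter threshold 0.05 / 3 ≈ 0.0167.
print(confident_comparisons([0.01, 0.04, 0.03]))                        # BH
print(confident_comparisons([0.01, 0.04, 0.03], method="bonferroni"))
```

Note that with BH the corrected threshold loosens as the rank i grows, so a p-value of 0.04 can still be Confident at rank 3 of 3, while Bonferroni would reject it.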