Multi-Armed Bandit optimization for feature experiments

Written by Julie Trenque

Updated on 11/15/2024


You can leverage Multi-Armed Bandit (MAB) optimization in Kameleoon to achieve quick wins while maintaining significant performance lifts. This article discusses the implications and use cases of MAB algorithms to help you determine when to choose a MAB optimization for your feature experiments.

Once you’ve set up your experiment (or even while it is live), simply click the Allocate traffic dynamically toggle at the bottom of your variations list.

Note: The allocation update is based solely on the lift of the experiment’s primary goal.

When dynamic allocation (MAB) is enabled, exposure rates cannot be edited manually. Instead, Kameleoon automatically measures each variation’s improvement over the original and estimates the gain in total conversions using the Epsilon-Greedy algorithm. This process repeats hourly, reallocating traffic to variations based on their observed performance. In doing so, the MAB pushes traffic toward higher-performing variations, even before statistical significance is reached, which can drastically reduce the time it takes to identify your winning or losing variations.
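To make the principle concrete, here is a minimal sketch of the generic Epsilon-Greedy strategy. This is an illustration only, not Kameleoon’s actual implementation: the function name, the `epsilon` value, and the input format (per-variation conversion and visitor counts) are all assumptions for the example.

```python
import random

def epsilon_greedy_allocation(conversions, visitors, epsilon=0.1):
    """Pick a variation index for the next visitor.

    With probability `epsilon`, explore by choosing a random variation;
    otherwise, exploit by choosing the variation with the best observed
    conversion rate so far. (Illustrative sketch, not Kameleoon's code.)
    """
    # Observed conversion rate per variation (0.0 if no traffic yet)
    rates = [c / v if v else 0.0 for c, v in zip(conversions, visitors)]
    if random.random() < epsilon:
        # Explore: give every variation a chance to gather data
        return random.randrange(len(rates))
    # Exploit: route traffic to the current best performer
    return max(range(len(rates)), key=rates.__getitem__)
```

Run hourly over fresh counts, a rule like this naturally shifts most traffic to the leading variation while still reserving a small share (`epsilon`) for the others, which is the exploration/exploitation trade-off described above.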

Note: Auto-optimized experiments rely on the original variation (“off” for feature experiments) to compute the traffic deviation. If the “off” variation does not receive any traffic, the deviation may not be updated, and the allocation can remain at 50/50 despite a clear winning variation.

Note that MABs do not rely on a control or baseline experience. Unlike A/B tests, MABs start from an equal initial allocation and dynamically adjust traffic based on real-time performance. In cases where statistical analysis matters less and ‘exploration’ time needs to be cut short, MABs are useful because they are more ‘exploitation’ focused.

You can read more about how Multi-Armed Bandit optimization works here, or read our Statistical white paper to dive deeper into the technical details of our MAB algorithm.