Understanding visit count mismatches in capped experiments

When you use display limits (capping) in your feature flags or experiments, the total number of visits recorded for your variations often appears uneven compared to the Original version.

If you notice that the Original version has significantly more visits than the Variation, this behavior is expected and does not indicate a failure in your experiment setup, such as a Sample Ratio Mismatch (SRM).

Use this guide to understand exactly why capping creates this difference in visit counts and why you should focus on the unique visitor count instead.

The effect of display limits on experiment data

In advanced setups, a display limit is often implemented by dynamically adding a visitor to an exclusion segment after they have seen the variation once. This is commonly done by setting a custom data or local storage value for the visitor.

You can also configure display limits in the Code editor for experiments and set similar capping limits for Personalizations.

Detailed example: Pop-in exclusion

Suppose you run an experiment where the Variation displays a pop-in, but the Original does not. You implement a segment with the following rule to ensure a visitor only sees the pop-in once:

  • Segment rule: Exclude users where the custom data or local storage value "see popin" is set to true.
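The exclusion rule above can be sketched as a small state machine: seeing the pop-in sets a flag, and the flag makes the visitor ineligible from then on. This is a minimal, illustrative Python sketch of that logic; the names (`SEE_POPIN_KEY`, `is_excluded`, `show_popin`) are assumptions for this example, not Kameleoon API calls.

```python
# Illustrative sketch of the "see popin" capping rule.
# All names here are hypothetical; in Kameleoon this would be a custom
# data or local storage value checked by a segment condition.
SEE_POPIN_KEY = "see popin"

def is_excluded(visitor_data):
    """A visitor is excluded once the capping flag has been set."""
    return visitor_data.get(SEE_POPIN_KEY) is True

def show_popin(visitor_data):
    """Display the pop-in, then set the flag so the visitor is capped."""
    visitor_data[SEE_POPIN_KEY] = True

visitor = {}
assert not is_excluded(visitor)  # first visit: still targeted
show_popin(visitor)
assert is_excluded(visitor)      # subsequent visits: excluded
```

The key point is that only the Variation ever calls the pop-in logic, so only Variation visitors ever acquire the flag.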

Original: No restriction

When Kameleoon allocates a visitor to the Original version:

  • The pop-in is never shown.
  • The custom data value "see popin = true" is never set.
  • The visitor is never excluded by the segment.
  • Every unique visitor allocated to the Original is therefore targeted on each subsequent visit.

Variation: Subject to exclusion

When a visitor is allocated to the Variation:

  • On their first visit, they see the pop-in.
  • The custom data value "see popin = true" is set (this acts as the capping mechanism).
  • On their next visit, the visitor enters the exclusion segment and is not targeted by the experiment. They will not see the variation.

The visit count disparity

In this scenario, you maintain a balanced split at the unique visitor level. However, every visitor on the Original is targeted at each visit, while every visitor on the Variation is targeted only for the first visit before being excluded.

This intentional difference creates the disparity in total visits, but the experiment split is still valid.

Metric | Original (no pop-in) | Variation (pop-in)
Unique visitors | 500 | 500
Total visits | Higher (targeted on every visit) | Lower (targeted only once)
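The arithmetic behind the table can be made concrete with a small simulation. The figures below (500 visitors per arm, each returning for 3 visits) are assumptions chosen for illustration:

```python
# Minimal simulation of how capping skews total visits but not
# unique visitors. The visitor and visit counts are illustrative.

def count_visits(num_visitors, visits_each, capped):
    """Total targeted visits for one arm of the experiment.

    capped=False (Original): the visitor is targeted on every visit.
    capped=True (Variation): the visitor is targeted only once.
    """
    per_visitor = 1 if capped else visits_each
    return num_visitors * per_visitor

original_visits = count_visits(500, visits_each=3, capped=False)   # 1500
variation_visits = count_visits(500, visits_each=3, capped=True)   # 500
```

Unique visitors stay at 500 per arm either way; only the total-visit counts diverge, here by a factor of three.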

This is not a Sample Ratio Mismatch

It's common for users to believe that a test isn't split correctly when they see a large difference in visit counts (for example, 1,000 visits versus 500 visits).

The difference in visit counts does not mean you have an SRM.

  • SRM: An SRM is a statistical error where the initial allocation of unique visitors to the Original versus the Variation does not match the configured split, which often indicates a core tracking or implementation error.
  • Capping behavior: When capping is used, the initial allocation of unique visitors remains correct. The disparity only occurs in the secondary metric: total visits, because the capping feature is working as intended to reduce exposure.
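If you want to check the unique-visitor split numerically, a standard approach (not a Kameleoon-specific diagnostic) is a two-sided z-test of the observed counts against the configured allocation ratio. This sketch uses only the Python standard library:

```python
import math

def srm_z_test(n_original, n_variation, expected_ratio=0.5):
    """Two-sided z-test of the unique-visitor split against the
    configured allocation ratio. A very small p-value suggests a
    genuine Sample Ratio Mismatch; a large one does not."""
    n = n_original + n_variation
    expected = n * expected_ratio
    sd = math.sqrt(n * expected_ratio * (1 - expected_ratio))
    z = (n_original - expected) / sd
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(srm_z_test(500, 500))  # 1.0 -> perfectly balanced, no SRM
```

Run this on unique visitors, never on total visits: as the capping example shows, total visits are expected to diverge even in a healthy experiment.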

Always confirm your experiment split by looking at the number of unique visitors. If the unique visitor count is balanced according to your configuration, your experiment is running correctly despite the difference in total visits.