
Finalizing an experiment

Watch this video in our academy for more information about the differences between A/B and multivariate tests.

Access the finalization page

Once you create your variations, you are ready to launch your experiment.

On the right side of the header, you will find the Finalize button. Clicking it opens the finalization page, where you can complete the steps described below before launching your experiment.

On the finalization page, you can also:

  • Estimate an experiment's duration
  • Schedule an experiment

Define targeting

The first step in finalizing your A/B experiment is defining targeting. You must outline the segments and triggers that comprise your targeting.

Segments

Segments affect which users will be included in your experiment.

You can target:

  • All visitors: All visitors to your site will be included in your experiment.
  • Target a segment: Target an existing segment or create a new one.
  • Target High or Low-Engaged Users with AI: Choose to include or exclude certain visitors based on their likelihood to convert on a goal when a specific trigger occurs.
    • You must wait seven days before receiving predictions relevant to your goal: the AI must learn first.
note

This option requires the Contextual Bandit and AI Targeting add-on. Contact your Customer Success Manager for more information.

  • Target specific visitors: Include visitors who meet a certain combination of criteria.

Triggers

Triggers affect when your experiment will activate.

You can have your experiment trigger:

  • When a web page is reached: Trigger your experiment when a visitor accesses a page fulfilling certain URL requirements.
    • A specific page: Trigger your experiment when a visitor lands on a particular page on your site. Enter the page's URL in the text field.
    • The URLs containing a specific fragment: Trigger your experiment when a visitor lands on a page containing a certain URL fragment (for example /product/). Enter the fragment in the text field.
    • The URLs of all modified pages: Trigger your experiment when a visitor lands on a page that has been modified by your Kameleoon experiment (for example, a page on which you have created a variation).
    • The entire site: Trigger your experiment when a visitor lands on any page of your site.
  • When a specific trigger occurs: Select a trigger for your experiment, or create a new one.
  • When a combination of triggers occurs: Trigger your experiment when a combination of conditions is met.

Distribute traffic

The second step of finalizing your A/B experiment is traffic allocation. By default, traffic is evenly distributed among your variations; however, you can change this setting.

With two variations, for example, traffic is split evenly: 33.33% of visitors see variation 1, 33.33% see variation 2, and 33.33% see the original.

To change the traffic allocation:

  • Click and drag the slider next to a variation.

OR

  • Click the number to the right of a slider and enter your desired percentage.

Click Next to validate this step.

Excluded traffic

The traffic that you don't assign to any variation is automatically attributed to Excluded traffic. These visitors see the original version of your pages.

Equal allocation per variation

Specify the percentage of traffic to divert to experiment variations.

For example, with three variations, a 75% diversion percentage allocates 75% of traffic to the variations and 25% to the original page. Kameleoon will then display each variation equally (25% of the time).
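The arithmetic behind this equal split can be sketched as follows (a minimal illustration, not Kameleoon code):

```python
def equal_allocation(diversion_pct: float, num_variations: int):
    """Split a diversion percentage evenly across variations.

    Returns (share_per_variation, share_for_original), in percent.
    """
    per_variation = diversion_pct / num_variations
    original = 100.0 - diversion_pct
    return per_variation, original

# With three variations and a 75% diversion percentage, each
# variation receives 25% of traffic and the original keeps 25%.
print(equal_allocation(75.0, 3))  # (25.0, 25.0)
```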

Different allocation per variation

To allocate different traffic percentages to each variation, use the sliders to adjust the desired percentage for each.

You can also click the percentage and enter the value you want to apply to the variation.

At any time, you can return to an equal distribution between the variations by clicking Allocate equally, just below the list of variations.

Allocation method

You select your experiment's traffic allocation method from the dropdown. You can choose one of the following options:

  • Manual: You manually set a static traffic allocation for your visitors.
  • Multi-armed Bandit: Kameleoon will automatically adjust your traffic allocation based on your variations' performance.
  • Contextual Bandit: Contextual Bandit personalizes variations based on specific visitor attributes. It dynamically selects the best variation for each visitor.
note

Contextual Bandit requires the Contextual Bandit and AI Targeting add-on. Contact your Customer Success Manager for more information.

Contextual bandits

Contextual bandits dynamically optimize traffic allocation in experiments using machine learning. They adapt in real time, redistributing traffic based on variation performance and user context to maximize effectiveness.

Key differences exist between multi-armed bandits and contextual bandits. Understanding these differences is critical for choosing the best method for your experiments:

  • Multi-armed bandits:
    • Multi-armed bandits optimize traffic distribution among multiple variations (arms) to maximize a defined goal, such as click rates or conversions.
    • Multi-armed bandits treat all users equally; no distinction is made based on user attributes.
    • Ideal for scenarios where user-specific data is unavailable or unnecessary, and the focus is on finding the best-performing variation for the overall audience.
  • Contextual bandits:
    • Contextual bandits incorporate additional user-specific data, like device type, location, or behavior, into decision-making.
    • Contextual bandits allow more personalized decisions, tailoring variations to specific users for improved outcomes.
    • The variability introduced by user attributes allows contextual bandits to optimize decisions in dynamic environments.

So, multi-armed bandits optimize traffic allocation uniformly across users, while contextual bandits leverage contextual data to make personalized decisions.
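The multi-armed bandit idea can be sketched with a minimal Thompson-sampling loop (an illustration of the general technique only, not Kameleoon's actual algorithm, which is described in the Statistical paper):

```python
import random

def thompson_pick(stats):
    """Pick a variation index by Thompson sampling.

    stats: list of (conversions, exposures) per variation.
    Each arm's conversion rate gets a Beta posterior; the arm with
    the highest sampled rate wins the current visitor. A contextual
    bandit would additionally condition these estimates on visitor
    attributes (device, location, behavior, ...).
    """
    samples = [
        random.betavariate(conversions + 1, exposures - conversions + 1)
        for conversions, exposures in stats
    ]
    return max(range(len(samples)), key=lambda i: samples[i])

# A variation converting at 30/100 receives far more traffic than
# one converting at 10/100, without ever being hard-coded as winner.
random.seed(0)
picks = [thompson_pick([(10, 100), (30, 100)]) for _ in range(1000)]
print(picks.count(1) / 1000)  # mostly arm 1
```

Over time, the weaker arm still gets occasional traffic, which is how the algorithm keeps checking that its current winner is still the best.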

If you would like more details on how Kameleoon’s contextual bandits work, read our Statistical paper.

Configuring contextual bandits

To enable contextual bandits:

  1. Navigate to the Finalization panel.
  2. Click Traffic allocation.
  3. Open the dropdown menu beneath Select the allocation method and select Contextual bandit.

By default, the algorithm will begin optimizing traffic based on predefined user attributes available in Kameleoon.

However, to fully leverage the power of contextual bandits, especially with the Contextual Bandit Premium feature included in the AI Predictive Targeting add-on (paid or trial), you can provide custom data as additional input to the machine-learning model. Custom data allows the algorithm to make even more accurate predictions by incorporating business-specific attributes (for example, CRM segments, purchase history, in-app behavior).

To activate the use of custom data in your experiment:

  • Go to your custom data configuration panel.
  • Enable Use this custom data as input for AI Predictive Targeting.

Once enabled, Kameleoon’s algorithm will use these attributes as part of the decision-making process to deliver the most relevant variation to each visitor.

To learn more about AI Predictive Targeting, read this article.

Advanced reallocation

note

Advanced reallocation is only available for online experiments.

The Advanced reallocation feature allows you to redistribute traffic among variations in your experiment. When applied, the traffic allocation is reset, and visitors who had previously seen a specific variation will be treated as new visitors. This can be particularly useful when you want to focus on a subset of variations or exclude certain variations from receiving further traffic.

Click Advanced reallocation at the top right of the traffic distribution step. In the panel that appears, choose which variations will be part of the reallocated traffic.

The reallocation takes effect once you click Reallocate and then Save in the top-right of the page.

Define goals

This step is mandatory unless you configured an integration (reporting tool).

Select one or more goals to activate Kameleoon as a reporting tool.

Available goals

To use Kameleoon as a reporting tool, you must define a conversion goal. A goal is what you want to improve with your A/B experiment.

Several goals are available:

  • Engagement: This goal is achieved if the visitor visits other pages after the landing page.
  • Click tracking: This goal is achieved if the visitor clicks on a specific element you defined.
  • Scroll tracking: This goal is achieved if the visitor scrolls beyond a specific part of your page.
  • Access to a page: This goal is achieved if the visitor reaches a page of your choice.
  • Number of pages viewed: This goal is achieved if the visitor visits a certain number of pages.
  • Time elapsed: This goal is achieved if the visitor spends a predefined amount of time on your website.
  • Custom goal: For more complex goals, you can create custom goals via a Kameleoon API call.

Create a new goal

To learn how to add a new goal, read this article.

Associate a goal to your experiment

Once you have created a goal, you need to associate it with your experiment.

  1. Click Goals in the Finalization page.
  2. Find your goal and click it.
  3. Click Next to validate this step.

Set up reporting tools

This step is mandatory unless you configured a goal.

Add a new integration

To learn how to add a new integration, read this article.

Activate an integration on an experiment

Once you've added a reporting tool to your list of integrations on the Integrations page, you can associate it with a campaign.

To do this:

  1. Click Integrations in the Finalization page.
  2. Find your desired tool and click it.
  3. Click Next to validate this step.

Simulate

Simulation mode allows you to check if:

  • Your variations or personalizations are displayed correctly.
  • Your campaign's targeting is configured correctly, and if not, understand why.
  • The goals you have defined convert.
  • Your different visitors see the right content at the right time.

For more information about simulation, read this article.

Estimate an experiment's duration

In the finalization panel of the editor, you can estimate an experiment's duration.

To do this, you must provide certain information:

  • Average number of visitors per day visiting the tested pages - The number of visitors your test will target daily across all of the test's variations.
  • Current conversion rate of the goal (used as a reference) - An estimate of the current conversion rate of the experiment's main goal, the one you are trying to improve.
  • Minimum Detectable Effect (MDE) - The smallest change in the goal metric you aim to identify, calculated relative to the control variation's mean. For example, with a control conversion rate of 1%, a 5% MDE means you can detect whether the rate falls below 0.95% or rises above 1.05%.
  • Desired reliability rate (95% by default, but you can change this value) - This setting lets you balance the risk of detecting an improvement that is not real (a false positive). A common value is 95%. Increasing it lowers your risk of a false positive at the cost of increasing the number of visitors needed to detect the same change.
note

This is an estimation; once your experiment is launched, the reliability index will inform you if your results are reliable. For more information, you can consult our documentation on the Results page.

The estimator automatically accounts for the traffic allocation and the number of variations.
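These inputs feed a sample-size calculation. The sketch below uses the standard two-proportion formula as a simplified illustration; it is not Kameleoon's exact estimator, and the `variations` parameter only mirrors the idea that more variations require more total traffic:

```python
from statistics import NormalDist

def estimated_days(daily_visitors, base_rate, mde_relative,
                   confidence=0.95, power=0.80, variations=2):
    """Rough A/B test duration estimate via a two-proportion z-test.

    base_rate:     current conversion rate of the reference goal.
    mde_relative:  Minimum Detectable Effect, relative to base_rate.
    variations:    total arms, original included (even traffic split).
    """
    p1 = base_rate
    p2 = base_rate * (1 + mde_relative)  # MDE is relative to control
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    n_per_group = ((z_alpha + z_beta) ** 2
                   * (p1 * (1 - p1) + p2 * (1 - p2))
                   / (p2 - p1) ** 2)
    return n_per_group * variations / daily_visitors

# 1% base rate, 5% relative MDE, 10,000 visitors/day, original + 1 variation.
print(estimated_days(10_000, 0.01, 0.05))
```

Small relative MDEs on low base rates drive the visitor requirement (and therefore the duration) up very quickly, which is why the MDE input matters so much in the estimate.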

note

You can also use our free A/B testing duration calculator to get a more precise estimate based on your traffic and conversion goals.

Launch

Launch immediately

When you complete all mandatory finalization steps, a green check icon appears.

We strongly recommend simulating your experiment, but it's not mandatory. Simulating checks your variations' display, your experiment's targeting, and whether the defined goals lead to conversions.

When you are satisfied with your variations and experiment, click Launch.

A Configuration summary panel allows you to check your experiment settings.

To modify settings, click a configuration option > Edit. When you are satisfied with your settings, click Launch in the bottom-right.

Your experiment is now online.

note

There may be a short latency time (up to 10 minutes) between the launch of an A/B experiment and its visibility on your website. Don't worry if your experiment does not appear immediately.

Schedule

You can schedule your experiment by defining a start date, an end date, or both.

To do so, you can either click the three-dots menu > Schedule:

Or click Schedule at the bottom of the Configuration summary.

A panel will open allowing you to schedule your experiment.

Advanced schedule allows you to set the experiment time zone and/or automate the experiment's conclusion. Automatic stops can be triggered when the reliability rate reaches and stabilizes at the configured value, or when the experiment reaches a defined traffic threshold.

We recommend that you avoid setting an end date for A/B experiments before launch: the confidence rate, not elapsed time, is the primary indicator of whether an experiment can be stopped or should keep running. However, defining an end date can be useful for experiments tied to specific events or timeframes. Whether or not an end date is set, always review the confidence rate before analyzing experiment results.