Cross-Campaign Analysis

Cross-Campaign Analysis lets you filter or break down an experiment's data based on visitors who were also exposed to other experiments, features, or personalizations. It helps you understand how different elements on your site interact and affect overall user behavior, giving you a powerful way to interpret complex results.

Filter experiment results

  1. On the Results page of your experiment, open the Filter audience dropdown in the Audience tab (located in the right-hand side panel).
  2. Click Add filter.
  3. Select an option to filter your results:
    • Exposed experiments
    • Exposed personalizations
    • Exposed features
  4. Choose whether to Include (only show visitors exposed to these) or Exclude (only show visitors not exposed to these).
  5. Select the specific items you want to include or exclude:
    • Experiments: Click the checkbox next to the experiment's name to include the experiment and all its variations. Alternatively, click the checkboxes for specific variations if you only want to include a subset.
    • Personalizations: Select the specific personalizations.
    • Features: Select the specific features.
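Under the hood, an include/exclude filter amounts to keeping or dropping visitors based on whether their exposure set overlaps the campaigns you selected. The sketch below illustrates that logic with hypothetical visitor IDs, campaign names, and data structures (it is not the product's actual data model or API):

```python
# Hypothetical exposure data: visitor ID -> set of campaigns that visitor saw.
# All names here are made up for illustration.
exposures = {
    "v1": {"exp_checkout", "perso_banner"},
    "v2": {"exp_checkout"},
    "v3": {"perso_banner"},
}

def filter_visitors(exposures, targets, mode="include"):
    """Return the visitor IDs to keep.

    'include' keeps visitors exposed to at least one target campaign;
    'exclude' keeps visitors exposed to none of them.
    """
    targets = set(targets)
    if mode == "include":
        return {v for v, seen in exposures.items() if seen & targets}
    return {v for v, seen in exposures.items() if not seen & targets}

# v1 and v3 saw the personalization; only v2 did not.
included = filter_visitors(exposures, ["perso_banner"], mode="include")
excluded = filter_visitors(exposures, ["perso_banner"], mode="exclude")
```

The rest of the results page would then be computed only over the kept visitor set.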

Break down experiment results

Breakdowns segment your current experiment's results into groups based on exposure to other campaigns, so you can compare how each exposure group performed within your experiment.

  1. On the Results page of your experiment, open the Breakdown audience dropdown in the Audience tab (located in the right-hand side panel).
  2. Click Add breakdown.
  3. Select an option to break down your results:
    • Cross experiments
    • Cross features
    • Cross personalizations
  4. Select which experiments, features, or personalizations you want to use to segment your results.
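Conceptually, a breakdown is a group-by: each visitor lands in a segment defined by their variation in the current experiment plus whether they were exposed to the other campaign. This minimal sketch uses hypothetical records and campaign names to show that grouping (it does not reflect the product's internal implementation):

```python
# Hypothetical records: one row per visitor, with the variation they saw in the
# current experiment, whether they converted, and other campaigns they saw.
visitors = [
    {"variation": "A", "converted": True,  "exposed": {"perso_banner"}},
    {"variation": "A", "converted": False, "exposed": set()},
    {"variation": "B", "converted": True,  "exposed": {"perso_banner"}},
    {"variation": "B", "converted": True,  "exposed": set()},
]

def break_down(visitors, campaign):
    """Group visitors by (variation, exposed-to-campaign) and count
    conversions per segment as (conversions, total visitors)."""
    segments = {}
    for v in visitors:
        key = (v["variation"], campaign in v["exposed"])
        conv, total = segments.get(key, (0, 0))
        segments[key] = (conv + v["converted"], total + 1)
    return segments

for (variation, exposed), (conv, total) in sorted(break_down(visitors, "perso_banner").items()):
    label = "exposed" if exposed else "not exposed"
    print(f"{variation} / {label}: {conv}/{total} converted")
```

Each printed segment corresponds to one row you would compare on the results page.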

Interpretation and benefits

Use Cross-Campaign Analysis to answer critical questions about your platform's holistic performance:

  • Identify interference: Determine if a high-performing experiment is actually being negatively impacted when run simultaneously with a low-performing personalization.
  • Validate feature interactions: See how a new feature flag affects the conversion rate of an unrelated experiment, ensuring your releases are stable.
  • Measure synergies: Confirm whether two specific variations or campaigns deliver a greater uplift when combined than when run separately.
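One way to reason about synergy or interference from broken-down results is to compare the conversion rate of visitors exposed to both campaigns against what you would expect if the two uplifts were purely additive. The numbers below are invented purely for illustration:

```python
# Illustrative conversion rates per exposure group (hypothetical numbers):
# visitors who saw neither campaign, each one alone, or both together.
rates = {"neither": 0.10, "exp_only": 0.12, "perso_only": 0.11, "both": 0.16}

# Individual uplifts relative to the unexposed baseline.
exp_uplift = rates["exp_only"] - rates["neither"]      # 0.02
perso_uplift = rates["perso_only"] - rates["neither"]  # 0.01

# If the effects were purely additive, the "both" group would convert at
# baseline + sum of the individual uplifts.
expected_both = rates["neither"] + exp_uplift + perso_uplift  # 0.13

# A positive interaction term suggests synergy; a negative one suggests
# the campaigns interfere with each other.
interaction = rates["both"] - expected_both
print(f"interaction effect: {interaction:+.2f}")  # +0.03 -> the campaigns reinforce each other
```

In practice you would also check that each segment has enough traffic for the difference to be statistically meaningful before drawing conclusions.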