Testing PageBuilder content with Bandito

Content testing, also known as Bandito, is Arc XP's content optimization feature that lets you test different variants of your content against each other to determine which versions perform best with your audience and maximize engagement.

For example, you can test headlines, photos, bylines, or story blurbs to see which version generates more clicks. The system automatically tracks performance and routes more traffic to better-performing variants over time.

Content testing is a paid add-on feature. Contact Arc XP Customer Support if you are interested in using this functionality.

Prerequisites

To use the content testing feature in PageBuilder Editor, you must:

  • Have Bandito content testing functionality provisioned for your organization.

  • Have published pages on which to run your tests. Bandito content testing does not work on templates or unpublished pages.

  • Use non-static components, such as promo blocks, to run the test, as variants render on the client side. Bandito does not support static blocks, such as feed-based blocks, because they do not re-render in the reader's browser and cannot display the variant content correctly.

Getting started

Bandito content testing implements a multi-armed bandit test approach. Rather than splitting traffic evenly between variants for a set period, as in a traditional A/B test, Bandito dynamically routes more traffic to the better-performing variants as it learns from your users' behavior. This process allows for continuous optimization while the test is running.

The testing system works through two key mechanisms:

  • Exploitation score - measured through Click-Through Rate (CTR), the number of clicks divided by the number of times the content is served to your users.

    For example, if one variant was served 100 times and clicked 50 times, its CTR is 0.5. If another variant was served 50 times and clicked 40 times, its CTR is 0.8. The second variant has a higher CTR even though it received fewer total clicks, because CTR is calculated relative to the number of times the system served each variant.

  • Exploration score - measured through variance. In Bandito, test variance measures the confidence the experiment places in determining a variant's performance during the trial. A higher variance indicates less confidence in the data for that variant, which means the experiment is more likely to explore that variant further.

    The system calculates variance using this formula: log(total test serves) / individual variant serves.

Bandito combines the exploitation score (CTR) with the exploration score (variance) to determine which variant to show, as sketched in the example below. As an experiment collects more data, the variance scores drop and the experiment increasingly serves content based on how well each variant performs (exploitation), automatically routing more traffic to the higher-performing variants.
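
The following TypeScript sketch illustrates the scoring described above using the documented formulas. It is not Arc XP's implementation; the type and function names, and the way the two scores are combined (a simple sum), are assumptions for illustration only.

```typescript
// Illustrative sketch only -- not Arc XP's implementation. The interface,
// function names, and the additive combination of scores are assumptions.
interface VariantStats {
  id: string;
  serves: number; // times this variant was shown
  clicks: number; // times this variant was clicked
}

// Exploitation score: Click-Through Rate (clicks / serves).
function ctr(v: VariantStats): number {
  return v.serves > 0 ? v.clicks / v.serves : 0;
}

// Exploration score: variance, per the documented formula
// log(total test serves) / individual variant serves.
function variance(v: VariantStats, totalServes: number): number {
  if (v.serves === 0) return Number.POSITIVE_INFINITY; // an unserved variant is always explored
  return Math.log(totalServes) / v.serves;
}

// Serve the variant with the highest combined score (assumed here to be a simple sum).
function chooseVariant(variants: VariantStats[]): VariantStats {
  const totalServes = variants.reduce((sum, v) => sum + v.serves, 0);
  return variants.reduce((best, v) =>
    ctr(v) + variance(v, totalServes) > ctr(best) + variance(best, totalServes) ? v : best
  );
}

// Worked example from the text: 50 clicks out of 100 serves vs. 40 clicks out of 50 serves.
const variants: VariantStats[] = [
  { id: 'headline-a', serves: 100, clicks: 50 }, // CTR 0.5
  { id: 'headline-b', serves: 50, clicks: 40 },  // CTR 0.8
];
console.log(chooseVariant(variants).id); // 'headline-b'
```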

The system declares a winner when one variant performs significantly better than others for over five minutes.

You can add new variants at any time during the test. When introducing a new variant, it receives higher exposure initially due to high variance (low confidence), allowing the system to quickly gather data about its performance.
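
Continuing the hypothetical sketch above, the documented variance formula shows why a newly added variant receives extra exposure at first: with only a handful of serves, its exploration score is far larger than that of an established variant. The figures below are assumed for illustration.

```typescript
// Assumed example figures: a new variant with 5 serves vs. an established
// variant with 1,000 serves, in a test with 10,000 total serves.
Math.log(10_000) / 5;     // ~1.84  -> high variance, so the new variant is explored heavily
Math.log(10_000) / 1_000; // ~0.009 -> low variance, so this variant is judged mostly on its CTR
```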

While there is no limit to the number of variants you can test, Arc XP recommends ensuring that each variant receives at least 100 clicks so the results reach statistical significance. How many variants you can test effectively depends on your overall traffic volume and the duration of the test.

See the following documentation to start using Bandito for testing your content: