A/B Testing consists of a randomized experiment with two flow variants, A and B, which are run simultaneously to determine which version improves your business metrics. You should use this feature when you want to test a hypothesis about a flow setup you think will optimize your payment results.
An example of a hypothesis: I think Payment Service Provider X will have higher authorization rates and lower costs when compared to Payment Service Provider Y.
Even if you think your hypothesis is obvious, we highly encourage you to test it before making significant changes in your flows.
Transactions of the flow under an A/B test are randomly sampled between variants A and B of the experiment. In addition, some transactions are allocated to a Control Group that serves as a sanity check to evaluate whether the sampling is biased.
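The random sampling above can be sketched as a simple three-way assignment. This is an illustrative assumption, not the platform's actual sampling logic: the group names, the experiment share, and the control share are hypothetical parameters.

```python
import random
from collections import Counter

def assign_group(experiment_share=0.30, control_share=0.05):
    """Illustrative sketch: route a transaction to variant A or B of the
    experiment, to the Control Group, or leave it on the regular flow.
    The shares are hypothetical, not platform defaults."""
    r = random.random()
    if r < control_share:
        return "control"  # sanity check for sampling bias
    if r < control_share + experiment_share:
        # experiment traffic is split evenly between the two variants
        return "A" if random.random() < 0.5 else "B"
    return "regular"

# Simulate 10,000 transactions and inspect the resulting split
random.seed(42)
counts = Counter(assign_group() for _ in range(10_000))
```

Because assignment is random per transaction, the observed counts only approximate the configured shares; they converge as volume grows.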
You need at least one active flow to create an A/B Testing experiment. You then define the hypothesis you want to test in that flow, the percentage of the total volume you want to allocate to the experiment, and the flow B settings that reflect your hypothesis. You can remove, add, edit, and reorder all templates of your experiment flow.
Finally, publish your A/B Testing experiment to start running your test with actual transactions and evaluate the results. You can monitor the performance of all your experiments with the Flow Metrics; metrics related to an experiment are displayed alongside regular flows in the Flows section of your Console.
Be careful not to jump to a conclusion too quickly. As with any A/B test, you need statistical significance for conclusive results, so it might take some time before your experiment has enough transactions.
Once your experiment reaches statistical significance, you can promote variant B to process all transactions of that flow, or keep using variant A.
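One common way to check whether a difference in authorization rates is statistically significant is a two-proportion z-test. The sketch below uses only the standard library; the transaction counts are made-up example numbers, and the 0.05 threshold is a conventional choice, not a platform requirement.

```python
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test: is variant B's authorization
    rate significantly different from variant A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    # pooled proportion under the null hypothesis (no difference)
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical figures: 8,200/10,000 authorized on A vs 8,450/10,000 on B
z, p = two_proportion_z_test(8200, 10_000, 8450, 10_000)
significant = p < 0.05  # conventional 5% significance level
```

If `p` is below your chosen threshold, the observed difference is unlikely to be due to chance alone, and promoting the winning variant is better supported.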