The evolution of A/B testing
By Nima Yassini
New Republique’s The Pulse Report, Australia’s most in-depth study into the top experimentation programs in the country, revealed that 100% of respondents used A/B testing as part of their testing mix, with just over half of evolving organisations also using A/B/n testing.
The diversity of experimentation types was higher among organisations with dedicated experimentation or product teams than among organisations with other program owners. A new mixed method emerging in the US aims to give experimentation practitioners better insight into ‘the why’ behind the behaviour.
Finding the ‘why’
A/B testing is a common entry point into experimentation. It is reasonably simple to set up and run, and it produces a percentage figure attached to the choice customers preferred. However, it restricts your experimentation in ways we are only now addressing.
The issue is that A/B testing is largely about quantifying customer behaviour – it gives two choices, one of which is ‘better’ than the other. The aim of A/B testing is to find the next experience that will test better, then do that again (and again), so you get incremental improvements based on comparing choices. However, it doesn’t reveal that B is the best possible experience, only that it is better than A, and both are limited by the quality of the practitioner’s hypothesis.
Mixed method experimentation changes all of this. It is designed to uncover sentiment – how someone feels towards an experience – so that organisations can better shape an experience that aligns with what the brand represents, or a feeling it wants to create.
‘Why’ is the most important question in UX. It brings in elements of customer sentiment – how a customer feels about the brand based on the product, design or features you’re presenting – and gives you qualitative data about your experiment. Mixed method is all about the ‘why’.
Comparing mixed method with A/B testing
Mixed method is a new research method in experimentation designed to uncover sentiment. It’s far more involved and complex than a standard A/B test, and takes longer, but the extra effort is worthwhile because it finds the ‘why’.
How does this work in practice? Say you’re a fashion retailer with an ecommerce channel. You have two models: one a size 6, the other a size 12. If, for example, you ran an A/B test on social sharing and the result was that customers preferred to share the image of the size 12 model 55% of the time, over the size 6 model 45% of the time, does that mean people prefer a more realistic size over an aspirational one?
The fact is the quantitative results from A/B testing don’t tell you either way – only that, between the two choices presented, one was preferred over the other. Mixed method helps you uncover why customers are sharing it – they could be asking for an opinion, making a suggestion, or requesting someone buy it – giving you more context. They could even be sharing it because they don’t like the image, which is negative sentiment and not something captured in the ‘win’ of 55% over 45%.
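To make the limitation concrete, here is a minimal sketch of everything the quantitative side of such a test delivers. It runs a standard two-proportion z-test on hypothetical share counts behind the 55%/45% split (the sample size of 1,000 visitors per variant is an assumption for illustration, not a figure from the report): the result can tell you the gap is statistically real, but nothing about the sentiment driving it.

```python
from math import sqrt, erf

def two_proportion_z(shares_a: int, n_a: int, shares_b: int, n_b: int):
    """Two-sided z-test comparing share rates between two variants.

    Returns (z statistic, p-value). This is the textbook pooled test,
    not a method prescribed by The Pulse Report.
    """
    p_a, p_b = shares_a / n_a, shares_b / n_b
    p_pool = (shares_a + shares_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: of 1,000 visitors shown each image, 550 shared
# the size 12 model and 450 shared the size 6 model (the 55%/45% split).
z, p = two_proportion_z(550, 1000, 450, 1000)
print(f"z = {z:.2f}, p = {p:.5f}")
```

Even with a clearly significant result like this, the output is a single number: it says the size 12 image ‘won’, and nothing more. The qualitative half of a mixed method study is what fills in whether those shares were admiration, mockery, or a request to buy.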
Mixed method integrates traditional UX research methods, tweaked for experimentation, with traditional A/B testing. What is new is the way these two methods are brought together to uncover sentiment, and how those findings lead to an experiment.
Also note that not everything belongs in a mixed method test. Enabling customers to filter by size through a dropdown menu or a button does not involve sentiment, so there’s no value in using a mixed method approach in a test like that. A good way to decide whether something belongs in a mixed method test is to ask, ‘how important is a customer’s feeling towards this product, design or feature?’ If feeling is not important, it’s a functional test; if feeling is central, it’s a sentiment test. Mixed method is ideal for sentiment-led testing.