Organisations keen to begin their experimentation journey often establish a team with little trouble but then struggle to scale, which stalls their maturity. At first it can be hard to tell whether the problem lies with the experimentation program itself or with scaling, so here are some issues to look for when diagnosing blocks to maturity.
Add new test types
Maturity is cyclical, not an endpoint. The main challenge is that the process, expertise and experience you build around one type of experimentation don't translate to a different type of experiment. Being mature in one type of test doesn't make you adept at other types of testing, because the way you set up one type of test is different to how you set up another.
So, just because you're mature in A/B testing and run those tests at high velocity doesn't mean you'll switch straight to expert level on personalisation. Adding personalisation will require you to start from the beginning in building the right skills, processes, tools and systems – they are not transferable from A/B testing. You need to reinvent your processes to make it work.
Beware of silos
It is theoretically possible to mature more than one process at once, but you will be required to split your attention between them. I've personally never seen anyone do it well simultaneously; those who focus on one at a time find it easier to make work logistically, and they show returns sooner because the effort is much more contained.
Bigger organisations try to accelerate the maturity process by dedicating separate teams to specific types of tests – for example, one focuses on personalisation and another on conversion rate optimisation – under the umbrella of one big experimentation team. What tends to happen is that the teams end up in silos and things get disjointed: there are two different, sometimes competing, plans chasing different results. The teams don't talk to each other and they don't learn from each other.
In any test, you’re looking at message, placement, position and form. A really good experiment takes all four of these features into consideration.
Siloed teams will each test either content or structure to deliver a better experience, which means they confine their thinking to the action they're testing rather than looking at the wider customer journey. A test might lose because of where in the journey it was delivered, but inside a silo you might not see that the message itself was right – and so never run another test at a different point in the customer journey to confirm that hypothesis.
Each team can probably expect a micro-uplift, whereas if they worked together they would start delivering much better results. In the rush to market, businesses can make massive errors like this.
Identify testing gaps
One way we gauge whether we're getting it right is by watching an experiment go through the production process. We count the number of times it comes back because certain details were missing or certain things were overlooked, and we actively track this bounce rate to find out which part of the process isn't working.
We have a large retail client who came to us believing they were very mature in their experimentation. When I asked how they organised their experiments, the client said, "I just look for stuff I don't like", which meant they were designing an experience for themselves rather than using customer data to drive what they should be fixing or improving. They looked at a friction point to justify a change they already wanted to make, which is the wrong way to approach an experiment.
Create a culture shift
Finally, experimentation is a cultural shift: you can't have just one person or one team doing it – even if they do it well – because experimentation relies on so many others to deliver the full benefits. Once a business changes its mindset from using experimentation to increase sales to using it to increase learning (which will eventually lead to increased sales), it creates a scaffold to scale and a firmer path to maturity. I have previously written on the reasons why CRO needs a culture of experimentation to thrive, which you can read about here.
One of our clients has had an experimentation program for many years but is on round four of trying to increase velocity because they hadn't cleared the roadblocks within their organisation to go further. While they were mature at testing within one team, they were immature in their ability to scale because the culture of experimentation hadn't penetrated the whole organisation.
You have to be aware that you may be mature in some ways but immature in others, and get help in the areas where you're weak so you can scale effectively while still delivering quality. Your processes are not just about the tests you're running but also about making your ability to scale more robust.
There's no single symptom that will tell you the obstacle to experimentation maturity is an inability to scale, but taken together these issues might give you a picture of what needs attention or assistance.
This article is the fourth in a five-part series breaking down the key findings of the 2019 Australian Experimentation Maturity Index. To read more about why we're tracking experimentation maturity in Australia, please click here.
For a copy of the report or for further information, please don't hesitate to get in touch via firstname.lastname@example.org.
We have also recently launched the New Republique Podcast, Australia's first podcast dedicated to all things CRO (conversion rate optimisation), experimentation and personalisation. Listen via iTunes or Spotify.