How does SEO testing sit alongside product testing?

There’s a question we hear all the time from companies considering an SEO testing program for the first time: “Can we run SEO tests alongside the product, UX, and CRO tests we’re already running, or will they interfere with each other?”

It comes up all the time because we are typically talking to companies that are already quite advanced in their general approach to testing as an organisation - teams with a test-and-learn mentality who are testing the most accessible channels and activities (e.g. paid search, conversion rates) and are generally well along the testing maturity curve. SEO testing is newer, and most people we speak to want to add it to their existing testing mix.

The good news is that it’s possible - and indeed, desirable - to run product testing alongside SEO testing. That’s the punchline. Let’s get into the details.

The first thing to realise as you plan to add SEO testing into the mix is that if you are already doing any on-site SEO, your SEO and product development programs already need to work well together. In my experience, that existing relationship is the best place to start.

The key to a successful rollout of SEO testing is to build on the processes and priorities your organisation already uses to collaborate on SEO and product changes. In my experience, this starts by asking: “What happens right now when the SEO team believes that a user-facing change will benefit organic visibility?”

In the highest-performing teams I’ve come across, the answer is based on tight collaboration founded on mutual respect. The SEO team needs to believe in the value and effectiveness of the work the product team does, and the product org needs to internalise the fact that without organic search traffic, most web-centric businesses would be in dire trouble, if not failing entirely. From this position of mutual trust, teams can build shared processes for proposing, prioritising, and reviewing changes that touch both domains.

In practice, this might include conversation starters like “We believe this change will improve organic visibility - where does it fit on the product roadmap?” or “This upcoming product test runs on pages that earn significant organic traffic - should we measure the SEO impact too?”

I’m very aware that not all organisations have relationships that are this functional, so if yours doesn’t sound like this, then planning to roll out SEO testing is a great opportunity to reset these relationships and make things more productive.

Most organisations that we work with came to us running product / UX / CRO testing but not yet running a sophisticated SEO testing program. We typically advise them that the first stage of building SEO testing into the process is to swap “test the SEO impact of change X” into the process wherever the current process reads “make change X requested by the SEO team”. Exactly what that looks like will differ for different kinds of hypothesis.

(Note: we tend to get the question that triggered this post this way around - whether introducing SEO testing will impact product tests - but product changes and product tests can interfere with SEO efforts as well! Most of my recommendations here can be read in reverse, as guidance on rolling out product changes and tests without disrupting SEO performance or SEO tests, but for some reason that direction seems to be a much smaller concern for the people we speak to. Perhaps this is simply down to the relative maturity of the two testing disciplines.)

I am a fan of using explicit RACI charts to keep track of who should be responsible, accountable, consulted, and informed for each change and each test. Alongside the RACI chart, it helps to give each pair of proposed tests a traffic-light rating: “red” for direct conflicts, “amber” for tests that could conflict, and “green” for tests that are clearly independent.
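For illustration only - the rows and RACI assignments below are invented examples to adapt, not a prescription:

```
Change or test                          SEO team   Product team   Engineering
SEO test on shared page templates       R/A        C              R
Product test on organic landing pages   C          R/A            R
SEO-only change (e.g. title tags)       R/A        I              I
Product change to the checkout flow     I          R/A            R
```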

Some tests (the ones denoted “red” above) are clearly not possible simultaneously. If the SEO team wants to move an element up the page while the product team wants to remove that element from the page entirely then no flow chart or process diagram is going to resolve the conflict. In these situations you will have to dig into the underlying thinking and reasoning to understand whether one or other priority should win out for the business, or whether you need to run a more sophisticated set of tests to understand the relative value of different moves.

With amber tests, however, it’s a different story. These are the ones I denoted as “could conflict” above. Typically this happens when both teams are considering tests that modify the user experience on the same page but, unlike with red tests, the changes are potentially compatible. The easiest version to understand is something like this: the product team wants to test a new design for an element on a page, while the SEO team wants to test moving that same element higher up the page. Each team is changing something different, so both tests can run - but they are touching the same part of the same template.

Any combination of test results is possible: the product test can win or lose independently of whether the SEO test wins or loses.

SEO tests divide up pages, but give the same user experience to every user (including googlebot) on any given page. Product tests divide up users on each page, but for any one user make the same change to all pages in the test. This means that as long as they don’t mess with each other’s changes directly - such as in the example outlined above - the tests can be what is called orthogonal. Orthogonal tests can be analysed independently even if they ran over the same set of users and pages.
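As a minimal sketch of why that independence holds - the hashing scheme, salts, and 50/50 split here are illustrative assumptions, not how any particular testing platform works:

```python
import hashlib

def bucket(key: str, salt: str) -> str:
    """Deterministically assign a key to one of two equal buckets."""
    digest = hashlib.sha256(f"{salt}:{key}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

# SEO test: split by PAGE. Every visitor to a given page - including
# Googlebot - sees the same version of that page.
def seo_group(page_url: str) -> str:
    return bucket(page_url, salt="seo-test-listings")

# Product test: split by USER. A given user sees the same change on
# every page in the test.
def product_group(user_id: str) -> str:
    return bucket(user_id, salt="product-test-cta")

# The two assignments hash different keys with different salts, so a
# page's SEO group tells you nothing about a user's product group: the
# splits are independent, which is what lets each test be analysed on
# its own.
print(seo_group("/listings/london"), product_group("user-1234"))
```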

In a situation like this, we have all four areas of the quadrant in play, as laid out below.
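Sketched as a table (my reconstruction from the column and row descriptions that follow):

```
                                  Column A:            Column B:
                                  SEO control pages    SEO variant pages

Row X: product control users     control page,        variant page,
                                 control experience   control experience

Row Y: product variant users     control page,        variant page,
                                 variant experience   variant experience
```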

The SEO test compares the performance of column A vs column B: all traffic to the SEO-control pages (regardless of which product experience each user saw) against all traffic to the SEO-variant pages.

The user test, meanwhile, compares the performance of row X vs row Y: all users who saw the product control experience (across both groups of pages) against all users who saw the product variant.

For these amber tests, it is a judgement call whether there is so much conflict (e.g. the SEO test wants to move the element that product is modifying right to the bottom of a very long page) that the signal may be drowned out in the noise, or whether the changes are essentially independent. If you can imagine any of the four sections of the quadrant being the right answer, then in general you can carry on and run the two tests in parallel without them clashing or affecting each other’s methodology.
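To make “analysed independently” concrete, here is a toy sketch. All numbers are invented, and a real SEO analysis would use a time-series counterfactual on organic sessions rather than a raw sum, but the marginalisation logic is the same:

```python
# Sessions and conversions for each cell of the quadrant.
# Keys are (SEO page group, product user group).
cells = {
    ("A", "X"): {"sessions": 10_000, "conversions": 300},
    ("A", "Y"): {"sessions": 10_000, "conversions": 330},
    ("B", "X"): {"sessions": 11_500, "conversions": 345},
    ("B", "Y"): {"sessions": 11_500, "conversions": 391},
}

def conversion_rate(axis: int, label: str) -> float:
    """Rate for one arm of one test, summed over the arms of the other."""
    matching = [v for k, v in cells.items() if k[axis] == label]
    return sum(v["conversions"] for v in matching) / sum(v["sessions"] for v in matching)

# The SEO test reads down the columns: A vs B, across all users.
print(f"SEO control {conversion_rate(0, 'A'):.2%} vs SEO variant {conversion_rate(0, 'B'):.2%}")
# The product test reads across the rows: X vs Y, across all pages.
print(f"Product control {conversion_rate(1, 'X'):.2%} vs product variant {conversion_rate(1, 'Y'):.2%}")
```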

There will always be subtle edge cases.

A classic question comes from the possibility that the SEO test brings in a different blend of traffic that skews the conversion rate: the product test might be a winner on SEO-variant pages but a loser on SEO-control pages, while the SEO test itself fails, so the SEO team wants to roll out the control. This situation is very difficult to identify or analyse - but it is also likely to be rare, and we fall back on our common mantra that we are doing business, not science, and make pragmatic decisions. If we wanted to be extremely confident in every single test result, we would need to go slower and run more tests of the various combinations, but speed has a benefit all of its own, and in most situations our experience has been that biasing for speed and cadence wins out across the portfolio of possible tests.
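A toy numeric sketch of that interaction (all figures invented):

```python
# (sessions, conversions) per product arm, split by SEO page group.
seo_control_pages = {"product control": (5_000, 250),   # 5.0%
                     "product variant": (5_000, 235)}   # 4.7% - variant loses
seo_variant_pages = {"product control": (4_000, 160),   # 4.0%
                     "product variant": (4_000, 188)}   # 4.7% - variant wins

for name, arms in [("SEO control pages", seo_control_pages),
                   ("SEO variant pages", seo_variant_pages)]:
    rates = {arm: conv / sess for arm, (sess, conv) in arms.items()}
    print(name, {arm: f"{rate:.1%}" for arm, rate in rates.items()})

# If the SEO test loses and the control experience is rolled out to all
# pages, the product-test win measured on the (now discarded) SEO
# variant pages no longer describes the traffic the site will receive.
```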

From a theoretical SEO perspective, you could also object that user signals may be a ranking factor, and that by showing a different user experience to some percentage of a page’s visitors you may muddy the waters of the pure SEO test. Given the amount of personalisation present on many websites, the difference between logged-in and logged-out experiences, and the prevalence of UX and CRO testing across the whole web, my view is that this is very unlikely to affect the outcome of an SEO test. We should instead take the pragmatic view that we are looking for SEO impacts large enough to be robust to this kind of confounding factor. In particular, we want to roll out SEO winners robust enough to survive future product test iterations like the UX test in question.

I’ve focused most of this post on how existing product and SEO processes can fit together as you add SEO testing into the mix. There are, however, obvious logical extensions of the underlying questions. As you roll out SEO testing, you are likely to move quickly from asking “did this change improve our organic visibility and traffic?” to asking “what was the full business impact of this change, all the way through to conversions and revenue?”

The answer to this question is full funnel testing, which my colleague Craig wrote more about here. This is a methodology we have developed that enables us to run a single test measuring both the conversion rate impact and the visibility impact of a change in one go.

It is significantly more complicated and requires true integration of product and SEO thinking, so it’s out of scope for this first question of how to move from SEO recommendations to SEO testing without disrupting the product team’s workflow. But I thought it worth mentioning as the next stage up the SEO testing maturity curve: it lets you connect SEO initiatives not only to visibility and traffic, but all the way through to conversions and revenue.

Product testing’s relationship to SEO testing should be essentially indistinguishable from the current relationship between product testing and (untested) SEO.

Instead of deploying SEO changes untested after whatever conversation would normally happen between the product team and the SEO team, those changes would now go through SEO testing rather than straight to deployment.

Everything else I’ve written in this post is about the subtleties of those conversations between the SEO and product teams! The fundamental point is that if you are currently running product / UX / CRO tests and running any on-site SEO initiatives, then it will be possible to integrate SEO testing into the workflow without changing anything on the product / UX / CRO testing side.

If you have any questions, drop me a line on Twitter to discuss: @willcritchlow.
