CXL Growth Marketing Minidegree: Week 4

Sara Miteva
4 min read · May 16, 2021

This is the fourth of 12 articles describing my journey through the Growth Marketing Minidegree at the CXL Institute.

At the end of last week, I started the A/B testing course with Ton Wesseling, founder of the conversion optimization agency Online Dialogue. He has over 15 years of experience in experimentation and has helped over 50 organizations with their conversion optimization challenges.

This was the most difficult course so far, and I had to rewatch some videos to make sure I remembered the most important points; there's really a lot of valuable information.

Hypothesis setting

Last week, I learned about the history of A/B testing, with its turning point in 2010, the year A/B testing started to be widely used in marketing. After covering the value of A/B testing, the data you need, the KPIs to choose, and the research to conduct, the course moves on to hypothesis setting.

The objectives here are to:

  • Understand why it’s good to write a hypothesis before you run a growth or research experiment
  • Learn how to write a proper hypothesis

According to the course, you need a hypothesis to get everyone on the team aligned on:

  • The problem
  • The proposed solution
  • The predicted outcome

To achieve this, the hypothesis should look like this:

If [I APPLY THIS], then [THE BEHAVIORAL CHANGE] will happen, (among [THIS GROUP]), because of [THIS REASON].

EXAMPLE:

If we show a happy 30-year-old person, then the conversion rate will increase, among our 25–35 target group, because we’ve reduced their fear of potential obstacles.
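
Just to make the structure concrete, here's a minimal sketch of that template as a small Python helper; the class and field names are mine, not from the course:

```python
# A minimal sketch of the hypothesis template as a Python helper.
# The dataclass and field names are illustrative, not from the course.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str           # [I APPLY THIS]
    expected_effect: str  # [THE BEHAVIORAL CHANGE]
    audience: str         # [THIS GROUP]
    rationale: str        # [THIS REASON]

    def __str__(self) -> str:
        return (f"If {self.change}, then {self.expected_effect}, "
                f"among {self.audience}, because {self.rationale}.")

print(Hypothesis(
    change="we show a happy 30-year-old person",
    expected_effect="the conversion rate will increase",
    audience="our 25-35 target group",
    rationale="we've reduced their fear of potential obstacles",
))
```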

After hypothesis setting comes the last lesson of the planning module, which covers prioritization. The goals here are to:

  • Understand when you should apply what kind of prioritization model
  • Learn how to use a prioritization model and transfer it to an A/B-test roadmap
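
As an illustration of how such a scoring-based model can feed an A/B-test roadmap, here's a small sketch using ICE-style scores (Impact, Confidence, Ease); the model choice, ideas, and numbers are my own invention, not the course's:

```python
# Illustrative ICE-style prioritization: score each test idea on
# Impact, Confidence, and Ease (1-10), then sort into a roadmap.
# The scoring model and all numbers are invented for illustration.
ideas = [
    {"name": "Happy-person hero image", "impact": 7, "confidence": 6, "ease": 8},
    {"name": "Shorter checkout form",   "impact": 9, "confidence": 5, "ease": 4},
    {"name": "Social-proof banner",     "impact": 5, "confidence": 7, "ease": 9},
]

for idea in ideas:
    idea["score"] = idea["impact"] * idea["confidence"] * idea["ease"]

roadmap = sorted(ideas, key=lambda i: i["score"], reverse=True)
for rank, idea in enumerate(roadmap, start=1):
    print(f"{rank}. {idea['name']} (score {idea['score']})")
```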

Execution

After we've planned our A/B test, it's time to execute it. That's why the next lesson covers the design, development, and QA of our A/B test. The most important things to remember about the design are:

  • We only want one challenger, not two, three, or more.
  • The change should be visible.
  • More than one change within the challenger is okay, but we need to think about optimization.
  • The design should mirror the hypothesis.

When it comes to development, remember this:

  • Do not use the WYSIWYG editor.
  • It's an experiment; if it works, it works.
  • If you can't make it work in time, propose design changes.
  • Consider injecting client-side code in the default (control) variant as well.
  • Add measurements and extra analytics events to your code.

Finally, some things to consider regarding QA:

  • Think of device/browser combinations
  • Don’t disrupt page interactions

Moving on, we also need to configure our test, which is most commonly done in Google Optimize. Calculating the length of the A/B test is also important, and the course presents a test-length calculator you can use. There are also tips on how to shorten your A/B test and how NOT to shorten it, together with a reminder NOT to continue the A/B test once your planned time is over.
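
To give a feel for what such a calculator does, here's a rough sketch using statsmodels to estimate the required sample size per variant and the resulting run length; the baseline rate, expected uplift, and traffic figures are invented:

```python
# Rough estimate of A/B test length with statsmodels.
# Baseline rate, target rate, and traffic are invented numbers.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05           # current conversion rate (5%)
target = 0.055            # rate we hope the challenger reaches (+10% relative)
weekly_visitors = 10_000  # eligible traffic per week, split 50/50

effect = proportion_effectsize(target, baseline)  # Cohen's h
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

weeks = 2 * n_per_variant / weekly_visitors  # control + one challenger
print(f"~{n_per_variant:,.0f} users per variant, ~{weeks:.1f} weeks at current traffic")
```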

Next comes the monitoring phase. Here, the course mentions some stats we need to monitor during the A/B test, as well as indicators that you should stop a test:

  • Is something broken? > YES!
  • Is there an SRM (sample ratio mismatch) error? > YES! (see the sketch below)
  • Are we losing too much money? > YES!
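
For the SRM check specifically, one common approach (my sketch, not the course's code) is a chi-square test comparing the observed user counts per variation against the planned split:

```python
# Sample ratio mismatch (SRM) check: with a 50/50 split, user counts per
# variation should be roughly even. The counts below are invented.
from scipy.stats import chisquare

observed = [10_150, 9_640]           # users bucketed into control, challenger
expected = [sum(observed) / 2] * 2   # what a 50/50 split predicts

stat, p_value = chisquare(observed, f_exp=expected)
if p_value < 0.01:
    print(f"Possible SRM (p = {p_value:.4f}): investigate before trusting results")
else:
    print(f"No SRM detected (p = {p_value:.4f})")
```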

Results

After the execution come the results of the A/B test. Here’s what we need to do when we start analyzing the outcomes:

  • Analyze in the analytics tool, not in the test tool.
  • Avoid sampling.
  • Analyze users, not sessions.
  • Analyze users who have converted, not users and total conversions (see the sketch below).
  • Check that the population of users that have seen the test is about the same per variation (SRM error checks).
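
As an illustration of that user-level analysis, here's a sketch of a two-proportion z-test on converted-user counts, as you might export them from an analytics tool; the counts are invented:

```python
# Two-proportion z-test on users who converted, per variation.
# Counts would come from your analytics tool; these are invented.
from statsmodels.stats.proportion import proportions_ztest

converted = [540, 645]       # converting users in control, challenger
exposed = [10_150, 10_080]   # users who saw each variation

z_stat, p_value = proportions_ztest(count=converted, nobs=exposed)
control_rate, challenger_rate = (c / n for c, n in zip(converted, exposed))
print(f"control {control_rate:.2%} vs. challenger {challenger_rate:.2%}, p = {p_value:.4f}")
```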

When there's no winner and the result is inconclusive, that doesn't mean the assumption was bad; we just weren't able to prove it. The variation could still be implemented, but there won't be any significant business-case impact. You shouldn't dig into all sorts of segments just to find a winner.

Once you get a winner, implement it ASAP.

However, don't expect conversions to immediately go up by a certain percentage. That was just the measured result; you still don't know the real difference. Dive into segments to find out who caused the change in behavior.
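
One simple way to do that dive, sketched here with pandas and invented per-segment counts, is to compare conversion rates per variation within each segment:

```python
# Break the measured effect down by segment (all numbers invented).
import pandas as pd

df = pd.DataFrame({
    "segment":   ["mobile", "mobile", "desktop", "desktop"],
    "variation": ["control", "challenger", "control", "challenger"],
    "users":     [6_000, 5_900, 4_150, 4_180],
    "converted": [270, 375, 270, 270],
})
df["rate"] = df["converted"] / df["users"]

# Pivot to see which segment actually drove the overall change.
print(df.pivot(index="segment", columns="variation", values="rate").round(3))
```

In this made-up example, mobile users drove the whole uplift while desktop stayed flat, which is exactly the kind of insight the segment dive is meant to surface.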

Then comes the lesson about presenting your outcomes. Here, you’ll learn:

  • What information is valuable to present to which people
  • How to create an A/B-test outcome presentation template that leads to action

The next lesson is about business case calculation, focusing on how to make a proper business case for your A/B-testing program (a rough sketch follows the list below). Depending on its outcome, the next step for your program should be one of:

  • Increase budget (more A/B tests)
  • Increase knowledge (better A/B tests)
  • Decrease budget (fewer A/B tests)
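
As a back-of-the-envelope illustration of such a business case (every number is invented, and real uplifts overlap rather than simply adding up), the expected value of the program can be set against its cost:

```python
# Back-of-the-envelope business case for an A/B-testing program.
# Every number is invented, and the simple product below ignores
# that uplifts overlap and decay in reality.
tests_per_year = 24
win_rate = 0.25              # share of tests that produce a winner
avg_uplift = 0.03            # average relative uplift of a winner
annual_revenue = 2_000_000   # revenue flowing through the optimized funnel
program_cost = 150_000       # tooling and people per year

expected_gain = tests_per_year * win_rate * avg_uplift * annual_revenue
print(f"Expected gain ${expected_gain:,.0f} vs. program cost ${program_cost:,.0f}")
```

If the expected gain clearly exceeds the cost, increasing the budget makes sense; if not, the knowledge-building or budget-cutting paths from the list above apply.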

Finally, we learn how to scale up testing and share insights. There’s also a bonus lesson, which focuses on understanding what 20% more data means for the A/B-testing program.

This course ends with an exam of 30 questions. I have to admit that the questions were quite challenging, and it took me a while to think them through. This is what I like about CXL's programs in particular: it's not just watching videos, you really get to learn and remember things, because you know that in the end you can't get the certificate without proving you've gained the knowledge.

At the end of this week, I started the Statistics Fundamentals for Testing course. I have to admit that I wasn't a big fan of math and statistics back in uni, so I expect this to be a challenge. But we'll see how it goes next week :)

Sara Miteva

Senior Technical PMM @ Checkly | Secure your app's uptime with Monitoring as Code | https://www.linkedin.com/in/sara-miteva/