Post three in our continuing series on Conversion Rate Optimization focuses on using the baseline data you’ve gathered to plan, prepare, and launch your tests.
First Steps: Analytics Data and User Feedback
1.) Establish top-level findings
These are the main takeaways from your data before you start slicing and dicing for more granular insights. Starting with a top-level approach will ensure that you don’t miss the forest for the trees.
Conversion Metrics — Your end-conversion action (or “macro-conversion”) is the place to start. This is the user action around which all tests and optimizations will be centered, so your analysis should center on it as well.
- Conversion Volume
- Conversion Rate
- Return on Investment (ROI) / Return on Ad Spend (ROAS)
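As a quick illustration, these three metrics reduce to simple arithmetic. The traffic, revenue, and spend figures below are invented, and ROI here treats ad spend as the only cost, which is a simplification:

```python
# Illustrative only -- all figures below are made up.
visitors = 12_000          # sessions in the reporting period
conversions = 300          # completed macro-conversions (e.g. purchases)
revenue = 15_000.00        # revenue attributed to those conversions
ad_spend = 5_000.00        # total paid-media cost for the period

conversion_rate = conversions / visitors   # 0.025 -> 2.5%
roas = revenue / ad_spend                  # revenue per dollar of ad spend
roi = (revenue - ad_spend) / ad_spend      # profit per dollar of ad spend

print(f"Conversion rate: {conversion_rate:.1%}")   # Conversion rate: 2.5%
print(f"ROAS: {roas:.2f}")                          # ROAS: 3.00
print(f"ROI: {roi:.0%}")                            # ROI: 200%
```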
Behavior Metrics — How users experience your site — usability, design, navigability, functionality, etc. — plays a huge role in driving conversions, so it’s important to measure and optimize toward this goal.
- Bounce Rate
- Page Exit Rate
- Time on Site
- Number of Pages Viewed
- Views of Key Pages
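Two of these metrics are easy to mix up: bounce rate is the share of sessions that viewed only one page, while a page’s exit rate is the share of that page’s views that were the last view of a session. A rough sketch using invented session logs:

```python
# Hypothetical session logs: each session is the ordered list of pages viewed.
sessions = [
    ["/home"],                           # a bounce: single-page session
    ["/home", "/product", "/cart"],
    ["/home", "/product"],               # session exits on /product
    ["/product", "/cart", "/checkout"],
]

# Bounce rate: single-page sessions / all sessions
bounce_rate = sum(len(s) == 1 for s in sessions) / len(sessions)

def exit_rate(page):
    """Share of `page` views that ended a session."""
    views = sum(s.count(page) for s in sessions)
    exits = sum(s[-1] == page for s in sessions)
    return exits / views

print(f"Bounce rate: {bounce_rate:.0%}")                      # 25%
print(f"/product exit rate: {exit_rate('/product'):.0%}")     # 33%
```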
User Survey Findings — Look for trends in your feedback: things that stick out across a large number of respondents.
Within all three of these areas, the idea is to start with the obvious. Here are a few examples of top-level findings:
- We’re losing money!
- Our visitor feedback is overwhelmingly negative, and users are generally pointing to the same reason why (product-related, pricing-related, customer service-related, etc.)
- Our users are abandoning their shopping carts with a tragically high degree of reliability.
These are all big-picture problems which need addressing. A simple, top-level analysis will produce these important insights quickly, and provide more context as you begin to drill down for a more granular perspective.
2.) Break down the data — You can start by breaking data down in the following ways:
- By Channel — Look at each channel and assess both traffic volume and response. Volume is important because you don’t want to create a bunch of tests addressing pain points only displayed by a specific user segment representing 2% of your total traffic. Response is important because different channels correspond to different engagement and conversion intent, and this context is important as you consider optimizations. For example, your direct traffic may display a higher level of engagement than your paid search channels simply due to the nature of the channel.
- By Demographic — To the extent that you can get your hands on this data, it’s helpful to segment by the different types of people using your site — men vs. women, different age ranges, geographical segmentation, etc. Use both analytics and user survey data for this.
Bring this segmented analysis to the three areas mentioned before — conversion, behavior, user response. You want two categories in each area: top-level findings, and segmented, granular findings.
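The channel breakdown described above is essentially a group-by. A minimal sketch — the visit records and channel names are fabricated for illustration — that reports both volume and conversion rate per channel:

```python
from collections import defaultdict

# Fabricated visit records: (channel, converted?)
visits = [
    ("direct", True), ("direct", False), ("direct", False),
    ("paid_search", False), ("paid_search", False),
    ("paid_search", True), ("paid_search", False),
    ("email", True), ("email", False),
]

totals = defaultdict(lambda: [0, 0])   # channel -> [visits, conversions]
for channel, converted in visits:
    totals[channel][0] += 1
    totals[channel][1] += converted

for channel, (n, conv) in sorted(totals.items()):
    print(f"{channel:12s} volume={n:3d}  conv_rate={conv / n:.1%}")
```

Volume matters here for exactly the reason the article gives: a segment with three visits shouldn’t drive your test roadmap, no matter how bad its conversion rate looks.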
3.) Establish a list of key findings — You’ve looked at your data from both a top-level and a more granular view. Now make a list of your findings and consider possible tests to run. It’s helpful to make a chart for this:
| Finding | Possible Tests |
| --- | --- |
| Users are abandoning their carts when prompted to sign in or create an account | Consider removing the sign-in requirement entirely; make it easier to create an account (fewer fields required, etc.) |
| Users are not adding products to their shopping carts at the rate you’d like | Create a stronger call to action on product pages; add an incentive (discounted shipping, etc.) to product pages; make the “Add to Cart” button more prominent |
| Homepage bounce rate is too high | Remove cluttered content; make the page easier to digest; make the color scheme more attractive; provide enhanced navigation options (clearer navigational tabs, etc.) |
| Users are not viewing our top-margin products | Make those products more prominent; provide links to their product pages from the homepage |
You now have your list of significant user pain points, and have a number of potential solutions to test for each. Next, plan your tests:
Start With The Most Important Tests
Focus on the areas which represent the most immediate concern. Using the example above, the most pressing optimization to test would likely be the removal of the sign-in requirement from the checkout funnel. After all, this is directly standing in the way of users completing their purchase!
Run a Quality Assurance Test on New Content
This should be obvious, but it’s often overlooked. It is critical that your test content is ready for prime time before you launch. This means both quality (making sure it’s visually appealing, features correct information, is consistent with your brand, etc.) and functionality (does it work? does it actually do what it’s supposed to do?). For example, make sure that removing the sign-in requirement doesn’t break your checkout system. Remember, these tests are real changes you’re making to your website. Don’t skimp on quality assurance just because they may be temporary.
Establish A Timeline For Testing
Make a plan. If you have ten tests you want to run, create an order and a schedule for running them. Remember, run the tests addressing the largest problems first. Taking an organized approach like this will help you keep tests isolated from one another, and thus, will make it easier to draw the right conclusions when you get the results.
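A schedule like this can live in a spreadsheet, but it can also be a few lines of code. The toy sketch below — test names, impact scores, and durations are all invented — sorts the backlog so the largest problems run first, then lays the tests out back-to-back so they never overlap:

```python
from datetime import date, timedelta

# Hypothetical backlog: (test name, estimated impact 1-10, duration in days)
backlog = [
    ("Stronger product-page CTA", 6, 14),
    ("Remove sign-in requirement at checkout", 9, 14),
    ("Declutter homepage", 4, 21),
]

# Largest problems first, then run tests one after another
# so their results stay isolated from each other.
start = date(2024, 1, 1)
schedule = []
for name, impact, days in sorted(backlog, key=lambda t: -t[1]):
    schedule.append((name, start, start + timedelta(days=days)))
    start += timedelta(days=days)

for name, begin, end in schedule:
    print(f"{begin} -> {end}: {name}")
```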
Interpreting Your Data
Once you’ve run your tests, you’ll have new data to compare to your baseline data. This is how you’ll know whether the new variations you’ve tested should be implemented or eliminated.
A few things to consider while making this assessment:
- Segment — Just as with your analysis of your baseline data, you want to take a granular approach as you make your comparisons. Assess results in terms of behavior, conversions, demographics, etc. Be sure to apply this to your user survey data as well. It’ll be less quantifiable, but very interesting to hear what users have to say before vs. during your experiments.
- Make sure comparisons are 1-to-1 — If you experience major fluctuations in traffic from one season to the next, you need to factor this change into your analysis. For example, it’s not fair to your conversion test elements to compare your fall data (when you ran your test) to summer data (when you gathered your baseline data) if you generally experience much higher conversion rates during the summer months. Always account for external factors.
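One common way (not prescribed here, but standard practice) to decide whether a variation genuinely beat the baseline is a two-proportion z-test. A minimal sketch using only the standard library — the visitor and conversion counts are invented:

```python
import math

# Invented counts: baseline vs. test variation
base_visitors, base_conv = 10_000, 250    # 2.5% baseline conversion rate
test_visitors, test_conv = 10_000, 310    # 3.1% with the variation

p1 = base_conv / base_visitors
p2 = test_conv / test_visitors

# Pooled proportion and standard error under the null hypothesis
p_pool = (base_conv + test_conv) / (base_visitors + test_visitors)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / base_visitors + 1 / test_visitors))
z = (p2 - p1) / se

# Two-sided p-value from the normal CDF (via erf)
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"lift: {p2 - p1:+.2%}, z = {z:.2f}, p = {p_value:.3f}")
```

With these made-up numbers the lift is statistically significant at the usual 5% level; with smaller samples the same observed lift might not be, which is exactly why the raw before/after comparison alone can mislead.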
When planning and preparing your tests:
- Analyze data and user feedback
- Look for top-level findings
- Segment data for a more granular perspective
- Establish a list of key findings and proposed solutions
- Start with your most important tests first
- Make sure the content you’re going to test is ready to go before launching
- Establish a timeline for testing
- Interpret your data against baseline data
- Be careful of external elements when comparing data