Sometimes you have a page you want to optimize but no clear metric to determine success. For example, it might be a customer service or support page: there are no add-to-cart buttons or lead-generation forms to give you hard data on whether you're using the optimal design.
In such situations, if you're trying to get customers to the information they need, you might use time on page to measure success. The less time on the page, the better the design: you got your customers to the information they needed quickly, and they left happy. But that's an ambiguous metric. It could also mean the page design is not optimal and people are leaving quickly in frustration because they can't find what they need. In other words, a high bounce rate.
Another interesting way to measure pages that have no clear metric of success is to incorporate survey data into your analysis. I have done this with survey providers like ForeSee and OpinionLab. Now imagine that a web page has an optimization test running. For simplicity's sake, say it's an A/B test of two page designs: 50% of traffic sees design X and 50% sees design Y. Or, put another way, Sarah sees page design X and Jim sees page design Y.
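As a rough sketch of how that 50/50 split might work under the hood (the user IDs and hash-based bucketing here are illustrative assumptions, not the behavior of any particular testing tool), hashing each visitor's ID keeps them in the same variant on every return visit:

```python
import hashlib

def assign_variant(user_id: str, variants=("X", "Y")) -> str:
    """Deterministically bucket a visitor into one page design.

    Hashing the user ID (instead of picking randomly per visit) means
    each visitor sees the same design across sessions, which keeps
    their time-on-page and survey answers tied to one variant.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)  # roughly 50/50 for two variants
    return variants[bucket]

# Sarah and Jim each land in one variant and stay there.
print(assign_variant("sarah"))
print(assign_variant("jim"))
```

The important property is consistency: if Sarah's feedback is going to be attributed to design X, she has to keep seeing design X.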
Sarah comes to the page and sees design X. She can’t find the information she needs and she’s frustrated so she answers the survey and gives the site a negative review. Jim, however, sees design Y and gets exactly what he needs in a few seconds. He answers the survey and gives a very positive review. Now multiply Sarah and Jim’s experience by 100. With a larger sample of data you get a better idea of how your page design performs.
By combining time on page with survey results, you get a better sense of which design is optimal. If design Y has a lower time on page and better survey results, you have two metrics that confirm each other, and you can be more confident that you have the better design.
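A minimal sketch of that combined analysis, using made-up per-visit records (the numbers and field layout are assumptions for illustration, not real data):

```python
from statistics import mean

# Hypothetical per-visit records: (variant, seconds on page, survey score 1-5)
visits = [
    ("X", 95, 2), ("X", 110, 1), ("X", 80, 3),
    ("Y", 30, 5), ("Y", 25, 4), ("Y", 40, 5),
]

def summarize(variant):
    """Return (average seconds on page, average survey score) for one design."""
    rows = [(t, s) for v, t, s in visits if v == variant]
    times, scores = zip(*rows)
    return mean(times), mean(scores)

for variant in ("X", "Y"):
    avg_time, avg_score = summarize(variant)
    print(f"Design {variant}: {avg_time:.0f}s on page, survey {avg_score:.1f}/5")
```

When one variant shows both lower time on page and a higher survey score, as design Y does in this toy data, the two metrics confirm each other; if they disagreed (fast exits but poor scores), that would point to the frustration scenario instead.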