Measuring the Results of User Experience
There are few things more exciting to agency teams like ours than launching a brand new website. After weeks or months of designing, developing, and creating content for the shiny new site, it’s finally live for the world to see and interact with. But how do we know that our interface works? How can we be sure that we’re meeting the objectives we set at the outset?
The answer lies in the data. There are several meaningful ways to measure usability and the success of optimization efforts.
1. Conversion Rate
Conversion Rate is the most straightforward measure of success: the number of users who complete a desired action divided by the total number of visitors to the site.
It is most often applied in the context of e-commerce, where a conversion is a completed purchase. But it’s important to note that a conversion event can be defined as almost anything; for a blog, a “conversion” might be a new signup to the email newsletter. A conversion action could also be visiting a specific page, completing an application or survey, and so on.
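As a worked example, here’s a minimal sketch of the arithmetic in Python; the visitor and purchase counts are hypothetical:

```python
def conversion_rate(conversions: int, total_visitors: int) -> float:
    """Share of all site visitors who completed the desired action."""
    if total_visitors == 0:
        return 0.0
    return conversions / total_visitors

# Hypothetical month of traffic: 12,000 visitors, 300 completed purchases.
print(f"{conversion_rate(300, 12_000):.1%}")  # 2.5%
```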
2. Success Rate
Success Rate is the number of people who complete a desired action divided by the number of people who attempt it. It differs from conversion rate in that the population includes only users who actually begin the action. For example, while a store’s conversion rate looks at the number of site visitors who complete a purchase, its success rate looks at the number of people who start checkout vs. those who complete a purchase. This is an important metric because a low success rate can indicate a problem with the interface.
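The calculation has the same shape as conversion rate, just over a narrower population. A minimal sketch, with hypothetical funnel numbers:

```python
def success_rate(completions: int, attempts: int) -> float:
    """Share of users who finished the action among those who started it."""
    if attempts == 0:
        return 0.0
    return completions / attempts

# Hypothetical checkout funnel: 800 users started checkout, 520 completed it.
print(f"{success_rate(520, 800):.1%}")  # 65.0%
```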
3. Success Time
Success Time considers how long it takes, expressed as a mean, median, or mode, to complete the success event. In our checkout example, the timer would start when the user begins checkout and end when the user successfully completes it. Analytics tools such as Google Analytics can be set up to track the time between two events (i.e. start checkout and complete purchase). These times can be compared to earlier designs to check for improvement or compared to industry benchmarks to determine whether there is cause for concern.
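If you can export paired start/finish timestamps per session, the summary statistics are straightforward to compute. A minimal sketch; the session data and its shape are hypothetical, not a real analytics export format:

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical (checkout_started, purchase_completed) timestamp pairs,
# one per session.
sessions = [
    (datetime(2024, 5, 1, 9, 0, 12), datetime(2024, 5, 1, 9, 3, 40)),
    (datetime(2024, 5, 1, 9, 5, 2),  datetime(2024, 5, 1, 9, 6, 55)),
    (datetime(2024, 5, 1, 9, 8, 30), datetime(2024, 5, 1, 9, 14, 1)),
]

durations = [(end - start).total_seconds() for start, end in sessions]
print(f"mean:   {mean(durations):.0f}s")    # 217s
print(f"median: {median(durations):.0f}s")  # 208s
```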
4. Error Rate
Error Rate can be thought of as the inverse of Success Rate, but that isn’t strictly true. For any visitor who begins a task, there are three possible outcomes: success, abandonment, or error. Abandonment is drop-off by the user’s choice: they get frustrated with a long form, they change their mind, or their promo code doesn’t work. Error rate is drop-off due to a server issue. The server should log issues such as pages not found, database errors, connection problems, etc. A high error rate could indicate a server problem or a design problem that consistently triggers a server error state (for example, a bad link leading to a non-existent page).
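Since every attempt ends in exactly one of the three outcomes, the rates can be tallied together. A minimal sketch, with hypothetical per-attempt outcomes pulled from server logs:

```python
from collections import Counter

# Hypothetical labels, one per checkout attempt; each attempt ends in
# exactly one of three states.
outcomes = ["success", "success", "abandoned", "error", "success", "abandoned"]

counts = Counter(outcomes)
attempts = len(outcomes)
print(f"success rate:     {counts['success'] / attempts:.1%}")    # 50.0%
print(f"abandonment rate: {counts['abandoned'] / attempts:.1%}")  # 33.3%
print(f"error rate:       {counts['error'] / attempts:.1%}")      # 16.7%
```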
5. Voice Of Customer
Voice Of Customer (VOC) gathers feedback and metrics directly from customers and users through a variety of methods, including surveys, NPS (Net Promoter Score), social media mentions, review monitoring, etc.
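Of these, NPS reduces to simple arithmetic: the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). A minimal sketch, with hypothetical survey responses:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return (promoters - detractors) / len(ratings) * 100

# Hypothetical responses on the usual 0-10 scale.
print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0
```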
Remember, there is context to each of these metrics. These indicators are useless in a vacuum and must be compared to industry benchmarks or considered over a period of time in order to define “good” and “bad.” A new site won’t necessarily have high scores across the board and an old site won’t necessarily have low scores. However, for sites that are struggling, these metrics can offer simple quantitative measures of performance that can be compared over time, with a baseline score compared to a score taken after remediation.
Increasing these scores over time forms the basis of our conversion optimization program. We treat your site’s scores at the start of an engagement as benchmarks, then measure regularly after our remediation efforts to determine and confirm success. Learn more about our conversion optimization program.