The definitive guide to creating effective preference tests
Introduction
When developing a product, an application or a website, it's always good practice to create several different designs first. But how do you choose which design to develop further and where to invest your time and energy? One option is to get feedback straight from your customers. If you need help choosing between different designs early in product development, you can use preference testing to see what your users like and why.
This guide will cover the nature of preference testing, its pros and cons, and the options you have to customize your preference test. We will go through the steps you need to take to execute a preference testing study, as well as several ways to look at the data you obtain.
What is preference testing?
In a preference test, you present your respondents with two or more products or design options at the same time and ask them to pick the one they prefer. These tests are typically used to assess which design is the most aesthetically pleasing, the easiest to comprehend, or inspires the greatest sense of trust. You can use preference tests to compare a wide range of design formats, including icons, logos, colour palettes, whole webpages or even sound and video recordings.
Preference tests have several advantages: they are a fast and effective way to obtain feedback from users; they simulate what people do when purchasing products or services, namely choosing among several alternatives; and they are straightforward and easy to understand, which also makes them suitable for younger audiences.
But you can, and should, go much further with a preference test than just collecting a simple preference selection from each respondent. Besides gathering plain quantitative data on preference for and attitude towards your designs, you can also collect a lot of qualitative insight into why respondents prefer one design over another and what their impressions of the designs are.
Preference Test is a UXtweak tool that implements preference testing. It allows you to compare several designs at the same time, and even several sets of designs within one study. Both the option selected and the time it takes to choose are recorded. It offers a wide range of follow-up question formats and easy-to-understand outputs.
When should I use it?
The best time to use preference testing is early in product development, before you invest too much time and effort into the design. It helps you determine where to direct your resources, which option is more viable and how you can make it even better. You can also use preference testing when redesigning a product or a website, to compare the old and the new version of the design, or to compare your design against those of your competitors.
What is it not?
While preference testing is beautifully simple yet effective, it has its limitations. For instance, when you ask your users to pick a preferred design, you are not getting information about the strength of that preference. It is possible that they quite like both designs, but one a tiny bit more. Or they chose the one they dislike slightly less because they were required to choose, yet on its own they still find it unappealing. Or maybe they really do prefer one over the other by a lot. To clarify these differences, it is good to include follow-up questions.
Follow-up questions can help you gain a better understanding of why your users picked one option over another, but in general, people are not very good at explaining why they prefer something. Furthermore, your participants will be in a state of mind where they observe each design option as a whole rather than analysing detailed features. Nonetheless, your design may have one or two noticeable characteristics that inform their decision or even cause other characteristics to be viewed more positively or negatively. This is an example of what we would call the “halo effect”, and it may make it even more difficult for respondents to explain their choices.
Another issue is that some people tend to confuse preference testing with A/B testing. Preference testing is used early in development and asks respondents to indicate preferences based only on a brief inspection of the design; they generally do not interact with it. When conducting A/B testing, on the other hand, respondents interact with several different variants of a functioning prototype. Usually one half of the participants see version A of the prototype and the other half version B. They are asked to perform tasks using these prototypes (e.g. finding a product on the page, registering for a service, etc.), and the two options are compared based on users’ performance. To summarize: in preference tests, different options are evaluated based solely on respondents’ stated preferences, while in A/B testing they are evaluated based on respondents’ performance and behaviour. This is why A/B testing is usually implemented later in the product development process than preference testing.
So you have your early designs and want to perform a preference test on them. Where should you begin?
Set your objectives
First, make sure you know what kind of information you want to obtain from your users and why. Are you creating a new product or redesigning an existing one? Do you have two or more possible designs? Based on what criteria should respondents pick the preferred design? Are there any company values that the design should emphasise? The answers to these and other questions will determine what instructions you give, which designs you compare, who the participants should be and what follow-up questions you ask.
When creating your instructions, it is always good to give participants some background. For instance, when you are comparing two different icons, don’t simply ask “Which icon do you prefer?” Include context, such as “Which icon do you prefer for incoming messages?” or “Which icon do you find more trustworthy?” This level of specificity allows you to evaluate your hypothesis with more precision. However, try not to ask leading questions. People generally try to be agreeable, and if they suspect that you yourself prefer one of the designs, it may affect them. They also tend to be less critical, even when instructed to be as sincere as possible, so it can sometimes be useful to avoid direct questions when finding out why they prefer one design over another.
Prepare materials
Gather the elements you want to use in your preference test, whether they are logos, icons, colour palettes or website designs. Usually two or three options are used in one preference test task; however, UXtweak enables you to include up to six designs in one comparison. Bear in mind that the more designs you include in one task, the more difficult it may be for your participants to decide and then justify that decision.
The designs you are comparing should not be too similar, otherwise the preference test becomes a game of spot the difference. Your respondents don’t want to hunt for a tiny change across an entire website design. If a small detail is what you want to assess, don’t be afraid to drop the irrelevant parts of the design. That said, as already mentioned, context is important as well, so try to find a middle ground between the two.
The order in which the designs are displayed can influence your users’ preference as well. Fortunately, UXtweak can display the designs in a random order for each participant, which eliminates the possible influence of the order of elements.
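UXtweak handles this randomisation for you, but to illustrate the principle, here is a minimal Python sketch of a per-participant shuffle; the design labels are hypothetical and stand in for whatever elements you are comparing:

```python
import random

designs = ["logo_a", "logo_b", "logo_c"]  # hypothetical design labels

def order_for_participant(designs):
    # shuffle a copy so every participant gets an independent random order,
    # spreading any position bias evenly across the designs
    order = list(designs)
    random.shuffle(order)
    return order

print(order_for_participant(designs))  # e.g. ['logo_c', 'logo_a', 'logo_b']
```

Because each participant gets their own random order, no single design systematically benefits from being shown first or last.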
You can include several sets of designs in one study. For instance, you can ask your respondents to indicate a preference among three logos, two icons and four colour palettes within one study. This lets you use your respondents’ time efficiently, but you should be careful not to overwhelm them. It is always good practice to “pilot” your study on a few colleagues or acquaintances before launching it; that way you can be sure that everything works as expected and the instructions are clear, and you can get an estimate of how long the study will take (useful information to mention in the welcome message as well). In UXtweak, you can use a convenient feature called Preview that shows you how your study will appear to participants. It can be used throughout the setup process, and you can interact with the study the same way your participants would, although no data will be saved.
Create follow-up questions
Based on your research objectives and the type and number of elements you’re comparing, you can create a wide range of follow-up questions to get qualitative data from your respondents. UXtweak offers different question formats, from open free-text questions to dropdown selections and Likert-scale ratings. This gives you plenty of options for gaining additional information.
For example, when you are comparing more than two designs, you will learn which design is preferred, but it may also be useful to include a rating for each design to ascertain the degree of preference (for example, a Likert-scale question for each design asking “How visually appealing do you find design XY?”). You could also ask your users to identify not only the most preferred design but also the least preferred one.
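To show how such ratings translate into a degree of preference, here is a small Python sketch that averages 1-5 Likert answers per design; the scores and design names are made up purely for the example:

```python
from statistics import mean

# hypothetical 1-5 Likert answers to "How visually appealing do you find design XY?"
ratings = {
    "design_a": [4, 5, 3, 4, 4],
    "design_b": [3, 3, 4, 2, 3],
}

for design, scores in ratings.items():
    # a mean rating shows how far apart the designs really are,
    # not just which one wins the head-to-head pick
    print(f"{design}: mean appeal {mean(scores):.2f}")
```

A small gap between the means suggests the winning design is only mildly preferred, while a large gap points to a clear favourite.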
You can ask your respondents why they chose the preferred design, or what they like or dislike about it, using a free-text question. Another possibility is to create a list of adjectives, for example the characteristics you want to associate with your brand, and ask respondents to mark the ones they feel relate to their preferred design. Alternatively, if you don’t want to restrict your respondents to a set list of adjectives, you can ask them to describe the design in their own words.
As you can see, there are plenty of ways to use follow-up questions based on your needs. UXtweak also offers a screening question displayed at the very beginning of the study, which enables you to exclude some respondents before they start, and a pre-study questionnaire where you can collect information such as respondent demographics. You can also utilise the post-study questionnaire for other purposes, such as feedback on your study setup.
Gather respondents and collect data
Your respondents should reflect your customers, who will eventually be the ones interacting with the designs being studied. The best approach is to address people who use, or are expected to use, the product or service and who also understand the context of the presented designs. UXtweak offers a handy recruitment tool, a small widget that appears on your website and asks your customers to participate in your study. Sometimes, though, you want a fresh perspective from people whose opinions aren’t clouded by already being familiar with your website or product. In that case, it’s better to find respondents who are not your customers but still reflect your target market.
The number of respondents you need depends on the number and complexity of the designs you’re comparing, as well as on the nature of the follow-up questions. A common recommendation is to get at least 20 to 30 respondents. You may find that the responses become highly repetitive even with a smaller sample. On the other hand, if you’re comparing many designs or asking complex questions, you may need more than 30 respondents before patterns emerge.
Interpret the results
The most basic information you will get from a preference test is how many times each design was chosen as the preferred one. This information is displayed in absolute numbers as well as percentages.
It may well happen that no design shows a clear preference, or that the preference is not statistically significant. In this situation, you can repeat the test with the same respondents; experience and research show that it isn’t unusual for people to change their preference from trial to trial. If you get the same result even after repeated tests, you may wait for new iterations of both designs and use those to repeat the test.
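If you want to sanity-check significance yourself for a two-design comparison, one common approach is an exact binomial test against the 50/50 split you would expect by chance. A minimal Python sketch using SciPy, with made-up counts:

```python
from scipy.stats import binomtest  # requires SciPy 1.7+

preferred_a = 18   # hypothetical: respondents who picked design A
total = 30         # hypothetical: all respondents in this comparison

# test whether 18 out of 30 differs meaningfully from a 50/50 chance split
result = binomtest(preferred_a, total, p=0.5)
print(f"p-value: {result.pvalue:.3f}")  # well above 0.05 here, so no clear winner
```

In this example, an 18-to-12 split is not enough evidence of a real preference, which is exactly the situation where repeating the test or iterating on the designs makes sense.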
For the qualitative data from your follow-up questions, you have several options. Where applicable, you will again get frequencies for the different answers. For free-text questions, you can group participants who preferred the same design and look for similar answers and patterns.
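As a rough illustration of that grouping step, here is a short Python sketch that buckets exported free-text answers by the design each respondent preferred; the rows and wording are hypothetical:

```python
from collections import defaultdict

# hypothetical exported rows: (preferred design, free-text comment)
responses = [
    ("design_a", "The icon feels modern and clean."),
    ("design_b", "The colours feel trustworthy, like a bank."),
    ("design_a", "Easy to scan, nothing distracting."),
]

comments_by_design = defaultdict(list)
for design, comment in responses:
    comments_by_design[design].append(comment)

for design, comments in comments_by_design.items():
    print(f"{design}: {len(comments)} comments")
    for comment in comments:
        print("  -", comment)
```

Reading the comments design by design makes recurring themes, such as trust or clarity, much easier to spot than scanning one long undifferentiated list.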
Preference testing, while simple and not without limitations, is very versatile and can quickly give you a lot of valuable information about your designs from the earliest stages of your site’s or product’s development. With the UXtweak Preference Test tool, you can comfortably set up a remote preference study, recruit participants and get the results in a handy, shareable format.