Frameworks for product and UX research planning

Let’s discuss how to turn complex business requests into clear research questions and suitable methods

Maks Korolev
Acronis Design

--

One of a researcher’s main purposes is to answer business questions.
Sometimes it is very easy to choose a research method to answer them, but some questions are challenging — it is not clear what method to use, what to ask, or even what to observe in order to get the necessary data.

Is the site associated with our brand?

Will the NTC grow if we make this widget?

Is the new way of displaying settings better or worse than the old one?

I use several frameworks to structure such requests, make them more understandable and select research methods for them.

Operationalization

Sometimes stakeholders ask us to evaluate an abstract concept or metric. For example: “Is the new site better than the old one?” or “Does the design of these posters reflect our brand?” It is difficult to work with such requests because it is not clear what it means to be “better than the old one” or to “reflect the brand”. Before planning a study, these concepts need to be operationalized: connected to the specific user behavior we want to achieve.

What do you mean by “the site has become better”? Which scenarios are important to us? Which metrics are important to us? What changes should occur in the physical world as a result of launching the new website/product? Depending on the answers, the verification methods will differ.

Has the site become better? Operationalize: translate “better” into questions about physical phenomena:

  • Will people find the right article faster? (Usability testing or tree testing)
  • Will the bounce rate decrease? (First-click or 5-second test)
  • Will I be able to prove to my manager that the new site is better? (A study of managers’ preferences)

The same goes for requests about the brand. Why should the poster design reflect the brand? Why is it important? What happens if it doesn’t? We operationalize: we unpack what the stakeholder means by “reflecting the brand”:

  • Will people understand that these are posters of our company, even if they don’t see the logo? (5-second test)
  • Does the warm attitude towards our company carry over to the posters? (Semantic differential)
  • Is the poster design perceived as bold? Regardless of whether the brand itself is bold, this may be what the stakeholder is really asking about. (Semantic differential)

The general idea is simple: if you get a request to research an abstract concept, try to ground it in a specific metric or behavior.
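
To make this tangible, here is a minimal sketch in Python of what the output of operationalization can look like. The data structure and all entries are illustrative, not part of the original framework: the abstract question is replaced by a checklist of concrete statements, each paired with a research method.

    # A minimal sketch (illustrative names and entries) of an operationalized request:
    # the abstract question becomes a checklist of testable statements, each paired
    # with a research method.
    from dataclasses import dataclass

    @dataclass
    class TestableStatement:
        question: str  # concrete question about user behavior or a metric
        method: str    # research method chosen to answer it

    # Hypothetical operationalization of "Is the new site better than the old one?"
    better_site = [
        TestableStatement("Do people find the right article faster?", "usability / tree testing"),
        TestableStatement("Does the bounce rate decrease?", "first-click / 5-second test"),
        TestableStatement("Can I prove to my manager that the new site is better?", "study of managers' preferences"),
    ]

    for statement in better_site:
        print(f"- {statement.question} -> {statement.method}")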

Describing the stages of interaction with a feature/product

This framework helps test the interaction with a product or feature at each step of the customer journey.

“Is the new widget okay?” To answer this question, we need to look at the interaction with the widget at every stage of the user journey:

  • Attention: Will the user find the widget? Will the user even guess that it exists? (usability testing, tree testing, expectation testing)
  • First use: Is it clear on first interaction how to configure the widget, what data it displays, and what that data means? (scripted usability testing)
  • Use: Is the widget convenient to use on a regular basis? Does it irritate users during everyday work? Does it fit into everyday product usage scenarios? (interviews with experienced users, diary studies)
  • “Need help” step: Is it clear where to find help and documentation? Are error messages clear? (usability testing, tree testing, analysis of support calls)

“Is the new application icon good?” We think through the stages of interaction between the user and the icon:

  • Attention: Does it attract attention? (Eye-tracking, observation)
  • Interest: Does it make the user want to click? (First-click testing)
  • Use: Is it clear from the icon what kind of application it is? (Expectation testing survey)
  • Use more: Does it stand out from other icons and allow you to quickly find the right application among them? (Timed search for the icon among others)

If, for example, an icon needs to be evaluated after launch, then instead of research methods we will have to come up with metrics: at the interest stage, we would measure the conversion of views into downloads instead of first clicks.
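
As a rough illustration of what such a metric could look like, here is a short sketch with entirely hypothetical numbers:

    # Hypothetical post-launch metric for the "interest" stage: instead of a
    # first-click test, measure how often a store-page view turns into a download.
    page_views = 12_400   # app-store page views in the measurement period (made up)
    downloads = 930       # downloads in the same period (made up)

    view_to_download_conversion = downloads / page_views
    print(f"View-to-download conversion: {view_to_download_conversion:.1%}")  # -> 7.5%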

The steps may differ: in some cases, the icon does not need to reflect the content of the application, and we will not check that. What matters is that splitting the user interaction into granular stages lets you think more fully about what makes an icon or widget good at every step of the interaction with the user.

Searching for the key assumptions behind a hypothesis

Some statements are quite specific, yet difficult to test directly. In that case, try to find the assumptions they are based on, which in turn can be researched. If the assumptions hold, the statement itself is likely to hold as well.

Should we add a new feature? We look for the assumptions the hypothesis is based on:

  • Are there clients who have a problem that this feature solves? (Interviews, surveys, community analysis)
  • Are these clients paying to solve this problem in some other way right now? (Interviews)
  • Are there competitors who can offer a better solution than we can? (Competitive analysis)

Should we display a color filter on the toy catalog page (show only blue toys)? Or is it better to show the color variants directly on a particular product’s card (this toy car is available in red and blue)?

We look for the underlying assumptions:

  • Is it often the case that color is the most important characteristic when choosing a product, e.g. someone is looking only for blue toy cars? (Interviews, search query analysis, internal search query analysis)
  • Are there cases when users buy a lot of toys of the same color at once? (Order analysis from CRM)
  • Are there cases when users who mention a color in the search query (“buy a blue toy car”) ended up buying an item of a different color? How often? (Interviews; analysis of search queries and the purchases made from them, though this is time-consuming)

Let’s sum it up

Here’s how you can structure complex research requests:

  1. Translate a general concept into physical phenomena or scenarios of user behavior (a good site is a site on which users quickly find the right article)
  2. Describe the process of interaction between a user and a product/feature/design pattern, split it into stages, and then check each stage (a good icon attracts attention, invites clicks, conveys what the application is about, and is easy to recognize again)
  3. Find the key assumptions the statement is based on (whether to add a feature: is there a segment of customers who need it + are there competitors who already do it no worse than we do)

When structuring complex requests, you can use several frameworks at once. As a result, you get a structured list of statements to check, and for each of them you can choose a research method.
