
Conducting a Kano Study with Remote Users

Daniel Zacarias, https://foldingburritos.com/, @listentodaniel

There will always be many more feature ideas than we are able to build. They come from many different sources: stakeholders, users, your team and yourself. We can look at them through different lenses and try to prioritize our work based on that. Still, there is an underlying motivation that drives us: we all want to create software products that delight our users. It's only by continuously doing so that we get to stay in business.

However, how do we measure satisfaction and delight? What do these things even mean and how do they behave? More concretely, how can we know how customers will feel towards a given set of features?

Fortunately, there is a pretty popular model for doing just that: the Kano Model. In this article, I'll share a practical approach to conducting a Kano study with remote users (the kind we have on most web and mobile applications).

Before we dive into that, I'll first describe the model for those who are unfamiliar with it.

The Kano Model

Noriaki Kano [1], a Japanese researcher and consultant, published a paper in 1984 [2] with a set of ideas and techniques that help us determine our customers' (and prospects') satisfaction with product features. These ideas are commonly called the Kano Model and are based upon the following premises:

  • Customers' Satisfaction with our product's features depends on the level of Functionality that is provided (how much or how well they're implemented);
  • Features can be classified into four categories;
  • You can determine how customers feel about a feature through a questionnaire.

1. Satisfaction vs. Functionality

Kano proposes two dimensions to represent how customers feel about our products:

  • one that goes from total satisfaction (also called Delight and Excitement) to total dissatisfaction (or Frustration);
  • and another called Investment, Sophistication or Implementation, which represents how much of a given feature the customer gets, how well we've implemented it, or how much we've invested in its development.

2. The Four Categories of Features

Features can fall into four categories, depending on how customers react to the provided level of Functionality.

Performance

Some product features behave the way we might intuitively think Satisfaction works: the more we provide, the more satisfied our customers become.

Must-be

Other product features are simply expected by customers. If the product doesn't have them, it will be considered incomplete or just plain bad. This type of feature is usually called Must-be or Basic Expectations.

Attractive

There are unexpected features which, when presented, cause a positive reaction. These are usually called Attractive, Exciters or Delighters.

Indifferent

Naturally, there are also features towards which we feel indifferent: those whose presence (or absence) doesn't make a real difference in our reaction to the product.

3. Determining how customers feel through a questionnaire

In order to uncover our customers' perceptions of the product's attributes, we need to use the Kano questionnaire. It consists of a pair of questions for each feature we want to evaluate:

  • One asks our customers how they would feel if they had the feature;
  • The other asks how they would feel if they did not have it.

The first and second questions are called the functional and dysfunctional forms, respectively. For each "how would you feel if you had / did not have this feature" question, the possible answers are:

  • I like it
  • I expect it
  • I am neutral
  • I can tolerate it
  • I dislike it

For each answer-pair, we use the Kano evaluation table to determine the category in which the respondent falls, letting us know how he or she feels about the feature.

From the individual responses and resulting categories you can go into two levels of analysis:

  • Discrete: each answer-pair is classified using the evaluation table, and the feature's category will be the most frequent one across all respondents;
  • Continuous: each functional and dysfunctional answer gets a numerical score, which can then be averaged over all respondents and plotted on a 2D graph.
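
To make both levels concrete, here is a minimal Python sketch of the classification and scoring for a single answer-pair. The category lookup is the standard Kano evaluation table (which also produces Reverse and Questionable categories for contradictory answer-pairs); the numerical scores follow one common convention and are an assumption you can adapt.

# Kano analysis sketch: discrete classification and continuous scoring.
ANSWERS = ["I like it", "I expect it", "I am neutral",
           "I can tolerate it", "I dislike it"]

# Standard Kano evaluation table.
# Rows: functional answer; columns: dysfunctional answer.
# A=Attractive, P=Performance, M=Must-be, I=Indifferent,
# R=Reverse, Q=Questionable (contradictory answers).
CATEGORY_TABLE = [
    ["Q", "A", "A", "A", "P"],  # functional: I like it
    ["R", "I", "I", "I", "M"],  # functional: I expect it
    ["R", "I", "I", "I", "M"],  # functional: I am neutral
    ["R", "I", "I", "I", "M"],  # functional: I can tolerate it
    ["R", "R", "R", "R", "Q"],  # functional: I dislike it
]

# Continuous scores (assumed convention): higher functional scores mean
# the user likes having the feature; higher dysfunctional scores mean
# the user dislikes its absence.
FUNCTIONAL_SCORE = dict(zip(ANSWERS, [4, 2, 0, -1, -2]))
DYSFUNCTIONAL_SCORE = dict(zip(ANSWERS, [-2, -1, 0, 2, 4]))

def classify(functional, dysfunctional):
    # Discrete category for one respondent's answer-pair.
    return CATEGORY_TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

def scores(functional, dysfunctional):
    # (functional, dysfunctional) score pair for the continuous analysis.
    return FUNCTIONAL_SCORE[functional], DYSFUNCTIONAL_SCORE[dysfunctional]

print(classify("I like it", "I dislike it"))  # P (Performance)
print(scores("I like it", "I dislike it"))    # (4, 4)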

As a general rule of thumb, features should be prioritized such that this order is followed: Must-Be > Performance > Attractive > Indifferent.

One important addition to the Kano methodology, suggested by multiple teams [3], is to include another question after the functional/dysfunctional pair. This question asks customers how important a given feature is to them.

Having this piece of information is very useful to tell features apart and know which are most relevant to customers. It gives you a tool to separate big features from small ones and to see how they impact your customers' decisions about the product.

The self-stated importance question may be asked in the following format: "How important is it or would it be if: <requirement>?". For example, "How important is it or would it be if: exporting videos always takes less than 10 seconds?".

Responses should be given on a scale from 1 to 9, going from "Not at all important" to "Extremely important".

There are a lot more details that are worth exploring about this method. If you're interested, head over to the extensive, in-depth guide to the Kano model I wrote [4].

Using the Kano Model with remote users

In this section we'll go over a practical approach and set of tools you can use to conduct your very own Kano analysis. The process is composed of 3 steps, which are described next.

Step 1: Choose your target features and users

You're probably working on some new features and ideas for your next product release. Out of those, some may be internal or supporting functionality for other teams like Marketing or Accounting. Other feature ideas will of course be intended for your end users. There is no right or wrong answer when it comes to the mix of internal and external features you consider; every product team has different goals and constraints.

Since this model applies to how users feel about product attributes, we should only use it for externally visible features (those with which users will be able to interact). From this group, pick up to 5 features; there is no need to analyze more on your first go. You should pick features that are already likely candidates for development, so as to avoid wasting time on things you already know won't add value to the product.

Next, you should define the user segments that each of these features targets. The way a user feels about a feature will be directly related to how relevant it is in his or her context of using your product. When the product targets multiple segments, it's important to perform the Kano study and analysis for each feature with the proper group of intended users. Aim to pick around 10 users for each group. Just as with the number of features, it's best to start with smaller user groups and work your way up if needed (this will depend on how scattered the final results end up being).

If you're using Intercom [5] or Mixpanel [6], it will be very easy to select a subset of your customers within your target. Then, you should export their basic information into a CSV or spreadsheet file (I'll explain why in a second). Here is how this might look on Intercom:

Step 2: Get the (best possible) data from your customers

There are two parts to this step:

  • Defining the questions to ask our users, customers or prospects;
  • Creating and distributing the survey to gather responses.

Defining the questions

The traditional Kano study is based on presenting a set of text-based questions, each describing some product benefit (not a feature), and asking people to reply.

However, Jan Moorman tested this and reported [7] having much better results when running the questionnaire just after users have interacted with a prototype. This is because it's easy to run into problems with how you phrase the question and how users interpret what you mean. Complementing the text with something more concrete gives the user a better understanding of the benefit and of how it might be implemented as a feature.

If you work on a software product, you probably have wireframes or mockups for your ideas and feature specifications. If you do, you already have the basis for the "question" to present to your respondents. What you need is to make those wireframes or mockups interactive (if they aren't already).

Using a tool like Balsamiq [8] or InVision [9], link your wireframes together so they're interactive. This will make the feature come alive for the user and help overcome any problems in your question's wording. Take a look at how this works, both exporting directly from Balsamiq [10] and when using InVision [11]. In this case I have used the same mockups for both examples, but InVision also lets you use visual artifacts produced by designers. Here is another example [12] of the power of InVision, if you're not familiar with the tool.

Then, the "question" you ask customers is composed of:

  • a description of the benefit;
  • a 'demo' of the feature(s) that provide it;
  • the actual question and answering options.

The way to present this question will depend on the type of tools you use, as we'll see in the next section.

Creating and distributing the survey

In order to get the input you want from target users, you need to:

Contact them

Analytics and customer communication tools (like the aforementioned Mixpanel and Intercom) usually have features to contact users directly. This is the most straightforward way to do it, but of course any email tool will also work.

Provide context

Respondents need a short explanation of the survey's goal, the answer format and what they need to do. This context can be given in the email or in-app message you send them, or in the survey itself. You should very briefly describe the goal of the feature, provide a link to the interactive mockups and ask the user to come back to the survey. A nice touch is adding a special note at the "end" of your interactive wireframe, asking users to close that tab and go back to the survey.

Capture their responses

To get responses from our target users, the obvious choice is a survey tool. A Google Form [13] is preferable for two reasons: it sends results straight to a spreadsheet (which comes in handy in step 3) and it lets you pre-fill a field using a URL parameter. This is important because we need a customer identifier (like their email), so we can later know which users have responded and the segment to which they belong. Most email and customer communication tools let you add variables to the links being sent out, avoiding the need for users to enter this piece of information manually.

Here is how to do this using Intercom and Google Forms:

1. Select the intended users and add some context to explain what you're asking of them.

2. Head over to Google Forms to get the base URL to send out [14] (use a dummy email address for this). If you don't already have a form, here is an example form [15] you can duplicate and adapt for yourself. When you're done, get the sharing link.

3. Now head to your customer communication tool, use the dynamic field feature and prepare the rest of the message. It could look like this:

4. When the user clicks on the link, they'll get the form already filled in with their own email.
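
If you want to generate these pre-filled links in bulk from the user list you exported in step 1, a small script will do. This is a sketch assuming the example form from reference [15], whose email field is pre-filled through the entry.387034092 URL parameter; your own form will have a different entry ID, which you get from the pre-filled link feature [14]. The users.csv file and its column names are also assumptions to adapt to your export.

# Build one pre-filled Google Form link per user from an exported CSV.
import csv
from urllib.parse import urlencode

# Base URL of the example form [15]; the entry ID below is specific to
# that form -- obtain your own via "Get pre-filled link" in Google Forms.
FORM_URL = ("https://docs.google.com/forms/d/"
            "16UYwdKSBRiA2IwIcr8SYwor7B4PjraEDNo1seRxx1Ds/viewform")
EMAIL_FIELD = "entry.387034092"

def prefilled_link(email):
    # Survey URL with the user's email already filled in.
    return FORM_URL + "?" + urlencode({EMAIL_FIELD: email})

# Hypothetical export with "email", "name" and "segment" columns.
with open("users.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row["email"], prefilled_link(row["email"]))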

One final note regarding this step. If you're expecting each user to give feedback on multiple features, you need to decide (and experiment) around two possible models:

  • One survey for all features - each feature gets a question-set in the survey, with a link to the appropriate interactive demo. The risk here is that it's an all-or-nothing approach: if users don't go through all features in the survey, you get no answers at all;
  • Multiple surveys, one per feature - in the message that gets sent to users, you list every feature and provide a link to each survey. The risk here is that it's more confusing for users, and they might miss a feature or drop out of the process without giving you a full set of answers.

Step 3: Analyze the Results

After gathering enough replies, you can now proceed to the analysis step. I put together an Excel spreadsheet that can get you started in your analysis [16]. It does the following for you:

  • From each response (functional, dysfunctional and importance answers), calculates the discrete category and the functional and dysfunctional scores;
  • Calculates each feature's discrete and continuous Kano categorization;
  • Automatically stack-ranks features based on potential dissatisfaction, satisfaction and importance;
  • Draws a scatter plot that shows each feature's positioning and relative importance, as well as data variance through error bars.
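
If you'd rather script this part than rely on the spreadsheet alone, the stack ranking can be approximated along the following lines. This is a minimal sketch assuming the commonly used satisfaction and dissatisfaction coefficients (often called the Better/Worse coefficients); the spreadsheet's exact internal formulas may differ.

# Sketch: rank features by potential satisfaction/dissatisfaction.
# Assumed convention (Better/Worse coefficients):
#   Better = (A + P) / (A + P + M + I)   -> potential satisfaction
#   Worse  = -(P + M) / (A + P + M + I)  -> potential dissatisfaction
from collections import Counter

def coefficients(categories):
    # categories: one discrete Kano category per respondent,
    # e.g. ["P", "A", "M", ...] for a single feature.
    c = Counter(categories)
    total = c["A"] + c["P"] + c["M"] + c["I"]  # R and Q answers excluded
    better = (c["A"] + c["P"]) / total
    worse = -(c["P"] + c["M"]) / total
    return better, worse

# Hypothetical study results, one list of categories per feature.
feature_answers = {
    "Fast video export": ["P", "P", "A", "M", "P"],
    "Dark mode":         ["A", "I", "A", "I", "I"],
}
for name, cats in feature_answers.items():
    better, worse = coefficients(cats)
    print(f"{name}: Better={better:+.2f}, Worse={worse:+.2f}")

A Worse value close to -1 flags a strong Must-be candidate (high dissatisfaction potential), while a high Better value signals room to delight.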

Here is how the spreadsheet works:

  • Add your users' details (from step 1) in the 'Users' sheet. It has 3 columns: User Id (usually email), Name and Segment.
  • List the features you're evaluating in this study in the 'Features' sheet.
  • Export responses from the Google Form and paste the results into the 'Responses' sheet. Note the field order the analysis spreadsheet expects, and also that each functional/dysfunctional answer should be one of: "I like it", "I expect it", "I am neutral", "I can tolerate it", "I dislike it".

When you've followed these steps, you can go to the 'Results' sheet, where you will see something like this:

At this point, you can also easily play with the data, make some pivot tables and start drilling into the details.
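
For instance, once the responses are in a CSV file, a few lines of pandas get you started; the column names used here ("feature", "segment", "category") are assumptions to adapt to your own export.

# Hypothetical drill-down: how Kano categories split per feature and segment.
import pandas as pd

responses = pd.read_csv("responses.csv")  # assumed columns as noted above

# How often each Kano category shows up per feature...
print(pd.crosstab(responses["feature"], responses["category"]))

# ...and the dominant category per feature within each segment.
print(responses.groupby(["feature", "segment"])["category"]
      .agg(lambda s: s.mode().iat[0])
      .unstack())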

Although it would be great if we could do this process from a single tool, this is a very viable solution to get to a Kano-based suggested prioritization based on feedback from remote users. I hope you give it a try and find it useful.

References

[1] Noriaki Kano, https://en.wikipedia.org/wiki/Noriaki_Kano

[2] Noriaki Kano et al., "Attractive Quality and Must-be Quality," research summary of a presentation given at Nippon QC Gakka: 12th Annual Meeting (1982), January 18, 1984

[3] Robert Blauth, Reinhart Richter and Allan Rubinoff, "Experience in the Use of Kano's Methods in the Specification of BBN RS/1 Release 5.0" on "Kano's Methods for Understanding Customer-defined Quality", Center for Quality of Management Journal, Fall 1993

[4] The Complete Guide to the Kano Model, http://foldingburritos.com/kano-model/

[5] Intercom, https://www.intercom.io

[6] Mixpanel, https://mixpanel.com

[7] Jan Moorman, "Measuring User Delight using the Kano Methodology," https://vimeo.com/62646585, Interaction13 conference, Toronto, January 2013

[8] Balsamiq, https://balsamiq.com

[9] InVision, http://www.invisionapp.com

[10] Example of Balsamiq 'interactive' PDF, https://s3-eu-west-1.amazonaws.com/foldingburritos/public/BalsamiqExample.pdf

[11] Example of InVision interactive mockup, https://invis.io/4U5TCX7WQ

[12] Another example of interactive mockup made by the InVision team, https://invis.io/HT5T7543V

[13] Google Forms, https://www.google.com/forms/about/

[14] Pre-populate form answers, https://support.google.com/docs/answer/160000?hl=en

[15] Example Google Form to use, https://docs.google.com/forms/d/16UYwdKSBRiA2IwIcr8SYwor7B4PjraEDNo1seRxx1Ds/viewform?entry.387034092=email@example.com

[16] Analysis spreadsheet, https://s3-eu-west-1.amazonaws.com/foldingburritos/public/KanoAnalysis-MT-v1.2-s.xlsx



This article was originally published in the Spring 2016 issue of Methods & Tools
