A week ago, I wrote my previous blog post about a survey I had set up to figure out how important each of the requirements we had collected for a common IM/chat solution for KDE is to us.
All in all, 132 people followed my request to participate in the survey, and answered the 108 questions in it. Thank you all for taking the time!
This post presents the result of the analysis of your answers.
Survey Participant Experience
Normally whenever I conduct a survey, before releasing it onto participants, I collect feedback from a bunch of testers to see if there is anything confusing or annoying in it. It’s just standard practice.
I should have known that if I ever decided not to do it, it would backfire. Normally I design surveys from scratch or reuse ones I or a colleague have made. This time, I followed a recommendation from a fellow KDE usability consultant to use kanosurvey.com, since it is a pre-made survey tool with an established underlying model. You just have to enter the features you want to ask about; the tool does the rest. I thought “Hey, these surveys are conducted all the time in all kinds of contexts with all kinds of people, always with the same questions, so it should be safe to just apply it here as well, right?”
Wrong. As soon as I asked people to participate, I got lots of comments about how difficult it was to fill out the survey and how unsure people were about which answers to choose.
This is how a question in the survey looks:
The biggest problem participants had was whether to choose “I like it this way” or “I expect it this way” as their answer to the first question. They tried to mentally bring all answer options into a logical order, but were unsure whether “like” or “expect” was stronger.
Now the problem is: There is no strict logical order to the answer options. The reason is that the Kano model does not simply split features into “want” and “don’t want”. What it aims to do is categorize features into mandatory (expected) ones, exciting (surprising) ones, those which are “the more, the better” (called “linear” or “one-dimensional”), those which people just don’t care about (“indifferent”), and those which they actually dislike (“negative”). The specific terms used differ between people applying the model.
A feature which users expect to be present, and dislike to be absent, is a must-be, whereas one which they like to be present but don’t care if it’s absent is an exciting feature. Neither of them is necessarily “stronger”: Must-be features are mandatory, but if you want people to choose your product, you better make sure you also have some features which cause excitement.
Participants expressed their concern that they might have distorted the results if they were unsure whether to choose “expect” or “like” if something felt like a must-have for them. The good news is: As long as they chose “dislike” on the absence question, it doesn’t really matter that much which of the two they chose on the presence question. We’ll make sure that both the must-be and the linear features will be present in the solution we choose, because we don’t want people to miss anything that they care about.
On the other hand, I also got positive feedback from participants who liked the idea of looking at each feature from two different angles.
Still, a survey which makes participants confused or even angry is not a good survey. I won’t use kanosurvey.com again, at least not for this target audience.
With that out of the way, let’s get to the actual results. What kanosurvey.com spits out after calculating the individual answers is, for each requirement, how many individual answers fell into each of six categories: the ones mentioned above, plus “Questionable” (which I’ve relabeled to “Misunderstanding” in the table below). Questionable means that the combination of answers a participant gave is unexpected from the Kano model’s perspective, and therefore indicates that the participant misunderstood either the description of the requirement or the question. We had a few of those in the survey, but luckily never more than one participant per requirement. This shows that despite the subjective confusion, the answers the participants gave were still almost entirely in line with what the model expects, and therefore usable.
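For the curious, the “Questionable” combinations fall out of the classic Kano evaluation table, which maps each pair of answers (one for the feature being present, one for it being absent) to a category. Here is a minimal sketch in Python using the category names from above; note that this is my rendering of the textbook table, not necessarily the exact mapping kanosurvey.com implements:

```python
# Each participant answers two questions per feature:
#   functional    = "How do you feel if the feature IS present?"
#   dysfunctional = "How do you feel if the feature is ABSENT?"
ANSWERS = ("like", "expect", "neutral", "tolerate", "dislike")

def kano_category(functional: str, dysfunctional: str) -> str:
    """Map one participant's answer pair to a Kano category
    (simplified rendering of the standard evaluation table)."""
    if functional == dysfunctional and functional in ("like", "dislike"):
        return "questionable"   # contradictory answers -> misunderstanding
    if functional == "like":
        # liked when present; only a dealbreaker-when-absent makes it linear
        return "linear" if dysfunctional == "dislike" else "exciting"
    if functional == "dislike":
        return "negative"       # presence of the feature is disliked
    if dysfunctional == "dislike":
        return "must-be"        # absence is disliked -> expected feature
    return "indifferent"

print(kano_category("expect", "dislike"))  # must-be
print(kano_category("like", "tolerate"))   # exciting
```

This also shows why “like” versus “expect” on the presence question barely matters as long as the absence answer is “dislike”: both land in a category we will treat as required.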
Now the next question is: What to do with this? In general market(ing) research, what people do is calculate which product configuration would give them the best chance in the market through a combination of meeting people’s expectations and getting them excited.
We’re not doing regular market research, however. The solution we choose should not exclude people, so we can’t just say “As long as enough people will like our solution, it’s fine”. Therefore, I chose a custom approach to the analysis:
- If a feature is mandatory or linear for more participants than it is an exciter, and fewer than half (66) of the participants are indifferent or negative about it, I considered it a Must-have. There is no way around this feature if the solution is supposed to be accepted by the community.
- If a feature is an exciter for as many or more participants than consider it mandatory or linear combined, and fewer than half of the participants are indifferent or negative, I considered it an Attractor. If we want our community to welcome a solution as an improvement, it had better have that feature.
- If at least half of the participants were indifferent about a feature, but it’s still mandatory or linear for at least 20 participants (and negative for less than 20), I categorized it as “Inclusion”. That means although the majority doesn’t care, not having this feature would exclude a significant number of contributors from using the solution. That would only be okay if we offer those people an alternative (e.g. a bridge to a solution that does have that feature).
- If more than half of the participants were indifferent about a feature, fewer than 20 considered it mandatory or linear, and fewer than 20 considered it negative, I categorized the feature as “Nice-to-have”. That means exactly that: It’s nice if it’s there (and may attract some people), but the majority just doesn’t care and few people consider it a must, so if it’s not there, it’s no big deal.
- If more than 20 participants were negative about a feature, but it was positive overall for more than 20, I considered it “Problematic”. Some people would like a solution to have this feature, but a significant number of people would actually dislike the solution because of it. If we choose a solution which has that feature, there had better be a way for users to turn it off.
- If more participants disliked a feature than the three positive categories combined account for, I labeled it “Avoid”, meaning that the presence of such a feature would do more harm than good.
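The rules above can be sketched as a small Python function. The shape of the `counts` dict, the function name, and the order in which overlapping rules are resolved are my assumptions; the thresholds (66 = half of the 132 participants, and the cut-off of 20) come straight from the rules:

```python
TOTAL = 132     # survey participants
HALF = TOTAL // 2   # 66
CUTOFF = 20

def categorize(counts: dict) -> str:
    """Assign a feature to one of the custom categories, given how many
    participants fell into each Kano category for it, e.g.
    {"must-be": 70, "linear": 20, "exciting": 15, "indifferent": 25,
     "negative": 1}. Overlaps are resolved top-down (an assumption)."""
    mandatory = counts.get("must-be", 0) + counts.get("linear", 0)
    exciting = counts.get("exciting", 0)
    negative = counts.get("negative", 0)
    dont_care = counts.get("indifferent", 0) + negative
    positive = mandatory + exciting

    if negative > positive:
        return "Avoid"          # dislikers outnumber all positives combined
    if negative > CUTOFF and positive > CUTOFF:
        return "Problematic"    # wanted by many, disliked by many
    if dont_care < HALF:
        # majority cares: mandatory/linear dominance decides
        return "Must-have" if mandatory > exciting else "Attractor"
    if mandatory >= CUTOFF and negative < CUTOFF:
        return "Inclusion"      # majority indifferent, but a significant
                                # minority depends on it
    return "Nice-to-have"
```

For example, a feature that 90 participants consider mandatory or linear and only 26 are indifferent or negative about comes out as a Must-have.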
Here is the list of requirements sorted by the categories:
Unfortunately I was not able to turn the results into a proper HTML table in a way that worked for me, so the current presentation is not screen-reader-compatible. People who use screen readers, or who want to do more with the numbers, can download from share.kde.org the spreadsheet from which the image above was exported.
The Next Step
So, now that we have the prioritized (or rather categorized) requirements, the task is to find a solution that:
- Has all the must-haves
- Ideally has all the attractors
- Does not exclude those who need the inclusion features
- Maybe also has some of the nice-to-haves
- Does not have the Problematic or Avoid features at all, or has an off-switch for them if it has them
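As a rough sketch, that checklist could be turned into a filter over candidate solutions. Everything here is hypothetical (the function name, the idea of a `toggleable` set for features with an off-switch, and treating the first and last bullet as hard criteria while the others are soft preferences):

```python
def acceptable(features: set, toggleable: set, reqs: dict) -> bool:
    """reqs maps feature name -> category from the analysis above.
    A candidate passes only if it has every Must-have and has no
    Problematic/Avoid feature without an off-switch; Attractors,
    Inclusions, and Nice-to-haves are tie-breakers, not gates."""
    for feature, cat in reqs.items():
        if cat == "Must-have" and feature not in features:
            return False
        if cat in ("Problematic", "Avoid") \
                and feature in features and feature not in toggleable:
            return False
    return True

# Hypothetical example: a candidate with the must-have and a
# toggleable problematic feature passes the hard criteria.
print(acceptable({"e2e", "readreceipts"}, {"readreceipts"},
                 {"e2e": "Must-have", "readreceipts": "Problematic"}))
```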
Time to get searchin’!