Guerilla UX Testing, and Other Experiences From Akademy

It’s about a month now since the end of Akademy 2018 and I’ve finally found the time to write up some of my impressions from my favorite event of every year, and to encourage all of you to embrace both your inner User Experience (UX) Researcher and your inner guerilla.

My Talk: Guerilla UX Testing


“Che Stallman”, CC0 from Pixabay / openclipart

I usually try to give at least one talk at each Akademy I attend, and this year I wanted to shake up the way KDE approaches UX design and research a bit again. The idea to give a talk about Guerilla UX came when we had one of those design discussions on Phabricator where different sides of the discussion had different ideas in their minds of what The User™ would want or need, and we had no way to tell which side knew our users better, or even if either side knew them well at all.

Usually, a UX person’s reaction (at least if they don’t suffer from delusions of grandeur) to such situations is “Why don’t we just test it with actual users?”. As a distributed community of mostly volunteers with limited personnel and monetary resources, however, this is much easier said than done. This is where Guerilla UX Research comes in: According to UX Magazine “Guerrilla research is a fast and low-cost way to gain sufficient insights to make informed decisions.”

In short: Guerilla Usability / User Experience Testing has you walk up to someone resembling your target audience (can be a stranger, can be someone you know as long as they are not aware of what you’re currently discussing), show them the product / UI / design you want to find something out about, give them a task and watch them use what you’ve made, recording any user experience issues you notice.

This is often all we need: A relatively quick and cheap way to gain some insight into how our users think and behave, even if the insight is only limited. It’s not as good as detailed user research with specifically selected participants, but as long as the test participants are at least similar enough to the actual target audience, it’s definitely better than generalizing from the project team’s own experience to that of the target audience.

I’d therefore heartily encourage everyone who needs to design a user interface or decide on features and isn’t 100% sure what their users need to take maybe a day (or maybe an hour each if you split the task among the team) to perform some guerilla user testing. I can promise you that you will be surprised by the results and that you will have fun!

For more details see the slides from my talk:

(in case the embedded viewer doesn’t work for you, here are the slides on Slideshare)

Akademy from the Perspective of a Board Member

This year’s Akademy was especially interesting for the board of directors of KDE e.V. in several ways.

Sponsors Dinner

On Saturday, we could witness first-hand how KDE is able to connect different organizations: At the sponsors dinner, representatives from all of Akademy’s sponsors spent an evening together with the board to share ideas and projects.

I had the opportunity to sit at a table with representatives from five companies that are active in the hardware business and are all connected to KDE now: Blue Systems, Mycroft, Pine64, Purism and Slimbook. It was really cool to see the people from the different companies discuss their plans and their experiences with trying to make hardware in a FOSS-friendly, user-freedom-respecting and ethical way. All of them had already heard of the other companies, but most of them had never met in person before, so we really had an effect there.

Annual General Assembly

On Monday, we had KDE e.V.’s Annual General Assembly (the official minutes in German can already be found on our website; the English translation is still in preparation). For me personally, the biggest event of the AGM was the election of a new board member to replace Sandro Andrade, whose term had ended (thank you again for your great work on the board, Sandro!).

Before the AGM, Andy Betts was the only candidate for the position. Not really a problem since Andy was a great candidate, but an election with only one candidate where all one can do is vote for or against that candidate always feels a bit weird to me. That’s why I was happy when Tomaz Canabrava spontaneously stepped up as a second candidate during the AGM. Now we were in the comfortable position of having two excellent candidates to choose from.

After a great Q&A session with both candidates, Andy won the election. I’m very happy to have Andy on the board, though I would have been equally happy with Tomaz.

Meeting with The Qt Company

Right after the AGM, the board met with a delegation from The Qt Company (which included their CEO and CTO) to discuss how we can collaborate to make sure that Qt continues to serve both the needs of The Qt Company and those of KDE (and the ecosystem of Qt users as a whole).

We learned a lot about The Qt Company’s plans and the challenges they’ve overcome and are still facing, and were able to share our experiences as one of the key players in the Qt ecosystem. Overall it was a very informative and constructive meeting, and we’re looking forward to continue collaborating with The Qt Company toward our common goal: To keep Qt as great as it is!

My Highlights from the BoF Days

Unfortunately the meeting with The Qt Company coincided with two BoFs that I had really wanted to attend: One was about the VVAVE project, a very promising KDE music player and music discovery platform created by Camilo Higuita, the other was about the newly-rebooted KDE Human Interface Guidelines. I had contributed a lot to the original content of the HIG but haven’t had much time to work on it recently, so I’m eternally grateful that Fabian Riethmayer took over maintainership and is putting countless hours of work into them.

I spent the first half of Tuesday in two sessions about the current state and goals of the VDG (and Plasma design topics in particular), led by Andy, and the second half discussing mobile and convergence topics in the sessions about our convergent UI toolkit Kirigami UI (led by Marco Martin), and about MauiKit (led by Camilo), which extends Kirigami further. We discussed what we consider to be the defining attributes of Kirigami, what the commonalities and differences between MauiKit and Kirigami are, and how to make sure that both users and developers have the best possible experience across both.

Trainings

Thursday, my last day at Akademy, I spent in the Online Fundraising and Campaigning training, led by Florian Engels from more onion (via NPO academy). All participants agreed that we learned a lot of practical strategies and tips for fundraising and campaigning, which we are sure to apply in the coming campaigns for KDE, Krita and Nitrux!

I also heard a lot of positive feedback on our other trainings: The training on Nonviolent Communication by freelance communication trainer Tilman Krakau, the Technical Documentation Training by Stefan Knorr from SUSE’s documentation team, and the Public Speaking training by Marta Rybczynska, technical public speaker and trainer, and active member of KDE.

Since I was in charge of setting up the trainings and this was (as far as I know) the first time KDE offered professional training to our contributors at Akademy, I’m glad that they seem to have worked out very well!

Conclusion

As in every year, this was again a very successful Akademy, with lots of productive discussions, interesting talks, great conversations with friends over dinner and beer, and overall a wonderful time!

 


Retrospective: The KDE Mission Survey

It might sound a bit weird that I’m now talking about something that took place two years ago, but I just realized that while the call to participate in the survey for the KDE Mission was published on the Dot, the results have so far not received their own article.

People who have participated in the survey but don’t read the Community list might have missed the results, which would be a pity. Therefore, I’d like to offer a bit of a retrospective on how the survey came to be and what came out of it.

The Backstory

To recap a bit on what led to the survey: After we had finally arrived at a shared Vision statement for KDE in April 2016, the next step was to distill the more high-level Vision into a more concrete Mission statement. We started brainstorming content for the Mission in our Wiki, but soon realized that there were diverging viewpoints on some issues and the relatively small group discussing them directly on the mailing list wasn’t sure which viewpoint represented the majority of the KDE community.

We also wanted to know what our users care about the most. Although in the end it’s the community who defines our Mission, we don’t make software just for ourselves, so we hoped that our goals would align with those of our users.

The Survey

To find out what is most important to the community at large as well as our users, I applied one of the standard tools in my belt: An online survey, in which participants indicated their perceived importance of goals that were brought up in the brainstorming, as well as the perceived usefulness of several measures towards achieving each goal. They also gave their opinion on a few of the contending viewpoints from the brainstorming, as well as on the importance of certain target audiences and platforms.

A few demographic questions, such as whether participants identified as users of or contributors to KDE software, how long they had been using and contributing to KDE software, as well as in which area they contribute or whether they do it as part of their job, aimed at making sure that our sample isn’t skewed toward certain groups.

We invited participants via the aforementioned Dot article, several big KDE mailing lists as well as Google+ (where we have a pretty lively community).

Analysis

I ran a series of one-sample t-tests (with Bonferroni correction) to check whether the sample averages for the importance ratings were significantly different from the scale mid-point (4 on a 7-point scale). I did that instead of a within-subjects ANOVA because we cared more about whether each goal or measure was rated above or below that mid-point by the KDE community than about how the items compared to each other.

Furthermore, I compared the two variables representing the two sides of each contending viewpoint using a paired-sample t-test.
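For anyone who wants to run a similar analysis on the published data themselves, here is a minimal sketch of those two tests in Python with pandas and SciPy. The file and column names are made-up placeholders for illustration; the actual data export uses its own naming.

# Minimal sketch of the analysis described above; file and column names are
# hypothetical placeholders, not the actual export from the survey tool.
import pandas as pd
from scipy import stats

df = pd.read_csv("mission_survey_contributors.csv")

# Importance ratings of the proposed goals (7-point scale, mid-point = 4)
goal_columns = [c for c in df.columns if c.startswith("importance_")]
alpha = 0.05
corrected_alpha = alpha / len(goal_columns)  # Bonferroni correction

# One-sample t-tests against the scale mid-point
for col in goal_columns:
    ratings = df[col].dropna()
    t, p = stats.ttest_1samp(ratings, popmean=4)
    verdict = "significant" if p < corrected_alpha else "n.s."
    print(f"{col}: t = {t:.2f}, p = {p:.4f} ({verdict} after correction)")

# Paired-sample t-test for one pair of contending viewpoints
pair = df[["viewpoint_focus_common_tasks", "viewpoint_cover_all_tasks"]].dropna()
t, p = stats.ttest_rel(pair.iloc[:, 0], pair.iloc[:, 1])
print(f"Contending viewpoint pair: t = {t:.2f}, p = {p:.4f}")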

Results

Sample

We had 201 currently or formerly active KDE contributors participating in the survey, as well as 1184 interested users.

Demographics showed that no particular group of people dominated our sample in any of our demographic variables (other than users vs. contributors, which were analyzed separately anyway).

Some result highlights

I will only report a few interesting highlights in this post; please see the full report here for all results. If you want to do your own analysis, you can find all the data here.

 

Chart showing the perceived importance of several proposed goals

Perceived importance of several proposed goals (click to enlarge)

The importance of the goals turned out to be quite similar between contributors and users, with creating software products which give users control, freedom and privacy, as well as providing users with excellent user experience and quality being the most important for both groups. Contributors rated all goals’ importance as significantly above the scale midpoint. The biggest difference between users and contributors concerns whether we should reach as many users as possible, with users rating it much lower than contributors. This is to be expected, as current users naturally don’t care much about whether we try to reach other users.

Chart showing the agreement with contending viewpoints

Agreement with contending viewpoints (click to enlarge)

Regarding the contending viewpoints within the brainstorming group, the survey showed a statistically significant preference among the KDE community at large for one of the viewpoints in three out of four cases: On average, the community shows a preference for focusing on applications covering our users’ most common tasks, for covering GUI applications as well as non-GUI applications, and for focusing on Qt. The community seems largely indifferent about whether we should strive for consistency between KDE GUIs across platforms, or for adherence to specific platform guidelines.

Chart showing relative importance of different target operating systems

Relative importance of different target operating systems (click to enlarge)

When it comes to which operating systems we should target, GNU/Linux is still preferred by a large margin, followed by Android according to our contributors, and by *BSD and other Free operating systems according to our users. There was a big difference in perceived importance of Windows, OS X (now macOS) and iOS, showing that KDE contributors are on average far more interested in supporting proprietary operating systems than our current users are. One possible interpretation of that could be that the KDE community takes a more pragmatic approach to OS support, whereas our current users take a more ideological standpoint. A different interpretation would be that our users care about what they currently use, and while Windows and macOS have seen some support from our applications, the userbase on those is likely comparatively small.

Chart showing relative importance of different target audiences.

Relative importance of different target audiences (click to enlarge)

An interesting finding was the very close match between contributors and users when it comes to the importance of different target audiences. This shows that apparently, KDE has so far reached pretty much exactly the users we aimed for.

Overall, the results confirm that the ideas coming out of the brainstorming are mostly shared by the wider community as well as our users. They also show that the KDE community has some ambition to expand our userbase and target platforms beyond what we serve today, but still wants to stay true to our roots such as Qt.

What Happened Next

The full results were presented at an Akademy BoF session to discuss the KDE Mission, as well as to the Community mailing list. They were used to guide the further discussion that eventually led to KDE’s Mission and Strategy statements.


Hey Mycroft, Drive Me to our Goals!

Intro

Almost three months after Akademy 2017, I finally found the time to write a blog post about how I experienced it.

Akademy is where I learn again about all the amazing things happening in our community, where I connect the dots and see the big picture of where all the effort in the various projects together can lead. And of course, I meet all the wonderful people, all the individual reasons why being in KDE is so amazing. This year was no different.

Some people voiced their concern during the event that those who are not at Akademy and see only pictures of it on social media might get the feeling that it is mostly about hanging out on the beach and drinking beer, instead of actually being productive. Everyone who was ever at Akademy of course knows this impression couldn’t be further from the truth, but I’ll still take it as a reason to not talk about any of the things that were “just” fun, and focus instead on those that were both fun and productive.

Tales from the KDE e.V. Board of Directors

One thing that happens at every Akademy is the Annual General Assembly (AGM) of the KDE e.V.

This association, usually just called “the e.V.” by the community, is KDE’s legal and financial representation. Among other things, it raises funds and uses them to make in-person meetings within KDE happen, such as developer sprints or conferences, and sponsor attendees who cannot afford travel and accommodation for those themselves. It also takes care of legal issues such as maintaining KDE’s trademarks, and this year it also uses some of its funds for two marketing contractors, Ivana and Paul, to help push our promo efforts. Since last Akademy I am on the board of directors of the KDE e.V., so the AGM is especially important to me.

One important aspect of this year’s AGM was that three out of the five positions in the board of directors were up for re-election. Two of the board members whose terms ended, our president Lydia Pintscher and vice president Aleix Pol, ran for another term and were re-elected, whereas Marta Rybczynska was not able to run for another term as treasurer, and is now succeeded by Eike Hein. Eike used to be one of the “KDE phantoms”: He’s been a very active KDE contributor for many years (most notably the maintainer of Konversation, Yakuake and several key parts of Plasma such as the Task Manager), but the majority of his fellow KDE members had not seen him in person until this year.


The new board (from the left): Eike, yours truly, Aleix, Lydia and Sandro

Fortunately I had the opportunity to talk to Eike more than anybody else at this Akademy because we had been assigned as roommates. He had lots of interesting stories to tell, from the way IRC facilitates building communities, to communication culture in Korea (where he now lives), to the experience of moderating multiple Subreddits, and much much more. I’m really looking forward to working with Eike on the board!

My highlights from two days of talks

The following are explicitly my personal highlights. If you’re a non-programmer like me, who is also especially interested in design and/or in KDE software running on anything that isn’t a traditional desktop or laptop PC, chances are you might find those talks as interesting as I did. If you know more about programming than I do, you’d also enjoy a lot of the other presentations, where I could just sit and be amazed by how people like you can understand all of that technical stuff.

So, here goes:

Aleix Pol’s talk about A laptop by KDE summarized our experience with the KDE Slimbook, the very first KDE-branded hardware on the market, and gave a few ideas on what we might do next in that area. This was especially important to me because I was deeply involved in promoting the KDE Slimbook initiative. The talk was followed up by a BoF session during the week where we did an in-depth retrospective on how the Slimbook project went so far and what we learned from it.

A talk which was especially relevant for me as a user researcher was the one on (K)UserFeedback, where Volker Krause introduced the new framework that allows applications to – after opt-in and fully anonymized, of course – collect usage data and send it to KDE for use in improving our software. Given that privacy is at the core of our Vision and Mission, of course we are extremely cautious in that area, but some usage data is needed for us to make software that fits the needs of our users, not just our own. Volker’s talk was accompanied by a BoF on Tuesday where we discussed what our policy on collection, storage and use of that data should look like in order to gain useful information without compromising our users’ privacy.

A talk which was interesting for me from a strategic, design and user perspective was the one about Mycroft AI Plasmoid & Plasma Desktop Integration, in which Aditya Mehra presented some of the amazing things the Mycroft AI can already do in Plasma, as well as his plans for the future. Digital Assistants are one area where the Linux desktops clearly lag behind all big proprietary operating systems. Many Free Software proponents reject digital assistants outright due to a perceived inherent privacy problem, but Mycroft (apart from currently using a Google service for speech recognition, though there are plans to replace that) shows that privacy-protecting and fully user-controlled digital assistants are possible. That is why, from my perspective, this is a hugely important strategic area for KDE. This talk was also accompanied by a BoF on Tuesday, about where else in Plasma we can use Mycroft’s capabilities. If you are as excited as I am about the role KDE could play in Free Software solutions for AI and home automation, consider participating in the discussion on my community goal proposal on that subject.

Aditya giving his talk about Mycroft

In his talk Opening new doors: KDE in embedded, Agustin Benito Bethencourt presented some of the ways in which KDE could play an important role in the world of embedded systems, for example in the automobile industry. He has been involved in two different projects in that area and told us that the industry is waking up to the benefits of open source, and that from his perspective, now would be a great time for KDE to make ourselves known in that space. This talk, too, was accompanied by a BoF session, where we discussed next steps for getting our software to run on automotive systems. This is also an area where I believe it’s important that KDE champions Free Software, because, like with virtual assistants: What have we won if our PCs and phones run Free Software, but our cars are not under our full control and might even spy on us? If you are interested in this project, head over to the Automotive project on Phabricator and join the discussion and work there!

In his talk Looking for Love, our marketing contractor Paul Brown taught us the importance of focusing our communication strategy on the users’ needs, by presenting, in clear and easy-to-understand words, what benefit our products bring to them, instead of trying to describe their purpose as precisely as possible from our own perspective. That he took the Kirigami wiki page, which I had contributed significantly to, as a negative example of a description which uses way too much jargon and focuses on technical details instead of user benefits, of course meant that I had to endure seeing one of my “babies” being ripped apart in front of my eyes, but it was definitely worth it! The talk was meant as an appetizer for a workshop on Monday, where Paul helped everyone who wanted to improve their product website.

In his talk Input Methods in Plasma 5, Eike Hein made it clear that the state of input methods (which are needed primarily for text input in e.g. many Asian writing systems, but can also handle things such as emoji input or auto-completion or -correction) in Plasma and KDE applications is currently lagging behind other popular operating systems and desktop environments. He presented what needs to be done to improve the situation, and is now rallying people behind a proposal for a community goal to make it happen together. So here as well: If you also feel that improving input methods in KDE software is important, join the conversation on the proposal!

Camilo Higuita, author of the Babe music player, gave a talk Introducing Babe and a contextual approach to multimedia desktop apps where he demonstrated how Babe uses various techniques and online services to find connections between songs in order to give smart answers to search queries. His talk was also accompanied by a BoF session during the week, where we discussed some design ideas and how to use Kirigami to make Babe a convergent desktop and mobile application.

Yours truly also gave a presentation, together with Dan Leinir Turthra Jensen, titled Folding Your Applications, where we talked about the design behind the Kirigami UI framework and how application developers inside and outside of KDE are already using it to easily create mobile and convergent user interfaces.

Putting Ideas to Action: The Workshop and BoF Days

Monday through Thursday were dedicated to workshops and Birds of a Feather (BoF) sessions, where various groups in KDE – established project teams, groups spontaneously forming around a topic, or often a combination of both – discussed how to drive their ideas forward.

In addition to the already mentioned follow-up sessions to talks from the previous days, these were the sessions that inspired me most:

In the Plasma Mobile part of the Plasma BoF, we learned about Plasma Mobile’s current status and discussed what needs to be done next. There is also a proposed community goal to improve the Plasma Mobile platform for end-user needs, so if you agree with me that Plasma Mobile is of strategic importance to KDE, please participate in the discussion there!

If you ask yourself what the deal is with all those community goals I keep referring to: The initiative to define some concrete mid-term goals for KDE for the next 3-4 years was actually born at Akademy, during a BoF titled ‘”Luminaries” Kabal Proposals’, where Kevin Ottens, Frederik Gladhorn and Mirko Boehm presented to us what came out of their discussion about how they think KDE can be put on the right track towards the future. The goal-setting initiative was one of their proposals. Another one was integrating the KDE e.V. working group reports, which have so far been part of the AGM, into the general conference schedule, allowing people who are not members of the KDE e.V. to learn what’s happening there, as well as significantly shortening the AGM. This proposal will be implemented at next year’s Akademy. Their third proposal, making sure the barrier of entry to contributing to KDE is as low as possible, has been picked up in two goal proposals (which are likely to be merged into one), so, once more: If you agree that this is important for KDE, join the discussion over there!

In a BoF titled “Visual Design Direction”, Andres “Andy” Betts brought up some ideas on how to better integrate designers into the Plasma development process again, and volunteered to spearhead the next round of design improvements. Andy has also submitted a goal proposal related to this, so… you know the drill by now.

BoF wrap-up session, where BoF leaders summarize the results of their session to the rest of the attendees

Closing Words (and a shameless plug)

Now that I’ve advertised various community goal proposals here (one of them being my own), let me use the final paragraph to link to my other proposal, Making KDE software the #1 choice for research and academia. This goal aims to give KDE software the exposure in the research and academic sector that it deserves due to its features and quality, but currently does not have. I think KDE has a lot to offer to researchers, teachers and students, so I’d like us to get in touch with them, promote our software to them and improve it based on their direct feedback. If you agree, participation is welcome!

With that out of the way, I can summarize that this year’s Akademy was a very successful event, despite being slightly smaller than usual (due to the location being a bit hard to reach and the timing falling into vacation time for many KDE members). I’m now full of enthusiasm again about the things to come for KDE, and looking forward to next year’s Akademy in Vienna!


Results of the Requirements Survey for a KDE-wide Chat Solution

A week ago, I wrote my previous blog post about a survey I had set up, to figure out how important each of the requirements we had collected for a common IM / chat solution for KDE is for us.

All in all, 132 people followed my request to participate in the survey, and answered the 108 questions in it. Thank you all for taking the time!

This post presents the result of the analysis of your answers.

Survey Participant Experience

Normally whenever I conduct a survey, before releasing it onto participants, I collect feedback from a bunch of testers to see if there is anything confusing or annoying in it. It’s just standard practice.

I should have known that if I ever decide not to do it, it would backfire. Normally I design surveys from scratch or reuse ones I or a colleague has made. This time, I followed a recommendation from a fellow KDE usability consultant to use kanosurvey.com, since it is a pre-made survey tool with an established underlying model. You just have to enter the features you want to ask about, and the tool does the rest. I thought “Hey, these surveys are conducted all the time in all kinds of contexts with all kinds of people, always with the same questions, so it should be safe to just apply it here as well, right?”

Wrong. As soon as I asked people to participate, I got lots of comments about how difficult it was to fill out the survey, how unsure people were which answers to choose.

This is how a question in the survey looks:

Kano survey question.png

The biggest problem participants had was whether to choose “I like it this way” or “I expect it this way” as their answer to the first question. They tried to mentally bring all answer options into a logical order, but were unsure whether “like” or “expect” was stronger.

Now the problem is: There is no strict logical order to the answer options. The reason for that is that the Kano model does not simply split features into “want” and “don’t want”. What it aims to do is categorize features into mandatory (expected) ones, exciting (surprising) ones, those which are “the more, the better” (called “linear” or “one-dimensional”), those about which people just don’t care (“indifferent”), and those which they actually dislike (“negative”); the specific terms used differ between the people applying the model.

A feature which users expect to be present, and dislike to be absent, is a must-be, whereas one which they like to be present but don’t care if it’s absent is an exciting feature. Neither of them is necessarily “stronger”: Must-be features are mandatory, but if you want people to choose your product, you better make sure you also have some features which cause excitement.

Participants expressed their concern that they might have distorted the results if they were unsure whether to choose “expect” or “like” if something felt like a must-have for them. The good news is: As long as they chose “dislike” on the absence question, it doesn’t really matter that much which of the two they chose on the presence question. We’ll make sure that both the must-be and the linear features will be present in the solution we choose, because we don’t want people to miss anything that they care about.

On the other hand, I also got positive feedback from participants who liked the idea of looking at each feature from two different angles.

Still, a survey which makes participants confused or even angry is not a good survey. I won’t use kanosurvey.com again, at least not for this target audience.

The Results

With that out of the way, let’s get to the actual results. What kanosurvey.com spits out after calculating the individual answers is, for each requirement, how many individual answers fell into each of six categories: The ones mentioned above, plus “Questionable” (which I’ve relabeled to “Misunderstanding” in the table below). Questionable means that the combination of answers from a participant is unexpected from the Kano model’s perspective and therefore indicates that the participant misunderstood either the description of the requirement or the question. We’ve had a few of those in the survey, but luckily never more than one participant per requirement. This shows that despite the subjective confusion, the answers which the participants gave were still almost entirely in line with what the model expects, and therefore usable.

Now the next question is: What to do with this? In general market(ing) research, what people do is calculate which product configuration would give them the best chance in the market through a combination of meeting people’s expectations and getting them excited.

We’re not doing regular market research, however. The solution we choose should not exclude people, so we can’t just say “As long as enough people will like our solution, it’s fine”. Therefore, I chose a custom approach to the analysis:

  • If a feature was mandatory or linear for more participants than it was an exciter, and less than half (66) of the participants were indifferent or negative, I considered it a Must-have. There is no way around this feature if the solution is supposed to be accepted by the community.
  • If a feature was an exciter for as many or more participants than mandatory and linear combined, and less than half of the participants were indifferent or negative, I considered it an Attractor. If we want our community to welcome a solution as an improvement, it had better have that feature.
  • If at least half of the participants were indifferent about a feature, but it’s still mandatory or linear for at least 20 participants (and negative for less than 20), I categorized it as “Inclusion”. That means although the majority doesn’t care, not having this feature would exclude a significant number of contributors from using the solution. That would only be okay if we offer those people an alternative (e.g. a bridge to a solution that does have that feature).
  • If more than half of the participants were indifferent about a feature, less than 20 considered it mandatory or linear, and less than 20 considered it negative, I categorized the feature as “Nice-to-have”. That means exactly that: It’s nice if it’s there (and may attract some people), but the majority just doesn’t care and few people consider it a must, so if it’s not there, it’s no big deal.
  • If more than 20 participants were negative about a feature, but it was overall positive for more than 20, I considered it “problematic”. That means some people would like a solution to have this feature, but a significant number of people would actually dislike it because of that. That means if we choose a solution which has that feature, there better be a way for users to turn it off.
  • If more participants disliked a feature than rated it in the three positive categories together, I labeled it “avoid”, meaning that the presence of such a feature would do more harm than good (a rough sketch of these rules in code follows right after this list).
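Here is that sketch in Python. The function is mine, not the script I actually used for the analysis, and since the prose rules above can overlap, the order of the checks is just one possible reading.

# Rough sketch of the categorization rules above; half of the 132 participants is 66.
# The order of the checks is an interpretation, since the rules can overlap.
N_PARTICIPANTS = 132
HALF = N_PARTICIPANTS // 2  # 66

def categorize(mandatory, linear, exciter, indifferent, negative):
    """Map the Kano answer counts for one requirement to the custom categories."""
    must_or_linear = mandatory + linear
    positive = must_or_linear + exciter
    dont_care = indifferent + negative

    if negative > positive:
        return "Avoid"
    if negative > 20 and positive > 20:
        return "Problematic"
    if must_or_linear > exciter and dont_care < HALF:
        return "Must-have"
    if exciter >= must_or_linear and dont_care < HALF:
        return "Attractor"
    if indifferent >= HALF and must_or_linear >= 20 and negative < 20:
        return "Inclusion"
    if indifferent > HALF and must_or_linear < 20 and negative < 20:
        return "Nice-to-have"
    return "Uncategorized"  # combinations the rules above do not cover

# Example: 40 mandatory, 35 linear, 20 exciter, 30 indifferent, 7 negative answers
print(categorize(40, 35, 20, 30, 7))  # -> "Must-have"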

Here is the list of requirements sorted by the categories:

Table with results

Unfortunately I was not able to turn the results into a proper HTML table in a way that worked for me, so the current presentation is not screen-reader-compatible. People who need screen readers or who want to do more things with the numbers can download from share.kde.org the spreadsheet from which the image above was exported.

The Next Step

So, now that we have the prioritized (or rather categorized) requirements, the task is to find a solution that:

  • Has all the must-haves
  • Ideally has all the attractors
  • Does not exclude those who need the inclusion features
  • Maybe also has some of the nice-to-haves
  • Does not have the Problematic or Avoid features at all, or has an off-switch for them if it has them

Time to get searchin’!


The Quest for a Common Chat/IM Solution

Konqi between chat solutions

A Free / Open Source Software community usually uses several means of communication: among them are email, forums, code review and bug tracking systems, and nowadays also video chat systems, but one of the central communication channels is usually real-time text communication, also known as instant messaging or chat.

Traditionally, IRC has been the cornerstone of chat in the FOSS world, as it is open, easy for everyone to join (no account needed) and not under the control of any single organization. IRC still does what it was designed for perfectly well, but while it is still basically the same as it was 20 years ago, the world of chat and instant messaging around it has evolved significantly in the meantime: Instant messaging services such as WhatsApp or Telegram (or KakaoTalk, WeChat or Line in Asia) are used by pretty much everyone (and their parents, literally), systems such as Slack dominate company communication, and those systems have shaped how people expect a chat system to look and behave.

While this does not really bother long-term members of FOSS communities such as KDE, who know IRC inside and out and feel perfectly comfortable with it, we have noticed that for many new and young potential contributors, IRC feels like a “relic of the past”, due to how it is presented and the features it offers.

Therefore, Telegram has become the standard communication tool for the designers in the VDG, and WikiToLearn uses rocket.chat and Telegram as well. This works okay for the time being, as users in Telegram groups can communicate with users in corresponding IRC channels via bridges. However, it still feels disconnected, because the bridges limit functionalities like mentioning users or sharing images, and setting up a bridge requires manual effort for each group/channel, while people often create ad hoc Telegram groups for specific topics. It’s also difficult for people to find relevant Telegram groups unless a link to them is put manually on the community wiki.

The biggest problem with Telegram is, however, that while its client code is open-source and they offer an API for developing additional clients, the server side is fully controlled by a single company, which means our communication on Telegram is fully dependent on whatever that company decides to do.

To fix this situation, three weeks ago Jonathan Riddell proposed on the KDE Community mailing list that we switch from our mix of solutions to rocket.chat for all of KDE. This proposal sparked a very long and occasionally heated debate, with each side arguing why their solution is better. At some point I realized that we’ll hardly ever reach a conclusion if we don’t even know what exactly our requirements are for a common chat solution. To fix this, I started an Etherpad to collect requirements.

While doing that, we realized that there was quite some disagreement about which features or properties were “must-haves” and which were “nice-to-haves”. That’s when Heiko Tietze suggested that it could make sense to use the Kano model to figure out which features are indeed must-have for the majority of our community, which are not needed but would make people want to use a solution, which ones the majority doesn’t actually care about and which might even annoy people. He also suggested a tool called Kano survey which makes it easy to capture this information in an online survey.

So I set up a survey about “Requirements for a primary Chat/IM solution for KDE” to prioritize our identified requirements. Now, if you would like to contribute your opinion on the importance of the different requirements because you regularly communicate with KDE on our current IRC, Telegram or rocket.chat channels, please fill in the survey here by Thursday, August 31st.


My Plans for Akademy / QtCon

going-to-akademy-2016

It’s finally that time of the year again which many KDE contributors have been looking forward to: The time when we all get together at Akademy, to meet our KDE friends in person, give and listen to talks and make plans for world domination!

This year is special because Akademy will be part of QtCon, a joint conference with Qt, FSFE, VideoLAN and KDAB, which means even greater opportunities to learn something new, reach an audience beyond KDE, and deepen our alliances!

This year, I’ll give three quite different talks:

The first one, on Saturday, is titled “Quo Vadis, KDE? – A FOSS Community’s Journey toward its Vision and Mission”. There I will talk, together with Lydia Pintscher, about how the desire to find a direction for KDE led to the KDE Evolve initiative, which led to the KDE Vision and Mission initiatives, and beyond.

The second one, on Sunday, titled “Meet Kirigami UI – How KDE’s new framework can help to create multi-platform mobile and convergent applications” will be a more product-oriented / technical one. Here, Marco Martin and I present our convergent application framework, Kirigami. I will talk about some design background, whereas Marco will go into technical detail and explain how to set up a project that uses Kirigami.

The third talk I’ll be giving (also Sunday), this time together with Jens Reuterberg, is again more on a “meta level”: Under the title “Movements and Products” we will talk about two different mindsets with which contribution to a Free Software community can be approached: A product-focused mindset or a movement-focused mindset. The two are not mutually exclusive, and in fact we’d recommend adopting some of both for a community like KDE to succeed as a movement that creates products.

If you can’t be at QtCon or can’t make it to the talks: I assume the pages linked above will have recordings to download at some point.

Giving talks is not the only thing I do at Akademy / QtCon, of course. There are also all kinds of BoF sessions to attend: On Monday, I’m planning to be at the Plasma BoF (and especially at the Kirigami-focused part in the afternoon, of course), as well as the “Appstream metadata on software releases” BoF (because I was the one who pushed that topic with Aleix). Tuesday morning will be dedicated to Kube, and where I’ll spend the afternoon mainly depends on whether I’ll be elected into the KDE e.V. board at KDE e.V.’s Annual General Meeting this Thursday. Wednesday morning will be all about Discover.

Had I known there were so many important BoFs for me, I’d probably have stayed longer than Wednesday evening, but that wasn’t clear yet at the time I booked my travel, so I’ll have to make as much of the first three BoF days as I can.

Aside from all that, there are of course lots of hallway discussions (Akademy is always great for those!) as well as lots of fun to be had!

It will be great as always, I’m really looking forward to the second half of this week and the first half of the next one.

See you in Berlin!


A Usability Guy’s Journey to Creating his First KDE Tool – Part 2: A Vision

Advice

As already suggested in the first article of this series, fixing a bug was just the start of my journey. What I now want to do is help KDE improve our AppStream metadata. Why is this so important to me?

I first got in contact with AppStream as part of my interaction design work on Discover. The fact that the only thing I could link the word Discover to was a not very informative quickgit project page perfectly exemplifies one of the two big problems that I want to help solve: that only a minority of KDE projects have a proper representation on the web. The other issue is that the default software centers for Plasma (Discover) as well as GNOME and Unity (both use GNOME Software) draw the information they present about applications from AppStream data, but far from all of KDE’s applications provide this data.

The nice thing is that we can reuse the information for AppStream to automatically create a website for each application which is at least a lot more informative for end users than what quickgit provides. Of course, those projects who have the manpower and motivation to create gorgeous websites like that for Minuet or Krita should still do that, but the others would still get at least something useful.

Of course I am aware that AppStream metadata do not grow on trees. Someone has to write them. AppStream database builders read metadata about an application from an appdata.xml file provided with the application’s source code. Now I know that just seeing the extension “.xml” is enough to make many developers’ hair stand on end. The fact that developers have to write an XML file by hand may be one of the reasons why not everyone does it.

It may not be the only reason, though. Maybe people need to be reminded that they should check before release whether their AppStream data is up-to-date? Maybe they need AppData generation and maintenance to be better integrated into their workflow?

I want to help developers to create and update their applications’ AppData more easily, and I want to create a tool for that. For now I’ll call that tool AppData Helper.

This is its product vision:

AppData Helper is a tool that makes it easy for application maintainers and developers to create and maintain an AppData file for their application. It takes as much of the “bureaucracy” out of the process as possible, allowing them to focus solely on the actual information they have to provide.
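To make that vision a little more tangible, here is a purely illustrative sketch of the kind of skeleton such a helper could produce. None of this code exists yet; the template, field names and function are only meant to show how little “bureaucracy” could be left for the maintainer.

# Purely illustrative sketch of what an "AppData Helper" could generate; the tool
# does not exist yet, and all names below are placeholders.
from xml.sax.saxutils import escape

APPDATA_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <id>{app_id}</id>
  <name>{name}</name>
  <summary>{summary}</summary>
  <description>
    <p>{description}</p>
  </description>
  <url type="homepage">{homepage}</url>
</component>
"""

def write_appdata_skeleton(path, **fields):
    """Fill the template with escaped values and write it next to the sources."""
    safe = {key: escape(value) for key, value in fields.items()}
    with open(path, "w", encoding="utf-8") as f:
        f.write(APPDATA_TEMPLATE.format(**safe))

write_appdata_skeleton(
    "org.kde.example.appdata.xml",
    app_id="org.kde.example",
    name="Example",
    summary="One-line summary shown in software centers",
    description="A longer paragraph describing what the application does for the user.",
    homepage="https://example.kde.org",
)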

Now I have two questions for you, dear potential users. First of all: Do you think such a tool may make it more likely that you provide AppData for your application? Please answer that question in the poll below. The second question is: What specifically would such a tool need to do in order to be of help to you? Would it have to be a GUI application, a command-line tool/script, a KDevelop plugin, or something else? Please provide input for that in the comments!

Thank you,

Thomas


A Usability Guy’s Journey to Creating his First KDE Tool – Part 1: Baby Steps


I, psychologist by education, human-centered interaction designer by trade – my only completed software project so far being a web-based Advent calendar I made years ago – have embarked on a journey to write my first KDE tool. How did it come to this?

It all started with me wanting two things:

  1. To change the speed of Plasma animations on my system
  2. To have proper AppStream metadata for KDE software

These two may sound completely unrelated at first, but they both were key in sending me down this path. This article will tell the story behind 1, which is about my very first code commit to KDE. I will go into quite some detail because I feel that my journey may provide some insights for KDE.

This story started on April 6th. I was discussing with Aleix Pol about actual vs. perceived performance in Plasma and other KDE software. One thing we agreed on was that animation speed has a big impact on perceived performance. During that discussion, we found out that the setting for the animation speed is almost impossible to find in System Settings, because it sits in a module where you would not expect it to be (Display and Monitor > Compositor), and searching for “animation speed” points you to the wrong module (this was due to an oversight when the “Desktop Effects” module was split in two and the search keywords were not adapted). The “it sits in an unexpected module” problem is about to be fixed by moving it into the “Workspace behavior” module, but first I wanted the actual bug with the search pointing to the wrong module to be fixed.

At first, as usual, I wrote a bug report about it. Then, Aleix, being a cunning little Spaniard (*scnr*, I know you’re Catalonian), said these fateful words: “You could fix this one yourself!”. Now the cunning part was that he knew I could not defend myself by saying “But I don’t know C++ …” because the search keywords are defined in .desktop files, easily read- and writable for mere mortals like me. So, without any good argument why I couldn’t, I set out to fix it myself.

The first obstacle on my journey was that even after years of being a KDE contributor, I still did not have a KDE developer account. The reason is simple: My contributions usually come in the form of text and mockups, not code. I describe my ideas in wiki pages, emails, forum posts, chats, review requests, bug reports, blog posts, …, but not in repositories. For this simple patch, I could have just put it up on Reviewboard or Phabricator and have someone else commit it, but if I was going to contribute code, I wanted to do it properly™.

Besides, I saw an opportunity in this exercise: We often hear that while new contributors perceive KDE as a very friendly and welcoming community, they perceive the organizational process of becoming a KDE contributor as relatively difficult, and now I could experience that process first-hand and see what could be improved. So I wanted to go all the way.

Now first of all, I needed to get a developer account. I know that community.kde.org is supposed to be the place where all the information for the community can be found, so I looked there. The “Get Involved/development” page looked promising. And indeed, this had a link to the “Get a Contributor Account” page. Here I found the first inconsistency: On the page, it was called a “contributor account” whereas in the actual application process it’s called “developer account”. I have fixed that now, changing (hopefully) all of it to “developer account”. “Developer account” made more sense to me because – as evidenced, for example, by me – not all contributors need such an account. Apart from that, though, the documentation there was pretty straightforward.

With that and Aleix’s blessing, I had my Developer account created the next day. Now the more difficult part was setting up Git.

The “Get a Contributor Account” page (which will hopefully soon be renamed by someone with the necessary privileges) had a link to a “next tutorial” in the “And now?” section which in turn was just linking back to the page you came from, along with something about me having to adapt my local copy to a new server (without further specification). I did not understand that part, so I asked Aleix what it meant and now we have made it more specific.

Olivier Churlaud, our new Master of the Wikis, then pointed out to me that the best pages for further information would be the Git or Subversion pages, depending on the project one wants to contribute to, so I’ve now linked to those from the “And now?” section.

So, next step: setting up Git. I followed the steps in the “Git Configuration” page, which involved a tedious number of “git config --global” commands. I wonder if maybe we should provide a template .gitconfig file to make this easier, at least for things that are ideally the same for everyone, anyway? One of the instructions in there told me to enter

git config --global user.name <Your Real Name>

so what I entered was

git config --global user.name Thomas Pfeiffer

Following the instructions to the letter bit me in the behind later on when I tried to commit, because of course Git ignored the last name after the space and then the server could not identify me. Without Aleix’s help, I’d never have been able to identify the problem, let alone fix it.

This may all be obvious for seasoned shell users, but it clearly wasn’t obvious for me, so I changed the instructions on the wiki to

git config --global user.name "<Your Real Name>"

just to remind people to put their name in quotes.

The page also recommended using the Git commit template from kdelibs. This sounded outdated to me given that kdelibs is long deprecated by now, but since I wasn’t aware of any other commit template, I went ahead and used the one from kdelibs, anyway. I’ve now started a discussion on the kde-devel mailing list regarding where we should put that template in the future.

Aleix suggested that I’d use Arcanist to link my commit to a review on Phabricator, but since I didn’t want to install another tool just for my one commit (and Arcanist was never mentioned in the pages I’d come across on the wiki, either), I decided to skip that for the time being. So I manually created a Diff on Phabricator (like the “Get Involved/development” page had told me), which was reasonably simple.

Now it was time to actually commit and push my changes. For that, the page “Infrastructure/Git/Simple Workflow” sounded like the most fitting one to follow. I did what I was told there to add and commit my change. Now for the commit message. Following the link trail from the Simple Workflow page lead me (through one unnecessary hoop which I’ve now cut out) to the “Special keywords in GIT and SVN log messages” section of the “Commit Policy” page.

There I found what I was looking for, except for how to link to the Diff on Phabricator (I’ve added that now). With my commit message complete, I could finally push. I was asked whether I should add the server with the fingerprint SHA256:75pLvlr+9A0ki9YlWAz+il4UPI+N81uMvcocs42d0wU to my list of trusted servers. I remembered that the Git Configuration page had also mentioned server fingerprints, but unfortunately, those did not match the fingerprint Git presented to me. Aleix told me that the server was okay and the page was just outdated. Accepting a fingerprint which does not match the one you find in the documentation of course defeats the purpose of putting fingerprints in the documentation in the first place, so we have to either remove them from the wiki (I did that for now) or find a way to keep them up to date.

Now, finally, my first commit was pushed! Closing the bug automatically failed, unfortunately, because I had not heeded the advice in the page about getting a developer account which said “Also note that this email address should be the same one that you use on bugs.kde.org“. This one’s on me, then.

So, this is the story of my very first commit to KDE! I learned a lot along the way, uncovered some slight shortcomings in our documentation and fixed most of them. I can say that trying to relive the first contribution experience first-hand is a good lesson for anyone who cares about improving it.

In the next posts of this series I will expand on that by creating my own little tool for making the creation and updating of AppStream metadata for KDE applications easier, and I will be doing it in the way I know best: the user-centered way.

Have you started contributing to KDE recently, or know someone who has? If so, please let us know about your experience! What went well, where could we improve our onboarding process in KDE? Tell us in the comments!


Do Plasma users know all the useful shortcuts?


“keyboard shortcut bulletin board” by arvind grover

Dear reader,

When Heiko Tietze and I discussed the usefulness of a user assistance feature that is being considered, we realized that we had very different assumptions about how well users know the keyboard shortcuts that KWin and Plasma offer.

Although I know that the readership of my blog is not exactly representative of the Plasma and KWin user base, I’d like to get at least a rough feeling about whose perception might be closer to the truth. For that, I need your help: I’d like to ask you to do the following:

  1. Go to System Settings > Shortcuts > Global Keyboard Shortcuts
  2. Select “KWin” in the “KDE component” dropdown
  3. Go through the shortcuts which have a “Default” value set and count those which you did not know by heart (or not at all) but think could be useful to you.
  4. Do the same for the “Plasma” component
  5. Add the two numbers
  6. Select the appropriate option in the poll below.

If you know any Plasma users who don’t read Planet KDE / my blog, it would be great if you could link them to this.

Thank you!

P.S.: Sorry for the confusion I might have caused with the picture above. It has nothing to do with Plasma or KWin, actually. I just wanted a nice picture, and I explicitly did not want to show any Plasma or KWin shortcuts in order to not influence the experiment.


Plasma Mobile’s Vision

We’re currently in the process of designing the first phone user interfaces for KDE applications, applying the KDE Mobile Human Interface Guidelines (a set of soft rules and recommendations for designing user interfaces for mobile applications by KDE). Therefore, the third part of my blog series about the HIG creation still has to wait a bit.

In the meantime, I’d like to talk a bit about another foundation on which Plasma Mobile is built: its vision. As I’ve already laid out in my blog post about creating a vision for the KDE PIM Framework, a vision is very important to align the work in a project towards a common goal, and to inspire those contributing to it. Inspired by the talk Andrew Lake and I had just given at Akademy about product / project visions, it did not take much convincing to get the Plasma Mobile team to start working on a vision.

Shortly after Akademy, I met online with Sebastian Kügler, Ivan Čukić and Jens Reuterberg to draft a first proposal for the vision, along with potential target personas. The result was then presented to the Plasma mailing list, where it was well received, and consequently adopted with only minor revisions.

And here it is, our vision for the future which guides the design and development of Plasma Mobile (this does not say when Plasma Mobile will meet these goals):

Plasma Mobile aims to become a complete software system for mobile devices. It is designed to give privacy-aware users back the full control over their information and communication. Plasma Mobile takes a pragmatic approach and is inclusive to 3rd party software, allowing the user to choose which applications and services to use. It provides a seamless experience across multiple devices. Plasma Mobile implements open standards and it is developed in a transparent process that is open for the community to participate in.

These are the four key elements:

1. Control over your data and communication

With most current mobile operating systems and applications, users have no choice but to trust the people who create them not to abuse the detailed and sensitive information stored on their devices, or the communication that happens through them.

When installing an app, we can see the (vaguely defined) permissions that the app requires, and, depending on the operating system and version, we can either take the app as it is or leave it, or we can block certain permissions. Beyond that, however, we can never tell what exactly happens with our information and communication. Which files or records are accessed? Where is information stored, and who or what may access it in the future? Where is information sent? Is the communication secure and private? We simply cannot tell. We can only trust that others do what they say they are doing.

In Plasma Mobile, we want to change that. We want users to be able to see what happens to their information and communication, to the degree of detail that they prefer. They should be able to allow an application to access some files, but not others, and they should be able to control when and where their information is sent.

We do not want to overburden users with feedback and choices they cannot handle, though. Our aim is to present the information and choices as clearly as possible, and give users a choice only when they want to have one.

2. Pragmatic and inclusive

We are realists. We know that a mobile operating system is nothing without apps. And we know that we, as a community, cannot provide applications for everything that users may want an app for. We aim to provide our own apps for the most common tasks (e.g. browsing the web, viewing common file types, email, calendaring, instant messaging, …) in order to create the most seamless experience possible across them, but the more specialized a task or service, the less likely we’ll be able to provide an app for it ourselves.

For that reason (and because we believe that choice is a good thing), Plasma Mobile won’t exclude any application as long as it’s technically possible to run it (and it’s not malware), whether it was written for Plasma Mobile, Android, Ubuntu Touch, desktop Linux, Sailfish or whatever. If it can be made to run on Plasma Mobile, it’s welcome there!

Pragmatism also means that we won’t keep users from giving up part of the control the system offers them, for example by running proprietary applications or accessing privacy-invading services. We will strive to keep such applications as sandboxed as possible, but we won’t stop users from doing things with their device that we would not do ourselves.

3. Seamless cross-device experience

When using Plasma on multiple devices, we want those devices to feel truly integrated. KDE Connect already offers impressive integration between a Linux desktop and an Android device, but with a platform that we fully control, we can go even further: users could fully control their phone from their desktop, for example, or tethering could become a one-click experience. Your selection of applications or certain settings could be synchronized between your desktop or tablet and your phone (admittedly not a new idea; other mobile OSes can do that already), or a desktop application could offer to automatically install the corresponding mobile application on your Plasma Mobile devices (for example when an office suite offers a mobile “companion” for controlling presentations).

“Seamless experience across multiple devices” also refers to the same application binary offering an optimized experience on different form factors through interaction-shifting components (user interface elements which change their look and behavior depending on the type of device) and adapted QML files. For example, while contextual actions are offered using a side-drawer or “slide to reveal” controls on a touch device, the good ol’ right mouse button is much more convenient when using a mouse. Another example is scrolling: On a touch-based device flicking is the best way to scroll, whereas with a mouse, the combination of scroll wheel and a traditional scroll bar still works best. The general layout of content or the navigation structure, on the other hand, can often be kept the same across devices, allowing the user to transfer their knowledge from one type of device to the next.
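
To make the idea of interaction-shifting a bit more concrete, here is a tiny, purely illustrative Python sketch. None of the names in it exist in Plasma Mobile or its frameworks; it only models the pattern: the contextual actions are defined once, and only their presentation changes with the form factor. In Plasma Mobile itself, this shifting is done by the QML components described above.

    # Purely illustrative sketch (not Plasma Mobile or KDE framework code, all
    # names are made up): one set of contextual actions, two presentations.

    from enum import Enum, auto


    class FormFactor(Enum):
        DESKTOP = auto()  # mouse and keyboard
        TOUCH = auto()    # phone or tablet


    class ContextualActions:
        """An interaction-shifting component: same actions, different look."""

        def __init__(self, labels):
            self.labels = list(labels)

        def present(self, form_factor):
            if form_factor is FormFactor.TOUCH:
                # Touch device: the actions live in a slide-in drawer.
                return "drawer: " + ", ".join(self.labels)
            # Desktop: the very same actions appear in a right-click menu.
            return "context menu: " + ", ".join(self.labels)


    if __name__ == "__main__":
        actions = ContextualActions(["Share", "Delete", "Rename"])
        print(actions.present(FormFactor.TOUCH))    # drawer: Share, Delete, Rename
        print(actions.present(FormFactor.DESKTOP))  # context menu: Share, Delete, Rename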

4. Openness

Yes, we do realize that Plasma Mobile is not the only open source mobile shell. The Android Open Source Project is, as the name implies, open source, yet most of its development happens behind closed doors at Google. That means that even though anybody can do whatever they want with the end result, people outside Google cannot freely contribute to it or influence any decisions. Sailfish OS has a fully open base, but its user interface layer is still proprietary. Tizen and Ubuntu Touch are perhaps more open, but still mostly in the hands of commercial entities.

Plasma Mobile wants to be different. Even though Blue Systems initiated its development, it is explicitly not a Blue Systems product (otherwise it would probably be called Netrunner Mobile). Blue Systems will, of course, only contribute their resources to efforts that are in line with their goals for Plasma Mobile, but they will not keep anyone from working towards their personal goals for Plasma Mobile in their own time.

Plasma Mobile was, is and always will be a community project. All of its development happens in KDE repositories where everyone with a KDE developer account has write access, and all commits can be reviewed by the community. All of its components are Free Software.

An equally important aspect of Plasma Mobile is that it builds on open standards. Android is Linux, but one only realizes how important the “GNU” part of “GNU/Linux” is to the actual experience of a regular GNU/Linux distribution once it isn’t there. Other mobile operating systems are closer to the stack we’re used to, but still diverge from standards that are, or will be, well established in the desktop world, such as the Wayland display server protocol. Plasma Mobile aims to make it as easy as possible for developers who have previously worked in the desktop area to transfer their knowledge (and code).

Concluding remarks

So this is what Plasma Mobile aims for. Whenever we make important decisions, we will check whether they are in line with the vision. People are of course free to spend their time on things which do not necessarily get us closer to these goals, but we would not allow things which work against them.

Now, I’d love to hear whether these goals are in line with what you’d expect from Plasma Mobile, and what we might have missed.

 
