Behind the scenes of the KDE Phone HIG – Part Two: The HIG creation process

Plasma Mobile HIG Process

As I wrote in part one of this series, we are currently creating Human Interface Guidelines (HIG) for phone user interfaces for KDE applications.

In this part, I want to tell you a bit more about the process of creating these guidelines. This is roughly the process we’re planning to follow from the first idea brainstorming to a finished guideline:

Step 1: Identify which guidelines are needed

First of all, we have to find out where guidelines are needed. There are different sources for this. One source is the already existing, desktop-specific KDE HIG. There we can identify the cases where what we suggest for desktop applications can’t be transferred 1:1 to phone user interfaces, and then write the phone-specific guidelines accordingly.

Another source is of course the HIG users, i.e. mostly developers and interaction designers. From time to time, we explicitly ask them where they need guidelines the most, or they come to us telling us what they need. A third source is other phone HIGs, like for example the Material Design Guidelines. They’ve been written based on past experience with what mobile app designers and developers need, so it makes sense to check out what they define, as it’s likely our designers and developers will need the same information.

Step 2: Brainstorm

The extent of this step depends on the specific topic. Things like defining how checkboxes should be used involve more “dull” work and less creativity, whereas more innovative parts like navigation patterns involve more creative ideation. Our brainstorming usually happens in a Telegram group we have created for that purpose. There everyone can present their ideas, and we discuss them and weigh the pros and cons.

Then we have to think about edge cases to prevent bad surprises later on. Once we think our ideas are solid, we can go to the next step.

Step 3: First draft

Now it is time to write a first draft of the HIG. If it is an addition to an existing HIG page which just details phone-specific guidelines, the existing page has to be separated into global and desktop-specific parts and then the phone-specific parts are added. This is mostly work on the details. If the HIG is completely phone-specific, it has to be created from scratch. The challenge is to find the sweet spot of “as long as necessary, as short as possible”. Too many details, and it risks becoming too long for people to want to read it. Too few details, and we risk a lack of consistency between applications because everyone interprets it differently. A good HIG of course also has to be understandable and to feel useful for its readers.

Step 4a: Discussion on the mailing list

The draft is then sent to the KDE Guidelines mailing list (you’re invited to subscribe to it here if you’re interested) where everyone interested can give feedback. Particularly (but not only) developers are invited to give feedback on technical feasibility here. In this step, the HIG is iterated until no more suggestions for improvements are coming in.

Step 4b: Creating a library implementation of the HIG

If the HIG describes a specific part of a user interface (which most of them do, except for very generic ones such as wording or icon usage), a developer creates a library implementation (for example a shared QtQuick Component) which allows developers to easily implement the HIG without starting from scratch. This happens ideally in parallel with the discussion on the mailing list. Creating a library implementation is very important because the HIGs are much more likely to be followed correctly – and much more likely to be followed at all – if following them makes a developer’s life easier, not harder.

The library implementation also serves as a first test case for both the clarity of the HIG and the technical feasibility of implementing it, often yielding valuable feedback from the developer creating it.

Step 5: Finalizing and publishing the HIG

When all the designers and developers involved are happy with the state of the HIG, it is time to put it on the HIG Wiki. This is the point where the HIG is ready for use by designers and developers.

HIGs as living documents

That does not mean it’s set in stone yet, however. While the API of the components should be kept as stable as possible, the HIG itself has only existed in a vacuum up to that point. It still has to see implementation in real application user interfaces, where we can test prototypes with real users and see how our ideas which were sound in theory hold up in practice. Based on the feedback from these tests and experience with implementing it in general, the HIG may still be refined later on.

On the one hand, we don’t want to force developers to continuously adapt their user interfaces to ever-changing HIGs, but on the other hand, setting a HIG in stone and refusing to update it with new knowledge would prevent us from providing users the best experience we can.

The next (and probably last) part of the series will describe the application of the HIGs in the design of a KDE application’s phone user interface. You will have to be a bit patient for that one, though, because we’re only just starting the design work which the post is going to talk about.

Posted in KDE

Behind the scenes of the KDE Phone HIG – Part One: Basic Assumptions

Example of column-based navigation on different screen sizes

The Task

Right after Plasma Mobile was first announced at Akademy 2015, the team put me in charge of the creation of a crucial pillar of the ecosystem: The Human Interface Guidelines for KDE phone applications. This means guidelines for “applications made by KDE for phones”, not “applications made for KDE Phone”, because

  1. There is no “KDE Phone”, just the phone UI of Plasma Mobile
  2. KDE applications for phones do not only run on Plasma Mobile, but may also run on e.g. Android, Ubuntu Touch or Sailfish OS

Of course third parties who design applications for Plasma Mobile should follow these guidelines as well.

Fortunately, I wasn’t alone with the herculean task of writing the HIG. Alex L. and Andrea del Sarto, two designers who started posting amazing Plasma Mobile mockups on Google+ right after it was announced, joined the effort right away.

We met in a Telegram group to discuss our ideas, and I started putting whatever we agreed upon on the Wiki, accompanied by mockups from Alex.

A HIG is not written in a day, though. The first step is thinking about some basic assumptions from which to derive basic interaction principles that some or all KDE phone applications will likely have in common. These assumptions and basic principles take the longest to come up with, because if they are to be used across most or all of KDE’s apps, they have to be very robust and applicable very generally.

So here are some of the assumptions we made so far and user interface patterns we derived from them:

1. Convergence is the goal, but each device gets its optimized UI

Calligra Gemini as an example for a converged application

We live in a world where barriers between different device classes become more and more blurred. Tablets become laptops by attaching hardware keyboards and pointing devices, smartphones can be put into docking stations that turn them into desktop PCs, a smartphone can turn a TV into a mediacenter.

One approach to deal with such a converging world is to simply scale user interfaces to fit the screen, but this is far from ideal. A smartphone isn’t just a laptop with a smaller screen where the mouse is replaced by your finger, and even tablets and smartphones are used differently.

A device used with keyboard and mouse can use interaction methods like mouse-over, right-/middle-click or keyboard shortcuts, which are not available on a touchscreen. A touchscreen, on the other hand, can use all kinds of gestures which are unavailable with a mouse.

The input method is not the only important difference, however. For another example, a phone is mostly used in portrait mode and often interacted with using only one hand (with the other hand either holding the phone or not touching it at all), whereas a tablet is mostly used in landscape mode with both hands available. These differences may seem superficial at first, but if you want to create user interfaces that feel truly comfortable, you can’t just ignore them and simply scale your UI to the target screen size and pixel density.

One HIG to rule them all

Therefore, KDE approaches convergence by designing optimized user interfaces for each device class the application targets, often even with different task priorities: As you can see in the screenshots above, Calligra Gemini Stage focuses on creation or fundamental editing of presentations in desktop mode, whereas tablet mode is focused on interactive presentations and minor touch-ups.

Nevertheless, we want designers/developers to keep all target device classes in mind when designing a UI, which is why I decided against completely separating phone, tablet and desktop Human Interface Guidelines. Instead, the guidelines for the different form factors are currently just different sections on the same page, with guidelines that apply to all device classes at the beginning. We might adjust that organization in the future, but there will always be just one central HIG document which only branches out where the guidelines for different device classes actually diverge.

2. Phones are more for communication and content consumption than for content creation

As already seen in the example of Calligra Gemini, different devices have their strengths for different tasks. Phones are great for checking news or emails on the go, but writing a book or even just creating a presentation on a phone isn’t much fun. Following this assumption, the phone HIG emphasizes patterns for browsing and viewing content over those for creating it.

3. Users prefer to interact with the center of the screen

A typical way of holding a phone, making the center of the screen the easiest to reach

As research shows, smartphones are predominantly held upright (portrait mode) and in one hand, simply because that is the most convenient way to use them, and it leaves one hand free, e.g. to hold onto something while standing in a bus.

If we want an application to be used conveniently with one hand, we have to make sure that everything that is needed often can be reached with that hand’s thumb without having to reposition the hand (although the aforementioned research also shows that people do switch how they hold their phone whenever needed).

Regardless of the way users hold their phone, research shows they generally are more precise in interacting with – and prefer to interact with – the center of the screen.

Based on these findings, the HIG recommends interaction patterns which do not require users to reach for the far ends of the screen (mostly the top), but give them on-demand controls near the center of the screen.

4. Space is limited, and content is king.

Although smartphone screens have been getting bigger and bigger over time, they are still very small compared to laptop or desktop screens. Since neither readability nor comfortable interaction should be sacrificed by just making things smaller, we have to find other ways to save space compared to desktop or tablet user interfaces.

In combination with assumption 2) that most phone applications are used more for viewing than for manipulating content, this leads us to the approach that the limited screen space should be mostly reserved for content, with controls only exposed when needed.

5. Content often has a hierarchical structure

Column-based navigation according to the HIG

We have found that often the content to be browsed through has an inherent hierarchical structure. Whether it’s files in a folder hierarchy, emails in (sub)folders in accounts, news items in feeds (optionally) in folders or tracks in albums by artists, one often has several levels of hierarchy to browse through to get to the object one wants to see.

And since navigating through that hierarchy is one of the tasks users perform most often with these applications, optimizing the interaction for that task makes using the application more efficient and pleasant overall.
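
To make the pattern concrete, here is a small, purely illustrative Python model of column-based drill-down navigation (the class, the method names, and the example library are all mine, not taken from any KDE code): each level of the hierarchy is a column, opening an item pushes a new column, and going back pops one.

```python
class ColumnNavigator:
    """Toy model of column-based navigation: a stack of open hierarchy levels."""

    def __init__(self, root):
        # Each entry is a (label, children) pair; the stack holds the open columns.
        self.columns = [root]

    def current(self):
        # The rightmost column is the one the user is currently looking at.
        return self.columns[-1]

    def open_item(self, name):
        # Drill down: push the selected item's children as a new column.
        label, children = self.current()
        self.columns.append((name, children[name]))

    def go_back(self):
        # Pop one level, but never remove the root column.
        if len(self.columns) > 1:
            self.columns.pop()


# A music library: tracks in albums by artists (three hierarchy levels).
library = ("Artists", {
    "Pink Floyd": {"The Wall": {}, "Animals": {}},
    "Daft Punk": {"Discovery": {}},
})

nav = ColumnNavigator(library)
nav.open_item("Pink Floyd")   # second column: albums by Pink Floyd
nav.open_item("Animals")      # third column: tracks on "Animals"
nav.go_back()                 # back to the album list
```

On a phone only the topmost column would be visible at a time, while on a wider screen several columns could be shown side by side, which is exactly the kind of divergence the form-factor-specific HIG sections describe.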

What’s next?

With these assumptions in mind, we set out to create the actual Human Interface Guidelines. In part 2 of this series, I will talk about the process by which we write the HIGs, while Marco Martin is working on creating components to help developers create HIG-compliant applications easily.

Now I’d like to know, dear reader: Do you agree with our assumptions, or do you disagree with some of them? I’d love to hear your opinion in the comments!

Posted in KDE

This is what we do if someone offers us some constructive criticism

A whiteboard at the "Phoronix BoF" at Akademy

In July, Eric Griffith wrote an article at Phoronix where he compared Plasma and KDE Applications to GNOME and detailed where they do things better than us and what he finds annoying about our software. We don’t react to angry ranting, but Eric took the time to point out in detail where exactly we could do better, and I found that in many regards, he had a point.

We in KDE don’t ignore constructive feedback, so at Akademy, we set out to find solutions to the issues he pointed out. In order to maximize the reach of our efforts’ documentation, I decided to write a two-part series about it over at Linux Veda, a “web-magazine to share and spread knowledge about Linux and Open Source technologies” which has always been very interested in – and generally supportive of – KDE.

So far I’ve finished the first part, which covers the login screen, KWallet and media player applications.

Posted in KDE

How to install ArangoDB on Arch Linux or Manjaro Linux

Arango on Manjaro

After J Patrick Davenport told me all about the advantages of ArangoDB during my Interview with him, it’s now time to get my hands dirty with the software.

Yes, I know, a rolling-release distribution such as Arch Linux isn’t exactly a common choice for production servers, but

  1. Arch-based distros are great for trying things out because it’s so easy to get the hottest and newest stuff on them
  2. I run the Arch-based Manjaro Linux on all my computers and I don’t need a dedicated server just to get familiar with a new piece of software

and therefore I’m just going to install ArangoDB on my Manjaro machine first. Ha! And while I’m at it, I’ll document what I did so you can easily repeat the steps.

While ArangoDB offers packages for all major operating systems (even including Raspbian, which is awesomely geeky) in their download section, there isn’t one for Arch. This makes sense, because a) Arch isn’t commonly used on servers and b) the common way to get third-party software on Arch isn’t pre-built packages, but the Arch User Repository (AUR), which is basically just a site that hosts build scripts for users to compile software automatically on their own machines.

The downside, of course, is that the burden of compiling is transferred to the user, but the advantage of that system is that it makes maintaining an AUR package as easy as writing a build script and updating it when new versions of the software come out or the source code URL changes. That – and the fact that the Arch community is just hyperactive – are probably the reasons why you have to try really hard to find a piece of software for which no AUR package exists. And since Manjaro Linux is fully compatible with Arch, it is also fully compatible with pretty much any AUR package (the only exception would be if the AUR package depended on other packages which have only just arrived in the Arch repos and therefore still needed some time to arrive in Manjaro).

So, if you run an Arch-based distro and have yaourt installed, normally what you’d do in order to get ArangoDB would be

yaourt -S arangodb

if you wanted the latest stable release or

yaourt -S arangodb-git

if you’d like to compile directly from the development repository (for example if you’d like to become an ArangoDB contributor).

I say “normally” because unfortunately, at the time of writing, the stable AUR package is out of date (it is at version 2.6.0 while the latest stable release is 2.6.2). The problem is that 2.6.0 does not compile correctly with current GCC, so building the current version of the AUR package will exit with an error at some point. I have already commented on the AUR package asking for it to be updated, but until that happens, you can use my patched PKGBUILD file from my Dropbox. Just download the file, put it somewhere with enough space (I haven’t measured it myself, but commenters say the build takes up to 3.2 GB of space at its peak; don’t worry, the resulting package takes less than 200 MB), then run

makepkg -i

in that directory to automatically download the source files and build ArangoDB.

UPDATE: By now, the AUR package has been updated to version 2.6.2, so you can now use it directly.

Whichever route you take, both yaourt and makepkg will automatically install any unmet dependencies (on my machine I had everything except Go), compile ArangoDB and offer to install it.

The post-install script will ask you to run a database upgrade if you have an existing ArangoDB database, and to enable and start the ArangoDB systemd service. If you’re installing ArangoDB for the first time, you can safely ignore the first instruction.

Since I’ve installed ArangoDB on my machine only to take my first steps with it and not to run an actual server, I didn’t enable the service, so it won’t auto-start with every boot. I just started it manually with

sudo systemctl start arangodb.service

After that, to see if ArangoDB is running properly, point your browser to http://localhost:8529 (ArangoDB’s default endpoint) and you should be greeted by its web interface.

The ArangoDB web interface

However, after reboot, ArangoDB wouldn’t start anymore on my system. “systemctl status” told me that it had exited with “FATAL cannot write pid-file ‘/var/run/arangodb/'”. If you’re running into the same problem, create a file “arangodb.conf” in /lib/tmpfiles.d/ with the following content:

d /var/run/arangodb 0755 arangodb arangodb -

This tells systemd to create the folder at boot, owned by the arangodb user, which can write to it (everyone else can only read it).

To create the needed directory immediately instead of waiting for the next reboot, run

sudo systemd-tmpfiles --create arangodb.conf

after which

sudo systemctl start arangodb.service

should work again.

Congratulations, you should now have a working ArangoDB installation!

As the next step, if you haven’t already done so, I’d recommend checking out the First Steps section of the ArangoDB documentation to make yourself familiar with the database. In the chapter Collections you will learn how to create your first collection with some documents in it and query for them. This also shows you whether your installation really works.
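
Under the hood, both the web interface and the documented shell tools talk to ArangoDB’s HTTP API. As a purely illustrative sketch (the collection name, the query, and the helper functions are mine; the endpoint paths come from ArangoDB’s HTTP API), here is what those two First-Steps operations look like as REST requests, built with nothing but the Python standard library:

```python
import json

BASE = "http://localhost:8529"  # ArangoDB's default endpoint

def create_collection_request(name):
    # POST /_api/collection with a name in the body creates a new collection.
    return ("POST", BASE + "/_api/collection", json.dumps({"name": name}))

def query_request(aql):
    # POST /_api/cursor executes an AQL query and returns a result cursor.
    return ("POST", BASE + "/_api/cursor", json.dumps({"query": aql}))

# Build (but don't yet send) a request that would create a "songs" collection.
method, url, body = create_collection_request("songs")
```

If the server from the steps above is running, sending these requests (e.g. with urllib.request) should return JSON responses describing the new collection and the query results, which is another quick way to confirm the installation works.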

If you’d like to try out ArangoDB on an Arch-based distro, I hope you will find this little how-to helpful. If there is anything unclear or not working, please comment and I’ll be glad to help you!

Posted in ArangoDB

Joining the press – Which topic would you like to read about?

When I saw an ad on Linux Veda saying that they are looking for new contributors to their site, I thought “Hey, why shouldn’t I write for them?”. Linux Veda (formerly muktware) is a site that offers Free Software news, how-tos, opinions, reviews and interviews. Since its founder Swapnil Bhartiya is personally a big KDE fan, the site has a track record of covering our software extensively in its news and reviews, and has already worked with us in the past to make sure their articles about our software or community were factually correct (and yes, it was only fact-checking, we never redacted their articles or anything).

Therefore, I thought that a closer collaboration with Linux Veda could be mutually beneficial: Getting exclusive insights directly from a core KDE contributor could give their popularity an additional boost, while my articles could get an extended audience including people who are currently interested in Linux and FOSS, but not necessarily too much interested in KDE yet.

I asked Swapnil if I could write for him. He said it would be an honor to work with me, which I must admit made me feel a little flattered. So I joined Linux Veda as a freelance contributor.

My first article actually isn’t about anything KDE-related, but a how-to for getting 1080p videos to work in Youtube’s HTML5 player in Firefox on Linux, mainly because I had just explained it to someone and felt it might benefit others as well if I wrote it up.

In the future you will mainly read articles about KDE-related topics from me there. Since I’m not sure which topics people would be most interested in, I thought I’d ask you, my dear readers. You can choose which of the three topics that came to my mind I should write about, or add your own ideas. I’m excited to see which topic will win!

Of course this doesn’t mean I won’t write anything in my blog here anymore. I’ll decide on a case-by-case basis if an article would make more sense here or over at Linux Veda. I hope you’ll find my articles there interesting and also read some of the other things they have on offer, you’ll find many well-written and interesting articles there!

Posted in KDE

In Free Software, it’s okay to be imperfect, as long as you’re open and honest about it


Saying “sorry” and really meaning it goes a long way towards maintaining or rebuilding trust, especially in the Free Software community

Humans are fallible. We all make mistakes sometimes. Most of these mistakes affect only ourselves or those close to us, but sometimes we make mistakes that affect more people. Work in a Free Software project is one of those areas where a mistake can easily affect many people in a community, or thousands of users.

The same is true for organizations in the proprietary software or service world, but it feels to me that two things play an amplified role in the Free Software world: openness and honesty. While users/customers of proprietary software or services also do appreciate if a vendor openly admits a mistake, honestly apologizes for it and promises to make up for it, the default expectation is rather that they don’t. Or if they do, that it’s a calculated PR effort, done because some suits crunched some numbers and decided it’s better for the bottom line.

In the FOSS world, people seem more likely to really see the person, not just the community they belong to. And from a person, they expect that they really and honestly feel sorry if they made a mistake. And they seem to be more forgiving if a FOSS contributor admits a mistake and apologizes than if a proprietary software company does. It’s not only individuals, though. It seems like even companies in the FOSS field are expected to be more open and honest than those in the proprietary software field.

Exhibit 1: InstallerGate

A case which many readers on PlanetKDE – who are likely to also be interested in Qt – are probably familiar with is the issue of the Qt online installer forcing the creation of a Qt Account in order to continue the installation (since it seems to be hip to call everything that is even the slightest bit scandalous “[something]gate” these days, let’s call it InstallerGate). As the comments on the blog post announcing the change may hint at, the reactions by Qt users and contributors to it were not pretty. At all. People claimed that Qt is becoming “less free” or that “pushy greed is determined to ruin Qt and turn decades of efforts into a waste”. And those are the comparably harmless comments.

And it wasn’t just on the blog post itself, people from all corners of the internet started attacking The Qt Company. Of course reverting the change and making creation of a Qt Account optional was the obvious reaction, but The Qt Company did not stop there – a very wise decision. In the blog post announcing the change, Tuukka Turunen, Qt’s R&D Director, after thanking the community for their feedback, said “We clearly ill-calculated how asking for a Qt Account with the online installer would make our users feel. A mistake. Sincere apologies.”. That does not sound like marketing-bullshit, that sounds like someone who has realized that he (or whoever made that decision) screwed up. Later in the post he thanks the community again: “So, thanks again for your feedback, discussions and for guiding us in this tough endeavor.” He admits that The Qt Company needs guidance through their endeavors by the community. Pretty strong statement.

And the community really appreciated that honesty, expressing it clearly in comments like “Thank you! Apart from the technical merits of Qt, it’s also nice they hear their users and developers (I’ll however create a Qt Account for myself! ;-))”. Of course the actual decision is the most important part, but open and honest communication about it goes a much longer way to maintain or rebuild trust than just reverting a change.

Exhibit 2: ShowDesktopGate

Of course community Free Software contributors make mistakes, too. Like me, for example. This March, I was asked to provide user experience input on a review request titled “Show Desktop feature: refurbished”. It was a quite big change, with quite a few changes in the interaction with the feature to comment on. One question was what to do with the panel when showing the desktop.

A big issue there is that if one clicks on a task in a task manager while in Show Desktop mode, there are different things that could happen, each with its own drawbacks: The appearing window could “break” the Show Desktop mode, returning to the previous state before it was activated. This works, but may confuse users because clicking one task suddenly brings back all windows that were visible before Show Desktop was activated. Alternatively, only the window which was clicked on could be shown. That would cause the least surprise initially, but would leave things in a poorly defined state (What happens if one leaves Show Desktop now? What happens if another task is clicked? Or the current one is clicked again? Which windows are shown or hidden in which modes?), which is prone to cause confusion as well as bugs. Simply hiding the window belonging to the task behind the desktop like all other windows would be logical from a technical standpoint, but very confusing for the user (“I clicked on that task, why can’t I see it?”).

With that in mind, this is what I replied to the question: “Panels: I’m a bit torn here. The thing is that we have not defined what the dashboard is supposed to be used for. If it is only for glancing at or quickly interacting with desktop widgets, then panels would only be distracting. If it is for interacting with the workspace in general, then of course panels should be accessible. One argument against showing panels would be that clicking on a window in the task switcher would break the mode (the window should not go below the dashboard, as that would be inconsistent with switching via alt-tab). Still, I see valid arguments for both alternatives, so I’ll leave that to you. Just make sure that either panels are fully visible and can be interacted with, or are hidden/dimmed/faded and cannot be interacted with, as any mix of the two would be confusing.”

I thought that since in Show Desktop mode the focus should be – as the name suggests – on the desktop, we could avoid the issue that the task manager can cause by just not showing the panel at all. Problem solved, right? What I had completely missed, though (since my mind was focused on triggering Show Desktop mode via alt-tab), is that another common way to activate Show Desktop mode is by a Plasmoid. In a panel. The unfortunate part is that now as soon as one clicks that Plasmoid in Plasma 5.3, the panel is hidden, and the Plasmoid with it. Boom! Impossible to leave the state in the same way it was activated. Confusion! Panic! Usability bug.

I screwed up. Yes, I could hide behind the fact that I didn’t explicitly advise for hiding the panel but left the decision to the devs, but it isn’t their job to detect usability bugs in the making, it’s mine. And they trust me on that (which is great!), so if I say that something is okay, developers think they can safely assume that I’ve thought it through and there are indeed no issues. Only that this time I didn’t.

Of course when people noticed the problem, I was informed immediately and agreed with the developers that the panel should not be hidden in Show Desktop mode after all. This problem will be fixed with the first bug fix update of Plasma 5.3. However, I still have to sincerely apologize for the confusion this change will likely cause to users in its short lifetime, and to the developers for not living up to the trust they put in my judgement in this particular case. I will think things through more thoroughly before the next time I comment on a review request as complex as this one.

In the meantime: Can you be mad at someone who gives you this?
Cute puppy
I sure hope not.

Openness and honesty as a criterion for choosing a FOSS product

Openly admitting mistakes and apologizing for them isn’t the only occasion where openness in communication is important in the FOSS world, though. As I learned from my interview with a user and contributor of the open source NoSQL database ArangoDB, the team’s – in his opinion (even compared to other open source NoSQL databases) – very open communication actually was one of the main reasons why he chose it for his project. He sees it as an important plus for them that “They are open about their shortcomings” because “If I’m going to build a product on something, I want a relationship with the makers and users of that something. In every relationship I want honesty. Openness by a product maker is as close to honesty as I can get.”

So even if you haven’t screwed up yet, it is beneficial to be open about what you do (including your product’s shortcomings) right from the start of an open source project, because that builds confidence and trust. Trust is important for end users, but probably even more important for people who build their products on your software.

Now, dear reader, I’d like to hear your take on this: How important is openness and honesty of a product’s creators for you? Do you, too, feel that it plays an even more important role in the FOSS world than in the proprietary software world, or do you think it’s the same? Let me know in the comments!

Disclosure: No, I did not get sponsored for writing this article. I included the reference to the interview because I think it fits the theme that I observed with the Qt installer and FOSS projects in general.

Posted in ArangoDB, KDE

Meet Patrick, ArangoDB User and Community Contributor

As announced in my previous blog post, I’d like to tell the world a bit about the multi-model NoSQL database ArangoDB, and since my profession is to focus on users, I thought that the best way to start would be by talking to users. So I looked around in ArangoDB’s support Google Group and its IRC channel #arangodb for an active user who would like to do an interview with me. Lucky for me, the first user I happened to talk to on IRC was not only a very enthusiastic ArangoDB user, but also an active community contributor to the project (though he is in no way affiliated with ArangoDB GmbH, the company behind the software).

So let me introduce to you J Patrick Davenport, a freelance Solutions Architect from Palatka, FL, USA, who… nevermind, let’s let him do the talking now 🙂

Thomas:  Hi Patrick, to start off, could you tell me a little about yourself (your background, the job in which you’re using ArangoDB, …)?
Patrick: About me: I’m a Solutions Architect, working for my own company, DeusDat Solutions. I’ve been working for over 8 years as a software Bob Vila (This Old House, American reference). My clients call me in to renovate code bases that are near collapse. I’ve improved multiple Fortune 500 companies’ core business applications. Recently I designed half of the Medicare Fraud Detection System for the US Government. That was based on Hadoop.

Thomas: Ah, interesting! So, what’s the story of you and Arango?
Patrick: Given the above, it’s implied that I’ve worked in many a corporate office. With the poor chairs and fluorescent lighting comes corporate tool sets. Going off on my own, I decided to learn a whole new toolset. At the same time I decided to write a book on NoSQL and NewSQL. Those two dreams coalesced in me wanting to support ArangoDB. I didn’t like MongoDB. I thought that was too hyped and that they believed their own press. Writing the book gave me a whirlwind tour of the NoSQL world. What I wanted was a document store, with geo-index and a healthy feel of youthful vitality. Turns out there aren’t too many of those around. ArangoDB was the only system that provided all three. RethinkDB was close, but lacked the geo-indexes.
As I learned more about it, I saw that it was wide open for tools and drivers. There was one Clojure driver. That driver was a simple attempt, but violated many Clojure principles, like avoiding global state. So I wrote a driver called travesedo. I needed some Hadoop tools to process a large CSV data store into documents, so I wrote Guacaphant (yes, it’s in Java). I needed to migrate and deploy database changes. Clojure has a community API for that called Ragtime. I wrote Waller just this last week to use with ArangoDB.

What I wanted was a document store, with geo-index and a healthy feel of youthful vitality. Turns out there aren’t too many of those around. ArangoDB was the only system that provided all three.

Thomas: Thanks! I’d like to dig a bit deeper into some points, if that’s okay. So where did you first learn about ArangoDB?
Patrick: I first learned about Arango while researching Document Stores for my book. At the same time, I knew that I wanted to use a document store with geo-indexes for personal projects. I found ArangoDB in an overview site.
Now I’ve got a few projects that I plan to back with ArangoDB. One focuses on enabling cities to better respond to tourists. As a transplanted Floridian, I understand how much of the state’s economy and even my town depends on the flow of tourist dollars. When I look around at the economic development going on, I see that even large cities have yet to app-ify themselves. My goal is to change this. Geo-location is a huge part of that. I need a database that supports it. Another project focuses on home management. Arango’s multi-model structure allows me to naturally relate home processes via a graph structure. While Arango’s graph infrastructure is presently simple, I have faith that the development team will expand it into something more powerful, distributed, that’s able to take on Neo4J.
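Side note from me: the geo queries Patrick has in mind map directly onto AQL. Here is a minimal sketch, assuming a hypothetical places collection with a geo index (the collection name and coordinates are made up for illustration):

```aql
// The ten indexed places closest to a given latitude/longitude,
// using AQL's NEAR() geo function:
FOR place IN NEAR(places, 29.65, -81.64, 10)
  RETURN place
```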

Thomas: Thanks! You said you were looking for a DB with “a healthy feel of youthful vitality”. What about ArangoDB made you feel that it has more of that than e.g. MongoDB?
Patrick: One thing about MongoDB is that it’s slow to change, or to react to valid negative criticism. When mmap’ing issues were pointed out, MongoDB’s response was smoke and mirrors. When the global locks slowed things down, they waved their hands. When the benchmark numbers were shown to be inflated due to default drivers not waiting for confirmation of saving, MongoDB folk talked-talked-talked, but said nothing.
ArangoDB is different. They are open about their shortcomings. Mmap is clearly written out as a design constraint. They talk about how datasets have to be mostly in memory, for at least the active pages. Questions on the Google Group are answered quickly, by core team members. Same is true on Stack Overflow. They also actively promote competition in the tooling space. When I first brought up the idea of creating a competing driver for Clojure, I was supported. Some made sure I did my homework before doing that, but I had support.

ArangoDB is different. They are open about their shortcomings. Mmap is clearly written out as a design constraint. They talk about how datasets have to be mostly in memory, for at least the active pages. Questions on the Google Group are answered quickly, by core team members. Same is true on Stack Overflow.

Thomas: Interesting! Could you tell me why it is important to you that the company behind a database is as open as ArangoDB GmbH?
Patrick: If I’m going to build a product on something, I want a relationship with the makers and users of that something. In every relationship I want honesty. Openness by a product maker is as close to honesty as I can get. I want a team that clearly says what they’re going to do, keeps the community apprised of their efforts (including failures) and requests dialogue. I believe that if a company does this, it can stand the test of time.
ArangoDB is not backed by Oracle. It’s not backed by IBM. I need to have faith in their ability to be there tomorrow with a good product. Openness is a sign that I’ve picked the right horse.

Thomas: Thanks! So I take it from your previous answer that from your experience, ArangoDB does better in that regard than most of their competition?
Patrick: Yep. I haven’t seen that type of life in any of the established NoSQLs. Cassandra feels tired. Voldemort feels vanquished. MongoDB seems stuck (I grant that they did release an improved secondary storage engine about 3 months ago). Rethink doesn’t have geo-indexes.

Thomas: Ok. So you said you’re using or are going to use document stores and graphs in your projects. Have you used key-value stores in ArangoDB as well, or are you going to use them in one of the projects you’re planning?
Patrick: I don’t presently have a need for the straight key-value stuff (slight tangent, arango-session for Ring does this against a simple collection). I find structured data a better fit for my modeling.

Thomas: Ah, okay. So what about other features of ArangoDB: Are you using joins? Transactions?
Patrick: Interestingly, graphs seem to limit my need for joins. They are implicit joins. Say I want every product that I own. While I could model that as a Product with an attribute of “owner”, I can share the product definition in a products collection with everyone via the relationship. They are also implicitly transactional. I can’t modify relationships and get into an unstable state. Transactions aren’t a huge sell for me right now, but since I do contracting work, transactions will be a huge sell to future clients. ArangoDB is the only Document Store that I’ve seen that supports them.
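To illustrate the “implicit join” Patrick describes: with ownership modeled as edges, fetching a user’s products is a single traversal. A rough sketch in current AQL traversal syntax, with invented names (a products document collection and an owns edge collection):

```aql
// All products connected to this user by an "owns" edge,
// instead of filtering a shared collection on an "owner" attribute:
FOR product IN 1..1 OUTBOUND 'users/patrick' owns
  RETURN product
```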

Thomas: Could you expand on that a bit on which future clients you expect transactions will be a huge sell for?
Patrick: My goal is to get ArangoDB in the enterprise. I don’t know how the market looks in the EU right now, but NoSQL in the Fortune 1000 is pretty small outside of some niche uses like logging. Document stores provide an easy migration path into NoSQL, especially if you’re in a dynamic, weakly typed language like Clojure. Everything in JSON == map. Given this, I think the compelling arguments for Clojure development speed-ups, Arango’s natural modeling (in general and in Clojure) and finally the transaction support should make many pointy haired bosses feel safe and their developers happy ‘cause there is a new toy in town. My understanding is that operations against an individual document are transactional. Since I will probably be doing that (set count to (dec count)), I won’t need bulk modifications.

I think the compelling arguments for Clojure development speed-ups, Arango’s natural modeling (in general and in Clojure) and finally the transaction support should make many pointy haired bosses feel safe and their developers happy ‘cause there is a new toy in town.

Thomas: Have you used ArangoDB with other languages than Clojure (or are you planning to)?
Patrick: I’ve used it with Java for guacaphant, but that’s it. My usage there was 1) incredibly short (just getting a cursor to an AQL query) and 2) incredibly non-idiomatic Java. I didn’t use data classes; I asked Arango to give me the cursor items as maps. So it’s very Clojure/functional.

Thomas: Have you used Foxx yet? If so: What for? If not: Do you see a case where you might use it in the future?
Patrick: I haven’t used Foxx. I’m a bit torn by its existence. I understand that it could be used as a rapid-prototyping API platform, but it seems to have scalability issues. Now I have to have my DB server and application running on the same box? I don’t like it too much. Added to this is the fact that Clojure makes rapid prototyping easy too. All the benefits of the JVM without the deployment hassle.

Thomas: Is there a way in which Foxx could be improved so that it would be beneficial to you?
Patrick: I don’t think so. Perhaps someone using Foxx could come out and say, “We did X with Foxx”. That might start my creative juices flowing. Until then, I don’t see the need for it other than fronting Arango with a really, really thin veneer for CRUD APIs.
But, I like that Arango is trying something. If people take it up, great. I’m terrible at picking trends. If the community doesn’t really use it, as long as Arango pulls back, great. Experimentation is how we learn.

Thomas: Okay, thank you for that open and honest answer!
Patrick: You’re welcome. See, we’re partnering on this interview.
Thomas: Indeed! So now I’d like to learn a bit about your experience with learning and using ArangoDB. Was it easy to set up? Did you find AQL easy to learn? Were your expectations all fulfilled so far, or were some of them disappointed?
Patrick: Getting going with Arango was pretty easy. The documentation is well formatted, and great for starting. That said, I’ve found it hard to learn about advanced concepts like advanced AQL. I ask a lot of questions on the Google Groups about idiomatic AQL. What I’d like is a more in-depth write-up for CRUD web apps, especially around the idea of using AQL for modification. Another point of weakness is that the docs don’t discuss how to deploy in a clustered environment. I know that there is a simple walkthrough on that, but it feels more like a toy networking project than a how-to. I will qualify that sentiment by saying I haven’t looked at the Puppet scripts. They might show a better way.
For a driver implementer, this is a huge gap in the documentation. For example, there is nothing that says a call for the next cursor batch must be directed to the same node the last batch came from. This is important for the driver. Only my driver even attempts to support clustering and replicated instances. But it is really weak on preserving the batch-call requirements (i.e. it doesn’t do that yet). I found out about this by reasoning about how I would implement a distributed system and then questioning the group.
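For context, the batching Patrick refers to happens through ArangoDB’s HTTP cursor API, which drivers wrap roughly like this (a sketch of the two relevant endpoints, not complete requests):

```
POST /_api/cursor        creates a cursor from an AQL query and returns the
                         first batch plus {"id": "...", "hasMore": true}
PUT  /_api/cursor/<id>   fetches the next batch; in a cluster, this call has
                         to reach the node that holds the cursor state
```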

Getting going with Arango was pretty easy. The documentation is well formatted, and great for starting.

Thomas: Have you read the articles about setting up ArangoDB on Google Compute Engine or Digital Ocean (both released this month) yet? If so: Did you find those useful? (they just came to my mind because they’re both about clustered environments)
Patrick: I haven’t.
Thomas: Just for reference: and
Patrick: Thanks

Thomas: You already mentioned the immediate support via the Google Group and Stack Overflow as a plus for ArangoDB. Does that way of getting support work well for you in general, or would you prefer other means of communication to get support?
Patrick: It works well. I’ve tried IRC. Unfortunately there is a time gap. I’m several hours behind the core team, and probably most ArangoDB users. The Group and Stack Overflow make the gap feel less real.
Thomas: Do you see the fact that ArangoDB GmbH currently only has an office in Germany as a disadvantage for the US market?
Patrick: No. It seems like the disadvantage is that they aren’t covered much by American press like TechCrunch. MongoDB was a darling for a while with them.

Thomas: How was your experience writing drivers for ArangoDB (apart from the clustering problem you already mentioned)? Was it easy to get started with it? Was the effort you needed acceptable?
Patrick: The documentation for the HTTP API is pretty good. They show possible inputs and the expected output. Wrapping that in Clojure was easy. I’ve been implementing the features that I need first. So I focused on DB/Collection/Document creation. Since I’m starting to need graphs, I’m working on that now. When I found issues with the documentation not working or being vague, the ArangoDB team corrected it within an hour of me posting on the group (during one of those happy moments when I’m working early, and they are working late).

Thomas: Sounds great! Okay, so the last question I have on my list: Apart from improving the documentation on clustering, what would you like to see the ArangoDB development focus on in the near future?
Patrick: I would like to see their focus split into two performance-enhancing tasks. A) Distributed graphs. I know that they’re on the road map. I really want to see them. It would be fun to have another open source, heavy-duty graph storage to compete with Neo4J. Neo4J doesn’t even really do distributed searches. B) I want arrays and sub-documents to be indexable. I don’t think Arango does this presently. This could make services like ratings based on locality quicker without the use of joins (but hey, Arango has them).

When I found issues with the documentation not working or being vague, the ArangoDB team corrected it within an hour of me posting on the group (during one of those happy moments when I’m working early, and they are working late).

Thomas: Okay, thank you a lot for your input! It was really interesting. Is there anything you feel hasn’t been touched yet, or anything you’d like to add?
Patrick: Nope. Thanks for the opportunity.

The interview was slightly edited in a few places for the sake of improved readability, while carefully keeping the content and tone of the original statements intact. Some comments that were unrelated to ArangoDB were left out.

If you have any experience with ArangoDB so far, does it match Patrick’s experience, or was it different in some way? Is there anything you’d like to know from Patrick which was not covered in this interview?

Let us know in the comments, and maybe Patrick will answer you if he reads it (or I’ll point it out to him).

Full disclosure: I was sponsored by ArangoDB GmbH to write some blog posts to promote ArangoDB. ArangoDB GmbH does not exert any influence on the content of those posts, however, and they were not formally approved by them. Therefore, the content of this post is 100% J. Patrick Davenport’s and mine. EDIT: Just to clarify: I was sponsored for conducting the interview. Patrick did not receive compensation for the interview, in order to allow him to state his honest opinion without fearing negative financial consequences.

This post is licensed CC-BY 4.0

Posted in ArangoDB