This is what we do if someone offers us some constructive criticism

A whiteboard at the "Phoronix BoF" at Akademy

In July, Eric Griffith wrote an article at Phoronix where he compared Plasma and KDE Applications to GNOME and detailed where they do things better than us and what he finds annoying about our software. We don’t react to angry ranting, but Eric took the time to point out in detail where exactly we could do better, and I found that in many regards, he had a point.

We in KDE don’t ignore constructive feedback, so at Akademy, we set out to find solutions to the issues he pointed out. In order to maximize the reach of our efforts’ documentation, I decided to write a two-part series about it over at Linux Veda, a “web-magazine to share and spread knowledge about Linux and Open Source technologies” which has always been very interested in – and generally supportive of – KDE.

So far I’ve finished the first part, which covers the login screen, KWallet and media player applications.

Posted in KDE

How to install ArangoDB on Arch Linux or Manjaro Linux

Arango on Manjaro

After J Patrick Davenport told me all about the advantages of ArangoDB during my Interview with him, it’s now time to get my hands dirty with the software.

Yes, I know, a rolling-release distribution such as Arch Linux isn’t exactly a common choice for production servers, but

  1. Arch-based distros are great for trying things out because it’s so easy to get the hottest and newest stuff on them
  2. I run the Arch-based Manjaro Linux on all my computers and I don’t need a dedicated server just to get familiar with a new piece of software

and therefore I’m just going to install ArangoDB on my Manjaro machine first. Ha! And while I’m at it, I’ll document what I did so you can easily repeat the steps.

While ArangoDB offers packages for all major operating systems (even including Raspbian, which is awesomely geeky) in their download section, there isn’t one for Arch. This makes sense, because a) Arch isn’t commonly used on servers and b) the common way to get third-party software on Arch isn’t pre-built packages, but the Arch User Repository (AUR), which is basically just a site that hosts build scripts for users to compile software automatically on their own machines.

The downside, of course, is that the burden of compiling is transferred to the user, but the advantage of that system is that it makes maintaining an AUR package as easy as writing a build script and updating it when new versions of the software come out or the source code URL changes. That – and the fact that the Arch community is just hyperactive – are probably the reasons why you have to try really hard to find a piece of software for which no AUR package exists. And since Manjaro Linux is fully compatible with Arch, it is also fully compatible with pretty much any AUR package (the only exception would be if the AUR package depended on other packages which have only just arrived in the Arch repos and therefore still needed some time to arrive in Manjaro).
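To give you an idea of what such a build script looks like: a PKGBUILD is essentially a small bash file with some package metadata plus a build() and a package() function. The following is only a minimal, hypothetical sketch (the package name, URL and checksums are made up for illustration), not the actual arangodb PKGBUILD from the AUR:

# Minimal, hypothetical PKGBUILD sketch
pkgname=example-app
pkgver=1.0.0
pkgrel=1
pkgdesc="A minimal example package"
arch=('x86_64')
url="https://example.org"
license=('Apache')
depends=('glibc')
source=("https://example.org/${pkgname}-${pkgver}.tar.gz")
sha256sums=('SKIP')

build() {
  # compile the software from the extracted source tarball
  cd "${pkgname}-${pkgver}"
  ./configure --prefix=/usr
  make
}

package() {
  # install the compiled files into the package staging directory
  cd "${pkgname}-${pkgver}"
  make DESTDIR="${pkgdir}" install
}

makepkg reads this file, downloads the sources, runs build() and package(), and wraps the result into a package that pacman can install.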

So, if you run an Arch-based distro and have yaourt installed, what you'd normally do to get ArangoDB would be

yaourt -S arangodb

if you wanted the latest stable release or

yaourt -S arangodb-git

if you’d like to compile directly from the development repository (for example if you’d like to become an ArangoDB  contributor).

I say “normally” because unfortunately, at the time of writing, the stable AUR package is out of date: it is at version 2.6.0, while the latest stable release is 2.6.2. The problem is that 2.6.0 does not compile correctly with current GCC, so building the current version of the AUR package will exit with an error at some point. I have already commented on the AUR package asking for it to be updated, but until that happens, you can use my patched PKGBUILD file from my Dropbox. Just download the file, put it in a directory with enough free space (I haven’t measured it myself, but commenters say the build takes up to 3.2 GB at its peak; don’t worry, though, the resulting package takes less than 200 MB), then run

makepkg -i

in that directory to automatically download the source files and build ArangoDB.
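One caveat (this is general makepkg behaviour, not specific to the ArangoDB package): plain makepkg aborts if build dependencies are missing rather than installing them for you. If that happens, the -s flag tells it to fetch them from the repositories before building:

makepkg -si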

UPDATE: By now, the AUR package has been updated to version 2.6.2, so you can now use it directly.

Whichever route you take, both yaourt and makepkg (the latter when given the -s flag) will automatically install any unmet dependencies (on my machine I had everything except Go), compile ArangoDB and offer to install it.

The post-install script will ask you to run a database upgrade if you have an existing ArangoDB database, and enable and start the ArangoDB systemd service. If you’re installing ArangoDB for the first time, you can safely ignore the first instruction.

Since I’ve installed ArangoDB on my machine only to take my first steps with it and not to run an actual server, I didn’t enable the service, so it won’t auto-start with every boot. I just started it manually with

sudo systemctl start arangodb.service
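If, unlike me, you do want ArangoDB to come up with every boot, or you just want to check whether the service started correctly, the usual systemd commands apply (using the same service name as above):

sudo systemctl enable arangodb.service
systemctl status arangodb.service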

After that, to see if ArangoDB runs properly, you can direct your browser to http://localhost:8529 (ArangoDB’s default port) and you should be greeted by its web interface.
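If you prefer the command line over the browser for a quick sanity check, ArangoDB also exposes its version via its HTTP API. Assuming the default port 8529 and that no authentication is required for local access, something like this should return a small JSON document with the server name and version:

curl http://localhost:8529/_api/version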

The ArangoDB web interface

However, after reboot, ArangoDB wouldn’t start anymore on my system. “systemctl status” told me that it had exited with “FATAL cannot write pid-file ‘/var/run/arangodb/'”. If you’re running into the same problem, create a file “arangodb.conf” in /lib/tmpfiles.d/ with the following content:

d /var/run/arangodb 0755 arangodb arangodb -

This tells systemd to create the /var/run/arangodb folder at boot, writable by the arangodb user (and readable by everyone else).

To create the needed directory immediately instead of waiting for the next reboot, run

sudo systemd-tmpfiles --create arangodb.conf

and then

sudo systemctl start arangodb.service

should work again.

Congratulations, you should now have a working ArangoDB installation!

As the next step, if you haven’t already done so, I’d recommend checking out the First Steps section of the ArangoDB documentation to familiarize yourself with the database. In the chapter Collections you will learn how to create your first collection with some documents in it and query for them. This also shows you whether your installation really works.
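If you’d rather poke the server from a shell than via the browser or arangosh, the same first steps can also be done against ArangoDB’s HTTP API with curl. The following is only a rough sketch: it assumes the default port 8529, the default _system database and that no authentication is required for local access (otherwise pass your credentials with curl’s --user option), and the collection name "firststeps" is made up for the example:

# create a collection named "firststeps"
curl -X POST http://localhost:8529/_api/collection --data '{"name": "firststeps"}'

# save a document into it
curl -X POST 'http://localhost:8529/_api/document?collection=firststeps' --data '{"name": "ArangoDB", "type": "multi-model"}'

# query the collection with AQL via the cursor API
curl -X POST http://localhost:8529/_api/cursor --data '{"query": "FOR doc IN firststeps RETURN doc"}'

Each call should return a small JSON response, and the last one should contain the document you just saved.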

If you’d like to try out ArangoDB on an Arch-based distro, I hope you will find this little how-to helpful. If anything is unclear or not working, please comment and I’ll be glad to help you!

Posted in ArangoDB

Joining the press – Which topic would you like to read about?

When I saw an ad on Linux Veda saying that they were looking for new contributors to their site, I thought “Hey, why shouldn’t I write for them?”. Linux Veda (formerly muktware) is a site that offers Free Software news, how-tos, opinions, reviews and interviews. Since its founder Swapnil Bhartiya is personally a big KDE fan, the site has a track record of covering our software extensively in its news and reviews, and has already worked with us in the past to make sure their articles about our software or community were factually correct (and yes, it was only fact-checking, we never redacted their articles or anything).

Therefore, I thought that a closer collaboration with Linux Veda could be mutually beneficial: Getting exclusive insights directly from a core KDE contributor could give their popularity an additional boost, while my articles could get an extended audience including people who are currently interested in Linux and FOSS, but not necessarily too much interested in KDE yet.

I asked Swapnil if I could write for him. He said it would be an honor to work with me, which I must admit made me feel a little flattered. So I joined Linux Veda as a freelance contributor.

My first article actually isn’t about anything KDE-related, but a how-to for getting 1080p videos to work in YouTube’s HTML5 player in Firefox on Linux, mainly because I had just explained it to someone and felt it might benefit others as well if I wrote it up.

In the future you will mainly read articles about KDE-related topics from me there. Since I’m not sure which topics people would be most interested in, I thought I’d ask you, my dear readers. You can vote on which of the three topics that came to my mind I should write about, or add your own ideas. I’m excited to see which topic will win!

Of course this doesn’t mean I won’t write anything on this blog here anymore. I’ll decide on a case-by-case basis whether an article makes more sense here or over at Linux Veda. I hope you’ll find my articles there interesting, and that you’ll also read some of the other well-written pieces they have on offer!

Posted in KDE

In Free Software, it’s okay to be imperfect, as long as you’re open and honest about it


Saying “sorry” and really meaning it goes a long way towards maintaining or rebuilding trust, especially in the Free Software community

Humans are fallible. We all make mistakes sometimes. Most of these mistakes affect only ourselves or those close to us, but sometimes we make mistakes that affect more people. Work in a Free Software project is one of those areas where a mistake can easily affect many people in a community, or thousands of users.

The same is true for organizations in the proprietary software or service world, but it feels to me that two things play an amplified role in the Free Software world: openness and honesty. While users and customers of proprietary software or services also appreciate it when a vendor openly admits a mistake, honestly apologizes for it and promises to make up for it, the default expectation is rather that they don’t. Or, if they do, that it’s a calculated PR effort, done because some suits crunched some numbers and decided it’s better for the bottom line.

In the FOSS world, people seem more likely to really see the person, not just the community they belong to. And from a person, they expect that they really and honestly feel sorry if they made a mistake. And they seem to be more forgiving if a FOSS contributor admits a mistake and apologizes than if a proprietary software company does. It’s not only individuals, though. It seems like even companies in the FOSS field are expected to be more open and honest than those in the proprietary software field.

Exhibit 1: InstallerGate

A case which many readers on PlanetKDE – who are likely also interested in Qt – are probably familiar with is the issue of the Qt online installer forcing the creation of a Qt Account in order to continue the installation (since it seems to be hip these days to call everything that is even the slightest bit scandalous “[something]gate”, let’s call it InstallerGate). As the comments on the blog post announcing the change may hint at, the reactions by Qt users and contributors to it were not pretty. At all. People claimed that Qt is becoming “less free” or that “pushy greed is determined to ruin Qt and turn decades of efforts into a waste”. And those are the comparably harmless comments.

And it wasn’t just on the blog post itself, people from all corners of the internet started attacking The Qt Company. Of course reverting the change and making creation of a Qt Account optional was the obvious reaction, but The Qt Company did not stop there – a very wise decision. In the blog post announcing the change, Tuukka Turunen, Qt’s R&D Director, after thanking the community for their feedback, said “We clearly ill-calculated how asking for a Qt Account with the online installer would make our users feel. A mistake. Sincere apologies.”. That does not sound like marketing-bullshit, that sounds like someone who has realized that he (or whoever made that decision) screwed up. Later in the post he thanks the community again: “So, thanks again for your feedback, discussions and for guiding us in this tough endeavor.” He admits that The Qt Company needs guidance through their endeavors by the community. Pretty strong statement.

And the community really appreciated that honesty, expressing it clearly in comments like “Thank you! Apart from the technical merits of Qt, it’s also nice they hear their users and developers (I’ll however create a Qt Account for myself! ;-))”. Of course the actual decision is the most important part, but open and honest communication about it goes a much longer way to maintain or rebuild trust than just reverting a change.

Exhibit 2: ShowDesktopGate

Of course community Free Software contributors make mistakes, too. Like me, for example. This March, I was asked to provide user experience input on a review request titled “Show Desktop feature: refurbished”. It was quite a big change, with quite a few alterations to the interaction with the feature to comment on. One question was what to do with the panel when showing the desktop.

A big issue there is that if one clicks on a task in a task manager while in Show Desktop mode, there are different things that could happen, each with its own drawbacks: The appearing window could “break” the Show Desktop mode, returning to the previous state before it was activated. This works, but may confuse users because clicking one task suddenly brings back all windows that were visible before Show Desktop was activated. Alternatively, only the window which was clicked on could be shown. That would cause the least surprise initially, but would leave things in a poorly defined state (What happens if one leaves Show Desktop now? What happens if another task is clicked? Or the current one is clicked again? Which windows are shown or hidden in which modes?), which is prone to cause confusion as well as bugs. Simply hiding the window belonging to the task behind the desktop like all other windows would be logical from a technical standpoint, but very confusing for the user (“I clicked on that task, why can’t I see it?”).

With that in mind, this is what I replied to the question: “Panels: I’m a bit torn here. The thing is that we have not defined what the dashboard is supposed to be used for. If it is only for glancing at or quickly interacting with desktop widgets, then panels would only be distracting. If it is for interacting with the workspace in general, then of course panels should be accessible. One argument against showing panels would be that clicking on a window in the task switcher would break the mode (the window should not go below the dashboard, as that would be inconsistent with switching via alt-tab). Still, I see valid arguments for both alternatives, so I’ll leave that to you. Just make sure that either panels are fully visible and can be interacted with, or are hidden/dimmed/faded and cannot be interacted with, as any mix of the two would be confusing.”

I thought that since in Show Desktop mode the focus should be – as the name suggests – on the desktop, we could avoid the issue that the task manager can cause by just not showing the panel at all. Problem solved, right? What I had completely missed, though (since my mind was focused on triggering Show Desktop mode via alt-tab), is that another common way to activate Show Desktop mode is via a Plasmoid. In a panel. The unfortunate part is that now, as soon as one clicks that Plasmoid in Plasma 5.3, the panel is hidden, and the Plasmoid with it. Boom! Impossible to leave the state the same way it was activated. Confusion! Panic! Usability bug.

I screwed up. Yes, I could hide behind the fact that I didn’t explicitly advise hiding the panel but left the decision to the devs, but it isn’t their job to detect usability bugs in the making, it’s mine. And they trust me on that (which is great!), so if I say that something is okay, developers think they can safely assume that I’ve thought it through and there are indeed no issues. Only this time, I didn’t.

Of course when people noticed the problem, I was informed immediately and agreed with the developers that the panel should not be hidden in Show Desktop mode after all. This problem will be fixed with the first bug fix update of Plasma 5.3. However, I still have to sincerely apologize for the confusion this change will likely cause to users in its short lifetime, and to the developers for not living up to the trust they put in my judgement in this particular case. I will think things through more thoroughly before the next time I comment on a review request as complex as this one.

In the meantime: Can you be mad at someone who gives you this?
Cute puppy
I sure hope not.

Openness and honesty as a criterion for choosing a FOSS product

Openly admitting mistakes and apologizing for them isn’t the only occasion where openness in communication is important in the FOSS world, though. As I learned from my interview with a user of and contributor to the open source NoSQL database ArangoDB, the team’s – in his opinion (even compared to other open source NoSQL databases) – very open communication actually was one of the main reasons why he chose it for his project. He sees it as an important plus for them that “They are open about their shortcomings” because “If I’m going to build a product on something, I want a relationship with the makers and users of that something. In every relationship I want honesty. Openness by a product maker is as close to honesty as I can get.”

So even if you haven’t screwed up yet, it is beneficial to be open about what you do (including your product’s shortcomings) right from the start of an open source project, because that builds confidence and trust. Trust is important for end users, but probably even more important for people who build their products on your software.

Now, dear reader, I’d like to hear your take on this: How important is openness and honesty of a product’s creators for you? Do you, too, feel that it plays an even more important role in the FOSS world than in the proprietary software world, or do you think it’s the same? Let me know in the comments!

Disclosure: No, I did not get sponsored for writing this article. I included the reference to the interview because I think it fits the theme that I observed with the Qt installer and FOSS projects in general.

Posted in ArangoDB, KDE

Meet Patrick, ArangoDB User and Community Contributor

As announced in my previous blog post, I’d like to tell the world a bit about the multi-model NoSQL database ArangoDB, and since my profession is to focus on users, I thought that the best way to start would be by talking to users. So I looked around in ArangoDB’s support Google Group and its IRC channel #arangodb for an active user who would like to do an interview with me. Lucky for me, the first user I happened to talk to on IRC was not only a very enthusiastic ArangoDB user, but also an active community contributor to the project (though he is in no way affiliated with ArangoDB GmbH, the company behind the software).

So let me introduce to you J Patrick Davenport, a freelance Solutions Architect from Palatka, FL, USA, who… nevermind, let’s let him do the talking now :)

Thomas:  Hi Patrick, to start off, could you tell me a little about yourself (your background, the job in which you’re using ArangoDB, …)?
Patrick: About me: I’m a Solutions Architect, working for my own company, DeusDat Solutions. I’ve been working for over 8 years as a software Bob Villa (This Old House, American reference). My clients call me in to renovate code bases that are near collapse. I’ve improved multiple Fortune 500 companies’ core business applications. Recently I designed half of the Medicare Fraud Detection System for the US Government. That was based on Hadoop.

Thomas: Ah, interesting! So, what’s the story of you and Arango?
Patrick: Given the above, it’s implied that I’ve worked in many a corporate office. With the poor chairs and fluorescent lighting comes corporate tool sets. Going off on my own, I decided to learn a whole new toolset. At the same time I decided to write a book on NoSQL and NewSQL. Those two dreams coalesced in me wanting to support ArangoDB. I didn’t like MongoDB. I thought that was too hyped and that they believed their own press. Writing the book gave me a whirlwind tour of the NoSQL world. What I wanted was a document store, with geo-index and a healthy feel of youthful vitality. Turns out there aren’t too many of those around. ArangoDB was the only system that provided all three. RethinkDB was close, but lacked the geo-indexes.
As I learned more about it, I saw that it was wide open for tools and drivers. There was one Clojure driver. That driver was a simple attempt, but violated many Clojure purisms like global state. So I wrote a driver called travesedo. I needed to work with some Hadoop tools to process a large CSV data store into documents, so I wrote Guacaphant (yes, it’s in Java). I needed to migrate and deploy database changes; Clojure has a community API for that called Ragtime, so I wrote Waller just this last week to use with ArangoDB.

What I wanted was a document store, with geo-index and a healthy feel of youthful vitality. Turns out there aren’t too many of those around. ArangoDB was the only system that provided all three.

Thomas: Thanks! I’d like to dig a bit deeper into some points, if that’s okay. So where did you first learn about ArangoDB?
Patrick: I first learned about Arango while researching Document Stores for my book. At the same time, I knew that I wanted to use a document store with geo-indexes for personal projects. I found ArangoDB in an overview site.
Now I’ve got a few projects that I plan to back with ArangoDB. One focuses on enabling cities to better respond to tourists. As a transplanted Floridian, I understand how much of the state’s economy and even my town depends on the flow of tourist dollars. When I look around at the economic development going on, I see that even large cities have yet to app-ify themselves. My goal is to change this. Geo-location is a huge part of that. I need a database that supports it. Another project focuses on home management. Arango’s multi-model structure allows me to naturally relate home processes via a graph structure. While Arango’s graph infrastructure is presently simple, I have faith that the development team will expand it into something more powerful, distributed, that’s able to take on Neo4J.

Thomas: Thanks! You said you were looking for a DB with “a healthy feel of youthful vitality”. What about ArangoDB made you feel that it has more of that than e.g. MongoDB?
Patrick: One thing about MongoDB is it’s slow to change, or to react to valid negative criticism. When mmap’ing issues were pointed out MongoDB’s response was smoke and mirrors. When the global locks slowed things down, they waved their hands. When the benchmark numbers were shown to be inflated due to default drivers not waiting for confirmation of saving, MongoDB folk talked-talked-talked, but said nothing.
ArangoDB is different. They are open about their shortcomings. Mmap is clearly written out as a design constraint. They talk about how datasets have to be mostly in memory for at least the active pages. Questions on the Google Group are answered quickly, by core team members. Same is true to Stackoverflow. They also actively promote competition in the tooling space. When I first brought up the idea of creating a competing driver for Clojure, I was supported. Some made sure I did my homework before doing that, but I had support.

ArangoDB is different. They are open about their shortcomings. Mmap is clearly written out as a design constraint. They talk about how datasets have to be mostly in memory for at least the active pages. Questions on the Google Group are answered quickly, by core team members. Same is true to Stackoverflow.

Thomas: Interesting! Could you tell me why it is important to you that the company behind a database is as open as ArangoDB GmbH?
Patrick: If I’m going to build a product on something, I want a relationship with the makers and users of that something. In every relationship I want honesty. Openness by a product maker is as close to honesty as I can get. I want a team that clearly says what they’re going to do, keeps the community apprised of their efforts (including failures) and requests dialogue. I believe that if a company does this, it can stand the test of time.
ArangoDB is not backed by Oracle. It’s not backed by IBM. I need to have faith in their ability to be there tomorrow with a good product. Openness is a sign that I’ve picked the right horse.

Thomas: Thanks! So I take it from your previous answer that from your experience, ArangoDB does better in that regard than most of their competition?
Patrick: Yep. I haven’t seen that type of life in any of the established NoSQLs. Cassandra feels tired. Voldemort feels vanquished. MongoDB seems stuck (I grant that they did release an improved secondary engine about 3 months ago). Rethink doesn’t have geo-indexes.

Thomas: Ok. So you said you’re using or are going to use document stores and graphs in your projects. Have you used key-value stores in ArangoDB as well, or are you going to use them in one of the projects you’re planning?
Patrick: I don’t presently have a need for the straight key-value stuff (slight tangent, arango-session for Ring does this against a simple collection). I find structured data a better fit for my modeling.

Thomas: Ah, okay. So what about other features of ArangoDB: Are you using joins? Transactions?
Patrick: Interestingly graphs seem to limit my need for joins. They are implicit joins. I want every product that I own. While I could model that as a Product with an attribute of “owner”, I can share the product definition in a products collection with everyone via the relationship. They are also implicitly transactional. I can’t modify relationships and get into an unstable state. Transactions aren’t a huge sell for me right now, but since I do contracting work, transactions will be a huge sell to future clients. ArangoDB is the only Document Store that I’ve seen that supports them.

Thomas: Could you expand on that a bit on which future clients you expect transactions will be a huge sell for?
Patrick: My goal is to get ArangoDB in the enterprise. I don’t know how the market looks in the EU right now, but NoSQL in the Fortune 1000 is pretty small outside of some niche uses like logging. Document stores provide an easy migration path into NoSQL, especially if you’re in a dynamic, weakly typed language like Clojure. Everything in JSON == map. Given this, I think the compelling arguments for Clojure development speed ups, Arango’s natural modeling (in general and in Clojure) and finally the transaction support should make many pointy haired bosses feel safe and their developers happy ‘cause there is a new toy in town. My understanding is that operations against an individual document are transactional. Since I will probably be doing that (set count to (dec count)), I won’t have bulk modifications.

I think the compelling arguments for Clojure development speed ups, Arango’s natural modeling (in general and in Clojure) and finally the transaction support should make many pointy haired bosses feel safe and their developers happy ‘cause there is a new toy in town.

Thomas: Have you used ArangoDB with other languages than Clojure (or are you planning to)?
Patrick: I’ve used it with Java for Guacaphant, but that’s it. My usage there was 1) incredibly short (just getting a cursor to an AQL query) and 2) not at all idiomatic Java: I didn’t use data classes, I just asked Arango to give me the cursor items as maps. So it’s very Clojure/functional.

Thomas: Have you used Foxx yet? If so: What for? If not: Do you see a case where you might use it in the future?
Patrick: I haven’t used Foxx. I’m a bit torn by its existence. I understand that it could be used as a rapid prototyping API platform, but it seems to have scalability issues. Now I have to have my DB server and application running on the same box? I don’t like it too much. Added to this is the fact that Clojure makes rapid prototyping easy too. All the benefits of the JVM without the deployment hassle.

Thomas: Is there a way in which Foxx could be improved so that it would be beneficial to you?
Patrick: I don’t think so. Perhaps someone using Foxx could come out and say, “We did X with Foxx”. That might start my creative juices flowing. Until then, I don’t see the need for it other than fronting Arango with a really, really thin veneer for CRUD APIs.
But, I like that Arango is trying something. If people take it up, great. I’m terrible at picking trends. If the community doesn’t really use it, as long as Arango pulls back, great. Experimentation is how we learn.

Thomas: Okay, thank you for that open and honest answer!
Patrick: You’re welcome. See, we’re partnering on this interview.
Thomas: Indeed! So now I’d like to learn a bit about your experience with learning and using ArangoDB. Was it easy to set up? Did you find AQL easy to learn? Were your expectations all fulfilled so far, or were some of them disappointed?
Patrick: Getting going with Arango was pretty easy. The documentation is well formatted, and great for starting. That said, I’ve found it hard to learn about advanced concepts like advanced AQL. I ask a lot of questions on the Google Groups about idiomatic AQL. What I’d like is a more in-depth write up for CRUD web apps. Especially around the idea of using AQL for modification. Another point of weakness is that the documents don’t discuss how to deploy in a clustered environment. I know that there is a simple walk through on that, but it feels more like a toy networking project than a how to. I will guard that sentiment by saying I haven’t looked at the Puppet Scripts. They might show a better way.
As a driver implementor this is a huge gap in documentation. For example, there is nothing that says a call for the next cursor batch must be directed to the same node the last batch came from. This is important for the driver. Only my driver even attempts to support clustering and replicated instances. But it is really weak on preserving the batch call requirements (i.e. it doesn’t do that yet). I found out about this by reasoning about how I would implement a distributed system and then questioning the group.

Getting going with Arango was pretty easy. The documentation is well formatted, and great for starting.

Thomas: Have you read the articles about setting up ArangoDB on Google Compute Engine or Digital Ocean (both released this month) yet? If so: Did you find those useful? (they just came to my mind because they’re both about clustered environments)
Patrick: I haven’t.
Thomas: Just for reference: and
Patrick: Thanks

Thomas: You already mentioned the immediate support via the Google Group and Stackoverflow as a plus for ArangoDB. Does that way of getting support work well for you in general, or would you prefer other means of communication to get support?
Patrick: It works well. I’ve tried IRC. Unfortunately there is a time gap. I’m several hours behind the core team, and probably most ArangoDB users. The Group and Stackoverflow make the gap feel less real.
Thomas: Do you see the fact that ArangoDB GmbH currently only has an office in Germany as a disadvantage for the US market?
Patrick: No. It seems like the disadvantage is that they aren’t covered much by American press like Techcrunch. MongoDB was a darling for a while with them.

Thomas: How was your experience writing drivers for ArangoDB (apart from the clustering problem you already mentioned)? Was it easy to get started with it? Was the effort you needed acceptable?
Patrick: The documentation for the HTTP API is pretty good. They show possible inputs and the expected output. Wrapping that in Clojure was easy. I’ve been implementing the features that I need first. So I focused on DB/Collection/Document creation. Since I’m starting to need graphs, I’m working on that now. When I found issues with the documentation not working or being vague, the ArangoDB team corrected it within an hour of me posting on the group (during one of those happy moments when I’m working early, and they are working late).

Thomas: Sounds great! Okay, so the last question I have on my list: Apart from improving the documentation on clustering, what would you like to see the ArangoDB development focus on in the near future?
Patrick: I would like to see their focus split into two performance-enhancing tasks. A) Distributed graphs. I know that they’re on the roadmap. I really want to see them. It would be fun to have another open source, heavy-duty graph storage to compete with Neo4J. Neo4J doesn’t even really do distributed searches. B) I want arrays and sub-documents to be indexable. I don’t think Arango does this presently. This could make services like ratings based on locality quicker without the use of joins (but hey, Arango has them).

When I found issues with the documentation not working or being vague, the ArangoDB team corrected it within an hour of me posting on the group (during one of those happy moments when I’m working early, and they are working late).

Thomas: Okay, thank you a lot for your input! It was really interesting. Is there anything you feel hasn’t been touched yet, or anything you’d like to add?
Patrick: Nope. Thanks for the opportunity.

The interview was slightly edited in a few places for the sake of improved readability, while carefully keeping the content and tone of the original statements intact. Some comments that were unrelated to ArangoDB were left out.

If you have any experience with ArangoDB so far, does it match Patrick’s experience, or was it different in some way? Is there anything you’d like to know from Patrick which was not covered in this interview?

Let us know in the comments, and maybe Patrick will answer you if he reads it (or I’ll point it out to him).

Full disclosure: I was sponsored by ArangoDB GmbH to write some blog posts to promote ArangoDB. However, ArangoDB GmbH does not exert any influence on the content of those posts, and they were not formally approved by them. Therefore, the content of this post is 100% J. Patrick Davenport’s and mine. EDIT: Just to clarify: I was sponsored for conducting the interview. Patrick did not receive compensation for the interview, in order to allow him to state his honest opinion without fearing negative financial consequences.

This post is licensed CC-BY 4.0

Posted in ArangoDB

Would you want to see posts about NoSQL databases on PlanetKDE?

My dear readers, I have agreed with ArangoDB to help them spread the word about their – of course open source – multi-model NoSQL database, and I will be using this blog to do that. It’s not going to be boring marketing bla bla, but I’m planning to write about things like tutorials, interviews with ArangoDB users and contributors, as well as the adventures of a “kind-of-geeky psychologist” (i.e. me) taking a stab at developing a data-intensive web application using ArangoDB.

PlanetKDE is not subscribed to this whole blog, however, but only to a specific category. Therefore, I can control whether these posts show up on the Planet or not. Since many of my readers here are rather on the tech-savvy side of the spectrum, I suppose that these posts might be of interest to at least some, if not many, of you, but I don’t want to “spam” the Planet with these posts if the majority would not be interested in them.

Therefore, I want to give you, my readers, a choice. Please vote using the poll below on whether you’d like to see NoSQL database-related posts show up on the Planet or not. Of course I can revise the decision later if I get a lot of feedback to the contrary on my individual posts, but I’d like to get a quantitative picture of the general preference beforehand.

EDIT: The results are in. 49% of respondents would not like to see any NoSQL posts pop up on PlanetKDE at all, but 51% are either generally fine with them or would accept them if they are related to KDE/Qt. I will of course happily oblige and only assign posts to the KDE category if they are in some way related to KDE, for example if I talk about developing Qt-based mobile apps using ArangoDB as a backend. Thank you for participating in the poll!

If you are interested in all posts about ArangoDB, you can subscribe either to my blog’s main RSS feed to see all posts, or specifically to the ArangoDB category feed to keep updated on those posts in the future.

Posted in KDE

Notes From the PIM Sprint: A Vision for the KDE PIM Framework

I know, I know, the PIM sprint already took place last November, but in my defence (@David: !!!), the VDG and KDE as a whole have been buzzing with so much activity in the meantime that I didn’t find the time to write the blog post I had meant to write about it. So here it is, better late than never.

Now, what did I do at the KDE PIM sprint?

There were two topics I brought to the table:

  1. Developing a project vision for KDE PIM in general and/or Kontact in particular
  2. Making KMail’s search useful again

In this post I’ll cover the first topic.

A project vision is something that has proven to be useful in product design and development in general, and maybe even more so in Free Software projects. The KDE Human Interface Guidelines define a project vision this way:

A vision describes the goal of the project. It can be emotive and a source of inspiration, for instance by outlining how the final product makes the world a better place. Describe the project’s final goals. Explain who will use the product, and how he or she will take advantage of it. Make sure the vision is shared between all project stakeholders. Leave room in the vision for creativity. Keep it short.

The reason why I (and others) think a project vision is especially useful for FOSS projects is that the typical FOSS project community is a bunch of quite diverse individuals, each with their own goals and ideas. This is of course one of the great strengths of Free Software, but it also means that, without communicating about those goals and ideas, the members of the community might all march in different directions without even noticing. That is what can lead to what users perceive as a lack of consistency and a lack of common direction.

A project vision can help make the individual ideas and goals explicit, and allow the community to agree on a common goal and direction. That does not mean people are not allowed to add features which are not consistent with the vision, but that the community should think carefully about why such a feature should be added anyway. It also helps when telling users who keep asking for features which are not consistent with the vision why those features won’t be implemented.

So I suggested that the KDE PIM community come up with a vision for their project. Since the KDE PIM project actually produces a whole range of products, the group decided to have specific visions, at least one for the KDE PIM Framework and one for its desktop client Kontact. The time at the sprint was not sufficient for a draft vision for Kontact, but for the framework, we now have a vision draft:

The KDE PIM Framework allows to easily create personal information management applications ranging from personal to large enterprise use, for any target device and major platform.
It seamlessly integrates data from multiple sources, while aiming to be rock-solid and lightning-fast.

The PIM framework focuses on supporting open groupware servers like Kolab, but can be extended to access information from various sources.

Now there are some very important points in there. First is a commitment to broad applicability (it’s a framework, after all!) across use cases and target platforms, as well as a commitment to speed and stability. If the community at large (i.e. also those who were not present at the sprint) agrees on this vision, this means, for example, that if someone wants to use the framework to create an enterprise application but finds that it currently lacks the scalability to do so, the team cannot say “But it was never meant for enterprise use!”.

The second part is a commitment to the current driving forces behind the PIM Framework: It makes clear that Kolab and similar open groupware servers are what the team cares about most, without excluding other backends. In practice, that means that if someone wants the framework to support, for example, Microsoft Exchange, the team can say “Please refer to our vision: Exchange is not our focus. So, if you want to integrate support for it in the KDE PIM Framework, you will have to find someone to do it. We’re open to it, but we have other priorities.”

Now as a next step, this vision draft will have to be presented to the whole KDE PIM community, perhaps iterated upon, and finally agreed upon and published officially.

In the same vein, a vision for Kontact should be created. Some input for this was formulated by Aaron Seigo:

A high-quality, featureful, scalable, multi-platform groupware suite that provides unified access to groupware data from multiple sources with a focus on full-featured groupware servers.

Now I’d like to see the idea of creating project visions spread to other projects (for example Krita already has one, KDE Telepathy also has a draft which awaits publishing, and the KDE Visual Design Group has committed to creating a vision as a first step when designing or fundamentally redesigning applications). If your project would like to create a vision, feel free to ask any VDG member of your choice or start a thread on the VDG forum.

So, what’s your take on this? Do you think a project vision can be helpful? What do you think about the proposed vision for the KDE PIM Framework? Let’s hear your opinion in the comments!

Posted in KDE
