Getting Started With Riak_test and Riak_core

If you don’t know what riak_core is, or don’t have a riak_core based application, you’ll probably not get much practical use out of this post. You might want to start with Ryan Zezeski’s “working” blog series (try try try) and the rebar plugin.

That said, if you have a riak_core app this post should get you started on how to test it with riak_test. We’ll not go into how to write the tests themselves; that might come in a later post, and the riak_kv tests are a good starting point.

Please note that the approach described here is what I chose to do; it might not be best practice or the best way for you to do things. I will also link to my fork of riak_test instead of the official one, since it includes some modifications required for testing apps other than riak_kv. I hope those modifications will be merged back at some point, but for now I want to iron them out a bit more before making a pull request.

What is riak_test?

So before we start, a few words about riak_test. riak_test is a pretty nice framework for testing distributed applications. It is, like just about all the other riak_* stuff, created by Basho, and it is pretty darn awesome.

In its current state it is very focused on testing riak_kv (or riak, as in the database), but at first glance a lot of the functionality is quite universal, and after all riak_kv is also built on top of riak_core, so modifying it to run with other riak_core based apps is pretty easy.

The setup

Since I will be testing multiple riak_core apps and not just one, I decided to go the following path: keep the entire setup in a git repository, have one branch for general fixes/changes related to riak_core, and then have one branch for each application I want to test, based on the general branch, so common changes can easily be merged. It will look like this:

---riak_test--------- (bashos master tree)
   `---riak_core----- (modifications to make riak_test work with core apps)
    ` `  `---sniffle- (tests for sniffle)
     ` `---snarl----- (tests for snarl)
      `---howl------- (tests for howl)

We’ll go over this by setting up tests for the howl application. It’s rather small and simple, and it’s easier to follow along with something real instead of a made up situation.

Getting started

Step one of getting started is to get a clone of the riak_test repository, which is pretty simple (alter the path if you decided to fork):

cd ~/Projects
git clone
cd riak_test

Now we branch off to have a place for our howl application, but first we need to checkout the riak_core branch to make sure we get the changes included in it:

git checkout riak_core
git branch howl
git checkout howl

Okay, that’s it for the basic setup. Not that bad so far, is it?


Next thing we need to do is create a configuration. At this point we assume you don’t have one yet, so we’ll start from scratch; if you add more than one application later on you can just add them to the existing configuration.

riak_test looks for the configuration file ~/.riak_test.config and reads all its data from there, so we first need to copy the sample config there:

cp riak_test.config.sample ~/.riak_test.config

The next step is to open it in your favourite editor; you’ll recognise it’s a good old Erlang config file with tuples to group sections. We’ll be ignoring the default section for now; if you’re interested in it, the documentation is quite good!

So let’s go down to where it reads:

%% ===============================================================
%%  Project-specific configurations
%% ===============================================================

Here is where the fun starts. You’ll see a tuple starting with {rtdev, - note that this rtdev has nothing whatsoever to do with the rtdev that appears in the default section as {rt_harness, rtdev}. The rtdev in the project part is just the name of the project; since your project is named howl, not rtdev, we’ll go and change that first.

{rtdev, [

Now we can set up some variables, first up the project name and executables. The name itself is just for information (or if you use giddyup); the executables are how your application is started. Since our application is named howl, it’s started with the command howl, and the admin command for it is howl-admin.

    %% The name of the project/product, used when fetching the test
    %% suite and reporting.
    {rt_project, "howl"},

    {rc_executable, "howl"},
    {rc_admin, "howl-admin"},

With that done come the services; those are the buggers you register in your _app.erl file. Let’s have a look at howl_app.erl:

            ok = riak_core_node_watcher:service_up(howl, self()),

So we only have one service here we need to watch out for, named - you might guess … right - howl. That makes the list rather short:

    {rc_services, [howl]},

Now the cookie. It’s a bit hidden in the code that you need to set it, but you do, and you will need it later, so remember it! Since I am bad at remembering things I named it … howl … again.

    {rt_cookie, howl},

Now comes the setup of paths. For this we have to decide where we want to put our data later on; I’ve put all my riak_test things in /Users/heinz/rt/... so we’ll go with this. Also note that my development process works on three branches:

  • test - the most unstable branch.
  • dev - here things go that should work.
  • master - only full releases go in here.

This setup might not work for you at all, but since these are only path names it should be easy enough to adapt them.

Note that by default riak_test will run tests on the current environment.

    %% Paths to the locations of various versions of the project. This
    %% is only valid for the `rtdev' harness.
    {rtdev_path, [
                  %% This is the root of the built `rtdev' repository,
                  %% used for manipulating the repo with git. All
                  %% versions should be inside this directory.
                  {root, "/Users/heinz/rt/howl"},

                  %% The path to the `current' version, which is used
                  %% exclusively except during upgrade tests.
                  {current, "/Users/heinz/rt/howl/howl-test"},

                  %% The path to the most immediately previous version
                  %% of the project, which is used when doing upgrade
                  %% tests.
                  {previous, "/Users/heinz/rt/howl/howl-dev"},

                  %% The path to the version before `previous', which
                  %% is used when doing upgrade tests.
                  {legacy, "/Users/heinz/rt/howl/howl-stable"}
                 ]}

And that’s it. Now the config is set up and should look like this:

{rtdev, [
    %% The name of the project/product, used when fetching the test
    %% suite and reporting.
    {rt_project, "howl"},

    {rc_executable, "howl"},
    {rc_admin, "howl-admin"},
    {rc_services, [howl]},
    {rt_cookie, howl},
    %% Paths to the locations of various versions of the project. This
    %% is only valid for the `rtdev' harness.
    {rtdev_path, [
                  %% This is the root of the built `rtdev' repository,
                  %% used for manipulating the repo with git. All
                  %% versions should be inside this directory.
                  {root, "/Users/heinz/rt/howl"},

                  %% The path to the `current' version, which is used
                  %% exclusively except during upgrade tests.
                  {current, "/Users/heinz/rt/howl/howl-test"},

                  %% The path to the most immediately previous version
                  %% of the project, which is used when doing upgrade
                  %% tests.
                  {previous, "/Users/heinz/rt/howl/howl-dev"},

                  %% The path to the version before `previous', which
                  %% is used when doing upgrade tests.
                  {legacy, "/Users/heinz/rt/howl/howl-stable"}
                 ]}
]}.

Setting up the application

We’ve got riak_test ready to test; next we need to prepare howl to be tested. We’ll only look at the current (aka test) setup, since the steps for the others are pretty much the same.

The first step is that we need the folder, so let’s create it:

mkdir -p /Users/heinz/rt/raw/howl
cd /Users/heinz/rt/raw/howl

Since howl lives with the octocat on github it’s easy to fetch our application and check out the test branch (remember, current is on the test branch for me):

git clone howl-test
cd howl-test
git checkout test

And done. Now, since it’s a riak_core app, we should have a task called stagedevrel in our makefile which will basically generate four copies of howl for us in the folders dev/dev{1,2,3,4} and in the process take care of compiling and fetching the dependencies. I prefer stagedevrel over the normal devrel since later on it makes it easier to recompile code files (make is enough), because it links them to the right place instead of copying them.

make stagedevrel

Now we have to cheat a bit: riak_test expects the root dir to be a git repository, which is why we can’t just put the data in there directly, so we have to manually build the tree for our riak_core app and set it up as a git repository.

mkdir -p /Users/heinz/rt/howl
cd /Users/heinz/rt/howl
git init

cat <<EOF > /Users/heinz/rt/howl/.gitignore
EOF

Now we need to link our devrel files. For my setup I have to copy the *.example files for app.config and vm.args into the right place; they might be named differently for you.

export RT_BASE=/Users/heinz/rt/howl/howl-test
export RC_BASE=/Users/heinz/rt/raw/howl/howl-test
for i in 1 2 3 4; do
  mkdir -p ${RT_BASE}/dev/dev${i}/
  cd ${RT_BASE}/dev/dev${i}/
  mkdir data etc
  touch data/.gitignore
  ln -s ${RC_BASE}/dev/dev${i}/{bin,erts-*,lib,releases} .
  cp ${RC_BASE}/dev/dev${i}/etc/vm.args.example etc/vm.args
  cp ${RC_BASE}/dev/dev${i}/etc/app.config.example etc/app.config
done

We still need to edit the vm.args in dev/dev{1,2,3,4}/etc/ to set the correct cookie - I hope you still remember yours, I told you you’d need it (if not, you can just look in ~/.riak_test.config)!

That’s it.

Running a first test

In the riak_core branch of riak_test I’ve moved all the riak_kv specific tests from tests to tests_riakkv so you can still look at them, but I left one of them in tests, namely the basic command test - it checks whether your application’s command (howl in our case) is well behaved.

We’ll want to run it to see if howl is a good boy and behaves well. To do so we need to get back into the riak_test folder and run the riak_test command:

cd ~/Projects/riak_test
./riak_test -t tests/* -c howl -v -b none

I’d like to explain this a bit; the arguments have the following meaning:

  • -t tests/* - we’ll be running all tests in the folder tests/.
  • -c howl - the application we want to test is named howl; this is the first element of the tuple we put in our config file, if you remember.
  • -v - this just turns on verbose output.
  • -b none - this is still a relic from the riak_kv roots; it specifies which backend to test with. Since we don’t have backends at all we just pass none, which riak_test will happily ignore.

That’s it! Now go and test all the things!

This is the first part of a series that goes on here.

Plugins With Erlang


Let’s start with this: Erlang releases are super useful, and they are one of the features I like most about Erlang - you get an entirely self contained package you can deploy and forget. No library trouble, no wrong version of the VM, no trouble at all.

BUT (this had to come, didn’t it) sometimes they are limiting and kind of inflexible. Adding a tiny little feature means rolling out a new release; with automated builds that is not so bad, but ‘not so bad’ isn’t good either. And things get worse when there are different wishes.

A little glimpse into reality: I’m currently working a lot on Project FiFo and one of the issues I faced is that - surprisingly - not everyone wants things to work exactly as I do. Which was a real shock; how could anyone ever disagree with me? Well … I got over it, really I did. Still, solving this issue by adding one code path for every preference and making it configurable didn’t look like a good solution.

Also, recently we have been thinking a lot about performance metrics, and there are like a gazillion of them; if you pick two random people I think they want three different sets of metrics. Ask again after 5 minutes and their opinions have changed to 7 new metrics and certainly not the old ones!


The problem is a very old one: extending the software after it was shipped, possibly letting the community extend it beyond what was dreamed of in the beginning. The solution is pretty much one day younger than the problem: plugins. Meaning a way to load code into the system.

Sounds easy, but it is a bit more complex; just having something load into the VM doesn’t do much good when it does not get executed in the proper place - so sadly this comes with extra work for the developer, who has to sprinkle their code with hooks and callbacks for the plugins.

With this I’d like to introduce eplugin. It’s a very simplistic library for exactly that task - introducing plugins into an Erlang release or application. It takes care of discovering and loading plugins, letting them register for certain calls, a little dependency management on startup, and it provides functions to call registered plugins. Erlang already comes with great tools, so the whole thing sums up to under 400 LOC.

Types of plugins

I feel it’s kind of interesting to look at the different kinds of plugins that exist and how to handle each case with eplugin; also, a post entirely without code would look boring.

informative plugins

Sometimes a plugin just wants to know that something happened but doesn’t care about the result. eplugin provides the call (and apply) functions for that. A logger is a good example for this, so let’s have a look:

%%% plugin.conf
  [{syslog_plugin, [{'some:event', log}]}],

%%% syslog_plugin.erl

log(String) ->
  os:cmd("logger '" ++ String ++ "'").

%%% in your code
  eplugin:call('some:event', "logging this!"),

That’s pretty much it; provided you’ve started the eplugin application in your code and put the plugins in the right place, this will just work. You could also use this to trigger side effects, like deleting all files when an error occurs to remove traces of your failure.

messing around plugins

This kind of plugin processes some data and returns a new version of it; we have fold for this case - fold, since it internally uses a fold to pass the data from one plugin to the next. There are many applications for this; one would be to replace all occurrences of ‘not js’ with ‘node.js’ to prevent freudian typos in your texts.

%%% plugin.conf
  [{not_js, [{'text:check', replace}]}],

%%% not_js.erl

replace(String) ->
  re:replace(String, "not js", "node.js", [global]).
%%% in your code
  String1 = eplugin:fold('text:check', "I'm writing a not js application!"),

fold and call are the most interesting and important kinds of plugins; they cover most if not all of the possible use cases. Still, there is a special case left which I found useful to have.

checking plugins

Checking plugins are plugins that are supposed to decide if something is OK or not; they are pretty much a case of fold that returns true or false (or actually anything that is not true). But eplugin solves this too, with the test function! An example here is authentication:

%%% plugin.conf
  [{get_out, [{'login:allowed', no_really_not}]}],

%%% get_out.erl

no_really_not(Login) ->
  {forbidden, ["Dear ", Login, " we don't want you here go away!"]}.

%%% in your code
  case eplugin:test('login:allowed', "Licenser") of
     true ->
        ok; %%% huzza!
     Error ->
        io:format("~p~n", [Error])
  end

A Vow to Create Tickets

I’ve had some open source projects before, actually quite a few, but Project FiFo is by far the most successful one. And aside from the technical perspective I’ve learned a tremendous amount of new things already, one of which I want to share. It sounds simple but I never looked at it this way before: tickets.

The project has gained momentum so fast that at ‘rush hours’ people come with questions, bug reports and feature requests at a rate where they pile up faster than we can help or answer. The channel and community already do a great job ‘filtering’ out easy to answer topics, but enough are complicated to the point where a developer has to look at them.

I’ve been on both sides of the fence, and I honestly believed that it’s easier for developers if you just pop by the IRC channel and ask/report an issue. And that is not entirely wrong; often a quick ‘hey, I’ve got problem X/Y’ is enough to solve a situation others have had before and that can be resolved with a few words.

What I did not realise is how crucial it is to open tickets for anything that can’t be resolved directly. The reason for this is simple: usually there are N users for 1 developer. While it’s easy for each user to keep their own topic in mind and present, it can become very hard very fast for a developer with three users to listen to.

But I don’t want to wander off ranting; instead I’ll try to explain why a ticket really helps me to get stuff done.


Tickets make it very easy to keep track of issues, assign them to people (given you’re not solving them yourself) and add followup data. Tickets can be prioritised and categorised, which makes it very easy to group related items or even look up known issues.

The more tickets there are and the more people working on them, the more important this gets; handling six tickets is way easier than handling six email conversations or unthreading six conversations from an IRC channel.


One of the great things about tickets is that they can contain additional information. It’s absolutely helpful to look at an issue that already has some information attached that goes further than three lines and an “it isn’t working”.

Better still, it is also possible to attach files and logs, which can contain much more information, to a ticket, and those can be ‘handed around’ between different people looking at the issue without the need to send mails.


Tickets are a two way street: once a ticket is logged the reporter can see its progress without having to ask - or at least can ask if it is not handled for a long period. Also it is easier to say ‘hey, what is the status of ticket #42’, where all the information is provided already, instead of ‘hey, how about that bug that happened when …’ and having to explain it all over again.


Once a ticket is logged it’s visible for everyone; there is no reason to go around and ask ‘hey, do you know the bug …’ or ‘what are your thoughts about feature …’ and perhaps even get a wrong answer since you’re talking to more than one person. This makes everyone’s life easier.

Erlang and More DTrace

Some nice additions to the little Erlang DTrace demo. For one, I’ve added a field to input custom scripts which are run on the server, which is pretty neat since it allows running all kinds of (l)quantize based scripts from the server and getting a nice heatmap.

Like, how about a heatmap of Erlang function call times?

Erlang Call Heatmap

The script used (also included in the repository):

erlang*:::global-function-entry
{
    self->funcall_entry_ts[copyinstr(arg1)] = vtimestamp;
}

erlang*:::function-return
{
    @time[copyinstr(arg1)] = lquantize((vtimestamp - self->funcall_entry_ts[copyinstr(arg1)]) / 1000, 0, 63, 2);
}

Now that is already cool, but there is more: in addition there is now a page that allows showing list based queries (as count or sum), so for example it would be very easy to get a profiling of an Erlang program like this:

Erlang Profiling

The script used (also included in the repository):

erlang*:::global-function-entry
{
    self->funcall_entry_ts[copyinstr(arg1)] = vtimestamp;
}

erlang*:::function-return
{
    @time[copyinstr(arg1)] = sum((vtimestamp - self->funcall_entry_ts[copyinstr(arg1)]) / 1000);
}

The cool thing is that this profiling can be turned on and off in a live system and has a comparably low performance impact (less than 50% unless functions are hammering, in my tests).

To add to the joy, the scripts are stopped the moment the page is closed, eliminating every kind of overhead without restarting anything!

So let’s sum this up: the old news (if you’ve played with dtrace before) is that you can profile and analyse your applications on the fly with minimal impact; the funky part is that you can do it directly as part of your application, and it only shows when you actually look at the page :) It’s kind of like quantumanalytics, just the other way round!

Erlang and DTrace

As part of Project FiFo I’ve invested some time researching DTrace and Erlang - not the probes, which have been there for some time, but a DTrace consumer, letting you execute DTrace scripts from within Erlang and read the results.

The result of this research is erltrace, a consumer for DTrace implemented as an Erlang NIF. The NIF is based on the node.js and python implementations - many thanks to them!

It’s pretty easy to consume dtrace data from within Erlang with erltrace, so I figured let’s make a little demo. I’m not a big fan of reinventing the wheel, and a simple demo had been done before, namely heat-tracer.

It’s a nice idea and simple enough to implement, and cowboy gives a very nice base for that in Erlang. So there you go: dowboy. The HTML/JS part of the page is pretty much the same as in the original, save for replacing the transport with simple websockets; I’ll skip this and the cowboy parts to look directly at the interesting parts.

When connecting we set up a timer to inform us every second to gather the results from DTrace:

websocket_init(_Any, Req, []) ->
    timer:send_interval(1000, tick),
    Req2 = cowboy_http_req:compact(Req),
    {ok, Req2, undefined, hibernate}.

Next up we handle incoming websocket messages; this is dealt with very simply: every string that is sent is considered a new dtrace script to execute.

The first part creates a new handle; this is pretty much a reference to libdtrace’s internal data structures. Since we allow multiple scripts to be sent, we also have to ensure that any old handle is closed before a new one is opened.

websocket_handle({text, Msg}, Req, State) ->
    %% We create a new handler.
    {ok, Handle} = case State of
                       undefined ->
                           erltrace:open();
                       {_OldMsg, Old} ->
                           %% But we want to make sure that any old one is closed first.
                           erltrace:stop(Old),
                           erltrace:open()
                   end,

Next we compile the dtrace script. erltrace only takes lists as strings, not binaries, so we have to convert it first, and then we pass it along with our handle to get the script compiled. It will return ok if everything went well.

    %% We've to convert cowboy's binary to a list.
    Msg1 = binary_to_list(Msg),
    ok = erltrace:compile(Handle, Msg1),

Okay, now that we’ve compiled the script we just need to tell dtrace to start running it; this happens with the erltrace:go call. Again it will return ok when everything is fine. Finally we just output the script as debug info and return.

    ok = erltrace:go(Handle),
    io:format("SCRIPT> ~s~n", [Msg]),
    {ok, Req, {Msg1, Handle}};

Next up: reading the dtrace data. erltrace:walk does that for you and hands back a data structure to parse. It is returned as {ok, Data} when there is something to handle.

websocket_info(tick, Req, {Msg, Handle} = State) ->
     case erltrace:walk(Handle) of
         {ok, R} ->

Now, since we have JSON on the other side, we need to transform the data; here Erlang’s list comprehensions come to the rescue. Data is returned as [{lquantize, [Name], { {BucketStart, BucketEnd}, BucketCount}}] and we want it in the form {Name, [[BucketStart, BucketEnd], BucketCount]}, so here you go. We can then simply encode this with jsx and send it over the wire:

             JSON = [{list_to_binary(Call),[ [[S, E], V]|| { {S, E}, V} <- Vs]}|| {lquantize, [Call], Vs} <- R],
             {reply, {text, jsx:encode(JSON)}, Req, State, hibernate};

ok will be returned if there is no data yet to consume; we simply do nothing here.

         ok ->
             {ok, Req, State};

The last case is just to make things proper: if an error is returned we stop the current handle and create a new one, the same way we did before.

         Other ->
             io:format("Error: ~p~n", [Other]),
             {ok, Handle1} = erltrace:open(),
             erltrace:compile(Handle1, Msg),
             {ok, Req, {Msg, Handle1}}
     end.

Now, with all that put together and run on a dtrace capable machine, we get a nice little heatmap that updates every second:


Neat isn’t it?

A New Start

Now, it has been a while; it’s surprisingly hard to find a decent way to blog. In the end I landed on Octopress. Let’s see how that turns out; so far, after a slightly bumpy start, it seems decent enough.