Wednesday, May 01, 2013

how to create a 3-node riak cluster?

A very brief intro about riak - http://basho.com/riak/. Riak is a distributed database written in erlang. Each node in a riak cluster runs a complete, independent copy of the riak package, and the cluster has no "master". Data is distributed across nodes using consistent hashing, which ensures that data is spread evenly and that a new node can be added with minimal reshuffling. Each object in a riak cluster has multiple copies distributed across multiple nodes, so the failure of a single node does not necessarily result in data loss.

To set up a 3-node riak cluster, we first set up 3 machines with riak installed. To install riak on ubuntu machines, all that needs to be done is download the "deb" package and run dpkg -i riak_x.x.x_amd64.deb. The version used here was 1.3.1. Three machines with ips 10.20.220.2, 10.20.220.3 & 10.20.220.4 were set up.

To set up riak on the 1st node, there are 3 config changes that need to be made:

1. replace the http ip: in /etc/riak/app.config, replace the ip in {http, [ {"127.0.0.1", 8098 } ]} with 10.20.220.2
2. replace the pb_ip: in /etc/riak/app.config, replace the ip in {pb_ip,   "127.0.0.1" } with 10.20.220.2
3. change the name of the riak node to match your ip: in /etc/riak/vm.args, change the name to riak@10.20.220.2
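The three edits above can be scripted with sed. A minimal sketch, run here against sample copies of the stock config lines rather than the live files under /etc/riak (on a real node, point the paths at /etc/riak/app.config and /etc/riak/vm.args):

```shell
NODE_IP="10.20.220.2"
TMP=$(mktemp -d)

# Sample of the relevant default lines from app.config
cat > "$TMP/app.config" <<'EOF'
{http, [ {"127.0.0.1", 8098 } ]},
{pb_ip,   "127.0.0.1" },
EOF

# Sample of the default node name from vm.args
cat > "$TMP/vm.args" <<'EOF'
-name riak@127.0.0.1
EOF

# Edits 1 and 2: point the http and pb listeners at the node's ip
sed -i "s/127\.0\.0\.1/$NODE_IP/g" "$TMP/app.config"
# Edit 3: rename the node to match its ip
sed -i "s/127\.0\.0\.1/$NODE_IP/" "$TMP/vm.args"

cat "$TMP/app.config" "$TMP/vm.args"
```

The same two sed invocations, pointed at the real config files, can be dropped into a provisioning script with NODE_IP set per machine.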



If you had started riak earlier - before making the ip related changes - you will need to clear the ring and the backend db. Stop riak ("riak stop") and then do the following:

rm -rf /var/lib/riak/bitcask/
rm -rf /var/lib/riak/ring/



To start the first node, run riak start.

To prepare the second node, make the same config changes with the ip 10.20.220.3. Once done, do a "riak start". To join this node to the cluster, do the following:

root@riak2# riak-admin cluster join riak@10.20.220.2
Attempting to restart script through sudo -H -u riak
Success: staged join request for 'riak@10.20.220.3' to 'riak@10.20.220.2'

Check the cluster plan:

root@riak2# riak-admin cluster plan
Attempting to restart script through sudo -H -u riak
===============================Staged Changes================================
Action         Nodes(s)
-------------------------------------------------------------------------------
join           'riak@10.20.220.3'
-------------------------------------------------------------------------------

NOTE: Applying these changes will result in 1 cluster transition

###############################################################################
                         After cluster transition 1/1
###############################################################################

=================================Membership==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
valid     100.0%     50.0%    'riak@10.20.220.2'
valid       0.0%     50.0%    'riak@10.20.220.3'
-------------------------------------------------------------------------------
Valid:2 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

WARNING: Not all replicas will be on distinct nodes

Transfers resulting from cluster changes: 32
  32 transfers from 'riak@10.20.220.2' to 'riak@10.20.220.3'


Commit the cluster plan:

root@riak2# riak-admin cluster commit
Attempting to restart script through sudo -H -u riak
Cluster changes committed

Add one more node.

Prepare the 3rd node by making the same config changes with the ip 10.20.220.4, and add this node to the riak cluster:

root@riak3# riak-admin cluster join riak@10.20.220.2
Attempting to restart script through sudo -H -u riak
Success: staged join request for 'riak@10.20.220.4' to 'riak@10.20.220.2'

Check the plan and commit the new node to the cluster:

root@riak3# riak-admin cluster plan
Attempting to restart script through sudo -H -u riak
=============================== Staged Changes ================================
Action         Nodes(s)
-------------------------------------------------------------------------------
join           'riak@10.20.220.4'
-------------------------------------------------------------------------------

NOTE: Applying these changes will result in 1 cluster transition

###############################################################################
                         After cluster transition 1/1
###############################################################################

================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
valid      50.0%     34.4%    'riak@10.20.220.2'
valid      50.0%     32.8%    'riak@10.20.220.3'
valid       0.0%     32.8%    'riak@10.20.220.4'
-------------------------------------------------------------------------------
Valid:3 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

WARNING: Not all replicas will be on distinct nodes

Transfers resulting from cluster changes: 21
  10 transfers from 'riak@10.20.220.2' to 'riak@10.20.220.4'
  11 transfers from 'riak@10.20.220.3' to 'riak@10.20.220.4'

root@riak3# riak-admin cluster commit
Attempting to restart script through sudo -H -u riak
Cluster changes committed

Check the ring status:

root@riak3# riak-admin status | grep ring
Attempting to restart script through sudo -H -u riak
ring_members : ['riak@10.20.220.2','riak@10.20.220.3','riak@10.20.220.4']
ring_num_partitions : 64
ring_ownership : <<"[{'riak@10.20.220.2',22},{'riak@10.20.220.3',21},{'riak@10.20.220.4',21}]">>
ring_creation_size : 64
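As a quick sanity check, the per-node partition counts in ring_ownership should add up to ring_num_partitions (64 here). A small sketch that sums them, using the status output pasted above as sample input:

```shell
# ring_ownership line from `riak-admin status`, pasted in as sample input
ownership="[{'riak@10.20.220.2',22},{'riak@10.20.220.3',21},{'riak@10.20.220.4',21}]"

# Pull out each node's partition count and sum them
total=$(echo "$ownership" | grep -o ",[0-9]*}" | tr -d ',}' | awk '{s+=$1} END {print s}')
echo "partitions claimed: $total"   # should equal ring_num_partitions
```

With a balanced 3-node ring of 64 partitions, ownership splits roughly 22/21/21, as seen above.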


For advanced configuration, refer to:

http://docs.basho.com/riak/latest/cookbooks/Adding-and-Removing-Nodes/
