Wednesday, March 31, 2010

CouchDB Relaximation

‹prev | My Chain | next›

Getting back to node.js, I think I will explore some more of the node.js projects that work with CouchDB. Several folks were kind enough to provide links in response to a post from last week. One of the first on the list is relaximation.

The relaximation script establishes a number of node.js clients to perform concurrent reads and writes against a CouchDB server. It then performs a statistical analysis of the results, providing a nice graph output. By default, it creates 50 writing clients and 200 reading clients per process. Other defaults can be seen in the help:
cstrom@whitefall:~/repos/relaximation/tests$ ~/local/bin/node compare_write_and_read.js --help
-w, --wclients :: Number of concurrent write clients per process. Default is 50.
-r, --rclients :: Number of concurrent read clients per process. Default is 200.
-u, --url1 :: CouchDB url to run tests against. Default is http://localhost:5984
-v, --url2 :: CouchDB url to run tests against. Default is http://localhost:5985
-1, --name1 :: Name of first comparative. Required.
-2, --name2 :: Name of first comparative. Required.
-d, --doc :: small or large doc. Default is small.
-t, --duration :: Duration of the run in seconds. Default is 60.
-i, --poll :: Polling interval in seconds. Default is 1.
-p, --graph :: CouchDB to persist results in. Default is http://couchdb.couchdb.org/graphs
-r, --recurrence :: How many times to run the tests. Deafult is 10.
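Just to illustrate the basic idea of a pile of concurrent clients in node.js (this is a rough sketch of my own, not relaximation's code, using the same node 0.1.x http.createClient and request.close() API that shows up elsewhere in this chain; the /test database path is only a stand-in):
var sys = require('sys'),
    http = require('http');

var CLIENTS = 10;  // relaximation defaults to 50 writers and 200 readers

for (var i = 0; i < CLIENTS; i++) {
  (function (id) {
    var client = http.createClient(5984, '127.0.0.1');

    // Each "client" issues a GET, waits for the response to end,
    // logs the elapsed time, and immediately issues another.
    function hit() {
      var start = new Date();
      var request = client.request('GET', '/test');

      request.addListener('response', function (response) {
        response.addListener('end', function () {
          sys.puts('client ' + id + ': ' + (new Date() - start) + 'ms');
          hit();
        });
      });

      request.close();
    }

    hit();
  })(i);
}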
I am still running on my netbook here, which only has a single CouchDB server. I do have the VMs lying about, so let's see how a bare metal CouchDB server compares to a VM CouchDB server:
cstrom@whitefall:~/repos/relaximation/tests$ ~/local/bin/node compare_write_and_read.js --name1 netbook --name2 vm-on-netbook --url2 http://couch-011a.local:5984
{"time":1.006,"writes":{"clients":10,"average":63,"last":52},"reads":{"clients":0,"average":null,"last":null}}
{"time":2.001,"writes":{"clients":18,"average":155,"last":63},"reads":{"clients":9,"average":154,"last":33}}
{"time":3,"writes":{"clients":28,"average":259,"last":13},"reads":{"clients":18,"average":209,"last":211}}
{"time":4.002,"writes":{"clients":33,"average":461,"last":200},"reads":{"clients":27,"average":236,"last":113}}
{"time":5.001,"writes":{"clients":45,"average":629,"last":134},"reads":{"clients":35,"average":371,"last":158}}
{"time":6.003,"writes":{"clients":45,"average":640,"last":56},"reads":{"clients":46,"average":229,"last":113}}
{"time":7.002,"writes":{"clients":46,"average":1003,"last":526},"reads":{"clients":55,"average":343,"last":16}}
...
... Lots and lots and lots of output
...
{"time":58.002,"writes":{"clients":50,"average":2759,"last":23},"reads":{"clients":200,"average":2636,"last":13}}
{"time":59.008,"writes":{"clients":50,"average":2741,"last":32},"reads":{"clients":200,"average":2578,"last":30}}
http://couchdb.couchdb.org/graphs/_design/app/_show/compareWriteReadTest/f4eb5ba38837ce71c2ced8e583004e4f
That is one pretty graph. Kudos on the use of a <canvas> graph.

It is interesting to look at the graph and see that reads and writes both peak at about 30 seconds, then decrease slightly until ~40 seconds of heavy pounding, at which point things stay nice and steady. I would have expected things to get optimized relatively quickly and then stabilize. Perhaps the slight downward trend is a result of reaching some threshold for the number of documents in the database. Grist for IRC conversations tomorrow.

Another interesting thing to note is that the VM outperforms the localhost CouchDB server. It took me a while to remember that the VM and localhost CouchDB servers are at different versions (0.11-pre vs. 0.10). It seems clear that there were some optimizations added between 0.10 and 0.11. All the more reason to upgrade.

The last thing to note on the graphs is that the VM/0.11 graphs are nice and smooth while the localhost/0.10 graphs are jagged (even after 10 runs). I am not sure of the reason for this, but the last few runs against this DB resulted in errors similar to:
[Thu, 01 Apr 2010 01:55:23 GMT] [debug] [<0.25812.14>] httpd 500 error response:
{"error":"unknown_error","reason":"normal"}


[Thu, 01 Apr 2010 01:55:23 GMT] [info] [<0.25873.14>] Stacktrace: [{mochiweb_request,send,2},
{mochiweb_request,respond,2},
{couch_httpd,send_response,4},
{couch_httpd_db,do_db_req,2},
{couch_httpd,handle_request,5},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]
Too many of those could have skewed the statistics.

Looking through the code, there are oodles of things to learn. It still astounds me how much one can do with Javascript. This guy even implements his own OptionParser in Javascript. Crazy stuff.

That was fun and even more educational than I expected. The concurrent clients seem a nice use of node.js that I would not have otherwise thought of. Tomorrow, I think I would like to explore some of the node.js libraries that interact with the _changes API in CouchDB.

Day #59

Tuesday, March 30, 2010

Retrospective: Week Eight

‹prev | My Chain | next›

In an ongoing effort to make this chain as valuable as possible to myself (and possibly others), I perform a weekly retrospective of how the past week went (notes from last week). I do this on Tuesdays to avoid conflicts with B'more on Rails, which usually holds events then.

WHAT WENT WELL

  • Got started with node.js. I was able to install and get a simple interface to CouchDB running in a single day.
  • Learned a bit more about CouchDB replication, and this after I had already written the couch-replicate gem. It is interesting that I was forced to learn it (or happened upon it, depending on your point of view) while exploring node.js.
  • Applied replication lessons learned to the couch-replicate gem.
  • Simplified couch-replicate a bit at the last B'more on Rails open source hack night. Good times.

OBSTACLES / THINGS THAT WENT NOT SO WELL

  • A divergence into adding a feature to couch_docs took me three days; I was not done until today. This is a fair diversion from the ostensible purpose of my chain.
  • I completely forgot about learning org-mode.

WHAT I'LL TRY NEXT WEEK

  • Org-mode. For an Emacs user, I make far too little use of this. I need to keep better track of patches / ideas for my gems and org-mode seems like a good starting place (carried over from last week).
  • No more couch_docs work (or anything else) unless it ties directly to something that I have learned or am learning. These last three days have just felt awkward.


Day #58

Monday, March 29, 2010

Small couch_docs Updates—Getting It Done

‹prev | My Chain | next›

Picking up from last night, I need couch_docs to update individual CouchDB documents when they change (currently all documents are updated when anything in the watched directory is updated). An RSpec example describing this:
        it "should update documents (if any)" do
file_mock = mock("File", :path => "/foo")
@it.stub!(:documents).and_return([file_mock])

CouchDocs.
should_receive(:put_file).
with("/foo")

@it.directory_watcher_update(@args)
end
I get that passing easily enough by adding a call to put_file inside an iterator over the document updates:
  def directory_watcher_update(args)
    # ...
    documents.each do |update|
      CouchDocs.put_file(update.path)
    end
    # ...
  end
That gets all of my specs passing. There's just one tiny problem: there is no CouchDocs.put_file method.

The put_file method should read a file, use the basename as the document ID, and put it on the CouchDB store. The couch_docs gem already has a Store object with a put! method. The RSpec example describing the behavior I am after is:
  it "should be able to upload a single document into CouchDB" do
Store.
should_receive(:put!).
with('uri/foo', {"foo" => "1"})

File.stub!(:read).and_return('{"foo": "1"}')

CouchDocs.put_file("uri", "/foo")
end
My first pass at making this pass is:
  def self.put_file(db_uri, file_path)
    contents = File.read(file_path)
    name = File.basename(file_path, ".json")
    Store.put!("#{db_uri}/#{name}", contents)
  end
I read the file, derive the name/ID from the file's basename and PUT the contents in the store. Unfortunately, this fails with:
1)
Spec::Mocks::MockExpectationError in 'CouchDocs should be able to upload a single document into CouchDB'
<CouchDocs::Store (class)> received :put! with unexpected arguments
expected: ("uri/foo", {"foo"=>"1"})
got: ("uri/foo", "{\"foo\": \"1\"}")
./spec/couch_docs_spec.rb:61:
Ah, I forgot. The CouchDocs::Store.put! method expects the contents to be a hash, but the files are stored as JSON. At some point I really need to consider switching that, but this is not the time. Instead, I parse the JSON:
  def self.put_file(db_uri, file_path)
    contents = JSON.parse(File.read(file_path))
    name = File.basename(file_path, ".json")
    Store.put!("#{db_uri}/#{name}", contents)
  end
And the example passes.

That should just about take care of uploading only the changed files with couch_docs. I will smoke test tomorrow and possibly factor the directory watching code out of the command line (where it has a definite odor).

Day #57

Sunday, March 28, 2010

Small couch_docs Updates

‹prev | My Chain | next›

After updating couch-replicate yesterday, I take a look at my couch_docs gem tonight. I received a patch a while back suggesting that watch mode should not update all documents when any update is made. That seems reasonable, so...

When I first start watching a directory, all documents should be pushed to the CouchDB server. Afterwards, if the updates are design documents, only the design documents should be updated. If the updates are normal documents, then the individual, updated documents should be updated. Sounds like a bunch of predicate methods to me.

First up, an RSpec example describing initial directory parsing:
    it "should be an initial add if everything is an add" do
args = [mock(:type => :added),
mock(:type => :added)]
CommandLine.should be_initial_add(args)
end
This dumps me into change-the-message of:
1)
NoMethodError in 'CouchDocs::CommandLine an instance that dumps a CouchDB database should be an initial add if everything is an add'
undefined method `initial_add?' for CouchDocs::CommandLine:Class
./spec/couch_docs_spec.rb:471:
I eventually get this passing with:
  def initial_add?(args)
    args.all? { |f| f.type == :added }
  end
I add a few more predicate methods, and then I am ready to refactor my directory watcher update block:
        dw.add_observer do |*args|
          puts "Updating documents on CouchDB Server..."
          CouchDocs.put_dir(@options[:couchdb_url],
                            @options[:target_dir])
        end
First up, I pull the put_dir call out into a new (testable) directory_watcher_update method:
# ...
dw.add_observer do |*args|
  puts "Updating documents on CouchDB Server..."
  directory_watcher_update(args)
end
# ...

def directory_watcher_update(args)
  CouchDocs.put_dir(@options[:couchdb_url],
                    @options[:target_dir])
end
I run all of my specs to ensure nothing has broken (it has not) before describing in more detail what should happen with directory watcher updates. First up, it should update both design documents and normal documents when first starting up (which is what put_dir does):
      it "should only update design docs if only local design docs have changed" do
CouchDocs.
should_receive(:put_dir)

@it.stub!(:initial_add?).and_return(true)
@it.directory_watcher_update(@args)
end
That example passes without any changes (because that is the current behavior of the method). The trick will be to retain this behavior going forward.

I drive this method to be able to handle design document updates as well before needing to call it a night:
  def directory_watcher_update(args)
    if initial_add? args
      CouchDocs.put_dir(@options[:couchdb_url],
                        @options[:target_dir])
    else
      if design_doc_update? args
        CouchDocs.put_design_dir(@options[:couchdb_url],
                                 "#{@options[:target_dir]}/_design")
      end
    end
  end
The call to update normal (non-design) documents may require some refactoring before it can be used in here. I will pick up with that (and hopefully finish) tomorrow.

Day #56

Saturday, March 27, 2010

Catching Up

‹prev | My Chain | next›

Having messed around with node.js and CouchDB replication a bit, I head back to the comfortable lands of Ruby to touch up a gem or two. One of the things that I learned in my exploration was that, although it can go either way, CouchDB optimizes pull replication. That is, when posting to the replication resource on a CouchDB server, the replication will perform better if the target DB is on the same server (and the source is on another server).

When I wrote the couch-replicate gem, I had not known there was a difference. I got the 50-50 chance wrong. So first up, an RSpec example:
  it "should default to local target (pull) replicate" do
RestClient.
should_receive(:post).
with("#{@target_host}/_replicate",
%Q|{"source":"#{@src_host}/#{@db}", "target":"#{@db}", "continuous":true}|)

CouchReplicate.replicate(@src_host, @target_host, @db)
end
That fails with:
cstrom@whitefall:~/repos/couch-replicate$ spec ./spec/couch_replicate_spec.rb 
F..........

1)
Spec::Mocks::MockExpectationError in 'CouchReplicate should default to local target (pull) replicate'
RestClient received :post with unexpected arguments
expected: ("http://couch02.example.org:5984/_replicate", "{\"source\":\"http://couch01.example.org:5984/test\", \"target\":\"test\", \"continuous\":true}")
got: ("http://couch01.example.org:5984/_replicate", "{\"source\":\"test\", \"target\":\"http://couch02.example.org:5984/test\", \"continuous\":true}")
./spec/couch_replicate_spec.rb:16:

Finished in 0.060957 seconds

11 examples, 1 failure
Yup, got it exactly wrong. To fix, I update the CouchReplicate.replicate method:
  def self.replicate(source_host, target_host, db)
    source = hostify(source_host)
    target = hostify(target_host)
    RestClient.post("#{target}/_replicate",
                    %Q|{"source":"#{source}/#{db}", "target":"#{db}", "continuous":true}|)
  end
I have to update a few specs, but now have couch-replicate "doing it right". I have a few things I'd like to do in couch_docs, but I will stick to couch-replicate tonight. Specifically, I would like to try the create_target attribute. I am not sure if that is a replicate-only option or a create-then-replicate option.

To find out I edit a document on a previously replicated server:



Then, I add create_target to the couch-replicate gem:
  def self.replicate(source_host, target_host, db)
source = hostify(source_host)
target = hostify(target_host)
RestClient.post("#{target}/_replicate",
%Q|{"source":"#{source}/#{db}", "target":"#{db}", "continuous":true, "create_target":true}|)
end
Finally, I set replication in motion:
cstrom@whitefall:~/repos/couch-replicate$ couch-replicate test couch-011a.local couch-011b.local couch-011c.local
Linking replication hosts...
If CouchDB replicate-with-create_target is non-destructive, then the document on server B will not be overwritten and will make it onto server A. That is exactly what happens:



Since this is undocumented, I will not include that in couch-replicate for the time being. I do release 0.0.3 of the gem with the default pull replication.

Day #55

Friday, March 26, 2010

A Silly CouchDB Replication Scheme in node.js

‹prev | My Chain | next›

Picking up from last night's failure to get node.js playing nicely with CouchDB replication, I start by trying it from the command line. The failure stemmed from the use of the create_target attribute, which should create a new database if it does not already exist. With curl, I find:
cstrom@whitefall:~$ curl -X POST http://localhost:5984/_replicate \
> -d '{"source":"seed", "target":"http://couch-011a.local:5984/seed", "create_target":true}'
{"error":"db_not_found","reason":"could not open http://couch-011a.local:5984/seed/"}
Ah good to know. It is not a problem with node.js. Something in my understanding of the create_target attribute is amiss.

Switching the source and target does, however, work:
cstrom@whitefall:~$ curl -X POST http://couch-011a.local:5984/_replicate \
>     -d '{"source":"http://whitefall.local:5984/seed", "target":"seed", "create_target":true}'
{"ok":true,"session_id":"cb968554905b1527e6fab6eaa0c6f754","source_last_seq":743,"history":[{"session_id":"cb968554905b1527e6fab6eaa0c6f754","start_time":"Fri, 26 Mar 2010 21:40:59 GMT","end_time":"Fri, 26 Mar 2010 21:47:15 GMT","start_last_seq":0,"end_last_seq":743,"recorded_seq":743,"missing_checked":0,"missing_found":675,"docs_read":678,"docs_written":678,"doc_write_failures":0}]}
Interesting. I learn a couple of things from this result. The first is that I need to create another seed DB. It is silly to replicate 678 documents for a test.

More importantly, I figured out how the create_target option works. Specifically, it only works when the target database resides on the server being POSTed to, though the wiki seems to imply otherwise. Speaking of the wiki, while reading it, I found that a local target is preferred. I have been doing local source / remote target. That means that I need to update couch-replicate, but first, I want to see last night's node.js replication script through.

I delete the incredibly large seed databases:
cstrom@whitefall:~$ curl -X DELETE http://couch-011a.local:5984/seed
{"ok":true}
cstrom@whitefall:~$ curl -X DELETE http://localhost:5984/seed
{"ok":true}
Using the couch_docs gem, I create a smaller seed DB:
cstrom@whitefall:~/tmp/seed$ ls
2002-08-26-grilled_chicken.json 2002-08-26.json 2002-08-26-pasta.json 2002-08-26-pesto.json _design
cstrom@whitefall:~/tmp/seed$ couch-docs push http://localhost:5984/seed -R
Updating documents on CouchDB Server...
My simple goal is to replicate a database from my localhost (whitefall.local) machine onto CouchDB VM A, then from A to B, and B to C. At the end, VM C, which did not have the seed database at the start, should contain the newly created and replicated DB. The node.js script, now using local targets, becomes:
var
  sys = require('sys'),
  couchdb = require('node-couchdb/lib/couchdb'),
  client = couchdb.createClient(5984, 'whitefall.local'),
  clienta = couchdb.createClient(5984, 'couch-011a.local'),
  clientb = couchdb.createClient(5984, 'couch-011b.local'),
  clientc = couchdb.createClient(5984, 'couch-011c.local');

client.allDbs(function (er, data) {
  sys.puts("DBs on localhost: " + data);
});

clientc.allDbs(function (er, data) {
  sys.puts("DBs on C before replication: " + data);
});

clienta.replicate("http://whitefall.local:5984/seed", "seed", {create_target:true});
clientb.replicate("http://couch-011a.local:5984/seed", "seed", {create_target:true});
clientc.replicate("http://couch-011b.local:5984/seed", "seed", {create_target:true});

clientc.allDbs(function (er, data) {
  sys.puts("DBs on C after replication: " + data);
});
Now when I run the script I find... that the database still is not replicated to C:
cstrom@whitefall:~$ ./local/bin/node ./tmp/node-couch.js 
DBs on localhost: eee,test,seed
DBs on C before replication: test
DBs on C after replication: test
Gah!

The explanation for this failure turns out to be simple enough. Checking the log in server B, I find:
[Fri, 26 Mar 2010 22:53:18 GMT] [error] [<0.222.0>] {error_report,<0.31.0>,
{<0.222.0>,crash_report,
[[{initial_call,{couch_rep,init,['Argument__1']}},
{pid,<0.222.0>},
{registered_name,[]},
{error_info,
{exit,
{db_not_found,<<"http://couch-011a.local:5984/seed/">>},
[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},
But checking server A, I see that the seed database is there:



My guess is that server A has not had time to finish creating / replicating the seed database before server B tries to replicate it. To verify, I add some timeouts to my node.js script:
clienta.replicate("http://whitefall.local:5984/seed", "seed", {create_target:true});

setTimeout(function () {
  clientb.replicate("http://couch-011a.local:5984/seed", "seed", {create_target:true});
}, 2000);
setTimeout(function () {
  clientc.replicate("http://couch-011b.local:5984/seed", "seed", {create_target:true});
}, 4000);

setTimeout(function () {
  clientc.allDbs(function (er, data) {
    sys.puts("DBs on C after replication: " + data);
  });
}, 6000);
Indeed, server C now contains the seed database:
cstrom@whitefall:~$ ./local/bin/node ./tmp/node-couch.js 
DBs on localhost: eee,test,seed
DBs on C before replication: test
DBs on C after replication: seed,test
This is clearly a contrived example. Were I to even try this for real in node.js, I would not be using timeouts, which would certainly fail for large databases. Still, I feel like I made progress—I figured out why last night's script failed and learned a thing or two about CouchDB replication.
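Were I to do this for real, the cleaner approach would be to chain the replications, kicking each one off from the completion callback of the one before it. I have not verified that node-couchdb's replicate() accepts a trailing (error, result) callback the way allDbs() does, so treat this as a sketch of the shape (reusing the client variables from the script above) rather than working code:
// Assumption: replicate() takes a trailing (er, result) callback like allDbs() does.
clienta.replicate("http://whitefall.local:5984/seed", "seed", {create_target:true}, function (er) {
  if (er) return sys.puts("replication to A failed: " + JSON.stringify(er));

  clientb.replicate("http://couch-011a.local:5984/seed", "seed", {create_target:true}, function (er) {
    if (er) return sys.puts("replication to B failed: " + JSON.stringify(er));

    clientc.replicate("http://couch-011b.local:5984/seed", "seed", {create_target:true}, function (er) {
      if (er) return sys.puts("replication to C failed: " + JSON.stringify(er));

      // Only check C once the whole chain has completed.
      clientc.allDbs(function (er, data) {
        sys.puts("DBs on C after replication: " + data);
      });
    });
  });
});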

Day #54

Thursday, March 25, 2010

Spike node-couchdb

‹prev | My Chain | next›

Last night I began my flirtation with node.js (with a decided bias towards CouchDB). I had based some of my exploration on node-couch, which I found easy to follow along with while trying to figure things out. Several folks suggested checking out node-couchdb, so let's give it a whirl...

I clone the repository into ~/repos like all of my other code repositories:
cstrom@whitefall:~/repos$ git clone git://github.com/felixge/node-couchdb.git
Initialized empty Git repository in /home/cstrom/repos/node-couchdb/.git/
...
The instructions for node-couchdb introduce me to the node.js location for libraries, which, for local users, is ~/.node_libraries:
cstrom@whitefall:~$ mkdir ~/.node_libraries
cstrom@whitefall:~$ cd !$
cd ~/.node_libraries
cstrom@whitefall:~/.node_libraries$ ln -s ../repos/node-couchdb
To use that library in a node.js script, I need to assign a local variable to the result of a require statement:
// ...
couchdb = require('node-couchdb/lib/couchdb'),
//...
The node-couchdb part of that path is the repository I just symlinked, and lib/couchdb points to couchdb.js in that directory. I am eager to explore that library to figure out how it returns an object assigned to the couchdb variable, but I will likely leave that for another day. For now, I try to use the couchdb variable to print out a list of all DBs, just as I did manually last night:
var
  sys = require('sys'),
  couchdb = require('node-couchdb/lib/couchdb'),
  client = couchdb.createClient(5984, 'localhost');

sys.puts(client.allDbs());
The allDbs() method definition does not contain a callback in the documentation, so I wonder if it will work like this:
cstrom@whitefall:~$ ./local/bin/node ./tmp/node-couch.js
undefined
Ah, it does not. It is beginning to dawn on me that everything needs a callback in node.js. I am not sure how many arguments are needed for allDbs(), but I would guess just a callback:
var
  sys = require('sys'),
  couchdb = require('node-couchdb/lib/couchdb'),
  client = couchdb.createClient(5984, 'localhost');

client.allDbs(function (er, data) {
  sys.puts(data);
});
The other request methods in node-couch have callbacks with two arguments: an error object (set only if an error occurs) and a data object. Above, I am assuming no errors. Running this script, I find:
cstrom@whitefall:~$ ./local/bin/node ./tmp/node-couch.js 
eee,test,seed
Cool! That is a lot less code than I had to use last night, so node-couchdb is already a win.
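As an aside, handling the error argument that I am ignoring above would look something like this (a sketch; er should only be set when the request fails, for example if CouchDB is not running):
client.allDbs(function (er, data) {
  if (er) {
    // er is only set when something went wrong talking to CouchDB
    sys.puts("could not list DBs: " + JSON.stringify(er));
    return;
  }
  sys.puts(data);
});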

Ooh! There is a replicate() method in node-couchdb! And it accepts options like {create_target:true}? I cannot resist replication, but I was not even aware there was a {create_target:true}—now I have to try this out. So I boot three of my VMs, then I modify my script to output all DBs on my localhost CouchDB instance and on one of my VMs:
var
  sys = require('sys'),
  couchdb = require('node-couchdb/lib/couchdb'),
  client = couchdb.createClient(5984, 'localhost'),
  clienta = couchdb.createClient(5984, 'couch-011a.local'),
  clientb = couchdb.createClient(5984, 'couch-011b.local'),
  clientc = couchdb.createClient(5984, 'couch-011c.local');

client.allDbs(function (er, data) {
  sys.puts(data);
});

clientc.allDbs(function (er, data) {
  sys.puts(data);
});
Running that script I find that the VM does not have a "seed" DB:
cstrom@whitefall:~$ ./local/bin/node ./tmp/node-couch.js 
eee,test,seed
test
Now, if {create_target:true} does what I think it does, then I ought to be able to replicate the "seed" DB onto clienta, then replicate it from clienta to clientb, and finally from clientb to clientc. Will this work?
var
  sys = require('sys'),
  couchdb = require('node-couchdb/lib/couchdb'),
  client = couchdb.createClient(5984, 'localhost'),
  clienta = couchdb.createClient(5984, 'couch-011a.local'),
  clientb = couchdb.createClient(5984, 'couch-011b.local'),
  clientc = couchdb.createClient(5984, 'couch-011c.local');

client.allDbs(function (er, data) {
  sys.puts(data);
});

clientc.allDbs(function (er, data) {
  sys.puts(data);
});

client.replicate("seed", "http://couch-011a.local:5984/seed", {create_target:true});
clienta.replicate("seed", "http://couch-011b.local:5984/seed", {create_target:true});
clientb.replicate("seed", "http://couch-011c.local:5984/seed", {create_target:true});

clientc.allDbs(function (er, data) {
  sys.puts(data);
});
If it does work, the last line should include a "seed" DB in the output:
cstrom@whitefall:~$ ./local/bin/node ./tmp/node-couch.js
eee,test,seed
test
test
Dang. Checking the localhost log, I see:
[Fri, 26 Mar 2010 02:43:56 GMT] [debug] [<0.2440.0>] httpd 404 error response:
{"error":"db_not_found","reason":"could not open http://couch-011a.local:5984/seed/"}
I am not positive, but that trailing slash on "seed" seems wrong. I am not sure if the problem is that or just that the {create_target:true} is not being honored.

I'm at a bit of a loss at this point, so I will pick it back up starting here tomorrow.

Day #53

Wednesday, March 24, 2010

Getting Started with node.js and CouchDB

‹prev | My Chain | next›

Tonight, I would like to get started with node.js. First downloading and installing:
cstrom@whitefall:~/tmp$ wget http://nodejs.org/dist/node-v0.1.33.tar.gz
--2010-03-24 20:51:40-- http://nodejs.org/dist/node-v0.1.33.tar.gz
Resolving nodejs.org... 97.107.132.72
Connecting to nodejs.org|97.107.132.72|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4016600 (3.8M) [application/octet-stream]
Saving to: `node-v0.1.33.tar.gz'

100%[======================================================================================================>] 4,016,600 142K/s in 26s

2010-03-24 20:52:07 (151 KB/s) - `node-v0.1.33.tar.gz' saved [4016600/4016600]
I store source code repositories in the repos directory, but code without source code management (e.g. tarballs) in my src directory:
cstrom@whitefall:~/tmp$ cd /home/cstrom/src/
cstrom@whitefall:~/src$ tar zxf ../tmp/node-v0.1.33.tar.gz
tar: Ignoring unknown extended header keyword `SCHILY.dev'
tar: Ignoring unknown extended header keyword `SCHILY.ino'
tar: Ignoring unknown extended header keyword `SCHILY.nlink'
tar: Ignoring unknown extended header keyword `SCHILY.dev'
tar: Ignoring unknown extended header keyword `SCHILY.ino'
tar: Ignoring unknown extended header keyword `SCHILY.nlink'
Not sure what's up with the "unknown extended header keyword", but stuff was created:
cstrom@whitefall:~/src/node-v0.1.33$ ls
AUTHORS benchmark bin ChangeLog configure deps doc lib LICENSE Makefile README src test tools wscript
Since I am just messing around, I do not want to use sudo to install. For now, I will install into my "local" directory:
cstrom@whitefall:~/src/node-v0.1.33$ ./configure --prefix=/home/cstrom/local
Check for program g++ or c++ : /usr/bin/g++
Check for program cpp : /usr/bin/cpp
Check for program ar : /usr/bin/ar
Check for program ranlib : /usr/bin/ranlib
Checking for g++ : ok
...
There were a few missing development libraries. Configure completed OK without them, but I note them for later:
...
Checking for library execinfo : not found
Checking for gnutls >= 2.5.0 : fail
...
--- libev ---
...
Checking for header port.h : not found
Checking for header poll.h : ok
Checking for function poll : ok
Checking for header sys/event.h : not found
Checking for header sys/queue.h : ok
Checking for function kqueue : not found
...
I would not expect them to matter because configure completes successfully:
...
creating config.h... ok
creating Makefile... ok
creating config.status... ok
all done.
'configure' finished successfully (9.562s)
With that, I can build the executables:
cstrom@whitefall:~/src/node-v0.1.33$ make
Waf: Entering directory `/home/cstrom/src/node-v0.1.33/build'
...
[22/23] cxx: src/node_idle_watcher.cc -> build/default/src/node_idle_watcher_7.o
[23/23] cxx_link: build/default/src/node_7.o build/default/src/node_child_process_7.o build/default/src/node_constants_7.o build/default/src/node_dns_7.o build/default/src/node_events_7.o build/default/src/node_file_7.o build/default/src/node_http_7.o build/default/src/node_net_7.o build/default/src/node_signal_watcher_7.o build/default/src/node_stat_watcher_7.o build/default/src/node_stdio_7.o build/default/src/node_timer_7.o build/default/src/node_idle_watcher_7.o build/default/deps/libev/ev_1.o build/default/deps/libeio/eio_1.o build/default/deps/evcom/evcom_3.o build/default/deps/http_parser/http_parser_4.o build/default/deps/coupling/coupling_5.o -> build/default/node
Waf: Leaving directory `/home/cstrom/src/node-v0.1.33/build'
'build' finished successfully (9m21.816s)
Cool. So now a make install:
cstrom@whitefall:~/src/node-v0.1.33$ make install
Waf: Entering directory `/home/cstrom/src/node-v0.1.33/build'
* installing deps/libeio/eio.h as /home/cstrom/local/include/node/eio.h
* installing deps/libev/ev.h as /home/cstrom/local/include/node/ev.h
* installing deps/udns/udns.h as /home/cstrom/local/include/node/udns.h
...
* installing build/default/node as /home/cstrom/local/bin/node
* installing build/default/src/node_version.h as /home/cstrom/local/include/node/node_version.h
Waf: Leaving directory `/home/cstrom/src/node-v0.1.33/build'
'install' finished successfully (0.476s)
It looks as though the node.js build scripts have honored my --prefix=/home/cstrom/local and, running the node binary would seem to confirm it:
cstrom@whitefall:~$ ./local/bin/node
No script was specified.
Usage: node [options] script.js [arguments]
Options:
-v, --version print node's version
--debug[=port] enable remote debugging via given TCP port
without stopping the execution
--debug-brk[=port] as above, but break in script.js and
wait for remote debugger to connect
--v8-options print v8 command line options
--vars print various compiled-in variables

Enviromental variables:
NODE_PATH ':'-separated list of directories
prefixed to the module search path,
require.paths.
NODE_DEBUG Print additional debugging output.

Documentation can be found at http://nodejs.org/api.html or with 'man node'
Cool!

Before trying something more complicated, I will give the delayed execution example from the nodejs.org site a try. I save this in ~/tmp/delay.js and execute it with node:
cstrom@whitefall:~$ ./local/bin/node ./tmp/delay.js 
Server running at http://127.0.0.1:8000/
And, accessing this resource does incur a delay of 2 seconds:
cstrom@whitefall:~$ time curl http://127.0.0.1:8000/
Hello World
real 0m2.036s
user 0m0.012s
sys 0m0.016s
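For reference, delay.js itself is not reproduced in this post. The nodejs.org example of the time was essentially a two-second setTimeout wrapped around a "Hello World" response; here is a sketch of it, assuming the same res.writeHead / res.write / res.close API that the next script uses:
var sys = require('sys'),
    http = require('http');

http.createServer(function (req, res) {
  // Wait two seconds before responding
  setTimeout(function () {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.write('Hello World');
    res.close();
  }, 2000);
}).listen(8000);
sys.puts('Server running at http://127.0.0.1:8000/');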
That's all well and good, but what about something a little more complex? This current chain is supposed to be about updating CouchDB. Since CouchDB is HTTP from the ground up, I ought to be able to pull data back from CouchDB inside node.js. After some fiddling, I am able to produce a list of all CouchDB databases (_all_dbs) on my local CouchDB server with this node.js script:
var sys = require('sys'),
    http = require('http');

http.createServer(function (req, res) {
  var client = http.createClient(5984, "127.0.0.1");
  var request = client.request("GET", "/_all_dbs");

  request.addListener('response', function(response) {
    var responseBody = "";

    response.addListener("data", function(chunk) {
      responseBody += chunk;
    });

    response.addListener("end", function() {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.write(responseBody);
      res.close();
    });
  });

  request.close();
}).listen(8000);
sys.puts('Server running at http://127.0.0.1:8000/');
After running the script with node couch.js, I can indeed retrieve the list of databases with curl:
cstrom@whitefall:~$ curl http://127.0.0.1:8000/
["eee","test","seed"]
Piece by piece, this script creates a client and builds a request of the _all_dbs resource:
  var client = http.createClient(5984, "127.0.0.1");
  var request = client.request("GET", "/_all_dbs");
The request needs an event listener:
  request.addListener('response', function(response) {
    // ...
  });
In turn, the response object also needs an event listener to build up the response:
    response.addListener("data", function(chunk) {
responseBody += chunk;
});
And to handle the end of the response from the CouchDB server (which is when the node.js action responds):
    response.addListener("end", function() {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.write(responseBody);
res.close();
});
Finally, and it took me more than a little while to realize this, I need to close the request to get it processed (otherwise things just hang):
  request.close();
The rest of the script is identical to the delayed execution script above.

I am quite sure that is a silly example, but it helps get me on my way with node.js. Tomorrow, I will likely play with node-couch, from which I borrowed liberally while putting this example together.

Day #52

Tuesday, March 23, 2010

Retrospective: Week Seven

‹prev | My Chain | next›

In an ongoing effort to make this chain as valuable as possible to myself (and possibly others), I perform a weekly retrospective of how the past week went (notes from last week). I do this on Tuesdays to avoid conflicts with B'more on Rails, which usually holds events then.

WHAT WENT WELL

OBSTACLES / THINGS THAT WENT NOT SO WELL

  • 9 VirtualBox Debian VMs did start to slow down my tiny netbook.
  • Hostnames on my cloned VMs were a pain to set (had to edit /etc/hosts and /etc/hostname, then reboot each)
  • Completely forgot to bring a submitted patch for couch_docs to the B'more on Rails open source hack night.
  • My github commits do not look like they are coming from me

WHAT I'LL TRY NEXT WEEK

  • Org-mode. For an Emacs user, I make far too little use of this. I need to keep better track of patches / ideas for my gems and org-mode seems like a good starting place


Day #51

Monday, March 22, 2010

Extreme CouchDB Replication

‹prev | My Chain | next›

Following up on last night's effort to create the couch-replicate gem, I hope tonight to use that gem to explore extremely fault tolerant CouchDB replication. But first, I am not even sure that I can establish two auto-replications on a single server. To test this out, I start up three servers. When they first start up, there is no replication:



I use couch-replicate to establish a linked list of replication:
cstrom@whitefall:~$ couch-replicate test \
http://couch-011a.local:5984 \
http://couch-011b.local:5984 \
http://couch-011c.local:5984
Linking replication hosts...
Now, there is one-way replication on the first CouchDB server:



Next I use couch-replicate to establish an additional linked list of nodes, but in the opposite direction (I now have doubly linked lists of replication goodness):
cstrom@whitefall:~$ couch-replicate test \
http://couch-011a.local:5984 \
http://couch-011b.local:5984 \
http://couch-011c.local:5984 -r
Reverse linking replication hosts...
Futon confirms that I now have two auto-replication processes on each server:



Cool.

Before I move on to testing a large number of nodes, what happens when I POST the same replication scenario a second time?
cstrom@whitefall:~$ couch-replicate test \
http://couch-011a.local:5984 \
http://couch-011b.local:5984 \
http://couch-011c.local:5984 -r
Reverse linking replication hosts...
Hmmm... It looks as though it successfully established replication again. Happily there are still only the two replication processes in place:



Idempotency. Nice.

So my expectations / hopes of how replication works have been met. I previously created 9 VMs; I might as well put them to some use. Without replication, the database on each exists in isolation:

      +-----+     +-----+     +-----+     +-----+
      |  b  |     |  c  |     |  d  |     |  e  |
      +-----+     +-----+     +-----+     +-----+
+-----+
|  a  |
+-----+
      +-----+     +-----+     +-----+     +-----+
      |  i  |     |  h  |     |  g  |     |  f  |
      +-----+     +-----+     +-----+     +-----+
Linking the databases circularly (the default in couch-replicate) would give something like:

      +-----+     +-----+     +-----+     +-----+
+---->|  b  |---->|  c  |---->|  d  |---->|  e  |--+
|     +-----+     +-----+     +-----+     +-----+  |
+-----+                                            |
|  a  |                                            |
+-----+                                            |
^     +-----+     +-----+     +-----+     +-----+  |
+-----|  i  |<----|  h  |<----|  g  |<----|  f  |<-+
      +-----+     +-----+     +-----+     +-----+
Linking the databases reverse circularly (with --reverse in couch-replicate) would give something like:

      +-----+     +-----+     +-----+     +-----+
+---->|  b  |<--->|  c  |<--->|  d  |<--->|  e  |<-+
v     +-----+     +-----+     +-----+     +-----+  |
+-----+                                            |
|  a  |                                            |
+-----+                                            |
^     +-----+     +-----+     +-----+     +-----+  |
+---->|  i  |<--->|  h  |<--->|  g  |<--->|  f  |<-+
      +-----+     +-----+     +-----+     +-----+
If node "b" goes down, an update to "a" will still reach "c" by replicating counter-clockwise. But what happens if "d" goes down as well? Putting aside the fact that I am stretching the bounds of possibility, "c" would no longer receive updates. Unless...

If I use the nth node replication scheme in couch-replicate with n=2, the "a" node will replicate to "c", "b" will replicate to "d", and so on:

           +-----------------------+
           |                       |
+----------+-----------+           |
|          |           v           v
|       +-----+     +-----+     +-----+     +-----+
| +---->|  b  |<--->|  c  |<--->|  d  |<--->|  e  |<-+
| v     +-----+     +-----+     +-----+     +-----+  |
| +-----+                                            |
+-|  a  |                                            |
  +-----+                                            |
  ^     +-----+     +-----+     +-----+     +-----+  |
  +---->|  i  |<--->|  h  |<--->|  g  |<--->|  f  |<-+
        +-----+     +-----+     +-----+     +-----+
So I establish doubly linked replication plus n=2 replication on all 9 nodes:
cstrom@whitefall:~$ couch-replicate test \
http://couch-011a.local:5984 \
http://couch-011b.local:5984 \
http://couch-011c.local:5984 \
http://couch-011d.local:5984 \
http://couch-011e.local:5984 \
http://couch-011f.local:5984 \
http://couch-011g.local:5984 \
http://couch-011h.local:5984 \
http://couch-011i.local:5984
Linking replication hosts...
cstrom@whitefall:~$ couch-replicate test \
http://couch-011a.local:5984 \
http://couch-011b.local:5984 \
http://couch-011c.local:5984 \
http://couch-011d.local:5984 \
http://couch-011e.local:5984 \
http://couch-011f.local:5984 \
http://couch-011g.local:5984 \
http://couch-011h.local:5984 \
http://couch-011i.local:5984 -r
Reverse linking replication hosts...
cstrom@whitefall:~$ couch-replicate test \
http://couch-011a.local:5984 \
http://couch-011b.local:5984 \
http://couch-011c.local:5984 \
http://couch-011d.local:5984 \
http://couch-011e.local:5984 \
http://couch-011f.local:5984 \
http://couch-011g.local:5984 \
http://couch-011h.local:5984 \
http://couch-011i.local:5984 -n 2
Linking every 2th replication hosts...
Leaving aside the "2th" thing, there are now three replication schemes on all nodes:



To test my theory, I shut down nodes "b" and "d", then I create a "hard_to_replicate" document on "a":



If I have done this correctly, and if my understanding is correct, then replication should still work counter-clockwise (showing up on "i" first) and the update should also reach "c", the node sitting between the two offline nodes. Indeed, the "hard_to_replicate" document does show up on "i":




And also on "c":



Cool! With very little work, it is quite easy to establish a very fault tolerant CouchDB cluster. Even on a little netbook.

Day #50

Sunday, March 21, 2010

Announce: couch-replicate 0.0.1

‹prev | My Chain | next›

Before continuing with my exploration of replication in CouchDB, I think I need a quick Ruby gem to help me out. I shut down all of my VMs after last night, meaning that I'll have to re-establish replication when I bring them back online.

I have already used Bones to generate gems, but have yet to try Jeweler. Many people seem to like it as a gem generator / manager, so this seems like a good time to give it a try.

After choosing yet another gem name with a complete lack of imagination, I am ready to begin. To create a scaffold gem with Jeweler, you just need to call the jeweler command along with the name of your gem:
cstrom@whitefall:~/repos$ jeweler couch-replicate --rspec
create .gitignore
create Rakefile
create LICENSE
create README.rdoc
create .document
create lib
create lib/couch-replicate.rb
create spec
create spec/spec_helper.rb
create spec/couch-replicate_spec.rb
create spec/spec.opts
Jeweler has prepared your gem in couch-replicate
I also pass in the --rspec option to create RSpec specs and helpers. I thought about sticking with Jeweler's default testing framework, Shoulda, because I do not have much experience with it. Although I know some Shoulda and do not expect much trouble with Jeweler, I think it best to limit the number of new things when learning.

Getting started, I need to write a spec or two. Right now the Jeweler generated scaffold is egging me on:
cstrom@whitefall:~/repos/couch-replicate$ spec ./spec/couch-replicate_spec.rb 
F

1)
RuntimeError in 'CouchReplicate fails'
hey buddy, you should probably rename this file and start specing for real
./spec/couch-replicate_spec.rb:5:

Finished in 0.039491 seconds

1 example, 1 failure
Haha. Nice.

I keep the file name and add my first spec. When I established replication last night with curl, the commands looked something like this:
curl -X POST http://couch-011a.local:5984/_replicate \
-d '{"source":"test", "target":"http://couch-011b.local:5984/test", "continuous":true}'
When using RestClient, the overall structure of the POST will be identical. The POST will go against the _replicate resource on the replication source. The JSON payload will include the name of the database being replicated on the source CouchDB server as well as the full URL of the target database (host + DB name). For now, I will assume that the database name on the source and the target will be the same.

So, in RSpec, an example of the behavior I expect is something like:
  it "should be able to tell a node to replicate itself" do
RestClient.
should_receive(:post).
with("#{@src_host}/_replicate",
%q|{"source":"#{@db}", "target":"#{@target_host}/#{@db}", "continuous":true}|)

CouchReplicate.replicate(@src_host, @target_host, @db)
end
Executing this example, I find:
cstrom@whitefall:~/repos/couch-replicate$ spec ./spec/couch-replicate_spec.rb 
F

1)
NameError in 'CouchReplicate should be able to tell a node to replicate itself'
uninitialized constant RestClient
./spec/couch-replicate_spec.rb:11:

Finished in 0.010073 seconds

1 example, 1 failure
Well, that's not unexpected. I need to require rest-client, but also list it as a runtime dependency. I add the runtime dependency first:
  Jeweler::Tasks.new do |gem|
    # ...
    gem.add_development_dependency "rspec", "~> 1.2.0"
    gem.add_dependency "rest-client", "~> 1.4.0"
    # ...
  end
I use the pessimistic comparator, "~>" (greater than or equal to, but staying within the same release level), because I got burned recently by an API change between minor releases.

When I run rake now, however, I get:
cstrom@whitefall:~/repos/couch-replicate$ rake
(in /home/cstrom/repos/couch-replicate)
Missing some dependencies. Install them with the following commands:
gem install rspec --version "~> 1.2.0"
Bah! I am pretty sure that is the right form for the comparator. What happens when I run that gem install command?
cstrom@whitefall:~/repos/couch-replicate$ gem install rspec --version "~> 1.2.0"
WARNING: Installing to ~/.gem since /var/lib/gems/1.8 and
/var/lib/gems/1.8/bin aren't both writable.
**************************************************

Thank you for installing rspec-1.2.9

Please be sure to read History.rdoc and Upgrade.rdoc
for useful information about this release.

**************************************************
Successfully installed rspec-1.2.9
1 gem installed
Dang. That was the right comparator. The gem command recognized it, but Jeweler does not. Shame.

Ah well, that may be something for an open source hack night. For now, I will stick with the less restrictive (and less useful) ">=" form:
    gem.add_development_dependency "rspec", ">= 1.2.9"
    gem.add_dependency "rest-client", ">= 1.4.2"
That gets me back to my original failure:
cstrom@whitefall:~/repos/couch-replicate$ rake
(in /home/cstrom/repos/couch-replicate)
All dependencies seem to be installed.
F

1)
NameError in 'CouchReplicate should be able to tell a node to replicate itself'
uninitialized constant RestClient
./spec/couch-replicate_spec.rb:11:

Finished in 0.041447 seconds

1 example, 1 failure
Now I am in the normal BDD cycle of change-the-message or make-it-pass. To change the message, I require 'rubygems' in spec_helper.rb, then require 'restclient' in couch-replicate.rb. The message from the example has now changed:
cstrom@whitefall:~/repos/couch-replicate$ spec ./spec/couch-replicate_spec.rb 
F

1)
NameError in 'CouchReplicate should be able to tell a node to replicate itself'
uninitialized constant CouchReplicate
./spec/couch-replicate_spec.rb:16:

Finished in 0.011301 seconds

1 example, 1 failure
I can change this message by actually defining the CouchReplicate class in the couch-replicate.rb file:
class CouchReplicate
end
Now, I get the message the replicate method has not been defined:
cstrom@whitefall:~/repos/couch-replicate$ spec ./spec/couch-replicate_spec.rb 
F

1)
NoMethodError in 'CouchReplicate should be able to tell a node to replicate itself'
undefined method `replicate' for CouchReplicate:Class
./spec/couch-replicate_spec.rb:16:

Finished in 0.011109 seconds

1 example, 1 failure
After defining the replicate method and working the change-the-message cycle a bit more, I finally make-it-pass with:
  def self.replicate(source_host, target_host, db)
    RestClient.post("#{source_host}/_replicate",
                    %Q|{"source":"#{db}", "target":"#{target_host}/#{db}", "continuous":true}|)
  end
I could have pulled in json/json-pure, but I do not expect to work with complex data structures in this gem. For now, I will leave JSON in a string as the "simplest thing that could possibly work". If need be, I will make use of json/json-pure later.

Before moving on, I try this method out in irb to make sure that I have not overlooked anything. I boot up two of my CouchDB 0.11 VMs and make sure that replication is not enabled:


Yup, no replication going on here. In irb, I tell this couch-011a server to replicate the "test" database to couch-011b:

irb(main):002:0> require 'rubygems'
=> true
irb(main):003:0> require 'couch-replicate'
=> true
irb(main):004:0> CouchReplicate.replicate('http://couch-011a.local:5984', 'http://couch-011b.local:5984', 'test')
=> 202 Accepted | text/plain 59 bytes
That looks promising. Sure enough, there is now a replication process in place:



That's all well and good, but of limited use beyond the curl command from the other night. What I would like is to pass in a database name and a list of servers that should replicate that database. An RSpec example of how I want this to work:
  it "should replicate in a circle" do
another_host = 'http://couch03.example.org:5984'

CouchReplicate.
should_receive(:replicate).
with(another_host, @src_host, @db)

CouchReplicate.link(@db, [@src_host, @target_host, another_host])
end
The easiest way to make that example pass is to replicate the last host to the first host:
  def self.link(db, hosts)
    self.replicate(hosts.last, hosts.first, db)
  end
The example is passing, but not exactly what I want. I also want the first host to replicate to the second, the second to replicate to the third, and so on. I think this example ought to suffice to describe what I want:
    it "should replicate in pairs" do
CouchReplicate.
should_receive(:replicate).
with(@target_host, @another_host, @db)

CouchReplicate.link(@db, [@src_host, @target_host, @another_host])
end
(the @target_host and @another_host are the second and third hosts in the list supplied to link)

To implement that, I use an Enumerable method that I never thought I would ever legitimately need: each_cons. I have no idea what the mnemonic for cons is (Lisp?), but each_cons(2) will iterate through the first and second elements of the array, then the second and third, then the third and fourth, and so on. This is perfect for linking hosts in an array:
  def self.link(db, hosts)
    Array(hosts).each_cons(2) do |src, target|
      self.replicate(src, target, db)
    end

    self.replicate(hosts.last, hosts.first, db)
  end
Amazingly, that makes the example pass. I cannot believe I actually used each_cons. I have seen that method dozens of times when reading the Enumerable documentation and each time thought, "why on earth would I ever do something like that?" Now I know.

In addition to linking forward, I would also like to link in the opposite direction:
    it "should be able to reverse link" do
CouchReplicate.
should_receive(:link).
with(@db, [@host03, @host02, @host01])

CouchReplicate.reverse_link(@db, [@host01, @host02, @host03])
end
A simple call to reverse will suffice:
  def self.reverse_link(db, hosts)
    self.link(db, hosts.reverse)
  end
Lastly, I would like to support replicating to every nth host. If there are ten hosts, and I replicate to every 3rd host, then host 1 should replicate to host 4, host 2 to host 5, ..., host 10 to host 3. In RSpec:
    it "should replicate nth host" do
CouchReplicate.
should_receive(:replicate).
with(@host02, @host05, @db)

CouchReplicate.nth(3, @db, [@host01, @host02, @host03, @host04, @host05])
end
I can make this pass with:
  def self.nth(n, db, hosts)
    Array(hosts).each_cons(n+1) do |src, *n_hosts|
      self.replicate(src, n_hosts.last, db)
    end
  end
That works fine until the end is reached, but @host05 is not told to replicate to @host03. To make it do so, I create an RSpec example:
    it "should replicate nth host in a circle" do
CouchReplicate.
should_receive(:replicate).
with(@host05, @host03, @db)

CouchReplicate.nth(3, @db, [@host01, @host02, @host03, @host04, @host05])
end
That fails until I pad the end of the each_cons array with n extra entries from the beginning of the hosts array:
  def self.nth(n, db, hosts)
    (Array(hosts) + Array(hosts)[0..n]).each_cons(n+1) do |src, *n_hosts|
      self.replicate(src, n_hosts.last, db)
    end
  end
The functional programmer in me is shuddering and I cannot believe that I have now used each_cons twice, but hey, it works.

After building a minimal command line interface, I add extremely minimal documentation and am ready to publish to github:
cstrom@whitefall:~/repos/couch-replicate$ rake github:release                  # Release Gem to GitHub
(in /home/cstrom/repos/couch-replicate)
Pushing master to origin
rake aborted!
git push "origin" "master" 2>&1:ERROR: eee-c/couch-replicate doesn't exist yet. Did you enter it correctly?
fatal: The remote end hung up unexpectedly

(See full trace by running task with --trace)
Aw nuts! Looking back, it seems that I missed the --create-repo when I first ran jeweler. Ah well. I create the repository on github, then I can release:
cstrom@whitefall:~/repos/couch-replicate$ rake github:release                  # Release Gem to GitHub
(in /home/cstrom/repos/couch-replicate)
Pushing master to origin
Then I can release to gemcutter:
cstrom@whitefall:~/repos/couch-replicate$ rake gemcutter:release               # Release gem to Gemcutter
(in /home/cstrom/repos/couch-replicate)
Generated: couch-replicate.gemspec
couch-replicate.gemspec is valid.
WARNING: no rubyforge_project specified
Successfully built RubyGem
Name: couch-replicate
Version: 0.0.1
File: couch-replicate-0.0.1.gem
Executing "gem push ./pkg/couch-replicate-0.0.1.gem":
gem push ./pkg/couch-replicate-0.0.1.gem
Pushing gem to Gemcutter...
Successfully registered gem: couch-replicate (0.0.1)
With that, I can install my new gem:
cstrom@whitefall:~/repos/couch-replicate$ gem install couch-replicate
WARNING: Installing to ~/.gem since /var/lib/gems/1.8 and
/var/lib/gems/1.8/bin aren't both writable.
Successfully installed couch-replicate-0.0.1
1 gem installed
And run it:
cstrom@whitefall:~$ couch-replicate test http://couch-011a.local:5984 http://couch-011b.local:5984 --reverse
Reverse linking replication hosts...
Not bad. From conception to a released gem in a single day. I am not quite sure that I am 100% on everything that Jeweler took care of behind the scenes, but that is probably more a function of my pushing to release in a single day. This Jeweler thing seems pretty nice.

Day #49

Saturday, March 20, 2010

Circular Replication with CouchDB

‹prev | My Chain | next›

Today I would like to play with CouchDB replication on some VirtualBox VMs. First up, I clone a bunch of VMs:
cstrom@whitefall:~/.VirtualBox/HardDisks$ VBoxManage clonehd couch-0.11-base.vdi couch-0.11a.vdi
VirtualBox Command Line Management Interface Version 3.0.8_OSE
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: aaa53dee-9d51-4eba-b56e-de8eacee9708
cstrom@whitefall:~/.VirtualBox/HardDisks$ VBoxManage clonehd couch-0.11-base.vdi couch-0.11b.vdi
VirtualBox Command Line Management Interface Version 3.0.8_OSE
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: aafb5319-8364-42dd-8ec3-a12a1235bd15

...

cstrom@whitefall:~/.VirtualBox/HardDisks$ VBoxManage clonehd couch-0.11-base.vdi couch-0.11i.vdi
VirtualBox Command Line Management Interface Version 3.0.8_OSE
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: 6cae9857-1353-4570-b518-ec8ac3a79a86
I then add them to the VirtualBox Virtual Media Manager:



I can then create VMs from the cloned hard drives. Unfortunately, for each VM, I will need to set a different hostname (if I want to rely on avahi hostnames). I fire up a VM, change the hostname and check connectivity only to find that there is none. In fact, there isn't even a network interface:



It took me a bit to recall, but I have run into this problem when cloning VMs in the past. The problem is that the cloned VMs are assigned a new network MAC address, but the udev rules are specific to the VM from which the clones were made. To get around this, I edit /etc/udev/rules.d/70-persistent-net.rules such that the ATTR{address} attribute matches a wildcard MAC:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:*", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
I also delete any entry for NAME="eth1" or above.

After fixing that and editing the hostname on VMs a-f, I have 6 running VMs with CouchDB 0.11. To test out replication, I need a common database, so I use couch_docs to create and populate a database:
cstrom@whitefall:~/tmp/seed$ couch-docs push http://couch-011a.local:5984/test . -R
Updating documents on CouchDB Server...
Before doing the same on b-f, I check Futon on couch-011a:



Yup, the DB is really there. I create and populate the test database on couch-011b all the way through couch-011f:
cstrom@whitefall:~/tmp/seed$ couch-docs push http://couch-011b.local:5984/test . -R
Updating documents on CouchDB Server...
cstrom@whitefall:~/tmp/seed$ couch-docs push http://couch-011c.local:5984/test . -R
Updating documents on CouchDB Server...
cstrom@whitefall:~/tmp/seed$ couch-docs push http://couch-011d.local:5984/test . -R
Updating documents on CouchDB Server...
cstrom@whitefall:~/tmp/seed$ couch-docs push http://couch-011e.local:5984/test . -R
Updating documents on CouchDB Server...
cstrom@whitefall:~/tmp/seed$ couch-docs push http://couch-011f.local:5984/test . -R
Updating documents on CouchDB Server...
Now it is time to replicate. The easiest replication that I can think of for 6 servers is a round-robin:
            +-----+
     +----->|  a  |------+
     |      +-----+      |
     |                   v
  +-----+             +-----+
  |  f  |             |  b  |
  +-----+             +-----+
     ^                   |
     |                   v
  +-----+             +-----+
  |  e  |             |  c  |
  +-----+             +-----+
     ^                   |
     |      +-----+      |
     +------|  d  |<-----+
            +-----+
To accomplish that, I need to POST to the _replicate resource on each server with the source database and the target database. Using curl looks like:
cstrom@whitefall:~/tmp/seed$ curl -X POST http://couch-011a.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011b.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"5cd41a28a497587c2853e0b9dc8acd01"}
cstrom@whitefall:~/tmp/seed$ curl -X POST http://couch-011b.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011c.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"fdd443d544821e2a905e30fb1f6fa6a3"}
cstrom@whitefall:~/tmp/seed$ curl -X POST http://couch-011c.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011d.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"a0540360cc03c1f51647bd46514216e8"}
cstrom@whitefall:~/tmp/seed$ curl -X POST http://couch-011d.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011e.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"fae56e64fa4c8b0ada968c5d4861304a"}
cstrom@whitefall:~/tmp/seed$ curl -X POST http://couch-011e.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011f.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"dd061f67c1d9eb6de8b10d472764b0c6"}
cstrom@whitefall:~/tmp/seed$ curl -X POST http://couch-011f.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011a.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"f6b8bf78a7c7e542f071abeb7ff5293d"}
I tell couch-011a to replicate its test database to the test database on couch-011b. I tell couch-011b to replicate its test database to the test database on couch-011c, and so on until couch-011f, which I tell to replicate back to couch-011a. That should match the ring in the diagram. Now to test...
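Since each POST differs only in the source and target host, the whole ring could likely be wired up with a short loop instead of six hand-typed commands (a sketch using the same avahi hostnames):
# Wire up the round-robin: each host replicates its test DB to the next one.
hosts=(a b c d e f a)   # repeat "a" so that f wraps back around
for ((i = 0; i < 6; i++)); do
  src="couch-011${hosts[i]}.local"
  tgt="couch-011${hosts[i + 1]}.local"
  curl -X POST "http://${src}:5984/_replicate" \
       -d "{\"source\":\"test\", \"target\":\"http://${tgt}:5984/test\", \"continuous\":true}"
done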

I create a new, empty directory and add to it a single JSON file, optimistically named to_be_replicated.json:
cstrom@whitefall:~/tmp/seed2$ echo '{"foo":"bar"}' > to_be_replicated.json
I then push this to the couch-011b server using couch_docs:
cstrom@whitefall:~/tmp/seed2$ couch-docs push http://couch-011b.local:5984/test . -w
Updating documents on CouchDB Server...
As expected, this file is now visible in Futon on couch-011b:



But how about couch-011a?



Yup! It made it all the way around the circuit.
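Rather than clicking through six Futons, a quick loop over the avahi hostnames can spot-check the document on every node in the ring (a sketch):
# Spot-check the replicated document on each node.
for h in a b c d e f; do
  echo -n "couch-011${h}: "
  curl -s "http://couch-011${h}.local:5984/test/to_be_replicated"
done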

So what happens when two of the servers go down and updates are made? The last time I tried this, I messed up because I confused replication with synchronization. Replication in CouchDB, even automatic replication, is unidirectional. Today the setup is still unidirectional, but it forms a closed loop, so any conflicts I create while the circuit is broken should ultimately get resolved when the circuit is restored. So let's test...

I manually stop couch-011c and couch-011e. Then I push conflicting changes to couch-011a and couch-011d:
cstrom@whitefall:~/tmp/seed2$ echo '{"foo":"bob"}' > to_be_replicated.json 
cstrom@whitefall:~/tmp/seed2$ couch-docs push http://couch-011a.local:5984/test .
Updating documents on CouchDB Server...
cstrom@whitefall:~/tmp/seed2$ echo '{"foo":"bar"}' > to_be_replicated.json
cstrom@whitefall:~/tmp/seed2$ couch-docs push http://couch-011d.local:5984/test .
Updating documents on CouchDB Server...
The server after couch-011a is still online, so the couch-011a change gets replicated to couch-011b, but no further. Servers couch-011a, couch-011b, and couch-011d are now in conflict:
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011d.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-d2c8433606378e67445d1455713b6f93","foo":"bar"}
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011b.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-01dee71e62dbf26d07613511e6d2cd14","foo":"bob"}
The server after couch-011e in the circuit, couch-011f, has seen none of the changes and is still at the original version of the doc (as evidenced by the "1" at the start of the revision):
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011f.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"1-f0ce1cb7c380b09ebd91c5829a9f7f40","foo":"bar"}
So what happens when I start the couch-011c and couch-011e servers back up? Well, nothing:
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011b.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-01dee71e62dbf26d07613511e6d2cd14","foo":"bob"}
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011d.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-d2c8433606378e67445d1455713b6f93","foo":"bar"}
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011f.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"1-f0ce1cb7c380b09ebd91c5829a9f7f40","foo":"bob"}
Servers b and d still conflict, and server f still has the old document. This is because continuous replication does not persist across CouchDB restarts. So I need to redo the replication statements for c and e:
cstrom@whitefall:~/tmp/seed2$ curl -X POST http://couch-011c.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011d.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"a0540360cc03c1f51647bd46514216e8"}
cstrom@whitefall:~/tmp/seed2$ curl -X POST http://couch-011e.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011f.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"dd061f67c1d9eb6de8b10d472764b0c6"}
With that, I should have the same document on each server:
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011b.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-01dee71e62dbf26d07613511e6d2cd14","foo":"bob"}
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011d.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-d2c8433606378e67445d1455713b6f93","foo":"bar"}
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011f.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"1-f0ce1cb7c380b09ebd91c5829a9f7f40","foo":"bob"}
Bah! What's up with that?

It turns out that replication was also disabled on the servers that were trying to reach couch-011c and couch-011e while they were down (the log appeared to indicate that this happened after 10 failed replication attempts). So I need to re-enable replication on servers b and d as well:
cstrom@whitefall:~/tmp/seed2$ curl -X POST http://couch-011b.local:5984/_replicate \
> -d '{"source":"test", "target":"http://couch-011c.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"fdd443d544821e2a905e30fb1f6fa6a3"}
cstrom@whitefall:~/tmp/seed2$ curl -X POST http://couch-011d.local:5984/_replicate -d '{"source":"test", "target":"http://couch-011e.local:5984/test", "continuous":true}'
{"ok":true,"_local_id":"fae56e64fa4c8b0ada968c5d4861304a"}
With that, I finally have the same document on each CouchDB database in the circuit:
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011b.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-d2c8433606378e67445d1455713b6f93","foo":"bar"}
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011d.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-d2c8433606378e67445d1455713b6f93","foo":"bar"}
cstrom@whitefall:~/tmp/seed2$ curl http://couch-011f.local:5984/test/to_be_replicated
{"_id":"to_be_replicated","_rev":"5-d2c8433606378e67445d1455713b6f93","foo":"bar"}
How CouchDB chooses the "winning" revision of a conflicted document is deliberately arbitrary (though deterministic, so every node picks the same winner). If it chooses the wrong one, the losing revisions are still stored and can be found with a conflicts view. I'm just happy to see this working as expected this time around.
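For what it is worth, the usual way to find such documents is a map function that emits anything carrying a _conflicts member, plus ?conflicts=true on individual GETs. A sketch (the _design/maintenance name is my own invention, not part of the setup above):
# Create a small design document with a "conflicts" view.
curl -X PUT http://couch-011a.local:5984/test/_design/maintenance \
     -d '{"views": {"conflicts": {"map": "function(doc) { if (doc._conflicts) { emit(doc._id, doc._conflicts); } }"}}}'

# List conflicted documents, then pull one back with its losing revisions included.
curl http://couch-011a.local:5984/test/_design/maintenance/_view/conflicts
curl 'http://couch-011a.local:5984/test/to_be_replicated?conflicts=true'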

Day #48

Friday, March 19, 2010

Debian Testing and Edge Couch

‹prev | My Chain | next›

OK, let's try this again.

I want a full Debian install with edge CouchDB installed. Yeah, there might be better distros for compiling things, but I really want to stick with Debian. Debian uses apt-get for package management, and you just don't get better than that. More importantly, it is a truly minimal install, which is what you really want for an actual server. The less there is, the fewer the potential attack vectors.

The problem from yesterday was that I was using stable Debian as the base, but ended up needing to download a ton of junk from testing Debian. Rather than going through all that, why not just start with testing? So I redo my base install steps from the other night, but with testing. The only change is that I select the testing netinst ISO:



The base install for testing takes a long time (1+ hours) on my little netbook. As I did the other night, I ensure that ssh, vim, screen, and sudo are installed before making a backup copy of my base install:
cstrom@whitefall:~/.VirtualBox/HardDisks$ VBoxManage clonehd couch-0.11.vdi debian_testing_base.vdi 
VirtualBox Command Line Management Interface Version 3.0.8_OSE
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'VDI'. UUID: e214e83c-c4de-423b-adec-306435ee69ed
Again, I establish SSH port-forwarding on the VM for ease of interaction:
VBoxManage setextradata "couch-0.11" \
"VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/Protocol" TCP
VBoxManage setextradata "couch-0.11" \
"VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/GuestPort" 22
VBoxManage setextradata "couch-0.11" \
"VBoxInternal/Devices/pcnet/0/LUN#0/Config/guestssh/HostPort" 2222
After booting, I install the necessary software (incorporating lessons learned from last night):
sudo apt-get install \
subversion \
libicu-dev \
libcurl4-gnutls-dev \
erlang \
erlang-dev \
libmozjs-dev \
build-essential \
libtool \
automake \
autoconf
(Update: later installed checkinstall and avahi-daemon)

I can then check out the code and attempt to bootstrap it:
cstrom@couch-011:~$ mkdir repos
cstrom@couch-011:~$ cd !$
cd repos
cstrom@couch-011:~/repos$ svn co http://svn.apache.org/repos/asf/couchdb/trunk couchdb
A couchdb/test
...
cstrom@couch-011:~/repos$ cd !$
cd couchdb
cstrom@couch-011:~/repos/couchdb$ ./bootstrap
You have bootstrapped Apache CouchDB, time to relax.

Run `./configure' to configure the source before you install.
Awesome! That was so much easier than last night. After running ./configure and make, I can checkinstall:
sudo checkinstall
...
**********************************************************************

Done. The new package has been installed and saved to

/home/cstrom/repos/couchdb/couchdb_0.11.999-1-1_i386.deb

You can remove it from your system anytime using:

dpkg -r couchdb

**********************************************************************
Just as I did last night, I add a couchdb user:
cstrom@couch-011:~/repos/couchdb$ sudo adduser --system \
> --home /usr/local/var/lib/couchdb \
> --no-create-home \
> --shell /bin/bash \
> --group --gecos \
> "CouchDB Administrator" couchdb
Adding system user `couchdb' (UID 105) ...
Adding new group `couchdb' (GID 108) ...
Adding new user `couchdb' (UID 105) with group `couchdb' ...
Not creating home directory `/usr/local/var/lib/couchdb'
And set permissions:
sudo chown -R couchdb:couchdb /usr/local/etc/couchdb
sudo chown -R couchdb:couchdb /usr/local/var/lib/couchdb
sudo chown -R couchdb:couchdb /usr/local/var/log/couchdb
sudo chown -R couchdb:couchdb /usr/local/var/run/couchdb
That gets me a functional CouchDB install:
cstrom@couch-011:~/repos/couchdb$ sudo -i -u couchdb couchdb
Apache CouchDB 0.12.0a925535 (LogLevel=info) is starting.
Apache CouchDB has started. Time to relax.
[info] [<0.32.0>] Apache CouchDB has started on http://127.0.0.1:5984/
To get that to start automatically at boot, I can use the supplied init.d script. First, I verify that it works:
cstrom@couch-011:/usr/local/etc/init.d$ sudo /usr/local/etc/init.d/couchdb start
Starting database server: couchdb.
cstrom@couch-011:/usr/local/etc/init.d$ ps -ef | grep couch
couchdb 10012 1 0 18:54 pts/3 00:00:00 /bin/sh -e /usr/local/bin/couchdb -a /usr/local/etc/couchdb/default.ini -a /usr/local/etc/couchdb/local.ini -b -r 5 -p /usr/local/var/run/couchdb/couchdb.pid -o /dev/null -e /dev/null -R
couchdb 10022 10012 0 18:54 pts/3 00:00:00 /bin/sh -e /usr/local/bin/couchdb -a /usr/local/etc/couchdb/default.ini -a /usr/local/etc/couchdb/local.ini -b -r 5 -p /usr/local/var/run/couchdb/couchdb.pid -o /dev/null -e /dev/null -R
couchdb 10023 10022 8 18:54 pts/3 00:00:00 /usr/lib/erlang/erts-5.7.4/bin/beam -Bd -K true -- -root /usr/lib/erlang -progname erl -- -home /usr/local/var/lib/couchdb -- -noshell -noinput -sasl errlog_type error -couch_ini /usr/local/etc/couchdb/default.ini /usr/local/etc/couchdb/local.ini /usr/local/etc/couchdb/default.ini /usr/local/etc/couchdb/local.ini -s couch -pidfile /usr/local/var/run/couchdb/couchdb.pid -heart
couchdb 10027 10023 0 18:54 ? 00:00:00 heart -pid 10023 -ht 11
cstrom 10031 2269 2 18:54 pts/3 00:00:00 grep couch
To actually run it at boot time, I symlink the script into /etc/init.d and register it with update-rc.d:
cstrom@couch-011:/usr/local/etc/init.d$ cd /etc/init.d/
cstrom@couch-011:/etc/init.d$ sudo ln -s /usr/local/etc/init.d/couchdb
cstrom@couch-011:/etc/init.d$ sudo update-rc.d couchdb defaults
update-rc.d: using dependency based boot sequencing
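If I want to double-check what update-rc.d actually did, the runlevel directories should now contain couchdb symlinks (the sequence numbers will vary with the dependency-based ordering):
# Sanity check: look for the start/stop links that update-rc.d created.
find /etc/rc?.d -name '*couchdb*' -ls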
The last thing that I do tonight is configure CouchDB to listen on the VM's network interface so that I can access it from outside the VM. This is a simple setting in the /usr/local/etc/couchdb/local.ini file:
[httpd]
;port = 5984
bind_address = 0.0.0.0
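After restarting CouchDB with that setting, a quick check from the host should confirm that it is answering on the external interface (assuming the guest's avahi name resolves from the host):
# From the host; expect the standard CouchDB welcome JSON.
curl http://couch-011.local:5984/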
As long as avahi (a.k.a. bonjour) is installed, I can now relax on the VM (when run in host-only mode):



I sense much cloning of VMs and fun with replication tomorrow.

Day #37