Lately we’ve been experimenting a lot with CouchDB and its replication features.
It’s a very cool paradigm that lets you hide many layers of complexity related to data synchronisation between different systems behind an automated and almost-instant replication process.
Basically, CouchDB implements two kinds of replication: a “one-shot” replication and a “continuous” replication. In the first case a process starts, replicates an entire DB and then goes into a “Completed” state, while in the second case there’s an always-on iterating process that, using some kind of internal sequence number (something conceptually close to the journal log of a filesystem), keeps the slave database continuously in sync with the master one.
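As a sketch, both kinds of replication can be started by writing a document into the `_replicator` database; the `continuous` flag is what distinguishes them. The document id, database names and credentials below are made-up placeholders:

```json
{
  "_id": "sync-orders",
  "source": "http://admin:password@master:5984/orders",
  "target": "http://admin:password@slave:5984/orders",
  "continuous": true
}
```

Omitting `"continuous": true` (or setting it to `false`) gives the one-shot behaviour: the job runs once and then moves to the “Completed” state.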
When dealing with many databases and replication processes, it’s pretty easy to reach a point where you have a lot of replication processes running on a single server, and that may lead to slowness and, in general, a high load of (effectively idle) activity on the machines.
To avoid such circumstances CouchDB, since version 2.1, implements a replication scheduler that cycles through all the replication jobs in a round-robin fashion, pausing and restarting jobs to avoid resource exhaustion.
The Replication Scheduler is controlled by a few tuneable parameters (see http://docs.couchdb.org/en/stable/config/replicator.html#replicator for more details). Three of those parameters matter the most, as they control the basic aspects of the scheduler:
- max_jobs – which controls the threshold of max simultaneously running jobs;
- interval – which controls how often the scheduler is invoked to rearrange replication jobs, pausing some jobs and starting some others;
- max_churn – which controls the maximum number of jobs to be “replaced” (read: one job is paused, another one is started) in any single execution of the scheduler.
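These three parameters live in the `[replicator]` section of the CouchDB configuration (e.g. `local.ini`); the values below are just placeholders to show the syntax, not recommendations:

```ini
[replicator]
; maximum number of replication jobs running at the same time
max_jobs = 100
; how often (in milliseconds) the scheduler is invoked to rearrange jobs
interval = 60000
; maximum number of jobs swapped (one paused, one started) per scheduler run
max_churn = 20
```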
This is a basic diagram outlining the Replication Scheduler process:
So, basically, with “max_jobs” you control how much you want to stress your server, with “interval” you control how often you want to shuffle things up, and with “max_churn” you control how aggressively the scheduler will act.
- If max_jobs is too high your server load will increase (a lot!).
- If max_jobs is too low your replication will be less “realtime”, as there is a higher chance that a replication job could be paused.
- If interval is too high a paused replication job could stay paused for way too long.
- If interval is too low a running replication job could be paused too early, before it can actually catch up with its queued activities.
- If max_churn is too high there may be a high cost in setup and kick-off time (when a replication process is started up it has to connect to the server, authenticate, check that everything is aligned and so on…).
- If max_churn is too low the amount of time a process could stay paused may be pretty long.
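To see how your tuning plays out, CouchDB 2.1+ exposes a `/_scheduler/jobs` endpoint that lists every replication job together with its current scheduler state; the host, credentials and document id below are placeholders:

```shell
# List all replication jobs and their current scheduler state
curl -s http://admin:password@localhost:5984/_scheduler/jobs

# Inspect the state of a single replication document from the _replicator database
curl -s http://admin:password@localhost:5984/_scheduler/docs/_replicator/my-replication-doc
```

Watching how often jobs flip between running and pending states is a quick sanity check that interval and max_churn aren’t shuffling things too aggressively.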
As usual, your working environment – I mean, database size, hardware performance, document sizes, whatever – has a huge impact on how you tweak those parameters.
My only personal consideration is that the default value of max_jobs (500) seems a pretty high value for a common server. After some tweaking, on a “small” virtual machine we use for development we settled on max_jobs set to 20, interval set to 60000 (60 seconds) and max_churn set to 10. On the production server, with better hardware (real HW instead of a VM, SSD drives, more CPU cores, and so on) we expect a higher value for max_jobs – but in the 2x/3x range, so maybe something like 40/60 – I strongly doubt we could ever reach a max_jobs value of 500.
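For reference, the development settings described above translate to this `[replicator]` section in `local.ini`:

```ini
[replicator]
max_jobs = 20
; 60 seconds
interval = 60000
max_churn = 10
```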