Hi, I just tried to start a Couchbase XDCR replication and had to pause it almost immediately because the Couchbase server slowed to a crawl. An incremental view indexing run that should have taken a second took almost a minute, and it only sped up and finished after I paused the replication. Is this slowdown normal only when starting a replication, or will having an XDCR replication running in general really slow down view indexing?
Hi @alexegli, if you are running a cluster that does not have the headroom for XDCR, the replication can compete with your incoming workload while it reads the full data set for the initial sync. Once you are synced up, the impact may lessen, depending on the mutation rate.
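For what it's worth, you can pause and resume the replication from the command line as well as from the UI. Something like the following (host, credentials, and the replicator id are placeholders, and the exact flags can vary by Couchbase version, so check `couchbase-cli xdcr-replicate --help` on your install):

```shell
# List replications to find the replicator (stream) id
/opt/couchbase/bin/couchbase-cli xdcr-replicate -c localhost:8091 \
    -u Administrator -p password --list

# Pause a running replication while view indexing catches up
/opt/couchbase/bin/couchbase-cli xdcr-replicate -c localhost:8091 \
    -u Administrator -p password --pause --xdcr-replicator=<replicator_id>

# Resume it later, e.g. during a low-traffic window
/opt/couchbase/bin/couchbase-cli xdcr-replicate -c localhost:8091 \
    -u Administrator -p password --resume --xdcr-replicator=<replicator_id>
```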
You do need to size for the XDCR overhead in your cluster. If you want a shortcut, one option is to take a backup and restore it on the remote cluster, so you don’t have to carry the full data set over the wire to the remote cluster from scratch.
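A rough sketch of that shortcut with `cbbackup`/`cbrestore` (host names, bucket name, and paths here are placeholders; check the tool docs for your version):

```shell
# On the source cluster: back up the bucket to a local directory
/opt/couchbase/bin/cbbackup http://source-host:8091 /tmp/cb-backup \
    -u Administrator -p password -b mybucket

# Copy /tmp/cb-backup to the remote site, then restore it there
/opt/couchbase/bin/cbrestore /tmp/cb-backup http://remote-host:8091 \
    -u Administrator -p password -b mybucket

# Then create the XDCR replication; only mutations made since the
# backup need to travel over the wire
```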
Thanks for the info. Where can I find out how much headroom XDCR requires? I was going off of the documentation here: http://docs.couchbase.com/admin/admin/XDCR/xdcr-intro.html But it doesn’t mention any extra CPU or memory requirements for XDCR, or a minimum number of nodes that have to be in the cluster. We currently have 2 nodes in our cluster, and each node has 4 2.0GHz cores and 14GB memory. We don’t have much load at all on our cluster; we’re averaging about 17 ops/second. I’m surprised XDCR would have such a huge impact when we have so little load.
I can’t use backups because we use Sync Gateway, and Couchbase backups are currently not supported with Sync Gateway: you end up with inconsistent and corrupted data.
The initial load does have an impact, as XDCR tries to sync all data as quickly as possible. XDCR sizing depends on a few parameters (document size, mutation rate, etc.) and assumes the initial sync is something you can do during a low-traffic period, so I can’t give you a precise answer. However there is some guidance here for you;
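As a very rough illustration of the kind of arithmetic involved, here is a back-of-envelope sketch. All numbers except the ~17 ops/sec mentioned above are assumptions, not official Couchbase sizing guidance:

```python
# Back-of-envelope XDCR bandwidth estimate (hypothetical numbers).
avg_doc_size_bytes = 4 * 1024       # assumed average document size (4 KiB)
mutation_rate = 17                  # writes/sec, from the cluster stats above
overhead = 1.2                      # assumed protocol/metadata overhead factor

# Steady state: only mutations travel over the wire.
steady_state_bps = avg_doc_size_bytes * mutation_rate * overhead

# Initial sync: the whole data set travels in the chosen window.
total_docs = 1_000_000              # assumed data set size
initial_sync_window_s = 4 * 3600    # assumed low-traffic window (4 hours)
initial_sync_bps = (avg_doc_size_bytes * total_docs * overhead
                    / initial_sync_window_s)

print(f"steady-state: {steady_state_bps / 1024:.1f} KiB/s")
print(f"initial sync: {initial_sync_bps / (1024 * 1024):.2f} MiB/s")
```

The point being that the initial sync is a step change in load compared to steady state, which is why doing it in a low-traffic window (or seeding the remote cluster first) matters.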