Failover and strong consistency in Couchbase

We have a three-node Couchbase cluster with two replicas and durability level MAJORITY. This means the mutation must be replicated to the active node (node A) and to one of the two replicas (node B) before it is acknowledged as successful.
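To make the arithmetic concrete, here is a minimal sketch (purely illustrative, not Couchbase's actual implementation; the function name is hypothetical) of how a majority is computed over the copies of a vBucket:

```python
# Illustrative sketch only: how a MAJORITY durability requirement is
# commonly computed. Not Couchbase source code; names are hypothetical.

def majority(copies: int) -> int:
    """Smallest number of copies that forms a majority of the replica set."""
    return copies // 2 + 1

# One active copy + two replicas = 3 copies of each vBucket.
copies = 1 + 2
print(majority(copies))  # → 2

# With majority(3) == 2, the active (node A) plus ONE replica (e.g. node B)
# is enough to acknowledge the write; the other replica (node C) may lag.
```

So under MAJORITY the acknowledgement never waits for both replicas, which is exactly why the scenario in the question can arise.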

In terms of consistency, what happens if node A becomes unavailable and the hard failover process promotes the replica on node C before node A has managed to replicate the mutation to node C?

According to the docs (Protection Guarantees and Automatic Failover), the write is durable, but will it be available immediately?

Hey @anonimos (nice handle BTW)…

Assuming the order is that the client gets the durability acknowledgement first and then the hard failover of your node A is triggered: during the failover, the cluster manager and the underlying data service determine whether node B or node C should be promoted to active for that vBucket (a.k.a. partition) so that all promised durability is satisfied. That was actually one of the trickier bits of the implementation.
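A hedged sketch of that promotion decision, under the assumption (not taken from Couchbase's source) that each surviving replica tracks a sequence number of mutations it has received, and the failover picks the most up-to-date survivor so any MAJORITY-acknowledged write is preserved:

```python
# Hypothetical sketch of the promotion decision during hard failover.
# A write acknowledged under MAJORITY is present on at least one surviving
# replica, so promoting the survivor with the highest sequence number
# preserves it. Names and structure are illustrative, not Couchbase code.

def promote(replica_seqnos: dict[str, int]) -> str:
    """Return the surviving replica with the highest sequence number."""
    return max(replica_seqnos, key=replica_seqnos.get)

# Node A (active) is down. The durable write reached node B (seqno 42)
# but not node C (still at 41), so node B must be promoted.
survivors = {"node B": 42, "node C": 41}
print(promote(survivors))  # → node B
```

The key point from the question follows from this: a replica that never received the durable mutation (node C in the example) would not be the one promoted.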

“Immediately” is pretty much correct. Technically it does take some time to promote the vBucket, but this should be very short as it’s just metadata checks and state changes and doesn’t involve any data movement. Clients will need to be updated with the new topology as well. How long that takes is a function of the environment and what else is going on, but I’d expect single-digit seconds or even under a second. Assuming you’re using a modern SDK API 3.x client with best-effort retries, it will be mostly transparent to your application, but not entirely, since you’re doing a hard failover. Non-idempotent operations, for example, may bubble up as errors.
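In case it helps, here is a rough sketch of what “best-effort retries” means in practice: idempotent operations are retried through the topology change, while non-idempotent ones surface the error to the caller. All names here are illustrative; this is not the real SDK retry code.

```python
# Hedged sketch of best-effort retry behaviour around a failover.
# TopologyChanged stands in for a transient "not my vbucket"-style error.

class TopologyChanged(Exception):
    """Transient error seen while the cluster map is being updated."""

def with_best_effort_retry(op, idempotent: bool, attempts: int = 3):
    for attempt in range(attempts):
        try:
            return op()
        except TopologyChanged:
            # Non-idempotent operations bubble up immediately; retrying
            # them could apply the mutation twice.
            if not idempotent or attempt == attempts - 1:
                raise

# A read that fails once during the failover, then succeeds on retry.
calls = {"n": 0}
def flaky_get():
    calls["n"] += 1
    if calls["n"] == 1:
        raise TopologyChanged()
    return "value"

print(with_best_effort_retry(flaky_get, idempotent=True))  # → value
```

This is why a hard failover is mostly, but not entirely, transparent: the retryable path above hides the blip, but a non-idempotent mutation caught mid-failover has to be reported to the application.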

Hope that helps!
