Couchbase in Kubernetes using StatefulSet

@arungupta First great job on Couchbase in K8s using StatefulSet. Thanks!! (https://github.com/arun-gupta/couchbase-kubernetes/tree/master/cluster-statefulset)

Deploying an N-node CB cluster with AUTO_REBALANCE=true was a piece of cake, and so was scaling it up later.

However, I'm having an issue in the scenario where a CB pod goes down: Kubernetes reschedules a new pod with the same name, but the new pod does not rejoin the CB cluster.

Did you experience the same issue?

Thanks.

Btw, check this out:

https://coreos.com/blog/introducing-operators.html

Actually, I just tested it now. Note that I removed the terminationGracePeriodSeconds: 0 setting from the StatefulSet, as it seems like too brutal an approach to me.

The way I simulated a node crash was by deleting the Couchbase pod with --grace-period=0:
(kubectl delete po couchbase-0 --grace-period=0)

The pod was immediately and ungracefully deleted, Kubernetes immediately spun up a new pod with the same name (couchbase-0) and the same IP, and voila: the new pod rejoined the CB cluster. Sweet, sweet, sweet!
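For anyone who wants to reproduce this, here is a rough sketch of the crash test. It assumes the pod is named couchbase-0 (as in the linked repo), the pods carry an app=couchbase label, and default Administrator/password credentials; it needs a live cluster to run:

```shell
# Simulate an ungraceful node crash: delete the pod with no grace period.
# (Newer kubectl versions require --force together with --grace-period=0.)
kubectl delete pod couchbase-0 --grace-period=0 --force

# Watch the StatefulSet controller recreate a pod with the same name.
kubectl get pods -l app=couchbase -w

# Once couchbase-0 is Running again, verify it rejoined the cluster.
# Credentials here are assumptions; use your own admin user and password.
kubectl exec couchbase-0 -- couchbase-cli server-list \
  -c localhost:8091 -u Administrator -p password
```

Because a StatefulSet gives the replacement pod the same name and stable network identity, the cluster sees it as the same member coming back rather than a brand-new node.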


Glad that’s working out for you @tomas.valasek! We’d love to hear any more feedback.

Hi @tomas.valasek
I'm deploying Couchbase 6.5.1 Enterprise Edition as a StatefulSet (without the operator), and the pod status never reaches the 1/1 state.
Could you please help me with how to deploy Couchbase as a StatefulSet without the operator?
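In case it helps, here is a minimal sketch of such a StatefulSet, loosely modeled on the repo linked above. The service name, labels, and readiness probe path are assumptions, and the AUTO_REBALANCE env var only does something if your image's entrypoint script (like the one in that repo) reads it — the stock couchbase/server image does not. A pod stuck short of 1/1 usually means its readiness probe is failing, so that is the first thing to check:

```shell
# Apply a bare-bones Couchbase StatefulSet (a sketch, not production-ready).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: couchbase            # headless service gives pods stable DNS names
spec:
  clusterIP: None
  selector:
    app: couchbase
  ports:
  - name: admin
    port: 8091
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchbase
spec:
  serviceName: couchbase
  replicas: 3
  selector:
    matchLabels:
      app: couchbase
  template:
    metadata:
      labels:
        app: couchbase
    spec:
      containers:
      - name: couchbase
        image: couchbase/server:enterprise-6.5.1
        env:
        - name: AUTO_REBALANCE   # only honored by a custom entrypoint script
          value: "true"
        ports:
        - containerPort: 8091
        readinessProbe:          # pod shows 1/1 only once this passes
          httpGet:
            path: /ui/index.html
            port: 8091
          initialDelaySeconds: 30
EOF

# If pods stay at 0/1, inspect the probe failures:
kubectl describe pod couchbase-0
```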

We’ve deployed a Couchbase cluster on k8s using a modified version of this StatefulSet. It is now working fine.

But I have a question about rolling updates of Couchbase Server.

Would this StatefulSet work with a rolling update to a future Couchbase version?
E.g. by changing the image, setting the AUTO_REBALANCE env to true in the yaml file, and running

kubectl apply -f couchbase-statefulset.yaml

My main concern is whether auto-rebalancing alone is sufficient in such an upgrade scenario.

Any suggestions and tips would be highly appreciated.
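For what it's worth, here is a sketch of what that upgrade flow would look like, assuming the StatefulSet is named couchbase with a container of the same name and the default RollingUpdate strategy (pods are replaced one at a time, highest ordinal first). The target image tag and credentials are example assumptions:

```shell
# Point the StatefulSet at the new Couchbase image (tag is an example)...
kubectl set image statefulset/couchbase \
  couchbase=couchbase/server:enterprise-7.0.0

# ...or edit the yaml and re-apply it, as described above:
kubectl apply -f couchbase-statefulset.yaml

# Watch the rolling update; pods are deleted and recreated one by one.
kubectl rollout status statefulset/couchbase

# The open question is between steps: each replaced pod must finish
# rebalancing back into the cluster before the next one goes down, so
# it is worth verifying cluster health after each pod restarts:
kubectl exec couchbase-0 -- couchbase-cli server-list \
  -c localhost:8091 -u Administrator -p password
```

Note that the StatefulSet's readiness probe is what gates the rollout: if the probe only checks that the server process is up, the next pod can be killed while a rebalance is still in flight, so auto-rebalance on its own may not be enough for a safe upgrade.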