Couchbase Cluster - Expose External IP


#1

Hi,

Is there any way to use external IPs for a cluster set up by the Couchbase Operator?
It would be very nice to make the cluster usable for applications deployed outside the Kubernetes cluster.

We are currently testing the Couchbase Operator on Azure AKS.

Thank you!


#2

Hi, I appreciate your time in trying out our software and would love to hear more about your use case. Are you trying to connect clients over the public Internet or via peered private networks?


#3

We are trying to connect clients over the “public” internet. There are two different use cases for us at the moment:

  1. Connect a separate app, deployed as an Azure Web/API App, to the Couchbase cluster
  2. XDCR between two clusters in two different geographical regions.

#4

I’ll address #2 first as it’s the easiest. We are releasing 1.0.0 very soon, which will have support for XDCR across a site-to-site VPN. The essential requirement is connectivity from one Kubernetes cluster to the other: the clusters need non-overlapping address ranges, the necessary routes installed via the VPN, and finally a Kubernetes node in one region must be able to ping a node in the other region. There will be a new Couchbase Operator cluster option called exposedFeatures; add “xdcr” to it and you will be able to establish the XDCR connection via a node port IP address. AKS should have VNet support for creating the VPN.
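For illustration, here is a minimal sketch of how that option would sit in a CouchbaseCluster resource; apart from exposedFeatures itself, the names, version and server layout below are placeholders rather than a tested spec:

```yaml
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  baseImage: couchbase/server
  version: enterprise-5.5.0          # placeholder version
  authSecret: cb-example-auth        # placeholder secret name
  exposedFeatures:
    - xdcr                           # exposes the ports XDCR needs via node port IPs
  servers:
    - name: all_services
      size: 3
      services:
        - data
        - index
        - query
```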

Addressing #1, there is no easy way to perform this integration at present, although our clients are being updated to take advantage of the same mechanism that allows XDCR to function (this should begin to be released in a few months’ time). Again, this would require a VPN from the App Service to the Kubernetes network, and the use of a node port IP address.

Usually we’d expect applications to be deployed in Kubernetes alongside the Couchbase cluster and to make use of the built-in service discovery provided by Kubernetes’ network layer, hence my interest in your use case.

Feel free to discuss anything further.


#5

Hi,
I also need to connect external SDKs to a Couchbase cluster in Kubernetes.
In my situation, AWS Lambda functions need to be able to connect to the Couchbase cluster running in Kubernetes.
Given use cases like this, I’m sure external SDKs should be supported.


#6

Hi Robin,

Thanks for your feedback. There are a couple of deficiencies in Kubernetes that make this usage pattern less than perfect, which you should be aware of.

We would need to expose ports externally to the internet with a Service per Couchbase node. As there is no provision for control over external DNS, we’d have to expose the ports via IP only. Over the life-cycle of the cluster those IP addresses would change, and eventually your Lambda functions would cease to work.
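To make that concrete, this is roughly what one such per-node Service could look like; the names, label and Service type here are purely illustrative, not something the Operator generates today:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cb-example-0000-external     # hypothetical: one Service per Couchbase pod
spec:
  type: NodePort                     # exposes the pod on a port of each node's IP
  selector:
    couchbase_node: cb-example-0000  # hypothetical label picking out a single pod
  ports:
    - name: mgmt
      port: 8091                     # Couchbase management/REST port
    - name: data
      port: 11210                    # Couchbase data (KV) port
```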

Additionally, you need to consider TLS, as this is running over the public internet. Having no DNS makes this somewhat tricky: you’d have to supply us with a CA certificate and key so we could create server certificates with the relevant IP subject alternative names.

I think all of these problems would go away if we were granted control over a DNS server, so that we could manage A and SRV records via DDNS. A quick Google search suggests you are not the first with this requirement: https://github.com/kubernetes-incubator/external-dns.
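As an example of what that could look like, with external-dns running in the cluster a per-node Service annotated as in the sketch below would get an A record published for it automatically; the hostname and Service type are just an illustration (external-dns generally needs a LoadBalancer address or some other externally resolvable address to point the record at):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cb-example-0000-external
  annotations:
    # external-dns watches Services and creates this record in the configured DNS zone
    external-dns.alpha.kubernetes.io/hostname: cb-0000.example.com
spec:
  type: LoadBalancer
  selector:
    couchbase_node: cb-example-0000  # hypothetical label, as in the earlier sketch
  ports:
    - name: mgmt
      port: 8091
    - name: data
      port: 11210
```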

So I believe this pattern will be possible at some point in the future but not immediately. We’re still left with the problem that the external-dns controller probably won’t support aggregating individual services into a single SRV record, so your Couchbase connection string will be prone to failure as names change. I’ll have a word with the developers and see if it’s realistic.