Accessing a Couchbase cluster using hostname/DNS only, not IP addresses

Hi,

I am trying to solve the problem of the IP addresses constantly changing for the Couchbase nodes in my cluster (see the image below).
I want the SDK to connect directly using a DNS name or hostname instead, but I am not sure what that name would be or how to use it.

[image]

I have a 4-node cluster in Kubernetes, installed using the default Operator and cluster Helm packages.
There is NO hostname, domain, or sub-domain specified in my cluster YAML file.

I assume I need to use some headless-service domain address, but how do I achieve that?
I read DNS for Services and Pods | Kubernetes and am still not sure how to apply the concept.

My connection string for connecting to and opening the bucket looks like the one below. (The IP address is a fake one, for security; it is one of the addresses from node 1, with the exposed NodePort for the 11210 data service.)

COUCHBASE_CONNSTR = "couchbase://10.80.67.123:30853"
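
For context, a minimal sketch of how I consume these settings, assuming the Python SDK 2.x API (bucket name and credentials are placeholders):

from couchbase.cluster import Cluster, PasswordAuthenticator

# NodePort 30853 maps to the data service port (11210) on this node,
# so the couchbase:// scheme bootstraps over the KV port.
cluster = Cluster('couchbase://10.80.67.123:30853')
cluster.authenticate(PasswordAuthenticator('Administrator', 'password'))
bucket = cluster.open_bucket('default')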

I have a headless service running in Kubernetes:

cb-revstrat-ilcb-srv ClusterIP None <none> 11210/TCP,11207/TCP

When I look at one of the nodes from the Admin UI console, I see:

Below are the first few lines shown when editing the existing cluster:

apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  creationTimestamp: "2020-04-03T19:03:51Z"
  generation: 94
  name: cb-revstrat-ilcb
  namespace: bi-cb
  resourceVersion: "215400320"
  selfLink: /apis/couchbase.com/v1/namespaces/bi-cb/couchbaseclusters/cb-revstrat-ilcb
  uid: cb4e0097-3015-47c1-860e-c0c27c95dca3
spec:
  adminConsoleServiceType: NodePort
  adminConsoleServices:
  - data
  authSecret: youthful-mule-cb-revstrat-ilcb
  baseImage: couchbase/server
  buckets:
  - compressionMode: passive
    conflictResolution: seqno
    enableFlush: true
    evictionPolicy: fullEviction
    ioPriority: high
    memoryQuota: 128
    name: default
    replicas: 1
    type: couchbase
  cluster:
    analyticsServiceMemoryQuota: 1024
    autoFailoverMaxCount: 3
    autoFailoverOnDataDiskIssues: true
    autoFailoverOnDataDiskIssuesTimePeriod: 120
    autoFailoverServerGroup: false
    autoFailoverTimeout: 120
    clusterName: ""
    dataServiceMemoryQuota: 2048
    eventingServiceMemoryQuota: 1024
    indexServiceMemoryQuota: 2048
    indexStorageSetting: plasma

I looked at the post below from @simon.murray, however I can't figure out what network change would be needed and what to ask the network team for:

Now the problem with using node ports is that if a node goes away or changes address, the clients will break. If the pod that generates the node port goes away, the clients will break. As you are using IP addresses and NodePorts you cannot encrypt the traffic. Be aware of these limitations.

The correct way to connect will be described in the upcoming Operator 2.0 documentation. The short version is that your clients talk to a DNS server that forwards the DNS zone %namespace%.svc.cluster.local to the remote Kubernetes DNS server where the cluster lives. The remote cluster must be using flat networking (no overlays). The client can then connect to couchbase://%clustername%.%namespace%.svc (and must have at least a cluster.local search domain configured for its stub resolver). This gives you high-availability, service discovery and the option of using TLS.
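
To illustrate, here is a minimal sketch of that DNS-based connection for a Python SDK 2.x client, substituting the cluster and namespace names from this thread (credentials are placeholders; couchbases:// with TLS certificates configured would be the encrypted variant):

from couchbase.cluster import Cluster, PasswordAuthenticator

# couchbase://%clustername%.%namespace%.svc resolves through the
# forwarded cluster.local zone rather than a fixed IP address.
cluster = Cluster('couchbase://cb-revstrat-ilcb.bi-cb.svc')
cluster.authenticate(PasswordAuthenticator('Administrator', 'password'))
bucket = cluster.open_bucket('default')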

The IP addresses do not change. Your DNS server is not responding to the query, and that triggers the warning. Couchbase Server continues to work, I'm assuming, so it can be ignored?


I cannot comment on your network setup, because I don't know enough about it. Consult the following page: https://docs.couchbase.com/operator/2.0/concept-couchbase-networking.html. Which setup matches yours? Does following the documentation links on connecting a client SDK answer your questions?

@simon.murray - Sorry, there are two questions on this thread. Let me isolate them:

For now, my challenge is how to use all of the cluster nodes in my client connection details.

I have the services below running (full listing further down):

My client connects to the first node using the external IP address and NodePort, like below:

COUCHBASE_CONNSTR = "couchbase://10.xx.183.116:30853" -- this is pod 1's external IP address and NodePort
COUCHBASE_USER = 'Administrator'
COUCHBASE_BUCKET_PASSWORD = 'password'

Our network team is asking why Couchbase uses multiple NodePorts, since they cannot load-balance them in the Kubernetes config. They are looking for one static NodePort that can talk to all Couchbase nodes, and I am looking for a static DNS name (not an IP address) plus a port that I can use for the client connection.
What am I missing here?

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cb-revstrat-ilcb ClusterIP None <none> 8091/TCP,18091/TCP 30d
cb-revstrat-ilcb-0000-exposed-ports NodePort 10.x.188.116 <none> 8092:31430/TCP,18092:30320/TCP,8093:31736/TCP,18093:32263/TCP,8094:31640/TCP,18094:32433/TCP,8095:30709/TCP,18095:30955/TCP,11210:30853/TCP,11207:30434/TCP,8091:30659/TCP,18091:30351/TCP 30d
cb-revstrat-ilcb-0002-exposed-ports NodePort 10.x.231.22 <none> 8091:30555/TCP,18091:30971/TCP,8092:32284/TCP,18092:30047/TCP,8093:30597/TCP,18093:32692/TCP,8094:31549/TCP,18094:30613/TCP,8095:30297/TCP,18095:30346/TCP,11210:30566/TCP,11207:31486/TCP 10d
cb-revstrat-ilcb-0003-exposed-ports NodePort 10.x.47.193 <none> 8093:31787/TCP,18093:32551/TCP,8094:30298/TCP,18094:32217/TCP,8095:30183/TCP,18095:32736/TCP,11210:32720/TCP,11207:31621/TCP,8091:32150/TCP,18091:31603/TCP,8092:31548/TCP,18092:31704/TCP 10d
cb-revstrat-ilcb-0004-exposed-ports NodePort 10.x.147.229 <none> 8093:32057/TCP,18093:31114/TCP,8094:32068/TCP,18094:30305/TCP,8095:31252/TCP,18095:31886/TCP,11210:30500/TCP,11207:30066/TCP,8091:31416/TCP,18091:30779/TCP,8092:31461/TCP,18092:30080/TCP 10d
cb-revstrat-ilcb-srv ClusterIP None <none> 11210/TCP,11207/TCP 30d
cb-revstrat-ilcb-ui NodePort 10.x.34.138 <none> 8091:32154/TCP,18091:31769/TCP

As I say, it's all in the documentation. If you want to use a load-balanced IP address, then follow https://docs.couchbase.com/operator/2.0/howto-client-sdks.html#ip-based-addressing. Essentially, use the UI service. But we do not recommend this approach…

For DNS-based addressing you need a setup that looks like https://docs.couchbase.com/operator/2.0/concept-couchbase-networking.html#intra-kubernetes-networking or like https://docs.couchbase.com/operator/2.0/concept-couchbase-networking.html#inter-kubernetes-networking-with-forwarded-dns; then you can connect using https://docs.couchbase.com/operator/2.0/howto-client-sdks.html#dns-based-addressing

@simon.murray - " Generic Networking" is what we have … and we are ending up with this configuration :

https://docs.couchbase.com/operator/2.0/howto-nodeport-networking.html

Please help us work with this in an on-prem Kubernetes configuration. The DNS-based option is not acceptable from our network and security perspective, so what choice do I have so that my IP address is stable and I can have a unique NodePort for the application client to connect through?

thanks

If you follow this https://docs.couchbase.com/operator/2.0/howto-client-sdks.html#ip-based-addressing then it will load-balance automatically, which is what the networking team wants, and it should only need changing if the node you are pointing to is removed for some reason. In your example it would be any Kubernetes node's IP and the port associated with the UI service, i.e. 32154.
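
For illustration, a hedged Python SDK 2.x sketch of that IP-based connection (the node IP is a placeholder; 32154 is the UI service NodePort from the listing above, which maps to the 8091 management port, hence the http:// bootstrap scheme):

from couchbase.cluster import Cluster, PasswordAuthenticator

# Any Kubernetes node's IP works here; kube-proxy forwards the
# NodePort to the UI service, which load-balances across pods.
cluster = Cluster('http://10.80.67.123:32154')  # placeholder node IP
cluster.authenticate(PasswordAuthenticator('Administrator', 'password'))
bucket = cluster.open_bucket('default')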

You cannot use DNS automatically because you are using node port networking. There is nothing stopping you manually adding an A record for every Kubernetes node in the cluster, and just hitting http://my.dns.name:32154 rather than a single node as documented above. Again, you should update the round-robin DNS if the Kubernetes node topology changes.
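
A quick way to sanity-check such a round-robin record, using only the Python standard library (my.dns.name is the hypothetical record from above):

import socket

# Resolve the round-robin A record; every Kubernetes node IP that was
# added to the record should appear once.
addrs = {info[4][0] for info in socket.getaddrinfo('my.dns.name', 32154,
                                                   proto=socket.IPPROTO_TCP)}
for addr in sorted(addrs):
    print(addr)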

Hey @eldorado, I think the key bit is this:

Just to connect a few concepts, part of how Couchbase gets its ease of use and performance is by having the library in the application directly know the topology of the Couchbase cluster and be able to go to exactly the node providing that data and/or service. That is also how Couchbase provides its Multi-Dimensional Scaling, where you can have some nodes with different services and resources than others.

In order to make this all come into being, the cluster and the client need a consistent view of which network node is providing which services. At the KV layer in particular, it’s even lower level.

So, while your IT folks might be looking for a singular NodePort that balances across all nodes, that probably comes from deploying and managing other kinds of resources that are homogeneous, for example an nginx frontend where any pod with that label can handle the request.

With Couchbase, the K8S Operator makes your Couchbase deployment more cloud native and can interact with Couchbase itself and other cloud services to carry this topology information across network boundaries. Internal to K8S, pod IPs may change but hostnames for a given resource do not. Those hostnames can be easily used internal to K8S or even across K8S clusters with some DNS forwarding. To carry that outside, like you’re seeking, that means setting up an external DNS name that maps to that internal DNS name. That’s effectively what we’re advocating here.
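
To make the stable-hostname point concrete, here is a small sketch, runnable from any pod in the same Kubernetes cluster, that resolves the headless service from this thread (a headless service returns one A record per Couchbase pod, and the name stays the same even when the pod IPs behind it change):

import socket

# The headless service name is stable; only the pod IPs it returns change.
infos = socket.getaddrinfo('cb-revstrat-ilcb-srv.bi-cb.svc.cluster.local',
                           11210, proto=socket.IPPROTO_TCP)
for info in infos:
    print(info[4][0])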

I do know that there are, just to pick an example, things like PostgreSQL and MySQL that use Kubernetes NodePort. What you’ll see with those is that they either do not distribute across multiple pods inside K8S (e.g., it’s not a distributed database), or they use NodePort to distribute across a proxy like ProxySQL. In turn, that means you have two network hops and you have another set of pods to manage, not to mention all of the complexity that sharding between master and slave replicas gives you.

With Couchbase, it is true that to get all of the benefits of security, Multi-Dimensional Scaling, and minimal network overhead, you will need to take on the setup of one of the more preferred Couchbase networking options, but the benefits in return are also high. If your app is outside K8S, that would be Public Networking with the External DNS option.

Hopefully the additional context, namely that Couchbase is distributed and not all pods are homogeneous, explains why it is not as straightforward as a single NodePort. There is some additional setup, but after you get there our Couchbase Operator can manage failures, scaling, upgrades, and backups all within K8S for you. You won't have the drudgery of managing when things change.

Hi, it's confirmed that the DNS route is not what we want, and “Intra-Kubernetes Networking” is the right direction to go… Now, with that direction, if my K8s node IP changes from one node to another, how can we provide a common load-balanced endpoint without interrupting service? We have a 6-node K8s cluster, and my 4 Couchbase nodes can be deployed on any cluster node; the IP of each node is not static and cannot be (per the K8s team and config). We don't need an ingress, I assume? External DNS is too much hassle to maintain and manage in an on-prem config, so it's not a good choice either. So can we get some direction, now that you understand what the real problem is…

Great, that explains a lot… sorry, I caught up on your messages after my previous response.
So I understand there are trade-offs with one approach versus the other. If I can't get the network changed to what Couchbase would like, it seems I am more inclined to use nginx or another resource to load-balance by pod name. My application is inside the same K8s cluster, so I don't need to think about public DNS. If we go the route of a common NodePort or load balancing with nginx, what are the potential caveats? I should still be able to use XDCR, and also everything the Operator provides, correct? It looks like an ingress is not an option either; correct me if I am wrong. Thanks again for the good explanation.

If you have your app in the same K8S cluster, or in another K8S cluster with the DNS forwarding we document, then there are no other caveats. Failures, node additions, all of the management through the Operator should just work.

XDCR would also work through the External DNS option, assuming the other cluster is external.

Something could come up, 2.0 was just released about a month ago, but this is what we test to.

No, a K8S ingress is officially only for HTTP/HTTPS. I've seen that NGINX has an extension that can somehow use a K8S ingress with TCP, but there would be many more details to make that work with something like Couchbase, since the routing would have to be handled by something in there.