XDCR replication between private clusters

I have two Kubernetes clusters running in my LAN environment, and their services are exposed either via an Ingress controller or via NodePort. I have a LAN load balancer and I can reach the services via a vserver DNS record. I have created two load balancer IP addresses that serve the admin service.

Here are the services on the destination side:

kubectl get svc -n foo-couchbase-b
NAME                                           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                                          AGE
cb-foo                                         ClusterIP   None         <none>        4369/TCP,8091/TCP,8092/TCP,8093/TCP,8094/TCP,8095/TCP,8096/TCP,9100/TCP,9101/TCP,9102/TCP,9103/TCP,9104/TCP,9105/TCP,9110/TCP,9111/TCP,9112/TCP,9113/TCP,9114/TCP,9115/TCP,9116/TCP,9117/TCP,9118/TCP,9120/TCP,9121/TCP,9122/TCP,9130/TCP,9140/TCP,9999/TCP,11207/TCP,11209/TCP,11210/TCP,18091/TCP,18092/TCP,18093/TCP,18094/TCP,18095/TCP,18096/TCP,19130/TCP,21100/TCP,21150/TCP   4d9h
cb-foo-0000                                    NodePort    <none>       8091:32556/TCP,18091:30452/TCP,11210:30435/TCP,11207:31297/TCP,8092:32494/TCP,18092:31741/TCP   4d9h
cb-foo-0001                                    NodePort    <none>       8091:31758/TCP,18091:32409/TCP,11210:32420/TCP,11207:31520/TCP,8092:32198/TCP,18092:30690/TCP   4d9h
cb-foo-0002                                    NodePort    <none>       8091:30897/TCP,18091:31182/TCP,11210:30310/TCP,11207:31122/TCP,8092:31552/TCP,18092:32119/TCP   4d9h
cb-foo-srv                                     ClusterIP   None         <none>        11210/TCP,11207/TCP   4d9h
cb-foo-xdcr                                    NodePort    <none>       8091:32020/TCP,8092:32021/TCP,11210:32022/TCP,18091:32023/TCP   4d9h
foo-couchbase-couchbase-admission-controller   ClusterIP   <none>       443/TCP

The load balancer IP forwards to NodePort 32020. When I write this address as the XDCR target, Couchbase converts it to cb-foo-0000.cb-foo.foo-couchbase-b.svc. Where should I install external-dns (at the target or the source?), or should I add dns.domain to the Helm chart values? I am a bit confused reading the documentation.
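For reference, the DNS suffix the pods advertise can be set on the CouchbaseCluster resource (the Helm chart exposes the same spec under its values). A minimal sketch, assuming the Autonomous Operator 2.x field names `spec.networking.dns.domain`, `exposedFeatures`, and `exposedFeatureServiceType`, with a placeholder domain `cb.example.lan`:

```yaml
# Sketch only: field names per the Autonomous Operator 2.x CRD;
# the domain value is an example placeholder, not from this thread.
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-foo
spec:
  networking:
    # Create per-pod services carrying the XDCR ports
    exposedFeatures:
      - xdcr
    exposedFeatureServiceType: NodePort
    dns:
      # Pods then advertise <pod>.cb.example.lan instead of the
      # cluster-internal <pod>.<service>.<namespace>.svc name
      domain: cb.example.lan
```

With a domain set, the source cluster only needs to be able to resolve and route names under that domain, rather than the target's internal `.svc` names.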

Which of the following networking options are you attempting to configure?

Also, it’s good to know what CNI plugin you are using for Kubernetes networking.

Inter-Kubernetes Networking with Forwarded DNS seems to be the option that fits my situation. The CNI is Calico.

Great, thanks. A cursory glance at https://docs.projectcalico.org/networking/determine-best-networking suggests there are a lot of options!

Are you using overlay networking or not? Are the pods “routable” as described in that document?

The pods are not routable in the current clusters. They cannot reach the outside world directly; they are NATed. Overlay networking is in use, with the Calico community edition.

Right, with that in mind you have two options:

Public DNS
Each Couchbase pod must be exposed either on the public internet using a LoadBalancer service, or via an “internal” LoadBalancer service that uses IP addresses on the Kubernetes node network. external-dns replicates the DNS names we assign to the services to a DDNS server; these records advertise the load balancer IPs. In either case, the IP addresses assigned to the load balancers must be routable from the XDCR source.
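To answer the earlier question about placement: external-dns runs in the cluster being exposed (the target here), and the zone it writes to must be resolvable from the source. As a sketch, this is roughly what an operator-generated per-pod service would look like when the exposed-feature service type is LoadBalancer and a DNS domain is configured; the `external-dns.alpha.kubernetes.io/hostname` annotation is what external-dns watches. All names and the domain are example placeholders:

```yaml
# Sketch of an operator-generated per-pod service (values are examples).
# external-dns sees the annotation and publishes an A record for the
# load balancer IP under the configured domain.
apiVersion: v1
kind: Service
metadata:
  name: cb-foo-0000
  annotations:
    external-dns.alpha.kubernetes.io/hostname: cb-foo-0000.cb.example.lan
spec:
  type: LoadBalancer
  ports:
    - name: admin-secure
      port: 18091
```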

Node Networking
The XDCR source cluster needs to be able to route to the target Kubernetes node network. This option uses NodePort services exclusively, so the source connects directly to node-IP:node-port pairs.
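For the forwarded-DNS variant mentioned earlier in the thread, a common CoreDNS pattern is to add a server block in the source cluster's Corefile that forwards queries for the target cluster's domain to a DNS endpoint of the target cluster reachable over the node network. CoreDNS matches the most specific zone first, so this block takes precedence over the local `cluster.local` handling. The zone and IP below are example placeholders, not values from this thread:

```
# Corefile fragment in the XDCR source cluster (example values).
# Queries for the target namespace's service domain are forwarded to
# a target-cluster DNS address reachable on the node network.
foo-couchbase-b.svc.cluster.local:53 {
    errors
    cache 30
    forward . 10.0.0.53
}
```

With this in place, names like cb-foo-0000.cb-foo.foo-couchbase-b.svc resolve from the source cluster, which is exactly the form Couchbase rewrote the XDCR target to in the first place.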