Adding a new cluster node with a new Kubernetes pod - best practice

Hi CB Team,
I have managed to get an active CB cluster running on K8s, and I have 4 Kubernetes pods running.
Now, if I want to add new nodes/pods to the cluster, how can I add them? There is an Add Server Node button, but I don't have clear step-by-step instructions on how to bring new pods into this setup. I could use multiple different IP addresses, but I just need the right way and the best practice for this activity.
If you can guide me on that, that would be great.
Thanks

Depends on how you have deployed the cluster in the first place…

With the Autonomous Operator you have to use DNS names, but this is all hidden away from you (for good reason - you will see in a minute…). To scale the cluster, simply run kubectl edit cbc ${cluster_name}, then edit /spec/servers/${index}/size and the operator will do the rest. Do this, it's easier!
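For example, if the cluster resource were named cb-demo (an illustrative name, not from this thread), the edit session would look roughly like this - a minimal sketch of the relevant spec fragment:

kubectl edit cbc cb-demo

spec:
  servers:
    - name: dataservices
      size: 4        # was 3; save and exit, and the operator does the rest
      services:
        - data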

If you have rolled this cluster by hand, you can use the pod IP address (kubectl get pod ${pod_name} -o wide and use the IP returned in the table), but this is not recommended, as IP addresses in Kubernetes are not stable and can change, thus breaking things!! If you use DNS instead, which is stable, you probably already have a headless service referencing all the pods in your cluster; this creates DNS entries of the form ${pod_name}.${service}.${namespace}.svc, which you can use to address the new servers.
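If you are rolling your own, a headless service along those lines might look like this (a sketch - the service name and label are assumptions, so adjust them to whatever your pods actually carry):

apiVersion: v1
kind: Service
metadata:
  name: couchbase-srv
spec:
  clusterIP: None        # headless: per-pod DNS records instead of a single virtual IP
  selector:
    app: couchbase
  ports:
    - name: admin
      port: 8091

A pod named cb-0002 in the default namespace is then addressable as cb-0002.couchbase-srv.default.svc, and that name survives pod IP changes.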


@simon.murray - Hi Simon,
Maybe you are the right person to address my concern. Can you please help with how to configure the values YAML for multidimensional scaling (MDS)?
This is what I have so far in my custom values YAML:

servers:
    - size: 3
      name: dataservices
      services:
        - data
      pod:
        resources:
          limits:
            cpu: "10"
            memory: 30Gi
          requests:
            cpu: "5"
            memory: 20Gi
        volumeMounts:
          data: couchbase
          default: couchbase
    - size: 1
      name: indexservices
      services:
        - index
      pod:
        resources:
          limits:
            cpu: "40"
            memory: 75Gi
          requests:
            cpu: "30"
            memory: 50Gi
        volumeMounts:
          data: couchbase
          default: couchbase
    - size: 1
      name: queryservices
      services:
        - query
      pod:
        resources:
          limits:
            cpu: "10"
            memory: 10Gi
          requests:
            cpu: "5"
            memory: 5Gi
        volumeMounts:
          data: couchbase
          default: couchbase
    - size: 2
      name: otherservices
      services:
        - search
        - eventing
        - analytics
      pod:
        resources:
          limits:
            cpu: "5"
            memory: 10Gi
          requests:
            cpu: "2"
            memory: 5Gi
        volumeMounts:
          data: couchbase
          default: couchbase

When I run the installer with --dry-run --debug, it throws the warning below:

2019/12/15 03:12:39 Warning: Merging destination map for chart 'couchbase-cluster'. Cannot overwrite table item 'servers', with non table value: map[all_services:map[pod:map serverGroups: services:[data index query search eventing analytics] size:5]]
REVISION: 1
When I continue with the actual install, it throws the error below:

Error: release filled-beetle failed: admission webhook "couchbase-admission-controller-couchbase-admission-controller.default.svc" denied the request: validation failure list:
data in spec.servers[1].services is required
data in spec.servers[2].services is required
data in spec.servers[3].services is required

I am pretty sure I am missing something in the loop in …\templates\couchbase-cluster.yaml, as it does not like multiple entries for MDS. This section seems to be the culprit:


servers:
{{- range $server, $config := .Values.couchbaseCluster.servers }}
  - name: {{ $server }}
{{ toYaml $config | indent 4 }}
{{- end }}
{{- if .Values.couchbaseTLS.create }}
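Looking at that range, the template iterates a map and uses each key as the server name (name: {{ $server }}), whereas my custom values define servers as a list - which would also explain the merge warning about the chart's default all_services entry. If I am reading it right, the values would need to be keyed by group name instead, something like this (a sketch of my first group only, not yet verified against the chart):

couchbaseCluster:
  servers:
    dataservices:
      size: 3
      services:
        - data
      pod:
        resources:
          limits:
            cpu: "10"
            memory: 30Gi
          requests:
            cpu: "5"
            memory: 20Gi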

Any clue what I am missing to get my MDS installation to go through?
Thanks for your help.

@tommie is the man when it comes to Helm. What I will say is that the DAC errors are because you have a data service mount on non-data service nodes. Yeah, the error is rubbish and is being improved in 2.0.0. But the error is there for your own protection as it’s a waste of money provisioning unused volumes :smiley:
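In other words, drop the data: volume mount from every group that does not run the data service. For the index group that would be something like this fragment (a sketch, keeping your sizing and omitting the resources for brevity):

servers:
    - size: 1
      name: indexservices
      services:
        - index
      pod:
        volumeMounts:
          default: couchbase   # no data: mount here - this group has no data service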

@tommie - could you advise? I would appreciate your help.
Thanks