Couchbase/couchbase-operator cluster status does not change to Available due to dial tcp <IP>:32137: connect: connection refused

Hi Team,

I am new to this forum and to this tool. We just want to experiment with the features that Couchbase supports. I have tried Docker Compose and it works fine. Now I am trying out the Helm and couchbase-operator options, but I am facing the issue below.

Tools used:
minikube v1.26.1 on Microsoft Windows 10 Pro 10.0.19044 Build 19044
Kubernetes v1.24.3 on Docker 20.10.17
Helm v3.9.4

kubectl get cbc
STATUS → Creating

kubectl describe cbc
Full details are attached below; I can see the following error:
Message: dial TCP :32137: connect: connection refused

I used the following commands to install:
helm install my-couchbase couchbase/couchbase-operator --values \myvalues_v3.yaml
and
helm install my-couchbase couchbase/couchbase-operator

I started minikube with the following options:
minikube start --cpus 4 --memory 6192

Kindly help me identify what went wrong here; I am able to run the cluster without Helm.
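For what it's worth, the operator's error message is just a failed TCP dial, so you can reproduce the probe by hand from a shell. This is a minimal sketch, assuming a bash shell with `/dev/tcp` support and `timeout` available; `127.0.0.1:32137` is a placeholder, substitute the node IP and NodePort from your own error message:

```shell
# Hypothetical reproduction of the operator's reachability probe.
# Replace HOST and PORT with the node IP and NodePort from the error.
HOST=127.0.0.1
PORT=32137

if timeout 2 bash -c "exec 3<>/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  RESULT=reachable
else
  RESULT=refused-or-filtered
fi
echo "${HOST}:${PORT} is ${RESULT}"
```

If this prints `refused-or-filtered` from inside the cluster (or from wherever the operator runs), nothing is listening on that node IP and port, which matches the error in the status condition.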

# Default values for couchbase-operator chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Select what to install
install:
  # -- Install the couchbase operator
  couchbaseOperator: true
  # -- Install the admission controller
  admissionController: true
  # -- Install couchbase cluster
  couchbaseCluster: true
  # -- Install sync gateway
  syncGateway: false

# ######################################################### cluster ###############################################
# @default -- will be filled in as below
# -- Controls the generation of the CouchbaseCluster CRD

cluster:
  name: dev-cluster
  cluster:
    # -- AnalyticsServiceMemQuota is the amount of memory that should be
    # allocated to the analytics service. This value is per-pod, and only
    # applicable to pods belonging to server classes running the analytics
    # service.  This field must be a quantity greater than or equal to 1Gi.
    # This field defaults to 1Gi.  More info:
    # https://kubernetes.io/docs/concepts/configuration/manage-resources-
    # containers/#resource-units-in-kubernetes
    #analyticsServiceMemoryQuota: 256Mi
    dataServiceMemoryQuota: 256Mi
    eventingServiceMemoryQuota: 256Mi
    indexServiceMemoryQuota: 256Mi
    #searchServiceMemoryQuota: 256Mi
  # -- Security defines Couchbase cluster security options such as the
  # administrator account username and password, and user RBAC settings.
  security:
    adminSecret: ""
    # -- Cluster administrator username
    username: Administrator
    # -- Cluster administrator password, auto-generated when empty
    password: admin@123
    # -- RBAC is the options provided for enabling and selecting RBAC User
    # resources to manage.
    rbac:
      # -- Managed defines whether RBAC is managed by us or the clients.
      managed: true

  # -- Servers defines server classes for the Operator to provision and manage.
  # A server class defines what services are running and how many members make
  # up that class.  Specifying multiple server classes allows the Operator to
  # provision clusters with Multi-Dimensional Scaling (MDS).  At least one
  # server class must be defined, and at least one server class must be running
  # the data service.
  servers:
    # -- Name for the server configuration. It must be unique.
    default:
      # -- AutoscaledEnabled defines whether the autoscaling feature is enabled
      # for this class. When true, the Operator will create a
      # CouchbaseAutoscaler resource for this server class.  The
      # CouchbaseAutoscaler implements the Kubernetes scale API and can be
      # controlled by the Kubernetes horizontal pod autoscaler (HPA).
      autoscaleEnabled: false
      # -- Env allows the setting of environment variables in the Couchbase
      # server container.
      env: []
      # -- EnvFrom allows the setting of environment variables in the Couchbase
      # server container.
      envFrom: []
      # -- Pod defines a template used to create pod for each Couchbase server
      # instance.  Modifying pod metadata such as labels and annotations will
      # update the pod in-place.  Any other modification will result in a
      # cluster upgrade in order to fulfill the request. The Operator reserves
      # the right to modify or replace any field.  More info:
      # https://kubernetes.io/docs/reference/generated/kubernetes-
      # api/v1.21/#pod-v1-core
      pod:
        spec: {}
      services:
        - data
        - index
        - query
        #- search
        #- analytics
        - eventing
      size: 1

# ######################################################### buckets ###############################################
# couchbase buckets to create
# disable default bucket creation by setting
# couchbaseBuckets.default: null
buckets:
  # A bucket to create projects
  #default: null
  # A bucket to create projects
  projects:
    # Name of the bucket
    name: projects
    # The type of bucket to use
    #type: couchbase
    # -- The type of the bucket to create by default. Removed from CRD as only
    # used by Helm.
    #kind: couchbase
    # The amount of memory that should be allocated to the bucket
    memoryQuota: 128Mi
    # The number of bucket replicas
    replicas: 1
    # The priority when compared to other buckets
    ioPriority: high
    # The bucket eviction policy which determines behavior during expire and high mem usage
    evictionPolicy: fullEviction
    # The bucket's conflict resolution mechanism; which is to be used if a conflict occurs during Cross Data-Center Replication (XDCR). Sequence-based and timestamp-based mechanisms are supported.
    conflictResolution: seqno
    # The enable flush option denotes whether the data in the bucket can be flushed
    enableFlush: true
    # Enable Index replica specifies whether or not to enable view index replicas for this bucket.
    enableIndexReplica: false
    # data compression mode for the bucket to run in [off, passive, active]
    compressionMode: "passive"

# ######################################################### RBAC users ###############################################
# Users to create for couchbase RBAC.
# If 'autobind' is set, then Users are automatically created
# alongside groups with specified roles.  To manually create
# groups and bind users then set 'autobind' to 'false' and
# specify 'groups' and 'rolebindings' resources
#users: {}
users:
  #
  # Uncomment to create an example user named 'developer'
  #
  admin:
    # Automatically bind user to a Group resource.
    # See example below of 'developer' user.
    # When autobind is 'true' then the user is
    # created and automatically bound to a group named 'developer'.
    autobind: true
    # password to use for user authentication
    # (alternatively use authSecret)
    password: password@123
    # optional secret to use containing user password
    authSecret:
    # domain of user authentication
    authDomain: local
    # roles attributed to group
    roles:
      - name: mobile_sync_gateway
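One thing worth checking (this is a guess based on the `Wait For Address Reachable` settings and the `Exposed Features` list visible in the describe output below, not a confirmed fix): the chart exposes client/XDCR features via NodePort services by default, and the operator then waits for `<nodeIP>:<nodePort>` to become reachable. On minikube the node IP is often not reachable that way, so the dial fails with connection refused. Disabling the exposed features in the values file should skip that check; the key names are assumed from the CouchbaseCluster spec, so verify them against the chart's default values:

```yaml
# Hypothetical override for myvalues_v3.yaml: stop exposing features via
# NodePort so the operator does not probe <nodeIP>:<nodePort>.
cluster:
  networking:
    exposeAdminConsole: true
    exposedFeatures: []
```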

kubectl-describe-cbc


 # 20/09/2022   12:55.39   /home/mobaxterm  kubectl describe cbc
Name:         dev-cluster
Namespace:    default
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: my-couchbase
              meta.helm.sh/release-namespace: default
API Version:  couchbase.com/v2
Kind:         CouchbaseCluster
Metadata:
  Creation Timestamp:  2022-09-20T10:40:46Z
  Finalizers:
    foregroundDeletion
  Generation:  8
  Managed Fields:
    API Version:  couchbase.com/v2
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:meta.helm.sh/release-name:
          f:meta.helm.sh/release-namespace:
        f:labels:
          .:
          f:app.kubernetes.io/managed-by:
      f:spec:
        .:
        f:autoResourceAllocation:
          .:
          f:cpuLimits:
          f:cpuRequests:
          f:overheadPercent:
        f:backup:
          .:
          f:image:
          f:managed:
          f:objectEndpoint:
          f:serviceAccountName:
        f:buckets:
          .:
          f:managed:
        f:cluster:
          .:
          f:analyticsServiceMemoryQuota:
          f:autoCompaction:
            .:
            f:databaseFragmentationThreshold:
              .:
              f:percent:
            f:timeWindow:
            f:viewFragmentationThreshold:
              .:
              f:percent:
          f:autoFailoverMaxCount:
          f:dataServiceMemoryQuota:
          f:eventingServiceMemoryQuota:
          f:indexServiceMemoryQuota:
          f:indexStorageSetting:
          f:indexer:
            .:
            f:logLevel:
            f:maxRollbackPoints:
            f:memorySnapshotInterval:
            f:stableSnapshotInterval:
            f:storageMode:
          f:query:
            .:
            f:backfillEnabled:
            f:temporarySpace:
          f:searchServiceMemoryQuota:
        f:image:
        f:logging:
          .:
          f:audit:
            .:
            f:garbageCollection:
              .:
              f:sidecar:
                .:
                f:image:
            f:rotation:
              .:
              f:size:
          f:server:
            .:
            f:configurationName:
            f:manageConfiguration:
            f:sidecar:
              .:
              f:configurationMountPath:
              f:image:
        f:monitoring:
        f:networking:
          .:
          f:adminConsoleServiceTemplate:
            .:
            f:spec:
              .:
              f:type:
          f:adminConsoleServiceType:
          f:adminConsoleServices:
            .:
            v:"data":
          f:exposeAdminConsole:
          f:exposedFeatureServiceTemplate:
            .:
            f:spec:
              .:
              f:type:
          f:exposedFeatureServiceType:
          f:exposedFeatures:
            .:
            v:"client":
            v:"xdcr":
        f:security:
          .:
          f:adminSecret:
          f:rbac:
            .:
            f:managed:
        f:securityContext:
          .:
          f:fsGroup:
          f:runAsNonRoot:
          f:runAsUser:
          f:windowsOptions:
        f:servers:
          .:
          k:{"name":"default"}:
            .:
            f:name:
            f:pod:
              .:
              f:spec:
            f:services:
              .:
              v:"data":
              v:"eventing":
              v:"index":
              v:"query":
            f:size:
        f:xdcr:
    Manager:      helm
    Operation:    Update
    Time:         2022-09-20T10:40:46Z
    API Version:  couchbase.com/v2
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"foregroundDeletion":
      f:spec:
        f:cluster:
          f:autoCompaction:
            f:tombstonePurgeInterval:
          f:autoFailoverOnDataDiskIssuesTimePeriod:
          f:autoFailoverTimeout:
        f:logging:
          f:audit:
            f:garbageCollection:
              f:sidecar:
                f:age:
                f:interval:
            f:rotation:
              f:interval:
        f:networking:
          f:adminConsoleServiceTemplate:
            f:metadata:
          f:exposedFeatureServiceTemplate:
            f:metadata:
          f:waitForAddressReachable:
          f:waitForAddressReachableDelay:
        f:servers:
          k:{"name":"default"}:
            f:pod:
              f:metadata:
            f:resources:
      f:status:
        .:
        f:allocations:
        f:clusterId:
        f:conditions:
        f:currentVersion:
        f:members:
          .:
          f:ready:
        f:size:
    Manager:         Go-http-client
    Operation:       Update
    Time:            2022-09-20T10:43:38Z
  Resource Version:  2579
  UID:               a6189409-2483-40ce-abdf-80c79d83e9b9
Spec:
  Auto Resource Allocation:
    Cpu Limits:        4
    Cpu Requests:      2
    Overhead Percent:  25
  Backup:
    Image:    couchbase/operator-backup:1.2.0
    Managed:  true
    Object Endpoint:
    Service Account Name:  couchbase-backup
  Buckets:
    Managed:  true
  Cluster:
    Analytics Service Memory Quota:  1Gi
    Auto Compaction:
      Database Fragmentation Threshold:
        Percent:  30
      Time Window:
      Tombstone Purge Interval:  72h0m0s
      View Fragmentation Threshold:
        Percent:                                    30
    Auto Failover Max Count:                        3
    Auto Failover On Data Disk Issues Time Period:  2m0s
    Auto Failover Timeout:                          2m0s
    Data Service Memory Quota:                      256Mi
    Eventing Service Memory Quota:                  256Mi
    Index Service Memory Quota:                     256Mi
    Index Storage Setting:                          memory_optimized
    Indexer:
      Log Level:                 info
      Max Rollback Points:       2
      Memory Snapshot Interval:  200ms
      Stable Snapshot Interval:  5s
      Storage Mode:              memory_optimized
    Query:
      Backfill Enabled:           true
      Temporary Space:            5Gi
    Search Service Memory Quota:  256Mi
  Image:                          couchbase/server:7.0.2
  Logging:
    Audit:
      Garbage Collection:
        Sidecar:
          Age:       1h0m0s
          Image:     busybox:1.33.1
          Interval:  20m0s
      Rotation:
        Interval:  15m0s
        Size:      20Mi
    Server:
      Configuration Name:    fluent-bit-config
      Manage Configuration:  true
      Sidecar:
        Configuration Mount Path:  /fluent-bit/config/
        Image:                     couchbase/fluent-bit:1.2.1
  Monitoring:
  Networking:
    Admin Console Service Template:
      Metadata:
      Spec:
        Type:                    NodePort
    Admin Console Service Type:  NodePort
    Admin Console Services:
      data
    Expose Admin Console:  true
    Exposed Feature Service Template:
      Metadata:
      Spec:
        Type:                      NodePort
    Exposed Feature Service Type:  NodePort
    Exposed Features:
      client
      xdcr
    Wait For Address Reachable:        10m0s
    Wait For Address Reachable Delay:  2m0s
  Security:
    Admin Secret:  auth-my-couchbase-dev-cluster
    Rbac:
      Managed:  true
  Security Context:
    Fs Group:         1000
    Run As Non Root:  true
    Run As User:      1000
    Windows Options:
  Servers:
    Name:  default
    Pod:
      Metadata:
      Spec:
    Resources:
    Services:
      data
      index
      query
      eventing
    Size:  1
  Xdcr:
Status:
  Allocations:
    Allocated Memory:             768Mi
    Data Service Allocation:      256Mi
    Eventing Service Allocation:  256Mi
    Index Service Allocation:     256Mi
    Name:                         default
  Cluster Id:                     07e9980ef7a136a2d544818dfc7e5816
  Conditions:
    Last Transition Time:  2022-09-20T10:41:48Z
    Last Update Time:      2022-09-20T10:41:48Z
    Message:               The cluster is being created
    Reason:                Creating
    Status:                False
    Type:                  Available
    Last Transition Time:  2022-09-20T10:43:38Z
    Last Update Time:      2022-09-20T10:43:38Z
    Message:               Data is equally distributed across all nodes in the cluster
    Reason:                Balanced
    Status:                True
    Type:                  Balanced
    Last Transition Time:  2022-09-20T10:53:43Z
    Last Update Time:      2022-09-20T10:53:43Z
    Message:               dial TCP <IP>:30772: connect: connection refused
    Reason:                ErrorEncountered
    Status:                True
    Type:                  Error
  Current Version:         7.0.2
  Members:
    Ready:
      dev-cluster-0000
  Size:  1
Events:
  Type    Reason          Age   From  Message
  ----    ------          ----  ----  -------
  Normal  ServiceCreated  14m         Service for admin console `dev-cluster-ui` was created
  Normal  NewMemberAdded  13m         New member dev-cluster-0000 added to cluster

deployment-my-couchbase-couchbase-operator.log

  20/09/2022   12:13.29   /home/mobaxterm  kubectl logs -f deployment/my-couchbase-couchbase-operator  --namespace default
{"level":"info","ts":1663670465.7079046,"logger":"main","msg":"couchbase-operator","version":"2.3.2 (build 104)","revision":"99b068a8ebf061655c6e44c10bd
8d39e6d131003"}
{"level":"info","ts":1663670465.8621166,"logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":"0.0.0.0:8383"}
{"level":"info","ts":1663670465.8632076,"msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8383"}
{"level":"info","ts":1663670465.8633692,"msg":"attempting to acquire leader lease default/couchbase-operator...\n"}
{"level":"info","ts":1663670465.8755744,"msg":"successfully acquired lease default/couchbase-operator\n"}
{"level":"info","ts":1663670465.875838,"logger":"controller.couchbase-controller","msg":"Starting EventSource","source":"kind source: *v2.CouchbaseClust
er"}
{"level":"info","ts":1663670465.8759584,"logger":"controller.couchbase-controller","msg":"Starting Controller"}
{"level":"info","ts":1663670465.9771447,"logger":"controller.couchbase-controller","msg":"Starting workers","worker count":4}
{"level":"info","ts":1663670465.9773347,"logger":"cluster","msg":"Watching new cluster","cluster":"default/dev-cluster"}
{"level":"info","ts":1663670469.9967425,"logger":"KubeAPIWarningLogger","msg":"policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable i
n v1.25+; use policy/v1 PodDisruptionBudget"}
{"level":"info","ts":1663670479.0054617,"logger":"KubeAPIWarningLogger","msg":"batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use
 batch/v1 CronJob"}
{"level":"info","ts":1663670488.070584,"logger":"cluster","msg":"Janitor starting","cluster":"default/dev-cluster"}
{"level":"info","ts":1663670493.0707247,"logger":"cluster","msg":"Couchbase client starting","cluster":"default/dev-cluster"}
{"level":"info","ts":1663670508.0730312,"logger":"cluster","msg":"Running","cluster":"default/dev-cluster"}
{"level":"info","ts":1663670508.0811977,"logger":"cluster","msg":"Resource updated","cluster":"default/dev-cluster","diff":"  string(\n- \t\"size: 0\\n\
",\n+ \t\"members: {}\\nsize: 0\\n\",\n  )\n"}
{"level":"info","ts":1663670508.1695213,"logger":"cluster","msg":"UI service created","cluster":"default/dev-cluster","name":"dev-cluster-ui"}
{"level":"info","ts":1663670508.1895282,"logger":"cluster","msg":"Resource updated","cluster":"default/dev-cluster","diff":"  string(\n- \t\"members: {}
\\nsize: 0\\n\",\n+ \t\"allocations:\\n- allocatedMemory: 768Mi\\n  dataServiceAllocation: 256Mi\\n  eventingServiceAllocation: 256Mi\\n  indexServiceAl
location: 256Mi\\n  name: default\\nconditions:\\n- lastTransitionTime: \\\"2022-09-20T10:41:48Z\\\"\\n  lastUpdateTime: \\\"2022-09-20T10:41:48Z\\\"\\n
  mess\"...,\n  )\n"}
{"level":"info","ts":1663670508.2301764,"logger":"cluster","msg":"Cluster does not exist so the operator is attempting to create it","cluster":"default/
dev-cluster"}
{"level":"info","ts":1663670513.2347631,"logger":"cluster","msg":"Couchbase client starting","cluster":"default/dev-cluster"}
{"level":"info","ts":1663670530.2664268,"logger":"cluster","msg":"Resource updated","cluster":"default/dev-cluster","diff":"  (\n  \t\"\"\"\n  \t... //
11 identical lines\n  \t  status: \"False\"\n  \t  type: Available\n+ \tcurrentVersion: 7.0.2\n  \tmembers: {}\n  \tsize: 0\n  \t\"\"\"\n  )\n"}
{"level":"info","ts":1663670530.30656,"logger":"kubernetes","msg":"Creating pod","cluster":"default/dev-cluster","name":"dev-cluster-0000","image":"couc
hbase/server:7.0.2"}
{"level":"info","ts":1663670615.9353526,"logger":"cluster","msg":"Resource updated","cluster":"default/dev-cluster","diff":"  (\n  \t\"\"\"\n  \t... //
12 identical lines\n  \t  type: Available\n  \tcurrentVersion: 7.0.2\n- \tmembers: {}\n- \tsize: 0\n+ \tmembers:\n+ \t  ready:\n+ \t  - dev-cluster-0000
\n+ \tsize: 1\n  \t\"\"\"\n  )\n"}
{"level":"info","ts":1663670615.9774024,"logger":"cluster","msg":"Initial pod creating","cluster":"default/dev-cluster"}
{"level":"info","ts":1663670618.0653918,"logger":"cluster","msg":"Operator added member","cluster":"default/dev-cluster","name":"dev-cluster-0000"}
{"level":"info","ts":1663670618.158098,"logger":"cluster","msg":"Resource updated","cluster":"default/dev-cluster","diff":"  (\n  \t\"\"\"\n  \t... // 4
 identical lines\n  \t  indexServiceAllocation: 256Mi\n  \t  name: default\n+ \tclusterId: 07e9980ef7a136a2d544818dfc7e5816\n  \tconditions:\n  \t- last
TransitionTime: \"2022-09-20T10:41:48Z\"\n  \t... // 3 identical lines\n  \t  status: \"False\"\n  \t  type: Available\n+ \t- lastTransitionTime: \"2022
-09-20T10:43:38Z\"\n+ \t  lastUpdateTime: \"2022-09-20T10:43:38Z\"\n+ \t  message: Data is equally distributed across all nodes in the cluster\n+ \t  re
ason: Balanced\n+ \t  status: \"True\"\n+ \t  type: Balanced\n  \tcurrentVersion: 7.0.2\n  \tmembers:\n  \t... // 4 identical lines\n  \t\"\"\"\n  )\n"}

{"level":"info","ts":1663670623.962256,"logger":"cluster","msg":"Created pod service","cluster":"default/dev-cluster","name":"dev-cluster-0000"}
{"level":"info","ts":1663670623.9651093,"logger":"cluster","msg":"Waiting for DNS propagation","cluster":"default/dev-cluster"}
{"level":"info","ts":1663670623.9651775,"logger":"cluster","msg":"Polling for DNS availability","cluster":"default/dev-cluster","service":"dev-cluster-0
000"}
{"level":"info","ts":1663671223.9665668,"logger":"cluster","msg":"Reconciliation failed","cluster":"default/dev-cluster","error":"dial tcp 192.168.67.2:
30772: connect: connection refused","stack":"github.com/couchbase/couchbase-operator/pkg/util/netutil.WaitForHostPort\n\tgithub.com/couchbase/couchbase-
operator/pkg/util/netutil/netutil.go:37\ngithub.com/couchbase/couchbase-operator/pkg/cluster.waitAlternateAddressReachable\n\tgithub.com/couchbase/couch
base-operator/pkg/cluster/networking.go:76\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcileMemberAlternateAddresses\n\tgithub.c
om/couchbase/couchbase-operator/pkg/cluster/networking.go:228\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*ReconcileMachine).handleNodeService
s\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/nodereconcile.go:1009\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*ReconcileMachine).
exec\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/nodereconcile.go:307\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconci
leMembers\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/reconcile.go:261\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconc
ile\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/reconcile.go:173\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).runReconcile
\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:481\ngithub.com/couchbase/couchbase-operator/pkg/cluster.New\n\tgithub.com/couchbase/
couchbase-operator/pkg/cluster/cluster.go:180\ngithub.com/couchbase/couchbase-operator/pkg/controller.(*CouchbaseClusterReconciler).Reconcile\n\tgithub.
com/couchbase/couchbase-operator/pkg/controller/controller.go:74\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\tsigs
.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).recon
cileHandler\n\tsigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.
(*Controller).processNextWorkItem\n\tsigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pk
g/internal/controller.(*Controller).Start.func2.2\n\tsigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227"}
{"level":"info","ts":1663671223.9757318,"logger":"cluster","msg":"Resource updated","cluster":"default/dev-cluster","diff":"  (\n  \t\"\"\"\n  \t... //
18 identical lines\n  \t  status: \"True\"\n  \t  type: Balanced\n+ \t- lastTransitionTime: \"2022-09-20T10:53:43Z\"\n+ \t  lastUpdateTime: \"2022-09-20
T10:53:43Z\"\n+ \t  message: 'dial tcp 192.168.67.2:30772: connect: connection refused'\n+ \t  reason: ErrorEncountered\n+ \t  status: \"True\"\n+ \t  t
ype: Error\n  \tcurrentVersion: 7.0.2\n  \tmembers:\n  \t... // 4 identical lines\n  \t\"\"\"\n  )\n"}
{"level":"info","ts":1663671229.6775017,"logger":"cluster","msg":"Waiting for DNS propagation","cluster":"default/dev-cluster"}
{"level":"info","ts":1663671229.677596,"logger":"cluster","msg":"Polling for DNS availability","cluster":"default/dev-cluster","service":"dev-cluster-00
00"}
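The stack trace shows the failure originates in `netutil.WaitForHostPort`, which is essentially a plain TCP connect retried until a deadline. A rough sketch of the same check in Python (the host/port are placeholders, and the retry interval and deadline are my assumptions, not the operator's actual values):

```python
import socket
import time


def wait_for_host_port(host: str, port: int, deadline_s: float = 5.0,
                       interval_s: float = 1.0) -> bool:
    """Retry a plain TCP connect until it succeeds or the deadline passes,
    roughly what the operator's WaitForHostPort does (timings are guesses)."""
    end = time.monotonic() + deadline_s
    while time.monotonic() < end:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:  # includes ConnectionRefusedError
            time.sleep(interval_s)
    return False


# A listening socket on an ephemeral port is reachable...
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
open_port = srv.getsockname()[1]
print(wait_for_host_port("127.0.0.1", open_port))  # True

# ...while a closed port keeps refusing until the deadline expires, which is
# the state the operator reports as "connect: connection refused".
srv.close()
print(wait_for_host_port("127.0.0.1", open_port, deadline_s=2.0))  # False
```

So the operator keeps dialing the node IP and NodePort from the error message until `waitForAddressReachable` (10m in the spec above) runs out, and only then records the error condition.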

I am facing the exact same issue; can someone please have a look?

I created a kind cluster with the following config (fresh K8s v1.27.3):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev-cluster
nodes:
- role: control-plane
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- role: worker
  image: kindest/node:v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72

Helm version:

❯ helm version
version.BuildInfo{Version:"v3.14.0", GitCommit:"3fc9f4b2638e76f26739cd77c7017139be81d0ea", GitTreeState:"clean", GoVersion:"go1.21.5"}

I installed the Helm chart with the default config:

helm install couchbase-cluster --set cluster.name=couchbase-cluster couchbase/couchbase-operator

Status of the pods:

❯ kubectl get pods
NAME                                                      READY   STATUS             RESTARTS          AGE
couchbase-cluster-0000                                    1/1     Running            0                 30m
couchbase-cluster-0001                                    1/1     Running            0                 30m
couchbase-cluster-0002                                    1/1     Running            0                 30m

The operator log shows that there is a reconciliation issue:

{"level":"info","ts":"2024-02-02T21:35:55Z","logger":"cluster","msg":"Reconciliation failed","cluster":"default/couchbase","error":"dial tcp 172.20.0.2:30582: connect: connection refused","stack":"github.com/couchbase/couchbase-operator/pkg/util/netutil.WaitForHostPort\n\tgithub.com/couchbase/couchbase-operator/pkg/util/netutil/netutil.go:37\ngithub.com/couchbase/couchbase-operator/pkg/cluster.waitAlternateAddressReachable\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/networking.go:76\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcileMemberAlternateAddresses\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/networking.go:228\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*ReconcileMachine).handleNodeServices\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/nodereconcile.go:1268\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*ReconcileMachine).exec\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/nodereconcile.go:309\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcileMembers\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/reconcile.go:267\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcile\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/reconcile.go:177\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).runReconcile\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:492\ngithub.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).Update\n\tgithub.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:535\ngithub.com/couchbase/couchbase-operator/pkg/controller.(*CouchbaseClusterReconciler).Reconcile\n\tgithub.com/couchbase/couchbase-operator/pkg/controller/controller.go:90\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\tsigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandle
r\n\tsigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tsigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\tsigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:227"}

And it just hangs on the same error over and over again. The bucket never gets created and the pods do not switch to the ready state.

If I set the number of Couchbase Server nodes to 1, it works, but if I keep it at 3, the default in the Helm chart, it fails.

I also tried the same config on a real cluster with 3 bare-metal worker nodes with L3 connectivity and Cilium as the CNI; it fails with exactly the same problem.