Can't get autonomous operator 2.0.3 to work with kubernetes v1.18.12+k3s1

Here is what I’ve done:

kubectl create namespace couchbase
kubectl create -f crd.yaml
bin/cbopcfg --namespace couchbase | kubectl create -f - -n couchbase

kubectl apply -f couchbase-cluster.yaml -n couchbase

couchbase-cluster.yaml

apiVersion: v1
kind: Secret
metadata:
  name: cb-example-auth
type: Opaque
data:
  username: QWRtaW5pc3RyYXRvcg== # Administrator
  password: cGFzc3dvcmQ= # password
---
apiVersion: couchbase.com/v2
kind: CouchbaseBucket
metadata:
  name: default
---
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  image: couchbase/server:6.6.0
  security:
    adminSecret: cb-example-auth
  buckets:
    managed: true
  networking: # enables node port services
    exposeAdminConsole: true
    exposedFeatures:
    - client
  servers:
  - size: 1
    name: some_services
    services:
    - data
    - index
    - query
    - search
    - eventing
    - analytics
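As a sanity check on the Secret: the two data values are just the base64 encodings of the plain credentials, which you can verify from a shell:

```shell
# base64-encode the credentials used in the cb-example-auth Secret
printf '%s' 'Administrator' | base64   # QWRtaW5pc3RyYXRvcg==
printf '%s' 'password' | base64        # cGFzc3dvcmQ=
```

(`printf` is used instead of `echo -n` so no trailing newline sneaks into the encoded value.)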

Connecting to the UI via port-forward barely works at all (it occasionally succeeds, but loses the connection within 30 seconds).
Connecting to the UI via NodePort lagged at first, but after 10-15 minutes it seems to be okay.
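For reference, the NodePort assigned to the admin console can be read off the UI service the operator created (named cb-example-ui, per the operator events further down):

```shell
# Show the admin-console service and its NodePort mapping
kubectl get service cb-example-ui -n couchbase
# the UI is then reachable at http://<node-ip>:<nodePort>
# (the cluster status later in this post shows Admin Console Port: 31844)
```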

My app (using the Java SDK) did connect with both port-forward and NodePort, but I’m getting errors:

  • on create (upsert) of an object: AmbiguousTimeoutException: UpsertRequest, Reason: TIMEOUT
logs

2020-11-19 16:03:48.731 INFO 34256 --- [ cb-events] com.couchbase.node : [com.couchbase.node][NodeConnectedEvent] Node connected {"alternateRemote":"172.25.18.66","coreId":"0x788874b500000002","managerPort":"8091","remote":"cb-example-0000.cb-example.couchbase.svc"}
2020-11-19 16:03:48.733 INFO 34256 --- [ cb-events] com.couchbase.node : [com.couchbase.node][NodeDisconnectedEvent][461us] Node disconnected {"coreId":"0x788874b500000002","managerPort":"8091","remote":"172.25.18.66"}
2020-11-19 16:03:48.734 INFO 34256 --- [ cb-events] com.couchbase.node : [com.couchbase.node][NodeConnectedEvent] Node connected {"coreId":"0x788874b500000002","managerPort":"8091","remote":"172.25.18.66"}
2020-11-19 16:03:48.991 INFO 34256 --- [ cb-events] com.couchbase.node : [com.couchbase.node][NodeConnectedEvent] Node connected {"alternateRemote":"172.25.18.66","coreId":"0x788874b500000001","managerPort":"8091","remote":"cb-example-0000.cb-example.couchbase.svc"}
2020-11-19 16:03:48.994 INFO 34256 --- [ cb-events] com.couchbase.node : [com.couchbase.node][NodeDisconnectedEvent][632us] Node disconnected {"coreId":"0x788874b500000001","managerPort":"8091","remote":"172.25.18.66"}
2020-11-19 16:03:48.994 INFO 34256 --- [ cb-events] com.couchbase.node : [com.couchbase.node][NodeConnectedEvent] Node connected {"coreId":"0x788874b500000001","managerPort":"8091","remote":"172.25.18.66"}
2020-11-19 16:03:50.607 INFO 34256 --- [ cb-events] com.couchbase.node : [com.couchbase.node][NodeDisconnectedEvent][1345us] Node disconnected {"coreId":"0x788874b500000002","managerPort":"8091","remote":"172.25.18.66"}
2020-11-19 16:03:50.608 INFO 34256 --- [ cb-events] com.couchbase.core : [com.couchbase.core][BucketOpenedEvent][3118ms] Opened bucket "default" {"alternateIdentifier":"external","coreId":"0x788874b500000002"}
2020-11-19 16:03:50.776 INFO 34256 --- [ cb-events] com.couchbase.node : [com.couchbase.node][NodeDisconnectedEvent][556us] Node disconnected {"coreId":"0x788874b500000001","managerPort":"8091","remote":"172.25.18.66"}
2020-11-19 16:03:50.778 INFO 34256 --- [ cb-events] com.couchbase.core : [com.couchbase.core][BucketOpenedEvent][3450ms] Opened bucket "default" {"alternateIdentifier":"external","coreId":"0x788874b500000001"}

com.couchbase.client.core.error.AmbiguousTimeoutException: UpsertRequest, Reason: TIMEOUT
{"cancelled":true,"completed":true,"coreId":"0x788874b500000002","idempotent":false,"reason":"TIMEOUT","requestId":3,"requestType":"UpsertRequest","retried":14,"retryReasons":["BUCKET_OPEN_IN_PROGRESS"],"service":{"bucket":"default","collection":"_default","documentId":"test-doc","opaque":"0x12","scope":"_default","type":"kv"},"timeoutMs":2500,"timings":{"encodingMicros":25957,"totalMicros":2505923}}
at com.couchbase.client.core.msg.BaseRequest.cancel(BaseRequest.java:163)
at com.couchbase.client.core.Timer.lambda$register$2(Timer.java:157)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:672)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:747)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:472)
at com.couchbase.client.core.deps.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:832)
Suppressed: java.lang.Exception: #block terminated with an error
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:139)
at reactor.core.publisher.Mono.block(Mono.java:1703)
at com.example.sourceoftruth.ConnectToK8sCouchbase.saveRole(ConnectToK8sCouchbase.java:64)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:71)
at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)

  • and on query: PlanningFailureException: The server failed planning the query
logs

com.couchbase.client.core.error.DocumentNotFoundException: Document with the given id not found
at com.couchbase.client.core.error.DefaultErrorUtil.keyValueStatusToException(DefaultErrorUtil.java:45) ~[core-io-2.0.11.jar:na]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ Handler org.aircloud.sourceoftruth.roles.RolesController#bucket() [DispatcherHandler]
|_ checkpoint ⇢ HTTP GET "/test-doc" [ExceptionHandlingWebHandler]
Stack trace:
at com.couchbase.client.core.error.DefaultErrorUtil.keyValueStatusToException(DefaultErrorUtil.java:45) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.java.kv.GetAccessor.lambda$get$0(GetAccessor.java:63) ~[java-client-3.0.10.jar:na]
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2137) ~[na:na]
at com.couchbase.client.core.msg.BaseRequest.succeed(BaseRequest.java:143) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.io.netty.kv.KeyValueMessageHandler.decodeAndComplete(KeyValueMessageHandler.java:322) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.io.netty.kv.KeyValueMessageHandler.decode(KeyValueMessageHandler.java:301) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.io.netty.kv.KeyValueMessageHandler.channelRead(KeyValueMessageHandler.java:228) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.io.netty.kv.MemcacheProtocolVerificationHandler.channelRead(MemcacheProtocolVerificationHandler.java:84) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.handler.flush.FlushConsolidationHandler.channelRead(FlushConsolidationHandler.java:152) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.kqueue.AbstractKQueueStreamChannel$KQueueStreamUnsafe.readReady(AbstractKQueueStreamChannel.java:544) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.kqueue.AbstractKQueueChannel$AbstractKQueueUnsafe.readReady(AbstractKQueueChannel.java:381) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoop.processReady(KQueueEventLoop.java:211) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.channel.kqueue.KQueueEventLoop.run(KQueueEventLoop.java:289) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[core-io-2.0.11.jar:na]
at com.couchbase.client.core.deps.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[core-io-2.0.11.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]

kubectl describe of the CouchbaseCluster

k describe couchbasecluster cb-example -n couchbase
Name: cb-example
Namespace: couchbase
Labels:
Annotations:
API Version: couchbase.com/v2
Kind: CouchbaseCluster
Metadata:
Creation Timestamp: 2020-11-19T12:23:18Z
Generation: 4522
Managed Fields:
API Version: couchbase.com/v2
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:buckets:
.:
f:managed:
f:image:
f:networking:
.:
f:exposeAdminConsole:
f:exposedFeatures:
f:security:
.:
f:adminSecret:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2020-11-19T12:23:18Z
API Version: couchbase.com/v2
Fields Type: FieldsV1
fieldsV1:
f:spec:
f:backup:
f:cluster:
f:autoCompaction:
f:timeWindow:
f:tombstonePurgeInterval:
f:autoFailoverOnDataDiskIssuesTimePeriod:
f:autoFailoverTimeout:
f:logging:
f:security:
f:rbac:
f:servers:
f:softwareUpdateNotifications:
f:xdcr:
f:status:
.:
f:adminConsolePort:
f:adminConsolePortSSL:
f:buckets:
f:clusterId:
f:conditions:
f:currentVersion:
f:exposedFeatures:
f:members:
.:
f:ready:
f:phase:
f:size:
Manager: couchbase-operator
Operation: Update
Time: 2020-11-19T13:09:44Z
Resource Version: 131242
Self Link: /apis/couchbase.com/v2/namespaces/couchbase/couchbaseclusters/cb-example
UID: 9517fc4b-dcaf-4891-8623-10bc71b01b49
Spec:
Backup:
Buckets:
Managed: true
Cluster:
Analytics Service Memory Quota: 1Gi
Auto Compaction:
Database Fragmentation Threshold:
Percent: 30
Time Window:
Tombstone Purge Interval: 72h0m0s
View Fragmentation Threshold:
Percent: 30
Auto Failover Max Count: 3
Auto Failover On Data Disk Issues Time Period: 2m0s
Auto Failover Timeout: 2m0s
Data Service Memory Quota: 256Mi
Eventing Service Memory Quota: 256Mi
Index Service Memory Quota: 256Mi
Index Storage Setting: memory_optimized
Search Service Memory Quota: 256Mi
Image: couchbase/server:6.6.0
Logging:
Networking:
Admin Console Service Type: NodePort
Expose Admin Console: true
Exposed Feature Service Type: NodePort
Exposed Features:
client
Security:
Admin Secret: cb-example-auth
Rbac:
Security Context:
Fs Group: 1000
Servers:
Name: some_services
Resources:
Services:
data
index
query
Size: 1
Software Update Notifications: false
Xdcr:
Status:
Admin Console Port: 31844
Admin Console Port SSL: 32028
Buckets:
Compression Mode: passive
Conflict Resolution: seqno
Enable Flush: false
Enable Index Replica: false
Eviction Policy: valueOnly
Io Priority: low
Memory Quota: 100
Name: default
Password:
Replicas: 1
Type: couchbase
Cluster Id: ab29c11403e454be9fb933ca48d6087f
Conditions:
Last Transition Time: 2020-11-19T13:09:44Z
Last Update Time: 2020-11-19T13:09:44Z
Message: Data is equally distributed across all nodes in the cluster
Reason: Balanced
Status: True
Type: Balanced
Last Transition Time: 2020-11-19T13:09:43Z
Last Update Time: 2020-11-19T13:09:43Z
Reason: Available
Status: True
Type: Available
Current Version: 6.6.0
Exposed Features:
client
Members:
Ready:
cb-example-0000
Phase: Running
Size: 1
Events:
Type Reason Age From Message


Normal ServiceCreated 46m Service for admin console cb-example-ui was created
Normal NewMemberAdded 46m New member cb-example-0000 added to cluster
Normal NodeServiceCreated 46m Node service for admin was created
Normal NodeServiceCreated 45m Node service for analytics was created
Normal NodeServiceCreated 45m Node service for data was created
Normal NodeServiceCreated 45m Node service for eventing was created
Normal NodeServiceCreated 45m Node service for index was created
Normal NodeServiceCreated 45m Node service for query was created
Normal NodeServiceCreated 45m Node service for search was created
Normal BucketCreated 45m A new bucket default was created

kubectl describe of the server pod

❯ k describe pod cb-example-0000 -n couchbase
Name: cb-example-0000
Namespace: couchbase
Priority: 0
Node: k3s-agent-couchbase/172.25.18.66
Start Time: Thu, 19 Nov 2020 15:23:35 +0300
Labels: app=couchbase
couchbase_cluster=cb-example
couchbase_node=cb-example-0000
couchbase_node_conf=some_services
couchbase_service_data=enabled
couchbase_service_index=enabled
couchbase_service_query=enabled
Annotations: operator.couchbase.com/version: 2.0.3
pod.couchbase.com/spec:
{"containers":[{"name":"couchbase-server","image":"couchbase/server:6.6.0","ports":[{"name":"admin","containerPort":8091,"protocol":"TCP"}…
server.couchbase.com/version: 6.6.0
Status: Running
IP: 10.42.2.11
IPs:
IP: 10.42.2.11
Controlled By: CouchbaseCluster/cb-example
Containers:
couchbase-server:
Container ID: containerd://1290d4878cb32774d0c9f1932e621dadb526363028e9f3968c0f9357177953c1
Image: couchbase/server:6.6.0
Image ID: docker.io/couchbase/server@sha256:f5bd898916bde9b5de8d278577e830bdfdbc26f314404ae9408a76920c1139a5
Ports: 8091/TCP, 8092/TCP, 8093/TCP, 8094/TCP, 8095/TCP, 8096/TCP, 9100/TCP, 9101/TCP, 9102/TCP, 9103/TCP, 9104/TCP, 9105/TCP, 9110/TCP, 9111/TCP, 9112/TCP, 9113/TCP, 9114/TCP, 9115/TCP, 9116/TCP, 9117/TCP, 9118/TCP, 9119/TCP, 9120/TCP, 9121/TCP, 9122/TCP, 11207/TCP, 11210/TCP, 11211/TCP, 11214/TCP, 11215/TCP, 18091/TCP, 18092/TCP, 18093/TCP, 18094/TCP, 18095/TCP, 18096/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Thu, 19 Nov 2020 15:23:36 +0300
Ready: True
Restart Count: 0
Readiness: exec [test -f /tmp/ready] delay=10s timeout=5s period=20s #success=1 #failure=1
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cm529 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-cm529:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-cm529
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 49m default-scheduler Successfully assigned couchbase/cb-example-0000 to k3s-agent-couchbase
Normal Pulled 49m kubelet Container image "couchbase/server:6.6.0" already present on machine
Normal Created 49m kubelet Created container couchbase-server
Normal Started 49m kubelet Started container couchbase-server
Warning Unhealthy 49m kubelet Readiness probe failed:

Props for putting the logs in there so they collapse! I love a well-formatted post.

The SDK is designed to keep retrying an operation until the timeout. What I see here is that, within the 2.5 s timeout, it couldn’t complete the connection and bootstrap. The other thing that looks slightly odd is the number of connections to port 8091.

The way you’ve done the exposed services looks correct. Which networking topology do you intend? Also, note that the kind of Kubernetes you’re running on may factor in, as you won’t always have a LoadBalancer available.

Two other bits of telemetry you may want to check out: the operator logs themselves, and, if you want to validate the networking setup, SDK-Doctor, which does active probing and checking and is sometimes the easier route.
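SDK-Doctor is a standalone CLI; a run against this setup would look roughly like the following (node address and bucket taken from the post above; the exact binary name and flags vary by release, so check its --help):

```shell
# Hedged sketch: probe the cluster the way the SDK would, against bucket "default"
./sdk-doctor diagnose 'couchbase://172.25.18.66/default' -u Administrator -p password
```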


Initially, my post was marked as spam and hidden, so I had another 10-20 hours of trials and got some things to work.
I’ve created a new post where I narrow down my issue: Is it possible to access CB cluster on K8s with NodePort Service?
Also, because of those 10-20 hours, I can no longer edit or delete this post, so I’ll just sum up briefly where I am:

  1. CB Cluster is up and running.
  2. Admin UI is available and stable when accessed via NodePort or LB port (port-forward is still struggling to load the home page).
  3. Intra-k8s application (java SDK) can access cluster, read and write documents.

That is sufficient for ‘prod’ use cases, I guess.
Still, I’d like to have access to the database from a development machine (external to the k8s cluster), for debugging and such.

I tried to expose the CB cluster with NodePort (Generic) networking, but got roughly the same logs and issues.

So now I’d like to know whether it’s even possible to use NodePort networking for this case, or whether I should configure Public Networking with kubernetes-sigs/external-dns plus some networking magic, and if so, how exactly to do that?

UPD

I’ve just run SDK-Doctor. In short, it did connect successfully, but with warnings.
Let’s please move to the newer post; I’ll share the full SDK-Doctor output there.
(link to the newer post) Is it possible to access CB cluster on K8s with NodePort Service?