@dony.thomas Can you please tell us the version of the server and what exactly you are doing? It would be great if you could do a cbcollect too, so that it is easier to figure out what is going on.
Hi! I have the same issue. It occurs every time the application using the couchbase-client is shut down in my functional tests.
java-client 2.7.2
core-io 1.7.2
In my functional tests I use CouchbaseMock 1.5.22
I am also facing the same issue.
I am trying to close the bucket and get this WARNING.
It happens after closing the bucket and before disconnecting the cluster.
2019-05-29 14:08:14.054 INFO 61578 --- [TaskScheduler-1] c.c.c.core.config.ConfigurationProvider : Closed bucket fileProcessingDB
Closing Previous bucket: true
2019-05-29 14:08:14.059 WARN 61578 --- [ cb-core-3-2] com.couchbase.client.core.CouchbaseCore : Exception while Handling Request Events RequestEvent{request=null}
java.lang.ArithmeticException: / by zero
at com.couchbase.client.core.node.locate.ConfigLocator.locateAndDispatch(ConfigLocator.java:89) ~[core-io-1.7.4.jar:na]
at com.couchbase.client.core.RequestHandler.dispatchRequest(RequestHandler.java:259) ~[core-io-1.7.4.jar:na]
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:208) ~[core-io-1.7.4.jar:na]
at com.couchbase.client.core.RequestHandler.onEvent(RequestHandler.java:79) ~[core-io-1.7.4.jar:na]
at com.couchbase.client.deps.com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:150) ~[core-io-1.7.4.jar:na]
at com.couchbase.client.deps.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [core-io-1.7.4.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
Disconnecting Previous cluster: true
I asked a couple of others to weigh in, but I’d recommend updating to 2.7.6 in any case, since it’s fully API compatible and a number of bugs have been fixed since the 2.7.0 release.
@venugopalsmartboy thanks for raising it! We’ll look into it, but I think you can ignore it until we fix it. During shutdown the node list sometimes drops to 0, and the code just needs to check for that and bail out instead of dividing by zero.
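To illustrate the failure mode (this is only a sketch, not the actual ConfigLocator source): a round-robin dispatcher that picks a node with "counter % nodes.size()" throws the "/ by zero" ArithmeticException once the node list is empty during shutdown. The hypothetical guard below shows the kind of check being described.

import java.util.List;

class RoundRobinDispatchSketch {
    private long counter = 0;

    // Returns the next node to dispatch to, or null if none are left.
    String pickNode(List<String> nodes) {
        if (nodes.isEmpty()) {
            // Bail out instead of computing counter % 0 during shutdown.
            return null;
        }
        int index = (int) (counter++ % nodes.size());
        return nodes.get(index);
    }
}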
A potential workaround might also be to not close the bucket and then shut down the full cluster, but rather only shut down the cluster (which in turn will close the buckets).
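A minimal sketch of that shutdown order with the 2.x Java SDK (the bucket name "fileProcessingDB" is taken from the log above, and the host is a placeholder; adjust both to your setup):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;

public class ShutdownExample {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("127.0.0.1");
        Bucket bucket = cluster.openBucket("fileProcessingDB");

        // ... work with the bucket ...

        // Instead of calling bucket.close() and then cluster.disconnect(),
        // call disconnect() alone; it closes all open buckets as part of
        // the cluster shutdown.
        cluster.disconnect();
    }
}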