There is a problem when I use Java SDK 2.1.x.


My environment is:
CouchbaseEnvironment env = DefaultCouchbaseEnvironment

I am using this method to write:

When my cluster failover is complete, I found that this write method becomes very slow, even 1 op/sec! At the same time it throws a TimeoutException.

SDK 2.1.2+2.1.3: Connection refused without accessing bucket

Hi @hubo3085632,

can you describe the following a little more please so we can help you better:

  • what workload are you running
  • what actions are you performing
  • what are your expectations/actual results

Maybe you can also show us some code and logs?

Thank you, I am trying to handle the exception if I use Couchbase in my production. I stop one node while I am writing, and 30s later the insert speed drops to almost 1 op/sec.
Here is the code:
try {
    JsonDocument isdone = bucket.insert(Corp, ReplicateTo.ONE);
} catch (Exception e) {
    if (e instanceof RequestCancelledException) {
        System.out.println("writing failed!");
    }
}

Here is the error log:
WARNING: [/][ViewEndpoint]: Could not connect to endpoint, retrying with delay 4096 MILLISECONDS: Connection refused: /
at Method)

May 06, 2015 3:29:12 PM com.couchbase.client.core.node.CouchbaseNode$1 call
INFO: Disconnected from Node
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(
at java.util.ArrayList.get(
at a.insertdata(
at a.main(
java.lang.RuntimeException: java.util.concurrent.TimeoutException
at a.insertdata(
at a.main(
Caused by: java.util.concurrent.TimeoutException
… 5 more


I think there is a disconnect between your assumption on how couchbase works and how it actually works :smile:

When you shut down a node, you are not able to write data to the partitions on that node until you fail it over, which means that operations against those partitions are going to time out if they can't be retried in time.
Now, if you failover and bring your cluster back into a condition where all partitions are active, the client should recover in a timely manner and your operations to all partitions will start working again. If this is not the case, then it's maybe a client bug - but I can't say for sure from the code you've shown here.
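To make the "partitions" idea concrete: each document ID hashes deterministically to exactly one partition (vbucket), and each node owns a subset of those partitions, which is why only some writes time out when one node goes down. The sketch below is my own simplified illustration of that mapping (the SDK's actual algorithm differs); `PartitionDemo`, `partitionFor`, and the plain modulo over CRC32 are illustrative choices, not the real client code:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class PartitionDemo {

    // Hypothetical mapping of a document ID to one of numPartitions partitions
    // via a CRC32 hash. Real Couchbase clients use a similar deterministic
    // hash-to-vbucket scheme, but not this exact formula.
    static int partitionFor(String docId, int numPartitions) {
        CRC32 crc = new CRC32();
        crc.update(docId.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % numPartitions);
    }

    public static void main(String[] args) {
        // The same ID always lands on the same partition; different IDs
        // spread across all 1024 partitions (and therefore across nodes).
        for (String id : new String[] {"user::1", "user::2", "order::42"}) {
            System.out.println(id + " -> partition " + partitionFor(id, 1024));
        }
    }
}
```

If the partition an ID maps to lives on the downed node, that operation times out; IDs mapping to healthy nodes keep working, which matches the mixed success/timeout behavior described above.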

Can you please provide more information on the actions you are performing and the expected/actual behaviour of the SDK?


You mean if I take a node down, the whole cluster will refuse my write operations? Can't the other nodes continue writing?


No, only this specific node is down of course. So all document IDs which map to this node won't be writable until you perform a failover, which promotes the replicas to active. All other nodes are fine. But the point is, since I guess you are using blocking operations: if you write or read 5 docs and 2 of those 5 are on the downed node, you will get 2 timeouts in the 5-second range and the other 3 will come back fine.

We also provide fail-fast capabilities that in this case will fail early with a different exception, but then it's entirely up to you to perform retry handling as you see fit. Does that make sense?
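The retry handling mentioned above can be sketched generically. This is a minimal, Couchbase-agnostic sketch: `RetryDemo`, `withRetry`, and the attempt/backoff numbers are my own illustrative choices, not SDK API; with the real SDK the catch block would target `RequestCancelledException` or a `RuntimeException` wrapping a `TimeoutException`:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

public class RetryDemo {

    // Retry a blocking operation up to maxAttempts times with linear backoff.
    static <T> T withRetry(Callable<T> op, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                // With the Couchbase SDK you would inspect e here and only
                // retry retriable failures (cancellations, timeouts).
                last = e;
                Thread.sleep(backoffMillis * attempt);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulated flaky operation: times out twice, then succeeds.
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) {
                throw new RuntimeException(new TimeoutException("simulated"));
            }
            return "OK";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The same wrapper would go around a blocking `bucket.insert(...)` call; whether a given exception should be retried (and how often) is an application decision, which is the point the reply above is making.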


Oh, I see. But why does it write slowly after failover if I use insert with ReplicateTo.ONE?
I found a strange issue: if I run my client on a Windows server, it always throws timeout exceptions; if I run it on a Linux server, it performs better and I do not need to modify the connectTimeout and kvTimeout.


When I use the command service couchbase-server stop, will it first flush the in-memory data to disk?


@hubo3085632 it is going slowly with ReplicateTo.ONE after failover because once the failover happens, there is no replica available for those partitions (given that you only have one replica configured on the bucket).
You need to run a rebalance to create the replicas again, or use more replicas in the first place so that replicas are still available after a failover. The original insert will succeed anyway, but we are not able to fulfil the replication constraint (which you are asking for through ReplicateTo.ONE).

Also, disk persistence and replication are asynchronous; the service will be stopped regardless of what has reached disk. Make sure to use PersistTo.MASTER if you want to be sure data gets written (you can be sure when you get a success back).


I set the bucket to 2 replicas and use insert(ReplicateTo.ONE); after failover it is still slow and times out.
The exception is:
java.lang.RuntimeException: java.util.concurrent.TimeoutException
at a.insertdata(
at a.main(
Caused by: java.util.concurrent.TimeoutException



Do you think it's possible for you to share:

  • Code to reproduce
  • Your cluster setup
  • The steps to reproduce
  • Your expectations?

So we can try to reproduce your issue as closely as possible.


I want to ask a question:

When I deal with the node failure exception in my code, I use .toBlocking() to retrieve the data without a timeout, but how can I catch the exception to retry the CRUD operation?