Receiving a TimeoutException when trying to open two buckets

This is my code:

List<BucketSettings> buckets = this.clusterMgr.getBuckets();
for (BucketSettings s : buckets)
{
    Bucket b = cluster.openBucket(s.name(), "123", PENDING_TIME, TimeUnit.MILLISECONDS);
    bucketMap.put(s.name(), b);
}

Only the first bucket opens successfully; then a TimeoutException is thrown:

Oct 29, 2014 10:02:01 PM org.apache.catalina.core.StandardContext listenerStart
SEVERE: Exception sending context initialized event to listener instance of class com.master.listener.GameContextListener
java.lang.RuntimeException: java.util.concurrent.TimeoutException
	at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:481)
	at rx.observables.BlockingObservable.single(BlockingObservable.java:348)
	at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:107)
	at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:93)
	at com.kuangchao.framework.orm.store.CouchBaseDataSource.getBucket(CouchBaseDataSource.java:156)
	at com.kuangchao.framework.orm.store.CouchBaseDataSource.get(CouchBaseDataSource.java:245)
	at com.kuangchao.framework.orm.session.NosqlDataSession.getModel(NosqlDataSession.java:186)
	at com.master.listener.GameContextListener.checkRootConfig(GameContextListener.java:124)
	at com.master.listener.GameContextListener.contextInitialized(GameContextListener.java:81)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4797)
	at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5291)
	at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:633)
	at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:977)
	at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1655)
	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.util.concurrent.FutureTask.run(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)
Caused by: java.util.concurrent.TimeoutException
	at rx.internal.operators.OperatorTimeoutBase$TimeoutSubscriber.onTimeout(OperatorTimeoutBase.java:169)
	at rx.internal.operators.OperatorTimeout$1$1.call(OperatorTimeout.java:42)
	at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:43)
	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.util.concurrent.FutureTask.run(Unknown Source)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown Source)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
	... 3 more
Oct 29, 2014 10:02:01 PM org.apache.catalina.core.StandardContext startInternal
SEVERE: Error listenerStart
Oct 29, 2014 10:02:01 PM org.apache.catalina.core.StandardContext startInternal
SEVERE: Context [/master] startup failed due to previous errors
Oct 29, 2014 10:02:01 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/master] appears to have started a thread named [RxComputationThreadPool-1] but has failed to stop it. This is very likely to create a memory leak.

The Java SDK version is 2.0.1, Couchbase Server is 3.0.1, and the JDK is 1.7. It happens on both Windows 8.1 and CentOS 6.4.

Hi, I think we need more info to track down your issue:

  • Does it work when you open only one bucket?
  • To what did you set the “pending time”?
  • Can you please share the full log (ideally set the logging level to TRACE) so we can see what’s going on?

Unfortunately, the info you’ve provided only shows parts of the stack trace but not the surrounding context.

Hello Daschl,

I got this error too.

Opening only one bucket works fine.

My connection:

    List<String> ips = new ArrayList<>();
    ips.add(IPADDRESS1);
    
    cluster = CouchbaseCluster.create(ips);        
    serialBucket1 = cluster.openBucket();
    bucket1 = serialBucket1.async();

    serialBucket2 = cluster.openBucket(BUCKET2);
    bucket2 = serialBucket2.async();

My logs (Level.ALL)

http://pastebin.com/FKymTCYd

UPDATE:
If I connect to 3 nodes, then it works fine.

Thank you very much!
Huhh

@huhh thanks for reporting it. So you are saying it fails against a 1-node cluster (like localhost) but works against 3 nodes? Is there anything else different between the two environments besides the node count? Any info can help me track it down.

Hello Daschl,

Sorry, it seems I was not precise.
The cluster has 3 nodes on a VPN network.

Only-one-bucket case:
- I can connect via one IP/node and via 3 IPs/nodes too.

2-buckets case:
- When I connected to just 1 node, it threw this error.
- When I connected to 3 nodes, it worked fine.

The error occurred while the system was idle; the nodes were not in use.

Thank you very much!
Huhh

First, we do not recommend running client and server across a VPN, since this normally adds very significant latency, and the default timeouts are not suited for that.

Can you try increasing the openBucket timeout and see if it helps? Also, can you please post a TRACE log somewhere with properly configured absolute timestamps, so we can see how much time is actually spent on the wire?

Thanks!

Hello Daschl!

It timed out at exactly 3 minutes, which matches the configured value:

serialBucket2 = cluster.openBucket(BUCKET2, 3, TimeUnit.MINUTES);

http://pastebin.com/sPSGbwjy

Thank you very much!
Huhh

@huhh hm it looks like the connect process was good but the observable never completed.

Are you running the client on Windows as well?

It would be awesome if you could profile or take a thread dump after the logs say it connected to all three nodes. Maybe there is a deadlock somewhere that I did not anticipate.

http://www.couchbase.com/issues/browse/JCBC-642

@giabao thanks for reporting. On which platform are you running it?

Hello Daschl,

Here is the thread dump taken while it waits for the second bucket:
http://pastebin.com/JvF2FJ3n

It sometimes works, but mostly not. I am using Linux.

Thank you very much!
Huhh

@huhh yes, thanks. Sadly there is nothing suspicious in there. In my debug sessions it also seems to work whenever I try to debug it, so it's kind of nasty. I'll see if I can fix it before we do a 2.0.2; please follow the linked JCBC ticket for more info.

Looks like I've found and fixed the issue. It will be part of the 2.0.2 release, which is planned for the end of this week. If you are curious, you can grab it directly from master and see if it works for you.

Hey @giabao,
In case you missed it, please see the last comments on the issue you raised http://www.couchbase.com/issues/browse/JVMCBC-79 to see how to fix your code.
Thanks
Simon

I get this error as well (the first bucket opens OK; after that it fails). The only solution with the 2.0+ API is to allow multiple clusters, but that uses LOTS of memory, so I had to go back to the 1.2.2 API I was using before. Using async does not work either: the subsequent get calls error out on a null bucket because the bucket is not created fast enough, and these are calls where the client is waiting for a response within a limited time frame. I am using the 2.0.3 API.

@racarlson can you share your code?

Here is some test code; you can remove the insert/update/delete methods, since connecting to the bucket is enough for the problem to appear.
(Some code was removed to fit in this comment box.)

import java.io.UnsupportedEncodingException;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Locale;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.couchbase.client.deps.io.netty.buffer.ByteBuf;
import com.couchbase.client.deps.io.netty.buffer.Unpooled;
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.BinaryDocument;
import com.couchbase.client.java.document.RawJsonDocument;
import com.couchbase.client.java.document.StringDocument;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;
import com.couchbase.client.java.view.DefaultView;
import com.couchbase.client.java.view.DesignDocument;
import com.google.gson.JsonElement;
import com.google.gson.JsonParser;
import com.oss.common.ExceptionTools;
import com.oss.common.StringTools;
import com.oss.error.MyCustomException; // just replace with a default exception to test this code without the class

public class CacheManager
{
    private Bucket bucket = null;
    private Cluster cluster = null;
    private List<String> hosts = new LinkedList<String>(); // declaration restored (cut to fit the comment box)
    private String bucketPassword = "";                    // declaration restored (cut to fit the comment box)
    String bucketName = "MYBUCKET";
    int defaultTimeToLive = 0;
    int retries = 10;
    int connectRetries = 8;
    int connectTimeout = 1000 * 15; // ms
    int bucketOpenTimeout = 60 * 5; // seconds
    int connectTryCount = 0;

    private synchronized void connect() throws Exception
    {
        hosts.add("127.0.0.1");
        CouchbaseEnvironment environment = DefaultCouchbaseEnvironment.builder()
                .connectTimeout(connectTimeout)
                .disconnectTimeout(connectTimeout)
                .build();
        cluster = CouchbaseCluster.create(environment, this.hosts);

        if (StringTools.doesStringHaveData(bucketPassword))
        {
            bucket = cluster.openBucket(bucketName, bucketPassword, bucketOpenTimeout, TimeUnit.SECONDS);
        }
        else
        {
            bucket = cluster.openBucket(bucketName, bucketOpenTimeout, TimeUnit.SECONDS);
        }
        // add code to open a second bucket here to get the error(s)
        // String bucketName2 = "SECOND_BUCKET_NAME";
        // bucket = cluster.openBucket(bucketName2, bucketOpenTimeout, TimeUnit.SECONDS);
    }

    private CacheManager() throws Exception
    {
    }

    public void shutdown() throws Exception
    {
        if (cluster != null)
        {
            if (bucket != null)
            {
                try { bucket.close(); } catch (Exception e) {}
            }
            try { cluster.disconnect(); } catch (Exception e) {}
        }
    }

    public String getString(String key)
    {
        String rtn = "";
        RawJsonDocument doc = bucket.get(key, RawJsonDocument.class);
        if (doc != null)
        {
            rtn = doc.content(); // reuse the document we already fetched instead of a second get
        }
        return rtn;
    }
}
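The `retries`/`connectRetries` fields above belong to retry logic that was cut for length. A generic sketch of that pattern (illustrative only; the `Callable` stands in for the `cluster.openBucket(...)` call that intermittently times out):

```java
import java.util.concurrent.Callable;

// Illustrative workaround sketch: retry a blocking operation a few times
// before giving up, pausing briefly between attempts.
class RetryingOpener
{
    // attempts must be >= 1; rethrows the last failure if all attempts fail.
    static <T> T openWithRetries(Callable<T> open, int attempts) throws Exception
    {
        Exception last = null;
        for (int i = 1; i <= attempts; i++)
        {
            try
            {
                return open.call();
            }
            catch (Exception e)
            {
                last = e;                          // remember the failure
                if (i < attempts) Thread.sleep(100); // brief pause before retrying
            }
        }
        throw last;
    }
}
```

This only papers over the symptom; the underlying open still needs the SDK fix, but it kept smaller workloads running for me.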

I can cache the bucket along with the cluster in an object, check whether that cluster+bucket pair is already open, and reuse the object. With that logic I can handle a higher number of connecting threads without using as much memory. However, is there any concern with using one bucket from multiple threads? I am working on test code for this now (impossible to upload here as comments).
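The caching idea can be sketched generically. Here `V` stands in for `Bucket` (which the 2.x documentation describes as thread-safe and intended to be shared across threads), and `Opener` stands in for the actual `cluster.openBucket(...)` call; all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: keep one shared instance per bucket name so every
// thread reuses it instead of opening its own.
class SharedInstanceCache<V>
{
    interface Opener<V> { V open(String name); }

    private final Map<String, V> open = new HashMap<String, V>();
    private final Opener<V> opener;

    SharedInstanceCache(Opener<V> opener) { this.opener = opener; }

    // Synchronized so two threads never open the same bucket twice.
    synchronized V get(String name)
    {
        V v = open.get(name);
        if (v == null)
        {
            v = opener.open(name); // first caller pays the open cost
            open.put(name, v);     // everyone after that reuses the instance
        }
        return v;
    }
}
```

The coarse `synchronized` keeps the sketch Java 7 compatible; it also means a slow open blocks lookups of other buckets, which may or may not matter under your load.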

Edit: unfortunately, reusing the cluster AND bucket does not work either. Under load (even with lots of memory free) it stops working with the error below, even though the class is on the classpath, since it works fine with smaller loads.

Exception in thread "Thread-85" java.lang.NoClassDefFoundError: Could not initialize class rx.internal.util.RxRingBuffer
at rx.internal.operators.OperatorMerge$MergeSubscriber.onStart(OperatorMerge.java:139)
at rx.Observable$1.call(Observable.java:144)

Which RxJava version do you have on your classpath?

RxJava Version 1.0.4