Possible thread leak when Couchbase Server is offline

I just noticed on my dev machine that the number of Couchbase threads keeps increasing as long as Couchbase Server is not started.
The threads are named like cb-computations, cb-io, and cb-core.

Java SDK Version: 2.4.0
Couchbase Server Version: 4.6 DP

It goes along with the following exception stack trace, which appears multiple times every 30 seconds:

Jan 13, 2017 8:06:24 AM com.couchbase.client.core.endpoint.AbstractEndpoint$2 onSuccess
WARNING: [null][KeyValueEndpoint]: Could not connect to remote socket.
 2017-01-13 08:06:24,352 WARN  - [null][KeyValueEndpoint]: Could not connect to remote socket. - [cb-io-24-2] c.c.c.c.l.JdkLogger
Jan 13, 2017 8:06:24 AM com.couchbase.client.core.endpoint.AbstractEndpoint$2 onSuccess
WARNING: [null][KeyValueEndpoint]: Could not connect to endpoint, retrying with delay 30 SECONDS: 
com.couchbase.client.deps.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:11210
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at com.couchbase.client.deps.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:257)
	at com.couchbase.client.deps.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:291)
	at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:640)
	at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
	at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
	at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
	at com.couchbase.client.deps.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
	at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
	at java.lang.Thread.run(Thread.java:745)

Here is how the connection is initiated:

env = DefaultCouchbaseEnvironment
    .builder()
    .queryEndpoints(5)
    .autoreleaseAfter(50000)
    .queryTimeout(20000)
    .retryStrategy(FailFastRetryStrategy.INSTANCE)
    .reconnectDelay(Delay.fixed(30, TimeUnit.SECONDS))
    .build();

This seems to start the background threads, which are then no longer under my control.

Observation: once I start Couchbase Server, the number of threads stops increasing, but it does not decrease either. It just stays at whatever count was present when the server came up.

Any ideas how I can stop it from creating new threads before it runs into OutOfMemoryError?

Can you show me more code? Those pools are initialized on the environment and then reused - are you sure that you only ever create one environment and reuse it?

What does your retry logic on a failed bucket open look like?
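To illustrate the failure mode being hinted at: if every reconnect attempt builds a fresh environment, each build spawns its own thread pools, whereas a lazily initialized shared instance reuses one. A minimal sketch in plain Java, using a hypothetical FakeEnvironment stand-in for DefaultCouchbaseEnvironment (no Couchbase classes involved):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for DefaultCouchbaseEnvironment: each instance
// would own its own cb-io / cb-computations / cb-core thread pools.
class FakeEnvironment {
    static final AtomicInteger INSTANCES = new AtomicInteger();
    FakeEnvironment() { INSTANCES.incrementAndGet(); }
}

public class EnvironmentReuse {
    // One shared environment for the whole application.
    private static volatile FakeEnvironment shared;

    // Double-checked lazy initialization: the environment is built at
    // most once, no matter how many connection attempts request it.
    static FakeEnvironment environment() {
        if (shared == null) {
            synchronized (EnvironmentReuse.class) {
                if (shared == null) {
                    shared = new FakeEnvironment();
                }
            }
        }
        return shared;
    }

    public static void main(String[] args) {
        // Simulate five failed connection attempts, each asking for an environment.
        for (int i = 0; i < 5; i++) {
            environment();
        }
        // Only one environment (and hence one set of thread pools) was created.
        System.out.println("environments created: " + FakeEnvironment.INSTANCES.get());
    }
}
```

Creating the environment once per retry loop iteration instead would print 5 here, which is the pattern that leaks thread pools.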


Sorry for the late reply.
It was indeed a coding mistake, and your answer guided me to it. I didn't call shutdown() on the environment or disconnect the cluster when the connection error happened, so the environment was created multiple times.

Now I am doing the following, and the thread pools are no longer growing.

try {
    env = DefaultCouchbaseEnvironment
        .builder()
        .queryEndpoints(queryEndpoints)
        .autoreleaseAfter(50000)
        .queryTimeout(20000)
        .retryStrategy(FailFastRetryStrategy.INSTANCE)
        .reconnectDelay(Delay.fixed(30, TimeUnit.SECONDS))
        .build();

    cluster = CouchbaseCluster.create(env, hosts.split(","));

    bucket = cluster.openBucket(bucketname, password);

} catch (Exception e) {
    LOGGER.error("Error connecting to Couchbase: " + e.getMessage(), e);
    // Disconnect the cluster before shutting down the environment it uses,
    // and guard against the failure happening before either was created.
    if (cluster != null) {
        cluster.disconnect();
    }
    if (env != null) {
        env.shutdown();
    }
    throw e;
}

Thank you
Christoph
