NoClassDefFoundError for /netty/handler/timeout/IdleStateEvent


#1

Hi all, I’m having an issue with my current Couchbase Java SDK setup, and I was wondering whether this is a bug or something I’m doing wrong.

I have set up a simple static holder for my Cluster and Bucket instances so I can reuse both of them in all my REST services, following the documented best practices. This works great and all operations against the Couchbase bucket are handled properly, but something is throwing an exception exactly every 30 seconds:

WARN  [com.couchbase.client.deps.io.netty.channel.DefaultChannelPipeline] (cb-io-1-1) An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: java.lang.NoClassDefFoundError: com/couchbase/client/deps/io/netty/handler/timeout/IdleStateEvent
at com.couchbase.client.deps.io.netty.handler.timeout.IdleStateHandler$AllIdleTimeoutTask.run(IdleStateHandler.java:431) [core-io-1.1.3.jar:1.1.3]
at com.couchbase.client.deps.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38) [core-io-1.1.3.jar:1.1.3]
at com.couchbase.client.deps.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:123) [core-io-1.1.3.jar:1.1.3]
at com.couchbase.client.deps.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:380) [core-io-1.1.3.jar:1.1.3]
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) [core-io-1.1.3.jar:1.1.3]
at com.couchbase.client.deps.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [core-io-1.1.3.jar:1.1.3]
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [core-io-1.1.3.jar:1.1.3]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_40]

This happens even if I restart the application server and simply wait without using my REST services at all.
I was wondering if this is because I don’t close the bucket after each operation, but since it happens even if I don’t perform any CRUD operations, I believe it could be a misconfiguration or a bug.

I’m using WildFly 8.2.0.Final as the application server, Couchbase Server 3.0.3-1716 Enterprise Edition (build-1716), and Couchbase Java Client 2.1.3.

My static cluster and bucket holder class:

public class Persistence {

    // Configuration
    public static String clusterAddress = "192.168.0.55";
    public static String bucketName = "default";
    public static String bucketPassword = null;

    public static Cluster cluster;
    public static Bucket bucket;

    // Lazily initialize the shared Cluster and Bucket. The method is
    // synchronized so concurrent REST requests cannot race and open
    // duplicate connections.
    public static synchronized Bucket getBucketInstance() {
        if (cluster == null) {
            cluster = CouchbaseCluster.create(clusterAddress);
        }
        if (bucket == null) {
            bucket = cluster.openBucket(bucketName, bucketPassword);
        }
        return bucket;
    }

}

Any ideas of how to solve this issue or what could be going wrong?


#2

The SDK sends a heartbeat every 30 seconds to prevent firewalls from closing idle connections, so it could be related to that.

I think @daschl will be the best person to answer this, but in the meantime, could you let us know a little more about your systems, i.e. the server operating system and the version of Couchbase you’re using?


#3

Thanks @matthew for the quick reply; as I said in the opening post, I’m on Couchbase Server 3.0.3-1716 Enterprise Edition.

In case it helps, here are the full specs:

I’m currently running my dev WildFly server locally on Windows 8.1 with JDK/JRE 1.8.0_40 x64, and the Couchbase server is running remotely on Ubuntu Server 14.04 LTS.

I access the server through a VPN with under 5 ms of latency, so I don’t believe the VPN is the issue. I have accessed MySQL servers over the exact same setup with the Java MySQL connector without a problem, and although Couchbase is not the same, that removes most doubts about the VPN causing any issue.

I also use NetBeans 8.0.2 as my default IDE; probably not relevant, but you never know. :smile:

Since the error is a NoClassDefFoundError, I’m inclined to believe it is an initialization error or, as the name states, a classpath issue, possibly caused by WildFly also bundling Netty. That should be safe, though, since the SDK relocates its Netty under the com.couchbase.client.deps namespace.

Hope this helps.


#4

Still trying to solve this issue, but in the meantime, can the netty.channel.DefaultChannelPipeline logger be silenced somehow?
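If the goal is just to mute the warning while investigating, one option is to raise the threshold of that specific logger. Note that WildFly routes logging through its own subsystem, so adding a logger category in standalone.xml would be the more idiomatic fix there; purely as a sketch, the java.util.logging equivalent (assuming the logger name matches the package in the stack trace above) looks like:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceNettyWarning {
    public static void main(String[] args) {
        // Raise the threshold for the relocated Netty pipeline logger so the
        // periodic exceptionCaught() WARN entries are suppressed. The logger
        // name is taken from the package in the stack trace above.
        Logger nettyLogger = Logger.getLogger(
                "com.couchbase.client.deps.io.netty.channel.DefaultChannelPipeline");
        nettyLogger.setLevel(Level.SEVERE);

        // WARN-level records are now filtered out; errors still get through.
        System.out.println("WARNING loggable: " + nettyLogger.isLoggable(Level.WARNING));
        System.out.println("SEVERE loggable: " + nettyLogger.isLoggable(Level.SEVERE));
    }
}
```

This only hides the symptom, of course; the underlying NoClassDefFoundError would still occur.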


#5

You could try to silence it and check whether it is indeed the keepalive causing this problem (which looks very odd to me, by the way) by deactivating keep-alive:

  1. Create a static CouchbaseEnvironment
  2. Set keepAliveInterval to 0
  3. Use the environment in the Cluster creation

In your example:

public class Persistence {

    // Configuration
    public static String clusterAddress = "192.168.0.55";
    public static String bucketName = "default";
    public static String bucketPassword = null;

    // Shared environment with keep-alive disabled
    public static final CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
            .keepAliveInterval(0)
            .build();

    public static Cluster cluster;
    public static Bucket bucket;

    public static synchronized Bucket getBucketInstance() {
        if (cluster == null) {
            cluster = CouchbaseCluster.create(env, clusterAddress);
        }
        if (bucket == null) {
            bucket = cluster.openBucket(bucketName, bucketPassword);
        }
        return bucket;
    }
}

This should deactivate keep-alives, and you can then confirm that the messages have gone away.


#6

I’ve set up the environment as described and I can confirm the exception message goes away, so we can be pretty sure the issue is with the keepAlive heartbeat; I still need to figure out why, though.

Now I’m wondering whether disabling keepAlive in the meantime could have a negative effect on Couchbase somehow.


#7

@Lethe if you have constant traffic, the keepalive isn’t needed. But if you have quiet periods, it helps proactively detect cluster topology changes and dead sockets.
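Given that trade-off, a middle ground instead of disabling keep-alives entirely could be to lengthen the interval so the heartbeat still detects dead sockets but fires less often. A sketch against the same 2.1.x builder used above (the 130-second value is an arbitrary example, not a recommendation):

```java
// Keep-alive stays enabled, but fires every 130 seconds instead of the
// default 30, reducing how often the problematic event is triggered while
// still detecting dead sockets during quiet periods.
CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
        .keepAliveInterval(130000) // milliseconds
        .build();
Cluster cluster = CouchbaseCluster.create(env, "192.168.0.55");
```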