at com.couchbase.client.deps.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99)
at com.couchbase.client.deps.io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at com.couchbase.client.deps.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:182)
at com.couchbase.client.deps.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at com.couchbase.client.deps.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at com.couchbase.client.deps.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at com.couchbase.client.deps.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
select distinct ru.horseName, ru.ownerName from default ru join default ra on keys ru.raceId join default tr on keys ra.raceCardId where 1=1
Sometimes the query works fine, sometimes it doesn't.
Caused by: java.lang.IllegalStateException: Error parsing query response (in TRANSITION) at {
"requestID":
at com.couchbase.client.core.endpoint.query.QueryHandler.transitionToNextToken(QueryHandler.java:375)
at com.couchbase.client.core.endpoint.query.QueryHandler.parseQueryResponse(QueryHandler.java:320)
at com.couchbase.client.core.endpoint.query.QueryHandler.decodeResponse(QueryHandler.java:190)
at com.couchbase.client.core.endpoint.query.QueryHandler.decodeResponse(QueryHandler.java:62)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.decode(AbstractGenericHandler.java:161)
[Java]
Thanks @gadipati.
The fact that the same query sometimes works and sometimes doesn't is weird.
From the look of the stack trace, the parser tries to find a response section (like signature/errors/results) right at the beginning of the response, which should have been parsed separately beforehand to extract the requestID part…
Can you try to reproduce it with full logging enabled and post the full log on pastebin, gist.github.com or a similar site? To activate full logging, for example if you have/add log4j in your classpath, you can use the following log4j.properties configuration (the TRACE being the important part):
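A minimal log4j.properties sketch along these lines should do, assuming log4j 1.x on the classpath (the appender and layout are just an example; the TRACE level on com.couchbase.client is the important part):

```properties
# Console appender with a simple timestamped pattern (illustrative)
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss.SSS} %-5p %c - %m%n

# The important part: TRACE on the SDK package to log every network message
log4j.logger.com.couchbase.client=TRACE
```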
This will give us every network message and let me see how the response was transmitted (e.g. was it split into chunks, what the chunks look like, …).
Also, can you tell us more about your setup? Does this happen in a local test with just one node, for example, or on a larger cluster? Does it show inconsistency in the same client (e.g. querying in a loop), in multiple clients, or in multiple executions? When you run the same query in the cbq command-line client, what do you obtain from the N1QL server?
I have been testing it locally (Windows) and remotely (Linux) using the JBoss application server. My Couchbase server is a single node with a single bucket, without any cluster. I get this error in 90% of my testing. The query works fine in the cbq client.
Sorry for taking so long to follow up on this. It seems that the current (as of 2.1.0-dp2) state of the parser is indeed still vulnerable to small network chunking, especially at the beginning of the response (before the results start).
I'm not sure whether this chunking is done by the server itself or by the network stack (@geraldss, any thoughts on that?).
Going through your logs, I was able to improve it further and obtain a more resilient version. Unfortunately, since the logs were uploaded on the day of the 2.1.0 release, the fix couldn't be part of that release.
Rama, are you in a position to grab the sources and build from the master branch, to validate that the latest parser fixes your problem?
Hi @simonbasle, in this case, the chunking you are seeing is probably due to the network stack.
The N1QL server does not chunk - it buffers up to the first 16K bytes of the response (the size of this buffering can be controlled by a server-level argument called keep-alive-length), and then starts streaming the response if it is larger than that (or writes it out in one go if it is smaller).
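Even so, the client-side parser still has to cope with arbitrary re-splitting: the network stack is free to deliver a single 16K write as many small TCP segments, so a token like "requestID" can arrive cut across reads, and the parser must buffer across them rather than assume whole tokens. A toy sketch of that situation (class name and the 3-byte chunk size are made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {

    // Split one server-side write into small pieces, the way the
    // network stack may deliver it to the client.
    static List<byte[]> chunk(byte[] data, int size) {
        List<byte[]> chunks = new ArrayList<>();
        for (int i = 0; i < data.length; i += size) {
            int len = Math.min(size, data.length - i);
            byte[] c = new byte[len];
            System.arraycopy(data, i, c, 0, len);
            chunks.add(c);
        }
        return chunks;
    }

    public static void main(String[] args) {
        String response = "{\"requestID\":\"abc\",\"results\":[{\"horseName\":\"x\"}]}";
        byte[] bytes = response.getBytes(StandardCharsets.UTF_8);

        // A 3-byte chunk size cuts the "requestID" token across chunk
        // boundaries; a parser reading chunk-by-chunk must accumulate
        // bytes across reads before it can recognize the token.
        StringBuilder reassembled = new StringBuilder();
        for (byte[] c : chunk(bytes, 3)) {
            reassembled.append(new String(c, StandardCharsets.UTF_8));
        }

        System.out.println(reassembled.toString().equals(response)); // prints "true"
    }
}
```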