JOIN or NEST on same bucket


#1

Hi,

I have only one bucket with different types of documents. How can I use NEST or JOIN on documents in the same bucket?

select * from default ru NEST default ra on keys ru.raceId
select * from default ru JOIN default ra on keys ru.raceId

{
  "error": {
    "caller": "standard:50",
    "cause": "Parse Error - syntax error",
    "code": 4100,
    "key": "parse_error",
    "message": "Parse Error"
  }
}


#2

Hi, are you using N1QL DP4, and are you using the command line shell, cbq? If not, please try both of those. The queries should work.

Thanks.


#3

Hi Geraldss,

Yes, I’ve been trying both of the above.

My documents are structured like DBMS tables, but I’m unable to join them in a parent-child relationship.

Thanks,
Gadipati


#4

Your error message does not match the latest cbq. Can you copy and paste the exact query and response?


#5

My apologies, I had been trying on DP3. It works fine with DP4.

Thanks for your great support.


#6

I have a horseRegNumber field in the ru data source only. Is it mandatory to prefix it with the alias name even when there is no such field in the other data sources?

select distinct tr.* from default ru join default ra on keys ru.raceId join default tr on keys ra.raceCardId where horseRegNumber='abc';

{
  "requestID": "ef8695e6-11fd-46e6-b87e-683d9309d476",
  "errors": [
    {
      "code": 5000,
      "msg": "Ambiguous reference to field horseRegNumber."
    }
  ],
  "status": "fatal",
  "metrics": {
    "elapsedTime": "2.0002ms",
    "executionTime": "2.0002ms",
    "resultCount": 0,
    "resultSize": 0,
    "errorCount": 1
  }
}


#7

Yes, the prefix is mandatory in this case, because there is no enforced schema. For example:

select distinct tr.* from default ru join default ra on keys ru.raceId join default tr on keys ra.raceCardId where ru.horseRegNumber='abc';


#8

Thanks,

I’m receiving the error below when using the above query with the Java API.

com.couchbase.client.deps.io.netty.handler.codec.DecoderException: java.lang.IllegalStateException: Error parsing query response (in INITIAL): {
  "requestID": "e635d194-f627-4d5a-982e-00277b1243ce",
  "signature": {
    "": ""
  },
  "results": [
    {
      "completeIndicator": "Y",
      "docType": "RaceCard",
      "localeType": "USA",
      "numberOfRaces": "9",
      "raceCardId": "FL20141201",
      "raceDate": "20141201",
      "trackId": "FL",
      "trackName": "FINGER LAKES"
    }
  ],
  "status": "success",
  "metrics": {
    "elapsedTime": "37.0021ms",
    "executionTime": "37.0021ms",
    "resultCount": 1,
    "resultSize": 296
  }
}

at com.couchbase.client.deps.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99)
at com.couchbase.client.deps.io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at com.couchbase.client.deps.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:182)
at com.couchbase.client.deps.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at com.couchbase.client.deps.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at com.couchbase.client.deps.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at com.couchbase.client.deps.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at com.couchbase.client.deps.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)

#9

Hi @gadipati!
Can you confirm which version of the Java SDK you are using?

There’s a developer preview that is the one adapted for N1QL DP4: see http://blog.couchbase.com/n1ql-dp4-java-sdk .

Note that there was a second release of the developer preview last week, so if you were using 2.1.0-dp you may want to upgrade to 2.1.0-dp2.
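
If you’re pulling the SDK in via Maven, the upgrade would be a dependency change along these lines (a sketch; the group/artifact coordinates are the standard Java SDK ones, but verify the exact version string against the blog post above):

```xml
<!-- Sketch: developer-preview dependency; confirm the version
     against the DP4 announcement before using. -->
<dependency>
    <groupId>com.couchbase.client</groupId>
    <artifactId>java-client</artifactId>
    <version>2.1.0-dp2</version>
</dependency>
```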

Simon


#10

Thanks Simon,

It works fine after the version upgrade above.

Thanks,
Rama


#11

I’m seeing inconsistent query execution.

select distinct ru.horseName, ru.ownerName from default ru join default ra on keys ru.raceId join default tr on keys ra.raceCardId where 1=1

Sometimes the query works fine, sometimes it doesn’t.

Caused by: java.lang.IllegalStateException: Error parsing query response (in TRANSITION) at {
  "requestID":
at com.couchbase.client.core.endpoint.query.QueryHandler.transitionToNextToken(QueryHandler.java:375)
at com.couchbase.client.core.endpoint.query.QueryHandler.parseQueryResponse(QueryHandler.java:320)
at com.couchbase.client.core.endpoint.query.QueryHandler.decodeResponse(QueryHandler.java:190)
at com.couchbase.client.core.endpoint.query.QueryHandler.decodeResponse(QueryHandler.java:62)
at com.couchbase.client.core.endpoint.AbstractGenericHandler.decode(AbstractGenericHandler.java:161)


#12

[Java]
Thanks @gadipati.
The fact that the same query sometimes works and sometimes doesn’t is odd.
From the look of the stack trace, the parser tries to find a response section (like signature/errors/results) right at the beginning of the response, which should already have been parsed separately to extract the requestID part…

Can you try to reproduce it with full logging enabled and post the full log on pastebin, gist.github.com or a similar site? To activate full logging, for example if you have/add log4j in your classpath, you can use the following log4j.properties configuration (the TRACE level being the important part):

# Root logger option
log4j.rootLogger=TRACE, stdout

# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%t] %d{HH:mm:ss} %-5p %c{1}:%L - %m%n

This will give us every network message and allow me to see how the response was transmitted (e.g. whether it was split into chunks and what the chunks look like).

Also, can you tell us more about your setup? Does this happen in a local test with just one node, for example, or on a larger cluster? Does the inconsistency show up within the same client (e.g. querying in a loop), across multiple clients, or across multiple executions? And when you run the same query in the cbq command-line client, what do you get back from the N1QL server?

Thanks
Simon


#13

Hi Simon,

Here is the gist Log.

I have been testing it on my local (Windows) and a remote (Linux) machine using the JBoss application server. My Couchbase server is a single node with a single bucket, not clustered. I get this error in 90% of my testing. The query works fine in the cbq client.

Thanks,
Rama


#14

Can anybody help me with this issue?

Thanks,
Rama


#15

Hi @gadipati,

Sorry for taking so long to follow up on this. It seems that the current (as of 2.1.0-dp2) state of the parser is actually still vulnerable to small network chunking, especially at the beginning of the response (before the results start).
I’m not sure whether this chunking is done by the server itself or by the network stack (@geraldss, any thoughts on that?).

From your logs, I was able to improve the parser further and obtain a more resilient version. Unfortunately, since the logs were uploaded on the day of the 2.1.0 release, the fix couldn’t be part of that release.

Rama, are you in a position to grab the sources and build from the master branch to validate that the latest parser fixes your problem?


#16

Hi @simonbasle, in this case, the chunking you are seeing is probably due to the network stack.
The N1QL server does not chunk - it buffers up to the first 16K bytes of the response (the size of this buffering can be controlled by a server-level argument called keep-alive-length), and then starts streaming the response if it is larger than that (or writes it out in one go if it is smaller).

Thanks,
Colm.
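
Colm’s buffering explanation above is why a client-side streaming parser has to tolerate arbitrary chunk boundaries: once the response exceeds the buffer size, the network stack may split it at any byte. A minimal, self-contained illustration (not the SDK’s actual parser; the brace-counting check is a deliberate simplification that ignores braces inside string values):

```java
// Sketch: a response split into small network chunks only becomes
// structurally complete once all chunks have been accumulated, so a
// parser acting on individual chunks must be prepared to wait.
public class ChunkingDemo {

    // Returns true once the accumulated text forms a balanced JSON
    // object (simplified: does not account for braces inside strings).
    static boolean looksComplete(String json) {
        int depth = 0;
        boolean seenBrace = false;
        for (char c : json.toCharArray()) {
            if (c == '{') { depth++; seenBrace = true; }
            else if (c == '}') depth--;
        }
        return seenBrace && depth == 0;
    }

    public static void main(String[] args) {
        String response =
            "{\"requestID\":\"abc\",\"results\":[{\"x\":1}],\"status\":\"success\"}";
        StringBuilder buffer = new StringBuilder();
        int chunkSize = 16; // simulate small TCP segments
        for (int i = 0; i < response.length(); i += chunkSize) {
            buffer.append(response, i, Math.min(i + chunkSize, response.length()));
            // Only the final iteration yields a complete document;
            // every intermediate buffer state is unparseable on its own.
            System.out.println(looksComplete(buffer.toString()));
        }
    }
}
```

Splitting at 16 bytes here is arbitrary; the point is that correctness must not depend on where the boundaries fall.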


#17

Hi Simon,

Here is the log with the master version: Gist Log

Thanks.


#18

@gadipati the link doesn’t work… you still had issues then? :confused:


#19

Yes Simon, I still had the issue with master.


#20

The gist link is live now. Please have a look at it.