Can I have an isolated query node?

My understanding is that adding Query nodes increases throughput because the clients will round robin queries to the available query nodes.

I’m wondering if I can add a query node, somehow exclude it from the round robin, and point particular queries to it.

This is useful because we have a few very long-running, heavy transactional queries that we’d like to run in isolation.

Ideally we’d like to be able to spin up and take down an isolated node as needed for these queries, so that they don’t adversely impact the rest of the system.

Also, am I correct about the round-robin strategy? If so, I don’t understand how the system prevents starvation for a given query, since it’s possible that it’s been round-robined to a query node that is already busy with a long-running query.

I don’t think it is possible at present. cc @binh.le. One option is the Analytics Service for those queries, if possible.


@naftali you are correct about round-robin, but the query engine does not support pipelining, so a specific socket will only ever hold one open N1QL query (i.e. the client will not dispatch 2 queries onto the same socket if one is still in progress).


@daschl

Thank you for this important information.

Just to confirm: if I want to increase concurrent query execution, or get any at all (within a single application node), I’d have to use a pool of Couchbase connections.

Please do confirm, because that’s at once counterintuitive to me and super important to know definitively.

In JS I’ve been dispatching queries that return promises, only awaiting the promises after having fired a few of them off, thinking that I’d get some concurrent query processing out of it.
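
Roughly, the pattern I mean looks like this (a minimal sketch in the style of the 3.x Node SDK’s promise-based cluster.query(); the host, credentials, and queries are placeholders, not our actual code):

```ts
// Minimal sketch, Node SDK 3.x style; host, credentials, and queries are placeholders.
import * as couchbase from 'couchbase';

async function main(): Promise<void> {
  const cluster = await couchbase.connect('couchbase://localhost', {
    username: 'Administrator',
    password: 'password',
  });

  // Fire several queries without awaiting; each call returns a promise immediately.
  const pending = [
    cluster.query('SELECT COUNT(*) AS n FROM `travel-sample` WHERE type = "airline"'),
    cluster.query('SELECT COUNT(*) AS n FROM `travel-sample` WHERE type = "hotel"'),
    cluster.query('SELECT COUNT(*) AS n FROM `travel-sample` WHERE type = "route"'),
  ];

  // Only now await them all, in the hope that the SDK overlaps their execution.
  const results = await Promise.all(pending);
  results.forEach((result, i) => console.log(`query ${i}:`, result.rows[0]));
}

main().catch(console.error);
```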

According to what you’ve said, that’s basically a waste of effort, because the queries will in any case be dispatched sequentially.

Please do let me know and thank you so much for your help.

Generally, no. You don’t need to pool connections at the app layer.

More specifically, the Couchbase SDK will automatically buffer and dispatch queries with reasonable out-of-the-box defaults. You may be able to tune it a bit more specifically to your workload. See the tunables in the docs, in particular the I/O Options and maxHttpConnections().

Also, @daschl is more the expert here, but I think it’s true that if you use TLS, the client and server will automatically negotiate HTTP/2, and that will effectively allow multiple streams. The multiplexing is done within HTTP/2 itself, rather than by adding connections as we have to do with HTTP/1.1, since it’s effectively not possible to pipeline on HTTP/1.1. On the cluster side, there is some serialization in the processing of the multiple streams, though.


Ok thank you @ingenthr.

I think I misunderstood @daschl: when he said “the client will not dispatch 2 queries onto the same socket,” I assumed that the client only maintained one socket.

I gather from you, and from the maxHttpConnections option, that the client internally maintains a pool of connections on which to dispatch queries (which is why we need not manage multiple Couchbase connections at the app layer).

Would you mind confirming this? I think it would be helpful to have that pinned down in the forum threads.

I would certainly love for @daschl to confirm that if my Node.js connection is communicating over TLS, the client will multiplex concurrent queries over that connection.

Most importantly, to get an idea of worst-case performance (especially considering that we scale out numerous app nodes): what is the upper bound on, or typical range of, the number of sockets the client will maintain, or the number of queries the client will multiplex in the TLS case?

Thank you for all your help, this info is gold.

Confirmed, but note that this behavior varies from client to client, since we necessarily leverage the I/O strategies and libraries of the various platforms. I was referring to the Java SDK in the example I gave.

Using Node.js changes the scenario quite a bit. Node itself approaches concurrency internally with an event-driven architecture. It’s a bit of an oversimplification, but imagine you have a bunch of concurrent requests: what our Node.js SDK does, in conjunction with libcouchbase (the underlying C library), is load up a number of connections with requests, and then when the Node.js event loop receives data back, it processes all of the responses. Then, because Node itself does not run your JavaScript on multiple threads, it is often deployed with multiple processes on a given system, as sketched below.
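
To illustrate the multiple-processes point, here is a minimal sketch using Node’s built-in cluster module (host, credentials, and the query are placeholders, and this is not SDK code; isPrimary requires a recent Node, older versions use isMaster):

```ts
// Sketch: one Node process per CPU core, each holding its own SDK connection.
// Host, credentials, and the query are placeholders.
import cluster from 'cluster';
import os from 'os';
import * as couchbase from 'couchbase';

if (cluster.isPrimary) {
  // The primary process only forks workers; it does no query work itself.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Each worker process opens its own connection and runs queries independently.
  couchbase
    .connect('couchbase://localhost', { username: 'Administrator', password: 'password' })
    .then((cbCluster) => cbCluster.query('SELECT 1 AS ok'))
    .then((result) => console.log(`worker ${process.pid}:`, result.rows))
    .catch(console.error);
}
```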

My colleague @brett19 can probably expand on this.


@naftali - I have opened MB-43215 to track the need for an isolated query node for the long-running query use case.


@binh.le that will be an awesome feature.

I suspect that the link is wrong, though. It leads to a six-day-old bug titled “What will be N1QL default txtimeout”.

Update, found it: https://issues.couchbase.com/browse/MB-43215
