Java Couchbase Client Config for higher throughput


I'm using Java SDK client 3.1.0 to run N1QL queries exposed through my REST API.
I'm using the default environment config and call the sync query like this:

cluster.query(query, QueryOptions.queryOptions().adhoc(false).parameters(param))

This is my env setting:

env = ClusterEnvironment.builder()
        // ... custom settings ...
        .build();

And below is my simple query using KV:

Select * from myBucket use keys 'xxxxxx'

I've tried hitting my API with JMeter, but the throughput per second is not that high.

Is there any config I can tune for higher throughput?


@Han_Chris1 query performance is usually determined by the server side, not so much by the client settings. Can you tell us more about the query itself, how long it runs, did you tune the indexes for it?

Also, your config properties are not a good idea. Do not tune the idle HTTP connection timeout; that won't help you. And if you want to tune the request tracer, do it via the ThresholdRequestTracerConfig setting rather than manually providing one (the null in the builder gives you a hint that something is a bit odd there ;))
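For reference, a minimal sketch of configuring the tracer the supported way in SDK 3.1 (the 2-second threshold is an illustrative value, not a recommendation):

```java
// Sketch: tuning the threshold request tracer via its config builder
// instead of passing a tracer instance manually.
ClusterEnvironment env = ClusterEnvironment.builder()
        .thresholdRequestTracerConfig(ThresholdRequestTracerConfig.builder()
                .queryThreshold(Duration.ofSeconds(2)))
        .build();
```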

So, I would not start with tuning the env but rather tuning your query.

Hi @daschl,

My query is quite simple, using a key; it takes only around 7 ms when I execute it from the Couchbase UI.
But when I query from my Java code, it takes around 100-200 ms per request.
Okay, I can remove those configs, but is there any other config I can set in my code?


@Han_Chris1 can you show us the code for how you actually perform the query and do something with the results? Also, do you take JVM warmup into account? (i.e., run a couple hundred of those queries before measuring, vs. just one?)
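To illustrate why warmup matters, here is a small self-contained measurement harness; the dummy workload stands in for the actual query call, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class WarmupBench {

    // Run `warmup` untimed iterations first so the JIT can compile the hot
    // path, then record per-call latencies for `measured` iterations.
    static List<Long> measure(Supplier<?> op, int warmup, int measured) {
        for (int i = 0; i < warmup; i++) {
            op.get();
        }
        List<Long> latencies = new ArrayList<>(measured);
        for (int i = 0; i < measured; i++) {
            long start = System.nanoTime();
            op.get();
            latencies.add(System.nanoTime() - start);
        }
        return latencies;
    }

    public static void main(String[] args) {
        // Dummy workload standing in for cluster.query(...)
        List<Long> lat = measure(() -> Math.sqrt(42.0), 1_000, 100);
        lat.sort(Long::compare);
        System.out.println("samples=" + lat.size());
        System.out.println("median_nonnegative=" + (lat.get(lat.size() / 2) >= 0));
    }
}
```

Comparing the median from a warmed-up run against the very first calls usually shows the JIT effect clearly.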

Also note that if the query above is really exactly that, you should use KV operations instead if you already know the IDs.
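When the document ID is already known, the KV path skips the query service entirely; a sketch, using the bucket name and key placeholder from the thread:

```java
// Sketch: KV lookup equivalent to "select * from myBucket use keys 'xxxxxx'".
// The KV path goes straight to the data service, bypassing the query engine.
Bucket bucket = cluster.bucket("myBucket");
Collection collection = bucket.defaultCollection();

GetResult getResult = collection.get("xxxxxx");
JsonObject content = getResult.contentAsObject();
```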

Hi @daschl,

Actually I have several N1QL queries, not only this KV-style one. I'm using this query to avoid any issues from the index side.
I'm not taking JVM warmup into account; even a single manual request through Postman takes around 100-200 ms.
I'm using the Quarkus framework to expose the REST API.
The process flow is like this:
init a singleton connection during @PostConstruct, then call the query for each request

My overall code:

ClusterEnvironment env = ClusterEnvironment.builder()
        // ... env settings ...
        .build();

PasswordAuthenticator authenticator = PasswordAuthenticator
        .create(username, password);

Cluster cluster = Cluster.connect(nodes, ClusterOptions
        .clusterOptions(authenticator)
        .environment(env));

QueryResult result = null;
try {
    result = cluster.query(query, QueryOptions.queryOptions().adhoc(false).parameters(param));
    if (result.rowsAsObject().size() > 0) {
        JsonObject objectResult = result.rowsAsObject().get(0);
        return objectResult;
    } else {
        return null;
    }
} catch (QueryException e) {
    LOG.warn("Query failed with exception: " + e);
    return null;
}

Kindly need your advice :slight_smile:

@Han_Chris1 it’s a little hard to say in isolation - would you be able to provide a quarkus project that demonstrates the issue which I can use to reproduce locally?

Hi @daschl,

I've pushed my sample code to GitHub:
Let me know if you can reproduce it locally


@Han_Chris1 it must be something environmental. I cloned your repository, and the only thing I changed was the properties file, to point to a node on localhost with a different bucket name and user - I also had to change the query from myBucket to my bucket name…

I started the app with mvn compile quarkus:dev

One thing I noted is that quarkus opens the connections lazily, so the first query really takes longer until the client is fully bootstrapped - maybe there is a way in quarkus to load the resources eagerly?
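One way to do that, assuming a plain CDI bean (class name and credentials here are illustrative), is to observe Quarkus's StartupEvent so the connection is established at boot rather than on the first request:

```java
// Sketch: eagerly bootstrapping the Couchbase connection at application start.
// Names, connection string, and credentials are placeholders.
@ApplicationScoped
public class CouchbaseConnector {

    volatile Cluster cluster;

    void onStart(@Observes StartupEvent ev) {
        cluster = Cluster.connect("couchbase://127.0.0.1", "user", "password");
        // Block until the cluster is fully bootstrapped before serving traffic.
        cluster.waitUntilReady(Duration.ofSeconds(10));
    }
}
```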

Once the first request went through, I used the wrk benchmarking tool - one thread, one connection - to test the latency. Note that my bucket was empty, since I wanted to make sure there wasn't much contributing to the perf on the SDK side (vs. e.g. a longer-running N1QL query).


 wrk -c 1 -t 1 -d 30s
Running 30s test @
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.99ms  361.20us  17.94ms   93.78%
    Req/Sec     1.02k   128.03     1.20k    69.67%
  30486 requests in 30.01s, 2.09MB read
Requests/sec:   1015.99
Transfer/sec:     71.44KB

It ran for 30s with an average response latency of 1 ms and a max of 17.9 ms - and that's end-to-end.

Here is the diff for my changes:

diff --git a/src/main/java/com/test/api/query/ b/src/main/java/com/test/api/query/
index 85e17d5..16685db 100644
--- a/src/main/java/com/test/api/query/
+++ b/src/main/java/com/test/api/query/
@@ -2,7 +2,7 @@ package com.test.api.query;

 public class StringQuery {

-       public static String getAgentDetail = "select * from myBucket use keys $agent_number";
+       public static String getAgentDetail = "select * from default use keys $agent_number";

diff --git a/src/main/resources/ b/src/main/resources/
index c75c810..eb68748 100644
--- a/src/main/resources/
+++ b/src/main/resources/
@@ -3,8 +3,8 @@ quarkus.http.test-port=9088

 #Couchbase Connection
-query=select * from myBucket
+query=select * from `default`

Hi @daschl,

Thanks for your quick response.
Yes, it initializes the connection on the first request only; after that it reuses the same connection.

So, do you mean there's no issue with my code?
I'm concerned about the latency.
But I've deployed it on a server in the same network segment, so I assume there's no connectivity issue.

Is there any idea about how to debug the latency?


Well, I would try to remove as many potential factors as possible and then go from there:

  • to remove the potential network impact (because if you query from UI you are on the same host), use a couchbase node on localhost
  • you can also try with an empty bucket to remove any potential n1ql latency implications

Noted with thanks @daschl

Hi @daschl,

Still curious: since I need to expose my REST API running N1QL queries with the sync method,
what's the best practice for maxHttpConnections or other configs to optimize performance?

The max HTTP connections setting really comes down to what kinds of queries you run. If you have more long-running queries, they might end up using more connections at the same time, and it can make sense to bump it up. But note that this is more of a server-side question too, since just bumping up the connections does not help you if you are limited by the server's query processing latency. I would stick with the default first, and if that does not meet your performance criteria AND you know that this is the bottleneck, then tune it higher. It also depends on how many app servers, how many query nodes, etc. you have, since the sum of it matters as well. If queries are too slow, I would first try to add more query nodes to the cluster to speed up parallel processing.
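If you do decide to raise it, SDK 3.1 exposes this through the IO config; a sketch (the value 16 is illustrative, not a recommendation):

```java
// Sketch: raising the per-service HTTP connection cap via IoConfig.
ClusterEnvironment env = ClusterEnvironment.builder()
        .ioConfig(IoConfig.maxHttpConnections(16))
        .build();
```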
