I’m doing a comparison of N1QL query response times between the Query Workbench and the Java SDK. I observe that with the Java SDK the response time is much higher than what shows up in the Query Workbench:
// Measuring client-side wall-clock time around the query call
SimpleN1qlQuery simpleN1qlQuery = N1qlQuery.simple("select * from .....");
long startTime = System.currentTimeMillis();
N1qlQueryResult n1qlQueryResult = bucket.query(simpleN1qlQuery);
long queryTime = System.currentTimeMillis() - startTime;
Using the Java SDK, I’m seeing response times of 1.2 to 1.7 seconds.
The Workbench shows response times of 60–100 milliseconds.
What’s the reason for this abnormality?
A few things jump to mind that could explain the difference:
- You are comparing execution time measured on the client to the metrics returned by the query engine about its own execution time. In the SDK, you’ll see that you can access the same metrics. These should be very close to each other, but other time may be spent elsewhere in the system, like…
- You don’t show the whole program, but if this runs right after the app starts, the connection isn’t built yet. Startup time, establishing connections, early GC, JIT compilation, etc. will take a bit of time. These days the JVM usually defaults to the server VM, which trades slow startup for fast execution over a long run. One way to check for this would be to run the query in a loop with a simple sleep(1000) between iterations.
- Is it possible that you’re running with a higher concurrency of some sort in your SDK program and comparing that to single-statement executions in the Web UI?
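The loop check from the second point can be sketched like this. This is a minimal sketch in plain Java; the `Runnable` workload passed to `measure` is a placeholder, and in a real test you would substitute the actual `bucket.query(...)` call:

```java
import java.util.ArrayList;
import java.util.List;

public class WarmupCheck {

    // Run the given workload repeatedly, pausing between iterations,
    // and return the per-iteration wall-clock timings in milliseconds.
    static List<Long> measure(Runnable query, int iterations, long sleepMillis)
            throws InterruptedException {
        List<Long> timings = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            query.run();
            timings.add(System.currentTimeMillis() - start);
            Thread.sleep(sleepMillis); // let connections and the JIT settle
        }
        return timings;
    }

    public static void main(String[] args) throws InterruptedException {
        // Placeholder workload; substitute the real bucket.query(...) call here.
        List<Long> timings = measure(() -> { }, 10, 1000);
        // If the first iteration is much slower than the rest, the gap is
        // startup cost (connection setup, JIT, early GC), not query latency.
        System.out.println("timings (ms): " + timings);
    }
}
```

If the first couple of timings are large and the rest are small, the slowness is startup cost rather than steady-state query latency.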
We do a number of benchmarks with the Java SDK that have very low latencies, so I’m fairly confident there’s an explanation. I wouldn’t call it an ‘abnormality’ until we know what the cause is.
Thanks for the insight. I think your point 1 is the reason and I wasn’t comparing apples to apples. So what I see in the Query Workbench (elapsed time, execution time) is from the server side, and it doesn’t take into account the network latency (to transfer all the JSON data?).
You’re referring to this kind of API for metric collection, which can be gathered from the SDK? If not, please point me to the right documentation:
event bus metric
On point 2, it’s just a simple N1QL query being executed in a web framework (Play Framework), and I’m getting the bucket connection only once during app startup. I did the warm-up, executed the query several times consecutively, and took the average of all response times.
On point 3, there’s no concurrency; it’s a plain, simple main program.
I believe the “Execution Time” maps to the metrics returned by N1QL, and the “Elapsed Time” is what the query workbench observes as the time between the request and the completion of the response. That execution time should match what is in the Java SDK’s metrics returned on a query result. @eben: is this correct?
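As a sketch of reading those per-query metrics from the SDK, assuming the 2.x Java SDK, where `N1qlQueryResult.info()` returns an `N1qlMetrics` object (method names worth double-checking against the SDK docs), it would look roughly like this. It requires a connected `Bucket`, so it is illustrative only:

```java
// Sketch only: assumes the 2.x Java SDK and a connected Bucket.
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.query.N1qlMetrics;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;

public class QueryMetrics {
    static void printMetrics(Bucket bucket, String statement) {
        long start = System.currentTimeMillis();
        N1qlQueryResult result = bucket.query(N1qlQuery.simple(statement));
        long clientMillis = System.currentTimeMillis() - start;

        N1qlMetrics metrics = result.info();
        // Server-side figures, comparable to the workbench numbers:
        System.out.println("elapsedTime (server): " + metrics.elapsedTime());
        System.out.println("executionTime (server): " + metrics.executionTime());
        // Client-side wall clock, which also includes network and SDK overhead:
        System.out.println("client wall clock (ms): " + clientMillis);
    }
}
```

Comparing the server-side values against the client wall clock separates query-engine time from network and client overhead.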
We generally would expect the “Elapsed Time” and the time measured at the client with something like a delta on currentTimeMillis() to be pretty similar in steady state, where the networking between the query workbench as a client of cbq-engine and the Java SDK as a client of cbq-engine is similar.
Did your updated run with the loop show a result more along the lines of what you’d expect? For what it’s worth, I usually look at something like the 98th percentile rather than the average, as the average is likely to include those startup-time outliers. They tend to give you an unrealistic picture of how something operates in steady state.
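The difference the 98th percentile makes can be seen with a small example in plain Java, using made-up latency numbers: the 100 ms samples stand in for steady state and the two large values for startup outliers.

```java
import java.util.Arrays;

public class PercentileExample {

    // Nearest-rank percentile: the value at rank ceil(p/100 * n) in the
    // sorted samples.
    static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // 98 steady-state samples of 100 ms plus two startup outliers.
        long[] latencies = new long[100];
        Arrays.fill(latencies, 100);
        latencies[0] = 1500;
        latencies[1] = 1400;

        double average = Arrays.stream(latencies).average().orElse(0);
        long p98 = percentile(latencies, 98);

        System.out.println("average (ms): " + average); // 127.0, dragged up
        System.out.println("p98 (ms): " + p98);         // 100, steady state
    }
}
```

The average is pulled up to 127 ms by two outliers, while the 98th percentile still reports the 100 ms steady-state latency.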
@ingenthr isn’t quite correct - the metrics reported in the Query Workbench are all coming from the server. The values for “Execution Time” and “Elapsed Time” are the same values you would see in the “metrics” section of a query result if you ran the query in cbq, or via the REST API. Thus “Elapsed Time” is total wall-clock time on the server required to complete the query, “Execution Time” is the time spent by the server working on the query. The two are usually close, unless the server is working on many queries at once.
Ah, that makes sense. I knew of cbq-engine’s metrics, but I wasn’t sure whether the Query Workbench gave elapsed time from its own perspective. Thanks.