OpenTracing - Timings



Could you explain how all the numbers add up in the example below?
Looks like decoding the JSON on the app (decode_us) is what took most of the time, but how do dispatch_us and decode_us add into total_us?



Dispatch is effectively the time between when the network request is made and when the response is received by our library. Keep in mind, there are buffers and OS scheduling in there, so it doesn’t exclusively mean “time on the wire”.

The total_us should always be bigger than the dispatch time plus the decoding time. The remaining time goes to scheduling/synchronization and, in some cases, scheduler pre-emption of some sort. That could be just another process, virtualization, or memory management (GC) on the platform.
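To put some made-up numbers on that (the field names match the metrics discussed above, but the values are purely for illustration):

```python
# Hypothetical timings for one operation, in microseconds.
timings = {
    "total_us": 1200,
    "dispatch_us": 700,  # request sent -> response received (buffers, OS scheduling included)
    "decode_us": 350,    # JSON decoding on the app side
    "server_us": 80,     # server-reported time, a subset of dispatch_us
}

# total_us covers the whole operation, so whatever it doesn't attribute
# to dispatch or decode is scheduling/synchronization/GC overhead.
overhead_us = timings["total_us"] - timings["dispatch_us"] - timings["decode_us"]
print(overhead_us)  # 150
```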

Hope that helps; see the sdk-rfc for all kinds of details.


Oh, and yeah, I see the numbers don’t quite add up. Maybe @daschl or @subhashni can comment on that. We can start the parsing before the whole buffer is received, so maybe that is why? In other words, decoding starts before dispatch has completed. Or maybe it’s a bad example. :slight_smile:


@ingenthr … Could I look at the stats like this?


If my assumption is correct, yes. The dispatch probably doesn’t start at exactly the same time as the total, but probably starts only microseconds later in most cases.

By the way, this is exactly what OpenTracing systems like Jaeger Tracing and Lightstep will draw out for you (minus the server_us, as that’s just info in our Tracer without OpenTracing support on KV engine).

In fact, you can go a step further: if, say, you have a method that is doing multiple Couchbase requests or going to other systems, you can pass in (in some SDKs) your own parent span, and you’ll see each operation’s total_us nested under that parent.
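As a toy illustration of that idea (this is not the SDK or OpenTracing API, just a hand-rolled stand-in Span class): each request’s total_us becomes a child of your parent span, and any app time spent between requests shows up as parent time not covered by any child.

```python
import time

# Minimal stand-in for an OpenTracing-style span, for illustration only.
class Span:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)
    def __enter__(self):
        self.start = time.monotonic()
        return self
    def __exit__(self, *exc):
        self.duration_us = (time.monotonic() - self.start) * 1e6

# One parent span wrapping two simulated Couchbase requests.
with Span("handle-request") as parent:
    with Span("couchbase-get-1", parent):
        time.sleep(0.01)   # pretend this is total_us of request 1
    time.sleep(0.005)      # app work between the requests
    with Span("couchbase-get-2", parent):
        time.sleep(0.01)   # pretend this is total_us of request 2

# The gap is time inside the parent not attributed to either request.
gap_us = parent.duration_us - sum(c.duration_us for c in parent.children)
```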