Log slow key-value operations

Is there a way to log slow get operations on the server to find out what makes them slow (disk reads, too much time spent in a wait queue, etc)? I’m using Couchbase Server 5.1.1.

In 5.5.x we’ve extended this information. The client needs to enable the feature on the server by requesting the Tracing feature for the connection. Note that we haven’t instrumented the entire server yet (adding trace information does affect performance in some workloads, which is why the client needs to opt in). For 6.0.2 we’re recording the following metrics: http://src.couchbase.org/source/xref/6.0.2/kv_engine/tracing/tracetypes.h#26. You’ll find this information in the log files in /opt/couchbase/var/lib/couchbase/logs/memcached.log.nnnnnn.txt

The thresholds for the various commands are specified in /opt/couchbase/etc/couchbase/kv/opcode-attributes.json, and you may override them (see /opt/couchbase/etc/couchbase/kv/opcode-attributes.d/README.md).
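For example, a drop-in override could look roughly like this. I’m going from memory of the shipped defaults here, so treat the exact schema (the "version" field, the per-opcode "slow" key, and milliseconds as the default unit) as an assumption and check the README above before relying on it:

```json
{
  "version": 1,
  "comment": "Lower the slow-operation threshold for GET to 100 ms",
  "GET": {
    "slow": 100
  }
}
```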

I don’t have a real-world example available, but the format you’ll get is something like this (I had added my own probes while trying to figure out another problem):

2019-10-10T11:38:34.582025+00:00 WARNING 84: Slow operation. {"cid":"a1759534726802dd/5a87c4725cf30057/fe210200","duration":"824 ms","trace":"request=23179035108944:824054 state.read_packet_body=23179035109471:6 state.read_packet_body=23179035116584:1 state.read_packet_body=23179035173730:2 state.read_packet_body=23179035176417:0 state.read_packet_body=23179039057132:6 state.read_packet_body=23179039064195:1 state.read_packet_body=23179039157560:7 state.read_packet_body=23179039165696:11 state.read_packet_body=23179241339365:8 state.read_packet_body=23179241349024:0 state.read_packet_body=23179853169087:4 state.read_packet_body=23179853173721:1 state.read_packet_body=23179856914451:3 state.read_packet_body=23179856918574:1 state.read_packet_body=23179859041763:3 state.read_packet_body=23179859045455:1 state.read_packet_body=23179859049044:2 state.execute=23179859051813:111 store=23179859144692:14","command":"SET","peer":""}

You can reformat this to look something like:

[ {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "request", "ts": 23179035108.944, "dur": "824054"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179035109.471, "dur": "6"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179035116.584, "dur": "1"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179035173.73, "dur": "2"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179035176.417, "dur": "0"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179039057.132, "dur": "6"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179039064.195, "dur": "1"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179039157.56, "dur": "7"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179039165.696, "dur": "11"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179241339.365, "dur": "8"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179241349.024, "dur": "0"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179853169.087, "dur": "4"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179853173.721, "dur": "1"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179856914.451, "dur": "3"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179856918.574, "dur": "1"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179859041.763, "dur": "3"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179859045.455, "dur": "1"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.read_packet_body", "ts": 23179859049.044, "dur": "2"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "state.execute", "ts": 23179859051.813, "dur": "111"}
, {"cat": "SET", "ph": "X", "pid": "a1759534726802dd/5a87c4725cf30057/fe210200", "name": "store", "ts": 23179859144.692, "dur": "14"}
]

and load the file in the trace viewer in Chrome (chrome://tracing).
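That reformatting step can be scripted. Here is a minimal sketch in Python that takes the parsed "Slow operation" JSON entry and emits Chrome trace events; the field names follow the log line shown above, and the division by 1000 matches the conversion in the example (the viewer expects `ts` in microseconds):

```python
import json

def trace_to_chrome_events(entry):
    """Convert one 'Slow operation' log entry (already parsed from JSON)
    into Chrome trace-event objects for chrome://tracing."""
    events = []
    # The "trace" field is a space-separated list of name=start:duration spans.
    for span in entry["trace"].split():
        name, _, rest = span.partition("=")
        start, _, dur = rest.partition(":")
        events.append({
            "cat": entry["command"],
            "ph": "X",                # "complete" event: has both ts and dur
            "pid": entry["cid"],
            "name": name,
            "ts": int(start) / 1000,  # viewer expects microseconds
            "dur": int(dur),
        })
    return events

# Shortened version of the log entry above, for illustration.
log_entry = {
    "cid": "a1759534726802dd/5a87c4725cf30057/fe210200",
    "duration": "824 ms",
    "trace": "request=23179035108944:824054 state.execute=23179859051813:111 store=23179859144692:14",
    "command": "SET",
}
print(json.dumps(trace_to_chrome_events(log_entry), indent=2))
```

Writing the resulting list to a file as a JSON array gives you something chrome://tracing can load directly.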

Thank you for the suggestion! Is there perhaps a way to do something like this with older server versions? Our cluster uses 5.1.1.

This first appeared in 5.5, so it isn’t available in 5.1.x. I’d also recommend upgrading to at least the latest 5.5 release, as there are (of course) fixes for known issues in that series.