Slow answer from Couchbase Server 4 using couchnode compared to cbq

40-preview
40-beta
#1

Hello, we are experiencing a serious problem with delayed/slow/high-latency responses from Couchbase Server 4.0.0-1869-rel using the Node.js module couchbase@2.0.8 and node v0.12.2. The server has 8 GB of RAM and we have only around 6000 documents in it. Any ideas? I thought Couchbase could do better; should we change databases?

The first SELECT is made through the couchnode module and the second through cbq.
As you can see, there is a difference of about 1700 ms in this example.

Any idea what’s going on? How could we optimize the query?

Total count in DB: 5460

Node.js through the couchnode module:

function getCount(status_array, id, callback){
    var _where = '';
    console.time('getCount');
    // Index-based loop: for..in iterates array keys as strings, which makes
    // comparisons like `i > 0` rely on implicit coercion.
    for (var i = 0; i < status_array.length; i++) {
      if (i > 0) {
        _where = _where + ' AND ';
      }
      _where = _where + 'status_id != ' + status_array[i];
    }
    var _sQuery = "SELECT COUNT(*) AS count FROM crm WHERE jsonType = 'aaa' AND jsonSubType = 'aaabbb' AND row_id = '" + id + "' AND (" + _where + ")";
    var query_string = db_n1ql.fromString(_sQuery).consistency(db_n1ql.Consistency.REQUEST_PLUS);
    console.time('Select: getCount');
    db_crm.query(query_string, function(err, resultset){
      console.timeEnd('Select: getCount');
      console.timeEnd('getCount');
      // Guard against errors: resultset is undefined when err is set.
      if (err) {
        console.error(err);
        return;
      }
      callback(resultset[0].count);
    });
}



Select: getCount: 1821ms
getCount: 1822ms
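As an aside, the WHERE-fragment building in the loop above can be expressed more idiomatically with `map`/`join`, which also avoids the `for..in` string-key pitfall. A small sketch (the function name `buildStatusFilter` is mine, and coercing each status to a number is a defensive assumption on my part):

```javascript
// Build "status_id != X AND status_id != Y ..." from an array of status ids.
// Number(s) coerces each element, so non-numeric input can't leak raw strings
// into the concatenated N1QL statement.
function buildStatusFilter(statusArray) {
  return statusArray.map(function (s) {
    return 'status_id != ' + Number(s);
  }).join(' AND ');
}

console.log(buildStatusFilter([2, 3, 4, 5]));
// status_id != 2 AND status_id != 3 AND status_id != 4 AND status_id != 5
```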

Output from cbq:

EXPLAIN SELECT COUNT(*) AS count FROM crm WHERE jsonType = 'aaa' AND jsonSubType = 'aaabbb' AND row_id = 'row_0' AND (status_id != 2 AND status_id != 3 AND status_id != 4 AND status_id != 5);


"metrics": {
        "elapsedTime": "121.587296ms",
        "executionTime": "119.803789ms",
        "resultCount": 1,
        "resultSize": 3340
    }
#2

I don’t believe EXPLAIN actually traverses the results; it merely tells you the “plan” (meaning which indexes it would query and such).

#3

And how do I measure the time it takes to execute the SELECT?

#4

In the example you’ve shown, you compare the time it takes to execute EXPLAIN with the time it takes to execute SELECT. Simply execute the SELECT in both the SDK and cbq; once you’ve done that, you can compare the timings.

#5

Here is the SELECT done through cbq; through the SDK it was done in my first post in this thread.
There is still a difference of about 1400 ms. Any idea what the problem is?

{
    "requestID": "fda46671-a96c-4039-af90-3c3eb725eea9",
    "signature": {
        "count": "number"
    },
    "results": [
        {
            "count": 7
        }
    ],
    "status": "success",
    "metrics": {
        "elapsedTime": "329.867409ms",
        "executionTime": "329.513166ms",
        "resultCount": 1,
        "resultSize": 34
    }
}
#6

Looks like the Node.js SDK doesn’t print elapsedTime in the result.
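If the SDK passes query metadata as a third callback argument (couchnode 2.x does this for N1QL queries, e.g. `db_crm.query(q, function(err, rows, meta) { ... })` — worth verifying against your exact SDK version), you can read the server-side timings from it rather than relying on `console.time`. A sketch against a mocked `meta` object shaped like the cbq JSON above:

```javascript
// Assumed shape of the third callback argument, mirroring the cbq output
// earlier in this thread; verify against your couchnode version.
var meta = {
  requestID: 'fda46671-a96c-4039-af90-3c3eb725eea9',
  metrics: {
    elapsedTime: '329.867409ms',
    executionTime: '329.513166ms',
    resultCount: 1,
    resultSize: 34
  }
};

// Parse the server-reported elapsed time ('329.867409ms') into a number of
// milliseconds; returns null when the metrics are missing.
function serverElapsedMs(meta) {
  if (!meta || !meta.metrics || !meta.metrics.elapsedTime) {
    return null;
  }
  return parseFloat(meta.metrics.elapsedTime);
}

console.log(serverElapsedMs(meta)); // 329.867409
```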

#7

Any ideas on how to resolve this issue?

#8

Bump. Any ideas? I thought Couchbase could do better; should we change databases?
The server has 8 GB of RAM and an E5540 @ 2.53GHz; there is no I/O while the query is running, and I don’t see any high CPU load either.

Thanks!

#9

To begin with, I’d take any performance numbers on the pre-release versions of 4.0 with a grain of salt - the Developer Preview and Beta are not yet fully performance-tuned.

For a start, I’d upgrade to the 4.0 Beta - I believe there’s been significant work on both functionality and performance since the DP.

Secondly, you don’t mention what secondary indexes (if any) you have created. You should consider creating indexes on the heavily used WHERE fields - if you’re just using a primary index, then your query will require a full bucket scan (I believe).
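For example, assuming the bucket is named `crm` as in the query above, a secondary (GSI) index over the filtered fields might look like this (the index name and field order are illustrative):

```sql
CREATE INDEX idx_crm_type_subtype_row ON crm(jsonType, jsonSubType, row_id) USING GSI;
```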

#10

Note also that you’re not performing the query with the same consistency criteria. Via Node.js you request REQUEST_PLUS consistency, which forces the indexer to catch up with all mutations made before the query and can add significant latency.

I don’t see the same thing when you ran the query via cbq.
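For an apples-to-apples comparison, you can pass the same consistency level to the query service directly: the N1QL REST endpoint accepts a `scan_consistency` parameter (`not_bounded` is the default; `request_plus` matches the Node.js code). A sketch, with the default host/port assumed and the statement shortened for readability:

```shell
curl http://localhost:8093/query/service \
  --data-urlencode "statement=SELECT COUNT(*) AS count FROM crm WHERE jsonType = 'aaa'" \
  --data-urlencode "scan_consistency=request_plus"
```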

#11

OK, now we are running the CB 4 Beta, and it looks like GSI helped us a lot! Before it was almost 14 seconds; now the average is around 3.9 seconds.