Error: Data received on socket was not in expected format

<RC=0x16[Data received on socket was not in the expected format], HTTP Request failed. Examine 'objextra' for full result, Results=1, C Source=(src/http.c,140), OBJ=ViewResult<RC=0x16[Data received on socket was not in the expected format], Value='},\n\n    ],\n    "status": "timeout",\n    "metrics": {\n        "elapsedTime": "1m15.289138508s",\n        "executionTime": "1m15.28911565s",\n        "resultCount": 59777,\n        "resultSize": 78879564\n    }\n}\n', HTTP=200>>

Could someone point me towards how to fix this?

The error gets thrown while iterating over the results of a N1QL query using the Python SDK. Offending line:

for row in cb.n1ql_query(query):

Here’s the traceback:

Traceback (most recent call last):
  File "", line 95, in <module>
  File "", line 29, in writeToES
    for row in cb.n1ql_query(query):
  File "/usr/local/lib/python2.7/dist-packages/couchbase/", line 343, in __iter__
    raw_rows = self.raw.fetch(self._mres)
_ProtocolError_0x16 (generated, catch ProtocolError): <RC=0x16[Data received on socket was not in the expected format], HTTP Request failed. Examine "objextra" for full result, Results=1, C Source=(src/http.c,140), OBJ=ViewResult<RC=0x16[Data received on socket was not in the expected format], Value="},\n\n    ],\n    "status": "timeout",\n    "metrics": {\n        "elapsedTime": "1m15.540084496s",\n        "executionTime": "1m15.540053887s",\n        "resultCount": 60085,\n        "resultSize": 79286006\n    }\n}\n", HTTP=200>>

The error seems to occur randomly.


There is nothing wrong with the document that it's trying to bring in. I am able to write a query to bring in that specific document. Moreover, it fails with this error randomly across docs (never consistently on the same doc).


Can you please add some code to catch the exception, print the raw output, and copy-paste it here?



@manik, I would love to, but Couchbase doesn't yet have proper documentation around catching error codes for N1QL queries using the Python API.

Can you please send me a link to the docs when they are complete, and then I can trap the error for you.

Right now, we just trap a generic error and reprocess. It works.
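For reference, the trap-and-reprocess workaround can be sketched like this (a minimal sketch; `run_with_retry`, the retry limit, and the `QueryError` stand-in are assumptions, standing in for the real SDK call and its protocol exceptions):

```python
import time

class QueryError(Exception):
    """Stand-in for the SDK's protocol/query exceptions."""

def run_with_retry(run_query, retries=3, delay=0.1):
    """Call run_query, trapping the generic error and retrying;
    re-raise the last error if every attempt fails."""
    last_err = None
    for _ in range(retries):
        try:
            return run_query()
        except QueryError as err:
            last_err = err       # trap the generic error...
            time.sleep(delay)    # ...back off, then reprocess
    raise last_err

# Simulated flaky query: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_query():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise QueryError("Data received on socket was not in expected format")
    return [{"type": "doc"}]

rows = run_with_retry(flaky_query)
```

In practice the `except` clause would catch the SDK's exception type rather than the stand-in used here.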


The server is returning invalid JSON, as seen in the exception. Specifically:

},] is invalid JSON, as there is a trailing comma after the last item in the list.
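To see why, feeding a fragment with such a trailing comma to a strict JSON parser fails (a minimal illustration using Python's standard json module; the fragment below is a simplified stand-in for the truncated server response):

```python
import json

# A list with a trailing comma before the closing bracket,
# as in the truncated response above.
bad = '{"results": [{"a": 1},], "status": "timeout"}'

try:
    json.loads(bad)
    parsed = True
except ValueError:  # json.JSONDecodeError subclasses ValueError
    parsed = False
```

The client therefore cannot decode the response and surfaces the 0x16 "not in expected format" protocol error instead.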


Issue filed as


Aah, I see how that would have happened. The query timed out midway through request processing. It should have inserted an empty result set after that comma, before the closing ‘]’ and the time-out error. @mnunberg it looks like the default timeout for the Python client is 75 seconds; as a workaround, is it possible to increase that? That way a timeout may not occur.



The timeout can be set on a global basis by using the n1ql_timeout option in the connection string.

The timeout can also be set on a per-query basis, but it must be lower than or equal to the timeout specified globally (75 seconds by default, unless modified in the connection string).

Libcouchbase will also honor the timeout property in the N1QLQuery object itself, like so:

q.set_option("timeout", "100s")

for example.
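Putting the two levels together, a sketch might look like this (the host, bucket name, and timeout values are assumptions; the SDK calls are shown as comments so the snippet stands on its own):

```python
# Global N1QL timeout, in seconds, set via the connection string.
conn_str = "couchbase://localhost/default?n1ql_timeout=300"

# from couchbase.bucket import Bucket
# from couchbase.n1ql import N1QLQuery
#
# cb = Bucket(conn_str)
# q = N1QLQuery("SELECT DISTINCT type FROM default WHERE type IS NOT NULL")
# q.set_option("timeout", "100s")  # per-query; must not exceed the global value
# for row in cb.n1ql_query(q):
#     print(row)
```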

I’ll see about adding a proper Pythonic way to modify the n1ql timeout in-situ (i.e. assignable after Bucket creation).


Thank you.

I set the n1ql_timeout to a higher value and that seems to have resolved this issue.

Though we now sometimes see this:

File "/usr/local/lib/python2.7/dist-packages/couchbase/", line 298, in _handle_meta
    raise N1QLError.pyexc("N1QL Execution failed", err)
N1QLError: <N1QL Execution failed, OBJ={u"msg": u"Index scan timed out - cause: Index scan timed out", u"code": 12015}>

Any suggestions?


@mnunberg @manik

My new issue seems to be a duplicate of: and - maybe even related to

They all have closed/resolved status. Do you know what the resolution is?


Hi @whollycow007,

With secondary indexes, an index scan cannot exceed 2 minutes. This is a design limitation. If you expect to be scanning a large dataset, then you should consider one of the following workarounds:

  1. Create an index which is specific to the query in question, such that the selectivity of that index is smaller than that of the original index. For example, if the query contains a WHERE clause, then create the index with that WHERE clause.

  2. Use a view index.
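As an illustration of workaround 1, an index for a query filtering on `type IS NOT NULL` could carry that same WHERE clause (the index name here is an assumption; the DDL string would be executed through the SDK like any other N1QL statement):

```python
# N1QL DDL for a query-specific (partial) index that matches the
# query's WHERE clause; executed via cb.n1ql_query(...) in the SDK.
partial_index = (
    "CREATE INDEX docType_nonnull ON default(type) "
    "WHERE type IS NOT NULL USING GSI"
)
```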




This is a test server with about 350,000 docs, mostly small ones. Total volume is 255 MB. 99% of the docs have a ‘type’ key, and the ones that don’t are atomic counters.

This is my index definition:

CREATE INDEX docType ON default(type,insertEpoch,updateEpoch) USING GSI

And this my query:

select distinct type from default where type is not null

A describe command shows the index docType is being used.

I’ll try a view index later tonight - could you briefly tell me the difference between a GSI and a view? My assumption would be that views are auto-refreshed every 5 seconds, whereas a GSI is updated based on data changes (like a traditional index)? I’m possibly completely wrong, since I see spikes in IOPS after putting in the GSI too.



Hi @whollycow007

Here is a document that details the difference between views and GSI



One small addition to this thread. We are also providing covering indexes beginning with 4.1.0 developer preview. These indexes avoid the key-value fetch after scanning the index, and should produce better performance for your use case.
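The covering idea can be illustrated in miniature: an index covers a query when every field the query projects or filters on is among the index keys, so the key-value fetch after the index scan can be skipped (the index name below is an assumption; the set comparison is just an illustration of the covering check, not the server's actual planner logic):

```python
# Hypothetical covering index for the query in this thread.
covering_index = "CREATE INDEX docTypeCover ON default(type) USING GSI"

# The query only touches "type", which is an index key, so the
# index alone can answer it without fetching the full documents.
index_keys = {"type"}
query_fields = {"type"}  # SELECT DISTINCT type ... WHERE type IS NOT NULL
is_covered = query_fields <= index_keys
```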
