Why are extra documents created in a read-only database?

Hi everyone.

I have configured one of my Sync Gateway databases to be “read-only”.

The “read-only” rule is controlled by the if condition below in serviceconfig.json:

if (doc)  {
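The snippet above is truncated; roughly, the idea is a sync function that only lets users with the “admin” role write. A minimal sketch of that pattern (not my exact function), using Sync Gateway's built-in requireRole() and channel() helpers:

```javascript
function (doc, oldDoc) {
  // Only users with the "admin" role may create, update, or delete documents.
  // requireRole() rejects the write with a 403 for everyone else, which makes
  // the database effectively read-only for the iOS and Android users.
  requireRole("admin");
  channel(doc.channels);
}
```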

I have the “admin” role, which allows me to put data into Couchbase, and I have put around 40 documents in it. We have two users named iOS and Android; they don’t have the “admin” role, so they are not allowed to write data, only read it.

But when I look in the bucket, I see more than 960,000 documents.

I set the database to be read-only, so I expected it to contain only 40 docs, not 960,000!

Here is an example of one of the odd extra documents found in the bucket:

Doc ID: _sync:local:0002ca1fef9b5a416884a956064ad4de9b024ffa

    "_rev": "0-3",
    "lastSequence": "33"

I set the document IDs myself, and IDs like the example above are unfamiliar to me.

Extra info:

  • Couchbase Server is a cluster of 3 nodes.
  • We have more than 260,000 channels and 260,000 users, and the numbers keep growing.
  • There are 2 Sync Gateway servers.

I would appreciate it if someone could help me, because I really don’t know what’s going on!

Hi @zahra.darvishian,

All documents whose IDs start with "_sync" are Sync Gateway metadata documents, which are required for replication to clients to work correctly.

The "_sync:local:xxxxxxxx" documents are per-device user checkpoints. They mark where each unique user/device combination last replicated up to, so they don’t have to start from the beginning of time when they start replicating.

You mention only having 2 users, Android and iOS, but each of these checkpoints represents a physical device, so I’d expect there to be around 260,000 of them if you have 260,000 app users.

The rest of the 960,000 will probably be made up of other similar metadata.
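A quick way to get a feel for where the 960,000 comes from is to group the bucket's document IDs by prefix. A rough sketch (the prefix meanings below are my assumptions based on Sync Gateway's "_sync:" naming convention, not output from your bucket):

```javascript
// Rough classifier for Sync Gateway metadata document IDs.
// Assumption: "_sync:local:" = replication checkpoints, "_sync:user:" /
// "_sync:role:" = principal records (one per user/role), and any other
// "_sync:" prefix is some other piece of Sync Gateway bookkeeping.
function classifyDocId(id) {
  if (id.startsWith("_sync:local:")) return "replication checkpoint";
  if (id.startsWith("_sync:user:")) return "user record";
  if (id.startsWith("_sync:role:")) return "role record";
  if (id.startsWith("_sync:")) return "other Sync Gateway metadata";
  return "application document";
}

const ids = [
  "_sync:local:0002ca1fef9b5a416884a956064ad4de9b024ffa",
  "_sync:user:iOS",
  "myAppDoc::42",
];
for (const id of ids) {
  console.log(id, "->", classifyDocId(id));
}
```

With 260,000 users, a per-user record plus a per-device checkpoint alone would already account for a large share of the count.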

Are you seeing a functional issue because of these extra documents?


Hi Ben, thanks a lot for replying.
Yes, 2 days ago we faced a problem whose logs are as follows:

2020-02-09T19:41:39.951+03:30 [INF] Initializing indexes with numReplicas: 1...
2020-02-09T19:41:40.035+03:30 [INF] DCP: Backfill in progress: 0% (1 / 1888)
2020-02-09T19:41:40.094+03:30 [INF] Cache: Received #8256787 (unused sequence)
2020-02-09T19:41:40.094+03:30 [INF] Cache: Received #8256843 (unused sequence)
2020-02-09T19:41:40.467+03:30 [INF] Cache: Received #8256715 (unused sequence)
2020-02-09T19:41:40.474+03:30 [INF] DCP: Backfill complete
2020-02-09T19:41:40.475+03:30 [INF] Cache: Received #8256818 (unused sequence)
2020-02-09T19:41:40.887+03:30 [INF] Verifying index availability for bucket zDatabase...
2020-02-09T19:42:55.888+03:30 [INF] Timeout waiting for index "sg_access_x1" to be ready for bucket "zDatabase" - retrying...
2020-02-09T19:42:55.888+03:30 [INF] Timeout waiting for index "sg_channels_x1" to be ready for bucket "zDatabase" - retrying...

Both Sync Gateway servers became unavailable: ports 4984 and 4985 stopped listening, although the Sync Gateway service was still running.
We then deleted the bucket backing zDatabase in order to get our Sync Gateway ports back.
Fortunately we had a backup of the data, so no data loss occurred.

As the timeout message indicates, Sync Gateway was waiting for its indexes in Couchbase Server to come online. It would be worth investigating why this happened from the server side by looking at the indexer logs.
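For example, you can check from the Query Workbench or cbq whether any of the bucket's indexes are stuck in a non-online state (a sketch; adjust the keyspace name if yours differs):

```sql
SELECT name, state, keyspace_id
FROM system:indexes
WHERE keyspace_id = "zDatabase" AND state != "online";
```

Any rows returned here would correspond to the indexes Sync Gateway was timing out waiting for.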

I presume you’ve not made any changes recently to the Couchbase Server cluster?
Any upgrades, etc. that might’ve caused this?

No, there were no changes to the Couchbase servers.
What else could be the reason?