The Sequence of Document Synchronization in _changes feed

Hi,
We have a Couchbase Server running, and I have a changes feed that shows me the changes made to documents in Couchbase. My question is whether the order of synchronization is deterministic or random. When I put multiple documents into a bucket, from which they are synced through Sync Gateway, the resulting order of synchronization I see in the feed seems random.

I also have another question. When a document is created or changed in the mobile application, I often see it twice in the changes feed with the same revision number. Is this normal, or are we doing something wrong?


Documents written through Sync Gateway are assigned a sequence value at write time - this sequence is used to order the changes feed.
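To illustrate what that sequence-based ordering looks like from a client's point of view, here is a small sketch that parses a changes response. The document IDs, revisions, and sequence values are made up for illustration; only the field names (`results`, `seq`, `id`, `changes`, `last_seq`) follow the documented changes-feed shape.

```python
import json

# Hypothetical sample of a Sync Gateway GET /db/_changes response body;
# the IDs and sequence numbers are invented for this example.
sample = json.loads("""
{
  "results": [
    {"seq": 101, "id": "doc-a", "changes": [{"rev": "1-abc"}]},
    {"seq": 105, "id": "doc-b", "changes": [{"rev": "2-def"}]},
    {"seq": 112, "id": "doc-c", "changes": [{"rev": "1-ghi"}]}
  ],
  "last_seq": "112"
}
""")

# Entries arrive ordered by the sequence assigned at write time.
seqs = [entry["seq"] for entry in sample["results"]]
assert seqs == sorted(seqs)

# A client should resume its next request from last_seq,
# rather than tracking per-document sequence values itself.
next_since = sample["last_seq"]
print(next_since)  # prints "112"
```

The key point is the last two lines: resume from `last_seq`, and treat the per-entry ordering as an implementation detail.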

I’d need more information on the duplicate changes feed entry question. Generally the same document/revision shouldn’t appear multiple times in the changes feed - if you’ve got a sample changes response with duplicate entries, that might help clarify the situation.

The order of documents in the changes feed isn’t truly random, but you shouldn’t rely on its ordering. Currently it’s ordered by sequence number, which means by ascending order of last modification (on SG). But the more parallelism we add to SG, the less deterministic the ordering becomes.
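Given that the ordering shouldn't be relied on, a client that does occasionally see the same document/revision more than once can deduplicate defensively. This is a client-side workaround sketch, not Sync Gateway behavior; it assumes changes rows shaped like the feed entries above, and the sample values mirror the duplicate sequences reported later in this thread.

```python
def dedupe_changes(entries):
    """Keep only the highest-sequence entry per (doc id, rev) pair.

    `entries` is a list of changes-feed rows shaped like
    {"seq": int, "id": str, "changes": [{"rev": str}]}.
    """
    latest = {}
    for e in entries:
        key = (e["id"], e["changes"][0]["rev"])
        if key not in latest or e["seq"] > latest[key]["seq"]:
            latest[key] = e
    # Re-emit the surviving entries in ascending sequence order.
    return sorted(latest.values(), key=lambda e: e["seq"])

# Example: the same doc/rev appearing twice under two sequence numbers.
rows = [
    {"seq": 6049590, "id": "C-6F70DA24", "changes": [{"rev": "1-6b07e971"}]},
    {"seq": 6049635, "id": "C-6F70DA24", "changes": [{"rev": "1-6b07e971"}]},
]
deduped = dedupe_changes(rows)
print(len(deduped))  # prints 1; only the seq-6049635 entry survives
```

Processing should still be idempotent regardless, since a longpoll/continuous feed makes no exactly-once guarantee.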

I am facing the same problem: one document revision has multiple sequence numbers, which causes this issue in the changes feed on the Sync Gateway's main bucket. One document syncs twice instead of once.
Document from the sync bucket:

--------------------------------------------------------
   "_sync": {
    "rev": "1-6b07e971ea0a9e820cad2d57d2507fb6",
    "sequence": 6049635,
    "recent_sequences": [
      6049590,
      6049635
    ],
    "history": {
      "revs": [
        "1-6b07e971ea0a9e820cad2d57d2507fb6"
      ],
      "parents": [
        -1
      ],
      "channels": [
        [
          "ch-D-570C1364-B9AC-431B-8987-A7CF41A121C5"
        ]
      ]
    },
    "channels": {
      "ch-D-570C1364-B9AC-431B-8987-A7CF41A121C5": null
    },
    "upstream_cas": 1513329109764669400,
    "upstream_rev": "1-6b07e971ea0a9e820cad2d57d2507fb6",
    "time_saved": "2017-12-15T09:11:51.783950571Z"
  }

sync_gateway logs for this doc

----------------------------------
2017-12-15T09:11:49.382Z CRUD+: Invoking sync on doc "C-6F70DA24-361C-4848-A932-351FB5F4C145" rev 1-6b07e971ea0a9e820cad2d57d2507fb6
2017-12-15T09:11:49.382Z CRUD: 	Doc "C-6F70DA24-361C-4848-A932-351FB5F4C145" in channels "{ch-D-570C1364-B9AC-431B-8987-A7CF41A121C5}"
2017-12-15T09:11:49.383Z CRUD: Stored doc "C-6F70DA24-361C-4848-A932-351FB5F4C145" / "1-6b07e971ea0a9e820cad2d57d2507fb6"
2017-12-15T09:11:49.383Z Events+: Event queue worker sending event Document change event for doc id: C-6F70DA24-361C-4848-A932-351FB5F4C145 to: Webhook handler [http://olddev.dirolabs.com:4000/jsonvalidator/validdata]
2017-12-15T09:11:49.749Z Cache: Received #6049590 after 366ms ("C-6F70DA24-361C-4848-A932-351FB5F4C145" / "1-6b07e971ea0a9e820cad2d57d2507fb6")
2017-12-15T09:11:49.761Z Feed: Got shadow event:C-6F70DA24-361C-4848-A932-351FB5F4C145
2017-12-15T09:11:49.761Z Shadow: Pushing "C-6F70DA24-361C-4848-A932-351FB5F4C145", rev "1-6b07e971ea0a9e820cad2d57d2507fb6"
2017-12-15T09:11:49.778Z Changes+: MultiChangesFeed sending {Seq:6049590, ID:C-6F70DA24-361C-4848-A932-351FB5F4C145, Changes:[map[rev:1-6b07e971ea0a9e820cad2d57d2507fb6]]} 
2017-12-15T09:11:51.783Z Shadow+: Pulling "C-6F70DA24-361C-4848-A932-351FB5F4C145", CAS=15006cceafc80000 ... have UpstreamRev="", UpstreamCAS=0
2017-12-15T09:11:51.783Z Shadow+: Not pulling "C-6F70DA24-361C-4848-A932-351FB5F4C145", CAS=15006cceafc80000 (echo of rev "1-6b07e971ea0a9e820cad2d57d2507fb6")
2017-12-15T09:11:51.783Z CRUD+: Invoking sync on doc "C-6F70DA24-361C-4848-A932-351FB5F4C145" rev 1-6b07e971ea0a9e820cad2d57d2507fb6
2017-12-15T09:11:51.783Z CRUD+: updateDoc("C-6F70DA24-361C-4848-A932-351FB5F4C145"): Rev "1-6b07e971ea0a9e820cad2d57d2507fb6" leaves "1-6b07e971ea0a9e820cad2d57d2507fb6" still current
2017-12-15T09:11:51.784Z CRUD: Stored doc "C-6F70DA24-361C-4848-A932-351FB5F4C145" / "1-6b07e971ea0a9e820cad2d57d2507fb6"
2017-12-15T09:11:52.353Z Changes+: MultiChangesFeed sending {Seq:6049590, ID:C-6F70DA24-361C-4848-A932-351FB5F4C145, Changes:[map[rev:1-6b07e971ea0a9e820cad2d57d2507fb6]]}   (to M.865F8392-BA62-4325-BE27-561109850316)
2017-12-15T09:11:53.750Z Cache: Received #6049635 after 1966ms ("C-6F70DA24-361C-4848-A932-351FB5F4C145" / "1-6b07e971ea0a9e820cad2d57d2507fb6")
2017-12-15T09:11:53.753Z Feed: Got shadow event:C-6F70DA24-361C-4848-A932-351FB5F4C145
2017-12-15T09:11:53.754Z Changes+: Found sequence later than stable sequence: stable:[6049609] entry:[6049635] (C-6F70DA24-361C-4848-A932-351FB5F4C145)
2017-12-15T09:11:53.993Z Changes+: MultiChangesFeed sending {Seq:6049635, ID:C-6F70DA24-361C-4848-A932-351FB5F4C145, Changes:[map[rev:1-6b07e971ea0a9e820cad2d57d2507fb6]]}   (to M.865F8392-BA62-4325-BE27-561109850316)
2017-12-15T09:11:54.767Z Changes+: MultiChangesFeed sending {Seq:6049635, ID:C-6F70DA24-361C-4848-A932-351FB5F4C145, Changes:[map[rev:1-6b07e971ea0a9e820cad2d57d2507fb6]]} 
2017-12-15T09:23:56.513Z Changes+: MultiChangesFeed sending {Seq:6049635, ID:C-6F70DA24-361C-4848-A932-351FB5F4C145, Changes:[map[rev:1-6b07e971ea0a9e820cad2d57d2507fb6]]}   (to M.865F8392-BA62-4325-BE27-561109850316)
2017-12-15T09:28:58.147Z Changes+: MultiChangesFeed sending {Seq:6049635, ID:C-6F70DA24-361C-4848-A932-351FB5F4C145, Changes:[map[rev:1-6b07e971ea0a9e820cad2d57d2507fb6]]} 
2017-12-15T09:35:22.246Z Changes+: MultiChangesFeed sending {Seq:6049635, ID:C-6F70DA24-361C-4848-A932-351FB5F4C145, Changes:[map[rev:1-6b07e971ea0a9e820cad2d57d2507fb6]]}   (to M.865F8392-BA62-4325-BE27-561109850316)

Thanks for this information - it’s helpful. It looks like multiple updates are being processed for the document, without creating a new revision. I suspect this is related to bucket shadowing, given your log output. Can you share the version of Sync Gateway you’re running, and your Sync Gateway config?

Thanks adamf for the quick reply. I am using Sync Gateway 1.5 with Couchbase 5.0, and also Sync Gateway 1.4.1 with Couchbase 4.6; I am facing the same problem in both environments.
My Sync Gateway config file is below:

{

"interface": "0.0.0.0:4984",

"adminInterface": "0.0.0.0:4985",

"log" : ["*"],    

"CORS": {

"Origin":["http://xxxxx:4984"],

"LoginOrigin":["http://xxxxx:4984","https://xxxxxx"],

"Headers":["Content-Type"],

"MaxAge": 1728000

 },

"databases": {

    "phonebooks": {

        "server": "http://xxxdb8091",

        "bucket": "bucketname",

        "password": "password",

        "username": "bucketname",

        "sync":"function(doc,oldDoc){if(doc===null){throw({forbidden:'nullDocFound'})}else if(doc.access==='public'){channel('!')}else if((doc.owner!==undefined)&&(doc.type!==undefined)){switch(doc.type){case 'dxp':if(doc.process){var process=doc.process;if(process[0].docver){var docver=process[0].docver;if(docver==='1.0'){channel('olddxp');access(doc.owner,'olddxp')}else{channel('ch-'+doc.owner+'-sos');channel('ch-'+doc.owner+'-'+doc.type)}}else{channel('olddxp');access(doc.owner,'olddxp')}} break;case 'mxch':case 'cxp':case 'dirocard':case 'sdx':case 'dxareatracker':case 'mx':channel('ch-'+doc.owner+'-sos');channel('ch-'+doc.owner+'-'+doc.type);break;case 'cx':if(doc.dx!==undefined){var dxarr=doc.dx;for(var i=0,len=dxarr.length;i<len;i++){channel('ch-'+dxarr[i])}}else if(doc.archivedx!==undefined){var arcdx=doc.archivedx;for(var i=0,len=arcdx.length;i<len;i++){channel('ch-'+arcdx[i])}}else if(doc.sharedx!==undefined){var sharedx=doc.sharedx;for(var i=0,len=sharedx.length;i<len;i++){channel('ch-'+sharedx[i])}} if(doc.mx!==undefined){var mx=doc.mx;for(var i=0,len=mx.length;i<len;i++){channel('ch-'+mx[i]+'-vmx')}} break;case 'dx':channel('ch-'+doc._id);if(doc.dxtype!==undefined){if(doc.dxtype.toLowerCase()==='private'&&!doc.mx_ps){access(doc.owner,'ch-'+doc._id)}} break;case 'nx':if(doc.actionkey!==undefined){switch(doc.actionkey) {case 'us':case 'ca':case 'exit':if(doc.dxid!==undefined&&doc.dxid.length>0){channel('ch-'+doc.dxid[0])} if(doc.cxd){var cxd=doc.cxd;for(var i=0;i<cxd.length;i++){for(var j=0;j<cxd[i].length;j++){if(cxd[i][1]!=='') {channel('dxadmin-'+cxd[i][1])} else{channel('ch-'+doc.dxid[0])}}}};break;case 'inv':case 'hide':case 'unhide':case 'cc_diro_to_native':case 'cc_native_to_diro':case 'cu_diro_to_native':case 'cu_native_to_diro':case 'resync':case 'dco':case 'mxreg':case 'mxn':case 'mxo':case 'f_cx_blue':channel('ch-'+doc.owner+'-'+doc.type);break;case 'dtcgr':if(doc.dxid!==undefined){var dsArr=doc.dxid;channel('ch-'+dsArr[0]);if(dsArr.length>0) 
{var dxID=dsArr[0];if(doc.sharedmx!==undefined){var sharedMx=doc.sharedmx;for(var i=0,len=sharedMx.length;i<len;i++){role(sharedMx[i],'role:dxadmin-'+dxID)}}}} break;default:if(doc.dxid!==undefined){var dsarr=doc.dxid;for(var i=0,len=dsarr.length;i<len;i++){channel('ch-'+dsarr[i])}}}} break;case 'nxp':channel('ch-'+doc.owner+'-'+doc.type);break;case 'coach':channel('ch-'+doc.owner+'-sos');break;default:channel('fail')}} else{channel('fail')}}",

       "users": {"GUEST": {"disabled": true, "admin_channels": ["hx"]}},

        "shadow": {

            "server": "http://xxxdb:8091",

            "bucket": "bucketname_shadow",

            "username": "bucketname_shadow",

            "password": "password"

        },

        "event_handlers": {

            "max_processes": 20000,

  "wait_for_process" : "20",

            "document_changed": [

{

  "handler": "webhook",

                  "url": "http://webhook:4000/jsonvalidator/validdata",

                  "filter": "function(doc) { if (doc.type != 'nx') {  return true; } return false; }"  

}

             ] 

}

    }

}

}

Please reply, adamf. I have given you all the details regarding this issue.