Document not included in push replication after removing conflicting revisions

mobile
java

#1

Hey all,

I’ve been working on adding conflict resolution to my app for a while now, and everything is working fine, except for the case where I don’t change the properties of the document and just delete all revisions that are in conflict. Specifically, this happens when all properties are the same in every revision map. The revisions are being removed correctly, as the same document doesn’t show up again in a query for all conflicts, but the resolved document isn’t pushed back up to the server.

Here’s my conflict resolution code:

public static boolean removeConflictingRevisions(Document document) {
    try {
        for (SavedRevision rev : document.getConflictingRevisions()) {
            if (rev.getId().equals(document.getCurrentRevision().getId())) {
                // Re-save the winning revision so a new, higher-generation
                // revision is created even when no properties change.
                UnsavedRevision newRev = rev.createRevision();
                newRev.setUserProperties(rev.getUserProperties());
                newRev.save();
            } else {
                // Tombstone every losing revision. allowConflict=true is
                // required because this adds a revision to a non-current branch.
                UnsavedRevision tombstone = rev.createRevision();
                tombstone.setIsDeletion(true);
                tombstone.save(true);
            }
        }
    } catch (CouchbaseLiteException e) {
        e.printStackTrace();
        return false;
    }
    return true;
}

#2

I can’t think of any reason why the changes wouldn’t be pushed. Do you get any warnings from the replicator? If you turn on Sync logging, maybe some of the messages will give a clue.
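For reference, in the Couchbase Lite 1.x Android API, Sync logging can be enabled with something like the following sketch (call it before starting the replication; the class and method names wrapping the call are hypothetical):

```java
import com.couchbase.lite.Manager;
import com.couchbase.lite.util.Log;

public class SyncLogging {
    public static void enableVerboseSyncLogs() {
        // Verbose replicator logging prints each change as it is
        // pushed or pulled, plus the _bulk_docs/_revs_diff requests.
        Manager.enableLogging(Log.TAG_SYNC, Log.VERBOSE);
    }
}
```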


#3

Hey Nils,

So it seems from both the Couchbase Lite and Sync Gateway logs that the changes are being replicated up to the server.

The Couchbase Lite logs include the following:

V/Sync: PusherInternal{https://my_url:4984/my_bucket, push, b4fee}: POSTing 2 revisions to _bulk_docs: <the correct information for the two docs>

And the Sync Gateway logs include the following:

21:25:52.384252 2017-09-11T21:25:52.384Z CRUD+: Invoking sync on doc "<doc_id>" rev 29-e02a2001ffd69d78ae45a379c5455179
21:25:52.384511 2017-09-11T21:25:52.384Z CRUD+: Saving old revision "<doc_id>" / "28-023a8ffdf9baaa9680c0ab341076f6c9" (166 bytes)
21:25:52.385038 2017-09-11T21:25:52.385Z CRUD+: Backed up obsolete rev "<doc_id>"/"28-023a8ffdf9baaa9680c0ab341076f6c9"
21:25:52.385231 2017-09-11T21:25:52.385Z CRUD+: SAVING #1334783
21:25:52.385948 2017-09-11T21:25:52.385Z CRUD: Stored doc "<doc_id>" / "29-e02a2001ffd69d78ae45a379c5455179"

However, when I then perform a pull replication on a different device, the two docs that appear to have had their conflicts resolved still appear in the list of documents returned by a conflicting-docs query.

Thanks!


#4

Well, look at the documents on the other device to see what revisions they have. And/or turn on Sync logging to see which revisions are being pulled.
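One way to dump that from the app is to walk the leaf revisions of the document, including deleted ones, and compare the trees on the two devices. A sketch against the Couchbase Lite 1.x Java API (the class name is hypothetical):

```java
import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Document;
import com.couchbase.lite.SavedRevision;

public class RevisionDump {
    // Print every leaf revision of a document, including tombstones,
    // so the revision trees on two devices can be compared.
    public static void dumpLeafRevisions(Document document) throws CouchbaseLiteException {
        for (SavedRevision rev : document.getLeafRevisions()) {
            System.out.println(rev.getId()
                    + " deleted=" + rev.isDeletion()
                    + " props=" + rev.getProperties());
        }
    }
}
```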


#5

Right, sorry for being a bit lazy with this.

So I think I understand what’s happening. First, here are my repro steps:

  1. Install the app on a device
  2. Start a pull replication
  3. Perform a conflicts query
  4. The two docs in conflict are returned
  5. Resolve those conflicts with the above code

At this point the new and deleted revs are definitely replicated up to the server correctly. I then perform the same steps on a second device, and once again the same docs are in conflict, and the same revision that was deleted on the previous device is the one causing the conflict. That said, its _removed property is set to true, so the delete is being properly replicated.

My question is, is this expected behaviour? I understand that tombstones need to be replicated to all devices, and are therefore kept in every database for 3 days, so this part makes sense, but it seems strange to me that deleted revs would cause a conflict?

Thanks!


#6

Well… If you have resolved the conflict correctly on a device, it shouldn’t show up as a conflict again on another device unless there is a local revision of the conflicting document on the other device resulting in a new conflict.
It would help to get some concrete sync logs and more detail on the revision tree and such, so we can see exactly what’s happening here.
In addition to the requested sync logs, can you also run a _raw query on Sync Gateway for the document in conflict?
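For the record, the _raw endpoint lives on Sync Gateway’s admin REST API; the request looks roughly like this (the hostname, database name, and default admin port 4985 are assumptions to adjust for your deployment):

```shell
# Fetch the raw document body plus its _sync metadata
# (revision tree, channels, flags) from the admin port.
curl -s "http://localhost:4985/my-db/_raw/<doc_id>"
```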

Curious, where did you get the “3 days” number from? I didn’t think there was any fixed time limit.

Assuming you mean the _deleted property?


#7

Hey Priya,

Ok, good to know that my understanding isn’t warped by anything.

Here’s the raw query results:

{
"_sync": {
    "rev": "31-cba600e2a01af501a608093a34419c20",
    "flags": 24,
    "sequence": 1334858,
    "recent_sequences": [
        820253,
        821933,
        821942,
        827785,
        1334491,
        1334781,
        1334783,
        1334856,
        1334858
    ],
    "history": {
        "revs": [
            "19-c31b481507064f2f30f1aae25da27641",
            "28-023a8ffdf9baaa9680c0ab341076f6c9",
            "8-1c27fcc3531a0330f23740b11a6ede71",
            "1-03c4291a8789d42e2fb081a4629dceec",
            "15-f9bff1d2fcdce3158ecb197b346bd07c",
            "26-1856b380fabb8f16d4d534c0e2012fba",
            "12-deddfaf2e0d38919e7154d325304cfab",
            "1-c67f2dd73a207c240cec6e71f5e5003b",
            "2-5fbe8954f8348cc3425c2cb536d6ee4a",
            "14-dc17bd0a0ab3a6d8928e40184ebe6b4f",
            "21-39319698d61f8299b712d043ac5bf356",
            "7-20e3736982468200c76f2b4c0c54d97a",
            "22-94283606598f3c080cbb05dc4b04971a",
            "31-cba600e2a01af501a608093a34419c20",
            "13-5113ee0f45b598030438ea4beabf1415",
            "3-b0176d698c738c05ccd5e04ee59eaf2f",
            "29-e02a2001ffd69d78ae45a379c5455179",
            "24-e18c125d2d9a5bc9885aaef5d088c768",
            "16-cce13d02b0eea6a87e1d2549e5d96f0c",
            "30-645668f1bbb52c682212cd73a36c125e",
            "17-9fbe7882a57a19063ad1c1630855f678",
            "27-a251e5a15bf9545e9707ddc2fd8cb57a",
            "20-098ee5f049e647b6da51cc2c52e686b0",
            "11-756950f134105dc24c7311653e3199c0",
            "25-fb8f889caf5186e8ab86e41816f5b9b7",
            "23-06fac88eadece90b6bd7f61cf87940ea",
            "2-6ea23606def1a0787a645bcba545787d",
            "9-3c81129357cbb3da2dfffc19619b66cf",
            "18-46d080af78a967f27328e360b72ec56a",
            "10-e52aa222137613db13d1d6a94501f796"
        ],
        "parents": [
            28,
            21,
            11,
            -1,
            9,
            24,
            23,
            -1,
            7,
            14,
            22,
            -1,
            10,
            19,
            6,
            8,
            1,
            25,
            4,
            16,
            18,
            5,
            0,
            29,
            17,
            12,
            3,
            2,
            20,
            27
        ],
        "deleted": [
            26
        ],
        "bodymap": {
            "15": "{\"balance\":0,\"facility_id\":\"5f9b25c5-4bff-46cc-83dc-0c7aee2a6e0a\",\"loans\":[],\"name\":\"AARON\",\"payments\":[],\"phone_number\":\"+254727347491\",\"type\":\"Patient\"}",
            "26": "{\"_deleted\":true,\"balance\":0,\"facility_id\":\"c3e5d002-a9b8-456c-b194-c1b4a7668ae9\",\"loans\":[],\"name\":\"Sammy Aaron Wilks\",\"payments\":[],\"phone_number\":\"+254727347491\",\"type\":\"Patient\"}"
        },
        "channels": [
            null,
            [
                "c3e5d002-a9b8-456c-b194-c1b4a7668ae9"
            ],
            null,
            [
                "c3e5d002-a9b8-456c-b194-c1b4a7668ae9"
            ],
            null,
            [
                "c3e5d002-a9b8-456c-b194-c1b4a7668ae9"
            ],
            null,
            null,
            null,
            null,
            null,
            null,
            null,
            [
                "c3e5d002-a9b8-456c-b194-c1b4a7668ae9"
            ],
            null,
            [
                "5f9b25c5-4bff-46cc-83dc-0c7aee2a6e0a"
            ],
            [
                "c3e5d002-a9b8-456c-b194-c1b4a7668ae9"
            ],
            null,
            null,
            [
                "c3e5d002-a9b8-456c-b194-c1b4a7668ae9"
            ],
            null,
            [
                "c3e5d002-a9b8-456c-b194-c1b4a7668ae9"
            ],
            null,
            null,
            null,
            null,
            [
                "c3e5d002-a9b8-456c-b194-c1b4a7668ae9"
            ],
            null,
            null,
            null
        ]
    },
    "channels": {
        "c3e5d002-a9b8-456c-b194-c1b4a7668ae9": null
    },
    "time_saved": "2017-09-12T11:24:12.196614982Z"
},
"balance": 0,
"facility_id": "c3e5d002-a9b8-456c-b194-c1b4a7668ae9",
"loans": [],
"name": "Sammy Aaron Wilks",
"payments": [],
"phone_number": "+254727347491",
"type": "Patient"
}

and here’s a more complete log record. I’ve only included the bottom 60 lines or so. Let me know if you need more:

09-12 20:06:38.143 4105-14278/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: Setting lastSequence to 4882 from(4830)
09-12 20:06:38.143 4105-14278/com.my_company.my_app V/Sync: setPaused: false
09-12 20:06:38.143 4105-14278/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: Setting lastSequence to 4883 from(4882)
09-12 20:06:38.143 4105-14278/com.my_company.my_app V/Sync: setPaused: false
09-12 20:06:38.145 4105-14278/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: Setting lastSequence to 4884 from(4883)
09-12 20:06:38.146 4105-14278/com.my_company.my_app V/Sync: setPaused: false
09-12 20:06:38.158 4105-14278/com.my_company.my_app V/Sync: setPaused: false
09-12 20:06:38.159 4105-14278/com.my_company.my_app V/Sync: setPaused: false
09-12 20:06:38.270 4105-14278/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: POSTing 2 revisions to _bulk_docs: [{balance=0, _rev=32-a9473f847b6d629c08eac9a7e61e6d49, payments=[], loans=[], name=Sammy Aaron Wilks, _revisions={start=32, ids=[a9473f847b6d629c08eac9a7e61e6d49, cba600e2a01af501a608093a34419c20]}, _id=+254727347491, type=Patient, facility_id=c3e5d002-a9b8-456c-b194-c1b4a7668ae9, phone_number=+254727347491}, {balance=-668.94, _rev=49-7cf06315d5de443fadaecc33b9d6d318, payments=[{_id=null, amount=80, date=1.474261346844E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1, date=1.47426239985E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1, date=1.474262401288E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1, date=1.474262401656E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1, date=1.474262401966E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1, date=1.474262402261E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1, date=1.474262402624E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1, date=1.474262402825E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=2, date=1.474262673812E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=201, date=1.474263051355E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=20, date=1.474263313554E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=35, date=1.474341122515E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1, date=1.474345222681E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=4, date=1.474345353915E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=10, date=1.474345365678E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=25, date=1.479808757924E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=2.29, 
date=1.479808765253E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=290, date=1.479896955765E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1000, date=1.481208416905E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1085.01, date=1.481271541027E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=1000, date=1.484907588936E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=3, date=1.484907632272E12, mpesa_confirmation=null, patient_id=null}, {_id=null, amount=10, date=1.497364350013E12, mpesa_confirmation=null, patient_id=0727347491}], loans=[{_id=null, amount=80, date_due=0, date_given=1.474261330077E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=210, date_due=0, date_given=1.474261435483E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=20, date_due=0, date_given=1.4742632854E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=50, date_due=0, date_given=1.474341019425E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=60, date_due=0, date_given=1.474345326864E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=5.99, date_due=0, date_given=1.474945724698E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=8.3, date_due=0, date_given=1.47699602524E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=243, date_due=0, date_given=1.479801391829E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=32, date_due=0, date_given=1.481101520803E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=1218, date_due=0, date_given=1.481103286342E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=235.01, date_due=0, date_given=1.481103482108E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=175, date_due=0, date_given=1.481203873109E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=7, date_due=0, date_given=1.481206754104E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=18, date_due=0, 
date_given=1.481266751984E12, patient_id=0727347491, sale_id=null}, {_id=null, amount=1400, date_due=0, date_given=1.481267950742E12, patien
09-12 20:06:38.274 4105-14278/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: Incrementing changesCount count from 0 by adding 2 -> 2
09-12 20:06:38.277 4105-14278/com.my_company.my_app D/Sync: [sendAsyncRequest()] POST => https://api.mycompany.com:4984/my-db/_bulk_docs
09-12 20:06:38.282 4105-14278/com.my_company.my_app V/Sync: com.couchbase.lite.replicator.RemoteRequest {POST, https://api.mycompany.com:4984/my-db/_bulk_docs}: RemoteRequest created, url: https://api.mycompany.com:4984/my-db/_bulk_docs
09-12 20:06:38.283 4105-14262/com.my_company.my_app V/Sync: com.couchbase.lite.replicator.RemoteRequest {POST, https://api.mycompany.com:4984/my-db/_bulk_docs}: RemoteRequest execute() called, url: https://api.mycompany.com:4984/my-db/_bulk_docs
09-12 20:06:38.367 4105-14278/com.my_company.my_app V/Sync: com.couchbase.lite.replicator.RemoteRequest {POST, https://api.mycompany.com:4984/my-db/_revs_diff}: RemoteRequest execute() finished, url: https://api.mycompany.com:4984/my-db/_revs_diff
09-12 20:06:38.791 4105-14262/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: Setting lastSequence to 4887 from(4884)
09-12 20:06:38.791 4105-14262/com.my_company.my_app V/Sync: setPaused: false
09-12 20:06:38.791 4105-14262/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: Setting lastSequence to 4888 from(4887)
09-12 20:06:38.791 4105-14262/com.my_company.my_app V/Sync: setPaused: false
09-12 20:06:38.791 4105-14262/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: POSTed to _bulk_docs
09-12 20:06:38.791 4105-14262/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: Incrementing completedChangesCount count from 0 by adding 2 -> 2
09-12 20:06:38.792 4105-14262/com.my_company.my_app V/Sync: com.couchbase.lite.replicator.RemoteRequest {POST, https://api.mycompany.com:4984/my-db/_bulk_docs}: RemoteRequest execute() finished, url: https://api.mycompany.com:4984/my-db/_bulk_docs
09-12 20:06:38.793 4105-14226/com.my_company.my_app D/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33} [fireTrigger()] => STOP_GRACEFUL
09-12 20:06:38.794 4105-14226/com.my_company.my_app V/Sync: [waitForPendingFutures()] END - thread id: 313
09-12 20:06:38.805 4105-14222/com.my_company.my_app D/Sync: firing trigger: STOP_GRACEFUL
09-12 20:06:38.806 4105-14222/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33} [onExit()] RUNNING => STOPPING
09-12 20:06:38.806 4105-14222/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33} [onEntry()] RUNNING => STOPPING
09-12 20:06:38.806 4105-14222/com.my_company.my_app D/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33} STOPPING...
09-12 20:06:38.806 4105-14222/com.my_company.my_app V/Sync: setPaused: false
09-12 20:06:38.807 4105-14222/com.my_company.my_app D/Sync: State transition: RUNNING -> STOPPING (via STOP_GRACEFUL).  this: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}
09-12 20:06:38.807 4105-14222/com.my_company.my_app V/Sync: Both RUNNING and STOPPING are ACTIVE, not notify  Replicator state change
09-12 20:06:38.807 4105-14391/com.my_company.my_app V/Sync: [waitForPendingFutures()] STARTED - thread id: 348
09-12 20:06:38.807 4105-14391/com.my_company.my_app D/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33} [fireTrigger()] => STOP_GRACEFUL
09-12 20:06:38.808 4105-14391/com.my_company.my_app V/Sync: [waitForPendingFutures()] END - thread id: 348
09-12 20:06:38.808 4105-14391/com.my_company.my_app D/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33} [fireTrigger()] => STOP_IMMEDIATE
09-12 20:06:38.808 4105-14391/com.my_company.my_app D/Sync: PusherInternal stop.run() finished
09-12 20:06:38.809 4105-14222/com.my_company.my_app D/Sync: firing trigger: STOP_GRACEFUL
09-12 20:06:38.810 4105-14222/com.my_company.my_app D/Sync: firing trigger: STOP_IMMEDIATE
09-12 20:06:38.810 4105-14222/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33} [onEntry()] STOPPING => STOPPED
09-12 20:06:38.810 4105-14222/com.my_company.my_app D/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: saveLastSequence() called. lastSequence: 4888 remoteCheckpoint: {_rev=0-1, lastSequence=2332}
09-12 20:06:38.810 4105-14222/com.my_company.my_app D/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: start put remote _local document.  checkpointID: c9d33b63ede2f1e0a5accefe88f168aaa0ec1139 body: {_rev=0-1, lastSequence=4888}
09-12 20:06:38.810 4105-14222/com.my_company.my_app D/Sync: [sendAsyncRequest()] PUT => https://api.mycompany.com:4984/my-db/_local/c9d33b63ede2f1e0a5accefe88f168aaa0ec1139
09-12 20:06:38.811 4105-14222/com.my_company.my_app V/Sync: com.couchbase.lite.replicator.RemoteRequest {PUT, https://api.mycompany.com:4984/my-db/_local/c9d33b63ede2f1e0a5accefe88f168aaa0ec1139}: RemoteRequest created, url: https://api.mycompany.com:4984/my-db/_local/c9d33b63ede2f1e0a5accefe88f168aaa0ec1139
09-12 20:06:38.839 4105-14392/com.my_company.my_app V/Sync: [waitForPendingFutures()] STARTED - thread id: 349
09-12 20:06:38.846 4105-14223/com.my_company.my_app V/Sync: com.couchbase.lite.replicator.RemoteRequest {PUT, https://api.mycompany.com:4984/my-db/_local/c9d33b63ede2f1e0a5accefe88f168aaa0ec1139}: RemoteRequest execute() called, url: https://api.mycompany.com:4984/my-db/_local/c9d33b63ede2f1e0a5accefe88f168aaa0ec1139
09-12 20:06:39.091 4105-14392/com.my_company.my_app D/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33} [fireTrigger()] => STOP_GRACEFUL
09-12 20:06:39.091 4105-14392/com.my_company.my_app V/Sync: [waitForPendingFutures()] END - thread id: 349
09-12 20:06:39.103 4105-14223/com.my_company.my_app D/Sync: com.couchbase.lite.replicator.ReplicationInternal$8@b52da8: put remote _local document request finished.  checkpointID: c9d33b63ede2f1e0a5accefe88f168aaa0ec1139 body: {_rev=0-1, lastSequence=4888}
09-12 20:06:39.103 4105-14223/com.my_company.my_app D/Sync: com.couchbase.lite.replicator.ReplicationInternal$8@b52da8: saved remote checkpoint, updating local checkpoint. RemoteCheckpoint: {_rev=0-2, lastSequence=4888}
09-12 20:06:39.104 4105-14223/com.my_company.my_app V/Sync: com.couchbase.lite.replicator.RemoteRequest {PUT, https://api.mycompany.com:4984/my-db/_local/c9d33b63ede2f1e0a5accefe88f168aaa0ec1139}: RemoteRequest execute() finished, url: https://api.mycompany.com:4984/my-db/_local/c9d33b63ede2f1e0a5accefe88f168aaa0ec1139
09-12 20:06:39.108 4105-14222/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: clearDbRef() called
09-12 20:06:39.113 4105-14222/com.my_company.my_app V/Sync: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}: clearDbRef() setting db to null
09-12 20:06:39.114 4105-14222/com.my_company.my_app D/Sync: State transition: STOPPING -> STOPPED (via STOP_IMMEDIATE).  this: PusherInternal{https://api.mycompany.com:4984/my-db/, push, c9d33}
09-12 20:06:39.116 4105-14222/com.my_company.my_app D/Sync: firing trigger: STOP_GRACEFUL
09-12 20:06:39.163 4105-14222/com.my_company.my_app D/SyncManager: DONE SYNC: 1159075
09-12 20:07:31.368 4105-15080/com.my_company.my_app D/FA: Logging event (FE): _e, Bundle[{_o=auto, _et=1210987, _sc=SyncActivity, _si=-5036193825058194409}]
09-12 20:07:32.032 2599-15081/com.google.android.gms V/FA-SVC: Logging event: origin=auto,name=user_engagement(_e),params=Bundle[{firebase_event_origin(_o)=auto, engagement_time_msec(_et)=1210987, firebase_screen_class(_sc)=SyncActivity, firebase_screen_id(_si)=-5036193825058194409}]
09-12 20:07:32.077 2599-15081/com.google.android.gms V/FA-SVC: Event recorded: Event{appId='com.my_company.my_app', name='user_engagement(_e)', params=Bundle[{firebase_event_origin(_o)=auto, engagement_time_msec(_et)=1210987, firebase_screen_class(_sc)=SyncActivity, firebase_screen_id(_si)=-5036193825058194409}]}
09-12 20:07:32.171 1337-1337/? E/EGL_emulation: tid 1337: eglCreateSyncKHR(1881): error 0x3004 (EGL_BAD_ATTRIBUTE)

I read it somewhere in one of the Couchbase blog posts. I don’t remember exactly where. I’m guessing it was more of a guideline? While we’re on the subject, do deleted docs persist until database compaction happens?

Yeah, although I also saw _removed referenced in the Couchbase Lite code. Is it deprecated?


#8

Ah, this explains it. _removed is not a deletion, it’s a marker that the client has lost access to that document due to an access control change. What’s happening is that the client doesn’t know that this revision is also a deletion; so it still sees the document as being in conflict. That’s a bug — it’s new to me, and I’m not sure what the best way to fix it is.

Just to make sure: I assume there is a rule in your sync function that is causing the other client’s user to lose access to the document, based on the change the first client made?


#9

Wait, was _removed a typo? If so, I take back what I said above. Can you confirm, please?

I understand that tombstones need to be replicated to all devices, and are therefore kept in all database for 3 days

I don’t know whether there is an automatic purge of tombstones, but I don’t work on SG anymore. @adamf, can you confirm?


#10

Sorry! My sync function is really simple, and there’s nothing in it that would change which users can access documents. All documents are assigned channels based on the facility they belong to (our permissions structure is based around a store-type model, where all items in the store are owned by the store, and users have various permissions within the store/facility). There is currently no way to transfer ownership of a document.

I definitely did see the _removed field somewhere, but I’d have to confirm exactly where. Based on my above comment I’d guess that this probably isn’t it.


#11

This appears to be from the first device; the log messages are about pushing, not pulling. But you’ve already confirmed that the revisions all end up on the server correctly. So the problem must be with the pull from the second device. Could you post a log from that when pulling the changes that were made?


#12

The problem is that the docs only come up in the conflicts feed on the first pull replication on a device that doesn’t have any existing data on it. The first pull replication involves over 2000 documents, and I have no way of telling where the ones I’m concerned with are. After the first pull replication of the full data set for the user on any device, the conflict no longer comes up for that device. However, if I were to clear all of the data for the app on that device and perform another pull replication for that user, the conflict would show up again.


#13

Also, mostly unrelated, but if you create a document, and then delete it before performing a push replication, is that document still included in the first push replication, or does the system know that it hasn’t been replicated and just permanently delete it?


#14

I don’t understand this statement. What’s the difference between “the first pull replication of the full data set” and “clear all of the data for the app on that device, and perform another pull replication for that user”? Can you break this down into a clear series of steps to reproduce?


#15

Those are the repro steps. The ones I would add are:

  6. Push the resolved changes to the server
  7. Perform another pull replication - the two docs that were in conflict on the first pull no longer show up as in conflict
  8. Clear all of the data for the app, or find a new device
  9. Perform another pull replication - this time the two docs that were originally in conflict once again show up in conflict. It’s always the same revision causing the conflict: getConflictingRevisions() always returns the current revision and 3-b0176d698c738c05ccd5e04ee59eaf2f.

To put them in context, the conflict only occurs when there is no data on the device. After the first pull replication, the conflict does not reappear on subsequent pull replications unless the app data is cleared again, which is effectively a fresh install.

There isn’t one; my mistake for adding unnecessary detail.


#16

Presumably your conflict resolution code deleted that revision 3-b0 since it’s not current. And then that deletion revision (4-something) got pushed to the server, since you said that the push is successful.

So then when a fresh client pulls the db from scratch, that deletion revision 4-something isn’t getting pulled? Can you confirm that the database only has revision 3-b0 and not the 4-something created by the other client?


#17

I’m not sure if this is the best way, but I checked the revision history list with document.getRevisionHistory(), and it only holds revisions 12 and later (I assume because older revisions were pruned? This is a leftover conflict from previous versions of our app that didn’t include conflict resolution).

Also, circling back to this: document.getConflictingRevisions().get(0).getProperties() gives me the property _removed = true (index 0 is the 3-b0 revision).


#18

Hey Jens,

Sorry to prod, but we’re getting ready to release the next version of our app, and it would be great to have this resolved.

Thanks!


#19

OK, so you are removing documents from channels. So that makes what I said earlier valid again.

CBL probably shouldn’t be treating a _removed doc as contributing to a conflict. So you can ignore these for purposes of conflict resolution.
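Following that suggestion, the resolution code from post #1 could skip any conflicting revision whose body is just the access-removal stub. A minimal sketch (the class and helper names are hypothetical; the map is assumed to be what SavedRevision.getProperties() returns):

```java
import java.util.Map;

// Hypothetical helper for filtering Sync Gateway "removed" stubs out of
// conflict resolution. A revision whose body carries "_removed": true is a
// channel-access marker, not a real conflicting edit, and can be skipped.
public final class ConflictFilter {

    private ConflictFilter() {
    }

    public static boolean isChannelRemoval(Map<String, Object> revProperties) {
        return Boolean.TRUE.equals(revProperties.get("_removed"));
    }
}
```

In the resolution loop, a revision for which isChannelRemoval(rev.getProperties()) returns true would simply be ignored rather than tombstoned.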


#20

Thanks Jens!

The reason I didn’t think that docs were switching channels is because my sync function is really simple:

function (doc, oldDoc) {
    if (getType() == "Facility") {
        channel(doc._id);
    } else if (getType() != "User") {
        channel(doc.facility_id);
    }

    channel(doc.channels);

    function getType() {
        return (isDelete(doc) ? (oldDoc != null ? oldDoc.type : "None") : doc.type);
    }
}

My only guess for how this happened is that it’s a legacy bug from when I was changing this stuff, because the facility ID should never change. Just to confirm, am I missing anything in what my sync function is doing?