Incomplete Pull Replication


#1

For several days we have been working with data from a production environment. All 380,000 records (630 MB) could easily be pushed into the database via sync-gateway. But the pull replication back to other clients (Windows and CentOS) stopped at 76500 units, and the replicator status remained busy. In addition, two documents were lost.
We can't find any errors in the log files. With all the other test data provided by Couchbase we had absolutely no problems.
Here are the last lines from the log:

INFO 2018-03-06T16:29:28.520 "LiteCore [Sync]: {DBWorker#2} {'0076498' #1-97234e2402d6a809f45f2e4a2227ac86adb078ae}"
INFO 2018-03-06T16:29:28.520 "LiteCore [Sync]: {DBWorker#2} {'0076501' #1-a9efcd335bedecf741a03361524578f08983ad4a}"
INFO 2018-03-06T16:29:28.520 "LiteCore [Sync]: {DBWorker#2} {' ' #1-55b37f8107a5a82cbae52ef1f1f378ec18ab66af}"
INFO 2018-03-06T16:29:28.520 "LiteCore [Sync]: {DBWorker#2} Inserted 23 revs in 1.33ms (17352/sec)"
DEBUG 2018-03-06T16:29:28.608 "No Error."
INFO 2018-03-06T16:29:28.608 "Level: Busy, 76499 of 76500 units and 76499 documents processed, 99.9987% done."
INFO 2018-03-06T16:29:32.306 "LiteCore [Sync]: {Repl#1} Saving remote checkpoint cp-LR8JjSze+gnjVVq1PD1RjPLevKg= with rev='0-5': {"remote":76214} …"
INFO 2018-03-06T16:29:32.346 "LiteCore [Sync]: {Repl#1} Successfully saved remote checkpoint cp-LR8JjSze+gnjVVq1PD1RjPLevKg= as rev='0-6'"
INFO 2018-03-06T16:29:32.346 "LiteCore [Sync]: {DBWorker#2} Saved local checkpoint cp-LR8JjSze+gnjVVq1PD1RjPLevKg= to db"

Is this a bug, or are there any restrictions or settings involved?
Any hints on what to do next?

Regards,
Alfred


#2

@weis,

What are the sync gateway logs saying?


#3

Hi,
thank you for the response. After a while we found out that we have problems with documents larger than about 180 KB. We have created an issue (with a test case):
https://github.com/couchbase/sync_gateway/issues/3363
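For anyone who wants to reproduce this locally, here is a minimal sketch of how we generated an oversized document for the test case. The helper name and the exact 180 KB cutoff are assumptions based on our observation, not anything from the Sync Gateway docs; it just builds a JSON document whose serialized size lands exactly at a target byte count:

```python
import json

def make_large_doc(doc_id, target_bytes):
    """Build a JSON-serializable document whose UTF-8 serialized
    size is exactly target_bytes (assuming target_bytes is large
    enough to hold the envelope). Hypothetical repro helper."""
    doc = {"_id": doc_id, "payload": ""}
    # Measure the envelope without the padding, then fill the rest
    # with ASCII 'x' (1 byte each, no JSON escaping needed).
    overhead = len(json.dumps(doc).encode("utf-8"))
    doc["payload"] = "x" * max(0, target_bytes - overhead)
    return doc

# A document just over the ~180 KB size where we saw pull replication stall.
doc = make_large_doc("big-0001", 180 * 1024 + 1)
```

We then PUT documents like this into the bucket and watched at which size the pull replication stopped making progress.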

Regards,
Alfred


#4

We have tested an updated build of Sync Gateway. It was the same bug as in https://github.com/couchbase/couchbase-lite-core/issues/457 (and the associated SG fix #3362). The bug is now fixed.
Thanks!