Couchbase Lite 2.0 Swift and Sync Gateway 1.5 Slow Pulling

It looks like when the database on the server side is large, the pull (as observed in the database change listener) arrives in much smaller chunks (down to single digits of changed documents per callback) and becomes very slow.

Is there any way we can control the chunk size (returned in the database change listener) and make it larger, as in hundreds or even thousands of documents at a time?

The goal of the DB change observer is to notify observers of any change to the DB. You typically would not want to coalesce changes, because you would likely miss important changes that your app should react to immediately (for instance, notifying the user when a document gets deleted, or when a document is updated after merging remote changes).
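For example, here is a minimal sketch of a database change listener that reacts to individual changes, assuming the 2.0 Swift API (the database name is illustrative). A deleted document shows up as a changed ID whose document can no longer be loaded:

import CouchbaseLiteSwift

let database = try Database(name: "mydb")   // illustrative name

// Called for every batch of changes, with the affected document IDs.
let dbToken = database.addChangeListener { change in
    for id in change.documentIDs {
        if database.document(withID: id) == nil {
            // The document was deleted (locally or by a pulled revision).
            print("Document \(id) was deleted")
        } else {
            print("Document \(id) was added or updated")
        }
    }
}
// Later: database.removeChangeListener(withToken: dbToken)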

Some options that I can think of:

  • Maybe instead of the DB observer, you can monitor the replicator status: check when it goes busy, and take action only when the replication status goes back to idle.

  • You could consider using live queries if you want to fine-tune the subset of documents/changes that you are interested in. This blog post gives you some examples of queries, and here is the corresponding Swift playground. You can try using count to determine whether the count exceeds a threshold, but since that threshold could keep changing, you may have to update the query with the new threshold (a sketch follows below).
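A minimal sketch of the live-query idea, counting documents via Meta.id (the database name and threshold are illustrative, not from your setup):

import CouchbaseLiteSwift

let database = try Database(name: "mydb")   // illustrative name

// Live query that counts all documents in the database.
let countQuery = QueryBuilder
    .select(SelectResult.expression(Function.count(Meta.id)))
    .from(DataSource.database(database))

let threshold = 1000                        // illustrative threshold
let queryToken = countQuery.addChangeListener { change in
    guard let row = change.results?.next() else { return }
    let count = row.int(at: 0)
    if count >= threshold {
        // React once the document count crosses the threshold.
        print("Document count reached \(count)")
    }
}
// Later: countQuery.removeChangeListener(withToken: queryToken)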

@priya.rajagopal After many experiments we found that the problem is this: the initial database changes come in reasonably fast, but after (we believe) all the documents (~1 million) are pulled down, the replicator change listener does not report the activity as stopped (we configured it as non-continuous); instead, the database change listener starts to report changes one or two at a time, and in the meantime CPU and memory usage goes up quickly. The database changes keep coming in in such small chunks even after we turned off the Wi-Fi, so it looks like it is regurgitating what it had already pulled in.

Does this ring a bell? Thanks

How do you believe that the documents were all replicated? You indicated that the replicator change listener does not go to idle, which is a bug if that is the case after pulling all the data, so we may have to do some more digging into that.

Can you print something like this in the replicator change callback to see whether all the data does in fact come back?
print("PushPull Replicator: \(s.progress.completed)/\(s.progress.total), error: \(String(describing: s.error)), activity = \(s.activity)")

That's odd. So sync has stopped and you still see changes reported?

Well, let's try printing what the replicator is reporting first and we can take it from there. (Please start with a clean local DB and then sync the 1M docs so we have a baseline.)

I put the log in place and it confirms that, as you suspected, the count has not reached the full total yet, and the number is still increasing. However, this does not explain why, after I turned off the Wi-Fi, both database and replicator changes kept coming in; even after more than 10 minutes it was still going.

2017-11-30 14:58:18:888 MyApp[9019:7433413] INFO couchbase replicator: 878875/1016000, error: nil, activity = busy
2017-11-30 14:58:18:932 MyApp[9019:7433413] INFO couchbase recorded 2 document changes
2017-11-30 14:58:19:003 MyApp[9019:7433413] INFO couchbase recorded 2 document changes
2017-11-30 14:58:19:088 MyApp[9019:7433413] INFO couchbase recorded 2 document changes
2017-11-30 14:58:19:115 MyApp[9019:7433413] INFO couchbase replicator: 878877/1016000, error: nil, activity = busy
2017-11-30 14:58:19:198 MyApp[9019:7433413] INFO couchbase recorded 2 document changes
2017-11-30 14:58:19:269 MyApp[9019:7433413] INFO couchbase recorded 3 document changes
2017-11-30 14:58:19:314 MyApp[9019:7433413] INFO couchbase replicator: 878882/1016000, error: nil, activity = busy
2017-11-30 14:58:19:396 MyApp[9019:7433413] INFO couchbase recorded 4 document changes
2017-11-30 14:58:19:536 MyApp[9019:7433413] INFO couchbase replicator: 878883/1016000, error: nil, activity = busy
2017-11-30 14:58:19:566 MyApp[9019:7433413] INFO couchbase recorded 4 document changes
2017-11-30 14:58:19:681 MyApp[9019:7433413] INFO couchbase recorded 4 document changes
2017-11-30 14:58:19:749 MyApp[9019:7433413] INFO couchbase replicator: 878886/1016000, error: nil, activity = busy
2017-11-30 14:58:19:806 MyApp[9019:7433413] INFO couchbase recorded 3 document changes

My guess is that CBL had already cached a lot of results, but I don't know why it is feeding them in at such a small and slow pace.

Also, it looks like the database changes and replicator changes are not necessarily in sync. When the sync started, replicator.status.progress.completed may remain 0 for a long time even though the database change listener has already recorded hundreds of changed document IDs.

By “turning off WiFi”, I presume you meant turning off all network connectivity (not sure if you were on a device with cellular).
But that should be fine; it just looks like the app was busy updating the DB during replication, so the logging messages were deferred until after the replication tapered off.

So, since your problem is that the database changes are coming in small batches: did you wait to see whether the replication goes idle and use that as an indication that everything is synced up?

If replication does not ever go idle, that is a separate issue.

Yes, turned off all network connectivity.

The trickling in of these super-small chunks took forever and did not finish even after hours, while the memory and CPU usage became very high.

OK, I am not as concerned about the trickling part: replication happens on a background thread, so it is going to be lower priority than tasks on the app's main thread and subject to when it gets scheduled.

But what bothers me is that it takes that long and never completes (assuming you were syncing over a reasonably high-speed network). I have filed an issue: https://github.com/couchbase/couchbase-lite-ios/issues/1970
Please update it with specifics:

  • The version of Swift and the DB build you are using
  • Specifics on the documents you are syncing: average document size, whether they have blobs, etc. would also be helpful