Currently _revs_diff and _bulk_get fetch 100 documents or revision diffs per call. We have around 50k+ documents, so a one-shot replication, a _revs_diff pass after replacing the database, or a _revs_diff pass after switching to peer-to-peer takes a lot of time.
We want to reduce this time. The app we are developing will be exclusive to the device, and few or no other apps will be running on the tablet. In that case we want to increase INBOX_CAPACITY, INSERTION_BATCHER_CAPACITY, and MAX_REVS_TO_GET_IN_BULK.
How much change is required in that case? Is it only a change in PullerInternal and ReplicationInternal?
What is the connection between MAX_REVS_TO_GET_IN_BULK and INSERTION_BATCHER_CAPACITY in PullerInternal and INBOX_CAPACITY in ReplicationInternal?
Would increasing them help on a high-end Android tablet? The network will be a local LAN 90 percent of the time, with an on-premise server and Sync Gateway available on the local LAN.
Will this issue be resolved in Couchbase Lite 2.0 with the new protocol?
Why do you think that increasing these constants will improve performance?
Have you profiled replication to see where time is being spent?
Currently we have around 30k documents for one app. When we replace the database, it runs _revs_diff for all the documents, which comes to around 700+ calls. I want to reduce this number.
So if we can increase the inbox capacity from 100 to 500, that would give around 700/5 = 140 calls. The best part is that in 98% of these calls no data comes back, because the data was already there; the walkthrough was only required because the private and public UUIDs had changed.
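The arithmetic above is just a ceiling division: each _revs_diff request carries at most one inbox's worth of revisions, so the call count is ceil(revisions / capacity). A minimal sketch (the method name and the 70,000-revision figure are illustrative, not from the Couchbase Lite source):

```java
// Illustrative estimate of how inbox capacity affects the number of
// _revs_diff round trips. Not real replicator code.
public class RevsDiffBatchEstimate {

    // Each request carries at most 'inboxCapacity' revisions, so the
    // replicator needs ceil(totalRevs / inboxCapacity) calls.
    static int revsDiffCalls(int totalRevs, int inboxCapacity) {
        return (totalRevs + inboxCapacity - 1) / inboxCapacity;
    }

    public static void main(String[] args) {
        System.out.println(revsDiffCalls(70000, 100)); // 700 calls at the default capacity
        System.out.println(revsDiffCalls(70000, 500)); // 140 calls at a capacity of 500
    }
}
```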
This walkthrough via _revs_diff also takes time in peer-to-peer sync, which is the worst case for us: if the network drops between operations, two users have to wait 2-4 minutes for the walkthrough before the sync happens.
And if these numbers were chosen to support lower-configuration devices, there should be a configuration option so they can be tuned to the app's requirements and network.
Maybe I am going in a completely wrong direction, but I need to reduce the walkthrough time in peer-to-peer sync.
So if we can increase the inbox capacity from 100 to 500, that would give around 700/5 = 140 calls.
Sure, but the overhead of an HTTP call isn’t that high once the socket is already open, and the calls are handled asynchronously. Packing more data into a single request would also increase latency, because it takes longer for the request to be delivered. And the more revisions that get looked up in a single _revs_diff call, the longer the replicator holds the SQLite database lock, which can block UI threads.
You can of course modify them in the source code. We haven’t made them configurable through the API, because it makes our customer support more complex if customers might be using unknown custom values.
I tried to do so, but no matter what, the inbox capacity in processInbox is always 100.
It receives a revision list, and the size of that list is always 100. I haven’t been able to find out where that value comes from. Could you help me track it down?
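A possible explanation for the fixed batch size, sketched below with a simplified synchronous batcher (this is loosely modeled on the Batcher in couchbase-lite-java-core, not the real implementation, and the class and method names are illustrative): the batcher flushes as soon as its queue reaches the capacity it was constructed with, so every full batch handed to processInbox is exactly that size. If any copy of the constant (or a prebuilt jar on the classpath) still supplies 100 to the constructor, raising the constant elsewhere has no effect.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified, synchronous stand-in for the replicator's Batcher.
// The real one defers flushes to a scheduler; the capacity behavior
// illustrated here is the relevant part.
public class Batcher<T> {
    private final int capacity;
    private final Consumer<List<T>> processor;
    private final List<T> inbox = new ArrayList<>();

    public Batcher(int capacity, Consumer<List<T>> processor) {
        this.capacity = capacity;
        this.processor = processor;
    }

    public void queue(T item) {
        inbox.add(item);
        // Flush as soon as the queue is full: every full batch is
        // exactly 'capacity' items, never more.
        if (inbox.size() >= capacity) {
            flush();
        }
    }

    public void flush() {
        if (inbox.isEmpty()) return;
        processor.accept(new ArrayList<>(inbox));
        inbox.clear();
    }

    public static void main(String[] args) {
        List<Integer> batchSizes = new ArrayList<>();
        Batcher<Integer> batcher = new Batcher<>(100, batch -> batchSizes.add(batch.size()));
        for (int i = 0; i < 250; i++) {
            batcher.queue(i);
        }
        batcher.flush(); // drain the 50-item remainder
        System.out.println(batchSizes); // [100, 100, 50]
    }
}
```

So when debugging, it may help to check every call site that constructs the batcher, not just the constant's declaration.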
Could we have three configuration presets, low, mid, and high end? That would keep the ecosystem small.