LiveQuery returning old revisions as (additional) documents after view index has been closed and reopened

I’m using CBL-ios v1.3.1 with an encrypted ForestDB store and I’m seeing some weird behavior from a LiveQuery/view.
What I’m doing is (a rough code sketch follows the list):

  1. Create a CBLView
  2. Open a UITableViewController which opens a LiveQuery on that view
  3. Edit a record in a modal view controller on top of the table
  4. After closing the modal edit view controller, the changes are shown in the table view like they should
  5. Wait a minute (literally!)
  6. The underlying index gets closed
  7. Reload the view controller (which stops/dismisses the old LiveQuery and creates a new one for the view, which in turn reopens the index)
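
Roughly, steps 1, 2 and 7 look like this (view name, key layout and property names are just illustrative, not my actual code):

    // Step 1: define the view (compound/array key).
    NSError *error;
    CBLDatabase *db = [[CBLManager sharedInstance] databaseNamed:@"mydb" error:&error];
    CBLView *view = [db viewNamed:@"byTypeAndTitle"];
    [view setMapBlock:MAPBLOCK({
        if (doc[@"Titel"])
            emit(@[doc[@"type"], doc[@"Titel"]], nil);
    }) version:@"1"];

    // Step 2: in the UITableViewController, open a LiveQuery on that view.
    self.liveQuery = [[view createQuery] asLiveQuery];
    [self.liveQuery addObserver:self forKeyPath:@"rows" options:0 context:NULL];
    [self.liveQuery start];

    // Step 7: "reload" the controller: stop the old LiveQuery and create a
    // new one, which reopens the (previously closed) view index.
    [self.liveQuery removeObserver:self forKeyPath:@"rows"];
    [self.liveQuery stop];
    self.liveQuery = [[view createQuery] asLiveQuery];
    [self.liveQuery addObserver:self forKeyPath:@"rows" options:0 context:NULL];
    [self.liveQuery start];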

This works well for a while, with or without restarting the app, but from a certain point in time on it always fails (though only after steps 1-7, not directly after the app is restarted) in the following way:
The iterator returns additional documents which are older revisions of documents in the index, from when the key properties were still different. After restarting the app, the results are OK again until the index is closed and reloaded for the first time. When I comment out the performSelector line in closeIndexSoon, preventing the index from being closed and reloaded, everything stays fine.

The strange thing is that the MapReduceIndex shows the correct rowCount in the logs, but the iterator returns additional documents (the “IndexEnumerator: found key=…” log line occurs more often than rowCount would allow).

I first thought it might be a threading problem, since the LiveQueries run on a background thread and I’m also using “normal” queries on the main thread. But as far as I’ve been able to track it down so far, I don’t have any view that is used both by a normal query AND by a LiveQuery. However, when the first closing of indexes happens (60 seconds after the app was started), in most cases several indexes get closed within a short period of time, including some used by normal queries on the main thread and some used by LiveQueries on the CBL background thread.
This behavior is not limited to one single LiveQuery; it looks like it can happen to any of them (although so far I have only noticed these problems on views with compound (array) keys). One precondition is that at least one document in the view has an older revision in which the key properties were different.

To me it looks like either the index or the database gets corrupted in some way, and from then on the error occurs every time the index is reloaded at runtime. But I have no idea what corrupts the database, or why.

Any suggestions and help are warmly welcomed… TIA!

The first question to ask is whether you’re following the thread-safety rules in your code — not using the same CBL objects from multiple threads (or queues).
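
In CBL 1.x that roughly means giving each thread (or dispatch queue) its own CBLManager and only using the database/view/query objects obtained from that manager on that thread. Something along these lines (queue and database names are just placeholders):

    // Each thread/queue gets its own CBLManager; -copy exists for this purpose.
    CBLManager *bgManager = [[CBLManager sharedInstance] copy];
    dispatch_queue_t cblQueue = dispatch_queue_create("com.example.cbl", DISPATCH_QUEUE_SERIAL);
    bgManager.dispatchQueue = cblQueue;

    dispatch_async(cblQueue, ^{
        NSError *error;
        CBLDatabase *bgDb = [bgManager databaseNamed:@"mydb" error:&error];
        // bgDb, its views and its queries must only ever be touched on cblQueue.
    });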

If your code is safe, then this seems like a bug, so please file an issue on Github. If you could show examples of the “additional documents which are older revisions of documents in the index”, that would be great.

Thanks for the response! Your suggestion that it is most probably a threading problem made me invest two more days in investigating thread access to all my CBL objects.
I was indeed using the shared CBLManager instance (normally used from the main thread in my app) from a background thread. The enclosing method should have been called from the main thread but wasn’t, because of a bug a few steps higher up the call hierarchy.
Activating lots of log messages (DatabaseVerbose + QueryVerbose + ViewVerbose) didn’t take me much closer to the solution, because the thread violation occurred long before the first visible effects (and indeed, the code where the effects showed up was not violating the threading rules).
The key to finding the problem was adding a quick-and-dirty checkThread method to CBLManager, CBLDatabase, CBLView and CBLQuery, which checks whether the calling thread is the thread the object was created on and raises an NSException if not, and calling that method from all the main methods of these classes (sketched below).
So maybe such a “thread-safety enforcement mode” would help other people facing threading (or just inexplicable) problems like mine? What do you think?
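
The check itself is nothing more than this (simplified, standalone sketch of what I patched into those classes):

    // Remembers the creating thread and raises if any later call comes from
    // a different thread; patched (in spirit) into the main methods of
    // CBLManager, CBLDatabase, CBLView and CBLQuery.
    @interface ThreadAffinityCheck : NSObject
    @property (nonatomic, strong, readonly) NSThread *owningThread;
    - (void)checkThread;
    @end

    @implementation ThreadAffinityCheck

    - (instancetype)init {
        if ((self = [super init]))
            _owningThread = [NSThread currentThread];
        return self;
    }

    - (void)checkThread {
        if ([NSThread currentThread] != _owningThread)
            [NSException raise:NSInternalInconsistencyException
                        format:@"Object created on %@ but used on %@",
                               _owningThread, [NSThread currentThread]];
    }

    @end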

Unfortunately, the problem has reoccurred, and this time I’m quite sure no CBL objects are being used on the wrong thread.
Over the next days or weeks I will again try to pin down the problem or write a test case that reproduces it…

In the meantime, one question: is it safe to update documents that a view covers (especially to update fields which are part of the view’s key array) from one thread (e.g. the main thread) while the CBLView object “lives” on another thread (the CBL background thread)?
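
To make the question concrete, this is the pattern I mean (document ID, property and view names are placeholders; the background side has its own CBLManager copy as per the thread-safety rules):

    // Main thread: update a document whose "Titel" property is part of the
    // view's compound key.
    NSError *error;
    CBLDatabase *mainDb = [[CBLManager sharedInstance] databaseNamed:@"mydb" error:&error];
    CBLDocument *document = [mainDb documentWithID:@"some-doc-id"];
    [document update:^BOOL(CBLUnsavedRevision *rev) {
        rev[@"Titel"] = @"Rocky234567";
        return YES;
    } error:&error];

    // CBL background thread (separate CBLManager copy): the CBLView and the
    // LiveQuery for the same view name live over here.
    // CBLView *bgView = [bgDb viewNamed:@"byTypeAndTitle"];
    // CBLLiveQuery *liveQuery = [[bgView createQuery] asLiveQuery];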

I managed to forestdb_dump a corrupted view index file (see below); the third and fourth Doc ID entries are the same document, but in different revisions. Any idea how this might happen? The Doc IDs look interesting to me (they start and end with the same byte sequences but differ in the middle), but I don’t understand the ForestDB storage format well enough to interpret this.
I’m currently not able to reproduce the problem in a unit test case, which is why I’m not filing it as a bug yet.

DB header info:
    BID: 63 (0x3f, byte offset: 258048)
    DB header length: 88 bytes
    DB header revision number: 21
    DB file version: ForestDB v2.x format
    HB+trie root BID: not exist
    Seq B+tree root BID: not exist
    Stale B+tree root BID: 5, 128-byte subblock #0 (0x20000000000005, byte offset: 20480)
    DB header BID of the last WAL flush: not exist
    # documents in the main index: 0, 0deleted / in WAL: 4 (insert), 0 (remove)
    # live index nodes: 0 (0 bytes)
    Total document size: 1981 bytes, (index: 0 bytes, WAL: 1981 bytes)
    # KV stores: 2
      KV store name: default
      # documents in the main index: 0, 0deleted / in WAL: 0 (insert), 0 (remove)
      # live index nodes: 0 (0 bytes)
      Total document size: 0 bytes
      Last sequence number: 0

      KV store name: ed0f921df4066fc85e7df1506b9b354fdfbf0abc45f5a0f910268ff61dca89e0
      # documents in the main index: 0, 0deleted / in WAL: 4 (insert), 0 (remove)
      # live index nodes: 0 (0 bytes)
      Total document size: 0 bytes
      Last sequence number: 3

Doc ID: (hex)
        01                                                 .
    KV store name: ed0f921df4066fc85e7df1506b9b354fdfbf0abc45f5a0f910268ff61dca89e0
    Sequence number: 3
    Byte offset: 250837
    Indexed by WAL
    Length: 1 (key), 0 (metadata), 94 (body)
    Compressed body size on disk: 92
    Status: normal
    Metadata: (null)
    Body: @X@

Doc ID: (hex)
        06 08 49 44 5f 5f 38 25  3c 5e 3e 08 5e 5e 4d 2b   ..ID__8%<^>.^^M+
        47 37 5d 58 39 45 2d 5d  00                        G7]X9E-].
    KV store name: ed0f921df4066fc85e7df1506b9b354fdfbf0abc45f5a0f910268ff61dca89e0
    Sequence number: 2
    Byte offset: 250745
    Indexed by WAL
    Length: 25 (key), 0 (metadata), 25 (body)
    Compressed body size on disk: 27
    Status: normal
    Metadata: (null)
    Body: A�v�#

Doc ID: (hex)
        07 07 06 52 4b 33 43 5f  00 01 00 06 08 49 44 5f   ...RK3C_.....ID_
        5f 38 25 3c 5e 3e 08 5e  5e 4d 2b 47 37 5d 58 39   _8%<^>.^^M+G7]X9
        45 2d 5d 00 00                                     E-]..
    KV store name: ed0f921df4066fc85e7df1506b9b354fdfbf0abc45f5a0f910268ff61dca89e0
    Sequence number: 2
    Byte offset: 49152
    Indexed by WAL
    Length: 37 (key), 1 (metadata), 786 (body)
    Compressed body size on disk: 789
    Status: normal
    Metadata: ?
    Body: {"_id":"-nKyyE0GXH-XXp6mexUfl8x","_rev":"1-6f4e405239a94c7f07dedbef48e500ae","Titel":"Rocky","fbf-signature":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzUxMiJ9.eyJfc2VxdWVuY2UiOjYyLCJfaWQiOiItbkt5eUUwR1hILVhYcDZtZXhVZmw4eCIsInR5cGUiOiItZ3JhZ0JnQUtYZzJuWDZuMVpEMXdNdyIsImZiZi1zaWduYXR1cmUtdXNlcklkIjoiOGY4MjBhMDItNzg5YS00MjY5LWEwZjEtODI2OTdmNzI4ZTZkIiwiVGl0ZWwiOiJSb2NreSIsIlNwcmFjaGVuIjpbXX0.Xykoq9SQs-HaWZ_eMw-Hr5MwzuNzJZ50d21732ssefOHVRUWMKIzjKTwe8KNHy653wmY4LjblfbwBiCWdUT28VVnPWNCo9fGP532sga5bDFhxbO0zH-UBgmRWLvsrznRVe4hgu8Si5FV7Vb3szXTAGho1Xp7z_YstVDn6cWZLmYUhtv_BG_RlpCt8JSCpmjSIBZzXddA-7lF1dGpjf13885gggFyuv8ebKQcMO8u6qAsGFk9ukUJKoR5ujhE-s6UDCM5Ps63xCdRq9KNr9TCla2Hi6Ze1DU6HFtdYI9AzwvbAiOOr47Y5wu9ygqzt5IwWm1GQzJwQG6eaQT6xYxJRQ","type":"-gragBgAKXg2nX6n1ZD1wMw","_local_seq":63,"Sprachen":[]}

Doc ID: (hex)
        07 07 06 52 4b 33 43 5f  27 28 29 2a 2b 2c 00 01   ...RK3C_'()*+,..
        00 06 08 49 44 5f 5f 38  25 3c 5e 3e 08 5e 5e 4d   ...ID__8%<^>.^^M
        2b 47 37 5d 58 39 45 2d  5d 00 00                  +G7]X9E-]..
    KV store name: ed0f921df4066fc85e7df1506b9b354fdfbf0abc45f5a0f910268ff61dca89e0
    Sequence number: 1
    Byte offset: 249856
    Indexed by WAL
    Length: 43 (key), 1 (metadata), 800 (body)
    Compressed body size on disk: 805
    Status: normal
    Metadata: ^
    Body: {"_id":"-nKyyE0GXH-XXp6mexUfl8x","_rev":"7-9095ac0c9494c0b5252c1ad46367f49b","Titel":"Rocky234567","fbf-signature":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzUxMiJ9.eyJfc2VxdWVuY2UiOjkzLCJfaWQiOiItbkt5eUUwR1hILVhYcDZtZXhVZmw4eCIsIlRpdGVsIjoiUm9ja3kyMzQ1NjciLCJmYmYtc2lnbmF0dXJlLXVzZXJJZCI6IjhmODIwYTAyLTc4OWEtNDI2OS1hMGYxLTgyNjk3ZjcyOGU2ZCIsInR5cGUiOiItZ3JhZ0JnQUtYZzJuWDZuMVpEMXdNdyIsIlNwcmFjaGVuIjpbXX0.F_b37G2DCH33d8ef4iJAoN9C5AanfcgoRgUprp8lXv5AKrePwZNDcQpK8dQCmlUy7XB7hYV5X6nNLqWd8vWG6CwZhIZxT57IxhzZDEvq9RHzCceHN5x7j-xtktftLlKDUGSqQQC_3s7sWo0wNe9hVjffiABV3RahFUkdQyQFLjtgeEJIyEy0p9M7UmXwyW8Uea9hNfPRYkaX4vJ1A_2NaOPZGI1gWMM-vRNTM2vPd_utqDSqVJ71-LHTHiVA2V0pcTqx7mXi-smEEIRC5CDL-G9WJ_u-XYY7aTkbbx60OxC4LJeQLQm9ZrGx_SavEmC-xyLpSdYMPpMeic67wNyUaw","type":"-gragBgAKXg2nX6n1ZD1wMw","_local_seq":94,"Sprachen":[]}

I was finally able to reproduce the problem in a test case just now, and have filed an issue on Github along with the test case.
The main “ingredients” for the problem are (a rough sketch follows the list):

  • updating a key property of a document in the view
  • updating the mapBlock for the view with a different version string
  • using the view on a background thread (e.g. a LiveQuery)
  • closing the view index on a background thread
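
A rough outline of the failing sequence (names, keys and values are simplified placeholders; the index closing/reopening and the background thread are left out here):

    NSError *error;
    CBLDatabase *db = [[CBLManager sharedInstance] databaseNamed:@"testdb" error:&error];
    CBLView *view = [db viewNamed:@"byTypeAndTitle"];

    // Initial map block and version.
    [view setMapBlock:MAPBLOCK({
        emit(@[doc[@"type"], doc[@"Titel"]], nil);
    }) version:@"1"];

    // Create a document and index it once.
    CBLDocument *document = [db createDocument];
    [document putProperties:@{@"type": @"movie", @"Titel": @"Rocky"} error:&error];
    [[view createQuery] run:&error];

    // Update a key property of the document.
    [document update:^BOOL(CBLUnsavedRevision *rev) {
        rev[@"Titel"] = @"Rocky234567";
        return YES;
    } error:&error];

    // Re-register the map block with a *different* version string, which marks
    // the index as outdated and forces a rebuild.
    [view setMapBlock:MAPBLOCK({
        emit(@[doc[@"type"], doc[@"Titel"]], nil);
    }) version:@"2"];

    // In the real app the next query runs as a LiveQuery on the CBL background
    // thread after the view index has been closed and reopened; the enumerator
    // then also returns a row for the old revision.
    CBLQueryEnumerator *rows = [[view createQuery] run:&error];
    for (CBLQueryRow *row in rows)
        NSLog(@"key=%@ docID=%@", row.key, row.documentID);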

EDIT: Btw, I was actually already using the newest version on the release/1.4.0 branch, not the version tagged 1.3.1. CBL_VERSION_STRING is still set to 1.3.1 on that branch, which misled me.