Problem with deletion of a large number of documents using TTL/expiration

Yeah, I tried this in the beginning, and just tried again. It is mostly deleting the documents, but some documents still remain in the subsequent query. Would Eventing give me guaranteed results if I went that route? This .remove method does not seem to be working with full accuracy.

Couchbase indexes are eventually consistent.
Since you are issuing async calls, you should wait for them to finish and then issue the N1QL query with REQUEST_PLUS to get the next list of documents. Also, I am not sure how you are getting the next list of documents. Are you using ORDER BY/LIMIT/OFFSET?
If you are using multiple threads, you need to synchronize them.
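For example, if the deletes are issued asynchronously, wait for the whole batch to complete before running the next REQUEST_PLUS query. A minimal sketch of that pattern using plain asyncio (the `remove` coroutine here is a hypothetical stand-in for an async SDK remove call, not the Couchbase API):

```python
import asyncio

async def remove(key):
    # Hypothetical stand-in for an async SDK remove() call.
    await asyncio.sleep(0)
    return key

async def delete_batch(keys):
    # Wait for every async delete to finish before issuing
    # the next N1QL query with REQUEST_PLUS.
    return await asyncio.gather(*(remove(k) for k in keys))

deleted = asyncio.run(delete_batch(["k1", "k2", "k3"]))
print(deleted)  # ['k1', 'k2', 'k3']
```

The point is the `gather`: no new query is issued until every delete in the batch has resolved.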

You mentioned you want to delete millions of records using OFFSET/LIMIT.

Note: Couchbase indexes are eventually consistent, so you must use REQUEST_PLUS; otherwise running the same query a second time may give different results.

Suppose you have 100 items (without OFFSET and LIMIT).
A query with OFFSET 10 LIMIT 15 returns 15 items, which you delete. Running it again with REQUEST_PLUS returns another 15 items. After the 6th run it returns no results, yet the first 10 items are never deleted because OFFSET always skips them.
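A quick simulation (plain Python over an in-memory list, no SDK involved) shows why delete-with-OFFSET leaves documents behind:

```python
# Simulate repeatedly running "OFFSET 10 LIMIT 15" against 100 documents
# and deleting each page that comes back.
docs = list(range(100))   # stand-ins for document keys
passes = 0

while True:
    page = docs[10:10 + 15]          # OFFSET 10 LIMIT 15
    if not page:
        break
    docs = [d for d in docs if d not in page]  # "delete" the page
    passes += 1

print(passes)  # 6 passes return results
print(docs)    # documents 0-9 survive: OFFSET always skips them
```

Six passes delete 90 documents, and the 10 documents sitting below the OFFSET are never returned, so they are never deleted.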

The best approach is keyset pagination: Using OFFSET and Keyset in N1QL | The Couchbase Blog

If you run it a second time, issue the SELECT query with REQUEST_PLUS, or check the Indexes page in the UI for pending mutations and wait for them to drop to 0.

CREATE INDEX ix1 ON mybucket (dealer, META().id, entityType);
id = ""
Query = "SELECT META().id AS key, entityType FROM mybucket WHERE dealer = $dealer AND META().id > $id ORDER BY META().id LIMIT 10000;"

In a loop:
          Execute the N1QL query.
          If the result set is empty,
                 break out of the loop.
          Otherwise, run the delete asynchronously on all the returned documents.
          Set $id to the last key in the results and let the loop continue.
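The loop above can be sketched with an in-memory stand-in for the bucket (plain Python, no SDK; a real implementation would issue the N1QL query and async deletes instead):

```python
# Hypothetical in-memory stand-in for the bucket: key -> document.
store = {f"doc::{i:03d}": {"dealer": "d1", "entityType": "car"}
         for i in range(25)}
BATCH = 10
last_id = ""  # keyset cursor; sorts before every real key

while True:
    # Simulates: SELECT META().id ... WHERE dealer = $dealer
    #            AND META().id > $id ORDER BY META().id LIMIT BATCH
    keys = sorted(k for k in store if k > last_id)[:BATCH]
    if not keys:
        break                # no more results: done
    for k in keys:           # real code: delete asynchronously, then wait
        del store[k]
    last_id = keys[-1]       # advance the cursor to the last key seen

print(len(store))  # 0: every document deleted, none skipped
```

Because the cursor only moves forward through the key order, deleting the current page never shifts the position of the next one, which is exactly the failure mode of OFFSET pagination.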

I ran the query 1 hour after running the queryBucket.touch() command. Can I specify the scan consistency in the web console?

Yes. Check out the Query Preferences section in Query Workbench | Couchbase Docs

Thanks for your quick and helpful reply.