Delete statement using N1QL query

I have this N1QL query that runs within one pod:

    Statement deleteStatement = Delete.deleteFromCurrentBucket()
            .where("id=$id")
            .limit(1)
            .returning("*");
    N1qlQuery query = N1qlQuery.parameterized(deleteStatement, param);
    result = CurrentBucket.query(query);

The objective of this statement is to "pop" the topmost record and return its value. The issue is that some of these "pop" queries aren't returning any records.
There are more than enough documents in the bucket that satisfy the WHERE condition above, so that's not the issue.
When I run the program with multiple threads, in some threads the result's "finalSuccess" flag shows up as "true" but no rows are returned.

When I move this code into a synchronized block, the error goes away when running a single pod, but the same issue shows up when running across multiple pods.

I was under the impression that the methods listed in the Bucket interface are thread-safe, and that the delete operation on the server side is thread-safe as well (i.e., there is row locking or some similar mechanism).

For reference, the documentation is here:
https://docs.couchbase.com/sdk-api/couchbase-java-client-2.7.6/com/couchbase/client/java/Bucket.html#query-com.couchbase.client.java.query.N1qlQuery-

I could use some advice on this, since it seems like the operation isn't thread-safe. Is the bucket thread-safe? That is, when multiple delete operations arrive for the same document, will the first one execute and then the later ones execute successfully as well?

Couchbase SDK version: 2.7.6

Try setting ScanConsistency.REQUEST_PLUS on your delete queries.

See https://docs.couchbase.com/java-sdk/2.7/scan-consistency-examples.html

"Not bounded" is the default: the query engine grabs and returns whatever is in the index right now. It is the fastest option, but not suitable for "read your own writes", since recent mutations may not yet be indexed.

At the other end of the scale there is “request plus” which will — when a query is performed — wait until all of the indexes used have caught up to their highest internal sequence numbers. This will make sure that you read your own writes, but it might take some time if there is a write-heavy workload. Also sometimes you are only interested in the last mutation(s) you performed, so waiting for everything else in the index to be updated can mean unnecessary latency.
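As a minimal sketch of the suggestion above (reusing the `deleteStatement` and `param` objects from the question; the variable names `bucket` and `result` are assumptions), you can attach REQUEST_PLUS consistency via `N1qlParams` when building the query:

```java
import com.couchbase.client.java.query.N1qlParams;
import com.couchbase.client.java.query.N1qlQuery;
import com.couchbase.client.java.query.N1qlQueryResult;
import com.couchbase.client.java.query.consistency.ScanConsistency;

// Build query parameters that make the query wait for the index to
// catch up to the latest mutations before the WHERE clause is evaluated.
N1qlParams params = N1qlParams.build()
        .consistency(ScanConsistency.REQUEST_PLUS);

// Pass the params as the third argument; the statement and the
// placeholder values are unchanged from the original code.
N1qlQuery query = N1qlQuery.parameterized(deleteStatement, param, params);
N1qlQueryResult result = bucket.query(query);
```

Note that REQUEST_PLUS only addresses index staleness. Even with it, two pods can still read the same index entry before either delete is applied, so a N1QL `DELETE ... LIMIT 1` is not guaranteed to be an atomic "pop" across pods.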