Inconsistent Java LegacyDocument.replace

EDIT

This topic has since been addressed here with a simplified version of my problem.

Java Client version 2.4.8
Server Version: 4.6.1

I’ve encountered a strange phenomenon using LegacyDocument.

First some background:
A few months ago we upgraded one of our applications from Java client 1.4.4 to 2.4.8.
Since some other applications still use SDK 1.x and there is existing data, we decided to use LegacyDocument to interact with documents on the server.

A few of those documents hold incrementing counter values. With SDK 1.x, we did the increment like this:

couchbaseClient.asyncIncr(counterKey, step)
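
For context, this is roughly how that call was consumed (a sketch only, assuming an already-connected SDK 1.x CouchbaseClient; the method wrapper is illustrative, not our actual code):

import com.couchbase.client.CouchbaseClient;
import net.spy.memcached.internal.OperationFuture;
import java.util.concurrent.ExecutionException;

// Sketch: 'couchbaseClient' is an already-connected SDK 1.x CouchbaseClient
// and the counter document is assumed to already exist on the server.
long incrementSdk1(CouchbaseClient couchbaseClient, String counterKey, long step)
        throws InterruptedException, ExecutionException {
    OperationFuture<Long> future = couchbaseClient.asyncIncr(counterKey, step);
    return future.get(); // blocks until the server acknowledges the increment
}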

After shifting to LegacyDocument with SDK 2.x, we now have to:

  1. Get the document

  2. Increment by one

  3. Update with the new value

We actually used bucket.getAndLock for obvious concurrency reasons; a rough sketch of that read-modify-write cycle follows.
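
In isolation, the legacy read-modify-write cycle looks roughly like this (a minimal sketch; the 30-second lock time and the retry on CASMismatchException are illustrative and not part of our production code):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.LegacyDocument;
import com.couchbase.client.java.error.CASMismatchException;

// Sketch: increments a numeric value held in a legacy (SDK 1.x) document.
// Assumes 'bucket' is an opened Bucket and the document already exists.
long incrementLegacy(Bucket bucket, String counterKey, long delta) {
    while (true) {
        // Lock the document so concurrent writers back off (lock time in seconds).
        LegacyDocument locked = bucket.getAndLock(LegacyDocument.create(counterKey), 30);
        long newValue = Long.parseLong(locked.content().toString()) + delta;
        try {
            // Passing the CAS from the locked read both validates the write and releases the lock.
            bucket.replace(LegacyDocument.create(counterKey, newValue, locked.cas()));
            return newValue;
        } catch (CASMismatchException e) {
            // The CAS no longer matches (e.g. the lock expired and another writer got in); retry.
        }
    }
}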

To complicate things further, we also prepared for the day when all (Java) clients are ready to use SDK 2.x. At that point the get/increment/update (+ locking) cycle required for LegacyDocument becomes unnecessary; once we can switch to JsonLongDocument, we can simply call:

bucket.counter(counterKey, delta);

Where delta is actually 1.
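
As a side note, counter() also has an overload that takes an initial value, so the JSON counter can be created on first use (a sketch; whether we actually want auto-creation here is a separate question):

// Sketch: increment by 'delta'; if the document does not exist yet it is
// created with the initial value given as the third argument.
JsonLongDocument counter = bucket.counter(counterKey, delta, delta);
long newValue = counter.content();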

We prepared a feature flag, legacyClientsStillUsed(), just for that. When the day comes,
we can stop using LegacyDocument and use JsonLongDocument instead:

if (legacyClientsStillUsed()) {
    // Legacy path: lock the document, parse the numeric content, replace using the CAS value.
    LegacyDocument legacyDocument = bucket.getAndLock(LegacyDocument.create(counterKey), 400);
    long cas = legacyDocument.cas();
    String s = legacyDocument.content().toString();
    Long value = Long.parseLong(s) + delta;
    bucket.replace(LegacyDocument.create(counterKey, value, cas));
    return value;
} else {
    try {
        JsonLongDocument counter = bucket.counter(counterKey, delta);
        return counter.content();
    } catch (CouchbaseException e) {
        // The document is still stored in the legacy format: read it as a
        // LegacyDocument and convert it to a JsonLongDocument.
        if ("INVALID_ARGUMENTS".equalsIgnoreCase(e.getMessage())) {
            LegacyDocument legacyDocument = bucket.get(LegacyDocument.create(counterKey));
            String s = legacyDocument.content().toString();
            Long value = Long.parseLong(s) + delta;
            bucket.upsert(JsonLongDocument.create(counterKey, value));
            return value;
        }
        throw e;
    }
}

This is quite straightforward:
If the flag is unset (there are still clients using SDK 1.x): get a LegacyDocument, increment its value, and finally update it.
Once the flag is set, increment a JsonLongDocument. If the document is still stored as legacy (the catch clause), read it as a LegacyDocument, get its value, then update it as a JsonLongDocument.

We tested the code (by setting the flag) on one of the environments. For two (out of three) counter documents the transition went well. For the third counter, however, the value was reset to zero.

So I made a simple unit test:
Create a new LegacyDocument whose value is 0 (zero).
Increment by one repeatedly, switching the flag on/off every three calls (a sketch of the test loop follows the list below). To be more specific, for the first 10 calls:

  • Increment a LegacyDocument by 1
  • Increment a LegacyDocument by 1 (expected 2)
  • Increment a LegacyDocument by 1 (expected 3)
  • Turn on feature flag
  • Switch to and increment a JsonLongDocument by 1 (expected 4)
  • Increment a JsonLongDocument by 1 (expected 5)
  • Increment a JsonLongDocument by 1 (expected 6)
  • Turn off feature flag
  • Increment a LegacyDocument by 1 (expected 7)
  • Increment a LegacyDocument by 1 (expected 8)
  • Increment a LegacyDocument by 1 (expected 9)
  • Turn on feature flag
  • Switch to and increment a JsonLongDocument by 1 (expected 10)
    And so on
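
Roughly, the test loop looks like this (a sketch: incrementCounter() stands for the flag-aware method shown above, and setLegacyClientsStillUsed() is a hypothetical test helper that drives the feature flag):

import static org.junit.Assert.assertEquals;

// Sketch of the test loop: toggle the feature flag every three increments and
// check that the counter advances by exactly one each time.
// 'incrementCounter' is the flag-aware method shown above;
// 'setLegacyClientsStillUsed' is a hypothetical helper controlling the flag.
boolean legacyMode = true;
setLegacyClientsStillUsed(legacyMode);
long expected = 0;
for (int i = 1; i <= 30; i++) {
    if (i > 1 && i % 3 == 1) {          // after every third increment...
        legacyMode = !legacyMode;       // ...flip the flag
        setLegacyClientsStillUsed(legacyMode);
    }
    long actual = incrementCounter(counterKey, 1);
    expected++;
    assertEquals(expected, actual);
}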

This is the UI view of the document after 3 increments (a legacy binary document).
As this is my first post here, I can't seem to include more than a single image upload, so I am pasting the UI text instead.

Warning: Editing of binary document is not allowed
ORID::orderIdSequenceTest::counter

And then after 6 increments (a JSON document with the value of 6).
So turning the flag on resulted in a smooth transition.

Warning: JSON should represent an object
ORID::orderIdSequenceTest::counter
6

And then after 8 increments (again a binary document):

Warning: Editing of binary document is not allowed
ORID::orderIdSequenceTest::counter

And then after the 9th increment: an error.

Despite that error, the value was increased:

result = {Long@3095} 9
value = 9

So reading the value of a legacy document, changing it (from 8 → 9), and replacing it somehow has a suspicious effect.

Then increasing the counter once more (the 10th time, just after setting the flag again) resulted in the counter being reset!!! You can see it in the debug output (content = 1):

result = {JsonLongDocument@3106} "JsonLongDocument{id='ORID::orderIdSequenceTest::counter', cas=1538566727663550464, expiry=0, content=1, mutationToken=null}"
id = "ORID::orderIdSequenceTest::counter"
cas = 1538566727663550464
expiry = 0
content = {Long@3112} 1
mutationToken = null

and the UI view:

Warning: JSON should represent an object
ORID::orderIdSequenceTest::counter
1

Stranger still, after two more iterations both the JSON value appearing in the UI and counter.content() advanced the counter to 3, but calling
bucket.get(key, LegacyDocument.class);
as well as
bucket.get(key, JsonLongDocument.class)
both returned 51.
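
One way to see what is actually stored at that point is to bypass the transcoders entirely and dump the raw bytes and flags through the core layer (an untested sketch; I'm assuming the core send() API is usable from the synchronous Bucket and that the imports match this SDK version):

import com.couchbase.client.core.message.kv.GetRequest;
import com.couchbase.client.core.message.kv.GetResponse;
import com.couchbase.client.deps.io.netty.util.CharsetUtil;

// Untested sketch: fetch the document without any transcoding and print the
// stored flags and raw content, to see exactly what each write left behind.
GetResponse raw = bucket.core()
        .<GetResponse>send(new GetRequest(counterKey, bucket.name()))
        .toBlocking()
        .single();
try {
    System.out.println("flags=0x" + Integer.toHexString(raw.flags())
            + " content=" + raw.content().toString(CharsetUtil.UTF_8));
} finally {
    raw.content().release(); // the raw ByteBuf must be released
}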

I repeated that test (deleting and re-creating the document), this time without toggling the flag on and off, meaning all iterations read/updated a single LegacyDocument. Again, I encountered the error in the UI at the same stage: increasing the counter value from 8 to 9.

To make it even stranger: I then used SerializableDocument for the document replacement, just for testing. Note that now I use LegacyDocument for the get operation and SerializableDocument for the update operation:

LegacyDocument legacyDocument = bucket.getAndLock(LegacyDocument.create(counterKey), 400);
long cas = legacyDocument.cas();
String s = legacyDocument.content().toString();
Long value = Long.parseLong(s) + delta;
bucket.replace(SerializableDocument.create(counterKey, value, cas));
return value;

This time, everything went well!!! I increased the counter 30 times (turning the flag on and off), reset the document to different initial values, incremented, and switched the flag again. Now it works perfectly.

But I sense that I can't rely on that change, as I have no clue what the underlying problem is, and I am afraid I may hit edge cases in the future.

So now for the questions:

  1. Has anybody ever encountered such a problem when switching between document types / transcoders?
  2. I couldn't find anything related in the logs, but obviously I don't know exactly where to look.
  3. Where in the library code can I focus for further investigation?
  4. Can someone suggest why SerializableDocument.create worked where LegacyDocument.create failed?

I've tried this code:

try {
    rqmBucket.counter(docId, 1);
} catch (CouchbaseException e) {
    if ("INVALID_ARGUMENTS".equalsIgnoreCase(e.getMessage())) {
        LegacyDocument legacyDocument = rqmBucket.get(LegacyDocument.create(docId));
        String s = legacyDocument.content().toString();
        Long value = Long.parseLong(s) + 1;
        rqmBucket.upsert(JsonLongDocument.create(docId, value));
    }
}

but I got a java.lang.NumberFormatException, because legacyDocument.content() gives me the whole JSON doc:
"{"resource":{"createdAt":15..."