Flags (0x802) indicate non-binary document for id

We are getting the error below: Flags (0x802) indicate non-binary document for id Lkup/SpDoc/Agreement/SCP, could not decode.
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(
at rx.internal.operators.OnSubscribeFilter$FilterSubscriber.onNext(
at rx.observers.Subscribers$5.onNext(
at rx.internal.operators.OnSubscribeDoOnEach$DoOnEachSubscriber.onNext(
at rx.internal.producers.SingleProducer.request(
at rx.internal.producers.ProducerArbiter.setProducer(
at rx.internal.operators.OnSubscribeTimeoutTimedWithFallback$TimeoutMainSubscriber.setProducer(
at rx.Subscriber.setProducer(
at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(
at rx.internal.operators.OnSubscribeFilter$FilterSubscriber.setProducer(
at rx.Subscriber.setProducer(
at rx.Subscriber.setProducer(
at rx.subjects.AsyncSubject.onCompleted(
at com.couchbase.client.core.endpoint.AbstractGenericHandler.completeResponse(
at com.couchbase.client.core.endpoint.AbstractGenericHandler.access$000(
at com.couchbase.client.core.endpoint.AbstractGenericHandler$
at java.base/java.util.concurrent.Executors$ Source)
at java.base/ Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$ Source)
at java.base/ Source)
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: com.couchbase.client.core.message.kv.GetResponse.class
at rx.exceptions.OnErrorThrowable.addValueAsLastCause(
at rx.internal

The exception happens when we try to get the document from the bucket:

bucket.get(internalDocId, ByteArrayDocument.class);

and the bucket is created using a multi-cluster configuration.

Hi Kolappan,

SDK 2.x is very strict about document flags. If the flags don’t match the document type you’re asking for, you’ll get the exception you posted. The standard transcoder for ByteArrayDocument fails like this unless the document was created with the “binary” flag.
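
To illustrate what "strict about flags" means here, a minimal sketch (not the SDK's actual source) under the assumption of the "common flags" scheme SDK 2 uses, where the document format code lives in the upper byte of the flags and the binary format code is 3. A value of 0x802 has nothing in that upper byte, hence "non-binary document":

```java
public class CommonFlagsCheck {
  public static void main(String[] args) {
    int flags = 0x802;
    int BINARY_FORMAT = 3;               // common-flags code for "binary"
    int format = (flags >>> 24) & 0xFF;  // format code lives in the upper byte
    boolean looksBinary = (format == BINARY_FORMAT);
    System.out.println(looksBinary);     // false: the strict transcoder refuses to decode
  }
}
```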

The long-term solution is to upgrade to SDK 3 which does not have this problem. The rest of this post assumes you need to stay with SDK 2.x for a while longer.

In general, one possible solution is to get the document using the same document class that was used to create it. I don’t know if that’s an option in this case, since the flag value 0x802 doesn’t look like it matches any of the default transcoders (although I could be mistaken). EDIT: Actually, there’s a good chance this document was created using LegacyDocument instead of ByteArrayDocument, and getting it as a LegacyDocument might resolve the issue; see the post below this one.

If you really just want the bytes of a document regardless of its type, you can register a custom transcoder that ignores flags when reading the document:

import com.couchbase.client.core.message.ResponseStatus;
import com.couchbase.client.deps.io.netty.buffer.ByteBuf;
import com.couchbase.client.deps.io.netty.buffer.ByteBufUtil;
import com.couchbase.client.java.document.ByteArrayDocument;
import com.couchbase.client.java.transcoder.ByteArrayTranscoder;

/**
 * A variant of {@link ByteArrayTranscoder} that doesn't check flags.
 */
public class LenientByteArrayTranscoder extends ByteArrayTranscoder {
  @Override
  protected ByteArrayDocument doDecode(String id, ByteBuf content,
                                       long cas, int expiry, int flags,
                                       ResponseStatus status) {
    return newDocument(id, expiry, ByteBufUtil.getBytes(content), cas);
  }
}

Custom transcoders are registered at the bucket level, and must be specified
the first time a bucket is opened:

public static void main(String[] args) {
  CouchbaseCluster cluster = CouchbaseCluster.create("localhost")
      .authenticate("Administrator", "password");

  // Pass a list of custom transcoders when opening the bucket.
  // CAVEAT: if your code opens the bucket multiple times,
  // the transcoders must be specified on the *first* call;
  // all subsequent calls ignore the transcoders argument.
  List<Transcoder<?, ?>> customTranscoders = new ArrayList<>();
  customTranscoders.add(new LenientByteArrayTranscoder());
  Bucket bucket = cluster.openBucket("default", customTranscoders);

  // Create a JSON document...
  bucket.upsert(JsonDocument.create("foo", JsonObject.empty()));

  // And read it back as a byte array document.
  // This would fail without the custom transcoder.
  ByteArrayDocument doc = bucket.get("foo", ByteArrayDocument.class);

  // Console output should be an empty JSON Object: {}
  System.out.println(new String(doc.content(), StandardCharsets.UTF_8));
}


Hi Kolappan,

Upon reflection, this is probably a legacy document. If that’s the case, the flag value of 0x802 would indicate it contains a compressed (GZIP) byte array.
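
For reference, that reading of 0x802 follows from the legacy flag layout SDK 2 inherited from the old spymemcached client. A sketch under that assumption (not the transcoder's actual source): one low bit marks compressed content, and the "special type" codes sit in bits 8 and up, with 8 meaning byte array:

```java
public class LegacyFlags {
  public static void main(String[] args) {
    int flags = 0x802;
    int COMPRESSED = 2;              // low bit: content is GZIP-compressed
    int SPECIAL_BYTEARRAY = 8 << 8;  // 0x800: content is a raw byte array
    boolean compressed = (flags & COMPRESSED) != 0;
    boolean byteArray = (flags & SPECIAL_BYTEARRAY) != 0;
    System.out.println(compressed + " " + byteArray);  // true true
  }
}
```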

Before messing around with custom transcoders, try this:

// For SDK 2
LegacyDocument doc = bucket.get(docId, LegacyDocument.class);
byte[] content = (byte[]) doc.content();
// For SDK 3.2 or later (if I remember right, SDK 3's LegacyTranscoder
// handles the old LegacyDocument format):
GetResult result = collection.get(docId, GetOptions.getOptions()
    .transcoder(LegacyTranscoder.INSTANCE));
byte[] content = result.contentAs(byte[].class);


Hi David,

Trying with LegacyDocument did work for me, thanks a lot. You mentioned the document should be created with the binary flag to work with ByteArrayDocument. We are not creating the doc programmatically; we uploaded the document in the Couchbase 6.5 server. Does the server have any UI option to set the binary flag?

Hi Kolappan,

We are not creating the doc programmatically. We uploaded the document in the Couchbase 6.5 server.

I don’t understand. How did you upload the LegacyDocument?

Does the server have any UI option to set the binary flag?

No, it doesn’t… but I don’t see how setting the binary flag would be helpful in this case, since you know it’s a LegacyDocument.

If you really want to read it as a ByteArrayDocument, you could register the custom transcoder from my previous post. But I would strongly advise against that, since the LegacyDocument format uses compression and some other details that you would have to re-implement. It’s probably better to just let LegacyDocument handle those details for you.
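
For the curious, the compression detail alone would look something like the sketch below, using plain java.util.zip. (This is only one of the details; LegacyDocument also handles flag inspection, string and serialized types, and so on.)

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class Gunzip {

  // Decompress a GZIP byte array, as a legacy-format reader must do
  // when the "compressed" flag bit is set.
  public static byte[] gunzip(byte[] compressed) throws IOException {
    try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
         ByteArrayOutputStream out = new ByteArrayOutputStream()) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = > 0) {
        out.write(buf, 0, n);
      }
      return out.toByteArray();
    }
  }

  public static void main(String[] args) throws IOException {
    // Round-trip: compress some content, then recover it.
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write("{\"hello\":\"world\"}".getBytes(StandardCharsets.UTF_8));
    }
    System.out.println(new String(gunzip(bos.toByteArray()), StandardCharsets.UTF_8));
    // prints {"hello":"world"}
  }
}
```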

Is there a reason you don’t want to use the LegacyDocument class to read the document?


I have used LegacyDocument and it solved the issue. Now I am able to read the document successfully. Thanks for the help!