CBL 2.8 Attachment errors "400 Incorrect data sent for attachment with digest"

I got the following errors when trying to send documents with attachments using CBL 2.8 and Sync Gateway 2.8.

2021-02-16T09:33:49.956+03:00 [ERR] c:[6d1f4eb4] Error during downloadOrVerifyAttachments for doc 5b9176d9-6c8d-4598-b610-200451092689/4-09421d692952f0692bc370e0f38b0d291df61aed: 400 Incorrect data sent for attachment with digest: sha1-WJGq/sLw1IRqNugcChK/gvrdgOg= -- db.(*blipHandler).handleRev() at blip_handler.go:763
2021-02-16T09:33:49.956+03:00 [ERR] c:[6d1f4eb4] Error during downloadOrVerifyAttachments for doc 2ba869e0-7534-45eb-ae45-657d0f952169/4-aaffc53b0d6815992786061733daac4c987e049d: 400 Incorrect data sent for attachment with digest: sha1-GxdahCWr5sM96mq375Ju3Y+S4wQ= -- db.(*blipHandler).handleRev() at blip_handler.go:763
2021-02-16T09:33:49.956+03:00 [ERR] c:[6d1f4eb4] Error during downloadOrVerifyAttachments for doc 375ea1da-8f4a-4255-b8aa-de34594fcbfc/6-67a4e015f72432c390255ad69133330734ef2dc4: 400 Incorrect data sent for attachment with digest: sha1-52KEn/xSAsN4KV4LfW571eQ7JTE= -- db.(*blipHandler).handleRev() at blip_handler.go:763
2021-02-16T09:33:49.956+03:00 [ERR] c:[6d1f4eb4] Error during downloadOrVerifyAttachments for doc 3719ddc5-2b78-4f6e-af61-37f74798c3fd/4-e6cca6bb318e887bc1480a54aee7bc60d8435873: 400 Incorrect data sent for attachment with digest: sha1-JN0XG6+nAVKddKkFdXmdX6vgS7U= -- db.(*blipHandler).handleRev() at blip_handler.go:763
2021-02-16T09:33:49.956+03:00 [ERR] c:[6d1f4eb4] Error during downloadOrVerifyAttachments for doc bea0406c-f059-4c66-b645-1316c74d6be7/6-324db383564aa1559e2178aa33f1353556cbdc35: 400 Incorrect data sent for attachment with digest: sha1-xOzGXNbfZV9EgmkkJ0xcgrHGlMc= -- db.(*blipHandler).handleRev() at blip_handler.go:763

Android code that sets the attachment:

public void AddAttachmentToPhotoCollection(PledgeTaskPhotoCollection photo, InputStream stream, int imageSizeInBytes) {
    ICouchbaseDocument docWithAttachment = this.facadeEntityMapper.getPledgeTaskPhotoCollectionMapper().fetchDocument(photo);
    Map<String, Object> userProperties = docWithAttachment.getProperties();
    userProperties.put(PledgeTaskPhotoCollection.ImageSizeInBytesField, imageSizeInBytes);
    MutableDocument doc = docWithAttachment.getRawAsMutable();
    // update the attachment
    try {
        doc.setData(userProperties);
        Blob blob = new Blob("image/jpeg", stream);
        doc.setBlob(DbHelper.imageAttachmentName, blob);
        docWithAttachment.save(doc);
    } catch (CouchbaseLiteException e) {
        e.printStackTrace();
    } finally {
        try {
            if (stream != null)
                stream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

What is the reason? How can I make it work?

There’s a lot of code there that I didn’t write! ;-P I don’t know what an ICouchbaseDocument is, and I don’t know what its save method does.

A Blob is represented in a document as a dictionary containing, among other things, metadata that identifies the object as a blob, its content type, and the SHA-1 digest of its contents. It appears that the documents you are sending to the SGW contain blobs in which the digest of the content does not match the digest recorded in the document.
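For what it’s worth, the digest in those log lines is just the Base64-encoded SHA-1 hash of the attachment’s raw bytes. A minimal sketch of how such a digest is derived (plain JDK, no CBL classes involved):

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.Base64;

    public class AttachmentDigest {
        // Produces a string like "sha1-WJGq/sLw1IRqNugcChK/gvrdgOg=":
        // the SHA-1 of the raw content, Base64 encoded.
        public static String of(byte[] content) throws NoSuchAlgorithmException {
            byte[] sha1 = MessageDigest.getInstance("SHA-1").digest(content);
            return "sha1-" + Base64.getEncoder().encodeToString(sha1);
        }
    }

If the content SGW receives hashes to something other than the digest recorded in the document, you get exactly the 400 you are seeing.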

This shouldn’t happen. One possible problem could be that the stream that you are using to supply the blob content is being closed before it is fully read. Once you hand the stream to the Blob, you should not attempt to manage it further: it belongs to the blob (as the documentation makes clear).
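Here is a minimal sketch of the intended pattern, assuming a plain Database called database and leaving your wrapper types out of it:

    import com.couchbase.lite.Blob;
    import com.couchbase.lite.CouchbaseLiteException;
    import com.couchbase.lite.Database;
    import com.couchbase.lite.MutableDocument;
    import java.io.InputStream;

    public class BlobSketch {
        // Sketch only: hand the stream to the Blob and let CBL manage it.
        public static void setImage(Database database, String docId, InputStream stream)
                throws CouchbaseLiteException {
            MutableDocument doc = database.getDocument(docId).toMutable();
            Blob blob = new Blob("image/jpeg", stream); // the Blob now owns 'stream'
            doc.setBlob("image", blob);
            database.save(doc); // CBL reads the content and closes the stream
            // note: no stream.close() in a finally block here
        }
    }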

I will ask one of the SGW guys to have a look at this and see if they have any other insight.

@blake.meike Thanks for the reply! ICouchbaseDocument is just a wrapper around a CBL document, and its save method calls the database’s save method under the hood:

public void save(MutableDocument doc) throws CouchbaseLiteException {
    DatabaseManager.getDatabase().save(doc);
}

P.S. By the way, I tried to get rid of the InputStream and replaced it with a byte[] array, to rule out the ‘stream closed before fully read’ problem. But this does not help either; I still get the error on some devices when the number of attachments is quite large (around 300 docs with attachments).
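Roughly, the byte[] variant I tried looks like this (a sketch; the buffering helper is simplified from our code):

    import com.couchbase.lite.Blob;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class BufferedBlobSketch {
        // Read the stream fully up front so the Blob never touches it.
        public static Blob fromStream(InputStream stream) throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = stream.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            stream.close(); // safe here: the content is already fully buffered
            return new Blob("image/jpeg", buf.toByteArray());
        }
    }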

Are you running out of file-system storage space?

@blake.meike Definitely not my case. There is a lot of free space, more than 5 GB left.

@blake.meike
UPDATE: I was able to get rid of the errors by performing PUSH replication multiple times on limited chunks of data.

So I split the ids that I want to push into chunks of 50 and run PUSH replication multiple times (until all docs are successfully sent to the server).

This is a bad workaround, and it should not be necessary, since CBL should manage this for me.

Any bright ideas about what is going on?

This supports our SGW expert’s suggestion that the problem is probably a network device: “I suspect it’s some kind of transport issue (websocket proxy truncating large frames?)”

There is so little information in this post… and there are three separate posts. Let’s confine the discussion to this post, ok? … and a little more information about your environment and application would help a lot. We aren’t magicians! :stuck_out_tongue_winking_eye:

@blake.meike Thanks for reply Blake.

Our project is deployed like so:

The NGINX server that routes traffic is configured according to this article:
https://docs.couchbase.com/sync-gateway/current/load-balancer.html

On the image I marked with happy smiley faces the cases where everything works OK, and with red signs the cases where PUSH replication only works in chunked mode.

P.S.: You mentioned “websocket proxy truncating large frames”; maybe we can check some particular network properties of our NGINX config?

I don’t have a lot of suggestions for what to do next. At this point, I don’t have enough information to be able to guess about what’s breaking.

One of our SGW experts has the following suggestions:

You could try to confirm that all four devices are attempting to push data of the same size. This would tell us whether the problem is something as simple as the working devices’ transmissions being under some nginx-imposed limit while the failing devices’ are over it. If the transmissions are identical, there is likely something just a bit different in the way the failing clients send that data. In the process, perhaps you could pick up clues about those differences…

Another approach (to distinguish between a device problem and an nginx problem) might be to point all four devices directly at SG and see whether the problem persists.


@blake.meike Thanks for the reply, Blake! I will try your suggestions.

@blake.meike Today I contacted our admin who works with NGINX.
We talked about this problem, and he decided to test the client_max_body_size NGINX parameter.

It was client_max_body_size 21m as per the SG documentation (i.e. the max allowed attachment size); he decided, somewhat arbitrarily, to increase this value to 700m.

Now we have client_max_body_size 700m, and replication is working.
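For reference, the relevant proxy block now looks roughly like this (a sketch based on the sample config from the Couchbase guide linked above; only client_max_body_size was changed, the other directives are assumed from that guide):

    # Sync Gateway proxy block (sketch); upstream "sync_gateway" defined elsewhere
    location / {
        proxy_pass              http://sync_gateway;
        proxy_http_version      1.1;                    # required for websockets
        proxy_set_header        Upgrade $http_upgrade;  # websocket upgrade
        proxy_set_header        Connection "upgrade";
        client_max_body_size    700m;                   # was 21m per the SG docs
    }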

I am confused and even frustrated. What is this all about? Any hints?

Thanks!

Not my wheelhouse. Will get one of our experts to have a look.

Couchbase Server won’t support documents larger than 20 MB, so I wouldn’t expect setting client_max_body_size to a value larger than 21 MB to make a difference. It still sounds to me more likely to have been an nginx config issue of some sort.

How close to the 20 MB limit are the documents you were attempting to push? You mention that you broke them into chunks of size 50, but I don’t know what the units are for that, or what the size of the unchunked documents is.

@adamf Thanks for the reply, Adam!
The average attachment size is near 2 MB. The maximum attachment size in my case is 4.5 MB.
So it is far from the 20 MB limit.

Regarding chunks of 50: it means that I send only 50 documents with attachments per one-shot PUSH replication. I do so by using the setDocIds() method in the push replicator config (a sketch follows below).
Then I continue this process until all of my documents are successfully sent to the server (e.g. if I have 130 documents to send, I send them in 3 iterations, i.e. separate PUSH replications: 50 docs + 50 docs + 30 docs).
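A sketch of how one chunk is pushed (the pushChunk name is mine; the doc-id filter in the CBL 2.x API is ReplicatorConfiguration.setDocumentIDs):

    import com.couchbase.lite.Database;
    import com.couchbase.lite.Replicator;
    import com.couchbase.lite.ReplicatorConfiguration;
    import com.couchbase.lite.URLEndpoint;
    import java.net.URI;
    import java.util.List;

    public class ChunkedPushSketch {
        // Sketch only: one-shot PUSH replication limited to a chunk of doc ids.
        public static Replicator pushChunk(Database database, URI sgUri, List<String> idChunk) {
            ReplicatorConfiguration config =
                    new ReplicatorConfiguration(database, new URLEndpoint(sgUri));
            config.setReplicatorType(ReplicatorConfiguration.ReplicatorType.PUSH);
            config.setContinuous(false);      // one-shot
            config.setDocumentIDs(idChunk);   // only this chunk of 50 (or fewer) ids
            Replicator replicator = new Replicator(config);
            replicator.start();
            return replicator; // I wait for STOPPED before starting the next chunk
        }
    }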