Error DecompressionException: Offset exceeds size of chunk from Kafka Source Connector

Hi,

I’m using Kafka Connect version 2.5.0 with the kafka-connect-couchbase-3.4.x.jar connector.
My bucket has about 100 million documents that I need to stream out to a Kafka topic.

When the connector reaches an offset of around 700k, I always get this error:

com.couchbase.client.deps.io.netty.handler.codec.compression.DecompressionException: Offset exceeds size of chunk

Here’s my config:

{
  "name": "cb-connector",
  "config": {
    "connector.class": "com.couchbase.connect.kafka.CouchbaseSourceConnector",
    "connection.password": "password",
    "tasks.max": "1",
    "couchbase.compression": "ENABLED",
    "transforms.channel.static.field": "channel",
    "connection.timeout.ms": "2000",
    "connection.username": "username",
    "couchbase.stream_from": "SAVED_OFFSET_OR_BEGINNING",
    "couchbase.flow_control_buffer": "128m",
    "dcp.message.converter.class":
    "transforms.channel.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "connection.bucket": "myBucket",
    "event.filter.class": "com.couchbase.connect.kafka.filter.AllPassFilter",
    "name": "cb-connector",
    "topic.name": "myTopic",
    "couchbase.persistence_polling_interval": "100ms",
    "connection.cluster_address": "xx.xx.xx.xx"
  }
}

Is there a config setting that handles this kind of issue?

Thanks

Hi Han,

I’m sorry this is happening. It looks like the Netty Snappy decompressor is failing on a particular document.

The quick fix is to disable compression by changing the couchbase.compression config property from ENABLED to DISABLED. This will force Couchbase Server to decompress the documents before sending them to the connector.
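For reference, a minimal sketch of the change against the config you posted (only the couchbase.compression value is different; every other property stays exactly as you have it, and the elided keys are represented by "..."):

{
  "name": "cb-connector",
  "config": {
    "connector.class": "com.couchbase.connect.kafka.CouchbaseSourceConnector",
    "couchbase.compression": "DISABLED",
    ...
  }
}

After updating the property, restart or re-deploy the connector so the new setting takes effect; it will resume from the saved offset per your couchbase.stream_from setting.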

In the long term, you might want to consider upgrading to version 4.x of the connector. It uses a more robust Snappy decoder, among other improvements.

Thanks,
David