Unable to read documents stored with .NET SDK 2.0 with older drivers

Hello,

I am using the .NET SDK 2.0.2 to store a string document in Couchbase:

class Program
{
    static void Main()
    {
        var cluster = new Cluster("couchbaseClients/couchbase");
        var bucketName = cluster.Configuration.BucketConfigs.Single().Value.BucketName;
        var bucket = cluster.OpenBucket(bucketName);
        var result = bucket.Upsert("some_key", "some data");
        Console.WriteLine(result.Success);
    }
}

Now I am having trouble reading this document with the Java SDK version 1.1.0:

import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.ArrayList;

public class Main {

    public static void main(String[] args) {
        ArrayList<URI> nodes = new ArrayList<URI>();

        nodes.add(URI.create("http://127.0.0.1:8091/pools"));

        CouchbaseClient client = null;

        try {
            client = new CouchbaseClient(nodes, "samples", "");
        } catch (Exception e) {
            System.err.println("Error connecting to Couchbase: " + e.getMessage());
            System.exit(1);
        }

        Object res = client.get("some_key");
        System.out.println(res);
    }
}

This throws the following exception:

2015-02-16 11:16:31.938 WARN net.spy.memcached.transcoders.SerializingTranscoder:  Failed to decompress data
java.util.zip.ZipException: Not in GZIP format
	at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:165)
	at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
	at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
	at net.spy.memcached.transcoders.BaseSerializingTranscoder.decompress(BaseSerializingTranscoder.java:181)
	at net.spy.memcached.transcoders.SerializingTranscoder.decode(SerializingTranscoder.java:84)
	at net.spy.memcached.transcoders.TranscodeService$1.call(TranscodeService.java:63)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at net.spy.memcached.transcoders.TranscodeService$Task.run(TranscodeService.java:110)
	at net.spy.memcached.transcoders.TranscodeService$Task.get(TranscodeService.java:96)
	at net.spy.memcached.internal.GetFuture.get(GetFuture.java:63)
	at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1009)
	at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1030)
	at com.company.Main.main(Main.java:23)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)

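As far as I can tell, the problem is the flags word: spymemcached's SerializingTranscoder decides whether to gunzip a value based on a bit in the item's flags, and the flags written by the .NET SDK 2.0 apparently have that bit set. A minimal sketch of the check (assuming spymemcached's convention of `COMPRESSED = 2`; the constant and the sample flag values here are mine, not taken from either SDK):

```java
public class FlagsCheck {
    // spymemcached's SerializingTranscoder treats the item's flags as a
    // bit field; COMPRESSED is 2 in that convention (assumed here).
    static final int COMPRESSED = 2;

    // Returns true when the transcoder would try to GZIP-decompress the data.
    static boolean looksCompressed(int flags) {
        return (flags & COMPRESSED) != 0;
    }

    public static void main(String[] args) {
        // A flags value written by a different SDK can accidentally have
        // this bit set, triggering the GZIP decode path seen in the trace.
        System.out.println(looksCompressed(0)); // false: decoded as-is
        System.out.println(looksCompressed(6)); // true: gunzip attempted
    }
}
```

This also explains why zeroing the flags on the .NET side (below) avoided the exception.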
With the old .NET SDK version 1.3.3 I was able to work around this issue by using a custom transcoder:

public class MyTranscoder : DefaultTranscoder
{
    protected override CacheItem Serialize(object value)
    {
        var item = base.Serialize(value);
        item.Flags = 0;
        return item;
    }
}

that I registered in the configuration:

<couchbase>
    <transcoder type="MyProject.MyTranscoder, MyProject"/>
    <servers bucket="samples">
        <add uri="http://127.0.0.1:8091/pools" />
    </servers>
</couchbase>

Unfortunately, I wasn’t able to achieve the same thing with the .NET SDK 2.0: I couldn’t find a way to substitute the default transcoder, which seems to be hard-coded in the internal ClusterController class.

Do you have any idea whether this scenario is supported: storing documents with the .NET SDK 2.0 and consuming them with the Java SDK 1.1?

Remark: it would be great if this were possible without modifying the Java client.

@dimitrod -

The SDKs only guarantee data compatibility between the same major version across languages (.NET 2.0 and Java 2.0, for example) and upgrade compatibility within the same SDK (.NET 1.X to .NET 2.X).

This is actually just a missing feature in the SDK; I created a ticket for adding it.

I included a brief “how to do it”, so if you feel inclined you can always take a stab at it and submit a pull request. If you choose to go this route and need any help or clarification, let me know!
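In the meantime, one possible workaround on the Java side that doesn’t require modifying the client: spymemcached lets you pass a custom `Transcoder` to `get()`, so you can decode the raw bytes yourself instead of letting SerializingTranscoder inspect the flags and attempt a GZIP decompress. The core of such a flags-agnostic decode is just a UTF-8 conversion (an untested sketch; the class and method names are mine, and it only makes sense for documents you know are plain strings):

```java
import java.nio.charset.StandardCharsets;

public class RawStringDecode {

    // Flags-agnostic decode: treat the stored bytes as UTF-8 text and
    // ignore the flags word entirely, so nothing ever tries to gunzip
    // the value based on a flag bit written by a different SDK.
    public static String decodeIgnoringFlags(byte[] data) {
        return new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Stand-in for the raw bytes the server would return for "some_key".
        byte[] stored = "some data".getBytes(StandardCharsets.UTF_8);
        System.out.println(decodeIgnoringFlags(stored));
    }
}
```

To wire this into the client you would implement `net.spy.memcached.transcoders.Transcoder<String>` so that its `decode(CachedData d)` applies this conversion to `d.getData()`, then call the overload `client.get("some_key", transcoder)` — the default SerializingTranscoder (and its decompress path) is never invoked for that read.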

-Jeff