ObjectDisposedException (Message: “Safe handle has been closed”) on IBucket.Upsert

We’re getting an intermittent ObjectDisposedException (with message “Safe handle has been closed”) surfaced via the Exception property of the result when calling Upsert on a bucket. This seems to occur mostly when the document does not yet exist in the bucket. We’ve been unable to reproduce the issue in a staging or development environment, which makes it difficult to pin down.


  • Couchbase .NET Client SDK 2.7.12
  • .NET Framework 4.5.2
  • Couchbase Server CE 6.0 (3 node cluster)
  • Sync Gateway Server 2.6

We’re using our own multiton implementation which wraps Cluster, Bucket, and Sync Gateway access methods and instantiates new objects per distinct cluster and bucket configuration (i.e. singletons by config). This is required because the application needs to communicate with several clusters with differing bucket and Sync Gateway configurations. It does, however, mean it is possible to have more than one object connecting to the same cluster, although this is not a common use case. An excerpt from the multiton implementation is below:

using System.Collections.Generic;
using System.Linq;
using Couchbase;
using Couchbase.Authentication;
using Couchbase.Configuration.Client;
using Couchbase.Core;

public sealed class ClusterBucket : IClusterBucket
{
    private IBucket Bucket { get; set; }
    private ICluster Cluster { get; set; }
    public ClusterBucketSyncGateway SyncGateway { get; private set; }

    #region Multiton
    private static readonly Dictionary<string, ClusterBucket> clusterBuckets = new Dictionary<string, ClusterBucket>();
    private static readonly object padlock = new object();

    private ClusterBucket(ClusterBucketConfiguration config)
    {
        var clientConfig = new ClientConfiguration
        {
            Servers = config.CouchbaseServerUriBuilders.Select(x => x.Uri).ToList()
        };
        var authenticator = new PasswordAuthenticator(config.Username, config.Password);

        Cluster = new Cluster(clientConfig);
        Cluster.Authenticate(authenticator);
        Bucket = Cluster.OpenBucket(config.BucketName);

        SyncGateway = ClusterBucketSyncGateway.GetClusterBucketSyncGateway(config);
    }

    public static ClusterBucket GetClusterBucket(ClusterBucketConfiguration config)
    {
        lock (padlock)
        {
            if (clusterBuckets.ContainsKey(config.Key) == false)
            {
                clusterBuckets.Add(config.Key, new ClusterBucket(config));
            }
            return clusterBuckets[config.Key];
        }
    }
    #endregion
}

The ObjectDisposedException (with message “Safe handle has been closed”) is thrown from this calling code:

        var result = Bucket.Upsert<T>(id, doc);
        if (!result.Success)
            throw result.Exception;

Is this a reasonable architectural choice and implementation given our requirements? Looking at the SDK code, this does not appear to be the ObjectDisposedException thrown when the bucket itself has been disposed; it looks more like it comes from some other object the bucket references which may have been disposed. Does anyone have an idea of what object that might be, or what mechanism might be disposing of it?

@nonorval -

I don’t have logs to verify, but an ODE with “Safe handle has been closed” generally means that a connection has been closed or dropped by one thread while another thread is still using it. This can be completely expected during normal running conditions, as connections can be terminated at any time for many reasons (unreliable network, app shutdown, etc.). It is a side effect of the connection model: any number of threads can write to a connection, but there is a dedicated read thread. In some cases the SDK can simply retry the op if it’s idempotent (Get, Insert, etc.); however, if it’s not, it cannot be retried because we don’t know whether it succeeded on the server (the response never came back), so it’s returned back to the app.

All of that being said, it could be an issue, but you would have to dig deeper into why the connection was closed (and disposed).


@jmorris Thanks so much for the speedy response and explanation - that definitely helps in understanding the mechanism. I will build in some retries to see if that helps, since we should be able to retry these operations safely.
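For anyone following along, this is the rough shape of the retry wrapper I have in mind (a sketch only - `TransientRetry` and all names here are ours, not SDK API; it assumes our Upserts write the full document and are therefore safe to replay):

```csharp
using System;
using System.Threading;

public static class TransientRetry
{
    // Runs `attempt` until it reports success, up to maxAttempts times,
    // sleeping with a simple linearly increasing delay between tries.
    // Intended only for operations that are safe to replay, such as
    // full-document upserts.
    public static bool Run(Func<bool> attempt, int maxAttempts = 3, int delayMs = 100)
    {
        for (var i = 1; i <= maxAttempts; i++)
        {
            if (attempt())
            {
                return true;
            }
            if (i < maxAttempts)
            {
                Thread.Sleep(delayMs * i); // back off a little more each attempt
            }
        }
        return false;
    }
}
```

We would call it as `TransientRetry.Run(() => Bucket.Upsert<T>(id, doc).Success)` and keep the last failed result around so its Exception can still be thrown if every attempt fails.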

Do you have any idea how we might be able to track down the culprit that is closing the connection? Would the SDK logs maybe be helpful?
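In case it helps, this is how I was planning to enable SDK logging. If I’m reading the 2.x docs right, on full framework the SDK logs through the Common.Logging abstraction, so an app.config fragment along these lines should route SDK logs to log4net (the adapter assembly name comes from the Common.Logging.Log4Net package and varies with the log4net version it targets, so treat it as illustrative):

```xml
<configuration>
  <configSections>
    <sectionGroup name="common">
      <section name="logging"
               type="Common.Logging.ConfigurationSectionHandler, Common.Logging" />
    </sectionGroup>
  </configSections>
  <common>
    <logging>
      <!-- Assembly name depends on which log4net version your
           Common.Logging.Log4Net package targets -->
      <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4Net1215">
        <!-- INLINE means log4net is configured elsewhere in this config file -->
        <arg key="configType" value="INLINE" />
      </factoryAdapter>
    </logging>
  </common>
</configuration>
```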