Troubleshoot Couchbase .NET SDK 3.2 connection issues

I have an interesting situation. A console app targeting .NET Framework 4.6.1 connects successfully to Couchbase Server 7.0 using the .NET SDK 3.2.

However, the same code is unable to connect when used in an MVC project, also on .NET Framework 4.6.1. The code and dev environment are the same.

Is there any way we can figure out what’s going on with the connection?

    private static async Task InitializeCouchbase()
    {
        IServiceCollection serviceCollection = new ServiceCollection();
        serviceCollection.AddLogging(builder => builder
            .AddFilter(level => level >= LogLevel.Debug));
        loggerFactory = serviceCollection.BuildServiceProvider().GetService<ILoggerFactory>();
        loggerFactory.AddFile("c:/Logs/TWSCouchbaseHelper-{Date}.txt", LogLevel.Debug);

        var ipAddressList = new List<string> { "", "" };
        var config = new ClusterOptions()
            .WithConnectionString(string.Concat("http://", string.Join(", ", ipAddressList)))
            .WithCredentials("twsuser", "twsusercouchbase")
            .WithLogging(loggerFactory); // <= need to add the ILoggerFactory via DI

        config.KvConnectTimeout = TimeSpan.FromMilliseconds(12000);

        var cluster = await Cluster.ConnectAsync(config);
        CBCluster = cluster;
        DefaultBucket = await cluster.BucketAsync("default");
        BucketManager = cluster.Buckets;
        var defaultScope = await DefaultBucket.ScopeAsync("_default");
        DefaultCollection = await defaultScope.CollectionAsync("_default");
    }

Would you please post the connection code and config you are using for the ASP.NET MVC project with .NET 4.6.1?

Thanks for the quick response. I’ve updated the post with the connection code.
Here is the part of the debug log that repeats:

2021-09-08T13:35:03.6587121-04:00 [DBG] Checking "" in polling. (093af39a)
2021-09-08T13:35:03.6587121-04:00 [DBG] Executing op GetClusterConfig on "" with key "" and opaque 1908. (ebdc59ba)
2021-09-08T13:35:03.6587121-04:00 [DBG] Receiving new map revision 403 (36e624ac)
2021-09-08T13:35:03.6587121-04:00 [DBG] Updating new map revision 403 (7cfa9d49)
2021-09-08T13:35:03.6860707-04:00 [DBG] Completed executing op "" on GetClusterConfig with key "" and opaque 1908 (88e8675d)
2021-09-08T13:35:03.6860707-04:00 [DBG] {"NetworkResolution":"auto","rev":403,"name":"CLUSTER","uri":null,"streamingUri":null,"nodes":[],"nodesExt":[{"thisNode":false,"services":{"mgmt":8091,"mgmtSSL":18091,"indexAdmin":9100,"indexScan":9101,"indexHttp":9102,"indexStreamInit":9103,"indexStreamCatchup":9104,"indexStreamMaint":9105,"indexHttps":19102,"kv":11210,"kvSSL":11207,"capi":8092,"capiSSL":18092,"projector":9999,"n1ql":8093,"n1qlSSL":18093,"cbas":0,"cbasSSL":0,"fts":8094,"ftsSSL":18094,"moxi":0},"hostname":"","alternateAddresses":null,"HasAlternateAddress":false},{"thisNode":true,"services":{"mgmt":8091,"mgmtSSL":18091,"indexAdmin":9100,"indexScan":9101,"indexHttp":9102,"indexStreamInit":9103,"indexStreamCatchup":9104,"indexStreamMaint":9105,"indexHttps":19102,"kv":11210,"kvSSL":11207,"capi":8092,"capiSSL":18092,"projector":9999,"n1ql":8093,"n1qlSSL":18093,"cbas":0,"cbasSSL":0,"fts":8094,"ftsSSL":18094,"moxi":0},"hostname":"","alternateAddresses":null,"HasAlternateAddress":false}],"nodeLocator":null,"uuid":null,"ddocs":null,"vBucketServerMap":{"hashAlgorithm":"","numReplicas":0,"serverList":[],"vBucketMap":[],"vBucketMapForward":[]},"bucketCapabilitiesVer":null,"bucketCapabilities":null,"clusterCapabilitiesVer":[1,0],"clusterCapabilities":{"n1ql":["enhancedPreparedStatements"]},"VBucketMapChanged":false,"ClusterNodesChanged":false} (161966ec)
2021-09-08T13:35:03.6860707-04:00 [DBG] Waiting for 00:00:02.5000000 before polling. (c8639b24)
2021-09-08T13:35:03.6870903-04:00 [DBG] Receiving new map revision 403 (36e624ac)
2021-09-08T13:35:03.6870903-04:00 [DBG] Updating new map revision 403 (7cfa9d49)


I can think of a couple of possible problems that may cause this.

The first is some kind of race condition, possibly within the SDK itself, that the different startup procedures of console vs. ASP.NET applications are triggering. You might try adding await cluster.WaitUntilReadyAsync(...) before getting the bucket to see if that makes a difference.
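A minimal sketch of that first suggestion, dropped into the InitializeCouchbase method from the snippet above (the 15-second timeout is just an example value):

```csharp
// Sketch: let the SDK finish bootstrapping before fetching buckets.
// WaitUntilReadyAsync throws if the cluster isn't ready within the timeout.
var cluster = await Cluster.ConnectAsync(config);
await cluster.WaitUntilReadyAsync(TimeSpan.FromSeconds(15));
DefaultBucket = await cluster.BucketAsync("default");
```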

The second thought I had was the SynchronizationContext. Legacy ASP.NET runs with a synchronization context that only lets one task continuation run at a time per HTTP request. A console app runs without a synchronization context at all, so continuations resume unrestricted on the thread pool. This could be causing some kind of deadlock or other issue.
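For illustration, the classic deadlock shape looks like this (the controller and method names here are hypothetical, assuming a System.Web.Mvc controller):

```csharp
// Hypothetical example of the classic sync-over-async deadlock under
// the legacy ASP.NET per-request SynchronizationContext.
public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Blocks the request thread waiting for the task to finish...
        var data = GetDataAsync().Result;
        return View(model: data);
    }

    private async Task<string> GetDataAsync()
    {
        // ...but this continuation is posted back to that same (blocked)
        // request thread via the synchronization context, so neither
        // side can ever proceed.
        await Task.Delay(100);
        return "done";
    }
}
```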

The SDK itself should be immune to this, unless there is a bug. Or it could be something within your startup logic leading up to InitializeCouchbase. A few things to try:

  1. Make sure you’re not using .Wait() or .Result on a Task, especially in your action methods. This is almost guaranteed to cause a deadlock.
  2. Try adding .ConfigureAwait(false) to all of the awaits within InitializeCouchbase.
  3. If that doesn’t work, try adding await Task.Delay(100).ConfigureAwait(false) at the top of InitializeCouchbase as an experiment (this should force the rest of the method off the synchronization context).

Note: I wouldn’t leave #3 in for production; that’s just for diagnosis. The others are fine for production.
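Suggestion #2, sketched against the connection snippet above (only the awaits change):

```csharp
// With ConfigureAwait(false), each continuation resumes on a thread-pool
// thread instead of being posted back to the captured ASP.NET
// synchronization context, sidestepping the deadlock.
var cluster = await Cluster.ConnectAsync(config).ConfigureAwait(false);
DefaultBucket = await cluster.BucketAsync("default").ConfigureAwait(false);
var defaultScope = await DefaultBucket.ScopeAsync("_default").ConfigureAwait(false);
DefaultCollection = await defaultScope.CollectionAsync("_default").ConfigureAwait(false);
```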


Thanks for the reply. I used .Result to make it work synchronously because I’m migrating from the 1.8 client to Couchbase 7. It looks like that caused a deadlock, as you mentioned. The old client is synchronous, and our entire app was built using synchronous methods. Now I have three options:

  1. Figure out a way to use the new client synchronously.
  2. Forget about migrating, since the old one has worked perfectly since 2015.
  3. Rewrite the entire application, regression testing, etc.
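For option 1, a common (if imperfect) bridge is to push the async call onto the thread pool so no ASP.NET synchronization context is captured; this avoids the .Result deadlock, though it still blocks a thread per call. A sketch (AsyncBridge and RunSync are hypothetical names, not part of the SDK):

```csharp
using System;
using System.Threading.Tasks;

public static class AsyncBridge
{
    // Runs an async function on a thread-pool thread and blocks for the
    // result. Task.Run means no SynchronizationContext is captured, so the
    // continuations can complete even though the caller thread is blocked.
    public static T RunSync<T>(Func<Task<T>> asyncFunc)
    {
        return Task.Run(asyncFunc).GetAwaiter().GetResult();
    }
}

// Usage, e.g. wrapping an SDK call from synchronous code:
// var result = AsyncBridge.RunSync(() => collection.GetAsync("someKey"));
```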