Concurrency Issues

Using .NET client 2.1 with server 3.0+.

When doing concurrent requests, some get/set operations take a few seconds. I have increased max connections to 100 but didn't see much of a performance gain; on the server I can see it is doing only 6-8k ops/sec.

Creating the bucket/cluster is an expensive operation, hence we only do it once at application start. This is visible during tests: at about 5-10 business transactions/sec, each transaction takes around 150-250 ms, which is expected, but going beyond 15-20 transactions/sec results in the scenario above.

Increasing the max connections a bit more resulted in an out-of-memory exception. Looking at the source code for the SDK I can see BufferAllocator = p => new BufferAllocator(p.MaxSize * p.BufferSize, p.BufferSize); within each connection. Why is the buffer for each connection dependent on the max size of the pool? This results in significant memory requirements.

Port exhaustion isn't the issue; I have checked.

Could you post some example code? Your app is likely never going to allocate the 100 connections; you're better off setting the max much lower, like 10 or less.
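For example, something like this (a minimal sketch; the property names match the SDK's PoolConfiguration, but the numbers are only illustrative, not recommendations for any specific workload):

```csharp
// Illustrative pool settings only; tune MinSize/MaxSize for your workload.
var config = new ClientConfiguration
{
    PoolConfiguration = new PoolConfiguration
    {
        MinSize = 2,  // connections opened up front
        MaxSize = 10  // upper bound; far less than 100 is usually enough
    }
};
```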

The buffer is a pre-allocated, contiguous buffer; each connection gets assigned a discrete section of it. An optimization could be to allow the buffer to grow as the connection pool grows; the only downside is that it would create a number of objects on the LOH, i.e. one for each new allocation. That might not be an issue, however.
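The intended design can be sketched roughly like this (a simplified illustration, not the SDK's actual BufferAllocator; the class and member names here are hypothetical):

```csharp
using System;

// Simplified sketch: one contiguous buffer for the whole pool, carved into
// fixed-size sections, one per connection.
public class SharedBufferSketch
{
    private readonly byte[] _buffer;   // single allocation; lands on the LOH when > ~85,000 bytes
    private readonly int _sectionSize;
    private int _nextOffset;

    public SharedBufferSketch(int maxConnections, int sectionSize)
    {
        _sectionSize = sectionSize;
        _buffer = new byte[maxConnections * sectionSize];
    }

    // Each connection receives a discrete, non-overlapping slice of the shared buffer.
    public ArraySegment<byte> Allocate()
    {
        var section = new ArraySegment<byte>(_buffer, _nextOffset, _sectionSize);
        _nextOffset += _sectionSize;
        return section;
    }
}
```

Growing the buffer as the pool grows would instead mean one new array per growth step, which is where the extra LOH objects mentioned above would come from.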

-Jeff

We have a custom wrapper around the client; here is some of the code:
private ClientConfiguration CreateCouchbaseConfiguration(BucketInfo bucketInfo)
{
    return new ClientConfiguration
    {
        BucketConfigs =
            new Dictionary<string, BucketConfiguration> { { bucketInfo.Name, CreateBucketConfiguration(bucketInfo) } },
        SerializationSettings =
            new JsonSerializerSettings
            {
                ContractResolver = contractResolver,
                Converters = new[] { new StringEnumConverter { CamelCaseText = true } }
            },
        Servers = CreateServerUriList(bucketInfo),
        PoolConfiguration = new PoolConfiguration
        {
            MinSize = bucketInfo.MinConnectionPoolSize,
            MaxSize = bucketInfo.MaxConnectionPoolSize
        }
    };
}

private BucketConfiguration CreateBucketConfiguration(BucketInfo bucketInfo)
{
    return new BucketConfiguration
    {
        BucketName = bucketInfo.Name,
        Password = bucketInfo.Password,
        Port = bucketInfo.Port,
        Servers = CreateServerUriList(bucketInfo)
    };
}

The singleton is maintained by the DI container.

Get Implementation
public override INoSqlGetOperationResult Get(string key)
{
    IOperationResult couchbaseResult = bucket.Get(key);
    return ConvertCouchbaseGetResultToNoSql(couchbaseResult);
}

The buffer is a pre-allocated, contiguous buffer; each connection gets assigned a discrete section of the buffer.

Hi Jeff,

The issue we have seen by stepping through the code is that, instead of each connection getting a discrete section of the buffer, each connection creates a new buffer allocator, which creates a brand new buffer. As each buffer is sized to fit the maximum number of connections, memory usage grows quadratically as you increase the max connection size.
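To put rough numbers on the growth (the 16 KiB per-connection buffer here is an assumed figure for illustration, not necessarily the SDK's default BufferSize):

```csharp
using System;

// Assumed illustrative values; only the shape of the growth matters.
const int MaxSize = 100;           // maximum connections in the pool
const int BufferSize = 16 * 1024;  // assumed per-connection buffer of 16 KiB

// Intended: one shared buffer sized for the whole pool.
long intended = (long)MaxSize * BufferSize;           // ~1.6 MB total

// Observed: every connection allocates its own MaxSize * BufferSize buffer,
// so the total scales with MaxSize squared.
long observed = (long)MaxSize * MaxSize * BufferSize; // ~164 MB total

Console.WriteLine($"intended: {intended:N0} bytes, observed: {observed:N0} bytes");
```

Doubling MaxSize under the buggy scheme quadruples total buffer memory, which matches the out-of-memory exception seen when raising the max connections.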

-Nick

@nick_adcock -

I see… that's definitely a bug; you can read a bit more about it here: https://issues.couchbase.com/browse/NCBC-895

Thanks,

Jeff