When doing concurrent requests, some get/set operations take a few seconds. I have increased max connections to 100 but didn't see much of a performance gain; on the server I can see it is doing only 6-8k ops/sec.
Creating the bucket/cluster is an expensive operation, so we only do it once at application start. This is visible during tests: at about 5-10 business transactions/sec, each transaction takes around 150-250ms, which is expected, but going beyond 15-20 transactions/sec results in the scenario above.
Increasing the max connections a bit further resulted in an out-of-memory exception. Looking at the source code for the SDK, I can see

BufferAllocator = p => new BufferAllocator(p.MaxSize * p.BufferSize, p.BufferSize);

within each connection. Why is the buffer for each connection dependent on the max size of the pool? This would result in significant memory requirements.
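To make the concern concrete, here is a rough, language-agnostic sketch (in Python, with an assumed 1 KiB buffer size rather than the SDK's actual default) of what happens if that factory runs once per connection, so each connection ends up with its own buffer sized for the whole pool:

```python
BUFFER_SIZE = 1024  # assumed per-operation buffer size, for illustration only

def per_connection_allocator(max_size):
    # Mirrors `p => new BufferAllocator(p.MaxSize * p.BufferSize, p.BufferSize)`
    # being evaluated once per connection: every call returns a brand-new
    # buffer big enough for the entire pool.
    return bytearray(max_size * BUFFER_SIZE)

def total_memory(max_size):
    # One such buffer per connection in the pool.
    buffers = [per_connection_allocator(max_size) for _ in range(max_size)]
    return sum(len(b) for b in buffers)

print(total_memory(8))   # 65536  (8 buffers of 8 KiB)
print(total_memory(16))  # 262144 (doubling the pool quadruples the memory)
```

Under this reading, total memory is MaxSize² × BufferSize, which would explain the out-of-memory exception at higher connection counts.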
Could you post some example code? Your app is likely never going to allocate all 100 connections; you're better off setting the max much lower, like 10 or less.
The buffer is a pre-allocated, contiguous buffer; each connection gets assigned a discrete section of it. An optimization could be to allow the buffer to grow as the connection pool grows; the only downside is that it would create a number of objects on the LOH, i.e. one for each new allocation. That might not be an issue, however.
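For readers following along, a minimal sketch of the intended design described here (one contiguous allocation, with each connection leased a discrete, non-overlapping section) might look like the following. The class name, sizes, and API are illustrative, not the SDK's actual code:

```python
class SharedBufferAllocator:
    """One contiguous buffer for the whole pool; connections lease sections."""

    def __init__(self, max_connections, buffer_size):
        self.buffer_size = buffer_size
        # A single allocation sized for the entire pool, made once.
        self.buffer = bytearray(max_connections * buffer_size)
        # Each connection is handed a distinct offset into that buffer.
        self.free_offsets = [i * buffer_size for i in range(max_connections)]

    def acquire(self):
        # Lease a non-overlapping section; no new buffer is created here.
        offset = self.free_offsets.pop()
        return memoryview(self.buffer)[offset:offset + self.buffer_size]

allocator = SharedBufferAllocator(max_connections=4, buffer_size=1024)
section = allocator.acquire()
print(len(section))          # 1024
print(len(allocator.buffer)) # 4096 — total memory is linear in pool size
```

With this design, total memory is MaxSize × BufferSize, growing linearly rather than quadratically with the pool size.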
> The buffer is a pre-allocated, contiguous buffer; each connection gets assigned a discrete section of the buffer.
Hi Jeff,
The issue we have seen by stepping through the code is that, instead of each connection getting a discrete section of the buffer, each connection creates a new buffer allocator, which allocates a brand-new buffer. Since each buffer is sized to fit the maximum number of connections, memory usage grows quadratically as you increase the max connection size.
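The quadratic progression can be quantified by comparing the observed behavior against the intended shared-buffer design. The 16 KiB buffer size below is an assumption for illustration; the real default may differ:

```python
BUFFER_SIZE = 16 * 1024  # assumed buffer size per connection slot

def per_connection_total(n):
    # Observed behavior: n connections, each allocating its own
    # n * BUFFER_SIZE buffer -> n^2 * BUFFER_SIZE in total.
    return n * n * BUFFER_SIZE

def shared_total(n):
    # Intended behavior: one n * BUFFER_SIZE buffer shared by all
    # connections -> linear in n.
    return n * BUFFER_SIZE

MIB = 1024 * 1024
for n in (10, 50, 100):
    print(n, per_connection_total(n) / MIB, shared_total(n) / MIB)
```

At max connections of 100 the observed scheme needs 100× the memory of the shared scheme (roughly 156 MiB versus 1.6 MiB under these assumptions), which matches the out-of-memory exception reported earlier in the thread.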