This makes sense, as Clusters and Buckets are expensive to construct and initialize. I built a few prototypes to determine the optimal implementation, but the right pattern still eludes me.
I started with a repository singleton that brokers requests (Get(), Upsert(), etc.) to Couchbase. The repository is thread safe and intended to run in a web application, so dozens of threads will be using it simultaneously. It is up to the repository to instantiate the Cluster and Bucket objects as it sees fit.
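To make the setup concrete, here is a simplified sketch of the repository's shape (names are made up for illustration; it assumes the Couchbase .NET SDK 2.x types Cluster and IBucket — my real code differs, and how the Cluster/Bucket fields get populated is what varies between the models below):

```csharp
using System;
using Couchbase;
using Couchbase.Core;

// Sketch only: a thread-safe singleton that brokers requests to Couchbase.
public sealed class CouchbaseRepository
{
    private static readonly Lazy<CouchbaseRepository> _instance =
        new Lazy<CouchbaseRepository>(() => new CouchbaseRepository());

    public static CouchbaseRepository Instance => _instance.Value;

    private CouchbaseRepository()
    {
        // Cluster/Bucket construction strategy varies per model (#1–#4 below).
    }

    public T Get<T>(string key)
    {
        // Delegates to IBucket.Get<T>(key) on whatever bucket the model provides.
        throw new NotImplementedException();
    }

    public void Upsert<T>(string key, T value)
    {
        // Delegates to IBucket.Upsert(key, value).
        throw new NotImplementedException();
    }
}
```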
Model #1 - Construct a new Cluster and open a Bucket for each request. This is obviously not right, as each request takes several seconds. It's probably not very socket- or memory-efficient either.
Model #2 - Construct the Cluster at start up (shared within the repository singleton) and open a Bucket for each request (in a using statement). This is noticeably faster but still takes more than a second per request.
Model #3 - Construct the Cluster and Bucket at start up and use the same Bucket object to handle all requests. This is definitely the fastest, but for some reason the bucket reference isn't released when the application ends. This causes unit tests to keep a read lock on the application assemblies, so compiles fail after the first test run. I can abort the test runner process, but I'm concerned about deployment and run-time stability.
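For reference, #3 looks roughly like this (simplified; my actual code also implements the Get()/Upsert() brokering described above):

```csharp
using System;
using Couchbase;
using Couchbase.Core;

// Sketch of model #3: one Cluster and one IBucket shared for the
// lifetime of the application (Couchbase .NET SDK 2.x).
public sealed class SharedBucketRepository : IDisposable
{
    private readonly Cluster _cluster;
    private readonly IBucket _bucket;

    public SharedBucketRepository()
    {
        _cluster = new Cluster();                 // default/app.config configuration
        _bucket = _cluster.OpenBucket("default"); // opened once, reused by all threads
    }

    public IOperationResult<T> Get<T>(string key) => _bucket.Get<T>(key);

    public void Dispose()
    {
        _bucket.Dispose();
        _cluster.Dispose();
    }
}
```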
Model #4 - I figured the problem with #2 might be that the Bucket is closed after each request, so the Cluster can't keep a connection open. I tried opening a Bucket when the repository is constructed and then opening another instance for each request, but it didn't improve on the performance of #2.
I expect #3 is in the right direction, but I can't figure out why the bucket isn't closed when the AppDomain shuts down. Implementing IDisposable and finalizers hasn't helped either.
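This is the kind of teardown I've tried for #3, hooking disposal into AppDomain unload and the test fixture's cleanup (sketch; SharedBucketRepository stands in for my repository singleton), with no effect on the assembly lock:

```csharp
// Attempted teardown: dispose the shared bucket and cluster when the
// AppDomain unloads. The handler fires, but the lock persists.
AppDomain.CurrentDomain.DomainUnload += (sender, args) =>
{
    repository.Dispose();  // Bucket.Dispose() followed by Cluster.Dispose()
};
```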
Any help you can offer will be greatly appreciated.