We are currently evaluating Couchbase for a distributed cache use case and I am looking to get clarity on the following:
1. Does the C++ client provide the ability to set timeouts at the connection level, authentication level, and request level? The documentation here covers only the Java, .NET, and PHP clients.
2. Will moxi (client side) work with a Couchbase bucket? The reason for going the moxi route is to reuse long-running connections (saving on TCP connection-setup handshake time) and to multiplex multiple clients over shared connections (reducing file-descriptor usage on the servers). Note that in our environment we expect a very large number of client processes.
3. If moxi is not an option, can you describe how the C++ smart client deals with connections?
We are looking for ways to save on connection-setup handshake time while at the same time avoiding starving or crashing the server/cluster by opening a very large number of file descriptors from our large number of client processes.
Since the smart clients use persistent connections, what best practices do you recommend for a large number of client processes connecting to a cluster without sacrificing stability/performance? On one extreme, the child processes hosting the smart clients hold on to their connections even though not all of them are performing get/put operations at any given time; on the other extreme, each child process connects and shuts down for every cache operation (which is obviously a very bad option).
1. libcouchbase (the C client; I guess you are using it from your C++ application) allows you to specify only one timeout, but you can change it after a successful connection. For example, before lcb_connect() set it to 3 seconds, which will be the effective timeout for the connection phase, and then set it to 1 second for all further data operations. We don't expose separate authentication timeouts for either REST or SASL: the first is part of lcb_connect(), and the second happens before the first data packet is sent to the selected data port.
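The two-phase timeout described above might look like the following sketch. This assumes the libcouchbase 1.x-era C API (lcb_create/lcb_set_timeout/lcb_connect); exact names and the create options may differ in your version, and error handling plus host/bucket configuration are omitted:

```c
#include <libcouchbase/couchbase.h>

lcb_t instance;
struct lcb_create_st options = { 0 }; /* host/bucket/credentials omitted */
lcb_create(&instance, &options);      /* check return codes in real code */

/* Use a generous timeout for the connect/bootstrap phase
 * (timeouts are expressed in microseconds) ... */
lcb_set_timeout(instance, 3000000); /* 3 seconds */
lcb_connect(instance);
lcb_wait(instance);

/* ... then tighten it for ordinary data operations. */
lcb_set_timeout(instance, 1000000); /* 1 second */
```

Because there is just the one timeout value on the handle, it governs whichever operation is in flight, so the trick is simply to change it at the right moment.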
2. Yes, but with the moxi proxy people usually employ plain memcached clients, which cannot query the Map/Reduce engine through Couchbase Views.
3. libcouchbase opens N+1 sockets: one connection to listen for cluster configuration changes, and N active data connections, which are opened on demand when libcouchbase routes your key to a vbucket living on a node. It closes data sockets only when it receives a new configuration from the server (rebalance) or when the handle is destroyed. Note that although libcouchbase handles don't share anything between them and don't use global variables, it isn't safe to share a handle between multiple threads; you should implement external locking to protect handles that are accessed from multiple threads.