I’ve been evaluating Couchbase to figure out whether our company can continue to use it as our database solution, and I’ve hit a serious performance snag. I’m using the Couchbase Server 3.1.0 Enterprise install, since it seems more stable than the 4.x releases.

Not all of our documents will fit in memory. Most of our active dataset can, but sometimes we will need to retrieve documents from disk. I’ve been testing a single-node setup with the Couchbase data stored on four 100GB disks set up in a RAID 0 array. I get about 4ms latency reading/writing to the RAID volume and about 1500 IOPS.

I have a bucket with about 3.6 million docs of varying sizes (from about 195 bytes to about 113KB). If I run tests with documents of roughly the same size, it works fine: I can retrieve 5000 docs at once with a getMulti call from the Node.js Couchbase library. With the varying sizes described above, though, it generally fails at around 10 docs at once. By “fails” I mean it times out because retrieval takes longer than 2500ms. I find that sometimes it takes over 600ms just to retrieve a single document from disk, even when nothing else is using the disk or the VM at all.
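For reference, here is roughly how I could work around the timeout while investigating: a minimal sketch that splits a large key list into smaller batches so that one slow disk fetch doesn’t blow the 2500ms budget for the whole getMulti call. `chunk` and `fetchInBatches` are hypothetical helper names; the sketch assumes the Node.js SDK 2.x `bucket.getMulti(keys, callback)` API, where the callback receives a per-batch error count and a map of results.

```javascript
// Hypothetical batching helpers (not part of the Couchbase SDK).

// Split an array of keys into batches of at most `size` keys each.
function chunk(keys, size) {
  const batches = [];
  for (let i = 0; i < keys.length; i += size) {
    batches.push(keys.slice(i, i + size));
  }
  return batches;
}

// Fetch all keys in small batches and merge the results. Assumes a
// bucket object exposing the SDK 2.x getMulti(keys, callback) API.
function fetchInBatches(bucket, keys, batchSize, callback) {
  const results = {};
  const batches = chunk(keys, batchSize);
  let pending = batches.length;
  if (pending === 0) return callback(null, results);

  batches.forEach(function (batch) {
    bucket.getMulti(batch, function (errCount, docs) {
      // In SDK 2.x, the first callback argument is the number of keys
      // that failed; merge whatever succeeded and keep going.
      if (docs) Object.assign(results, docs);
      if (--pending === 0) callback(null, results);
    });
  });
}
```

This doesn’t make the disk reads faster, of course, but it would at least keep each individual getMulti under the timeout while I measure where the latency comes from.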
So: can Couchbase support a use case where we sometimes need to retrieve a couple hundred docs from disk? Is this the expected performance for Couchbase when retrieving docs from disk?