I’ve been having problems with memory usage when running cbbackup. My existing Couchbase cluster is tight on memory, and running cbbackup immediately leads to failures in the cluster (as observed from clients) and temp OOM errors.
I’ve brought up a new cluster with (almost) twice as much memory and will be putting it into production soon. While running some tests against it, I found results I don’t understand. The new cluster has had a backup dataset loaded but is not yet being used by clients. At idle, it uses around 4GB of memory (of the 24GB configured). But when I test cbbackup, memory usage spikes to 24GB almost immediately. I can only imagine this will still lead to failures once production clients are involved.
What would explain this behavior? Is it expected? Does cbbackup just use whatever memory is available - i.e. would it use less if less were available?
Some details about my setup: it’s all on AWS, four m1.large nodes, with the target bucket allocated 6GB of RAM on each and a 100GB SSD EBS volume for disk. No other applications are running on the instances. A fifth instance (m3.large) runs cbbackup. The dataset has 14M items and approximately 150GB of data in total.
Any insight into cbbackup memory usage, and whether I should expect cbbackup to fail while the cluster is in use, would be greatly appreciated.