cbbackup and memory usage

Hi,

I’ve had some problems with memory usage and cbbackup. My existing Couchbase cluster is tight on memory, and running cbbackup immediately leads to failures in the cluster (as observed from clients) and temporary OOM errors.

I’ve brought up a new cluster with (almost) twice as much memory and will be putting it into production soon. While running some tests against it, I found results I don’t understand. The new cluster has had a backup dataset loaded and is not yet being used by clients. At idle it uses around 4GB of memory (of 24GB configured), but when I test cbbackup, memory usage spikes to 24GB almost immediately. I can only imagine that this will still lead to failures once production clients are involved.

What would explain this behavior? Is it expected? Does cbbackup simply use whatever memory is available - i.e. would it use less if less were available?

Some details about my setup - it’s all on AWS: 4 m1.large instances, with the target bucket using 6GB on each, and 100GB SSD EBS drives for disk. No other applications are running on the instances. A fifth instance (m3.large) is running cbbackup. The dataset has 14M items and approximately 150GB of data in total.

Any insight into cbbackup memory usage, and whether I should expect cbbackup to fail while the cluster is in use, would be greatly appreciated.

Thanks,
Travis


I am experiencing this same issue with version 3.0.1 Community on CentOS 6.5 in Rackspace. Python version “python.x86_64 2.6.6-52.el6”.

The cbbackup utility just uses all available memory until the VM’s kernel OOM-kills it, along with memcached.

Tasks: 242 total,   1 running, 241 sleeping,   0 stopped,   0 zombie
Cpu(s):  6.7%us,  5.2%sy,  0.0%ni, 86.1%id,  0.8%wa,  0.0%hi,  0.0%si,  1.2%st
Mem:  30822556k total, 28809180k used,  2013376k free,   114604k buffers
Swap:        0k total,        0k used,        0k free,  5510528k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
25979 couchbas  20   0 16.9g  16g 6468 S 14.6 56.4  17:07.67 memcached
26840 root      20   0 5474m 4.6g 8012 S 200.8 15.7  75:39.92 python
14072 couchbas  20   0 2148m 331m 1964 S  7.0  1.1 939:51.42 beam.smp
14043 couchbas  20   0 1275m  23m 1232 S  0.3  0.1  18:50.06 beam.smp 

Mar 9 20:44:10 couchdbwhois1113r kernel: Out of memory: Kill process 14119 (memcached) score 567 or sacrifice child
Mar 9 20:44:10 couchdbwhois1113r kernel: Killed process 14119, UID 497, (memcached) total-vm:17794644kB, anon-rss:17453500kB, file-rss:8kB
Mar 9 20:44:10 couchdbwhois1113r kernel: Out of memory: Kill process 4630 (python) score 320 or sacrifice child
Mar 9 20:44:10 couchdbwhois1113r kernel: Killed process 4630, UID 0, (python) total-vm:11581564kB, anon-rss:10773124kB, file-rss:4kB

You can probably use the extra config parameters available for the cbbackup command to limit its memory usage, for example as shown below the list.
Available extra config parameters (-x):
batch_max_bytes=400000 (Transfer this # of bytes per batch);
batch_max_size=1000 (Transfer this # of documents per batch);
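
For example, a cbbackup invocation with smaller batch limits might look like the sketch below. The host, credentials, bucket name, and backup directory are placeholders, and the batch values are only illustrative - how much they actually reduce peak memory will depend on your item sizes, so they may need tuning:

# Back up one bucket with smaller batches to cap cbbackup's working set.
# cb-node1, Administrator/password, mybucket and /backups/mybucket are placeholders.
/opt/couchbase/bin/cbbackup http://cb-node1:8091 /backups/mybucket \
  -u Administrator -p password \
  -b mybucket \
  -x batch_max_size=500,batch_max_bytes=200000

Multiple -x options are passed as a single comma-separated list, as shown above.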