Couchbase 3.1.0 - Hard out of memory error when performing full backup

We recently migrated our cluster to Couchbase 3.1.0 Enterprise edition.
The odd thing is that when performing a full backup of a bucket, the web UI alerts “Hard Out Of Memory Error. Bucket X on node Y is full. All memory allocated to this bucket is used for metadata”. I looked into the server logs, but haven’t found any similar errors there. We have quite a big bucket, and the backup requires adding an amount of RAM comparable to the size of the bucket - e.g. to back up a 100 GB bucket we have to allocate an additional 80 GB of RAM.
Is that even normal? Did we miss something?

I also asked on SO, without any luck: http://stackoverflow.com/questions/32949144/couchbase-3-1-0-hard-out-of-memory-error-when-performing-full-backup

Hello. Any luck on this? We are having similar issues; however, it looks like it is fixed in the next version.

Hey there,

This is a known issue in the Couchbase Server 3.x releases.

To understand the problem, we first need to understand the Database Change Protocol (DCP), the protocol used to transfer data throughout the system. At a high level, the flow control for DCP is as follows (a conceptual sketch follows the list):

  1. The Consumer creates a connection with the Producer and sends an Open Connection message. The Consumer then sends a Control message to indicate per-stream flow control. This message will contain “stream_buffer_size” in the key section and the buffer size the Consumer would like each stream to have in the value section.
  2. The Consumer will then start opening streams so that it can receive data from the server.
  3. The Producer will then continue to send data for each stream that has buffer space available until it reaches the maximum send size.
  4. Steps 1-3 continue until the connection is closed, as the Consumer continues to consume items from the stream.
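To make that flow control a bit more concrete, here is a minimal, purely illustrative Python sketch of the idea: the Consumer advertises a per-stream buffer size, the Producer stops sending on a stream once that much data is in flight, and sending resumes as the Consumer drains its buffer and acknowledges what it has processed. This is not the real DCP wire protocol or any Couchbase client API; all of the names and sizes below are made up for illustration.

```python
from collections import deque

STREAM_BUFFER_SIZE = 3  # stands in for the value sent in the Control message

class Producer:
    """Toy stand-in for the server side; pauses a stream when its buffer is full."""
    def __init__(self, items_per_stream):
        self.pending = {vb: deque(items) for vb, items in items_per_stream.items()}
        self.in_flight = {vb: 0 for vb in items_per_stream}  # unacknowledged items

    def send(self, vbucket):
        # Step 3: only send while the Consumer's buffer has space for this stream.
        if self.in_flight[vbucket] >= STREAM_BUFFER_SIZE or not self.pending[vbucket]:
            return None  # buffer full (or stream exhausted): pause this stream
        self.in_flight[vbucket] += 1
        return self.pending[vbucket].popleft()

    def buffer_ack(self, vbucket, count):
        # The Consumer reports it has drained `count` items, freeing buffer space.
        self.in_flight[vbucket] -= count

class Consumer:
    """Toy stand-in for the client side; opens streams and drains them slowly."""
    def __init__(self, producer, vbuckets):
        self.producer = producer
        self.buffers = {vb: deque() for vb in vbuckets}  # steps 1-2: open streams

    def step(self, drain=True):
        for vb, buf in self.buffers.items():
            item = self.producer.send(vb)       # receive whatever the Producer sends
            if item is not None:
                buf.append(item)
            if drain and buf:
                buf.popleft()                   # process one item...
                self.producer.buffer_ack(vb, 1) # ...and acknowledge it

if __name__ == "__main__":
    producer = Producer({0: list(range(10)), 1: list(range(10))})
    consumer = Consumer(producer, vbuckets=[0, 1])
    for i in range(40):                         # step 4: repeat until done;
        consumer.step(drain=(i % 2 == 0))       # drain slower than we receive
    # At no point does any per-stream buffer hold more than STREAM_BUFFER_SIZE items.
```

Because the Producer pauses a stream whenever its buffer is full, the Consumer's memory use stays bounded no matter how much data sits behind the stream; the problem described below is what happens when that cap is absent.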

However, the cbbackup utility does not implement any flow control (data buffer limits), and it will try to stream all vbuckets from all nodes at once, with no cap on the buffer size.
This does not mean that it will use the same amount of memory as your overall data size (the streams are drained slowly by the cbbackup process), but it does mean that a large memory overhead is required to store the data streams.
When you are in a heavy DGM (disk greater than memory) scenario, the memory required to hold the streams is likely to grow faster than cbbackup can drain them, because large quantities of data are being streamed off disk; the resulting streams become very large and, as mentioned above, take up a lot of memory.
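To put rough numbers on that, the toy calculation below shows how quickly a backlog builds when the nodes can stream data off disk faster than cbbackup drains it and nothing caps the in-flight data. The rates and duration are invented purely for illustration; they are not measured cbbackup figures.

```python
# Hypothetical rates, chosen only to illustrate the arithmetic.
stream_rate_mb_s = 400   # how fast the nodes stream data off disk (assumed)
drain_rate_mb_s = 250    # how fast cbbackup drains and writes it out (assumed)
duration_s = 600         # ten minutes of backup

# With no flow control, the difference between the two simply accumulates
# in the bucket's memory as undrained stream data.
backlog_gb = max(0, stream_rate_mb_s - drain_rate_mb_s) * duration_s / 1024
print(f"Undrained stream data after {duration_s}s: ~{backlog_gb:.0f} GB")  # ~88 GB
```

Numbers on that scale are consistent with needing tens of gigabytes of extra RAM to back up a 100 GB bucket, as described in the original question.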

The slightly misleading message about metadata taking up all of the memory is displayed because there is no memory left for the data: all of the remaining memory is allocated to the metadata, which cannot be ejected from memory when using value eviction.

The reason that this only affects Couchbase Server versions prior to 4.0 is that 4.0 introduced a server-side improvement to DCP stream management that allows DCP streams to be paused to keep the memory footprint down; this is tracked as MB-12179.
As a result, you should not experience the same issue on Couchbase Server versions 4.x and later, regardless of how heavily DGM your bucket is.

Workaround

If you find yourself in a situation where this issue is occurring, then terminating the backup job should release all of the memory consumed by the streams immediately.
Unfortunately, if most of your data has already been evicted from memory as a result of the backup, then for a period of time reads will have to fetch a large quantity of data from disk instead of RAM, which is likely to increase your get latencies.
Over time, ‘hot’ data will be brought back into memory as it is requested, so this will only be a problem for a short period; however, it is still a fairly undesirable situation to be in.

The workaround to avoid this issue completely is to stream only a small number of vbuckets at once when performing the backup, rather than all vbuckets at once, which is what cbbackup does by default.

This can be achieved using cbbackupwrapper, which comes bundled with all Couchbase Server releases 3.1.0 and later; details of using cbbackupwrapper can be found in our documentation.
In particular, the parameter to pay attention to is the -n flag, which specifies how many vbuckets are backed up in each batch.
As the name suggests, cbbackupwrapper is simply a wrapper script on top of cbbackup which partitions the vbuckets and automatically handles all of the directory creation and backup generation, while still using cbbackup under the hood.
As an example, with a batch size of 50, cbbackupwrapper would back up vbuckets 0-49 first, followed by 50-99, then 100-149, and so on.
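For illustration, the small sketch below reproduces that batching scheme, assuming the usual default of 1,024 vbuckets per bucket; the helper function is hypothetical and not part of cbbackupwrapper itself.

```python
def vbucket_batches(num_vbuckets=1024, batch_size=50):
    """Yield (first, last) vbucket ranges in the order described above."""
    for start in range(0, num_vbuckets, batch_size):
        yield start, min(start + batch_size, num_vbuckets) - 1

# With -n 50 the batches would be (0, 49), (50, 99), (100, 149), ...
print(list(vbucket_batches())[:3])
```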

It is suggested that you test cbbackupwrapper in a testing environment which mirrors your production environment to find suitable values for -n and for -P, which controls how many backup processes run at once; the combination of these two controls both the amount of memory pressure caused by the backup and its overall speed.
You should not find that lowering the value of -n from its default of 100 decreases the backup speed; in some cases the backup may actually get faster because there is far less memory pressure on the server.
You may, however, wish to increase the -P parameter sensibly if you want to speed up the backup further.

Below is an example command:

cbbackupwrapper http://[host]:8091 [backup_dir] -u [user_name] -p [password] -n 50

Note that if you use cbbackupwrapper to perform your backup, you must also use cbrestorewrapper to restore the data, as cbrestorewrapper is automatically aware of the directory structures used by cbbackupwrapper.

Hope this helps you understand the problem and how to avoid it in the future!

Sorry for the late reply. We upgraded to 4.x shortly afterwards and the problem went away.