Memory usage / beam.smp vs memcached

Is there an explanation somewhere of the exact memory usage of all the Couchbase processes?

Let me give an example:

  • cluster with 3 nodes, 3.7 GB RAM per node
  • per-server RAM quota set to 2000 MB
  • Couchbase web interface shows “memory used as measured from mem_used”: 1.04 GB
  • memory consumed by the beam.smp process, according to ps (see the example invocation below): 27% (~1020 MB), which seems consistent with the value shown in the web interface
  • memory consumed by the memcached process, according to ps: 43.8% (~1620 MB).
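
For reference, the per-process percentages above come from something like the following ps invocation (the exact field list is just one way to do it):

$ ps -eo pmem,rss,pid,comm --sort=-pmem | head -4
# pmem = %MEM, rss = resident set size in KB; on a Couchbase node,
# beam.smp and memcached should show up near the top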

So, in general, I have noticed some inconsistencies:

  • beam.smp and memcached together consume more memory than the per-server RAM quota
  • the “memory used” reported by Couchbase does not include the memory used by the memcached process

Now some questions:

  • What exactly is the memcached process? Does it cache keys/values? What is the memory in beam.smp used for, versus the memory used by the memcached process?
  • Is there any way to control the memcached process’s memory usage?
  • Does the memcached process automatically “scale” its memory usage to take advantage of free memory for caching, or how does it allocate memory?

I’m asking because our machines are getting very close to 100% memory usage. We have limited what we believe is Couchbase’s memory usage, but the total memory consumption didn’t change.

@korekontrol

First of all, we recommend a bare minimum of 4 GB on each Couchbase node for any kind of reliable testing. In production, I would suggest at least 8 GB of RAM per node.

The “RAM quota” setting only applies to the amount of memory that memcached can use to store all of the buckets defined on the cluster. It does not account for memory used by the cluster manager (which handles XDCR, views, rebalance, failures and management operations), so it is essential that you leave headroom (at the very least ~20% of the total RAM on the system) for the cluster manager and the OS.
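
To put rough numbers on it (using the figures from your post): a 2000 MB quota plus the ~1 GB that beam.smp is already using comes to about 3 GB, which leaves only around 0.7 GB of a 3.7 GB node for the OS, page cache and everything else, so it is no surprise those machines are running close to full.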

At a high level, you can think of memcached as the storage engine that caches and persists data for you.

beam.smp is the Erlang VM, which is typically responsible for managing rebalance, failures, heartbeats, XDCR, views, etc.
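
If it helps, memcached reports its own view of memory through cbstats; a quick check might look like the following (default install path and port assumed, and you may need -b <bucket> for non-default buckets):

$ /opt/couchbase/bin/cbstats localhost:11210 all | grep -E 'mem_used|ep_max_size'
# mem_used    - memory the bucket engine believes it is currently using
# ep_max_size - that bucket's per-node RAM quota, in bytes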

The “RAM quota” setting gives you that control knob.
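
For example, the cluster-wide quota can be adjusted on a running cluster through the REST API; a rough sketch (host, credentials and the 2000 MB value are placeholders):

$ curl -u Administrator:password -X POST http://localhost:8091/pools/default \
      -d memoryQuota=2000
# sets the per-node cluster RAM quota to 2000 MB; individual bucket quotas
# can then be sized within that limit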

If this is your production environment, then you really need more RAM on your servers; otherwise the OOM killer might kick in and kill Couchbase processes.

@asingh, thanks a lot for the very comprehensive explanation. It helps us a lot.
The servers have 3.75 GB of memory (AWS m3.large instances). The good news is that the three machines of this size make up our staging and development clusters; for production we have more and bigger machines.

One more question: on the production cluster (machines with 60 GB of memory), I’ve reduced the cluster RAM quota (to make sure there is enough memory for beam.smp and the OS), but the memory usage of the memcached process did not go down. Is there any way to trigger some kind of “garbage collection” or “memory reclaim” operation for the memcached process, or do I have to restart Couchbase to cause it?

If restarting is the only way, our preferred approach would be to add nodes (we’re in the cloud, so that’s not a problem) and proceed with a swap rebalance. We have four nodes; is it OK to add four other nodes, do a swap rebalance, restart Couchbase, and then swap-rebalance back to the old nodes? (We would prefer to keep the old IP addresses of the current nodes.)
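
For reference, the kind of swap I mean could be driven from couchbase-cli roughly as follows (hostnames and credentials are placeholders, and exact flags may differ between versions):

$ /opt/couchbase/bin/couchbase-cli rebalance -c existing-node:8091 \
      -u Administrator -p password \
      --server-add=new-node:8091 \
      --server-add-username=Administrator \
      --server-add-password=password \
      --server-remove=old-node:8091
# adding and removing an equal number of nodes in a single rebalance is
# what makes it a swap rebalance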


I have similar issues here: our cluster has 9 nodes, each with 256 GB of memory. We set the Couchbase quota to 242 GB; however, the management UI shows that on two of the nodes the memory usage is more than the quota: data usage is only 16 GB, while “other usage” is 233 GB.

Logging into one of these two servers, I can see that memcached is the process taking a huge amount of memory:

$ ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5
 6.7  2.2 18208628 29561 /opt/couchbase/bin/memcached -C /opt/couchbase/var/lib/couchbase/config/memcached.json
 0.4 17.9 3374176 29436 /opt/couchbase/lib/erlang/erts-5.10.4.0.0.1/bin/beam.smp -A 16 -sbt u -P 327680 -K true -swt low -MMmcs 30 -e102400 -- -root /opt/couchbase/lib/erlang -progname erl -- -home /opt/couchbase -- -smp enable -setcookie nocookie -kernel inet_dist_listen_min 21100 inet_dist_listen_max 21299 error_logger false -sasl sasl_error_logger false -nouser -run child_erlang child_start ns_bootstrap -- -smp enable -couch_ini /opt/couchbase/etc/couchdb/default.ini /opt/couchbase/etc/couchdb/default.d/capi.ini /opt/couchbase/etc/couchdb/default.d/geocouch.ini /opt/couchbase/etc/couchdb/local.ini
 0.1  1.6 1307068 30344 /opt/couchbase/bin/indexer -vbuckets=1024 -cluster=127.0.0.1:8091 -adminPort=9100 -scanPort=9101 -httpPort=9102 -streamInitPort=9103 -streamCatchupPort=9104 -streamMaintPort=9105 -storageDir=/mnt/storage2/couchbase/data/@2i
%MEM %CPU    VSZ   PID CMD
 0.0  3.5 1453028 30329 /opt/couchbase/bin/projector -kvaddrs=127.0.0.1:11210 -adminport=:9999 127.0.0.1:8091

I also tried to upload a screenshot from the Couchbase management UI.

Is there any way to stop this strange memory usage? It’s only 2 out of the 9 machines, so it’s very skewed.


memcached is taking ~18 GB of virtual memory. The “Other Data” is whatever else your machine’s OS is using, most likely buffers/cache if the server isn’t running any other significant programs.

Look at those figures in top. Also see the (somewhat tongue-in-cheek) web page [Linux ate my RAM!](http://www.linuxatemyram.com) :wink:
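
A quick way to see that split on the node itself (the exact layout depends on your procps version):

$ free -m
# on older procps, the "-/+ buffers/cache" row shows memory use with the
# page cache excluded; on newer versions, look at the "available" column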
