User data size vs metadata size

I am setting up a 4-node cluster and doing some tests with Couchbase. The server version is 2.2.1. The type of the key is long, and I am using ‘{ }’ as the value for all keys. When I check the resources used in the web console, under VBucket Resources I see that user data in RAM is 696M and metadata is 570M. I am wondering why the user data is even bigger than the metadata: the metadata takes 64 bytes per document, while the size of the value is only 3 bytes. Does the user data include the metadata? If so, is the metadata stored twice in memory?
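For reference, the load is generated with something like the sketch below (the Java 1.x client, the hostname, and the document count are placeholders here; only the long keys and the ‘{ }’ value match my actual test):

```java
import com.couchbase.client.CouchbaseClient;

import java.net.URI;
import java.util.Arrays;
import java.util.List;

public class TinyDocLoader {
    public static void main(String[] args) throws Exception {
        // Connect to one node of the cluster (hostname is a placeholder).
        List<URI> nodes = Arrays.asList(URI.create("http://node1:8091/pools"));
        CouchbaseClient client = new CouchbaseClient(nodes, "default", "");

        // A long counter used as the key, '{ }' as the tiny value.
        for (long key = 0; key < 10000000L; key++) {
            client.set(String.valueOf(key), 0, "{ }").get();
        }

        client.shutdown();
    }
}
```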

Hello,

What do you mean by “key is long”?

The metadata/key pairs are stored for each replica. You can look at this part of the documentation:
http://docs.couchbase.com/couchbase-manual-2.2/#couchbase-bestpractice-sizing-ram

See the total_metadata variable in the calculation formula.
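As a rough sketch, the formula from that page looks like the following once you plug in numbers matching your description. The document count, replica count, working-set percentage, headroom and 8-byte key size below are illustrative assumptions, not values taken from your cluster:

```java
// Sketch of the RAM sizing formula from the 2.2 manual.
// documents_num, number_of_replicas, working_set_percentage, headroom
// and the 8-byte key are assumptions for illustration only.
public class RamSizing {
    public static void main(String[] args) {
        long documentsNum = 10000000L;     // assumed number of documents
        long idSize = 8;                   // assumed 8-byte key
        long metadataPerDocument = 56;     // per-document metadata overhead (2.1+)
        long valueSize = 3;                // the '{ }' value
        int numberOfReplicas = 1;          // assumed
        double workingSetPercentage = 1.0; // keep the whole dataset in RAM
        double headroom = 0.25;            // overhead for cluster operations
        double highWaterMark = 0.85;

        long noOfCopies = 1 + numberOfReplicas;
        long totalMetadata = documentsNum * (metadataPerDocument + idSize) * noOfCopies;
        long totalDataset = documentsNum * valueSize * noOfCopies;
        double workingSet = totalDataset * workingSetPercentage;
        double clusterRamQuota = (totalMetadata + workingSet) * (1 + headroom) / highWaterMark;

        System.out.printf("total_metadata           = %d bytes%n", totalMetadata);
        System.out.printf("cluster RAM quota needed ~ %.0f bytes%n", clusterRamQuota);
    }
}
```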

Regards
Tug
@tgrall

The length of each key is 8 bytes, and with the 56-byte metadata overhead, the size of the metadata for each doc is 64 bytes. The size of each key's value is 12 bytes (3 unicode chars), so I would expect the metadata to be 64/12 times the size of the user data. It seems that in the web UI's VBucket Resources page, the user data size includes both the metadata and the key/value.
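Here is the quick check I did on those UI numbers; treating “user data” as key + metadata + value is my own assumption about what the counter contains, not something I found documented:

```java
// Back-of-the-envelope check of the ratio between the two UI counters.
// The assumption that "user data" = key + metadata + value is mine.
public class RatioCheck {
    public static void main(String[] args) {
        double metaPlusKey = 56 + 8;      // metadata overhead + key, per copy
        double valuePerDoc = 12;          // '{ }' as measured above

        double observed = 696.0 / 570.0;                                     // ~1.22 from the UI
        double ifValueOnly = valuePerDoc / metaPlusKey;                      // ~0.19
        double ifValuePlusMeta = (valuePerDoc + metaPlusKey) / metaPlusKey;  // ~1.19

        System.out.printf("observed user data / metadata       : %.2f%n", observed);
        System.out.printf("expected if user data = value only  : %.2f%n", ifValueOnly);
        System.out.printf("expected if user data = value + meta: %.2f%n", ifValuePlusMeta);
    }
}
```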

Couchbase is built for high performance. Keeping the metadata in memory allows the database to serve low-latency gets and sets directly from memory, without any disk operations.

If you have very small values like ‘{ }’, you will see a relatively large metadata overhead, but that’s obviously not the common case. Most users have documents that are at minimum 300-500 bytes, going all the way up to a few KB, so in those cases the metadata overhead becomes significantly smaller.
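To illustrate how quickly that overhead shrinks, here is a small calculation using the 64 bytes of metadata + key discussed above and a few arbitrary value sizes:

```java
// How the relative metadata overhead drops as the value size grows.
// The 64 bytes of metadata + key comes from this thread; value sizes are arbitrary.
public class OverheadByValueSize {
    public static void main(String[] args) {
        long metaPlusKey = 64;
        long[] valueSizes = {12, 300, 500, 2048};
        for (long v : valueSizes) {
            double overheadPct = 100.0 * metaPlusKey / (metaPlusKey + v);
            System.out.printf("value %4d bytes -> metadata is %.1f%% of the in-memory item%n",
                    v, overheadPct);
        }
    }
}
```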

Metadata is stored for the active copy and each replica copy as well.
Do you have a real-world use case with values that small?

Also, in the future you will be able to choose, at the bucket level, whether you want the keys and metadata for the entire dataset kept in memory or not.