I have 8 GB of data I want to save in a bucket, but the bucket's RAM quota is only 7.51 GB. After I save about 3 GB of data, the RAM/quota usage shows 7.14/7.51 GB, and saving the remaining data causes Couchbase errors such as couchbase.exceptions.TemporaryFailError. At the same time I get an alert on the web console: "Metadata overhead warning: Over 70% of RAM allocated to bucket "binnarydb" on node "192.168.1.101" is taken up by keys and metadata." But I must save all the data in the bucket. What should I do?
This usually happens when your keys and/or values are small, e.g. key (newyork) => value (1).
In Couchbase you have to remember that every key/value pair you store carries about 54 bytes of metadata overhead. So if your individual items are small but you have a lot of them, the total metadata can be large relative to the data itself.
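To see how fast that overhead adds up, here is a rough back-of-the-envelope sketch (the 54-byte figure comes from the answer above; the item counts and key/value sizes are made-up illustrative numbers, not your actual workload):

```python
# Rough illustration of per-item metadata overhead in Couchbase.
# ~54 bytes of metadata per key/value pair, plus the key bytes, stay resident in RAM.
ITEM_META_BYTES = 54

def ram_needed(num_items, avg_key_bytes, avg_value_bytes):
    """Approximate resident-memory footprint (bytes) for num_items documents."""
    per_item = ITEM_META_BYTES + avg_key_bytes + avg_value_bytes
    return num_items * per_item

# 100 million tiny items: ~8-byte keys (like "newyork"), 1-byte values.
total = ram_needed(100_000_000, 8, 1)
print(f"{total / 2**30:.2f} GiB")  # prints "5.87 GiB"
```

Note the actual values here total only about 0.1 GiB; nearly all of the ~5.9 GiB is keys and metadata, which is exactly the situation the console warning describes.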
In this screenshot http://www.couchbase.com/issues/secure/attachment/11317/Screen%20shot%202011-05-15%20at%205.45.07%20PM.png you can see the exact amount in the bottom left-hand corner. The solution is to allocate more memory to the cluster or add more nodes.
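Until more RAM or nodes are available, writes that hit memory pressure can at least be retried with a backoff, since the server may free room as it ejects items to disk. A minimal sketch (the exception name follows couchbase.exceptions.TemporaryFailError from the question; a stand-in class is defined here so the snippet is self-contained, and the retry counts/delays are arbitrary):

```python
import time

class TemporaryFailError(Exception):
    """Stand-in for couchbase.exceptions.TemporaryFailError."""

def store_with_backoff(op, retries=5, base_delay=0.05):
    # Retry a store operation when the server reports a temporary failure,
    # sleeping exponentially longer between attempts.
    for attempt in range(retries):
        try:
            return op()
        except TemporaryFailError:
            time.sleep(base_delay * 2 ** attempt)
    raise TemporaryFailError("still failing after %d retries" % retries)

# Demo with a fake operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky_upsert():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TemporaryFailError()
    return "ok"

print(store_with_backoff(flaky_upsert))  # prints "ok"
```

With the real SDK you would pass a closure over bucket.set/upsert as op. This only smooths over transient pressure; if the working set genuinely exceeds the quota, the retries will keep failing and more RAM is the only fix.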
Thank you for your answer. I have a further question: with one node in the cluster, after saving 3 GB of data the RAM/quota usage showed 3.1 GB/3.755 GB. Then I added a node, so the total RAM quota became 7.51 GB, and I expected the RAM/quota usage to show 3.1 GB/7.51 GB, but it did not: the web console showed 7.15 GB/7.51 GB. Why is that? Also, my values are binary data, so I don't think what I'm storing is the cause of the alert. Is there some way to store 8 GB of data in 7 GB of RAM? Can I make the bucket move data from RAM to disk, and if so, how?