Hard Out Of Memory Error when I insert data

I encountered the following errors when I inserted a huge amount of documents.

[07:47:01] - Hard Out Of Memory Error. Bucket "default" on node 172.31.9.215 is full. All memory allocated to this bucket is used for metadata.
[07:47:01] - Hard Out Of Memory Error. Bucket "default" on node 172.31.15.241 is full. All memory allocated to this bucket is used for metadata.
[07:47:01] - Hard Out Of Memory Error. Bucket "default" on node 172.31.9.214 is full. All memory allocated to this bucket is used for metadata.
[07:47:01] - Hard Out Of Memory Error. Bucket "default" on node 172.31.15.242 is full. All memory allocated to this bucket is used for metadata.

What does this mean, and was there any loss of data?

There was also an error when I searched documents.

A ‘hard out of memory’ error happens when you’ve tried to insert more items than will fit in the bucket’s memory quota; in this case, all of the memory allocated to the bucket is being used for metadata. No data would be lost unless you weren’t checking the results of your store operations for errors.
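For example, a store path that checks results and retries temporary failures might look like the following. This is a minimal sketch assuming the 2.x Python SDK; retry_upsert, the backoff values, and the connection details are all illustrative:

```python
from time import sleep

from couchbase.bucket import Bucket
from couchbase.exceptions import TemporaryFailError


def retry_upsert(bucket, key, doc, attempts=5, backoff=0.5):
    # Retry a store operation when the server reports a temporary
    # failure (e.g. the bucket is out of memory) instead of
    # silently losing the item.
    for attempt in range(attempts):
        try:
            return bucket.upsert(key, doc)
        except TemporaryFailError:
            sleep(backoff * (2 ** attempt))  # back off, then retry
    raise RuntimeError('giving up on %s after %d attempts' % (key, attempts))


bucket = Bucket('couchbase://172.31.9.215/default')  # placeholder address
retry_upsert(bucket, 'doc::1', {'type': 'example'})
```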

You may want to revisit the sizing of your cluster. Couchbase also has a full-ejection mode, which limits a bucket’s capacity only by what the disk will hold. The docs describe the options.
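If you do go to full ejection, the mode is set per bucket. Here is a rough sketch against the cluster REST API with a placeholder host and credentials; note the caveats in the comments, and treat this as an assumption rather than a recipe:

```python
import requests

# Rough sketch: switch a bucket's ejection mode via the REST API.
# Changing the eviction policy restarts the bucket, and some server
# versions expect other bucket settings (e.g. ramQuotaMB) to be
# resent on an edit.
resp = requests.post(
    'http://172.31.9.215:8091/pools/default/buckets/default',
    auth=('Administrator', 'password'),
    data={'evictionPolicy': 'fullEviction'},
)
resp.raise_for_status()
```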

Thank you for your prompt reply.
I have already set ‘full ejection’ mode.

Thank you for the tips about creating a new bucket.

The error did not occur when the bucket did not have a N1QL secondary index.
So, I’d like some tips on tuning N1QL indexing.

  • How does the bucket disk I/O priority affect performance?
  • How many index services should a cluster have?
  • How do I determine the index memory size?

Thanks!

Please post a new topic so the questions may be found by others. Quick answers:

  • Lower-priority buckets get less priority in I/O scheduling than higher-priority ones.
  • You can have as many index services as you need; that depends more on your dataset, rate of change, and expected workload. See the docs for some pointers, and the sketch after this list for index placement.
  • Do you mean how to size it? You may need to experiment a bit or review the sizing docs. Size is partly determined by your workload.
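On spreading indexes over your index service nodes, here is a sketch assuming Couchbase Server 4.x GSI and the 2.x Python SDK; the index name, field, and node address are placeholders:

```python
from couchbase.bucket import Bucket
from couchbase.n1ql import N1QLQuery

bucket = Bucket('couchbase://172.31.9.215/default')  # placeholder address

# Pin a secondary index to a specific index-service node, so the
# indexes you create are spread across the nodes you provisioned.
q = N1QLQuery(
    'CREATE INDEX idx_type ON `default`(type) USING GSI '
    'WITH {"nodes": ["172.31.15.241:8091"]}'
)
for row in bucket.n1ql_query(q):
    print(row)
```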

Also, note that if you have or are considering an Enterprise subscription, the Couchbase folks can possibly help analyze and size your deployment.

Hi ingenthr,

I will post a new topic.
In the meantime, I have a follow-up question: can a N1QL index have replication? I think the index of document keys is replicated. Is that true?

I will consider an EE subscription in the next phase.

Thanks!

Out of interest, what kind of sets/second were you achieving when you were inserting your data?
It is possible that your sets/second were far outstripping the ~3000 mutations/second limit of the indexer, and as a result your items were being queued in the projector’s DCP queue, resulting in the hard OOM error that you saw.
This could also be why using a new bucket without indexes helped, as it did not need to project the changes to the indexer.
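If that was the case, pacing the writes can help. A rough sketch assuming the 2.x Python SDK; paced_insert, the batch size, and the rate budget (set below the ~3000 mutations/second figure above) are illustrative:

```python
import time

from couchbase.bucket import Bucket


def paced_insert(bucket, docs, ops_per_sec=2500, batch_size=250):
    # Write in batches, sleeping as needed so sustained throughput
    # stays under ops_per_sec, so the projector's DCP queue does
    # not build up faster than the indexer can drain it.
    interval = batch_size / float(ops_per_sec)
    batch = {}
    for key, doc in docs:
        batch[key] = doc
        if len(batch) >= batch_size:
            started = time.time()
            bucket.upsert_multi(batch)  # one round trip per batch
            batch = {}
            spare = interval - (time.time() - started)
            if spare > 0:
                time.sleep(spare)
    if batch:
        bucket.upsert_multi(batch)


bucket = Bucket('couchbase://172.31.9.215/default')  # placeholder address
paced_insert(bucket, (('doc::%d' % i, {'n': i}) for i in range(10000)))
```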