Why is RAM not freed with the “full eviction” policy?

Hello,
I’m very confused about a memory-handling issue in Couchbase.
Let me explain the situation fully.
There are three VMs: CentOS 7 minimal, 10 GB RAM, 8 cores, 500 GB disk each.
I tried importing data from Oracle into the cluster and got the error “Temporary failure received from server. Try again later”. I opened ticket MB-19391 and they answered me (thanks to them), but I’m not satisfied and, more importantly, my problem is still not solved.
I set a 6.6 GB memory quota per Couchbase Server node and allocated 5.5 GB to the bucket (there is only one bucket),
with no replicas, for better performance.
800 million records were inserted, and then “Temporary failure received from server. Try again later” occurred.
Now I have some questions:
1. Why isn’t RAM fully cleared when I set the bucket to “full eviction”?
2. Why is RAM still not free after the task has been stopped for a day? It should be cleared automatically once inserting is done, am I right?
3. As you can see in the screenshots of the “Server Nodes” tab, RAM usage shows 95% of the total, but when I look at the details it is closer to 60%. Does anyone know why?
P.S. The Oracle import is the only task running on this cluster.

(screenshots omitted)

This problem is blocking all of my activity.

Thanks.

Data is not evicted unless memory is needed. That’s what the low / high watermarks are for.

As for the temporary failure, that sounds like the rate of ingestion is too high, so Couchbase can’t clean up / flush quickly enough to keep up with your incoming data stream.
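
When the server answers with a temporary failure, the usual client-side remedy is to back off and retry the write. Below is a minimal sketch in Python; `TemporaryFailureError` is a stand-in for whatever exception your SDK raises for this error, and `upsert` is whatever write function your SDK exposes:

```python
import random
import time

class TemporaryFailureError(Exception):
    """Stand-in for your SDK's temporary-failure exception (the one
    behind "Temporary failure received from server. Try again later")."""

def upsert_with_backoff(upsert, key, doc, max_retries=10):
    """Retry a write with exponential backoff while the server is
    shedding load because memory is above its high watermark."""
    for attempt in range(max_retries):
        try:
            return upsert(key, doc)
        except TemporaryFailureError:
            # Give the server time to flush/evict before retrying.
            delay = min(30.0, 0.1 * (2 ** attempt)) + random.uniform(0, 0.1)
            time.sleep(delay)
    raise RuntimeError("giving up on %r after %d temporary failures"
                       % (key, max_retries))
```

This slows the importer down to whatever rate the cluster can actually sustain, instead of hammering a node that is already over its watermark.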

Hi,
many thanks, this was most of my problem.
But is there a command I can run so that the program waits until all data has been flushed to disk before continuing to the next line?
I ask because the cluster has been in the status below for three days.

Your clients should be getting backoff warnings, but if memory is full and you’re inserting too fast, there’s really nothing you can do other than:

  1. add nodes
  2. add RAM
  3. If you’re running locally, add RAID-0 to your storage.

You can also try adjusting the compaction settings (see the sketch below), but fundamentally, once you fill up memory, you’re I/O bound.
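
For reference, auto-compaction can be tuned through the REST API. This sketch uses Couchbase Server’s `/controller/setAutoCompaction` endpoint; the host, credentials, and threshold value are placeholders, so check the REST reference for your server version before relying on it:

```python
import requests

# Hedged sketch: lower the database fragmentation threshold so
# compaction kicks in earlier and reclaims disk space sooner.
resp = requests.post(
    "http://127.0.0.1:8091/controller/setAutoCompaction",
    auth=("Administrator", "password"),          # admin credentials
    data={
        "databaseFragmentationThreshold[percentage]": 30,
        "parallelDBAndViewCompaction": "false",  # required by the endpoint
    },
)
resp.raise_for_status()
print("auto-compaction settings updated")
```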

OK! Thanks.
Since you’re a cool guy :slight_smile: let me ask another one. :wink:
If I monitor the “disk write queue” in the “General Bucket Analytics” tab and change my code to keep it at a normal level, can I be sure this issue won’t happen again?
And do you know of any reference that explains these graphs?

many thanks.

I do not quite understand what you’re saying.

But you can certainly watch the write-queue statistics (how big the queues are and how long items spend in them) to see when you’ve got a problem. I don’t think you can control the queues directly, though; you could, however, throttle your own writes based on them, as in the sketch below.
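
As an illustration, the bucket-stats REST endpoint exposes the disk write queue length, so an importer can pause between batches until the queue drains. The endpoint path comes from the Couchbase REST API, but the exact JSON layout and a sensible threshold are assumptions here; verify both against your own cluster:

```python
import time
import requests

def wait_for_drain(host, bucket, auth, threshold=100_000, poll_secs=5.0):
    """Block until the bucket's disk write queue drops below `threshold`.

    Reads GET /pools/default/buckets/<bucket>/stats; the JSON layout
    ("op" -> "samples" -> "disk_write_queue") may vary by server
    version, so check your cluster's actual response first."""
    url = "http://%s:8091/pools/default/buckets/%s/stats" % (host, bucket)
    while True:
        samples = requests.get(url, auth=auth).json()["op"]["samples"]
        queue_len = samples["disk_write_queue"][-1]  # most recent sample
        if queue_len < threshold:
            return queue_len
        time.sleep(poll_secs)  # let the queue drain before re-checking

# Example: call between import batches so writes never outrun the disk.
# wait_for_drain("127.0.0.1", "mybucket", auth=("Administrator", "password"))
```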

Hello,
many thanks again.
I’m referring to the graphs below:

(screenshot omitted)

Do you know of any reference for these?

Looks like you’re writing far too frequently for your hardware. Add nodes, more drives, more RAM…

Take a look at your write rate, the size of your writes, etc. Examine your I/O utilization; there are suggestions on optimizing it here:

Just make sure you understand everything that it says.
