We’ve found that our Couchbase cluster has been running consistently high on CPU and RAM utilization. Before upgrading the infrastructure, we wanted to ask whether adding resources is really the best approach, or whether we could/should change our persistence strategy instead. Here is our use case:
- Write ~2-3 million records daily
- Purge ~2-3 million records daily (by setting a 45-day TTL on each document)
- We process the data within the first 48 hours, and during that window CRUD performance is important
- After 48 hours, we never touch the data again, except rarely to research issues and of course when it’s purged
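For context, here is roughly how we compute the 45-day TTL at write time. One wrinkle we had to handle (assuming our reading of the expiration docs is right): Couchbase treats expiry values up to 30 days as a relative offset in seconds, but anything larger must be sent as an absolute Unix timestamp. The function name and structure below are just our own sketch, not SDK API:

```python
import time

# Couchbase interprets expiry values above this threshold (30 days, in
# seconds) as an absolute Unix timestamp rather than a relative offset.
THIRTY_DAYS = 30 * 24 * 60 * 60
FORTY_FIVE_DAYS = 45 * 24 * 60 * 60


def expiry_for(ttl_seconds, now=None):
    """Return the expiry value to pass to the SDK for a given TTL.

    TTLs of 30 days or less can be sent as-is (relative seconds);
    longer TTLs, like our 45-day window, are converted to epoch time.
    """
    now = int(time.time()) if now is None else int(now)
    if ttl_seconds <= THIRTY_DAYS:
        return ttl_seconds          # relative offset is accepted directly
    return now + ttl_seconds        # must be an absolute epoch timestamp


# For the 45-day purge window described above:
expiry = expiry_for(FORTY_FIVE_DAYS)
```

We then pass `expiry` as the document expiration on every write.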
We were thinking that if there were a way to persist data that is at least 48 hours old to disk, rather than keeping it in memory, it would dramatically reduce our resource needs, bearing in mind that performance on data past 48 hours is not important. I saw the persistTo flag, but I’d prefer to do this at the bucket level, or with some sort of time-based setting, rather than having to update each document after a couple of days.
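For reference, the closest bucket-level knob we found so far is the eviction policy. This is only a sketch of what we were considering (assuming `couchbase-cli bucket-edit` accepts `--bucket-eviction-policy` on our version; the host, credentials, and bucket name below are placeholders), and we're not sure it addresses the time-based part of the question:

```shell
# Switch the bucket from valueOnly to fullEviction so that both keys and
# values of non-resident documents can be ejected from RAM (valueOnly
# keeps all keys/metadata resident). Host, credentials, and bucket name
# are placeholders.
couchbase-cli bucket-edit \
  -c cb-node-1:8091 \
  -u Administrator -p password \
  --bucket our-bucket \
  --bucket-eviction-policy fullEviction
```

As far as we can tell, though, eviction is driven by memory pressure and access patterns rather than document age, so it wouldn't let us target "older than 48 hours" specifically.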
Thank you in advance!