Disk page caching and Couchbase

Hey guys, I’m using Couchbase 4.5.1 Community Edition on Ubuntu 14.04 LTS. I have a 2-node cluster for my staging environment in AWS, with 1 bucket, 4 views, and 1 replica. I’m running it on a pair of r4.xlarge instances, each with 40GB of disk and 30GB of RAM.

I ingest roughly 17GB of data per day, as this cluster backs a REST API collecting telemetry data. The telemetry is stored as JSON docs with a TTL of 1 day.

I’ve noticed that while my bucket RAM quota is 10GB, RAM usage goes up to 30GB. Using free -m and top I can see that the disk page cache is using anywhere from 12-16GB. Is this normal? Is it viable to disable disk page caching, since Couchbase’s architecture keeps data in memory and persists to disk asynchronously?

The reason I ask is that when disk page cache usage is high, other processes on the VM slow down, including the SSH sessions I use for administrative tasks.

@architbaweja

Do you have your CB cluster configured correctly?
Here are some links:
https://www.couchbase.com/resources/presentations/tuning-couchbase-server-the-os-and-the-network-for-maximum-performance.html

Disabling THP: https://blog.couchbase.com/often-overlooked-linux-os-tweaks/
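For reference, the usual way to disable THP for the current boot is via sysfs (a minimal sketch; the exact path and whether `defrag` exists can vary by kernel, and you’d want an init script or systemd unit to make it persistent, as the linked post describes):

```shell
# Disable Transparent Huge Pages until the next reboot (requires root).
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

# Verify: the bracketed value should now read [never].
cat /sys/kernel/mm/transparent_hugepage/enabled
```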

This is an amusing (but accurate!) description of why the disk cache is using your memory (and yes, it’s normal): http://www.linuxatemyram.com
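You can see the distinction that site describes directly in `free` — the “buff/cache” column is the reclaimable page cache, and “available” estimates what processes can actually get without swapping:

```shell
# Show memory usage in MB. "buff/cache" is page cache the kernel will
# release under memory pressure; "available" is the realistic free figure,
# not the "free" column.
free -m
```

So a large buff/cache number by itself doesn’t mean Couchbase or the OS is out of memory.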

Thanks @househippo, I’ll double-check those settings. I actually used the Ansible playbook from couchbaselabs to provision these: https://github.com/couchbaselabs/ansible-couchbase-server

Anyway, I’ll check those settings again and see whether the Ansible role shipped with different defaults.

@drigby I have read the website you linked. While Linux seems to take a “trust me” approach to disk page caching, I was wondering whether there is still a way to tell Linux not to grow the page cache beyond a certain limit?
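As far as I know there is no hard cap on page-cache size, but a few sysctls influence how aggressively the kernel reclaims it and how much memory it keeps free for other processes. A hedged sketch (the values below are illustrative, not recommendations, and should be benchmarked before use):

```shell
# Make the kernel reclaim dentry/inode caches more eagerly (default 100).
sudo sysctl vm.vfs_cache_pressure=200

# Start background writeback of dirty pages earlier, and throttle
# writers sooner, so less dirty data accumulates in the cache.
sudo sysctl vm.dirty_background_ratio=5
sudo sysctl vm.dirty_ratio=10

# Reserve roughly 256MB that the kernel keeps free for other processes.
sudo sysctl vm.min_free_kbytes=262144

# Persist across reboots by adding the same keys to /etc/sysctl.conf.
```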

Also, does anyone know whether switching from ext4 to another filesystem would make a difference in an AWS environment?