High water mark, low water mark and sizing

Hello experts.
The documentation states the following:

Low Water Mark (ep_mem_low_wat) - When the low water mark threshold is reached, it indicates that memory usage is moving toward a critical point and system administration action must be taken before the high water mark is reached.

But according to the sizing guidelines, we can plan for not all of the data to be in memory (which suits our use case), if I understand the working-set percentage correctly. In that case, won't the low water mark always be reached?
We sized according to those guidelines, and memory usage is now consistently at ~96% of the high water mark.
Should we scale out, or are we OK as long as we are satisfied with the number of documents held in memory?

The bucket in question uses value eviction, and the Couchbase Server version is 3.

Hello gluz,

Since your use case is fine with the number of documents held in memory, you should be OK. You'd see better performance with more memory, but provided you aren't constantly hitting the high water mark, there's no problem (the low water mark is mainly used internally by the server: once the high water mark is hit, ejection runs until memory usage drops back below the low water mark).
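If you want to keep an eye on this, a rough sketch along the lines below polls the per-bucket stats REST endpoint (port 8091) and reports where mem_used sits relative to ep_mem_low_wat and ep_mem_high_wat. The host, bucket name and credentials are placeholders, and the response layout ("op" / "samples") is as I understand the stats API for that server line, so verify it against your version:

```python
# Sketch only: poll the per-bucket stats REST endpoint and report where
# mem_used sits relative to the low and high water marks.
# Host, bucket name and credentials below are placeholders.
import base64
import json
import urllib.request

HOST = "http://127.0.0.1:8091"          # assumed cluster node
BUCKET = "mybucket"                      # assumed bucket name
AUTH = base64.b64encode(b"Administrator:password").decode()  # placeholder creds

def latest(samples, name):
    """Return the most recent sample for a stat, if the server reports it."""
    values = samples.get(name, [])
    return values[-1] if values else None

req = urllib.request.Request(
    f"{HOST}/pools/default/buckets/{BUCKET}/stats",
    headers={"Authorization": f"Basic {AUTH}"},
)
with urllib.request.urlopen(req) as resp:
    samples = json.load(resp)["op"]["samples"]  # layout assumed, check your version

mem_used = latest(samples, "mem_used")          # bytes in use by the bucket
low_wat = latest(samples, "ep_mem_low_wat")     # low water mark, bytes
high_wat = latest(samples, "ep_mem_high_wat")   # high water mark, bytes

print(f"mem_used:   {mem_used}")
print(f"low water:  {low_wat}")
print(f"high water: {high_wat}")

if mem_used is not None and high_wat:
    print(f"mem_used is {100.0 * mem_used / high_wat:.1f}% of the high water mark")
    if mem_used >= high_wat:
        print("At or above the high water mark: ejection pressure, possible TMPFAILs")
    elif low_wat is not None and mem_used >= low_wat:
        print("Between the water marks: expected steady state for value eviction")
    else:
        print("Below the low water mark: plenty of headroom")
```

In your case (~96% of the high water mark) the script would report "between the water marks", which is the normal operating range for a value-eviction bucket that only keeps its working set resident.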

The only thing that could cause issues is if you intend to use views, etc.

The definitive thing to do, however, is to thoroughly stress-test your cluster and verify that its performance meets your requirements (adding memory if not). Other than some loss of performance, there should be no adverse side effects from sitting between the water marks.
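As a rough example of what to watch during such a test, the sketch below pulls the same stats endpoint and estimates a cache-miss ratio from ep_bg_fetched (reads that had to go to disk) versus cmd_get, along with the active resident item ratio. Again, the host, bucket and credentials are placeholders, and I'm assuming the samples in the response are recent per-second rates, so treat the numbers as indicative only:

```python
# Sketch only: estimate a cache-miss ratio and resident item ratio from the
# per-bucket stats REST endpoint while your test workload is running.
# Stat names and response layout are assumptions; verify against your version.
import base64
import json
import urllib.request

HOST = "http://127.0.0.1:8091"           # assumed cluster node
BUCKET = "mybucket"                       # assumed bucket name
AUTH = base64.b64encode(b"Administrator:password").decode()  # placeholder creds

req = urllib.request.Request(
    f"{HOST}/pools/default/buckets/{BUCKET}/stats",
    headers={"Authorization": f"Basic {AUTH}"},
)
with urllib.request.urlopen(req) as resp:
    samples = json.load(resp)["op"]["samples"]

gets = sum(samples.get("cmd_get", []))                 # get operations in the window
disk_fetches = sum(samples.get("ep_bg_fetched", []))   # gets served from disk
resident = samples.get("vb_active_resident_items_ratio", [])

if gets:
    print(f"approx cache-miss ratio over the window: {100.0 * disk_fetches / gets:.2f}%")
if resident:
    print(f"active resident item ratio (latest): {resident[-1]:.1f}%")
```

If the miss ratio stays low under your expected load, the working set is fitting in memory and the sizing is doing its job; if it climbs, that is the signal to add memory or nodes.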