Couchbase Resource usage question

Copy-pasting a student’s inquiry to support:
My name is Jamie Tabone and I am an undergraduate student reading for a degree in Software Development at the University of Malta. As part of my dissertation, I am conducting a research study that compares two web technology stacks for their ability to scale linearly in resource usage as throughput increases.

The first stack is the MEAN stack, which mainly consists of Node.js and MongoDB. The other is a proposed Erlang-based stack, which mainly consists of Cowboy as the web server and Couchbase (given that it is partially built in Erlang) as the database server.

During the tests, Couchbase's resource usage looks as shown in the charts below. Do you have any idea why Couchbase follows no specific resource usage pattern as load increases? In fact, even before the actual test was started, the Couchbase server showed the same pattern of resource usage.

Note that the test scenario simulates a web application with a high level of writes to the database. In fact, a user document (inc. name, surname, username, password, dob and about) is added to the server for each request. The test is 20 minutes long and the number of requests per second increases by 30 every 30 seconds, starting with a load of one request per second, up to 1170 requests per second.
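For clarity, here is a minimal sketch (not the actual test harness) of the load profile and per-request payload described above. The field names and ramp parameters come from the description; the document values and everything else are illustrative assumptions:

```python
# Sketch of the load ramp and per-request document described above.
import uuid

TEST_DURATION_S = 20 * 60   # 20-minute test
STEP_INTERVAL_S = 30        # load increases every 30 seconds...
STEP_INCREMENT_RPS = 30     # ...by 30 requests per second
START_RPS = 1               # starting load

def target_rps(elapsed_s: int) -> int:
    """Requests per second in effect after `elapsed_s` seconds of the test."""
    return START_RPS + (elapsed_s // STEP_INTERVAL_S) * STEP_INCREMENT_RPS

def make_user_doc() -> dict:
    """One user document per request, matching the fields listed above."""
    return {
        "name": "Jane",
        "surname": "Doe",
        "username": f"user-{uuid.uuid4().hex[:8]}",
        "password": "not-a-real-password",
        "dob": "1995-01-01",
        "about": "Generated for the write-heavy benchmark.",
    }

if __name__ == "__main__":
    # Print the target load for each 30-second step of the test.
    for t in range(0, TEST_DURATION_S, STEP_INTERVAL_S):
        print(f"t={t:4d}s  target load = {target_rps(t)} req/s")
```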

Resources allocated to the database server: 7 GB of RAM and 4 CPU cores.

CPU Usage: (chart)

RAM Usage: (chart)

Thank you. Your help will be much appreciated.

Best regards,

Jamie Tabone


And my response:
To start answering your question: Couchbase uses resources very differently from what you may be used to with other database systems, especially MongoDB. Most databases rely heavily on disk and CPU to handle traffic and deliver performance. Couchbase, however, has a “memory first” architecture with a tightly managed cache (based on memcached) that no other technology provides out of the box today. Writes land in RAM first and are eventually written to disk, and reads are served directly from RAM without touching disk. This lets Couchbase lean much more heavily on RAM and place far less emphasis on disk and CPU for raw performance, which in turn frees disk and CPU for “background” tasks like index maintenance, compaction, etc.
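To make the idea concrete, here is a toy write-back cache illustrating the “memory first” pattern described above: writes are acknowledged once they are in RAM, persistence happens in the background, and reads are served from RAM. This is a conceptual sketch only, not how Couchbase is actually implemented.

```python
# Toy write-back store: acknowledge writes in RAM, flush to disk later,
# serve reads from RAM. Conceptual illustration only.
import json
import queue
import threading

class WriteBackStore:
    def __init__(self, disk_path: str):
        self._ram = {}                    # managed cache: reads hit this
        self._disk_path = disk_path
        self._pending = queue.Queue()     # queue of writes awaiting persistence
        self._flusher = threading.Thread(target=self._flush_loop, daemon=True)
        self._flusher.start()

    def set(self, key: str, value: dict) -> None:
        # The write is acknowledged as soon as it is in RAM...
        self._ram[key] = value
        # ...and persisted to disk later, off the request path.
        self._pending.put((key, value))

    def get(self, key: str):
        # Reads never touch the disk if the item is cached.
        return self._ram.get(key)

    def _flush_loop(self) -> None:
        # Background flusher: drain the queue and append to the "disk" file.
        while True:
            key, value = self._pending.get()
            with open(self._disk_path, "a") as f:
                f.write(json.dumps({key: value}) + "\n")
```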

The end result of all this is Couchbase’s ability to handle extremely high levels of traffic, whether reads, writes, or both, without saturating its resources. We routinely benchmark single nodes up to 200k operations/sec, and that then scales linearly as more nodes are added.

I would suggest that you “tweak” your benchmark in a few ways:

  • Try to push the load even higher; almost any database technology can handle 1k ops/sec, and you won’t really see the performance differences between them until you go well beyond that
  • Try to measure latency as well as throughput, since latency is sometimes even more important to the end user’s experience (see the sketch after this list)
  • Try to push more data into the system than you have RAM available. Other technologies (MongoDB especially) really start to suffer once they can no longer cache the entire dataset in RAM; Couchbase handles this very well.
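
As a starting point for the latency suggestion, here is a minimal sketch of recording per-request latency alongside throughput. `do_request` is a placeholder assumption standing in for whatever the benchmark actually does (an HTTP call or a database write); it is not part of the original test harness.

```python
# Record per-request latency and overall throughput for a batch of requests.
import time
import statistics

def do_request() -> None:
    time.sleep(0.002)  # placeholder for the real request

def run(n_requests: int) -> None:
    latencies_ms = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        do_request()
        latencies_ms.append((time.perf_counter() - t0) * 1000)
    elapsed = time.perf_counter() - start

    latencies_ms.sort()
    p95 = latencies_ms[max(0, int(0.95 * len(latencies_ms)) - 1)]
    print(f"throughput: {n_requests / elapsed:.1f} req/s")
    print(f"median latency: {statistics.median(latencies_ms):.2f} ms, p95: {p95:.2f} ms")

if __name__ == "__main__":
    run(1000)
```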

Hope that helps, Jamie!

Perry
