Large document performance

We’ve been using Couchbase for a year or so and have gradually moved more and more functionality into it. It has worked wonderfully well (except for some backup issues). With our latest revision, our largest documents will jump from around 400 KB to a little under 2 MB.

I don’t doubt Couchbase can handle this, but I’m a little worried it may cause performance issues. These larger objects will be read and written frequently. We will be allocating enough RAM and nodes to hold our expected daily working set.

Are there any issues we should look out for? Any implementation wrinkles to be aware of with a larger document size?

This will be on Linux using Couchbase 3.0 Community Edition, with Java SDK 1.4.2 on the client side.
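
For reference, a minimal sketch of the read/write path we have in mind with the 1.4.x client. The node address, bucket name, key, and compression threshold below are placeholders rather than our real values, and compressing values over a threshold is just one idea for keeping ~2 MB documents cheaper in RAM and on the wire:

```java
import com.couchbase.client.CouchbaseClient;
import net.spy.memcached.transcoders.SerializingTranscoder;

import java.net.URI;
import java.util.Arrays;
import java.util.List;

public class LargeDocSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the cluster (node address and bucket name are placeholders).
        List<URI> nodes = Arrays.asList(URI.create("http://127.0.0.1:8091/pools"));
        CouchbaseClient client = new CouchbaseClient(nodes, "default", "");

        // Compress values above a threshold so large documents cost less RAM
        // and network; the 64 KB threshold is an assumption, not a recommendation.
        SerializingTranscoder tc = new SerializingTranscoder();
        tc.setCompressionThreshold(64 * 1024);

        String largeJson = "{ ... a little under 2 MB of JSON ... }";

        // Write, then block until the cluster acknowledges the operation.
        client.set("doc::123", 0, largeJson, tc).get();

        // Read it back with the same transcoder so it is decompressed.
        String doc = (String) client.get("doc::123", tc);
        System.out.println(doc.length());

        client.shutdown();
    }
}
```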

Thanks!
Travis

Hi, Travis,

Glad to hear that you have had a good experience using Couchbase. We have seen customers use documents larger than 2 MB.

From a performance perspective, yes, you will need to adjust your RAM and disk settings to suit your needs. The bottleneck may come from many places: RAM, disk I/O, or disk space. Please refer to this blog post about sizing: http://blog.couchbase.com/how-many-nodes-part-1-introduction-sizing-couchbase-server-20-cluster
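
To make the arithmetic concrete, here is a back-of-envelope sketch in the spirit of that post. Every number in it is a made-up assumption, so please plug in your own document count, key size, and replica settings:

```java
// Rough working-set sizing; all inputs are illustrative assumptions.
public class SizingSketch {
    public static void main(String[] args) {
        long docs = 100000;                     // documents in the daily working set
        long valueBytes = 2L * 1024 * 1024;     // ~2 MB per document
        long metadataBytes = 56 + 40;           // approx. per-key metadata + key length
        int copies = 2;                         // 1 active + 1 replica
        double highWaterMark = 0.85;            // usable fraction of bucket RAM

        double workingSet = docs * (double) (valueBytes + metadataBytes) * copies;
        double ramGb = workingSet / highWaterMark / (1024.0 * 1024 * 1024);

        System.out.printf("Approximate bucket RAM needed: %.0f GB%n", ramGb);
    }
}
```

Note that at ~2 MB per document the value size dominates and the per-key metadata is negligible, so RAM capacity is very likely to be your first constraint.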

Thanks,
Qi

Thank you for the information! It’s always a little nerve-racking going into new territory like this.

You are welcome.

Let me know how your adventure goes; we are always here to help.

Thanks,
Qi

@travisgreer, in addition to what @qicb mentioned, you may review your SELECT queries, create partitioned indexes on the predicates, views, etc.
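
For example, with the 1.4.2 Java client you can query a view so that only the small emitted keys and values come back instead of the full ~2 MB documents. The design document name, view name, and key below are placeholders:

```java
import com.couchbase.client.CouchbaseClient;
import com.couchbase.client.protocol.views.Query;
import com.couchbase.client.protocol.views.View;
import com.couchbase.client.protocol.views.ViewResponse;
import com.couchbase.client.protocol.views.ViewRow;

public class ViewSketch {
    // List document summaries via a view, without fetching the documents
    // themselves. "docs"/"by_type" and the key "order" are placeholders.
    static void listSummaries(CouchbaseClient client) {
        View view = client.getView("docs", "by_type");
        Query query = new Query()
                .setKey("order")           // filter on the emitted key
                .setIncludeDocs(false)     // return emitted values only, not full docs
                .setLimit(100);

        ViewResponse response = client.query(view, query);
        for (ViewRow row : response) {
            System.out.println(row.getId() + " -> " + row.getValue());
        }
    }
}
```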
Thanks