We’ve been using Couchbase for about a year and have gradually moved more and more functionality into it. It has worked wonderfully well (aside from some backup issues). With our latest revision, our largest documents will jump from around 400 KB to a little under 2 MB.
I don’t doubt Couchbase can handle this, but I’m a little worried it may cause performance issues, since these larger documents will be read and written frequently. We will be allocating enough RAM and nodes to hold our expected daily working set.
Are there any issues we should watch out for? Any implementation wrinkles for working with larger documents?
This will be on Linux, running Couchbase 3 Community Edition with Java SDK 1.4.2 on the client side.
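One mitigation we’ve been considering is gzip-compressing the JSON client-side before writing, since documents this large are mostly repetitive keys and compress well, which would cut both network transfer and memory per document. A rough sketch using only standard `java.util.zip` (the class and method names here are ours, not part of the Couchbase SDK):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class DocCompression {

    // Gzip a JSON document's bytes before handing them to the client for storage.
    static byte[] compress(String json) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    // Reverse the compression after a read, recovering the original JSON string.
    static String decompress(byte[] data) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
            return new String(bos.toByteArray(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a large, repetitive JSON array like our real documents.
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < 20000; i++) {
            sb.append("{\"id\":").append(i).append(",\"status\":\"active\"},");
        }
        sb.setCharAt(sb.length() - 1, ']');
        String doc = sb.toString();

        byte[] packed = compress(doc);
        System.out.println("original=" + doc.length() + " compressed=" + packed.length);
        System.out.println("roundtrip=" + doc.equals(decompress(packed)));
    }
}
```

The trade-off is that compressed values are opaque bytes, so anything that needs to read the JSON server-side (views, for example) can no longer see into the document; we’d only do this for documents we always read whole from the client.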
@travisgreer, in addition to what @qicb mentioned, you may want to review your SELECT queries and create partitioned indexes on the predicates, views, etc.
Thanks