We’re finally moving some of our core application queries away from traditional views into N1QL-based queries. In doing this, we are reviewing the following documentation:
Ideally what we would like to do is start with Standard Global Secondary Indexes, then measure how large they are and what % stays cached in memory. In the “Index Stats” section of the Web UI we can see the size of the index, but I can’t find any metrics on what % of the index is cached in memory versus what must be read from disk.
Is there some other way to see what % of a Standard GSI is cached in memory?
@jeffhoward001, roughly 60% of the Indexer RAM Quota gets assigned to the cache for Standard GSI (this will change from 5.0 onwards). For example, with a 10 GB Indexer RAM Quota, about 6 GB would serve as the Standard GSI cache.
I have filed MB-25670 to expose these stats on UI.
Thanks Deepkaran, do you think this will make it into the 5.0 GA release?
@jeffhoward001, 5.0 GA is already in feature freeze. This will make it to the release after that.
@jeffhoward001 With 5.0, we have also introduced a new storage engine (called Plasma) for GSI.
I would highly recommend that you take our 5.0 beta for a spin and let us know of your feedback.
Also, do share your Views use case, so that we can advise on how it can be moved over to N1QL.
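To illustrate the kind of migration being discussed, here is a hedged sketch of what a typical Views-to-N1QL move looks like: a view that emits documents keyed by a field can usually be replaced by a GSI-backed N1QL query. The bucket name (`app-bucket`) and fields (`type`, `status`, `total`) below are hypothetical, not from the original thread:

```sql
-- Partial index covering only "order" documents (illustrative names).
CREATE INDEX idx_orders_by_status ON `app-bucket`(status)
WHERE type = "order" USING GSI;

-- The query predicate matches the index WHERE clause, so the
-- query engine can use idx_orders_by_status instead of a view.
SELECT META().id, total
FROM `app-bucket`
WHERE type = "order" AND status = "pending";
```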
Thank you for sharing, Venkat. One question: the article mentioned “SSD and Flash storage” several times. Does this mean that Couchbase 5.0 will be more tolerant of the non-persistent local SSD drives in Azure and EC2?
Both cloud platforms offer extremely fast (100,000 IOPS) local SSD storage, but the storage is non-persistent if your VM gets moved to another host. Ideally we could place our index storage on these volumes, and in the unlikely event that we’re moved to a new host, Couchbase would rebuild the indexes on that node.
I submitted a forum post on this a few years back when 4.0 went GA, and the answer was that storing indexes on non-persistent storage was “loosely supported” - Storing indexes on temporary storage
We’re really hoping that rebuilding missing index files is formally supported in 5.0 - do you know if that’s true?
As many of our customers are using SSD/Flash, we have to make sure that the product leverages the hardware’s capabilities, and hence Plasma is optimised for such storage.
With ephemeral storage, such as that offered by many cloud vendors, the nodes are transient. You can fail over the node and add new nodes as needed; the indexes will be rebuilt.
With 5.0, we support index replicas as well, so that you do not have to rebuild indexes manually. Manageability gets a big facelift in 5.0 with respect to indexing. https://blog.couchbase.com/transition-index-replicas/
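The index replicas mentioned above are declared at index-creation time. A minimal sketch, reusing the same illustrative bucket and field names as earlier (not from the original thread):

```sql
-- 5.0 syntax: keep one extra copy of the index on another index node,
-- so losing a single node does not require a manual rebuild.
CREATE INDEX idx_orders_by_status ON `app-bucket`(status)
WITH {"num_replica": 1};
```

With `num_replica: 1`, the cluster needs at least two index-service nodes; queries are load-balanced across the replicas, and a failed replica is rebuilt on rebalance rather than by hand.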