High Indexer CPU usage

Our Couchbase cluster is running Community Edition 6.5.0 build 4966. We have noticed that the indexer service starts to consume a large amount of CPU when it receives certain requests, and the CPU usage does not drop even after the requests stop coming in. This causes queries to pile up and slow down.

Sample query causing the issue:
SELECT * FROM rules
WHERE type = 'raw'
  AND algorithm.algorithmName = 'vtb'
  AND algorithm.timeFrame = '30'
  AND oid = '12345'
  AND ctid = 'cw-12345-qbcde'
  AND catID = 'pc-12345-gZWfi1BA'
  AND (runDate LIKE '2021-01-12%' OR runDate LIKE '2021-01-11%')
  AND SOME s IN tid SATISFIES s IN ['cc_076', 'cc_070', 'cc_064'] END
ORDER BY dateCreated DESC
LIMIT 50
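
As far as we understand, the SOME s IN tid SATISFIES ... END clause is the part of this query that only gets index support through an array index key. A rough sketch of that kind of index is below; the index name and key order are illustrative only, not our actual definition.

-- Illustrative sketch only: the shape of array index the SOME ... SATISFIES
-- predicate would need (index name and key order are hypothetical)
CREATE INDEX idx_rules_tid_lookup ON rules(
  oid, ctid, catID,
  DISTINCT ARRAY s FOR s IN tid END,
  algorithm.algorithmName, algorithm.timeFrame,
  runDate, dateCreated)
WHERE type = 'raw';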

This query completes in about 200 milliseconds, but we still see the CPU usage increasing and other queries starting to slow down.
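
Prefixing the statement above with EXPLAIN shows which index the planner actually selects, and the definitions and state of the indexes on the bucket can be listed from system:indexes, for example:

-- Generic listing of the index definitions on the bucket; nothing here is specific to our setup
SELECT name, index_key, state, `using`
FROM system:indexes
WHERE keyspace_id = 'rules';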

We have tried limiting the indexer threads to prevent the service from using all the available CPU, but that caused the indexer service to fail intermittently, which in turn slowed down queries, so the indexer thread count is currently set to 0 (unbounded). The index storage mode is 'Standard Global Secondary'. We usually run with 20 GB of memory allocated to the indexer service, and we have not seen much improvement even after increasing this to 40 GB.
We have also increased the node size from 8 CPUs and 64 GB to 16 CPUs. Doubling the CPU and increasing the memory allocated to the indexer service did not bring much improvement.

To recover, we have to restart the indexer service. However, the issue reappears as soon as these queries start coming in again.

We are trying to understand why the indexer does not release resources even after the queries have finished executing.


We are seeing the same issue here with 7.0.2 CE.
Did you resolve this?

@alston_dmello First of all, apologies. I am not sure how this post went unnoticed for so long. Are you still facing the issue?

@markdg, I see that you have raised another post: https://www.couchbase.com/forums/t/why-run-indexer-in-high-cpu-load-after-500-select-queries. I think we can continue the discussion there.

Thanks,
Varun