Are there any ways we can optimize memory usage?
For example, reducing the amount of data delivered in a response. There are multiple fields we don't actually need, such as the name of the index, maxScore, etc.
We are currently searching 135 fields in our JSON documents.
The issue, as you noted here, is the huge memory requirement for running the incoming queries. If you can afford more RAM/FTS quota, that is one way to buy some short-term relief from the problem.
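If you do go the quota route, the Search service RAM quota can be raised through the cluster settings REST endpoint. A minimal sketch, assuming placeholder host, credentials, and quota size (substitute your own):

```python
import requests

# Hypothetical values -- adjust the host, credentials, and quota to
# match your cluster.
resp = requests.post(
    "http://localhost:8091/pools/default",
    data={"ftsMemoryQuota": 4096},  # new Search service quota, in MB
    auth=("Administrator", "password"),
)
resp.raise_for_status()
```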
> For example, reducing the amount of data delivered in a response. There are multiple fields we don't actually need, such as the name of the index, maxScore, etc.
Currently there is no way to do this. One option is to paginate the results, though requests for higher page numbers can still consume significant memory; see the sketch below.
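As a rough illustration, here is a minimal sketch of paginated queries against the FTS REST query endpoint. The host, index name, credentials, page size, and query text are placeholders, not values from this thread:

```python
import requests

# Hypothetical values -- substitute your own cluster host, index name,
# and credentials.
FTS_HOST = "http://localhost:8094"
INDEX_NAME = "my-fts-index"
AUTH = ("Administrator", "password")
PAGE_SIZE = 10

def search_page(query_text: str, page: int) -> dict:
    """Fetch a single page of FTS results via the REST query endpoint."""
    body = {
        "query": {"query": query_text},
        # "size" and "from" implement pagination: page N starts at
        # offset N * PAGE_SIZE. Deep pages still force the server to
        # collect and rank all preceding hits, which is why high page
        # numbers can remain memory-hungry.
        "size": PAGE_SIZE,
        "from": page * PAGE_SIZE,
    }
    resp = requests.post(
        f"{FTS_HOST}/api/index/{INDEX_NAME}/query",
        json=body,
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()

first_page = search_page("couchbase", page=0)
print(first_page["total_hits"])
```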
> We are currently searching 135 fields in our JSON documents.
Do you mean that you are trying to retrieve the stored values of 135 fields with the query?
Or that you have 135 conjunct/disjunct conditions in the query? (If so, see the sketch after these questions.)
Or that you have indexed 135 fields of the document?
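For reference, the second case typically looks something like the sketch below: one match query per field, OR-ed together in a disjunction. The field names and search term here are placeholders:

```python
# A hypothetical example of the "many disjuncts" case: one match query
# per field, combined in a disjunction. With 135 fields this body
# balloons, and the server must evaluate every branch, which drives up
# memory per query.
field_names = [f"field_{i}" for i in range(135)]  # placeholder names

many_field_query = {
    "query": {
        "disjuncts": [
            {"match": "search term", "field": name} for name in field_names
        ]
    },
    "size": 10,
}
```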
Looking at the high memory estimate for the query (maybe too many concurrent/slow queries?), it's worth checking whether you can rewrite the same query in a better way.
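One possible rewrite, assuming all of the fields are kept in the index's default `_all` composite field: drop the per-field disjunction and issue a single match query with no field specified, which then targets `_all`. This is a sketch under that assumption, not a drop-in replacement:

```python
# Assumes the indexed fields are included in the index's "_all"
# composite field (the default mapping behavior). A match query with no
# "field" specified searches "_all", replacing the 135-way disjunction
# with a single query.
rewritten_query = {
    "query": {"match": "search term"},  # no "field" -> targets "_all"
    "size": 10,
}
```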