Couchbase returns a validation error when the response to the CURL() call exceeds 20 MB. How can I change that parameter to increase the response size that N1QL supports?
SELECT DATE_PART_STR(h.fields.actual_arrival_date, 'month') AS month,
       COUNT(h.fields.actual_arrival_date) AS con
FROM CURL("http://localhost:8094/api/index/nuevo/query", {
    "request": "POST",
    "header": "Content-Type: application/json",
    "data": '{
        "size": 10000000,
        "from": 0,
        "fields": ["*"],
        "query": {
            "conjuncts": [{
                "inclusive_start": true,
                "inclusive_end": true,
                "field": "actual_arrival_date",
                "start": "2019-01-01",
                "end": "2020-01-31"
            }, {
                "field": "containers.loads.description_text",
                "match_phrase": "coffe"
            }]
        }
    }',
    "user": "user:password"
}) AS result
UNNEST result.hits h
GROUP BY DATE_PART_STR(h.fields.actual_arrival_date, 'month')
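If raising the cap is the goal, the documentation for the CURL() function lists a result-cap option that controls the maximum size of the response in bytes (the default is the 20 MB limit being hit here). A minimal sketch, assuming that option is available in your Couchbase version (verify against the CURL() function docs before relying on it):

```sql
/* Sketch only: "result-cap" is assumed to be supported by your */
/* Couchbase version; check the CURL() function documentation.  */
SELECT result.*
FROM CURL("http://localhost:8094/api/index/nuevo/query", {
    "request": "POST",
    "header": "Content-Type: application/json",
    "data": '{ "size": 10000000, "from": 0, "fields": ["*"], "query": { } }',
    "user": "user:password",
    "result-cap": 67108864  /* raise the response cap to ~64 MB */
}) AS result;
```

The "data" body above is abbreviated; the full FTS request body from the query earlier in the thread would go there unchanged.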
Yes @vsr1, I tried it and I have the same problem: when the response exceeds 20 MB, it returns the error saying the maximum limit of 20 MB has been exceeded.
Done this way, a fetch to the bucket is made for each key returned by the FTS. With the query you sent, the SELECT does not go to the bucket and is faster, since it is computed from the values stored in the FTS index.
I am trying to build a statistic from the response of the FTS index; that is why the size parameter is 10,000,000.
Please paste the exact error. The modified N1QL query should return only 12 rows. Is the 20 MB error coming from the internal FTS call? In that case it might be the same if you queried the FTS index directly, outside N1QL.
The SEARCH() function version I posted also uses the info from the FTS index and will not fetch the documents from the bucket. Please read through the blog post on covering search indexes, and also run EXPLAIN and see.
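For reference, a minimal sketch of what a SEARCH()-based, potentially covered version of this query could look like. Assumptions not stated in the thread: Couchbase Server 6.5+ (where SEARCH() is available), a placeholder bucket name `mybucket`, and that actual_arrival_date is stored in the "nuevo" search index so the index can cover the query:

```sql
/* Sketch: `mybucket` is a placeholder; requires Couchbase 6.5+ and */
/* actual_arrival_date stored in the "nuevo" FTS index for covering. */
SELECT DATE_PART_STR(t.actual_arrival_date, 'month') AS month,
       COUNT(1) AS con
FROM `mybucket` AS t
WHERE SEARCH(t, {
        "conjuncts": [{
            "inclusive_start": true,
            "inclusive_end": true,
            "field": "actual_arrival_date",
            "start": "2019-01-01",
            "end": "2020-01-31"
        }, {
            "field": "containers.loads.description_text",
            "match_phrase": "coffe"
        }]
      }, {"index": "nuevo"})
GROUP BY DATE_PART_STR(t.actual_arrival_date, 'month');
```

Run EXPLAIN on the statement: if the plan contains no Fetch step, the search index is covering the query and no documents are pulled from the bucket.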