Full text search total_hits is wrong after changing "from"

I have 461061 documents in the index.

{
  "fields": [
    "*"
  ],
  "highlight": {},
  "sort": [
    "name"
  ],
  "size": 20,
  "from": 0,
  "query": {
    "must": {
      "conjuncts": [
        {
          "field": "class_type",
          "match": "REGION"
        }
      ]
    }
  }
}
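
For reference, a body like this can be submitted to the FTS query REST endpoint, roughly as in the sketch below (assuming the default FTS port 8094; the index name "my_index" and the credentials are placeholders):

curl -s -XPOST -H "Content-Type: application/json" \
  -u Administrator:password \
  http://127.0.0.1:8094/api/index/my_index/query \
  -d @query.json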

it returns

],
"total_hits": 461061,
"max_score": 1.4851010726785485,
"took": 133725788,
"facets": null
}

but if I request like this,

{
  "fields": [
    "*"
  ],
  "highlight": {},
  "sort": [
    "name"
  ],
  "size": 20,
  "from": 461000,
  "query": {
    "must": {
      "conjuncts": [
        {
          "field": "class_type",
          "match": "REGION"
        }
      ]
    }
  }
}

it returns

"hits":[],
"total_hits": 154063,
"max_score": 1.4825419134327702,
"took": 5241947075,
"facets": null

total_hits changed, even though this is exactly the same index.
Why does Couchbase abruptly report far fewer matching documents in the index?
Is my Couchbase broken? What should I do?

couchbase-server -v
Couchbase Server 6.0.0-1693 (CE)

I think the Couchbase full text search max_result_window (bleveMaxResultWindow) setting is the problem.
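
From what I can tell, that limit can be raised through the FTS managerOptions endpoint, along these lines (a sketch, assuming the default port 8094 and placeholder credentials; please verify that your 6.0.0 CE build exposes this option):

curl -XPUT -H "Content-Type: application/json" \
  -u Administrator:password \
  http://127.0.0.1:8094/api/managerOptions \
  -d '{"bleveMaxResultWindow": "500000"}'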


This does not work for me.

@horoyoi_o,

I'd like to get a few more details, like the ones below.

Can you please share the "status" part of your search response as well for the failing case?
e.g. {"status": {"total": N, "failed": N1, "successful": N2}}
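If it helps, that status object can be pulled straight out of the failing query's response with something like the sketch below (assuming jq is installed; the index name "my_index" and the credentials are placeholders):

curl -s -XPOST -H "Content-Type: application/json" \
  -u Administrator:password \
  http://127.0.0.1:8094/api/index/my_index/query \
  -d @failing_query.json | jq '.status'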

Is this failure consistent, and does it occur only for this higher-order paged query?

If the Size+From arguments in the SearchRequest exceeded the current bleveMaxResultWindow limit, then you would have got an explicit error rather than an incorrect total_hits count like the one above.

How many FTS nodes does your cluster have?

Cheers!