Best way to index a large amount of data

I was wondering what the best approach to indexes is if I have millions of records and want the best performance.
The records I am using hold property information: some basic fields like parcel number, plus many others like school district, street number, sales price, SQFT, etc., which a user might use to narrow down a search.
So the question is: is it best to create one index that includes all possible fields a user might use in a query, or should there be one index for each field that might be used? If I have to create one for each possible search criterion, I would need 50+ indexes.

With GSI indexes, the order of the index keys is important. If a predicate is missing the leading index keys, the query has to scan most of the index.
If you create single-field indexes instead, multiple predicates can't all be pushed down to the indexer; only one index is used and the remaining predicates are applied as a filter afterwards.
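As a sketch of the leading-key behavior (keyspace and field names are hypothetical, matching the property example above):

```sql
-- Hypothetical composite GSI: key order matters.
CREATE INDEX idx_props ON properties(school_district, street_number, sqft);

-- Can seek on the index: the leading key (school_district) is in the predicate,
-- and the sqft predicate can be pushed down as well.
SELECT * FROM properties
WHERE school_district = 'North' AND sqft > 2000;

-- Leading key missing: the planner must scan most of idx_props
-- (or fall back to another index) and filter on sqft afterwards.
SELECT * FROM properties
WHERE sqft > 2000;
```

This is why one composite index cannot serve 50+ arbitrary field combinations efficiently: only queries that constrain the leading key(s) benefit from the index ordering.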

Check out the FTS index if you need to filter on many different combinations of fields.

So, reading up on the docs you linked, there seem to be two ways of using FTS with N1QL:

N1QL/FTS integration using SEARCH_QUERY


N1QL/FTS integration using N1QL SEARCH predicate

Are there any benefits or advantages of one over the other?

Use the SEARCH predicate. If possible, you can also run the FTS search directly from the client.
You can also use a mixed mode: define GSI indexes for the regularly searched combinations, and use the SEARCH predicate for the rarely used ones.
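A minimal sketch of the SEARCH predicate approach, assuming a hypothetical FTS index on the properties keyspace that indexes the fields mentioned above:

```sql
-- Hypothetical FTS-backed query: one FTS index can serve
-- arbitrary combinations of fields, unlike a composite GSI.
SELECT META(p).id
FROM properties AS p
WHERE SEARCH(p, {
  "conjuncts": [
    { "field": "school_district", "match": "North" },
    { "field": "sqft", "min": 2000, "inclusive_min": true }
  ]
});
```

The query object inside SEARCH() uses the regular FTS query JSON (here a conjunction of a match query and a numeric range query), so any field combination the FTS index covers can be filtered without creating a separate GSI per field.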