Couchbase Sync Gateway waiting for indexes on boot

Good morning everybody,
we are having an issue with Couchbase Server 6.5.1 and Sync Gateway 2.7.3.

We have installed Couchbase Server and Sync Gateway on a Linux machine running Ubuntu 18.04.
At boot, Sync Gateway waits for the indexes and takes ~5 minutes to start:

    2020-05-22T17:55:12.475+02:00 [WRN] Using deprecated config option: "logging.[\"default\"].LogFilePath". Use "logging.log_file_path" instead. -- rest.(*ServerConfig).deprecatedConfigLoggingFallback.func1() at config.go:729
    2020-05-22T17:55:12.475+02:00 [WRN] Using deprecated config option: "log". Use "logging.console.log_keys" instead. -- rest.(*ServerConfig).deprecatedConfigLoggingFallback.func5() at config.go:763
    2020-05-22T17:55:12.475+02:00 [INF] Logging: Console to stderr
    2020-05-22T17:55:12.475+02:00 [INF] Logging: Files to /home/sync_gateway/logs
    2020-05-22T17:55:12.475+02:00 [INF] Logging: Console level: info
    2020-05-22T17:55:12.475+02:00 [INF] Logging: Console keys: [* HTTP]
    2020-05-22T17:55:12.475+02:00 [INF] Logging: Redaction level: none
    2020-05-22T17:55:12.475+02:00 [INF] requestedSoftFDLimit < currentSoftFdLimit (5000 < 65535) no action needed
    2020-05-22T17:55:12.475+02:00 [INF] Logging stats with frequency: 1m0s
    2020-05-22T17:55:12.475+02:00 [INF] Opening db /astenpos as bucket "astenpos", pool "default", server <http://127.0.0.1:8091>
    2020-05-22T17:55:12.475+02:00 [INF] GoCBCustomSGTranscoder Opening Couchbase database astenpos on <http://127.0.0.1:8091> as user "sync_gateway"
    2020-05-22T17:55:12.478+02:00 [INF] Auth: Attempting credential authentication http://127.0.0.1:8091?http_idle_conn_timeout=90000&http_max_idle_conns=64000&http_max_idle_conns_per_host=256&kv_pool_size=2&n1ql_timeout=75000&operation_tracing=false
    2020-05-22T17:55:12.496+02:00 [INF] Successfully opened bucket astenpos
    2020-05-22T17:55:12.520+02:00 [INF] Set query timeouts for bucket astenpos to cluster:1m15s, bucket:1m15s
    2020-05-22T17:55:12.520+02:00 [INF] Initializing indexes with numReplicas: 0...
    2020-05-22T17:55:13.135+02:00 [INF] Verifying index availability for bucket astenpos...
    2020-05-22T17:56:28.136+02:00 [INF] Timeout waiting for index "sg_channels_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T17:56:28.136+02:00 [INF] Timeout waiting for index "sg_access_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T17:56:28.138+02:00 [INF] Timeout waiting for index "sg_roleAccess_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T17:57:43.137+02:00 [INF] Timeout waiting for index "sg_access_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T17:57:43.137+02:00 [INF] Timeout waiting for index "sg_channels_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T17:57:43.138+02:00 [INF] Timeout waiting for index "sg_roleAccess_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T17:58:58.138+02:00 [INF] Timeout waiting for index "sg_access_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T17:58:58.138+02:00 [INF] Timeout waiting for index "sg_channels_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T17:58:58.139+02:00 [INF] Timeout waiting for index "sg_roleAccess_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T18:00:13.138+02:00 [INF] Timeout waiting for index "sg_channels_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T18:00:13.138+02:00 [INF] Timeout waiting for index "sg_access_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T18:00:13.142+02:00 [INF] Timeout waiting for index "sg_roleAccess_x1" to be ready for bucket "astenpos" - retrying...
    2020-05-22T18:00:48.688+02:00 [INF] Indexes ready for bucket astenpos.
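In case it is relevant: Sync Gateway runs under systemd here, and a startup-ordering drop-in such as the following (assuming the stock `sync_gateway.service` and `couchbase-server.service` unit names) would presumably not help, since it only waits for the Couchbase Server process to start, not for the indexes to become ready:

```ini
# /etc/systemd/system/sync_gateway.service.d/override.conf
# Hypothetical drop-in; unit names assume the default packages.
[Unit]
# Start Sync Gateway only after the Couchbase Server unit has started.
After=couchbase-server.service
Wants=couchbase-server.service
```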

Meanwhile, the indexer logs show scan timeout errors:

    2020-05-22T17:57:11.299+02:00 [Info] memstats {"Alloc":409546432, "TotalAlloc":2262383968, "Sys":555465944, "Lookups":1146, "Mallocs":15531288, "Frees":13214747, "HeapAlloc":409546432, "HeapSys":522616832, "HeapIdle":108830720, "HeapInuse":413786112, "HeapReleased":0, "He$
    2020-05-22T17:57:13.144+02:00 [Info] SCAN##20 REQUEST defnId:9942394123315746205, instId:4130464471424506387, index:astenpos/sg_roleAccess_x1, type:scan, partitions:[0], scans: ([{["foo"] ["foo"] 3 range [{[{"foo" "foo" 3}] ["foo"] ["foo"] 3 }] }]), limit:
    2020-05-22T17:57:13.144+02:00 [Info] SCAN##20 RESPONSE status:(error = Index scan timed out), requestId: 334cec9d-b0a0-4dbe-a9aa-523c0af7cf8a
    2020-05-22T17:57:13.151+02:00 [Info] SCAN##21 REQUEST defnId:16934133884558357764, instId:11249186732118408689, index:astenpos/sg_access_x1, type:scan, partitions:[0], scans: <ud>([{["foo"] ["foo"] 3 range [{[{"foo" "foo" 3}] ["foo"] ["foo"] 3 }] <nil>}])</ud>, limit:92
    2020-05-22T17:57:13.151+02:00 [Info] SCAN##21 RESPONSE status:(error = Index scan timed out), requestId: c28ddb4c-1fae-42d9-9c84-1809be3d8231
    2020-05-22T17:57:13.159+02:00 [Info] SCAN##22 REQUEST defnId:4659082084430082866, instId:17094682661864805293, index:astenpos/sg_channels_x1, type:scan, partitions:[0], scans: ([{[["foo",0]] [["foo",1]] 3 range [{[{["foo",0] ["foo",1] 3}] [["foo",0]] [["foo",1]] 3 }$
    2020-05-22T17:57:13.160+02:00 [Info] SCAN##22 RESPONSE status:(error = Index scan timed out), requestId: 3e9f8e1d-38a3-4ce9-8e58-31d03495829f

We do not really know how to deal with this situation: it looks like two services are each waiting for something, but we cannot tell what they are waiting for.
Also, looking at the console on port 8091, the indexes all show status READY, but none of them has a resident ratio set.
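For completeness, this is how we can double-check the index state outside the console, with a plain N1QL query against `system:indexes` (the `state` field is what the 8091 console shows; "online" corresponds to READY):

```sql
-- Sketch: list the Sync Gateway indexes on the astenpos bucket and their state.
SELECT name, state, keyspace_id
FROM system:indexes
WHERE keyspace_id = "astenpos";
```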

Thanks for any feedback and have a good day.