Sync Gateway: multiple edge instances

Hi All,
I need to configure a Couchbase landscape with one central cluster and several edge clusters. Each edge cluster consists of a single node, because they are deployed in small data centers hosted on K3s.

I've already set up the central site and two edge instances, so I've deployed 3 Sync Gateways. The central one has no replications defined. The edge SGs are each configured with two replications, one for push and one for pull (they are separate because they filter by channel).

I have a doubt about the names of the replications defined in the two edge SGs. Currently both edges use the same replication definitions, where only the query_params change:

"replications": {
  "pts-prod-cloud-pull": {
    "direction": "pull",
    "remote": "https://sg.couchbase.XX.com/pts-prod-cloud",
    "username": "pts_sync_gateway_access",
    "password": "XXX",
    "continuous": true,
    "filter": "sync_gateway/bychannel",
    "query_params": [ "site::5aa53d32-4652-494c-b332-0c62b6f48f30", "SHARED" ]
  },
  "pts-prod-cloud-push": {
    "direction": "push",
    "remote": "https://sg.couchbase.XX.com/pts-prod-cloud",
    "username": "pts_sync_gateway_access",
    "password": "XXX",
    "continuous": true,
    "filter": "sync_gateway/bychannel",
    "query_params": [ "site::5aa53d32-4652-494c-b332-0c62b6f48f30" ]
  }
}

It seems that the second edge SG doesn't receive the documents in its "site::XXXX" channel, only those in the "SHARED" channel.
The question is: should the replications defined in the edge SGs be named differently (e.g. by adding the site name)? I've noticed that only two documents are defined in the central Couchbase:

I expected to find 4 documents, corresponding to the four replications defined across the two edge SGs.

Thanks,
Regards

Hi All,
while waiting for an answer on the main topic, I have one more question regarding the following timeout:

Which configuration should I modify to mitigate the issue? Consider that the edge SG is on a vessel, with a poor connection over satellite.

Thanks,
Dario

On the original issue - yes, you should use unique replication names on the edge SGs to avoid checkpoint collisions on the central cluster.
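To make that concrete, here is a sketch of what the pull definition on one edge could look like with a site-specific replication name. The naming scheme (appending a short site identifier) is just illustrative, not something Sync Gateway prescribes; the point is only that each edge must use replication IDs that no other edge uses against the same central database:

```json
"replications": {
  "pts-prod-cloud-pull-5aa53d32": {
    "direction": "pull",
    "remote": "https://sg.couchbase.XX.com/pts-prod-cloud",
    "username": "pts_sync_gateway_access",
    "password": "XXX",
    "continuous": true,
    "filter": "sync_gateway/bychannel",
    "query_params": [ "site::5aa53d32-4652-494c-b332-0c62b6f48f30", "SHARED" ]
  }
}
```

The push definition would be renamed the same way (e.g. "pts-prod-cloud-push-5aa53d32"), and the other edge would use its own site identifier in both names.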

The TLS handshake timeout can be caused by a variety of network connectivity issues. If you have a poor/intermittent connection, the replication will attempt to reconnect when hitting that error (as indicated by "status": "reconnecting").

Hi Adam,
thanks for the explanation. I'd just like to know whether the vessel SG should automatically re-establish the replication once the network becomes stable again. I ask because I'm able to query the vessel Couchbase Server with Postman, so I expect the connectivity to be fine. Also, which timeout parameter in the config.json would be the most appropriate to increase to better fit the network latency?

Thanks,
Regards

Dario