We are testing our Couchbase + Sync Gateway cluster deployment for production and have some questions about best practices for scaling Sync Gateway on top of a Couchbase cluster.
- Is our load balancing of Sync Gateway OK? Each replica has persistent storage, and a k8s Service balances across all replicas. Above that Service sits a k8s Ingress, and we call Sync Gateway through its endpoint (SSL).
- We have problems with automatic deletion of tombstones. On the Couchbase bucket, the tombstone (metadata) purge interval is set to 0.04 days (roughly 1 hour) for testing. Even if we manually trigger compaction on Couchbase and then call https://syncgw-url:4985/bucket/_compact on Sync Gateway (via the balancer), documents that were deleted but not purged still show up when we fetch data via Sync Gateway.
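For reference, the manual compaction sequence from the second bullet, sketched as shell commands (the admin host, credentials, and the bucket name "mybucket" are placeholders, not our real values):

```shell
CB_ADMIN="http://cb-admin.example.com:8091"   # Couchbase Server admin REST API (placeholder host)
SG_ADMIN="https://syncgw-url:4985"            # Sync Gateway admin port, reached via the balancer

# Note: 0.04 days is actually ~58 minutes, not a full hour:
awk 'BEGIN { printf "%.1f minutes\n", 0.04 * 24 * 60 }'

# 1) Trigger bucket compaction on Couchbase Server
curl -u Administrator:password -X POST \
  "$CB_ADMIN/pools/default/buckets/mybucket/controller/compactBucket"

# 2) Trigger tombstone compaction on Sync Gateway
curl -k -X POST "$SG_ADMIN/mybucket/_compact"
```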
Ad 2): there is something strange in the Sync Gateway log:
2019-04-12T10:42:13.980Z [WRN] Unable to retrieve server's metadata purge interval - will use default value. Get 127.0.0.1/settings/autoCompaction: unsupported protocol scheme "" -- db.NewDatabaseContext() at database.go:374
2019-04-12T10:42:13.980Z [INF] Using metadata purge interval of 1.25 days for tombstone compaction.
2019-04-12T10:42:13.987Z [INF] Reset guest user to config
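The first log line seems to explain the behaviour: Sync Gateway tried to fetch the bucket's metadata purge interval from the server, but the URL it built ("127.0.0.1/settings/autoCompaction") has no http:// scheme, so the request failed and it fell back to its 1.25-day default instead of the 1-hour value set on the bucket. A sketch for double-checking this (the admin host and credentials are placeholders):

```shell
# Sync Gateway's fallback purge interval from the log, in hours:
awk 'BEGIN { printf "%.1f hours\n", 1.25 * 24 }'

# What the server itself reports can be checked directly; the response
# JSON should contain the purgeInterval configured on the cluster:
curl -u Administrator:password \
  "http://cb-admin.example.com:8091/settings/autoCompaction"
```

So until Sync Gateway can actually read that setting, its tombstone compaction uses the 1.25-day interval, which would explain why tombstones purged after 1 hour on the server still appear via Sync Gateway.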
YAMLs of the Couchbase and Sync Gateway k8s infra: