How to persist a database created through the admin REST API on Sync Gateway

The Sync Gateway documentation seems mainly focused on the config file setup. I am looking for a way to add databases to the config without service interruption.

One approach I’ve investigated is using the admin REST API to create the database. This does work; however, the added database doesn’t survive a server restart. If the server is restarted, only the databases defined in the config file are retained, and the others are forgotten.
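For reference, this is roughly what that looks like against the admin port. The admin address, database name, bucket, and credentials below are placeholders for illustration, not my real setup:

```shell
# Sketch only -- SG_ADMIN, database/bucket names and credentials are placeholders.
SG_ADMIN="http://localhost:4985"   # Sync Gateway admin port

DB_CONFIG='{
  "server": "http://couchbase-server:8091",
  "bucket": "tenant-a",
  "username": "sg_user",
  "password": "sg_password"
}'

# Create the database on the running node.
# This takes effect immediately but is NOT written to the config file,
# so it is gone after a restart.
curl -s -X PUT "$SG_ADMIN/tenant-a-db/" \
     -H 'Content-Type: application/json' \
     -d "$DB_CONFIG"
```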

This is different from user data, which is actually persisted across server restarts, and thus not limited to the users defined in the initial config file.

How can I make it so that adding a database through the admin API also saves it to the server's config file, or otherwise persists this information, so that a server restart doesn’t wipe the database config? Are there other solutions for managing the available databases on a live Sync Gateway cluster/server, without taking the service down?

You will have to update the config file and restart the Sync Gateway for database changes to be persistent. If you want to do it without downtime, then in addition to applying the change via the admin API (which is not persisted), you will also have to update the corresponding config file so the changes are applied consistently on restart. Note that config changes made via the admin API are scoped to that one Sync Gateway node.
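Concretely, the same database definition you send to the admin API also has to land in the `databases` section of the config file so it survives a restart. A minimal sketch, with placeholder names and credentials:

```json
{
  "databases": {
    "tenant-a-db": {
      "server": "http://couchbase-server:8091",
      "bucket": "tenant-a",
      "username": "sg_user",
      "password": "sg_password"
    }
  }
}
```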

I am curious about your use case. Unlike users, for instance, which are fairly dynamic, database configs rarely change. Which specific database config properties do you expect to update, and how often?

We use the Sync Gateway inside a B2B SaaS offering. Each tenant of the platform receives a separate bucket on the Couchbase Server, and thus also a separate database on the Sync Gateway. To make optimal use of resources, tenants share the same infrastructure, meaning multiple tenants run on the same Sync Gateway.

So the use case is being able to bring a tenant (a Sync Gateway database) online or offline on this shared infrastructure, without interruption for the other tenants, when we add or remove a tenant on the platform.
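The admin API can already do this part at runtime on a given node, subject to the persistence caveat above. A sketch, where the admin address and database name are placeholders (the `_offline`/`_online` endpoints are part of the admin REST API):

```shell
SG_ADMIN="http://localhost:4985"   # placeholder admin address

# Take a tenant's database offline (stops serving it, keeps its config):
curl -s -X POST "$SG_ADMIN/tenant-a-db/_offline"

# Bring it back online:
curl -s -X POST "$SG_ADMIN/tenant-a-db/_online"

# Remove the database from this node entirely (does not delete the bucket):
curl -s -X DELETE "$SG_ADMIN/tenant-a-db/"
```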

We are eagerly awaiting collections, in the hope that they might simplify tenant management, but we would still like to find a solution for the current setup: even if collections help, it will take time to adapt the product and migrate customers, and Server 7.0 is currently still in beta.

Thanks for the details.

Some related feedback -

To support multi-tenancy with Sync Gateway, you have a couple of options:

  • The most common approach is to segregate tenants by channels, i.e. assign documents belonging to a tenant to a tenant-specific channel. All tenants share the same bucket, and Sync Gateway is responsible for enforcing access control. Channels are fundamental to how access control works on Sync Gateway.
  • If you must have a bucket per tenant, then it is preferable to segregate the tenant databases across Sync Gateways and have the load balancer direct traffic to the appropriate Sync Gateway. So if you have 2 buckets (tenants) and 4 SGWs, configure SGW1 and SGW2 for bucket 1, and SGW3 and SGW4 for bucket 2 (the extra SGW in each pair is for HA). There is a performance benefit to this compared to having all your SGWs handle all your buckets: a Sync Gateway configured to handle sync for N buckets has to process the full database change feed (DCP) for every one of those buckets, and with every Sync Gateway in your deployment configured that way, that is a lot of redundant data processing. The trade-off is the number of Sync Gateways; you are weighing horizontal scale against vertical scale.

The additional benefit, specific to your case, is that these options incur no downtime.
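For the channel-based option, the isolation lives in the sync function. A minimal sketch of a database config with such a sync function, assuming each document carries a `tenant_id` property (that property name, and the server/bucket names, are illustrative assumptions):

```json
{
  "databases": {
    "shared-db": {
      "server": "http://couchbase-server:8091",
      "bucket": "shared-bucket",
      "sync": "function(doc) { if (!doc.tenant_id) { throw({forbidden: \"missing tenant_id\"}); } channel(\"tenant-\" + doc.tenant_id); }"
    }
  }
}
```

Each tenant's users are then granted access only to their own `tenant-*` channel, so documents never leak across tenants even though they share one bucket.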

  • Now…one other option you can consider is configServer. However, note that this is deprecated, which means the capability will be removed in a future release. When we do remove it, we will have alternate methods available for managing SGW config persistence, and at that point you’d have to update your solution to use the new capability. So this is a stop-gap solution.
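For completeness, `configServer` is a top-level property in the Sync Gateway config pointing at an external URL from which Sync Gateway can fetch database configs on demand. A sketch only; the URL is a placeholder, and since the feature is deprecated you should check the docs for your exact version before relying on it:

```json
{
  "configServer": "http://config-server.internal:8000/",
  "databases": {}
}
```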

Got it. Your options are the ones I discussed in my previous post. Alternatively, you can rethink how you set up your system for multi-tenancy.

Yeah, well. Sync Gateway won’t support named collections in the 7.0 server release timeframe (it will support the default scope/collection). That’s on our roadmap, but not in this timeframe, so it’s probably a good idea to consider other options.

Thank you for the detailed explanation of the different options. The Sync-Gateway-per-tenant approach looks interesting from a technical view; we just have to see how it impacts OpEx, but I definitely see the technical advantages of that solution, also with regard to scaling and availability.

I also appreciate the update on what to actually expect from Sync Gateway’s interaction with the 7.0 server.