Couchbase 3, XDCR: problem creating a remote cluster reference when behind a proxy

Hello everyone,

I am trying to set up a remote cluster reference and I run into an error.

  1. nginx with proxy_pass to one Couchbase instance (the instance has its own domain, so no path rewriting; see the config sketch after this list)
  2. Chromium is the browser
  3. I log in, click “create new cluster reference”, a popup opens, I fill in all fields, save!
  4. I receive an error: “Attention - Request returned error”
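
Roughly, the relevant part of the nginx config looks like this (a sketch; domain and upstream address are placeholders):

```
server {
    listen 443 ssl;
    server_name couchbase.example.com;    # its own domain, so no path rewriting

    location / {
        proxy_pass http://10.0.0.1:8091;  # one single Couchbase instance
        proxy_set_header Host $host;
    }
}
```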

I debugged a little further by changing the source code to give me more information than just an “error”:

/opt/couchbase/lib/ns_server/erlang/lib/ns_server/priv/public/js/app-misc.js

The ajaxCallback function receives HTTP status code 401, textStatus is “error”, and there is no data.responseText. The requested URL was: /pools/default/remoteClusters?uuid=12345678...
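
The failing call can be replayed outside the browser like this (a sketch; the proxy domain, the cookie file, and the uuid are placeholders):

```
# Sketch: replay the failing request through the proxy; -i prints the
# status line, so the 401 is visible directly. Domain, cookie file and
# uuid are placeholders.
curl -i -b cookies.txt \
    'https://couchbase.example.com/pools/default/remoteClusters?uuid=<uuid>'
```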

Hm. 401 means “unauthorized”.

When I bypass the proxy and go directly to the server (well, via SSH port forwarding), it works. It recognizes that I am logged in.

Does anyone have an idea?

The same happens when creating buckets. Weird. When I access the instance locally, everything works fine. Through the proxy with one instance attached, it fails. Hm.

I spoke to a colleague who will probably be able to jump in here and advise. One thing he wondered about is whether the requests going through the proxy are kept sticky to a given node in the cluster, or whether they round-robin. For certain operations, like creating a bucket, there is a node-local reference associated with the underlying configuration.

Is the proxy doing sticky load balancing of the requests?
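
For comparison, a sticky nginx setup would look something like this (a sketch using ip_hash; addresses are placeholders):

```
# Sketch: ip_hash pins each client IP to the same upstream node,
# so the UI session always hits the node it logged in against.
upstream couchbase_ui {
    ip_hash;
    server 10.0.0.1:8091;
    server 10.0.0.2:8091;
}

server {
    listen 80;
    location / {
        proxy_pass http://couchbase_ui;
        proxy_set_header Host $host;
    }
}
```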

Hi.

Most likely you’re facing a known limitation of how session auth works in our UI.

Specifically, when you log in to the UI against a specific node in your cluster, an auth token is created and passed to the browser. That token is neither persisted nor replicated, so you cannot use the same token against other nodes, and the token becomes invalid on node restart.
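
You can see the per-node behaviour with curl (a sketch; the /uilogin endpoint and cookie handling are my assumption for this version, and hosts/credentials are placeholders):

```
# Log in against node A and save the session cookie
# (endpoint and credentials are placeholders -- verify against your build):
curl -c cookies.txt -d 'user=Administrator&password=secret' \
    http://nodeA:8091/uilogin

# The cookie works against node A ...
curl -i -b cookies.txt http://nodeA:8091/pools/default

# ... but replaying it against node B returns 401, because the token
# is neither persisted nor replicated:
curl -i -b cookies.txt http://nodeB:8091/pools/default
```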

Given that you mentioned a reverse proxy, it’s likely that some of your requests are sent (by nginx) to “wrong” nodes where your auth token is invalid.

You can confirm that by looking at http_access.log (usually located at /opt/couchbase/var/lib/couchbase/logs).
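
For example, on each node (path from above; adjust to your install):

```
# Sketch: list requests that were rejected with 401 on this node.
grep ' 401 ' /opt/couchbase/var/lib/couchbase/logs/http_access.log
```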

Hello,

thank you both for your answers.

But it still sounds to me like you think I have many servers behind the proxy. That is not the case: I picked one server of my cluster, so the requests always go to the same server.

Also, since the UI is full of AJAX calls, if there were more servers set up behind the proxy, I would constantly get errors, because I am not logged in on all those other servers…

What do you think?

It could be some other reason, of course. In order to diagnose it, I’ll need collectinfos from all nodes of your cluster. The best way to do that is to create a ticket in JIRA and attach the diagnostics there.
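
The collectinfo is generated per node with the bundled cbcollect_info tool (the path assumes a default Linux install; the output filename is a placeholder):

```
# Run on every node of the cluster; attach the resulting zip files
# to the JIRA ticket.
/opt/couchbase/bin/cbcollect_info /tmp/<node-name>-collectinfo.zip
```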