The ajaxCallback function receives HTTP status code 401; textStatus is “error” and data.responseText is empty. The requested URL was: /pools/default/remoteClusters?uuid=12345678...
Hm. 401 means ‘Unauthorized’.
When I bypass the proxy and go directly to the server (well, via SSH and port forwarding), it works: the server recognizes that I am logged in.
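For reference, the direct-to-node access described above can be sketched like this (the node hostname and user are placeholders, and the `ssh` line is left as a comment since it requires a live node):

```shell
# Placeholder values -- replace with one of your cluster nodes and your user.
NODE=cb-node1.example.com

# Forward local port 8091 to the Couchbase admin port on that node,
# bypassing the reverse proxy entirely:
#   ssh -N -L 8091:localhost:8091 admin@"$NODE"

# The UI and its XHR endpoints are then reachable locally, e.g.:
URL="http://localhost:8091/pools/default"
echo "$URL"
```

With the tunnel up, the browser talks to a single node for the whole session, so the auth token issue described below never surfaces.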
I spoke to a colleague who will probably be able to jump in here and advise. One thing he wondered about is whether you’re keeping the requests through the proxy sticky to a given node in the cluster, or whether they round-robin. For certain operations, like creating a bucket, there is a local-to-a-node reference associated with the underlying configuration.
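If there were multiple upstream nodes, one common way to keep sessions sticky in nginx is `ip_hash`. A sketch with placeholder addresses, not a tested configuration for your setup:

```nginx
upstream couchbase_ui {
    # ip_hash pins each client IP to one upstream node, so the UI
    # auth token always goes back to the node that issued it.
    ip_hash;
    server 10.0.0.1:8091;   # placeholder node addresses
    server 10.0.0.2:8091;
}

server {
    listen 80;
    location / {
        proxy_pass http://couchbase_ui;
        proxy_set_header Host $host;
    }
}
```

Without something like this, each AJAX request may land on a different node, and only the node that performed the login will accept the session.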
Most likely you’re facing a known limitation of how session auth works in our UI.
Specifically, when you log in to the UI against a specific node in your cluster, an auth token is created and passed to the browser. That token is neither persisted nor replicated, so you cannot use the same token against other nodes, and it becomes invalid when the node restarts.
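The per-node token behavior can be probed from the command line. A hedged sketch: the `/uiLogin` endpoint, credentials, and node addresses here are assumptions (exact names may differ by Couchbase version), and the `curl` lines are commented out since they require live nodes:

```shell
# Hypothetical node addresses -- replace with your own.
NODE_A=http://cb-node1.example.com:8091
NODE_B=http://cb-node2.example.com:8091

# 1) Log in to the UI on node A and capture its session cookie
#    (endpoint and form fields are an assumption; adjust for your version):
#   curl -s -c cookies.txt -d 'user=Administrator&password=secret' "$NODE_A/uiLogin"

# 2) Reusing the cookie against node A should succeed (HTTP 200):
#   curl -s -b cookies.txt -o /dev/null -w '%{http_code}' "$NODE_A/pools/default"

# 3) The same cookie against node B is expected to fail, because the
#    token is neither persisted nor replicated:
EXPECTED_ON_OTHER_NODE=401
#   curl -s -b cookies.txt -o /dev/null -w '%{http_code}' "$NODE_B/pools/default"
echo "$EXPECTED_ON_OTHER_NODE"
```

Seeing 200 from one node and 401 from another with the same cookie would confirm the limitation described above.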
Given that you mentioned a reverse proxy, it’s likely that some of your requests are sent (by nginx) to the “wrong” nodes, where your auth token is invalid.
You can confirm that by looking at http_access.log (usually located at /opt/couchbase/var/lib/couchbase/logs).
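To spot rejected requests, one can scan that log for 401 responses. A sketch using a single hypothetical log line for illustration (the log format shown is an assumption of a common Apache-style layout, so adjust the field number to match your actual log):

```shell
LOG=/opt/couchbase/var/lib/couchbase/logs/http_access.log

# On a real node you would run something like:
#   awk '$9 == 401 {print $1, $7, $9}' "$LOG"

# Hypothetical sample line, for illustration only:
line='10.0.0.5 - Administrator [01/Jan/2024:12:00:00 +0000] "GET /pools/default/remoteClusters HTTP/1.1" 401 0'

# In the common (Apache-style) access-log format, the status code is field 9:
echo "$line" | awk '{print $9}'
# → 401
```

If the 401s cluster on particular nodes while logins went to another, that points at the token-routing problem described earlier.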
But to me it still sounds like you think I have many servers behind the proxy. This is not true: I picked one server of my cluster, so the request always goes to the same server.
Also, since the UI is full of AJAX calls, if there were more servers set up behind the proxy, I would constantly get errors, because I am not logged in on all those other servers…
It could be some other reason, of course. In order to diagnose it, I’ll need collectinfos from all nodes of your cluster. The best way to do that is to create a ticket in Jira and attach the diagnostics there.