Retrieve Self-Signed Key

I'm running multiple Couchbase servers load balanced behind a BigIP F5, which works great over HTTP; now I'd like to load balance the front end over HTTPS as well. If I go directly to the IP of any of the servers, I get the UI fine (with the self-signed cert). However, when I copy the .pem (or the cert from the UI) and import it into the BigIP, the BigIP also needs the key that signed the cert. I've tried couchbase-cli to export the .pem, hoping I might get a .pfx or a key of some sort, but no luck.

How can I get the key that (self-)signed the cert so I can have the BigIP trust the pass-through HTTPS traffic?
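For what it's worth: on some Couchbase versions each node writes its self-signed cert and the matching private key together into a single PEM file (a path like `/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem` on Linux — that location is an assumption, check your install). If you can get at that combined file, splitting it into the separate cert and key the BigIP wants is straightforward; a minimal stdlib-only sketch:

```python
# Split a combined PEM (cert + private key in one file) into the two
# pieces a load balancer import typically wants. Pure stdlib, no deps.
import re

PEM_BLOCK = re.compile(
    r"-----BEGIN (?P<label>[A-Z ]+)-----.*?-----END (?P=label)-----",
    re.DOTALL,
)

def split_pem(combined: str) -> dict:
    """Return {'certs': [...], 'keys': [...]} from a combined PEM string."""
    certs, keys = [], []
    for m in PEM_BLOCK.finditer(combined):
        if "CERTIFICATE" in m.group("label"):
            certs.append(m.group(0))
        elif "PRIVATE KEY" in m.group("label"):  # covers RSA/EC/PKCS#8 labels
            keys.append(m.group(0))
    return {"certs": certs, "keys": keys}

if __name__ == "__main__":
    # Dummy combined PEM standing in for the node's ssl-cert-key.pem.
    demo = (
        "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
        "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----\n"
    )
    parts = split_pem(demo)
    print(len(parts["certs"]), len(parts["keys"]))  # 1 1
```

After writing the two halves out, you can confirm the key really did sign the cert by comparing `openssl x509 -noout -modulus -in cert.pem` against `openssl rsa -noout -modulus -in key.pem` before importing into the BigIP.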

Hey there!

I’d love to understand more about why you’re putting a load balancer in front of the Couchbase cluster.

For an answer to the specific question you’ve raised, I’d like to bring @don into the discussion as I believe he is most likely to be able to help.

I want to give my users a single address for the cluster, and short of using round-robin DNS, which has flaws (DNS RR can't tell an up node from a down one), I can put a BigIP in the middle. This also avoids relying on the self-signed cert, which I can't always count on my users trusting. With the BigIP I can use the wildcard cert we have corporate-wide, which ensures trust at the SSL level. Does that help?

Ping? Any word here?

To solve for a single hostname to bootstrap a cluster, we typically recommend DNS SRV records. The reason we recommend against a load balancer is that those are typically used in a scenario where each node runs exactly the same service, whereas Couchbase's SDKs and the cluster have a tighter relationship.

Beyond that, DNS SRV is arguably easier to set up, configure, and maintain than routing traffic through a load balancer.
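To make that concrete, the SDKs look up SRV records under the `_couchbase._tcp` (plain) and `_couchbases._tcp` (TLS) service names. A zone for a three-node cluster might look like this (hostnames and TTLs are placeholders):

```
; bootstrap list for clients connecting to couchbase://cb.example.com
_couchbase._tcp.cb.example.com.  3600 IN SRV 0 0 11210 node1.example.com.
_couchbase._tcp.cb.example.com.  3600 IN SRV 0 0 11210 node2.example.com.
_couchbase._tcp.cb.example.com.  3600 IN SRV 0 0 11210 node3.example.com.
; TLS bootstrap (couchbases://) goes over 11207
_couchbases._tcp.cb.example.com. 3600 IN SRV 0 0 11207 node1.example.com.
```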

Which SDK are you using?

A DNS SRV record is no real difference from VIP'ing it via an F5, is it? All I'm doing is controlling up/down at the NLB level, whereas DNS SRV records won't know up from down.

It's a bit different because in Couchbase, performance is derived in part from the SDK's close understanding of the cluster topology. In fact, we don't even use HTTP to retrieve that topology by default.

The SDK will iterate through the list of hosts in the SRV record providing that service, then retrieve the topology from one after authentication (even SSL’d authentication), and set itself up to learn about cluster topology updates from all of the nodes of the cluster.
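The iteration described above can be sketched roughly like this. This is a simplification, not the real SDK code — the actual bootstrap authenticates and pulls the cluster map over the memcached protocol — but it models the "try each SRV target in order until one is reachable" step:

```python
import socket

def pick_bootstrap_node(hosts, timeout=1.0):
    """Return the first (host, port) that accepts a TCP connection.

    `hosts` is a list of (host, port) pairs, as resolved from the
    _couchbase._tcp SRV record. The real SDK would go on to authenticate
    against this node and retrieve the cluster topology from it.
    """
    for host, port in hosts:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port
        except OSError:
            continue  # node down or unreachable -- try the next SRV target
    raise ConnectionError("no bootstrap node reachable")
```

Once topology is retrieved from that first node, the SDK subscribes to topology updates from every node, which is why a mid-tier balancer adds little here.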

I'm not using the BigIP for SSL offloading; I'm simply passing the TCP traffic through and ensuring it can forward. However, CB requires SSL connectivity because it has its own cert.

I understand that. The idea here is that the SDK does its bootstrapping over memcached protocol on port 11210 and will automatically find a node to bootstrap into the cluster.
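In connection-string terms, that means pointing the SDK at the cluster name (or SRV name) rather than at a VIP; something like the following, where the hostnames are placeholders:

```
couchbase://cb.example.com                        # plain bootstrap, memcached protocol on 11210
couchbases://cb.example.com                       # TLS bootstrap on 11207
couchbase://node1.example.com,node2.example.com   # explicit node list, no SRV lookup
```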