3 Node cluster reporting extra nodes to clients

Just wondering if others have seen this before. I have a number of 3 node clusters in test/dev environments which have undergone a lot of configuration changes over the last month. The issue I now see is that some of these clusters are reporting extra server nodes to the clients. In the Java client log I can see 4 server nodes: 3 are referenced by DNS name and IP address, while the 4th is referenced by IP address only and is a duplicate of one of the other 3.

Here is an example from the java service log:

INFO com.couchbase.client.CouchbaseClient: CouchbaseConnectionFactory{bucket='inventory', nodes=[http://172.20.30.47:8091/pools, http://172.20.30.48:8091/pools, http://172.20.30.49:8091/pools], order=RANDOM, opTimeout=10000, opQueue=16384, opQueueBlockTime=15000, obsPollInt=400, obsPollMax=25, obsTimeout=10000, viewConns=10, viewTimeout=75000, viewWorkers=1, configCheck=10, reconnectInt=500, failureMode=Redistribute, hashAlgo=NATIVE_HASH, authWaitTime=2500}
INFO com.couchbase.client.CouchbaseClient: viewmode property isn't defined. Setting viewmode to production mode
INFO :couchbase.CBSession$1->connectionEstablished(124) - DS.CONNECTED_OK
INFO :couchbase.CBSession$1->connectionEstablished(124) - DS.CONNECTED_OK
INFO :couchbase.CBSession$1->connectionEstablished(124) - DS.CONNECTED_OK
INFO :couchbase.CBSessionManager->startPool(117) - DS.DB.SUCCESS_CONNECTIONS
INFO :common.InventoryAppContext->connect(148) - IV.STARTED_DATASOURCE
INFO net.spy.memcached.auth.AuthThread: Authenticated to server1.domain.com/172.20.30.49:11210
INFO net.spy.memcached.auth.AuthThread: Authenticated to server2.domain.com/172.20.30.47:11210
INFO net.spy.memcached.auth.AuthThread: Authenticated to server3.domain.com/172.20.30.48:11210
INFO net.spy.memcached.auth.AuthThread: Authenticated to /172.20.30.47:11210
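For reference, the client is bootstrapped against the three nodes roughly along these lines. This is only a minimal sketch: the bucket name and node list are taken from the log above, while the class name and the bucket password are placeholders, and the real service uses its own wrapper around the 1.x Java SDK.

import com.couchbase.client.CouchbaseClient;
import java.io.IOException;
import java.net.URI;
import java.util.Arrays;
import java.util.List;

public class InventoryClientBootstrap {
    public static void main(String[] args) throws IOException {
        // Bootstrap list matches the nodes shown in the CouchbaseConnectionFactory log line above.
        List<URI> baseList = Arrays.asList(
            URI.create("http://172.20.30.47:8091/pools"),
            URI.create("http://172.20.30.48:8091/pools"),
            URI.create("http://172.20.30.49:8091/pools"));

        // "inventory" bucket from the log; the bucket password here is a placeholder.
        CouchbaseClient client = new CouchbaseClient(baseList, "inventory", "password");

        // ... normal get/set usage ...

        client.shutdown();
    }
}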

I am seeing this on a number of clusters (from the client logs). The reason for concern is that connections to port 11210 are not being distributed evenly across the Couchbase cluster; we are seeing more connections to the duplicated server node than to the others.
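To cross-check the client logs, the plan is to compare them against the node list the cluster itself advertises over the REST API, something like the rough sketch below. The admin credentials and class name are placeholders, the IP is one of the nodes from the log above, and /pools/default is just the standard cluster overview endpoint; each entry in its "nodes" array shows how that node is registered, which is where a duplicate IP-only entry ought to show up.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DumpClusterNodes {
    public static void main(String[] args) throws Exception {
        // Placeholder admin credentials for the REST API.
        String credentials = "Administrator:password";
        String basicAuth = Base64.getEncoder()
            .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));

        // One of the three nodes from the client log above.
        URL url = new URL("http://172.20.30.47:8091/pools/default");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Basic " + basicAuth);

        // Print the raw JSON; the "nodes" array lists each registered node and its hostname.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}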

I was wondering if others have seen this behaviour and if there is a way to clean up the server nodes that have been registered twice. I've searched around on this issue but haven't come up with any solid answers so far.

This is on Couchbase Server 2.2, and one environment running 2.5.1 is showing the same behaviour.

Let me know if I need to provide more information.