We have implemented a solution using the kafka-connector to stream DCPEvents onto Kafka, and we would like to run concurrent connectors for high-availability reasons.
The behaviour I am seeing is that a single connector running on its own receives all changes as DCPEvents, as I expect, but with more than one running in parallel, none of the connectors receives any events at all, as if they simply aren't being fired. A running connector that is happily receiving all changes as DCPEvents will seemingly cease to function as soon as another one is started, and will continue to not function even after all the others have stopped.
Is there a reasonable explanation for this? I would have expected at least one, if not all, of the connectors to receive events from the stream, but as I say, I cannot see any events being fired at all.
Any help would be much appreciated.
I am using kafka-connector 2.0.0 with core-io 1.2.6, and Couchbase 4.1.0-5005 Enterprise Edition (build-5005).