XDCR One-Way with cleanup


Is there a way to set up unidirectional XDCR so that all of the source buckets eventually send their data to a destination target cluster and then, upon successful transmission to the remote bucket, each document removes itself from the source bucket?

This way, all the source buckets empty themselves into the destination bucket and then the source buckets clean themselves up and drop the documents.

If not, is there a way to have Couchbase send a signal when its remote queue has been successfully written?
And if so, is there a way to delete a document in a source bucket and NOT have that deletion propagate via XDCR to the remote bucket?


You can do this by removing the replication before dropping the source bucket. The signal will have to be something you process yourself, however; there isn't an auto-magic way to do this. Look at a few indicators, such as the pending replications and maybe row counts across both the source and target buckets, if this is a purely insert-only workload.
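As a rough sketch of that check, the decision logic might look like the following. Note this is an assumption-laden sketch: the three inputs would come from the XDCR `changes_left` statistic and each bucket's item count (fetched via the cluster's REST stats endpoints), but the HTTP calls are omitted so only the comparison itself is shown.

```python
def safe_to_delete_source(changes_left: int,
                          source_item_count: int,
                          target_item_count: int) -> bool:
    """Heuristic: replication has drained and, for an insert-only
    workload, the target holds at least every source document.

    `changes_left` would come from the XDCR pending-replication
    statistic; the item counts from each bucket's basic stats.
    All three are supplied by the caller to keep this self-contained.
    """
    return changes_left == 0 and target_item_count >= source_item_count


# Example: nothing pending and the target has caught up.
print(safe_to_delete_source(0, 1000, 1000))   # True
print(safe_to_delete_source(5, 1000, 995))    # False
```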

If you plan on doing this regularly to a single destination bucket, however, be aware that if the destination receives a key that already exists on the destination, XDCR will engage conflict resolution, which means it will compare the revision IDs to resolve the conflict. You can find more information here: http://docs.couchbase.com/admin/admin/Tasks/xdcr-conflictResolution.html



I think you misunderstood me. I want all the source buckets to continuously be available as targets for clients to drop documents into. I will not be dropping any buckets, only source documents after they are replicated by XDCR.

But I want XDCR to ultimately take all the documents in the source buckets from all the source clusters and deliver them to the destination cluster/bucket for persistence and processing there.

This means the source clusters, after they have transmitted documents via XDCR to the destination cluster/bucket, no longer need to retain those documents and can get rid of them. But the source clusters/buckets need to be able to receive MORE documents and keep delivering them to the destination cluster/bucket.


I did misread. If you are simply trying to not replicate deletes, there isn't a built-in method for this. The workaround would be updating two clusters from your app for all operations with identical data, but in the cases where you delete, you only issue that delete to one cluster. You won't be using XDCR in that case.
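That app-level workaround could be sketched like this, with `Cluster` standing in for whatever SDK client you use (a hypothetical in-memory stub, not the actual Couchbase SDK API):

```python
class Cluster:
    """Stand-in for an SDK client; stores documents in a dict."""
    def __init__(self) -> None:
        self.docs: dict[str, dict] = {}

    def upsert(self, key: str, doc: dict) -> None:
        self.docs[key] = doc

    def remove(self, key: str) -> None:
        self.docs.pop(key, None)


class DualWriter:
    """App-level replication: mutate both clusters identically,
    but only delete from the 'source' so the 'archive' keeps
    everything ever written."""
    def __init__(self, source: Cluster, archive: Cluster) -> None:
        self.source = source
        self.archive = archive

    def upsert(self, key: str, doc: dict) -> None:
        self.source.upsert(key, doc)   # write identical data...
        self.archive.upsert(key, doc)  # ...to both clusters

    def remove(self, key: str) -> None:
        self.source.remove(key)        # delete does NOT propagate


src, dst = Cluster(), Cluster()
w = DualWriter(src, dst)
w.upsert("doc1", {"v": 1})
w.remove("doc1")
print("doc1" in src.docs, "doc1" in dst.docs)  # False True
```

The trade-off versus XDCR is that your app now owns consistency between the two clusters: a failed write to one side must be retried or reconciled by you.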