Metadata Purge Interval and XDCR



I’m testing Couchbase 3.0.1 Community Edition.
When I try to change the “Metadata Purge Interval” to 0.12, I get this message:
"The Metadata purge interval should always be set to a value that is
greater than the indexing or XDCR lag. Are you sure you want to change
the metadata purge interval?"
My question is: where can I find those two values (the indexing and XDCR lag)? I’ve dug through the manual but could not find them.

One more question.
When I try to flush the data bucket, it says: “Cannot flush buckets with outgoing XDCR”.
I paused the XDCR replication on the server; do I have to stop replication on all servers before I can flush?
My replication setting:
Srv01 -> Srv02 and Srv03
Srv02 -> Srv01 and Srv03
Srv03 -> Srv01 and Srv02

And one more :smile:
I want to store PHP session files. During the tests, the data bucket’s item count just keeps growing and growing; when do these items get deleted? I set session.gc_probability and session.gc_divisor to “1” and session.gc_maxlifetime = 10800, so that should delete all sessions older than 3 hours with 100% probability, but it doesn’t.
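To double-check the math on those two settings (a quick sketch, nothing Couchbase-specific):

```shell
# Per-request chance that PHP runs session GC is gc_probability / gc_divisor.
# With 1/1 that is every request; with the PHP defaults (1/100), about 1%.
gc_probability=1
gc_divisor=1
gc_chance=$(awk -v p="$gc_probability" -v d="$gc_divisor" \
    'BEGIN { printf "%.0f", 100 * p / d }')
echo "GC chance per request: ${gc_chance}%"
```

So GC should be firing on every single request.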

Thanks, Robert


Any thoughts?
Actually, the answers to the first and second questions are already known.


Hi Robert,
For the first one, the message you see while changing the “metadata purge interval” is just a warning. Purging metadata on the source while indexes are being built, or while cross-datacenter replication is in progress, can cause data corruption. Depending on your workload, you can check the monitoring stats to find your indexing and XDCR lag/latency times.

Second, yes, that’s the current behavior: if you want to flush the bucket on the source cluster, you need to follow the steps here. If you want to flush the bucket on the destination, you don’t have to tear down the replication streams.

Third, let us know which version of the PHP SDK you are using. Additionally, you can check the expiry-related stats on Couchbase Server to see whether items are actually getting expired after the time interval.
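For example, something like this (the cbstats path, host, and bucket name are placeholders, and the sample numbers at the bottom are made up purely to show the output format):

```shell
# On a real node you would run something like:
#   /opt/couchbase/bin/cbstats localhost:11210 -b <bucket> all | grep expired
# Demo of the filter below uses a tiny sample in cbstats' "name: value" format.
sample=' ep_expired_access: 0
 ep_expired_pager: 42
 curr_items: 1000'
expired_stats=$(printf '%s\n' "$sample" | grep -E 'ep_expired')
printf '%s\n' "$expired_stats"
```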

Hope that helps!

Anil Kumar



Thanks for the reply.

We are using (atm):
Ubuntu 12.04 LTS
PHP 5.3.10-1ubuntu3.14 with Suhosin-Patch and
php5-memcached 1.0.2-2

The ep_expired_access stat is “0”.
The stats show delete_hits: 24708 and delete_misses: 22892.
Does that mean Couchbase tried to delete 24708 sessions but could not delete 22892?
vb_active_expired shows 38990 items.

When I check the Web UI, the “deletes per second” graph shows deletes while the page is in use.
BTW, does the bucket’s “Item Count” show only the active sessions, or active plus expired sessions?

I tried different session.gc_maxlifetime and session.cache_expire settings in php.ini.


Delete hits and delete misses are statistics that relate to front-end operations. By front-end operations I mean operations done directly against Couchbase by your application, such as set, get, append, and delete. The delete hits stat means “the number of times your application issued a delete and the item it tried to delete existed in Couchbase.” Delete misses is the opposite: “the number of times your application issued a delete and the item did not exist in Couchbase.”
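To put your numbers into that frame (using the values from your stats):

```shell
# The two counters are cumulative tallies over all delete operations, not a
# pending/failed queue: a miss just means the key was already gone (e.g.
# already expired) by the time the delete arrived.
delete_hits=24708
delete_misses=22892
total_deletes=$((delete_hits + delete_misses))
echo "total deletes issued: ${total_deletes}"
```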

As for the expiration stats, you mentioned that ep_expired_access is 0. Couchbase expires items at a set interval, which I believe is 1 hour by default. This means that for up to 1 hour, expired items will still be present in Couchbase. These items are removed in one of two ways. The first is when the application tries to access the item: in this case the item is immediately deleted, ep_expired_access is incremented, and a NOT_FOUND response is returned to the application. The second is through the expiry pager, which runs automatically in Couchbase. There should be an ep_expired_pager stat which is incremented each time the expiry pager runs and deletes an expired item.

As you mentioned in your last question, the item count includes both active and expired items. This is because we expire items lazily, either through the expiry pager or when an item is subsequently accessed. Over time (every hour by default) these items will be removed from the database.

If you want the expiry pager to run more frequently, you will need to modify the expiry pager interval on each node. You can do this with the cbepctl command below; the parameter to change is exp_pager_stime:

cbepctl [hostname]:11210 -b [bucket-name] -p [bucket-password] set flush_param exp_pager_stime [value in seconds]
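For example (a sketch; the bucket name and password are placeholders, and as far as I know the setting has to be applied on every node and reapplied after a node restart):

```shell
# exp_pager_stime is given in seconds; e.g. to run the expiry pager every
# 10 minutes instead of the default hour:
interval_seconds=$((10 * 60))
echo "exp_pager_stime=${interval_seconds}"
# Then, on each node (placeholder bucket/password):
#   cbepctl localhost:11210 -b session-bucket -p password \
#       set flush_param exp_pager_stime 600
```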