Couchbase view compaction fails in 4.0, resulting in ever-growing disk usage

Couchbase Community version 4.0 with XDCR replication to Elasticsearch, single node, ~8 design documents, each with several views. Data size is ~5-6 GB. Disk usage stays more or less constant for several days after a fresh installation, then starts growing, presumably once the compaction threshold is reached. Growth is slow at the beginning but can reach several GB within a few minutes. The only solution I have found is to back up, reinstall, and restore, but the problem returns after a few days or weeks.
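As a stopgap that avoids a full reinstall, view compaction can also be requested manually over the REST API. A minimal sketch, assuming the documented `compactView` endpoint; the host, bucket, and design-document names below are placeholders taken from my setup:

```python
from urllib.parse import quote

def compact_view_url(host: str, bucket: str, ddoc: str) -> str:
    """Build the URL for Couchbase's manual view-compaction REST endpoint.

    The design-document name contains a slash, so it must be
    percent-encoded ("_design/dev_product" -> "_design%2Fdev_product").
    """
    return (f"http://{host}:8091/pools/default/buckets/{bucket}"
            f"/ddocs/{quote(ddoc, safe='')}/controller/compactView")

# POST this URL with admin credentials to trigger compaction for one
# design document, e.g.: curl -X POST -u Administrator:password <url>
print(compact_view_url("127.0.0.1", "rtb", "_design/dev_product"))
```

This only postpones the problem, since the failing compactor is presumably the same code path.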

In the logs I can see view compaction errors:

    [ns_server:info,2016-03-24T04:58:32.788+01:00,ns_1@127.0.0.1:<0.26292.335>:compaction_new_daemon:spawn_view_index_compactor:850]Compacting indexes for mapreduce_view/rtb/_design/dev_product/main. Compaction is scheduled
    [ns_server:warn,2016-03-24T04:58:41.171+01:00,ns_1@127.0.0.1:<0.26308.335>:compaction_new_daemon:do_chain_compactors:592]Compactor for view `mapreduce_view/rtb/_design/dev_product/main` (pid [{type,
    view},
    {important,
    true},
    {name,
    <<"mapreduce_view/rtb/_design/dev_product/main">>},
    {fa,
    {#Fun<compaction_new_daemon.23.85348694>,
    [<<"rtb">>,
    <<"_design/dev_product">>,
    mapreduce_view,
    main,
    {config,
    {30,
    18446744073709551616},
    {30,
    18446744073709551616},
    undefined,
    false,
    false,
    {daemon_config,
    30,
    131072}},
    false,
    {[{type,
    bucket}]}]}}]) terminated unexpectedly: {view_group_index_compactor_exit,
    91}
    [ns_server:warn,2016-03-24T04:58:41.172+01:00,ns_1@127.0.0.1:<0.24540.335>:compaction_new_daemon:do_chain_compactors:597]Compactor for view `rtb/_design/dev_product` (pid [{type,view},
    {name,
    <<"rtb/_design/dev_product">>},
    {important,false},
    {fa,
    {#Fun<compaction_new_daemon.25.85348694>,
    [<<"rtb">>,
    <<"_design/dev_product">>,
    {config,
    {30,
    18446744073709551616},
    {30,
    18446744073709551616},
    undefined,false,false,
    {daemon_config,30,
    131072}},
    false,
    {[{type,bucket}]}]}}]) terminated unexpectedly (ignoring this): {view_group_index_compactor_exit,
    91}
    [ns_server:info,2016-03-24T04:58:46.855+01:00,ns_1@127.0.0.1:<0.26498.335>:compaction_new_daemon:spawn_master_db_compactor:791]Start compaction of master db for bucket auction with config:
    [{database_fragmentation_threshold,{30,undefined}},
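
For context, the `{30, 18446744073709551616}` pairs in the compactor config above are the fragmentation thresholds: 30 percent fragmentation, with the size threshold effectively disabled (2^64 bytes). A sketch of how I understand that trigger to be evaluated, assuming the usual fragmentation formula (dead space as a share of file size):

```python
def should_compact(disk_size: int, data_size: int, pct_threshold: int = 30) -> bool:
    """Return True once estimated fragmentation reaches the threshold.

    Fragmentation is the share of the on-disk file not occupied by live
    data; compaction is scheduled when it reaches the configured
    percentage (30 by default, matching the config in the log above).
    """
    if disk_size <= 0:
        return False
    fragmentation = (disk_size - data_size) / disk_size * 100
    return fragmentation >= pct_threshold

# A 10 GB index file holding 6 GB of live data is 40% fragmented:
print(should_compact(10 * 1024**3, 6 * 1024**3))  # True
```

So compaction does get scheduled; it is the compactor process itself that dies with `{view_group_index_compactor_exit, 91}`, which would explain why the fragmented view files keep growing.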

The same setup performed well on the previous Couchbase version (3.1).

Any ideas why compaction fails and causes constantly increasing disk usage?

I have recently been seeing the same thing with Couchbase Community 6.0 on a very similar single-node setup. Any ideas as to why?