Endless compaction process causes memory leak (potential)


#1

Hello!
CB-server 4.1.X, 3 nodes, 650K docs.
With "heavy" auto-compaction settings (MPI=0.04, DBF=2%/1MB) I have seen endless compaction for more than 1 day (and in fact, as I understand it, there is nothing left to compact):

[ns_server:info,2016-08-24T07:05:08.530Z,ns_1@80.node:<0.26030.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/839">> has finished with ok
[ns_server:info,2016-08-24T07:05:08.532Z,ns_1@80.node:<0.26022.213>:compaction_new_daemon:maybe_compact_vbucket:720]Compacting 'storage/847', DataSize = 161195, FileSize = 163931, Options = {1472018827,
5147,
false}
[ns_server:info,2016-08-24T07:05:08.598Z,ns_1@80.node:<0.26013.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/840">> has finished with ok
[ns_server:info,2016-08-24T07:05:08.599Z,ns_1@80.node:<0.25994.213>:compaction_new_daemon:maybe_compact_vbucket:720]Compacting 'storage/848', DataSize = 180695, FileSize = 184411, Options = {1472018827,
5044,
false}
[ns_server:info,2016-08-24T07:05:08.672Z,ns_1@80.node:<0.26024.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/843">> has finished with ok
[ns_server:info,2016-08-24T07:05:08.679Z,ns_1@80.node:<0.26125.213>:compaction_new_daemon:maybe_compact_vbucket:720]Compacting 'storage/850', DataSize = 164401, FileSize = 168027, Options = {1472018827,
7444,
false}
[ns_server:info,2016-08-24T07:05:08.749Z,ns_1@80.node:<0.26015.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/845">> has finished with ok
[ns_server:info,2016-08-24T07:05:08.750Z,ns_1@80.node:<0.26061.213>:compaction_new_daemon:maybe_compact_vbucket:720]Compacting 'storage/851', DataSize = 168646, FileSize = 172123, Options = {1472018827,
5261,
false}
[ns_server:info,2016-08-24T07:05:08.824Z,ns_1@80.node:<0.26022.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/847">> has finished with ok
[ns_server:info,2016-08-24T07:05:08.827Z,ns_1@80.node:<0.26101.213>:compaction_new_daemon:maybe_compact_vbucket:720]Compacting 'storage/853', DataSize = 164428, FileSize = 168027, Options = {1472018827,
5093,
false}
[ns_server:info,2016-08-24T07:05:08.908Z,ns_1@80.node:<0.25994.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/848">> has finished with ok
[ns_server:info,2016-08-24T07:05:08.984Z,ns_1@80.node:<0.26125.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/850">> has finished with ok
[ns_server:info,2016-08-24T07:05:09.060Z,ns_1@80.node:<0.26061.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/851">> has finished with ok
[ns_server:info,2016-08-24T07:05:09.127Z,ns_1@80.node:<0.26101.213>:compaction_new_daemon:maybe_compact_vbucket:723]Compaction of <<"storage/853">> has finished with ok
[ns_server:info,2016-08-24T07:05:09.127Z,ns_1@80.node:<0.24286.213>:compaction_new_daemon:spawn_dbs_compactor:675]Finished compaction of databases for bucket storage
[ns_server:info,2016-08-24T07:05:09.128Z,ns_1@80.node:<0.26051.213>:compaction_new_daemon:spawn_scheduled_kv_compactor:467]Start compaction of vbuckets for bucket links with config:
[{parallel_db_and_view_compaction,false},
{database_fragmentation_threshold,{2,1048576}},
{view_fragmentation_threshold,{2,1048576}}]
[ns_server:debug,2016-08-24T07:05:09.133Z,ns_1@80.node:<0.26062.213>:compaction_new_daemon:bucket_needs_compaction:953]links data size is 800593, disk size is 8459044
[ns_server:debug,2016-08-24T07:05:09.134Z,ns_1@80.node:compaction_new_daemon<0.351.0>:compaction_new_daemon:process_compactors_exit:1332]Finished compaction iteration.
[ns_server:debug,2016-08-24T07:05:09.134Z,ns_1@80.node:compaction_new_daemon<0.351.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 4s

Is this normal? (Especially the "Finished compaction for compact_kv too soon. Next run will be in 4s" part.)
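The fragmentation figures in the log above can be sanity-checked by hand. A minimal sketch, assuming fragmentation is computed as (FileSize - DataSize) / FileSize (the exact formula compaction_new_daemon uses internally may differ) and using the DataSize/FileSize values from the log lines:

```python
# Rough check of the fragmentation figures from the log above.
# Assumption: fragmentation = (file_size - data_size) / file_size; the
# daemon's internal calculation may not match this exactly.

def fragmentation_pct(data_size: int, file_size: int) -> float:
    """Percentage of the on-disk file occupied by stale/overhead bytes."""
    return (file_size - data_size) / file_size * 100

# Values taken verbatim from the log lines above:
items = {
    "storage/847 (vbucket)": (161195, 163931),
    "storage/848 (vbucket)": (180695, 184411),
    "links (whole bucket)":  (800593, 8459044),
}

for name, (data, disk) in items.items():
    print(f"{name}: {fragmentation_pct(data, disk):.1f}% (threshold 2%)")
```

The per-vbucket numbers land right around 2%: each vbucket file carries some fixed overhead beyond the raw data, so with a 2% threshold the computed fragmentation can hover at the trigger point and compaction keeps re-firing every cycle. That would be consistent with the "too soon" message you are seeing.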


#2

And it seems like there is a memory leak.

1. No activity (except the looped auto-compaction) starting from ~24 Aug, 00:00 am.
2. ~03:00 pm: one-by-one restart of all 3 nodes; a little more free RAM.
3. ~03:25 pm: full restart of all servers (i.e. a full OS reboot on all 3 nodes); +5 GB free RAM at once.

Looks like a memory leak.


#3

Free RAM dropping is expected - Linux likes to use as much RAM as is available for caching - see http://www.linuxatemyram.com
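To illustrate the point: on Linux, `MemFree` in /proc/meminfo undercounts what your applications can actually get, because page cache is reclaimed on demand; `MemAvailable` is the figure to watch. A minimal sketch parsing /proc/meminfo-style text (a made-up sample is inlined so it runs anywhere):

```python
# Sketch: why "free" RAM looks low on Linux. MemAvailable, not MemFree,
# is the realistic figure, since page cache is reclaimable.
# The sample text below uses real /proc/meminfo field names with
# made-up values.

def parse_meminfo(text: str) -> dict:
    """Map field name -> value in kB from /proc/meminfo-formatted text."""
    out = {}
    for line in text.strip().splitlines():
        key, rest = line.split(":", 1)
        out[key] = int(rest.split()[0])  # values are reported in kB
    return out

sample = """\
MemTotal:       16303428 kB
MemFree:          512340 kB
MemAvailable:   11284756 kB
Buffers:          401224 kB
Cached:          9873412 kB
"""

info = parse_meminfo(sample)
reclaimable = info["MemAvailable"] - info["MemFree"]
print(f"MemFree: {info['MemFree']} kB, "
      f"MemAvailable: {info['MemAvailable']} kB "
      f"({reclaimable} kB is reclaimable cache)")
```

On a real host you would read `open("/proc/meminfo").read()` instead of the sample string - which also explains the +5 GB "freed" by a reboot: that was mostly cache being dropped, not leaked memory being returned.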

What's more interesting is the mem_used stat - i.e. how much memory the Couchbase processes themselves are using.

See the "In Use" figure for a total, or the per-bucket "Memory Used" graph.
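If you prefer to track mem_used outside the UI, the bucket stats REST endpoint (`GET /pools/default/buckets/<bucket>/stats`) exposes it as a series of samples. A hedged sketch - the payload shape below is an assumption based on the 4.x stats API, and the values are made up, so verify against your cluster; only the JSON parsing is shown, fetching would go through urllib with your cluster credentials:

```python
# Hypothetical sketch: extract the latest mem_used sample from a bucket
# stats response. Response shape is assumed from the Couchbase 4.x REST
# API ("op" -> "samples" -> "mem_used" list); values below are made up.
import json

def latest_mem_used(stats: dict) -> int:
    """Return the most recent mem_used sample, in bytes."""
    return stats["op"]["samples"]["mem_used"][-1]

# Example payload (shape assumed, values invented):
payload = json.loads(
    '{"op": {"samples": {"mem_used": [104857600, 104923136]}}}'
)
print(latest_mem_used(payload))
```

Polling this over time would make a genuine leak in the Couchbase processes visible as a steady climb in mem_used, independent of what the OS reports as "free".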


#4

@drigby, thanks, very useful :slight_smile:

P.S. Yeah, kinda "cool irony" :wink:

There are no downsides, except for confusing newbies