Regular bucket shutdown and exit of memcached with status 137

#1

I have a two-node cluster running Couchbase 4.1.0 DP on two Debian 7 machines, with 4 buckets holding a large amount of data. It is not deployed in production; it is just for evaluation, so it is not receiving any traffic.

It happens that I’m regularly getting the following errors, even without any activity (reads or writes) on the cluster:

Control connection to memcached on 'ns_1@10.32.3.212' disconnected: {error,
closed} (repeated 2 times)

Service 'memcached' exited with status 137. Restarting. Messages: 2015-11-30T15:23:47.418214Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
2015-11-30T15:23:47.422229Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
2015-11-30T15:23:47.526705Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
2015-11-30T15:23:47.580204Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
2015-11-30T15:23:47.585363Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database

Control connection to memcached on 'ns_1@10.32.3.212' disconnected: {error,
closed}

Control connection to memcached on 'ns_1@10.32.3.212' disconnected: {{badmatch,
{error,
closed}},
[{mc_client_binary,
stats_recv,
4,
[{file,
"src/mc_client_binary.erl"},
{line,
170}]},
{mc_client_binary,
stats,
4,
[{file,
"src/mc_client_binary.erl"},
{line,
418}]},
{ns_memcached,
handle_info,
2,
[{file,
"src/ns_memcached.erl"},
{line,
723}]},
{gen_server,
handle_msg,
5,
[{file,
"gen_server.erl"},
{line,
604}]},
{ns_memcached,
init,1,
[{file,
"src/ns_memcached.erl"},
{line,
177}]},
{gen_server,
init_it,
6,
[{file,
"gen_server.erl"},
{line,
304}]},
{proc_lib,
init_p_do_apply,
3,
[{file,
"proc_lib.erl"},
{line,
239}]}]}

There are no replicas; the data is distributed across the two nodes, and both of them fail indiscriminately.

Thanks in advance. Any help is much appreciated.

#2

The “control connection” being closed looks suspiciously like the memcached process is crashing. The process name is a little deceiving: in Couchbase, the memcached binary loads all of the buckets, even the Couchbase-type buckets.

Do you have a core file hanging around there? Or does the babysitter log (see the docs for log directories) show the process being restarted?

#3

Exit status 137 means that the memcached process was sent a SIGKILL signal, so something external to Couchbase Server killed the process. Are there any reports of the OOM-killer in the system logs (dmesg)?
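A quick way to check this yourself, assuming a typical Linux box (the arithmetic is the standard 128 + signal-number convention shells use for processes killed by a signal):

```shell
# Exit status 137 = 128 + 9, i.e. the process died from signal 9 (SIGKILL)
sig=$(( 137 - 128 ))
kill -l "$sig"                 # prints the signal name: KILL

# Look for OOM-killer activity in the kernel log
# (may need root; "|| true" so an empty result isn't an error)
dmesg | grep -i 'oom' || true
```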

#4

I think the problem was that the nodes were too tight on memory: the RAM quota per node was too high compared with the total amount of physical memory. I have added a third node to the cluster, giving more total memory to the buckets while reducing the RAM quota per node, so there is more memory available to the OS.
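As a rough sanity check on that sizing, here is the arithmetic I mean, sketched for a hypothetical 4096 MB node (the 20% headroom figure is the commonly cited guideline of leaving a slice of each node's RAM to the OS and the other Couchbase services, not a number from this thread):

```shell
# Hypothetical example: how much of a 4096 MB node can go to bucket quotas
# if we reserve ~20% headroom for the OS and non-memcached services.
total_mb=4096
headroom_pct=20
quota_mb=$(( total_mb * (100 - headroom_pct) / 100 ))
echo "$quota_mb MB"            # → 3276 MB
```

If the sum of the per-node bucket quotas is above that line, the OOM-killer is essentially guaranteed to fire eventually, which matches the dmesg output below.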

Anyway, here is a portion of the babysitter log:

[ns_server:info,2015-11-30T19:41:00.454Z,babysitter_of_ns_1@127.0.0.1:<0.391.0>:ns_port_server:log:210]memcached<0.391.0>: 2015-11-30T19:41:00.253097Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
memcached<0.391.0>: 2015-11-30T19:41:00.267755Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
memcached<0.391.0>: 2015-11-30T19:41:00.281142Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
memcached<0.391.0>: 2015-11-30T19:41:00.293183Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
memcached<0.391.0>: 2015-11-30T19:41:00.301124Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
memcached<0.391.0>: 2015-11-30T19:41:00.309098Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database
memcached<0.391.0>: 2015-11-30T19:41:00.318449Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database

[ns_server:info,2015-11-30T19:41:15.655Z,babysitter_of_ns_1@127.0.0.1:<0.390.0>:supervisor_cushion:handle_info:58]Cushion managed supervisor for memcached failed:  {abnormal,137}
[ns_server:debug,2015-11-30T19:41:15.655Z,babysitter_of_ns_1@127.0.0.1:<0.393.0>:supervisor_cushion:init:39]starting ns_port_server with delay of 5000
[error_logger:error,2015-11-30T19:41:15.655Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.391.0> terminating
** Last message in was {#Port<0.3883>,{exit_status,137}}
** When Server state == {state,#Port<0.3883>,memcached,
                       {["2015-11-30T19:41:00.318449Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database",
                         "2015-11-30T19:41:00.309098Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database",
                         "2015-11-30T19:41:00.301124Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database",
                         "2015-11-30T19:41:00.293183Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database"],
                        ["2015-11-30T19:41:00.281142Z WARNING (Bingo_Card) Engine warmup is complete, request to stop loading remaining database"]},
                       undefined,undefined,[],0}
** Reason for termination ==
** {abnormal,137}

[error_logger:error,2015-11-30T19:41:15.656Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:

initial call: ns_port_server:init/1
pid: <0.391.0>
registered_name: []
exception exit: {abnormal,137}
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [<0.390.0>,<0.389.0>,ns_child_ports_sup,ns_babysitter_sup,
<0.56.0>]
messages: [{'EXIT',#Port<0.3883>,normal}]
links: [<0.390.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 17731
stack_size: 27
reductions: 3270550
neighbours:

[error_logger:error,2015-11-30T19:41:15.656Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.390.0> terminating
** Last message in was {die,{abnormal,137}}
** When Server state == {state,memcached,5000,
                       {1448,911394,878319},
                       undefined,infinity}
** Reason for termination ==
** {abnormal,137}

[error_logger:error,2015-11-30T19:41:15.656Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:

initial call: supervisor_cushion:init/1
pid: <0.390.0>
registered_name: []
exception exit: {abnormal,137}
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [<0.389.0>,ns_child_ports_sup,ns_babysitter_sup,<0.56.0>]
messages: []
links: [<0.389.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 1811
neighbours:

[error_logger:error,2015-11-30T19:41:15.656Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:

initial call: erlang:apply/2
pid: <0.389.0>
registered_name: []
exception exit: {abnormal,137}
in function restartable:loop/4 (src/restartable.erl, line 69)
ancestors: [ns_child_ports_sup,ns_babysitter_sup,<0.56.0>]
messages: []
links: [<0.69.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 376
stack_size: 27
reductions: 133
neighbours:

[error_logger:error,2015-11-30T19:41:15.656Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================

Supervisor: {local,ns_child_ports_sup}
Context: child_terminated
Reason: {abnormal,137}
Offender: [{pid,<0.389.0>},
{name,
{memcached,"/opt/couchbase/bin/memcached",
["-C",
"/opt/couchbase/var/lib/couchbase/config/memcached.json"],
[{env,
[{"EVENT_NOSELECT","1"},
{"MEMCACHED_TOP_KEYS","5"},
{"ISASL_PWFILE",
"/opt/couchbase/var/lib/couchbase/isasl.pw"}]},
use_stdio,stderr_to_stdout,exit_status,
port_server_dont_start,stream]}},
{mfargs,
{restartable,start_link,
[{supervisor_cushion,start_link,
[memcached,5000,infinity,ns_port_server,
start_link,
[#Fun<ns_child_ports_sup.2.49698737>]]},
86400000]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,worker}]

[error_logger:info,2015-11-30T19:41:15.657Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
  supervisor: {local,ns_child_ports_sup}
     started: [{pid,<0.392.0>},
               {name,
                   {memcached,"/opt/couchbase/bin/memcached",
                       ["-C",
                        "/opt/couchbase/var/lib/couchbase/config/memcached.json"],
                       [{env,
                            [{"EVENT_NOSELECT","1"},
                             {"MEMCACHED_TOP_KEYS","5"},
                             {"ISASL_PWFILE",
                              "/opt/couchbase/var/lib/couchbase/isasl.pw"}]},
                        use_stdio,stderr_to_stdout,exit_status,
                        port_server_dont_start,stream]}},
               {mfargs,
                   {restartable,start_link,
                       [{supervisor_cushion,start_link,
                            [memcached,5000,infinity,ns_port_server,
                             start_link,
                             [#Fun<ns_child_ports_sup.2.49698737>]]},
                        86400000]}},
               {restart_type,permanent},
               {shutdown,infinity},
               {child_type,worker}]

[ns_server:info,2015-11-30T19:41:16.939Z,babysitter_of_ns_1@127.0.0.1:<0.394.0>:ns_port_server:log:210]memcached<0.394.0>: 2015-11-30T19:41:16.735024Z WARNING NUMA: Set memory allocation policy to 'interleave'.
[ns_server:info,2015-11-30T19:41:18.058Z,babysitter_of_ns_1@127.0.0.1:<0.130.0>:supervisor_cushion:handle_info:58]Cushion managed supervisor for indexer failed:  {abnormal,1}
[error_logger:error,2015-11-30T19:41:18.058Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.131.0> terminating
** Last message in was {#Port<0.3586>,{exit_status,1}}
** When Server state == {state,#Port<0.3586>,indexer,
                       {["[goport] 2015/11/30 19:41:18 /opt/couchbase/bin/indexer terminated: signal: killed",
                         "2015-11-30T19:41:10.707+00:00 [Info] StorageMgr::handleCreateSnapshot Skip Snapshot For MAINT_STREAM Bingo_Card SnapType NO_SNAP",
                         "2015-11-30T19:41:10.246+00:00 [Info] StorageMgr::handleCreateSnapshot Skip Snapshot For MAINT_STREAM Bingo_Card SnapType NO_SNAP",
                         "2015-11-30T19:41:09.362+00:00 [Info] logReaderStat:: MAINT_STREAM MutationCount 247060000"],
                        ["2015-11-30T19:41:08.382+00:00 [Warn] Indexer::MutationQueue Waiting for Node Alloc for 6090 Milliseconds Vbucket 1022"]},
                       indexer,undefined,[],0}
** Reason for termination ==
** {abnormal,1}

[error_logger:error,2015-11-30T19:41:18.059Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:

initial call: ns_port_server:init/1
pid: <0.131.0>
registered_name: []
exception exit: {abnormal,1}
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [<0.130.0>,<0.87.0>,ns_child_ports_sup,ns_babysitter_sup,
<0.56.0>]
messages: [{'EXIT',#Port<0.3586>,normal}]
links: [<0.130.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 196650
stack_size: 27
reductions: 160071734
neighbours:

[error_logger:error,2015-11-30T19:41:18.059Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.130.0> terminating
** Last message in was {die,{abnormal,1}}
** When Server state == {state,indexer,5000,
                       {1448,548818,1509},
                       undefined,infinity}
** Reason for termination ==
** {abnormal,1}

[error_logger:error,2015-11-30T19:41:18.059Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:

initial call: supervisor_cushion:init/1
pid: <0.130.0>
registered_name: []
exception exit: {abnormal,1}
in function gen_server:terminate/6 (gen_server.erl, line 744)
ancestors: [<0.87.0>,ns_child_ports_sup,ns_babysitter_sup,<0.56.0>]
messages: []
links: [<0.87.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 1824
neighbours:

[error_logger:error,2015-11-30T19:41:18.059Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================CRASH REPORT=========================
  crasher:

initial call: erlang:apply/2
pid: <0.87.0>
registered_name: []
exception exit: {abnormal,1}
in function restartable:loop/4 (src/restartable.erl, line 69)
ancestors: [ns_child_ports_sup,ns_babysitter_sup,<0.56.0>]
messages: []
links: [<0.69.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 27
reductions: 1989
neighbours:

[error_logger:error,2015-11-30T19:41:18.059Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================

Supervisor: {local,ns_child_ports_sup}
Context: child_terminated
Reason: {abnormal,1}
Offender: [{pid,<0.87.0>},
{name,
{indexer,"/opt/couchbase/bin/goport",[],
[use_stdio,exit_status,stderr_to_stdout,stream,
{log,"indexer.log"},
{env,
[{"GOPORT_ARGS",
"[\"/opt/couchbase/bin/indexer\",\"-vbuckets=1024\",\"-cluster=127.0.0.1:8091\",\"-adminPort=9100\",\"-scanPort=9101\",\"-httpPort=9102\",\"-streamInitPort=9103\",\"-streamCatchupPort=9104\",\"-streamMaintPort=9105\",\"-storageDir=/opt/couchbase/var/lib/couchbase/data/@2i\"]"},
{"GOTRACEBACK",[]},
{"CBAUTH_REVRPC_URL",
"http://%40:9ab273789342b2778368ff0594fbed8c@127.0.0.1:8091/index"}]}]}},
{mfargs,
{restartable,start_link,
[{supervisor_cushion,start_link,
[indexer,5000,infinity,ns_port_server,
start_link,
[#Fun<ns_child_ports_sup.2.49698737>]]},
86400000]}},
{restart_type,permanent},
{shutdown,infinity},
{child_type,worker}]

[ns_server:debug,2015-11-30T19:41:18.060Z,babysitter_of_ns_1@127.0.0.1:<0.396.0>:supervisor_cushion:init:39]starting ns_port_server with delay of 5000
[error_logger:info,2015-11-30T19:41:18.071Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
  supervisor: {local,ale_dynamic_sup}
     started: [{pid,<0.398.0>},
               {name,'sink-indexer'},
               {mfargs,
                   {ale_disk_sink,start_link,
                       ['sink-indexer',
                        "/opt/couchbase/var/lib/couchbase/logs/indexer.log",
                        [{rotation,
                             [{compress,true},
                              {size,41943040},
                              {num_files,10},
                              {buffer_size_max,52428800}]}]]}},
               {restart_type,permanent},
               {shutdown,5000},
               {child_type,worker}]

[error_logger:info,2015-11-30T19:41:18.132Z,babysitter_of_ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
  supervisor: {local,ns_child_ports_sup}
     started: [{pid,<0.395.0>},
               {name,
                   {indexer,"/opt/couchbase/bin/goport",[],
                       [use_stdio,exit_status,stderr_to_stdout,stream,
                        {log,"indexer.log"},
                        {env,
                            [{"GOPORT_ARGS",
                              "[\"/opt/couchbase/bin/indexer\",\"-vbuckets=1024\",\"-cluster=127.0.0.1:8091\",\"-adminPort=9100\",\"-scanPort=9101\",\"-httpPort=9102\",\"-streamInitPort=9103\",\"-streamCatchupPort=9104\",\"-streamMaintPort=9105\",\"-storageDir=/opt/couchbase/var/lib/couchbase/data/@2i\"]"},
                             {"GOTRACEBACK",[]},
                             {"CBAUTH_REVRPC_URL",
                              "http://%40:9ab273789342b2778368ff0594fbed8c@127.0.0.1:8091/index"}]}]}},
               {mfargs,
                   {restartable,start_link,
                       [{supervisor_cushion,start_link,
                            [indexer,5000,infinity,ns_port_server,
                             start_link,
                             [#Fun<ns_child_ports_sup.2.49698737>]]},
                        86400000]}},
               {restart_type,permanent},
               {shutdown,infinity},
               {child_type,worker}]
[ns_server:info,2015-11-30T19:41:19.458Z,babysitter_of_ns_1@127.0.0.1:<0.394.0>:ns_port_server:log:210]memcached<0.394.0>: 2015-11-30T19:41:19.257628Z WARNING (No Engine) Bucket Bingo_Card registered with low priority
memcached<0.394.0>: 2015-11-30T19:41:19.257702Z WARNING (No Engine) Spawning 4 readers, 4 writers, 1 auxIO, 1 nonIO threads
memcached<0.394.0>: 2015-11-30T19:41:19.258963Z WARNING 42: Slow CREATE_BUCKET operation on connection (127.0.0.1:50093 => 127.0.0.1:11209): 2139 ms

[ns_server:info,2015-11-30T19:41:24.176Z,babysitter_of_ns_1@127.0.0.1:<0.394.0>:ns_port_server:log:210]memcached<0.394.0>: 2015-11-30T19:41:23.975621Z WARNING (No Engine) Bucket Bingo_PlayerShout registered with low priority
memcached<0.394.0>: 2015-11-30T19:41:23.976977Z WARNING 43: Slow CREATE_BUCKET operation on connection (127.0.0.1:7119 => 127.0.0.1:11209): 6856 ms
memcached<0.394.0>: 2015-11-30T19:41:23.977100Z WARNING 42: Slow SELECT_BUCKET operation on connection (127.0.0.1:50093 => 127.0.0.1:11209): 4716 ms
memcached<0.394.0>: 2015-11-30T19:41:23.980434Z WARNING (Bingo_Card) Updated cluster configuration - first 100 bytes: '{"rev":2179,"name":"Bingo_Card","uri":"/pools/default/buckets/Bingo_Card?bucket_uuid=0e2bcc25447b444'...
memcached<0.394.0>: 2015-11-30T19:41:23.982121Z WARNING (Bingo_PlayerShout) Updated cluster configuration - first 100 bytes: '{"rev":2179,"name":"Bingo_PlayerShout","uri":"/pools/default/buckets/Bingo_PlayerShout?bucket_uuid=1'...

[ns_server:info,2015-11-30T19:41:31.135Z,babysitter_of_ns_1@127.0.0.1:<0.394.0>:ns_port_server:log:210]memcached<0.394.0>: 2015-11-30T19:41:30.934034Z WARNING (No Engine) Bucket Bingo_PlayerCards registered with low priority
memcached<0.394.0>: 2015-11-30T19:41:30.934235Z WARNING 45: Slow CREATE_BUCKET operation on connection (127.0.0.1:12406 => 127.0.0.1:11209): 6952 ms
memcached<0.394.0>: 2015-11-30T19:41:30.934322Z WARNING 52: Slow SELECT_BUCKET operation on connection (10.32.3.212:41077 => 10.32.3.212:11210): 6952 ms

[ns_server:info,2015-11-30T19:41:36.564Z,babysitter_of_ns_1@127.0.0.1:<0.394.0>:ns_port_server:log:210]memcached<0.394.0>: 2015-11-30T19:41:36.363491Z WARNING (No Engine) Bucket Bingo_Game registered with low priority
memcached<0.394.0>: 2015-11-30T19:41:36.363688Z WARNING 45: Slow SELECT_BUCKET operation on connection (127.0.0.1:12406 => 127.0.0.1:11209): 5429 ms
memcached<0.394.0>: 2015-11-30T19:41:36.363697Z WARNING 59: Slow SELECT_BUCKET operation on connection (127.0.0.1:10301 => 127.0.0.1:11209): 5428 ms
memcached<0.394.0>: 2015-11-30T19:41:36.363800Z WARNING 44: Slow CREATE_BUCKET operation on connection (127.0.0.1:54848 => 127.0.0.1:11209): 12382 ms
memcached<0.394.0>: 2015-11-30T19:41:36.365948Z WARNING (Bingo_PlayerCards) Updated cluster configuration - first 100 bytes: '{"rev":2179,"name":"Bingo_PlayerCards","uri":"/pools/default/buckets/Bingo_PlayerCards?bucket_uuid=2'...
memcached<0.394.0>: 2015-11-30T19:41:36.369875Z WARNING (Bingo_Game) Updated cluster configuration - first 100 bytes: '{"rev":2179,"name":"Bingo_Game","uri":"/pools/default/buckets/Bingo_Game?bucket_uuid=deebecc74740afb'...

And here is the dmesg output grepped for the oom-killer:

Nov 30 10:54:43 ip-10-32-3-212 kernel: [347215.081330] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 11:14:55 ip-10-32-3-212 kernel: [348426.578255] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 11:59:25 ip-10-32-3-212 kernel: [351097.227147] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 12:17:30 ip-10-32-3-212 kernel: [352182.739029] goport invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 12:38:34 ip-10-32-3-212 kernel: [353446.186443] goport invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 13:02:41 ip-10-32-3-212 kernel: [354892.651343] cbq-engine invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 13:20:42 ip-10-32-3-212 kernel: [355973.933736] cbq-engine invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 13:37:46 ip-10-32-3-212 kernel: [356997.990963] indexer invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 13:54:18 ip-10-32-3-212 kernel: [357990.577587] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 14:09:39 ip-10-32-3-212 kernel: [358911.151613] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 14:30:10 ip-10-32-3-212 kernel: [360142.359675] cbq-engine invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 14:47:47 ip-10-32-3-212 kernel: [361199.738149] ntpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 15:06:43 ip-10-32-3-212 kernel: [362334.959155] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 15:23:48 ip-10-32-3-212 kernel: [363359.707498] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 15:40:39 ip-10-32-3-212 kernel: [364370.820677] indexer invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 15:57:50 ip-10-32-3-212 kernel: [365402.080630] mc:worker 3 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 16:19:52 ip-10-32-3-212 kernel: [366724.338910] goxdcr invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 16:37:07 ip-10-32-3-212 kernel: [367758.834485] indexer invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 16:59:29 ip-10-32-3-212 kernel: [369101.652364] goxdcr invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 17:17:05 ip-10-32-3-212 kernel: [370157.829432] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 17:37:40 ip-10-32-3-212 kernel: [371391.793876] mc:writer_5 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 17:53:53 ip-10-32-3-212 kernel: [372364.801814] indexer invoked oom-killer: gfp_mask=0x10200da, order=0, oom_score_adj=0
Nov 30 18:11:47 ip-10-32-3-212 kernel: [373439.655298] goport invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 18:34:05 ip-10-32-3-212 kernel: [374776.625951] ntpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 18:51:19 ip-10-32-3-212 kernel: [375811.160988] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 19:07:15 ip-10-32-3-212 kernel: [376766.832195] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 19:07:15 ip-10-32-3-212 kernel: [376767.281656] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 19:23:14 ip-10-32-3-212 kernel: [377725.932775] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Nov 30 19:41:15 ip-10-32-3-212 kernel: [378806.800754] goport invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 19:41:15 ip-10-32-3-212 kernel: [378807.145096] goport invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 21:37:01 ip-10-32-3-212 kernel: [385753.526360] goport invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Nov 30 22:35:21 ip-10-32-3-212 kernel: [389252.891067] goxdcr invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 01:59:36 ip-10-32-3-212 kernel: [401507.487052] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Dec  1 01:59:36 ip-10-32-3-212 kernel: [401507.831900] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 02:45:37 ip-10-32-3-212 kernel: [404268.434741] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 03:10:36 ip-10-32-3-212 kernel: [405767.458390] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Dec  1 03:10:36 ip-10-32-3-212 kernel: [405767.807478] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Dec  1 03:43:59 ip-10-32-3-212 kernel: [407771.576795] indexer invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 04:13:14 ip-10-32-3-212 kernel: [409525.769219] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Dec  1 04:33:12 ip-10-32-3-212 kernel: [410724.513248] moxi invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 04:56:03 ip-10-32-3-212 kernel: [412094.179134] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Dec  1 05:17:48 ip-10-32-3-212 kernel: [413399.973475] projector invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 05:37:21 ip-10-32-3-212 kernel: [414573.129070] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Dec  1 05:59:51 ip-10-32-3-212 kernel: [415923.652576] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 06:18:57 ip-10-32-3-212 kernel: [417069.003026] mc:reader_1 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 06:37:03 ip-10-32-3-212 kernel: [418154.844607] projector invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 06:37:03 ip-10-32-3-212 kernel: [418155.184220] projector invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 06:56:56 ip-10-32-3-212 kernel: [419348.740184] moxi invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 07:14:58 ip-10-32-3-212 kernel: [420430.152622] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Dec  1 07:38:58 ip-10-32-3-212 kernel: [421869.660789] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
Dec  1 08:06:50 ip-10-32-3-212 kernel: [423541.740733] beam.smp invoked oom-killer: gfp_mask=0x10200da, order=0, oom_score_adj=0
Dec  1 08:29:06 ip-10-32-3-212 kernel: [424878.169631] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 08:51:37 ip-10-32-3-212 kernel: [426229.338850] goport invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 09:11:40 ip-10-32-3-212 kernel: [427432.588260] goxdcr invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 09:30:59 ip-10-32-3-212 kernel: [428591.236604] indexer invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 09:51:03 ip-10-32-3-212 kernel: [429794.776385] mc:nonio_9 invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 09:51:03 ip-10-32-3-212 kernel: [429795.160876] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 12:01:07 ip-10-32-3-212 kernel: [437598.820621] goport invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 12:48:26 ip-10-32-3-212 kernel: [440438.299320] df invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 13:15:38 ip-10-32-3-212 kernel: [442069.973793] indexer invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 13:45:45 ip-10-32-3-212 kernel: [443876.609278] beam.smp invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Dec  1 14:24:33 ip-10-32-3-212 kernel: [446204.968305] beam.smp invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0

BTW, I’ve been trying to run some N1QL queries now that the cluster seems to have stabilized, and surprisingly it complains that indexes (which do exist) do not exist. Could this be because they were somehow corrupted after the indexer process was killed by the oom-killer?