Bucket is unhealthy after node restart; memcached dropping log lines


#1

I am running Couchbase version 4.1 on Linux machines. I restarted one of the Couchbase nodes, but the node is now unhealthy because one of the buckets is unhealthy. I am getting the error below:

[ns_server:warn,2016-02-23T13:02:17.588+05:30,babysitter_of_ns_1@127.0.0.1:<0.76.0>:ns_port_server:log:215]Dropped 281 log lines from memcached
[ns_server:info,2016-02-23T13:02:19.115+05:30,babysitter_of_ns_1@127.0.0.1:<0.76.0>:ns_port_server:log:210]memcached<0.76.0>: 2016-02-23T13:02:18.914155+05:30 WARNING (pttdata) Updated cluster configuration - first 100 bytes: '{"rev":2093,"name":"test","uri":"/pools/default/buckets/test?bucket_uuid=ca0ba354a766faa84a612'…

I am also seeing the following in the crash report:

crasher:
initial call: misc:turn_into_gen_server/4
pid: <11499.21837.666>
registered_name: 'capi_set_view_manager-test'
exception throw: {file_already_opened,
"/Database/CouchIndex/@indexes/test/main_9807f8a59e2068d15554b529e4a20c29.spatial.1"}
in function couch_set_view:get_group_server/2 (/home/couchbase/jenkins/workspace/sherlock-unix/couchdb/src/couch_set_view/src/couch_set_view.erl, line 437)
in call from couch_set_view:define_group/4 (/home/couchbase/jenkins/workspace/sherlock-unix/couchdb/src/couch_set_view/src/couch_set_view.erl, line 143)
in call from timer:tc/3 (timer.erl, line 194)
in call from capi_set_view_manager:'-maybe_define_group/2-fun-1-'/3 (src/capi_set_view_manager.erl, line 305)
in call from capi_set_view_manager:maybe_define_group/2 (src/capi_set_view_manager.erl, line 305)
in call from capi_set_view_manager:'-init/1-lc$^0/1-0-'/2 (src/capi_set_view_manager.erl, line 182)
in call from capi_set_view_manager:init/1 (src/capi_set_view_manager.erl, line 182)
in call from misc:turn_into_gen_server/4 (src/misc.erl, line 608)
ancestors: [<0.14288.12>,'single_bucket_kv_sup-pttdata',ns_bucket_sup,
ns_bucket_worker_sup,ns_server_sup,ns_server_nodes_sup,
<0.153.0>,ns_server_cluster_sup,<0.88.0>]
messages: []
links: [<0.14288.12>,<11499.21797.666>]
dictionary: []
trap_exit: false
status: running
heap_size: 4185
stack_size: 27
reductions: 21262
neighbours:


#2

Any update on the above, please? I hit the same issue again. It looks like it is caused by the views; the bucket became healthy again after I deleted the views.
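For anyone hitting the same `file_already_opened` crash on a view index file: one way to delete views is to remove their design documents through the views REST endpoint (port 8092 on Couchbase 4.x). A minimal sketch, assuming a bucket named `test`, a design document named `mydesign`, and default admin credentials (all three are placeholders for your own values):

```shell
# List the design documents on the bucket first (via the cluster REST port),
# so you know which ones to remove. Host, bucket, and credentials are placeholders.
curl -s -u Administrator:password \
  http://localhost:8091/pools/default/buckets/test/ddocs

# Delete one design document (and the view indexes built from it)
# through the views/CAPI port. This is destructive; the views must be
# recreated afterwards from the UI or REST API.
curl -s -u Administrator:password -X DELETE \
  http://localhost:8092/test/_design/mydesign
```

Once the stale index files are gone and the bucket warms up healthy, the design documents can be redefined and the indexes will be rebuilt from scratch.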