Testing CB 4.0: strange errors in logs?

We are testing CB 4.0 and are getting errors in the logs, and indexing hangs. We then need to stop/start the CB server and flush the database with a fresh restore.

The setup is:
1 node for Data, 20 GB memory
2 buckets:
1st bucket with 3 design docs, 3 views
2nd bucket with 6 design docs, 34 views

1 node for Index, 20 GB memory
1 node for Query, 20 GB memory (no primary index defined and no N1QL is currently used)
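
For reference, this is roughly how we verify the node layout from the cluster REST API (a minimal sketch in Go; the host and credentials are placeholders for our test cluster):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Sketch: print each node's services and status from the cluster REST API.
// Host and credentials below are placeholders.
func main() {
	req, err := http.NewRequest("GET", "http://192.168.1.231:8091/pools/default", nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("Administrator", "password") // placeholder credentials

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var pool struct {
		Nodes []struct {
			Hostname string   `json:"hostname"`
			Status   string   `json:"status"`
			Services []string `json:"services"`
		} `json:"nodes"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&pool); err != nil {
		panic(err)
	}
	for _, n := range pool.Nodes {
		fmt.Println(n.Hostname, n.Status, n.Services)
	}
}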

Any ideas what we should do?

Error 1

Port server query on node 'babysitter_of_ns_1@127.0.0.1' exited with status 1. Restarting. Messages:
_time=2015-07-20T10:28:48+03:00 _level=ERROR _msg=Unable to initialize cbauth. Error CBAuth database is stale: last reason: dial tcp 127.0.0.1:8091: connection refused
_time=2015-07-20T10:28:48+03:00 _level=WARN _msg=Unable to intialize cbAuth, access to couchbase buckets may be restricted
2015/07/20 10:28:48 HTTP request returned error Get http://127.0.0.1:8091/pools: dial tcp 127.0.0.1:8091: connection refused
_time=2015-07-20T10:28:48+03:00 _level=ERROR _msg=Cannot connect url http://127.0.0.1:8091 - cause: Get http://127.0.0.1:8091/pools: dial tcp 127.0.0.1:8091: connection refused
[goport] 2015/07/20 10:28:48 /opt/couchbase/bin/cbq-engine terminated: exit status 1
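
This part looks like cbq-engine failing to reach the cluster manager on 127.0.0.1:8091 at startup. A quick sketch of the same check the log shows (the GET against /pools; the timeout value is just an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Sketch: repeat the request cbq-engine makes at startup to check whether
// the cluster manager on port 8091 is reachable at that moment.
func main() {
	client := &http.Client{Timeout: 5 * time.Second} // arbitrary timeout
	resp, err := client.Get("http://127.0.0.1:8091/pools")
	if err != nil {
		fmt.Println("cluster manager not reachable:", err) // same failure as in Error 1
		return
	}
	defer resp.Body.Close()
	fmt.Println("cluster manager reachable, status:", resp.Status)
}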

Error 2

Port server projector on node 'babysitter_of_ns_1@127.0.0.1' exited with status 1. Restarting. Messages:
github.com/couchbase/indexing/secondary/common.ExitOnStdinClose()
    /home/couchbase/jenkins/workspace/sherlock-unix/goproj/src/github.com/couchbase/indexing/secondary/common/util.go:301
created by main.main
    /home/couchbase/jenkins/workspace/sherlock-unix/goproj/src/github.com/couchbase/indexing/secondary/cmd/projector/main.go:95 +0x82c
[goport] 2015/07/20 10:28:44 /opt/couchbase/bin/projector terminated: exit status 2
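
The trace mentions common.ExitOnStdinClose, i.e. the projector shutting itself down when its stdin from the supervising babysitter process closed. Roughly, that pattern looks like this (an illustrative sketch only, not the actual Couchbase code):

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"
)

// Illustrative sketch of the pattern named in the trace: a port server
// watches stdin and exits once the supervising process closes the pipe or dies.
func exitOnStdinClose() {
	go func() {
		io.Copy(ioutil.Discard, os.Stdin) // blocks until stdin is closed
		fmt.Fprintln(os.Stderr, "stdin closed by supervisor, exiting")
		os.Exit(2)
	}()
}

func main() {
	exitOnStdinClose()
	select {} // stand-in for the projector's real work
}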

Error 3

Control connection to memcached on 'ns_1@192.168.1.233' disconnected: {{badmatch, {error, closed}},
 [{mc_client_binary, cmd_vocal_recv, 5, [{file, "src/mc_client_binary.erl"}, {line, 156}]},
  {mc_client_binary, select_bucket, 2, [{file, "src/mc_client_binary.erl"}, {line, 351}]},
  {ns_memcached, ensure_bucket, 2, [{file, "src/ns_memcached.erl"}, {line, 1291}]},
  {ns_memcached, handle_info, 2, [{file, "src/ns_memcached.erl"}, {line, 745}]},
  {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]},
  {ns_memcached, init, 1, [{file, "src/ns_memcached.erl"}, {line, 174}]},
  {gen_server, init_it, 6, [{file, "gen_server.erl"}, {line, 304}]},
  {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]}
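
Error 3 looks like ns_server losing its control connection to memcached on the data node (the socket was closed during select_bucket). As a basic check, this is roughly how we confirm the data service port is still accepting connections (11210 is the standard data port; the timeout is arbitrary):

package main

import (
	"fmt"
	"net"
	"time"
)

// Sketch: check whether the data service (memcached) on the data node is
// accepting TCP connections while the errors are occurring.
func main() {
	conn, err := net.DialTimeout("tcp", "192.168.1.233:11210", 3*time.Second)
	if err != nil {
		fmt.Println("memcached not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("memcached port 11210 is accepting connections")
}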

Full logs at:

http://wikisend.com/download/542748/collectinfo-2015-07-20T085706-ns_1@192.168.1.231.zip
http://wikisend.com/download/373942/collectinfo-2015-07-20T085706-ns_1@192.168.1.232.zip

Hi @GeorgeLeon,
Sorry for the delay! I will forward your logs to our support team.

Hi @martinesmann
No worries, whenever they have some time. I understand.
Thanks

Hi @GeorgeLeon,
From the logs we can see that there are a few issues.
The first is that the number of views (37) seems very high for a 4-core node. Depending on the view size and data size, it can be very resource-intensive to update all 37 views.
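
For example, querying a view with stale=false forces the index for that design document to be brought up to date before the response is returned, so frequent stale=false queries multiplied across 37 views create a lot of indexing work. A rough sketch (bucket, design document, and view names are placeholders):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// Sketch: a view query against the view REST port (8092).
// stale=ok returns whatever is already indexed; stale=false forces the
// design document's index to be updated first, which is far more expensive
// when there are many views. Bucket, design doc, and view names are placeholders.
func main() {
	url := "http://192.168.1.231:8092/mybucket/_design/myddoc/_view/myview?stale=ok&limit=10"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}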

Also, running performance tests etc. while in Beta will not reflect performance as it will be in the RTM release.

While we are in Beta, we also recommend running a single-node cluster.

Please let me know if this was helpful.


@martinesmann

Understood, thanks.