Index nodes in cluster


#1

I’m trying to build a cluster with 2 data nodes, 2 index nodes, 1 search+query node, and 1 analytics node.
There is no issue with cluster creation: all nodes are in place and I can create a bucket. BUT:
it is impossible to create an index.
indexer.log shows errors about communication failures and broken pipes, even though all machines are on the same LAN with no firewall or anything similar in between.
I’m trying to spin this up on Debian 9.7 with Couchbase 6.0.
Interestingly, the Servers tab shows CPU and memory usage for the index nodes, but nothing about items.

Please advise.
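(For context: the log excerpt below records creation of a memory-optimized primary index on the sample bucket, i.e. roughly the following N1QL. This is a sketch reconstructed from the log fields, not necessarily the exact statement that was run:)

```sql
-- Roughly the DDL behind the log excerpt below
-- (index and bucket names taken from the log lines)
CREATE PRIMARY INDEX `beer_primary` ON `beer-sample` USING GSI;
```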

```
2019-02-06T08:53:06.723+01:00 [Info] clustMgrAgent::OnIndexCreate Notification Received for Create Index DefnId: 12350585422537354213 Name: beer_primary Using: memory_optimized Bucket: beer-sample IsPrimary: true NumReplica: 0 InstVersion: 0
    SecExprs: <ud>()</ud>
    Desc:
    PartitionScheme: SINGLE
    HashScheme: CRC32 PartitionKeys: WhereExpr: <ud>()</ud> RetainDeletedXATTR: false &{0} partitions [0]
2019-02-06T08:53:06.723+01:00 [Info] Indexer::handleCreateIndex
    InstId: 14362750607154775886
    Defn: DefnId: 12350585422537354213 Name: beer_primary Using: memory_optimized Bucket: beer-sample IsPrimary: true NumReplica: 0 InstVersion: 0
    SecExprs: <ud>()</ud>
    Desc:
    PartitionScheme: SINGLE
    HashScheme: CRC32 PartitionKeys: WhereExpr: <ud>()</ud> RetainDeletedXATTR: false
    State: INDEX_STATE_CREATED
    RState: RebalActive
    Stream: NIL_STREAM
    Version: 0
    ReplicaId: 0
    PartitionContainer: &{map[0:{0 0 [:9105]}] 1024 1 1024 SINGLE 0}
2019-02-06T08:53:06.768+01:00 [Info] Indexer::initPartnInstance Initialized Partition:
    Index: 14362750607154775886 Partition: PartitionId: 0 Endpoints: [:9105]
2019-02-06T08:53:06.797+01:00 [Info] MemDBSlice:NewMemDBSlice Created New Slice Id 0 IndexInstId 14362750607154775886 PartitionId 0 WriterThreads 16 Persistence true
2019-02-06T08:53:06.797+01:00 [Info] Indexer::initPartnInstance Initialized Slice:
    Index: 14362750607154775886 Slice: SliceId: 0 File: /opt/couchbase/var/lib/couchbase/data/@2i/beer-sample_beer_primary_14362750607154775886_0.index Index: 14362750607154775886 Partition: 0
2019-02-06T08:53:06.797+01:00 [Info] MutationMgr::handleUpdateIndexInstMap
    Message: MsgUpdateInstMap
    InstanceId: 14362750607154775886 Name: beer_primary Bucket: beer-sample State: INDEX_STATE_CREATED Stream: NIL_STREAM RState: RebalActive Version: 0 ReplicaId: 0
2019-02-06T08:53:06.797+01:00 [Info] MutationMgr::handleUpdateIndexPartnMap
    Message: MsgUpdatePartnMap
    InstanceId: 14362750607154775886 PartitionId: 0 Endpoints: [:9105]
2019-02-06T08:53:06.797+01:00 [Info] ScanCoordinator::initialize rollback times on new index inst map: map[beer-sample:1549439586797487583]
2019-02-06T08:53:06.797+01:00 [Info] ClustMgr:handleIndexMap
    Message: MsgUpdateInstMap
    InstanceId: 14362750607154775886 Name: beer_primary Bucket: beer-sample State: INDEX_STATE_CREATED Stream: NIL_STREAM RState: RebalActive Version: 0 ReplicaId: 0
2019-02-06T08:53:06.798+01:00 [Info] clustMgrAgent::OnIndexCreate Success for Create Index DefnId: 12350585422537354213 Name: beer_primary Using: memory_optimized Bucket: beer-sample IsPrimary: true NumReplica: 0 InstVersion: 0
    SecExprs: <ud>()</ud>
    Desc:
    PartitionScheme: SINGLE
    HashScheme: CRC32 PartitionKeys: WhereExpr: <ud>()</ud> RetainDeletedXATTR: false
2019-02-06T08:53:06.844+01:00 [Info] LifecycleMgr.handleCommitCreateIndex() : Create token posted for 12350585422537354213
2019-02-06T08:53:07.839+01:00 [Error] PeerPipe.doRecieve() : ecounter error when received mesasage from Peer 10.202.200.163:44700. Error = EOF. Kill Pipe.
2019-02-06T08:53:07.839+01:00 [Info] messageListener.start(): message channel closed. Remove peer ae:f:f4:c7:b1:a8:19:b2 as follower.
2019-02-06T08:53:07.871+01:00 [Info] DDLServiceMgr: connecting to node 10.202.200.162:9100
2019-02-06T08:53:07.871+01:00 [Info] MetadataProvider.WatchMetadata(): indexer 10.202.200.162:9100
2019-02-06T08:53:07.904+01:00 [Info] WatchMetadata(): successfully reach indexer at 10.202.200.162:9100.
2019-02-06T08:53:07.904+01:00 [Info] DDLServiceMgr: connecting to node 10.202.200.163:9100
2019-02-06T08:53:07.904+01:00 [Info] MetadataProvider.WatchMetadata(): indexer 10.202.200.163:9100
2019-02-06T08:53:07.944+01:00 [Info] WatchMetadata(): successfully reach indexer at 10.202.200.163:9100.
2019-02-06T08:53:07.944+01:00 [Info] MetadataProvider.CheckIndexerStatus(): adminport=10.202.200.162:9100 connected=true
2019-02-06T08:53:07.944+01:00 [Info] MetadataProvider.CheckIndexerStatus(): adminport=10.202.200.163:9100 connected=true
2019-02-06T08:53:07.944+01:00 [Info] MetadataProvider is terminated. Cleaning up …
2019-02-06T08:53:07.944+01:00 [Info] Unwatching metadata for indexer at 10.202.200.162:9100.
2019-02-06T08:53:07.944+01:00 [Info] Unwatching metadata for indexer at 10.202.200.163:9100.
2019-02-06T08:53:07.945+01:00 [Error] PeerPipe.doRecieve() : ecounter error when received mesasage from Peer 10.202.200.162:9100. Error = read tcp 10.202.200.162:46034->10.202.200.162:9100: use of closed network connection. Kill Pipe.
2019-02-06T08:53:07.945+01:00 [Error] PeerPipe.doRecieve() : ecounter error when received mesasage from Peer 10.202.200.162:46034. Error = EOF. Kill Pipe.
2019-02-06T08:53:07.945+01:00 [Error] PeerPipe.doRecieve() : ecounter error when received mesasage from Peer 10.202.200.163:9100. Error = read tcp 10.202.200.162:50786->10.202.200.163:9100: use of closed network connection. Kill Pipe.
2019-02-06T08:53:07.945+01:00 [Info] messageListener.start(): message channel closed. Remove peer c2:10:91:d0:e2:f3:85:3f as follower.
```


#2

Hi @izayniev,

Could you please collect the cluster logs and attach them here? To collect logs from the Couchbase UI, go to "Logs" => "Collect Information" => "Start Collection". Once collection is complete, the location of the collected logs is shown on the same page.

Index creation is a multi-step process, and according to the log messages you posted, the indexer process has received the request to start index creation. I want to know how far this request got. Multiple services are involved in index creation, so the complete cluster logs will be required to identify the problem.

Thanks.


#3

Hi @amit.kulkarni,
Somehow I can’t upload files here. Please find the logs from all 6 nodes here: https://drive.google.com/open?id=1TWGawoleVeTwf4yAP-9fSpXCYI_j-cXa
160, 161 - data nodes
162, 163 - index nodes
164 - search & query node
165 - analytics node

Thank you for your support.


#4

Today (08.02) I tried importing the demo bucket and recreating the index.


#5

Huge, huge thanks to the Couchbase team, especially to James Powenski.
The issue turned out to be simple: the index was created with defer_build, so it still needed to be built explicitly.
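For anyone hitting the same symptom, here is a minimal N1QL sketch of the check and the fix (my guess at the exact statements, using the index and bucket names from the logs above):

```sql
-- An index created with {"defer_build": true} shows state "deferred" here
SELECT name, state FROM system:indexes WHERE keyspace_id = "beer-sample";

-- Kick off the actual build of the deferred index
BUILD INDEX ON `beer-sample`(`beer_primary`);
```

Once the build finishes, the index should move to the `online` state and start reporting items.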