Service 'fts' exited with status 1. Restarting. Messages:

Hi,
The fts service keeps exiting with the error below. I wanted to remove the offending node from the cluster, but rebalance fails as well, so I am unable to take the node out of the cluster either. Please suggest how we can proceed.

Also, how can I get this offending node to listen on 0.0.0.0 for port 19130 and restart the Search service?

Service 'fts' exited with status 1. Restarting. Messages:
2020-10-06T07:33:55.779-04:00 [INFO] &{Name:field-rule_name_69ac7703dfda35c1_54820232 UUID:22e4ea65047b78fa IndexType:fulltext-index IndexName:field-rule_name IndexUUID:69ac7703dfda35c1 IndexParams:{"doc_config":{"docid_prefix_delim":"","docid_regexp":"","mode":"type_field","type_field":"type"},"mapping":{"analysis":{"analyzers":{"rule_name-FTS":{"token_filters":["rule_name_token","to_lower"],"tokenizer":"whitespace","type":"custom"}},"token_filters":{"rule_name_token":{"back":"false","max":3,"min":3,"type":"edge_ngram"}}},"default_analyzer":"rule_name-FTS","default_datetime_parser":"dateTimeOptional","default_field":"_all","default_mapping":{"dynamic":false,"enabled":false},"default_type":"_default","docvalues_dynamic":true,"index_dynamic":true,"store_dynamic":false,"type_field":"_type","types":{"rule_name":{"dynamic":false,"enabled":true}}},"store":{"indexType":"scorch"}} SourceType:couchbase SourceName:rule-config SourceUUID:adec8a323731713c9f84e901bca658c8 SourceParams:{} SourcePartitions:342,343,344,345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360,361,362,363,364,365,366,367,368,369,370,371,372,373,374,375,376,377,378,379,380,381,382,383,384,385,386,387,388,389,390,391,392,393,394,395,396,397,398,399,400,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,435,436,437,438,439,440,441,442,443,444,445,446,447,448,449,450,451,452,453,454,455,456,457,458,459,460,461,462,463,464,465,466,467,468,469,470,471,472,473,474,475,476,477,478,479,480,481,482,483,484,485,486,487,488,489,490,491,492,493,494,495,496,497,498,499,500,501,502,503,504,505,506,507,508,509,510,511,512 Nodes:map[556f931be74e9ec5622cbbb463890576:0xc0004cc080]}
2020-10-06T07:33:55.779-04:00 [INFO] janitor: pindexes to restart: 0
2020-10-06T07:33:56.652-04:00 [INFO] Using plain authentication for user @fts
2020-10-06T07:33:56.653-04:00 [INFO] Using plain authentication for user @fts
2020-10-06T07:33:56.653-04:00 [INFO] Using plain authentication for user @fts
2020-10-06T07:33:56.654-04:00 [INFO] Using plain authentication for user @fts
2020-10-06T07:33:56.654-04:00 [INFO] Using plain authentication for user @fts
2020-10-06T07:33:56.654-04:00 [INFO] audit: created new audit service
2020-10-06T07:33:56.654-04:00 [INFO] cbauth: key: fts/grpc-ssl registered for tls config updates
2020-10-06T07:33:56.655-04:00 [INFO] init_grpc: GrpcServer Started at 0.0.0.0:9130
2020-10-06T07:33:56.655-04:00 [FATA] init_grpc: mainGrpcServer, failed to listen: listen tcp 0.0.0.0:19130: bind: address already in use -- main.startGrpcServer() at init_grpc.go:139
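The [FATA] line is the real failure: the gRPC server comes up fine on 0.0.0.0:9130, but something on the node is already bound to 19130, so fts cannot start. As a quick sanity check before restarting the service, here is a plain-Python sketch (nothing Couchbase-specific; the host and port values are just the ones from this thread) that tests whether the port is free:

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Try to bind host:port; True means something already holds the socket."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False  # bind succeeded, so the port was free
        except OSError:
            return True   # bind: address already in use (same error fts hit)

if __name__ == "__main__":
    print(port_in_use("0.0.0.0", 19130))
```

On the node itself, `ss -ltnp | grep 19130` or `lsof -i :19130` will additionally show which PID is holding the socket, so you can tell whether it is a stale fts process or some other service.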

@prasanpd, did you try failover rebalance of the faulty node?

Hi @sreeks, the faulty node had an issue with iptables, which I have fixed. Now it is failing with the error below.

Rebalance exited with reason {service_rebalance_failed,fts,
{worker_died,
{'EXIT',<0.13406.5293>,
{rebalance_failed,inactivity_timeout}}}}.
Rebalance Operation Id = 6cd2e506f0b9c535cab66d396880de30.

I just triggered the rebalance.

@prasanpd, did you try Failover of the faulty node?
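Besides the UI, a hard failover can be triggered over the Couchbase REST API (POST /controller/failOver). A minimal stdlib-Python sketch, assuming Administrator credentials and an otpNode name read from /pools/default; the credentials and node name below are placeholders, not values from this thread:

```python
import base64
import urllib.parse
import urllib.request

def hard_failover(base_url: str, user: str, password: str, otp_node: str) -> int:
    """POST /controller/failOver with HTTP basic auth; returns the status code."""
    data = urllib.parse.urlencode({"otpNode": otp_node}).encode()
    req = urllib.request.Request(f"{base_url}/controller/failOver", data=data)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status

# Example call (placeholder host/credentials/otpNode):
# hard_failover("http://cluster-host:8091", "Administrator", "password",
#               "ns_1@faulty-node.example.com")
```

After the node is failed over, a rebalance should then be able to eject it; `couchbase-cli failover` wraps the same operation if you prefer the CLI.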