Backup of bucket not going to 100%


#1

When performing a backup of a particular Couchbase bucket, the progress bar does not reach 100%.

[################ ] 77.9% (76941/estimated 98812 msgs)
bucket: data1, msgs transferred...
: total | last | per sec
byte : 830630360 | 830630360 | 7348916.2
done

The backup command returns "done" and no error messages are displayed. Is this normal behaviour?
We are using Couchbase Server 4.0.0 Community Edition.


#2

Could anyone please elaborate on this issue?


#3

We are facing the same issue in Couchbase 4.1.0 CE.

The bucket has 26060149 documents; the backup ran for 10 hours to reach 66.3%.

Below is the log output:
[############# ] 66.3% (26059957/estimated 39309053 msgs)

bucket: ptxdata, msgs transferred...
: total | last | per sec
byte : 173941781498 | 173941781498 | 4777396.5
2016-10-18 00:05:37,625: mt could not find index server:0
done

Can anyone provide more detail on this?


#4

We are getting a similar error, along with an additional Python traceback. The progress bar does reach 100% for each bucket, although I have seen it stop short of 100% at times, and I am concerned that the backups are not sound. This started immediately after upgrading from 3.0.1 to 4.5.1. The output from the cbbackup command is below.

sudo /opt/couchbase/bin/cbbackup http://couchbase_server:8091 /backups/cbbackup_20170214_AV -u -p
Exception in thread s1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/opt/couchbase/lib/python/pump_bfd.py", line 614, in run
    rv, db, db_dir = self.create_db(cbb)
  File "/opt/couchbase/lib/python/pump_bfd.py", line 753, in create_db
    rv, dir = self.mkdirs()
  File "/opt/couchbase/lib/python/pump_bfd.py", line 813, in mkdirs
    if error.errno != errno.EEXIST:
NameError: global name 'errno' is not defined

[####################] 100.0% (7/estimated 7 msgs)
bucket: CAS-Services, msgs transferred...
: total | last | per sec
byte : 4811 | 4811 | 3387.6
2017-02-14 12:38:44,421: mt could not find index server:0
[####################] 100.0% (1224/estimated 1224 msgs)
bucket: CAS-Tickets, msgs transferred...
: total | last | per sec
byte : 4811 | 4811 | 2451.4
2017-02-14 12:38:46,415: mt could not find index server:0
[####################] 100.0% (26930/estimated 26930 msgs)
bucket: PFM, msgs transferred...
: total | last | per sec
byte : 540342609 | 540342609 | 15435812.9
2017-02-14 12:39:21,452: mt could not find index server:0
[####################] 100.0% (301773/estimated 301772 msgs)
bucket: PFM-EventStorage-PROD, msgs transferred...
: total | last | per sec
byte : 644922244 | 644922244 | 14995491.5
2017-02-14 12:40:04,486: mt could not find index server:0
done
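For what it's worth, the NameError in that traceback (`global name 'errno' is not defined`) suggests `pump_bfd.py` checks `error.errno != errno.EEXIST` without importing the `errno` module. Below is a sketch of the standard race-safe directory-creation pattern that check belongs to; the `mkdirs` name just mirrors the traceback, this is not the actual Couchbase source:

```python
import errno
import os


def mkdirs(path):
    """Create path (and parents), tolerating a directory that
    already exists; any other OSError is re-raised."""
    try:
        os.makedirs(path)
    except OSError as error:
        # If `import errno` is missing at module level, this line
        # raises NameError -- which is the failure in the traceback.
        if error.errno != errno.EEXIST:
            raise
    return path
```

Calling it twice on the same path succeeds, because the second attempt hits EEXIST and is swallowed.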


#5

We had a similar issue. First, kill all running cbbackup processes: they can use up all available DCP connections, so no further backups are possible (and views are not updated, no more XDCR connections, and other problems appear). We then set the maximum file size option to something like "cbb_max_mb=20000", so only one big data file is generated per node instead of many smaller ones. It looks to me like there is a bug in newer versions of cbbackup that causes a crash when the second file of a backup is created. In any case, with cbb_max_mb set higher than the backup size, it works for us.
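As a sketch of that workaround (host, credentials, and backup path below are placeholders; cbb_max_mb is passed through cbbackup's -x extra-options flag):

```shell
# Stop any stuck cbbackup processes that may be holding DCP connections
pkill -f cbbackup

# Re-run the backup with a data-file size cap large enough that only
# one file is written per node
/opt/couchbase/bin/cbbackup http://localhost:8091 /backups/full \
    -u Administrator -p password \
    -x cbb_max_mb=20000
```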