Backup of bucket not going to 100%

When performing a backup of a particular Couchbase bucket, the progress bar does not reach 100%.

[################ ] 77.9% (76941/estimated 98812 msgs)
bucket: data1, msgs transferred…
: total | last | per sec
byte : 830630360 | 830630360 | 7348916.2
done

The backup command returns "done" and no error messages are displayed. Is this normal behaviour?
We are using Couchbase Server 4.0.0 Community Edition.

Could anyone please elaborate on this issue?
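
One way to sanity-check a run like this is to compare the transferred message count (76941 here) with the bucket's current item count. As far as I understand, the "estimated ... msgs" figure is derived from vBucket sequence numbers, which also count deletions and expiries, so it can sit above the number of live documents. Something along these lines should work against the REST API (cb-host and Administrator:password are placeholders for your own cluster address and credentials):

curl -s -u Administrator:password http://cb-host:8091/pools/default/buckets/data1 | python -c 'import json,sys; print(json.load(sys.stdin)["basicStats"]["itemCount"])'

If the reported itemCount is close to the msgs figure that was actually transferred, the backup most likely copied everything and only the estimate is off.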

We are facing the same issue in Couchbase 4.1.0 CE.

The number of documents is 26060149, and the backup took 10 hours to reach 66.3%.

Below is the log output:
[############# ] 66.3% (26059957/estimated 39309053 msgs)

bucket: ptxdata, msgs transferred…
: total | last | per sec
byte : 173941781498 | 173941781498 | 4777396.5
2016-10-18 00:05:37,625: mt could not find index server:0
done

Can anyone please provide more detailed information on this?
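
For what it's worth, the two counts in your log line up in a telling way:

26059957 / 39309053 ≈ 0.663, i.e. the 66.3% shown on the bar
26059957 vs 26060149 documents, i.e. essentially every document was transferred

So, as far as I can tell, the transfer itself completed and only the estimate (which, as noted above, also counts deletions and expiries) is inflated; the bar stopping short of 100% while the run still ends with "done" is consistent with a good backup.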

We are getting a similar error, with an additional Python error. It does seem to reach 100% for the buckets, although I have seen it stop short of 100% at times. I am concerned that the backups aren't good. This started immediately after upgrading from 3.0.1 to 4.5.1. The output from the cbbackup command is below.

sudo /opt/couchbase/bin/cbbackup http://couchbase_server:8091 /backups/cbbackup_20170214_AV -u -p
Exception in thread s1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/opt/couchbase/lib/python/pump_bfd.py", line 614, in run
    rv, db, db_dir = self.create_db(cbb)
  File "/opt/couchbase/lib/python/pump_bfd.py", line 753, in create_db
    rv, dir = self.mkdirs()
  File "/opt/couchbase/lib/python/pump_bfd.py", line 813, in mkdirs
    if error.errno != errno.EEXIST:
NameError: global name 'errno' is not defined

[####################] 100.0% (7/estimated 7 msgs)
bucket: CAS-Services, msgs transferred…
: total | last | per sec
byte : 4811 | 4811 | 3387.6
2017-02-14 12:38:44,421: mt could not find index server:0
[####################] 100.0% (1224/estimated 1224 msgs)
bucket: CAS-Tickets, msgs transferred…
: total | last | per sec
byte : 4811 | 4811 | 2451.4
2017-02-14 12:38:46,415: mt could not find index server:0
[####################] 100.0% (26930/estimated 26930 msgs)
bucket: PFM, msgs transferred…
: total | last | per sec
byte : 540342609 | 540342609 | 15435812.9
2017-02-14 12:39:21,452: mt could not find index server:0
[####################] 100.0% (301773/estimated 301772 msgs)
bucket: PFM-EventStorage-PROD, msgs transferred…
: total | last | per sec
byte : 644922244 | 644922244 | 14995491.5
2017-02-14 12:40:04,486: mt could not find index server:0
done
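
The NameError at the end of that traceback means pump_bfd.py references errno.EEXIST in its mkdirs() error handler without the errno module being imported, so the s1 worker thread dies whenever that branch is reached instead of ignoring a harmless "directory already exists" condition. The snippet below is only an illustrative sketch of what that handler is presumably meant to do, not the actual pump_bfd.py code; the point is that the missing piece is simply the import:

import errno
import os

def make_backup_dirs(path):
    # Create the backup directory tree; a concurrent "directory already
    # exists" error is harmless and should be ignored, while any other
    # OS error is a real failure and must propagate.
    try:
        os.makedirs(path)
    except OSError as error:
        if error.errno != errno.EEXIST:
            raise

If you are stuck on 4.5.1, adding an import errno line near the top of /opt/couchbase/lib/python/pump_bfd.py should make that check behave; it may also be worth checking whether a newer build of the tools already ships with this fixed.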

We had a similar issue. First: kill all running cbbackup processes. They can exhaust the available DCP connections so that no more backups are possible (and views are not updated, no more XDCR connections, and other issues). We then changed the maximum file size option to something like "cbb_max_mb=20000", so only one big file per node is generated instead of many smaller files. It looks to me like there is a bug in newer versions of cbbackup that leads to a crash when the second file of a backup is created. In any case, with a cbb_max_mb limit higher than the backup size it works for us.
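
In case it helps anyone trying this workaround: cbbackup takes uncommon settings through its -x flag as comma-separated key=value pairs, so the invocation would look roughly like the line below (host, backup path, and credentials are placeholders; pick a cbb_max_mb value comfortably above the data size per node):

sudo /opt/couchbase/bin/cbbackup http://couchbase_server:8091 /backups/cbbackup_dir -u Administrator -p password -x cbb_max_mb=20000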

I've tried increasing the max file size, but the issue still happens. Did anyone find another workaround?

Thanks.

Hi,
Has anyone found a solution for this? I'm having the same problem, using Couchbase Server 6.0.

w0 pump (http://xxx.xxx.xxx.xxx:8091(bucket-name@xxx.xxx.xxx.xxx:8091)->/scripts/testbkp(bucket-name@xxx.xxx.xxx.xxx:8091)) done.
w0 source : http://xxx.xxx.xxx.xxx:8091(bucket-name@xxx.xxx.xxx.xxx:8091)
w0 sink : /scripts/testbkp(bucket-name@xxx.xxx.xxx.xxx:8091)
w0 : total | last | per sec
w0 batch : 213 | 213 | 4.3
w0 byte : 98920884 | 98920884 | 1996105.6
w0 msg : 16211 | 16211 | 327.1
w0 node: xxx.xxx.xxx.xxx:8091, done; rv: 0
[############ ] 62.0% (16211/estimated 26163 msgs)
bucket: bucket-name, msgs transferred…
: total | last | per sec
batch : 213 | 213 | 4.2
byte : 98920884 | 98920884 | 1957445.1
msg : 16211 | 16211 | 320.8
mt rest_request: Administrator@xxx.xxx.xxx.xxx:8091/pools/default/buckets/bucket-name/ddocs; reason: provide_design
mt Starting new HTTP connection (1): xxx.xxx.xxx.xxx
mt “GET /pools/default/nodeServices HTTP/1.1” 200 268
mt Starting new HTTP connection (1): 127.0.0.1
mt “GET /getIndexMetadata?bucket=bucket-name HTTP/1.1” 200 None
mt Starting new HTTP connection (1): xxx.xxx.xxx.xxx
mt “GET /pools/default/nodeServices HTTP/1.1” 200 268
mt Starting new HTTP connection (1): 127.0.0.1
mt “GET /api/index HTTP/1.1” 200 33
done