Index Path 100% full

Hello All,

I am currently testing indexes on the following version:
4.1.1-5914 Enterprise Edition (build-5914)

I started an index creation using the following command:

CREATE PRIMARY INDEX `#primary` ON `parse` USING GSI;

My index folder usage reached 100%:

/dev/xvdi1       99G   98G     0 100% /index

The index build progress says 70%. My bucket is 85.4 GB in size; I thought I would have plenty of room for the index, but it turns out the index needs more space than the actual data for some reason.

I am receiving the following error in the console:
Warning: We are having troubles communicating to the indexer process. The information might be stale.

Service 'indexer' exited with status 1. Restarting. Messages: net/http.(*conn).serve(0xc20b678f00)
/usr/local/go/src/net/http/server.go:1162 +0x69e
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:1751 +0x35e
[goport] 2016/06/13 19:54:54 /opt/couchbase/bin/indexer terminated: exit status 2


Service 'indexer' exited with status 1. Restarting. Messages: github.com/couchbase/indexing/secondary/indexer.(*storageMgr).createSnapshotWorker(0xc20b4f8f00, 0xc20b0a0001, 0xc20b0706f8, 0x5, 0xc20c84f420, 0xc20cc7fb60, 0x400, 0xc20cc7fb90, 0xc20cc7fbc0, 0xc20b0a8b90)
/home/couchbase/jenkins/workspace/sherlock-unix/goproj/src/github.com/couchbase/indexing/secondary/indexer/storage_manager.go:383 +0x451
created by github.com/couchbase/indexing/secondary/indexer.(*storageMgr).handleCreateSnapshot
/home/couchbase/jenkins/workspace/sherlock-unix/goproj/src/github.com/couchbase/indexing/secondary/indexer/storage_manager.go:231 +0x712
[goport] 2016/06/13 19:54:47 /opt/couchbase/bin/indexer terminated: exit status 2

What would be the solution to my space issue? I did not use LVM for the index mount on my Linux node, so I am not able to add space while the system is running, so I need advice there. Is there a way I can drop the index? The thing is that the index is not even fully created yet.
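For reference, the statement I would try for dropping it from cbq or the query workbench is the one below; I am not sure whether it works for an index that never finished building, and the keyspace `parse` is simply the bucket from my CREATE statement above.

DROP PRIMARY INDEX ON `parse` USING GSI;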

Thanks,

Steeve

Hi steevebisson

I had nearly the same issue once. My index disk was full during the rebalance.

I solved it with the following steps:

  1. Stop the rebalance (not needed in your case)
  2. Start a manual compaction of the buckets and views (see the command sketch below)
  3. Delete any views that are not necessary

With those steps I was able to clean up the index disk and finish the rebalance.
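For step 2, the compaction can also be kicked off from the command line with couchbase-cli. This is only a sketch, assuming the default port 8091, Administrator credentials, and the bucket name `parse` from the first post, so adjust as needed:

/opt/couchbase/bin/couchbase-cli bucket-compact -c localhost:8091 -u Administrator -p password --bucket parse

There are also --data-only and --view-only switches if you only want to compact one of the two.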

Maybe this helps you.

It turns out that after the automatic compaction the index size was reduced dramatically. Disk space is fine now.

I had the same issue. My data size is just 5 GB, but the overall index size grew to 54 GB. That is surprising. Compaction did not help much.

Can anyone help me understand why the index ends up that large for comparatively small data?

@yogeshkore, @mathias, @steevebisson,
Uncontrolled growth of GSI indexes is a known problem in the 4.1.x branch. 4.5.x lets you use circular logging (and brings many more improvements in the GSI implementation), which solves the problem. I would guess that two sequential manual compactions could help, but that is nothing more than a suggestion.
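For completeness, after the upgrade the circular write mode can be enabled through the auto-compaction settings in the UI, or via couchbase-cli setting-compaction. The flag below is the one I remember from newer CLI versions, so please verify it against couchbase-cli setting-compaction --help on your 4.5.x install before relying on it:

/opt/couchbase/bin/couchbase-cli setting-compaction -c localhost:8091 -u Administrator -p password --gsi-compaction-mode circular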
See also: GSI: different sporadic bugs + fragmentation up to 98%


@egrep It happened to me during a rebalance without GSI. It was just an idea that might help @steevebisson.