Possible Memory Leak? (Windows v3.0.1)


#1

I’m using two powerful Windows Server 2012 machines.
This is still a test environment, so I have just one bucket, with 4.5 GB assigned to it on each server (9 GB total).

Now after a restart of the bucket, everything is fine.

Start of leak:

However, once I start inserting docs into the DB (between 1–4 MB each), memory seems to leak until I get:
Hard Out Of Memory Error. Bucket “horescon” on node xxx.xxx.xxx.xxx is full. All memory allocated to this bucket is used for metadata. (repeated 14 times)
At this point I have only a few hundred KB of metadata, and “User Data in RAM” reads 0.
HOWEVER, the total user data in RAM shows that pretty much all 4.5 GB on each server is being used…

After HARD OUT OF MEMORY ERROR message:

(Image taken just after the server gave me the hard error)

Has anyone seen this before, or am I doing something wrong?


#2

Note that Windows 3.0.1 is in beta at the moment.

You’re quite right, that looks like a problem. If you can help, it’d be good to run cbcollect_info both before and after you run into the error, and file an issue with the output attached.

I think cbcollect_info should work on Windows, but there have been some known issues with it. Post back here if you have trouble.


#3

Note that I filed a bare-bones issue, MB-12420, for this. If you have any additional info like that requested above, it’d be really helpful in identifying the problem. It may also be fixed already, so you can track the issue there.


#4

I’ve seen this same error on Red Hat 6.5 with the newly released 3.0.0 Enterprise Edition (build-1208). I see the same user RAM at 0, as seen in the attached screenshot.


#5

I’m investigating a similar case. It would be helpful to know whether you are seeing client-side timeouts while you are inserting the docs into the DB. If you are, can you please increase the timeout on the client and see whether the memory still leaks?

Thanks,
Patrick


#6

No, I haven’t seen timeouts; I tried setting the timeout to 3 minutes with no change.
But I have noticed that there is a difference between uploading with one thread and with several.
With locking, so that only one thread uploads at a time, I have reached 8000+ files; with parallel uploads I would get stuck at ~1500 (but the single-threaded run takes ages in return).
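The two approaches above are extremes: a global lock (one upload at a time) versus unbounded parallelism. A middle ground is to cap the number of in-flight uploads with a semaphore. A minimal sketch in Python, where `upload_doc` and the limit of 4 are hypothetical stand-ins, not part of any Couchbase SDK:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def upload_doc(key, doc):
    # Hypothetical stand-in for the real SDK "set"/"store" call;
    # here it just reports the document size.
    return len(doc)

# Allow at most N concurrent uploads instead of a single global lock,
# so throughput improves without flooding the server.
MAX_IN_FLIGHT = 4
gate = threading.Semaphore(MAX_IN_FLIGHT)

def throttled_upload(key, doc):
    with gate:  # blocks when MAX_IN_FLIGHT uploads are already running
        return upload_doc(key, doc)

def upload_all(docs):
    # The pool fans the work out; the semaphore caps actual concurrency.
    with ThreadPoolExecutor(max_workers=16) as pool:
        return list(pool.map(lambda kv: throttled_upload(*kv), docs.items()))
```

Tuning `MAX_IN_FLIGHT` between 1 and full parallelism would also help narrow down whether the leak scales with concurrency.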

One thing I noticed today is that I do get occasional “null reference” exceptions from Couchbase’s parallel observer when I’m uploading with several threads. I don’t have the PDBs, so I can’t tell you more than that at the moment.

Tomorrow I should be able to give you a cbcollect_info capture from beginning to end.
Edit:
I should probably add that I’m using the Couchbase .NET connector 1.3.9.


#7

Yes, I see it happening during inserts, but only once the server starts ejecting documents from memory; until that limit is reached, everything is fine.


#8


There are a few days in between due to many checks and tests, so I’m not sure whether they are usable, but take a look.