Java out of memory

This afternoon I started getting a lot of exceptions in Java on a test server where I use the Java SDK (v. 2.7.3) to connect to a Couchbase cluster (v. 6.0, Community Edition).

After doing some research I found that the errors were related to a query on records with embedded images, which means the documents average ~100KB in size. When I query these types of documents, the query runs out of memory if it returns around 600-700 documents or more.

I’m about to refine the queries to avoid this. However, I’m curious whether there is a way to adjust the server or the SDK to not hit this boundary?

Thanks in advance!

/John

@jda It is a bummer that you are running into OOM issues. Can you please log a bug for this issue?

Yep, I can do that.

What is the procedure for logging an error? - and where?

In the meantime I had to re-organize my data (split the raw images out from the documents I select) - as we migrated the entire system yesterday… :slight_smile:

Hey @jda,

Just to let you know - improvements in this area are one of the big new features of the next generation of the Java SDK, which we’re busy polishing up currently. It’s going to support backpressure, which will automatically adjust the rate at which your application receives the incoming query data to the rate it’s capable of consuming it, and prevent these kinds of OOM errors.

2 Likes

@jda you can log a bug here https://issues.couchbase.com/
You will need to create an account if you do not already have one.

Does that mean that the result of a N1QL query is not loaded in-memory anymore?

We currently work around it by letting N1QL queries return only ids (meta().id) and then building an Iterable which fetches small chunks of those ids using multi-get. That way only a small amount of data is kept in memory.
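For anyone else hitting this, the chunking half of that workaround can be sketched in plain Java. The `bucket` handle and the RxJava bulk-get shown in the comment are assumptions about a typical 2.x SDK setup, not tested code:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedFetch {

    // Split the id list returned by the N1QL query (SELECT meta().id ...)
    // into fixed-size chunks, so only one chunk of full documents needs
    // to be fetched and held in memory at a time.
    public static List<List<String>> partition(List<String> ids, int chunkSize) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(ids.subList(i, Math.min(i + chunkSize, ids.size()))));
        }
        return chunks;
    }

    // Each chunk would then be multi-fetched with the usual 2.x RxJava
    // bulk-get pattern (assumes an open `bucket`; sketch only):
    //
    //   List<JsonDocument> docs = Observable.from(chunk)
    //       .flatMap(id -> bucket.async().get(id))
    //       .toList().toBlocking().single();
}
```

The chunk size is the knob to tune: larger chunks mean fewer round trips, smaller chunks mean less peak memory for ~100KB documents.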

@synesty

Yes, that’s right. To be more explicit, we will have 3 Java API variants, one of which will provide an interface based around reactive streams from Project Reactor. This one will ensure that only small amounts of the N1QL query result are kept in memory at a time. If your app can’t keep up with the incoming data, then backpressure will ensure that we stop reading from the server until it can catch up.

Your workaround is good, but soon you will not need it anymore :slight_smile:
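For the curious, the backpressure idea itself can be demonstrated with nothing but the JDK's built-in `java.util.concurrent.Flow` API (the same reactive-streams contract Project Reactor implements). This is a minimal illustrative sketch, not Couchbase SDK code; the `BatchedSubscriber` class and batch size are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureSketch {

    // A subscriber that requests rows in small batches instead of
    // Long.MAX_VALUE, so the publisher never needs to push (and the
    // consumer never needs to buffer) the whole result set at once.
    static class BatchedSubscriber implements Flow.Subscriber<String> {
        private final int batchSize;
        private final List<String> received = new ArrayList<>();
        private final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;
        private int remainingInBatch;

        BatchedSubscriber(int batchSize) { this.batchSize = batchSize; }

        @Override public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            remainingInBatch = batchSize;
            s.request(batchSize);              // initial demand: one small batch
        }

        @Override public void onNext(String row) {
            received.add(row);                 // "process" the row
            if (--remainingInBatch == 0) {     // batch consumed: request the next one
                remainingInBatch = batchSize;
                subscription.request(batchSize);
            }
        }

        @Override public void onError(Throwable t) { done.countDown(); }
        @Override public void onComplete() { done.countDown(); }

        List<String> awaitAll() {
            try {
                done.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return received;
        }
    }

    // Push all rows through a publisher and consume them batchSize at a time.
    public static List<String> consume(List<String> rows, int batchSize) {
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        BatchedSubscriber sub = new BatchedSubscriber(batchSize);
        publisher.subscribe(sub);
        rows.forEach(publisher::submit);       // submit() blocks once the buffer fills
        publisher.close();
        return sub.awaitAll();
    }
}
```

The design point is in `onNext`/`request`: demand is signalled in small increments, so a slow consumer simply requests later and the producer's buffer stays bounded, which is exactly the property that prevents the OOM described above.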

1 Like

I have contacted the Jira admins to get an account. Will log the bug once I get that…

I haven’t reported this issue as I never heard back about creation of an account.

@jda Did you get your account?

No, which is why I wrote yesterday :slight_smile: