Hi,
I ran the query in the cbq shell (without LIMIT/OFFSET) and it streamed just fine. Here are the metrics:
"status": "success",
"metrics": {
"elapsedTime": "20m45.528252049s",
"executionTime": "20m45.528123318s",
"resultCount": 302922,
"resultSize": 1992956528
}
I will test it again today to see whether it works consistently. However, this doesn’t really help me, because I want to download the results and I cannot do that in one go for such a big query. That is why we are doing the pagination.
We do the pagination in exactly the way you suggest: we set a LIMIT of 1000 and then advance the OFFSET by 1000 until the entire result set has been fetched. However, the error still occurs somewhere along the line. I have no idea what is causing it, because the pagination should actually reduce memory consumption.
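For reference, our pagination loop looks roughly like the sketch below. This is a minimal Python illustration, not our actual client code: `run_query` is a hypothetical stand-in for the real N1QL call, and the page size of 1000 matches what I described above.

```python
# Minimal sketch of the pagination loop: fetch pages of 1000 rows,
# advancing OFFSET by the page size until a short page signals the end.
# `run_query` is a hypothetical stand-in for the real N1QL client call.
PAGE_SIZE = 1000

def paginate(run_query, page_size=PAGE_SIZE):
    """Yield every row, page by page.

    `run_query(limit, offset)` must return the rows for one page,
    i.e. the equivalent of appending `LIMIT limit OFFSET offset`
    to the query.
    """
    offset = 0
    while True:
        rows = run_query(limit=page_size, offset=offset)
        yield from rows
        if len(rows) < page_size:  # last (possibly partial) page
            break
        offset += page_size

# Demonstration against a fake in-memory backend of 2500 "rows":
data = list(range(2500))
fake_query = lambda limit, offset: data[offset:offset + limit]
assert list(paginate(fake_query, page_size=1000)) == data
```

Each iteration of the loop only holds one page of 1000 rows in memory at a time, which is why I would expect it to avoid the memory problem rather than trigger it.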
I really don’t know what to do to avoid this issue.
Cheers
Mike Maik