Do we need to close the Couchbase bucket object in Python explicitly?

I am using Couchbase with the Python SDK (Django 2.2.5). The following is the code I am using to fetch data from a Couchbase bucket.

from couchbase.cluster import Cluster
from couchbase.cluster import PasswordAuthenticator
from couchbase.n1ql import N1QLQuery

cluster = Cluster('couchbase://localhost')
authenticator = PasswordAuthenticator('username', 'password')
cluster.authenticate(authenticator)
cb = cluster.open_bucket('bucket-test')

ssql = "SELECT a.name FROM `bucket-test` a"
query = N1QLQuery(ssql)

for row in cb.n1ql_query(query):
    print(row["name"])

If the bucket has 3000 documents, then after executing the above code, will all 3000 documents be kept in memory?
If yes, do I need to close the bucket, or will the SDK do the job itself after the code finishes executing?

The SDK does not seem to provide any close method on the bucket or cluster object.

Hi @mdrdhaygude,

The N1QLRequest object which is returned by Bucket.n1ql_query() owns a ViewRequest object with an internal buffer.

The N1QLRequest.__iter__ generator calls ViewRequest.fetch, which fills the buffer with a chunk of the result from the N1QL HTTP connection, and then moves this chunk into a temporary return object, clearing the original buffer. This return object is passed back to the generator which in turn yields it to the end user, without storing it elsewhere.

Hence, only small chunks of the result should be stored by the N1QLRequest object itself, and only while ViewRequest.fetch is running; garbage collection should reclaim old internal buffer data once it is dereferenced.
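As a rough illustration only (this is not the real N1QLRequest/ViewRequest code, and fetch_chunk is a made-up stand-in for the HTTP fetch), the pattern described above looks something like this:

def streaming_rows(fetch_chunk):
    # fetch_chunk() is a hypothetical callable that returns the next list of
    # rows from the server, or an empty list once the result set is exhausted.
    while True:
        buffer = fetch_chunk()        # fill the internal buffer with one chunk
        if not buffer:
            break
        chunk, buffer = buffer, []    # hand the chunk off and clear the buffer
        for row in chunk:
            yield row                 # the caller decides whether to keep each row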

As the N1QLRequest object returned by cb.n1ql_query is no longer referenced after the last line of your example code, it should eventually be garbage collected and the internal structures deallocated.

Therefore, no more than a small amount of data should accumulate in memory, provided that garbage collection can keep up with the turnover. No documents resulting from the query should be owned or referenced by the N1QLRequest object or the Bucket, so no manual closure/deallocation should be required.
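In other words, memory stays bounded as long as your own code does not hold on to the rows. A quick sketch, reusing the cb and query names from your post (process is just a hypothetical per-row handler):

# Streaming: each row becomes garbage as soon as you move on -- bounded memory.
for row in cb.n1ql_query(query):
    process(row)

# Materialising: this keeps all 3000 rows alive until the list is released.
rows = list(cb.n1ql_query(query))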

Hope that helps,

Ellis

P.S. with respect to GC:

Regarding cleanup/closure of resources: we generally rely on garbage collection for memory, and on the destructors invoked during GC for non-memory resources, and we attempt to release ownership of resources/data as soon as possible to minimise memory usage.

Any Python object that is referenced but no longer required can be disowned by the referring variable using del, although this does not guarantee immediate destruction - it only removes that particular reference to the object.

The next GC sweep, whether automatic or manually triggered, will only call the destructor if no references to the object remain. So you also need to make sure that any unneeded data obtained from n1ql_query is no longer referenced by any application-owned structures, so that GC can free it.
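For example, if you had materialised the rows into a list, something along these lines (reusing the names above) releases them:

import gc

rows = list(cb.n1ql_query(query))  # every result row is now referenced by `rows`
# ... work with rows ...
del rows       # drop this reference; the data can be freed once nothing else refers to it
gc.collect()   # optional: force a collection cycle instead of waiting for the next sweep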