Couchbase java bulk load


I am reading bulk data from a Couchbase bucket and inserting it into another bucket. I am using Couchbase Java SDK version 1.4.4, and a view to read the whole data set.

I am using the set API to insert the data. After inserting the whole data set into Couchbase, I close the CouchbaseClient.

The data gets inserted into Couchbase, but as soon as it is inserted, indexing starts, which I can see from the Couchbase web console. As per the Couchbase documentation, indexing should start when we read the data, based on the stale parameter.

The other issue I am facing is that my Java application console shows lots of warnings until indexing is done:

| WARN [OperationFuture] Exception thrown wile executing com.couchbase.client.CouchbaseClient$15.operationComplete()
| java.lang.IllegalStateException: Shutting down
| at net.spy.memcached.MemcachedClient.broadcastOp( ~[spymemcached-2.11.4.jar:2.11.4]
| at net.spy.memcached.MemcachedClient.broadcastOp( ~[spymemcached-2.11.4.jar:2.11.4]
| at com.couchbase.client.CouchbaseClient.observe( ~[couchbase-client-1.4.4.jar:1.4.4]
| at com.couchbase.client.CouchbaseClient.observePoll( ~[couchbase-client-1.4.4.jar:1.4.4
| at com.couchbase.client.CouchbaseClient$15.onComplete( ~[couchbase-client-1.4.4.jar:1.4
| at com.couchbase.client.CouchbaseClient$15.onComplete( ~[couchbase-client-1.4.4.jar:1.4
| at net.spy.memcached.internal.AbstractListenableFuture$ ~[spymemcached-2.
| at java.util.concurrent.Executors$ [na:1.7.0_21]


You need to make sure all data is written before shutting the client down. Can you please share your code? If you are using the async API, use latches to orchestrate between the callbacks and the shutdown.
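The latch pattern described here can be sketched with plain `java.util.concurrent` types; in this self-contained sketch an `ExecutorService` stands in for the SDK's asynchronous completion callbacks (the class name, pool size, and timeout are illustrative, not from the thread):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BulkLoadLatch {

    // Returns true once every simulated async operation has completed.
    static boolean runBatch(int docs) throws InterruptedException {
        final CountDownLatch latch = new CountDownLatch(docs);
        ExecutorService pool = Executors.newFixedThreadPool(4); // stands in for the SDK's I/O threads

        for (int i = 0; i < docs; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    // imagine couchbaseClient.set(key, doc) completing here
                    latch.countDown(); // count down from the completion listener
                }
            });
        }

        boolean done = latch.await(30, TimeUnit.SECONDS); // block until every callback has fired
        pool.shutdown(); // only now is it safe to shut clients down
        return done;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBatch(5000) ? "all operations completed" : "timed out");
    }
}
```

The key point is that the shutdown call only happens after `await()` returns, so no operation is still in flight when the client goes away.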


Thanks daschl… I see now that it is happening because I am shutting down the Couchbase client before making sure that all data has been inserted through the asynchronous API; I will use count-down latches for that.

I also found the answer to my other query, about indexing: Couchbase has default settings for automatic index updates.
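For reference, the automatic index update behaviour can be tuned per design document via its `options` block. Assuming the server defaults are as I understand them (treat the exact keys and values as an assumption to verify against your server version), it looks roughly like:

```
{
  "options": {
    "updateInterval": 5000,
    "updateMinChanges": 5000
  }
}
```

Here `updateInterval` is the check interval in milliseconds and `updateMinChanges` is the minimum number of changed documents before an automatic index update is triggered.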



Hi daschl,

The strategy I followed for inserting bulk data through Couchbase is: I have 100,000 records. I create batches of 5,000 and insert the data asynchronously. The first few times I ran the application, it was able to insert all 100,000 records, but on the third or fourth run I see that it is not able to insert the complete data.

The code I am using for inserting the data:

final CountDownLatch latch = new CountDownLatch(listJsonObjects.size());
for (JsonObject jsonObject : listJsonObjects) {
    // I am fetching the key from the jsonObject itself, so it is just shown as "key" here
    OperationFuture<Boolean> setFuture = this.couchbaseClient.set(key, jsonObject);
    setFuture.addListener(new OperationCompletionListener() {
        public void onComplete(OperationFuture<?> future) throws Exception {
            try {
                // handle the completed operation here
            } catch (Exception e) {
                // log the failure
            } finally {
                latch.countDown(); // always count down so await() can return
            }
        }
    });
}
try {
    latch.await();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}

Each time, listJsonObjects holds 5,000 records. This code is called from a place where we build the list of 5,000 records, repeatedly, until the whole set of records has been inserted.


What you should probably do is look at the future result in the callback and see if it was successful. If it wasn’t, the status on the future will probably tell you what went wrong, and then you can adapt your code to handle that accordingly.


Thanks Michal,

I have made the suggested changes. Based on the status, I create a new array of failed documents and retry them RECURSIVELY. Failed documents are detected as below:

if (!setFuture.getStatus().isSuccess()
        && ("timed out".equals(setFuture.getStatus().getMessage())
            || "Temporary failure".equals(setFuture.getStatus().getMessage()))) {
    failedDocuments.add(jsonObject);
}

This ensures that records that failed due to ‘timed out’ or ‘Temporary failure’ are retried again and again until they are inserted. Is that correct?


With the above approach, I was able to insert 4 million records into Couchbase without any data loss. The problem I am currently facing is in reading the data: whenever I read the inserted data, I get a timeout error.
I tried with the default Couchbase client settings as well as with the settings below:

  CouchbaseConnectionFactoryBuilder cfb = new CouchbaseConnectionFactoryBuilder();
  CouchbaseConnectionFactory cf = cfb.buildCouchbaseConnection(uris, bucketName, pwd);

I am reading from a view:

  view = couchbaseClient.getView(designDocumentName, viewName);
  Query query = new Query();
  query.setStale(Stale.FALSE); // none of the Stale values worked for me
  ViewResponse result = couchbaseClient.query(view, query);
  viewIterator = result.iterator();

After inserting the data I waited for indexing to finish, but even then I was not able to read the data. The exception I am getting is as follows:
java.lang.RuntimeException: Timed out waiting for operation
at com.couchbase.client.internal.HttpFuture.get(
at com.couchbase.client.CouchbaseClient.query(

… at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(
at org.apache.cxf.jaxws.AbstractJAXWSMethodInvoker.invoke(
at org.apache.cxf.jaxws.JAXWSMethodInvoker.invoke(
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$
at java.util.concurrent.Executors$
at java.util.concurrent.FutureTask$Sync.innerRun(
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$
at org.apache.cxf.workqueue.SynchronousExecutor.execute(
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(
at org.apache.cxf.transport.servlet.ServletController.invoke(
at org.apache.cxf.transport.servlet.ServletController.invoke(
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(
at javax.servlet.http.HttpServlet.service(
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(
at …
Caused by: java.util.concurrent.TimeoutException: Timed out waiting for operation
at com.couchbase.client.internal.HttpFuture.waitForAndCheckOperation(
at com.couchbase.client.internal.ViewFuture.get(
at com.couchbase.client.internal.ViewFuture.get(
at com.couchbase.client.internal.HttpFuture.get(


The query you are performing does not set a limit, so you are trying to load all records in one batch? The old SDK does not do proper streaming, so this is more or less expected to time out.

Please use the paginator (available on the client object) to get better batched loading of your responses, or limit the query upfront. With the new SDK it should be doable since we stream the response from the server in a more efficient manner.
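A paginated read with the 1.4.x client looks roughly like the sketch below. `paginatedQuery` is the real client method, but the design document name, view name, and page size are placeholders, and this assumes a live cluster, so treat it as a sketch rather than a drop-in:

```java
View view = couchbaseClient.getView(designDocumentName, viewName);
Query query = new Query();
query.setStale(Stale.OK); // don't force an index rebuild on every read

// Fetch the view result 500 rows at a time instead of all at once.
Paginator paginator = couchbaseClient.paginatedQuery(view, query, 500);
while (paginator.hasNext()) {
    ViewResponse page = paginator.next();
    for (ViewRow row : page) {
        // process row.getId() / row.getValue()
    }
}
```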


In this context, “old” means 1.1.x through 1.4.x releases. New means 2.0.0 or later.

(for other future readers to understand)


I checked with the paginator. It works fine without any exceptions, but not as fast as expected: for 4 million records, with the page count set to 20,000, reading takes 10–15 seconds per 100,000 records.


I’d rather use smaller batch counts, since the underlying getBulk calls are synchronous and rather expensive on a single thread.
You can try smaller batch counts, and Stale.TRUE as well if that fits your requirements. If that is still too slow, I’d recommend you either hand-roll some multithreaded view querying, where batches are loaded from a thread pool in parallel, or try out the new SDK and use its asynchronous workflows.