When is the document's CAS produced, and where?


#1

I have several questions:

a) When is the document's CAS produced?

b) Is the CAS generated in the SDK or on the server?

c) What is your recommended SDK environment configuration for maximum performance?


#2

The CAS value for a document is generated on the server when the document is stored.
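To make that contract concrete, here is a minimal plain-Java sketch of how server-assigned CAS behaves. This is not the Couchbase implementation; the `CasStore` class and its method names are hypothetical, and only the contract (a fresh CAS per mutation, optimistic replace against the last-seen CAS) mirrors the answer above.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical in-memory model of the CAS contract; the real CAS
// is assigned by the Couchbase server when the document is stored.
class CasStore {
    private final AtomicLong casSource = new AtomicLong();
    private final ConcurrentHashMap<String, Long> casById = new ConcurrentHashMap<>();

    // Every store assigns a fresh CAS and returns it to the client,
    // which is why the SDK only sees the CAS after the write completes.
    long insert(String id) {
        long cas = casSource.incrementAndGet();
        casById.put(id, cas);
        return cas;
    }

    // An optimistic replace only succeeds if the caller still holds
    // the CAS the server last handed out for this document.
    boolean replace(String id, long expectedCas) {
        Long current = casById.get(id);
        if (current == null || current != expectedCas) {
            return false; // another mutation happened in between
        }
        casById.put(id, casSource.incrementAndGet());
        return true;
    }
}
```

So the CAS the SDK hands back (e.g. via `doc.cas()`) is simply the server-assigned token for the last mutation the server saw.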


#3

When I store a value and check the log of the CAS values returned from the server, I find that the item count is wrong.

I do not know why.


#4

I’m not sure I understand your question. What exactly do you do, what do you see and what do you expect to happen?


#5

Hi,

Default environment configuration.

I store 1,000 data items into Couchbase. I check the log and the number of printed records matches, but the item count in the admin console is wrong.


#6

How many items do you see in the web console?


#7

I see that it is 1000.


#8

client.async().insert(doc, ReplicateTo.ONE)
      .single()
      .subscribe(new Subscriber<Document<?>>() {

          @Override
          public void onCompleted() {
              cdl.countDown();
          }

          @Override
          public void onError(Throwable e) {
              returnException.set(e);
              cdl.countDown();
              System.out.println("onerror===" + e.getMessage());
          }

          @Override
          public void onNext(Document<?> t) {
              if (t.cas() > 0) {
                  System.out.println("success" + t.cas() + t.id());
              }
              returnValue.set(t);
              //completeVal.set(1);
              //cdl.countDown();
          }
      });

try {
    cdl.await();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new RuntimeException("Interrupted while waiting for subscription to complete.", e);
}

The line 'System.out.println("success" + t.cas() + t.id());' is printed 1,000 times in total, but the item count is only 999 in the admin console.
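One way to cross-check a 1000-vs-999 discrepancy from the client side is to count successes and errors in separate atomic counters instead of scanning the log. A plain-Java bookkeeping sketch (the callback names mimic the RxJava subscriber above, but this is not SDK code; `WriteTally` is a hypothetical helper):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Bookkeeping sketch: one latch count per operation, plus separate
// success/error counters so totals can be reconciled at the end.
class WriteTally {
    final AtomicInteger successes = new AtomicInteger();
    final AtomicInteger errors = new AtomicInteger();
    final CountDownLatch done;

    WriteTally(int expectedOps) {
        done = new CountDownLatch(expectedOps);
    }

    // Call from onNext when cas() > 0 (the server acknowledged the write).
    void onSuccess() {
        successes.incrementAndGet();
    }

    // Call from onError; the operation is finished, so count it down.
    void onFailure(Throwable t) {
        errors.incrementAndGet();
        done.countDown();
    }

    // Call from onCompleted (the success path's terminal event).
    void onDone() {
        done.countDown();
    }
}
```

After `done.await()`, `successes + errors` should equal the expected total; if the console then shows fewer items than `successes`, the gap is on the server side rather than in the client loop.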


#9

retryStrategy: FailFast
Replicas: 2 copies

source code:
bucket.async().insert(doc, PersistTo.ONE)
      .single()
      .subscribe(new Subscriber<Document<?>>() {

          @Override
          public void onCompleted() {
              cdl.countDown();
          }

          @Override
          public void onError(Throwable e) {
              returnException.set(e);
              cdl.countDown();
              System.out.println("onerror===" + e.getMessage());
          }

          @Override
          public void onNext(Document<?> t) {
              if (t.cas() > 0) {
                  System.out.println("success" + t.cas() + t.id());
              }
              returnValue.set(t);
              //completeVal.set(1);
              //cdl.countDown();
          }
      });

try {
    cdl.await();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new RuntimeException("Interrupted while waiting for subscription to complete.", e);
}

Then I kill one of the four nodes. After auto-failover finishes, the exception 'com.couchbase.client.java.error.DurabilityException: Durability requirement failed: Replica number 2 not available for bucket price_datapool' keeps occurring, even though I can see that the cluster has been re-established successfully with three nodes.

Why?


#10

@xiger When you fail over, you have one less replica in your cluster until you rebalance it back into a clean state. The operation may have succeeded, but since a replica was not available, your durability requirement failed, and this is why we print that exception to you. Once your cluster is rebalanced, it should not come up anymore.
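The practical consequence is that the durability requirement has to fit what the cluster can currently satisfy. A hedged plain-Java sketch of stepping the requirement down when the cluster temporarily has fewer replicas than requested (all names here are illustrative; this is not the SDK API, which handles retry via its own `RetryStrategy` machinery):

```java
import java.util.function.IntPredicate;

// Sketch: degrade the replication requirement when fewer replicas
// are available than desired (e.g. after failover, before rebalance).
class DurabilityFallback {

    // tryInsertWithReplicas stands in for an insert attempt with a given
    // replica requirement; it returns true on success, false on a
    // durability-style failure. We try the strictest requirement first
    // and step down instead of giving up outright.
    static int insertWithFallback(IntPredicate tryInsertWithReplicas, int desiredReplicas) {
        for (int replicas = desiredReplicas; replicas >= 0; replicas--) {
            if (tryInsertWithReplicas.test(replicas)) {
                return replicas; // the requirement that actually succeeded
            }
        }
        throw new IllegalStateException("insert failed even with no durability requirement");
    }
}
```

For example, after a failover that leaves only one reachable replica, `insertWithFallback(op, 2)` would succeed once it steps down to a requirement of one replica.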


#11

figure: (screenshot omitted)

But the log shows:
asdfasdf===>start
onerror===Could not dispatch request, cancelling instead of retrying.
39610 [cb-io-1-2] WARN com.couchbase.client.core.endpoint.Endpoint - [slave3.hadoop.com/192.168.103.134:8092][ViewEndpoint]: Could not connect to endpoint, retrying with delay 4096 MILLISECONDS:
java.net.ConnectException: Connection refused: no further information: slave3.hadoop.com/192.168.103.134:8092
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at com.couchbase.client.deps.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:208)
at com.couchbase.client.deps.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at com.couchbase.client.deps.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at com.couchbase.client.deps.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at com.couchbase.client.deps.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:619)
39625 [cb-computations-2] INFO com.couchbase.client.core.node.Node - Disconnected from Node slave3.hadoop.com
test===>retry
asdfasdf===>start
onerror===Durability requirement failed: Replica number 2 not available for bucket price_datapool
test===>end
asdfasdf===>start
onerror===Durability requirement failed: Replica number 2 not available for bucket price_datapool
test===>end
asdfasdf===>start
onerror===Durability requirement failed: Replica number 2 not available for bucket price_datapool
test===>end
asdfasdf===>start
onerror===Durability requirement failed: Replica number 2 not available for bucket price_datapool
test===>end
asdfasdf===>start
onerror===Durability requirement failed: Replica number 2 not available for bucket price_datapool
test===>end
asdfasdf===>start
success160054210018611Price_85646339
test===>end
asdfasdf===>start
onerror===Durability requirement failed: Replica number 2 not available for bucket price_datapool
test===>end
asdfasdf===>start
onerror===Durability requirement failed: Replica number 2 not available for bucket price_datapool
test===>end
.
.
.
.
until the program exits


#12

Is this cluster not clean?


#13

Hi,

Due to the reasons above: at midnight one of the nodes goes down and no one is available to do the rebalance. The fatal problem is that, in this situation, the data can be lost unless the client retries permanently.
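For the "nobody can rebalance at midnight" scenario, the client-side mitigation hinted at here is a persistent retry with backoff rather than FailFast. A self-contained sketch (plain Java; `retryWithBackoff` and its parameters are hypothetical, and the real SDK would express this through its retry strategy or the observable's `retryWhen`):

```java
import java.util.concurrent.Callable;

class PersistentRetry {

    // Retry `op` until it succeeds or `maxAttempts` is exhausted,
    // sleeping with capped exponential backoff between attempts so a
    // transiently degraded cluster gets time to recover.
    static <T> T retryWithBackoff(Callable<T> op, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                long delay = Math.min(baseDelayMs << attempt, 5000L); // cap at 5s
                Thread.sleep(delay);
            }
        }
        if (last == null) {
            throw new IllegalArgumentException("maxAttempts must be > 0");
        }
        throw last; // all attempts failed: surface the last error
    }
}
```

A write wrapped this way keeps retrying through a failover window instead of surfacing the first DurabilityException and dropping the item.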


#14

Hi,
I have a surprising problem. The source code is above, with asynchronous writing and durability PersistTo.ONE.

start writing 1000 items:
insert ===>start
success: casId:1732515897200159; docId:Price_85646069
touch’s result: true
insert===>end
.
.
.

insert ===>start
success: casId:1732515904932821; docId:Price_85646084
touch’s result: true
insert===>end

But I found the item count (999) was wrong, and finally I found that the document (casId:1732515904932821; docId:Price_85646084) was missing in the admin web console.

Why?
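When a write is acknowledged with a CAS but the document later cannot be found, one way to narrow down where it disappeared is to verify each acknowledged write by reading the document back afterwards and collecting the ids that have gone missing. A plain-Java sketch against a hypothetical lookup (not the SDK API):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

class WriteVerifier {

    // Given the ids the client believes it wrote (each acknowledged with
    // a CAS) and a lookup that checks whether the server can still return
    // the document, collect every id that was acknowledged but is missing.
    static Set<String> findMissing(List<String> acknowledgedIds,
                                   Predicate<String> existsOnServer) {
        Set<String> missing = new HashSet<>();
        for (String id : acknowledgedIds) {
            if (!existsOnServer.test(id)) {
                missing.add(id);
            }
        }
        return missing;
    }
}
```

Running such a check right after the batch completes would show whether Price_85646084 was ever readable, which separates a write that never landed from one that was lost later.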