I have used Couchbase (Community Edition) in production as a page cache and session cache (accessed via the memcached protocol), and for us the sessions cluster was able to handle 8K requests per second in real time. Also, as our application was heavy on session data usage, I would estimate that 90+% of those operations were writes (I don't remember the exact number). We chose the memcached protocol because it did not require any changes at the code level: PHP has built-in support for memcached-based sessions.
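For reference, pointing PHP's built-in session handling at a memcached-compatible endpoint is just a configuration change. This is a minimal sketch assuming the php-memcached extension; the host/port is a placeholder (here the local HAProxy endpoint, matching our setup):

```ini
; php.ini — route PHP sessions over the memcached protocol
; (assumes the php-memcached extension is installed)
session.save_handler = memcached
session.save_path = "127.0.0.1:11211"
```

No application code changes are needed; `$_SESSION` reads and writes then go to the cache cluster transparently.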
Considering that, you could say that it delivered roughly 7.2K write ops per second.
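The back-of-the-envelope arithmetic behind that figure, using the numbers above:

```javascript
// ~8K requests/sec, of which ~90% were writes
const requestsPerSec = 8000;
const writeFraction = 0.9;

const writesPerSec = requestsPerSec * writeFraction;
console.log(writesPerSec); // 7200, i.e. ~7.2K write ops/sec
```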
Please also note that the data being written was session data, whose size ranged roughly from 1 KB to 4 MB.
Our sessions cluster had 3 instances, each with the following configuration:
EC2 c3.xlarge instance: 4 vCPUs (14 ECU), 7.5 GB of RAM, plus a 100 GB EBS volume
All our application machines ran a local instance of HAProxy that load-balanced across the 3 Couchbase machines, removed dead machines from rotation, and added them back when they came back to life.
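A minimal HAProxy sketch of that setup (server names and IPs are placeholders, and the health-check intervals are illustrative, not what we actually tuned):

```
listen couchbase_sessions
    bind 127.0.0.1:11211
    mode tcp
    balance roundrobin
    option tcp-check
    server cb1 10.0.0.1:11211 check inter 2s fall 3 rise 2
    server cb2 10.0.0.2:11211 check inter 2s fall 3 rise 2
    server cb3 10.0.0.3:11211 check inter 2s fall 3 rise 2
```

With `check`, HAProxy marks a node down after consecutive failed probes (`fall`) and restores it after consecutive successes (`rise`), which is what gave us the automatic removal and re-addition of nodes.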
I am saying all this because the speed of Couchbase is just awesome. It is also the easiest to scale: it takes just 2-3 commands to install and get running, and then you just attach the new node to the cluster. It is as simple and trivial as that.
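Roughly, the install-and-attach flow looks like this (a sketch for a Debian/Ubuntu box; the package file name, hosts, and credentials are all placeholders — check the Couchbase docs for your version's exact `couchbase-cli` flags):

```
# Install the server package
sudo dpkg -i couchbase-server-community_<version>_amd64.deb

# Attach the new node to an existing cluster, then rebalance
couchbase-cli server-add -c existing-node:8091 -u Administrator -p password \
    --server-add new-node:8091
couchbase-cli rebalance -c existing-node:8091 -u Administrator -p password
```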
Also, regarding your problem with the SDK: I don't think that should be a blocker, as Couchbase also provides a REST API, so it should be straightforward to write an SDK for your purpose. I would encourage you to write and publish that SDK for the wider community.
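As a taste of that REST API, the cluster's admin endpoint answers plain HTTP on port 8091 (credentials and host here are placeholders):

```
# Fetch cluster/bucket details from the admin REST API
curl -u Administrator:password http://127.0.0.1:8091/pools/default
```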
If you do not wish to write an SDK, an easier route would be to write an AWS Lambda function in Node.js and use the existing SDK there to write to the Couchbase instances. That way you won't even have to deal with scaling your write throughput, as Lambda takes care of it.
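For intuition, whatever client the Lambda uses ultimately speaks the memcached text protocol to the bucket. This is a sketch of how a `set` command is framed on the wire (the function name and key are hypothetical, not from any SDK):

```javascript
// Build a memcached text-protocol "set" command:
//   set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
function buildSetCommand(key, value, ttlSeconds) {
  const bytes = Buffer.byteLength(value); // payload length in bytes, not chars
  return `set ${key} 0 ${ttlSeconds} ${bytes}\r\n${value}\r\n`;
}

console.log(buildSetCommand("sess:abc123", '{"uid":42}', 300));
// set sess:abc123 0 300 10\r\n{"uid":42}\r\n
```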
In a similar way, you could also put Gearman in between as a job queue to work around the SDK limitation.
Hope this helps.