Autonomous Operator: real autoscaling

I'm working on Couchbase with the Autonomous Operator. As you know, we can scale the servers by editing the cluster config YAML. Is there any way (a tool, plugin, etc.) to scale the servers automatically according to CPU/RAM usage or load?

Kubernetes is running on GKE.


There isn’t an easy way to do this I’m afraid.

Kubernetes itself lets you autoscale some resource types that support it (Deployments etc.) via the Horizontal Pod Autoscaler, however this is all driven off CPU and memory metrics and assumes a homogeneous deployment.
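For contrast with the Couchbase case, this is roughly what that built-in mechanism looks like for a homogeneous workload: a standard HorizontalPodAutoscaler targeting a Deployment, scaling purely on CPU (names here are placeholders):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # a stateless, homogeneous Deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Every replica is interchangeable, which is exactly the assumption that breaks down for a multi-service Couchbase cluster.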

As you are well aware, we allow multi-dimensional scaling, so high CPU utilization on the query service doesn't mean we should blindly scale up the entire cluster. Add into the mix that storage may be the bottleneck, in which case we'd want to scale the data and index services instead.
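To make "multi-dimensional" concrete: the cluster spec already declares independent server classes, each running a subset of services with its own size, so scaling one dimension means changing just that class. A sketch (server-class names are illustrative; exact field layout depends on your Operator version):

```yaml
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  servers:
    - name: data
      size: 3          # scale this class when storage/data is the bottleneck
      services:
        - data
        - index
    - name: query
      size: 2          # scale this class when query CPU is the bottleneck
      services:
        - query
```

A naive CPU-driven autoscaler would have to decide *which* of these sizes to bump, which is why a single cluster-wide scaling signal isn't enough.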

This is a hard problem! But rest assured it is one we are thinking about how best to tackle. So in the short term there's nothing to offer you other than performing some rudimentary monitoring with Nagios/Icinga/Sensu/etc. and alerting when cluster topology changes should be performed.


Hi Simon, thanks for the reply.

I have an idea, maybe you can give me some info. I plan to use a separate server class per service, basically one pod for data, one pod for query, and so on. I haven't researched it yet, but if every service's pods carry a label, I could write a tool/plugin on Kubernetes that watches the pods for each label and updates the config YAML whenever a service's pod count needs to change.
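That watcher idea could be sketched roughly like this: select one service's pods by label, make a scaling decision, and patch the CouchbaseCluster resource so the Operator performs the actual scaling. The label key, CRD group/version, field layout, and thresholds below are all assumptions, so verify them against your Operator version before building on this:

```python
CPU_SCALE_UP_THRESHOLD = 80.0  # percent; illustrative value only


def desired_size(current_size, avg_cpu_percent,
                 threshold=CPU_SCALE_UP_THRESHOLD, max_size=10):
    """Pure scaling decision: add one pod while average CPU is over the
    threshold, capped at max_size. Scale-down is deliberately omitted,
    since removing Couchbase nodes also implies a rebalance."""
    if avg_cpu_percent > threshold and current_size < max_size:
        return current_size + 1
    return current_size


def query_pod_count(namespace):
    """Count pods carrying the (assumed) Operator service label.
    The kubernetes client is imported lazily so desired_size() stays
    unit-testable without the package installed."""
    from kubernetes import client, config
    config.load_incluster_config()  # or load_kube_config() outside a pod
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(
        namespace, label_selector="couchbase_service_query=enabled")
    return len(pods.items)


def scale_query_servers(namespace, cluster_name, new_size):
    """Patch the 'query' server class size on a CouchbaseCluster object."""
    from kubernetes import client, config
    config.load_incluster_config()
    api = client.CustomObjectsApi()
    # A merge patch replaces the whole servers list, so in practice you
    # would read-modify-write the full object rather than patch blindly.
    body = {"spec": {"servers": [{"name": "query", "size": new_size,
                                  "services": ["query"]}]}}
    api.patch_namespaced_custom_object(
        group="couchbase.com", version="v1", namespace=namespace,
        plural="couchbaseclusters", name=cluster_name, body=body)
```

Keeping the scaling decision as a pure function makes it easy to test offline, e.g. `desired_size(2, 90.0)` returns `3` while `desired_size(2, 50.0)` leaves the size at `2`.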


Good luck! It sounds like a reasonable (and simple) approach.

We actually already label the pods with the services that are enabled, so you can use that out of the box. I expect you can get the basic metrics you need from Heapster, though a solution built on the Kubernetes metrics APIs is likely more future-proof.

Thanks for the advice, Simon.
