In Couchbase Server 2.0, most of the "background processes" used for replication, I/O, disk/file compaction, views, and other cleanup run per bucket.
So the reason the documentation recommends, as a best practice, limiting the number of buckets per cluster is really to avoid consuming too many resources on your servers.
We do not have a magic formula for the relationship between nodes and buckets, as it depends a lot on the volume of data and the type of operations you run on them (for example: many mutations or not? views or not? ....)
I usually take a different approach when I talk to developers about their projects:
- You want multiple buckets, OK, but why? Can you explain why you need more than one?
Then we see whether this "best practice" limit is actually an issue for you.
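As a practical starting point for that conversation, you can check how many buckets a cluster already has and how much RAM quota they claim via the REST endpoint `GET /pools/default/buckets` (port 8091). Below is a minimal sketch that parses a sample response and totals the quotas; the sample data is hypothetical, and the exact field layout is an assumption based on the bucket REST API, so verify it against your own cluster's output.

```python
import json

# Hypothetical sample of the JSON body returned by Couchbase's
# GET /pools/default/buckets endpoint (names and values are
# illustrative, not taken from a real cluster).
sample_response = json.dumps([
    {"name": "default",  "quota": {"ram": 536870912}},   # 512 MiB
    {"name": "sessions", "quota": {"ram": 268435456}},   # 256 MiB
])

def summarize_buckets(body):
    """Return (bucket_count, total_ram_quota_bytes) from the response body."""
    buckets = json.loads(body)
    total_ram = sum(b["quota"]["ram"] for b in buckets)
    return len(buckets), total_ram

count, ram = summarize_buckets(sample_response)
print("%d buckets, %d bytes of RAM quota" % (count, ram))
```

Against a live cluster you would fetch the body with something like `curl -u Administrator:password http://host:8091/pools/default/buckets` and feed it to the same function.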
Some interesting reading related to your question:
- Sizing 1: http://blog.couchbase.com/how-many-nodes-part-1-introduction-sizing-couchbase-server-20-cluster
- Sizing 2: http://blog.couchbase.com/how-many-nodes-part-2-sizing-couchbase-server-20-cluster