I do not see any issues with storing all 10 million documents in the same bucket; in fact, I would recommend saving all documents in the same bucket, because "cross-bucket" queries using views are not supported. Storing different content in each document is also possible and a very common use case.
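To illustrate what mixed document content in a single bucket can look like, here is a minimal sketch. A plain dict stands in for the bucket, and the `type` field and key prefixes are a common convention for distinguishing document kinds, not anything required by Couchbase; all names are illustrative.

```python
# A plain dict stands in for a single Couchbase bucket (key -> JSON document).
bucket = {}

# Documents of completely different shape live side by side in one bucket.
# The "type" field is an illustrative convention for telling them apart.
bucket["user::42"] = {"type": "user", "name": "Ada", "email": "ada@example.com"}
bucket["order::9001"] = {"type": "order", "user": "user::42", "total": 19.95}

# A view's map function would typically emit only the documents it cares
# about -- conceptually, a filter like this one on the "type" field:
orders = [doc for doc in bucket.values() if doc["type"] == "order"]
```

Because views run over every document in their bucket, this kind of type discriminator is how a single view picks out just the documents it is interested in.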
You can think of buckets as a logical separation of views and data: views are bound to a single bucket and are therefore only executed for documents added to the bucket where that view lives. Buckets can therefore be used to help with scaling and performance in some cases.
One hard limit you should be aware of is the maximum document size of 20 MB. Couchbase does not support documents larger than 20 MB; if your documents are larger, you need to split them into logical pieces/parts. The video I referred to explains ways to do that.
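One common way to split an oversized value is to store it as several part documents plus a small manifest that lists them. This is a hedged sketch of that idea: the `::part::N` key naming and the manifest layout are my own convention, not a Couchbase API.

```python
# Sketch: split a value that exceeds Couchbase's 20 MB document limit into
# parts, each stored under its own key, plus a manifest listing the parts.
# Key naming and manifest shape are illustrative conventions only.

MAX_PART = 20 * 1024 * 1024  # Couchbase's 20 MB per-document limit

def split_value(doc_id, data, max_part=MAX_PART):
    """Return a manifest document and a list of (key, chunk) parts."""
    chunks = [data[i:i + max_part] for i in range(0, len(data), max_part)] or [data]
    keyed = [(f"{doc_id}::part::{n}", chunk) for n, chunk in enumerate(chunks)]
    manifest = {"id": doc_id, "parts": [key for key, _ in keyed]}
    return manifest, keyed

def join_value(manifest, store):
    """Reassemble the original value by reading the parts in order."""
    return b"".join(store[key] for key in manifest["parts"])
```

In practice you would write the manifest and each part with separate key/value sets, then follow the manifest's part list on reads; the same pattern works for any per-document size cap.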
Regarding historical data storage: in the early days (I think two major versions back), Couchbase Server required all document keys to be available in memory, so storing huge amounts of historical data could exhaust memory simply because the keys alone would not fit. That is no longer the case, so storing a huge number of documents no longer requires the same amount of RAM.
Couchbase has many customers that use it for both live data and historical data analysis, with datasets well beyond 10 million documents.
Perhaps this presentation about Elasticsearch could give some inspiration as to what is possible: