I have a use case where I store ~100M events per day, gathered from client applications. Each event, together with its metadata, takes up to ~1.2 KB, which comes to roughly 115 GB of disk space per day. The events are immutable, and the planned retention ranges from 6 months to a year; at this rate that means on the order of 21-42 TB, which is impossible for me to afford. There are also lots of indices and views defined for different needs.
For comparison, I also stored the same volume of events in Elasticsearch with its highest compression setting, and its disk usage is ~15 GB for 100M events (1.3B events ≈ 170 GB).
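For reference, here is roughly how the Elasticsearch side was set up. I'm assuming "highest compression rate" maps to `index.codec: best_compression` (the DEFLATE-based stored-fields codec); the index name `events` and the localhost URL are just placeholders:

```python
# Sketch of the Elasticsearch index used for the comparison.
# Assumptions: index.codec: best_compression is the "highest
# compression" setting meant above; "events" and the URL are
# placeholders for the real index and cluster.
import requests

requests.put(
    "http://localhost:9200/events",
    json={
        "settings": {
            # default codec is LZ4; best_compression trades CPU for disk
            "index": {"codec": "best_compression"}
        }
    },
)
```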
I know that Couchbase uses the Snappy compression library for storing documents. Is there a setting to increase its compression ratio? Do you have any suggestions besides trimming down the event documents themselves (shortening keys and values)?
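The only compression-related knob I've found so far is the per-bucket compression mode (Couchbase Server 5.5+), but as far as I can tell it only controls *when* Snappy is applied (off/passive/active), not how aggressively it compresses. A minimal sketch via the REST API, with the host, credentials, and bucket name as placeholders:

```python
# Minimal sketch: set a bucket's compression mode through the
# cluster REST API. Assumes Couchbase Server 5.5+; the "events"
# bucket, host, and credentials are placeholders.
import requests

requests.post(
    "http://localhost:8091/pools/default/buckets/events",
    auth=("Administrator", "password"),
    # valid values: off | passive | active
    data={"compressionMode": "active"},
)
```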