Why is there a restriction on key length?

I’m using spymemcached in Java and I’ve noticed the maximum key length is 250, but this limit is enforced by an exception thrown in the client library rather than by the server, so I was hoping it might be out of date. Is this restriction actually implemented on Couchbase Server 2.0?
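
Here’s a minimal sketch of what I mean (the class name is just for illustration, and the exact exception message may differ by client version):

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class KeyLengthDemo {
        public static void main(String[] args) throws Exception {
            MemcachedClient client =
                new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));

            // Build a key longer than 250 characters.
            StringBuilder key = new StringBuilder();
            for (int i = 0; i < 300; i++) {
                key.append('k');
            }

            // The client validates the key locally and throws an
            // IllegalArgumentException before anything is sent to the server,
            // which is why I suspect the limit might just be a client-side default.
            client.set(key.toString(), 3600, "value");

            client.shutdown();
        }
    }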

Ideally I’d like to be able to use keys many times this size.

Thanks,
Marcus

The 250-character key limit is still a restriction on the server. We have this restriction because we cache the most recently used data, and a longer key name means less space to cache things.

We also would like to use keys longer than 250 characters.

A longer key name means each item takes MORE space to cache, I would think. Unless you mean there is less space available for caching, which also isn’t accurate - the space available remains the same; the same amount of space can simply hold fewer items.

I just spent a few hours yesterday discovering this key length restriction myself. I wish it had occurred to me that this might be the cause. I spent time figuring out how to get the client to output logging, and all it told me was that the socket was dead. Unfortunately, it was the very first item I tried to store. If I had happened to store an item with a key shorter than 250 characters and it worked, and then later one with a longer key that didn’t, I would have homed in on the problem much faster. As it was, the client’s logging made it look like a socket problem.

After an attempt to store an item with a key longer than 250 characters, the (.NET) client kills the socket, and for all subsequent requests the socket is simply dead. Which raises the question: why not just return an error for that one operation rather than kill the socket completely? Though I suppose if it only sporadically failed to save certain items, this would have been a frustrating intermittent bug rather than something I was forced to get to the bottom of and fix immediately.

It would be nice if the server returned some error status, instead of simply not reading bytes, which makes it look like a socket problem and causes the client to kill the socket.

Any chance that the key length could be made configurable, defaulting to the current limit but able to be explicitly increased after reading a blurb about why a lower limit is important?

When you say you cache the most recently used data, do you mean you cache just the keys of the most recently used data? Because if you cache both the key and the value, then the number of most recently used items you are able to cache is going to be much more influenced by the size of the value than by the size of the key. In which case, why not open it up to allow longer keys?

Thanks for your consideration,
-ChssAddct

  • Why do we reset the connection when the key length or value is too long?

We reset the connection because, with a key or value that is too long, it is possible to receive an error from the server before all of the bytes have actually been written by the client. That puts the client in an unknown state, and as a result we reset the connection. If your connection is simply killed, then there might be a bug.

  • Any chance that the key length could be made configurable, defaulting to the current limit but able to be explicitly increased after reading a blurb about why a lower limit is important?

Probably not in the short term. The memcached ASCII protocol states that a key cannot be longer than 250 characters. Since we want to stay compatible with the memcached protocol, this would most likely have to change in a release of the memcached server first.

  • When you say you cache the most recently used data, do you mean you cache just the keys of the most recently used data?

Two things can be cached: metadata and values. Metadata takes the key size plus 72 bytes per item, and metadata is always kept in memory. Values are the only things that can be evicted from and restored to the cache.
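
As a rough illustration of what that overhead means in practice (the item count below is just a made-up example):

    public class MetadataSizing {
        public static void main(String[] args) {
            int keyBytes = 250;          // worst-case key length
            int overheadBytes = 72;      // per-item metadata overhead quoted above
            long items = 10_000_000L;    // hypothetical item count

            long metadataBytes = items * (keyBytes + overheadBytes);
            System.out.printf("Always-resident metadata: ~%.1f GB%n",
                    metadataBytes / (1024.0 * 1024 * 1024));
            // Roughly 3.0 GB of RAM before a single value byte is cached.
        }
    }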

Also, what is your use case for needing a key longer than 250 bytes?

My use case is similar to a file system. Currently 250 characters is very restrictive, but I will have to go into production with this limitation, and if it is later removed I would like to be able to perform an upgrade on the cluster.
Thanks

The 250-character key length limit is really a bad limitation.

We have used Couchbase Server for caching on ASP.NET. We cache both objects and pages using a custom-built CacheProvider.

If you want to do page caching, keep in mind that the key has to be lengthy, as there might be a lot of query-string parameters. The same applies to object caching, although it is less likely that you will need a key longer than 250 chars.

We have come across situations where we need keys longer than 250 chars. Also, I can confirm that the bug with the socket being closed (and not reset) is still present in version 1.8.1 of the .NET client. The web app freezes and there is no further interaction between the app and the Couchbase server. This bug has to be addressed as soon as possible. Isn’t it possible for the .NET client to validate the key before sending data to Couchbase? I don’t understand why you would get data corruption or unknown states.

To summarize: yes, there is a real need for keys longer than 250 chars, especially for an enterprise-class cache server with all these bells and whistles. Make it a top priority, and until then, fix the critical .NET client bug.

Until then, I suggest hashing your keys with MD5 or SHA-256.
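
For example, in Java something like this does the trick (just a sketch; the class and method names are made up for illustration, and the same idea applies in .NET with System.Security.Cryptography):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public final class CacheKeys {

        // Hash an arbitrarily long logical key (e.g. a URL with all of its
        // query-string parameters) down to a fixed-length hex string that
        // is always well under the 250-character limit.
        public static String toCacheKey(String logicalKey) {
            try {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest(logicalKey.getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder(digest.length * 2);
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString(); // 64 hex characters
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException("SHA-256 not available", e);
            }
        }
    }

The trade-off is that the hashed key is no longer human-readable, and in theory two different logical keys could collide, though with SHA-256 that risk is negligible in practice.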