Index (GSI) consistency at scale



I am interested in better understanding how far the indexes may fall behind when using unbounded queries vs. request_plus queries at scale.

If I can expect most indexes to be less than 1s behind most of the time, then I think that would be very acceptable.

However, it’s hard to find information about what to expect at scale. Some of the most helpful information I’ve found is in this post:

The post seems to indicate that GSI indexes are updated every 200ms (on ForestDB). But how long can a single update operation take? For example, say I suddenly dumped 100,000 documents into one of my buckets. Up to 200ms later, the index update would begin, but how long might it take at scale? Are we talking <1s, or more like 10s?

I know this varies greatly depending on the server and many other factors, but I’m just looking for more detail into how these index updates work and how they perform.
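To make the question concrete, here is a back-of-envelope sketch of the worst-case staleness I'm asking about. The snapshot interval comes from the post above; the indexer throughput figure is entirely made up for illustration:

```python
# Back-of-envelope worst-case staleness.
# The 200ms interval is from the post; the throughput is a hypothetical
# number, not a measurement -- real values depend on hardware and workload.
snapshot_interval_s = 0.200           # ForestDB stable-snapshot interval
bulk_docs = 100_000                   # documents dumped at once
indexer_docs_per_s = 50_000           # ASSUMED indexer drain rate

# Worst case: the bulk load lands just after a snapshot, then the indexer
# must drain the whole backlog before a covering snapshot can be published.
worst_case_s = snapshot_interval_s + bulk_docs / indexer_docs_per_s
print(f"~{worst_case_s:.1f}s behind")
```

With these assumed numbers the index could be roughly 2.2s behind, which is why I'm asking what drain rates are realistic.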



@benbenwilde, a few things about how the indexing update works and impacts the query consistency:

  • GSI always has a live stream of mutations coming in from the data store. 200ms (for ForestDB) and 20ms (for Memory-Optimized Indexes in 4.5.0) are the intervals at which a stable snapshot is generated for consumption by queries. A query can only be answered from a stable snapshot.
  • If you dump 100k documents into a bucket, the indexes will catch up within a timeframe that depends on the capacity of your hardware. This timeframe is a lot shorter for Memory-Optimized Indexes (you’ll need to try it out on your hardware for exact numbers) than for disk-based ForestDB indexes. On reasonably good hardware, with 100k updates/sec to your bucket, you can achieve <50ms latency with MOI for request_plus queries.
  • unbounded queries return results from the last available stable snapshot, while request_plus queries wait until a snapshot is available that has all the data indexed up to the point when the query was issued.
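The difference between the two consistency levels can be sketched with a toy model. This is illustrative only: the class and method names are hypothetical, and publishing a snapshot stands in for the real indexer's 200ms/20ms interval:

```python
# Toy model of GSI stable-snapshot consistency (names are hypothetical,
# not the Couchbase implementation).
class ToyIndex:
    def __init__(self):
        self.last_snapshot_seq = 0   # highest seqno covered by a stable snapshot
        self.pending = []            # mutations streamed in but not yet snapshotted

    def mutate(self, seqno):
        # Live mutation stream from the data store.
        self.pending.append(seqno)

    def publish_snapshot(self):
        # Fold all pending mutations into a new stable snapshot.
        if self.pending:
            self.last_snapshot_seq = max(self.pending)
            self.pending.clear()

    def query_unbounded(self):
        # Answered immediately from the last stable snapshot (may be stale).
        return self.last_snapshot_seq

    def query_request_plus(self, at_seqno):
        # Waits until a snapshot covers every mutation up to at_seqno.
        while self.last_snapshot_seq < at_seqno:
            self.publish_snapshot()  # stand-in for waiting out the interval
        return self.last_snapshot_seq

idx = ToyIndex()
for s in range(1, 6):
    idx.mutate(s)
idx.publish_snapshot()            # snapshot now covers seqno 5
for s in range(6, 11):
    idx.mutate(s)                 # seqnos 6-10 streamed but not yet stable

print(idx.query_unbounded())      # 5: last stable snapshot, behind the stream
print(idx.query_request_plus(10)) # 10: waited for a covering snapshot
```

The unbounded query answers instantly from whatever snapshot exists; the request_plus query trades latency for seeing everything written before it was issued.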

I hope this clarifies some of your questions. Feel free to post further questions you may have.