I was just curious whether anyone has developed a working pattern to avoid saving documents with duplicate data?
Right now, my data source is a piece of hardware that sends records with no unique ID, and sometimes the same record is sent more than once. Since there is no unique ID, I have no direct way of telling that a record is a duplicate.
That means I need to stop duplicate records right at the point they are saved, so I suspect the answer involves my livequery, or maintaining a live list of all records and searching through it before saving each item.
At the moment, I'm debating hashing each document (excluding the documentID and timestamps) and keeping the hashes of all already-persisted documents in a hash set or list. Before each save, I can check that set to see whether the same data has already been stored.
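Roughly what I have in mind, as a minimal sketch (this assumes Node/TypeScript, JSON-style documents, and hypothetical field names like `documentID`, `createdAt`, `updatedAt`; `persist()` is just a stand-in for the real database save call):

```typescript
import { createHash } from "crypto";

// In-memory set of content hashes for documents that have already been persisted.
const seenHashes = new Set<string>();

// Serialise a value with object keys sorted, so key order can't change the hash.
function canonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) return "[" + value.map(canonicalize).join(",") + "]";
  const obj = value as Record<string, unknown>;
  return (
    "{" +
    Object.keys(obj)
      .sort()
      .map((k) => JSON.stringify(k) + ":" + canonicalize(obj[k]))
      .join(",") +
    "}"
  );
}

// Hash the document content, excluding the fields that differ between
// otherwise-identical records (these field names are just examples).
function contentHash(doc: Record<string, unknown>): string {
  const { documentID, createdAt, updatedAt, ...content } = doc;
  return createHash("sha256").update(canonicalize(content)).digest("hex");
}

// Only persist the document if identical content hasn't been seen before.
async function saveIfNew(doc: Record<string, unknown>): Promise<boolean> {
  const hash = contentHash(doc);
  if (seenHashes.has(hash)) return false; // duplicate payload from the hardware, skip it
  seenHashes.add(hash);
  await persist(doc); // placeholder for the real save call
  return true;
}

// Placeholder: whatever actually writes the document to the database.
async function persist(doc: Record<string, unknown>): Promise<void> {
  // db.save(doc) ...
}
```

The `seenHashes` set could presumably be rebuilt on startup from a livequery (or one-off query) over the existing documents.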
This feels like a real hack, though, so I was wondering whether anyone else has come up with a better solution?