Inefficient, since it must scan the sliding window every time a new reading arrives, re-examining almost the same data on each scan. In addition, although consecutive sliding windows overlap, the first items of the previous window fall outside the new one, so duplicates that span the window boundary can be missed.
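The cost described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a naive duplicate filter that keeps every reading in a time-based sliding window (the name `make_window_filter` and a seconds-based window are assumptions for the example) and performs a linear scan over the whole window for each new reading.

```python
from collections import deque

def make_window_filter(window_secs):
    """Naive sliding-window duplicate filter: stores (tag_id, timestamp)
    pairs and linearly scans the whole window for every new reading."""
    window = deque()  # readings ordered by arrival time

    def is_duplicate(tag_id, now):
        # Evict readings that have fallen out of the window.
        while window and now - window[0][1] > window_secs:
            window.popleft()
        # O(window size) scan on EVERY reading: nearly the same data
        # is rescanned each time -- the inefficiency noted above.
        dup = any(tid == tag_id for tid, _ in window)
        window.append((tag_id, now))
        return dup

    return is_duplicate
```

Each call costs time proportional to the window size, so for a window of w readings, processing n readings costs O(n * w) scans overall.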
Delays processing, because to decide whether a reading is a probable duplicate it must search the doubly linked list and check whether the intersection of the time intervals corresponding to each hash function is empty, which would indicate that the tag did not arrive within the time window. Furthermore, the dynamic window setting is tied only to the object's arrival rate and not to the departure rate, even though the departure rate indicates which readings can be removed from the filter.
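The interval-intersection test described above can be sketched as follows. This is a hedged illustration of the idea, not the paper's exact data structure (the cell layout, the `make_ti_bloom` name, and the use of SHA-256 for the k hash functions are all assumptions): each Bloom filter cell stores the time interval set by the last tag that hashed to it, and a reading is flagged as a probable duplicate only if the intersection of the k intervals is non-empty and contains the current time; an empty intersection means the tag did not arrive within the window.

```python
import hashlib

def make_ti_bloom(size, num_hashes, window_secs):
    """Sketch of a time-interval Bloom filter: each cell holds the
    (start, end) interval during which a matching reading would count
    as a duplicate, or None if the cell was never set."""
    cells = [None] * size

    def positions(tag_id):
        # k hash functions simulated by salting a single hash.
        for i in range(num_hashes):
            h = hashlib.sha256(f"{i}:{tag_id}".encode()).hexdigest()
            yield int(h, 16) % size

    def is_duplicate(tag_id, now):
        lo, hi, missing = float("-inf"), float("inf"), False
        for p in positions(tag_id):
            iv = cells[p]
            if iv is None:
                missing = True
            else:
                lo, hi = max(lo, iv[0]), min(hi, iv[1])
        # Duplicate only if every cell is set AND `now` lies in the
        # non-empty intersection of the k intervals; an empty
        # intersection means the tag did not arrive within the window.
        dup = (not missing) and lo <= now <= hi
        # Record this arrival's validity interval in every cell.
        for p in positions(tag_id):
            cells[p] = (now, now + window_secs)
        return dup

    return is_duplicate
```

As with any Bloom filter, hash collisions can yield false positives; the intersection check bounds them in time but does not eliminate them.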