Thanks for the response, matz!
The specific use case can be found here: https://gitlab.com/os85/http-2-next/-/blob/master/lib/http/2/next/connection.rb#L737-739
HTTP/2 allows frames to be acknowledged for streams that have been closed "recently", which in this case means "within the last 15 seconds". Functionally, I'm using a hash to store them, inserting each stream at the time it's closed, and on a given event I clean up the ones that have timed out.
The first implementation traversed the whole collection on every cleanup, removing all timed-out streams; this O(n) scan surfaced as a bottleneck in some benchmarks around long-lived connections. To improve it, I switched to #drop_while: since hashes preserve insertion order, any timed-out streams are left-most, so I can stop processing the collection at the first element that hasn't timed out, visiting only the expired entries rather than the whole collection.
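A minimal sketch of that approach (names are illustrative, not the actual http-2-next internals):

```ruby
RECENTLY_CLOSED_WINDOW = 15 # seconds

@recently_closed = {} # stream id => monotonic time the stream was closed

def stream_closed(id)
  @recently_closed[id] = Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

def cleanup_recently_closed
  now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  # drop_while stops at the first entry still inside the window, so only the
  # expired (left-most) entries are visited...
  @recently_closed = @recently_closed.drop_while do |_id, closed_at|
    now - closed_at > RECENTLY_CLOSED_WINDOW
  end.to_h
  # ...but it returns a new Array of pairs, which #to_h then copies into a
  # brand-new Hash.
end
```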
This greatly improved the benchmarks, but the intermediate objects generated by the successive Enumerable#drop_while + #to_h calls became the new bottleneck, this time in temporary object allocation, which could be greatly reduced by just reusing the same collection.
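For illustration, here's one way the same cleanup could reuse the collection by mutating the hash in place (a hand-rolled sketch, not the library's code):

```ruby
def cleanup_recently_closed
  now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  # Deleting entries while iterating a Hash is allowed in Ruby (only
  # insertion is forbidden during iteration), so no intermediate Array or
  # Hash is allocated.
  @recently_closed.each do |id, closed_at|
    # Entries are in insertion order: stop at the first one still alive.
    break if now - closed_at <= RECENTLY_CLOSED_WINDOW
    @recently_closed.delete(id)
  end
end
```

This loop is essentially what a non-copying drop_while would encapsulate, without each caller having to hand-roll it.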
Can you clarify what you mean by "the order of hash is fragile anyway"? As I understand it, other stdlib data structures also rely on hash insertion order. Are there plans to remove this guarantee?
I guess I could rely on Array#drop_while!, although that'd hurt lookups.
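To spell out that trade-off (again a sketch, with the Array holding [id, timestamp] pairs):

```ruby
# A mutating drop_while! could trim expired entries off the front in place,
# but membership checks lose the hash's O(1) lookup:
@recently_closed = [] # [[id, closed_at], ...], oldest first

def recently_closed?(id)
  # Array#assoc scans until it finds a pair whose first element == id: O(n).
  !@recently_closed.assoc(id).nil?
end
```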