Every queued closure originates from somewhere: either an incoming message or a timed signal. Allocating a closure for every single event means a whole lot of small, repetitive heap allocations. We can reduce the churn a little by batching up messages, but that sacrifices some flexibility and latency.
But more importantly, why use closures at all? The set of things that produce a closure is small and well-defined - we can keep the priority- and deadline-based approach, but we can implement it on run-of-the-mill message queues too. We have to add a few enums to describe the messages, but that's okay. Then a fixed-size task pool can pop and process the prioritized events just like before.
Once again, I find myself bitten by trying to think too generically.