Monday, March 2, 2015

Stripping Down the Scheduler

In my last post, I laid out a high-level client/server design based around a custom scheduler enqueuing and prioritizing closures. The basic idea is still sound, but somewhat surprisingly (at least to me) the "closure" part of this design is the least relevant one.

All the queued closures would originate from somewhere: either from incoming messages or from timed signals. If we allocate a closure for every single event, that means a whole lot of small, repetitive heap allocations. We can reduce the churn a little by batching up messages, but at the cost of some flexibility and latency.

But more importantly, why use closures at all? The set of events that would create a closure is small and well-defined, so we can keep the priority- and deadline-based approach but implement it on top of run-of-the-mill message queues. We have to add a few enums to describe the messages, but that's okay: a fixed-size task pool can then pop and process the prioritized events just like before.
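To make that concrete, here's a minimal sketch in Rust of what "enums instead of closures" might look like. The `Event` variants, the idea of collapsing priorities and deadlines onto one shared ordering key, and the `drain_in_order` helper are all my own assumptions for illustration, not the actual design; the real message set would mirror the protocol.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Hypothetical message enum -- the real variants would mirror the
// protocol's incoming messages and timer signals.
#[derive(Debug, Clone, PartialEq, Eq)]
enum Event {
    Incoming { priority: u64 },
    Timer { deadline_ms: u64 },
}

impl Event {
    // Collapse both variants onto a single ordering key; lower pops
    // first. Treating priorities and deadlines as one shared scale is
    // an assumption made to keep the sketch small.
    fn key(&self) -> u64 {
        match self {
            Event::Incoming { priority } => *priority,
            Event::Timer { deadline_ms } => *deadline_ms,
        }
    }
}

// Return event indices in the order a worker from the task pool
// would pop them off the shared priority queue.
fn drain_in_order(events: &[Event]) -> Vec<usize> {
    // Reverse turns Rust's max-heap into a min-heap keyed on (key, index).
    let mut queue: BinaryHeap<Reverse<(u64, usize)>> = BinaryHeap::new();
    for (i, e) in events.iter().enumerate() {
        queue.push(Reverse((e.key(), i)));
    }
    let mut order = Vec::new();
    while let Some(Reverse((_, i))) = queue.pop() {
        order.push(i);
    }
    order
}

fn main() {
    let events = vec![
        Event::Incoming { priority: 5 },
        Event::Timer { deadline_ms: 2 },
        Event::Incoming { priority: 9 },
    ];
    // The timer with the nearest deadline is handled first.
    println!("{:?}", drain_in_order(&events)); // prints [1, 0, 2]
}
```

No per-event heap allocation: the enum values live inline in the queue, which is exactly what the closure version couldn't offer.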

Once again, I find myself bitten by trying to think too generically.
