ipc-channel-mux router support
The IPC channel multiplexing crate, ipc-channel-mux, now includes a “router”. The router provides a means of automatically forwarding messages from subreceivers to Crossbeam receivers, so that users can enjoy Crossbeam receiver features such as selection (explained below). The absence of a router blocked the adoption of the crate by Servo, so it was an important feature to support. Routing involves running a thread which receives from various subreceivers and forwards the results to Crossbeam channels. Without a separate thread, a receive on one of the Crossbeam receivers would block and, when a message became available on the subchannel, it wouldn’t be forwarded to the Crossbeam channel.

Before we explain routing further, we need to introduce a concept which may be unfamiliar to some readers. Suppose you have a set of data sources – servers, file descriptors, or, in our case, channels – which may or may not be ready to deliver data. To wait for one or more of these to become ready, one option is to poll the items in the set. But if none of the items is ready, what should you do? If you loop around and repeatedly poll the items, you’ll consume a lot of CPU. If you delay for a period of time before polling again and an item becomes ready before the period has elapsed, you won’t notice. So polling either consumes excessive CPU or reduces responsiveness. How do we balance the requirements of efficiency and responsiveness? The solution is to somehow block until at least one item is ready. That’s just what selection does.

In the context of IPC channel, this selection logic applies to a set of receivers, known as an IpcReceiverSet. An IpcReceiverSet holds a set of IPC receivers and, when requested, waits for at least one of the receivers to be ready and then returns a collection of the results from all the receivers which became ready. The purpose of routing is that users, such as Servo, can then select [1] [2] over a heterogeneous collection of IPC receivers and Crossbeam receivers.
By converting IPC receivers into Crossbeam receivers, it’s possible to use Crossbeam channel’s selection feature on a homogeneous collection of Crossbeam receivers to implement a select on the corresponding heterogeneous collection of IPC receivers and Crossbeam receivers. Routing for ipc-channel-mux has the same requirement: to convert a collection of subreceivers to Crossbeam receivers so that Crossbeam channel’s selection feature can be used on a homogeneous collection of Crossbeam receivers to implement selection on the corresponding heterogeneous collection of subreceivers and Crossbeam receivers.

Let’s look at how this is implemented. The most obvious approach was to mirror the design of IPC channel routing and implement subchannel routing in terms of sets of subreceivers known as SubReceiverSets. Receiving from a collection of subreceivers could be implemented by attempting a non-blocking receive from each subreceiver of the collection in turn and returning any results. However, there is a difficulty: if none of the subreceivers returns a result, what should happen? If we loop around and repeatedly attempt to receive from each subreceiver in the collection, we’ll consume a lot of CPU. If we delay for a certain period of time, we won’t be responsive when a subreceiver becomes ready to return a result. The solution is to somehow block until at least one of the subreceivers is ready to return a result. A SubReceiverSet does just that: it holds a set of subreceivers and, when requested, returns a collection of the results from all the receivers which became ready. This is a specific example of the advantage of selection over polling, discussed above.

Remember that the results of a subreceiver are demultiplexed from the results of an IPC receiver (provided by the ipc-channel crate). The following diagram shows how a MultiReceiver sits between an IpcReceiver and the SubReceivers served by that IpcReceiver. IPC channel already implements an IpcReceiverSet.
So a SubReceiverSet can be implemented in terms of an IpcReceiverSet containing all the IPC receivers underlying the subreceivers in the set. There are some complications, however. When a subreceiver is added to a SubReceiverSet, there may be other subreceivers with the same underlying IPC receiver which do not belong to the set, and yet the IpcReceiverSet will return messages that could demultiplex either to a subreceiver in the set or to a subreceiver not in the set. Worse than that, subreceivers with the same underlying IPC receiver may be added to distinct SubReceiverSets. So if we use an IpcReceiverSet to implement a SubReceiverSet, more than one SubReceiverSet may need to share the same IpcReceiverSet.

There is one case where Servo uses an IpcReceiverSet directly, rather than via the router. So one option would be to avoid adding the equivalent of IpcReceiverSet (i.e. SubReceiverSet) to the API of ipc-channel-mux. Then there would be at most one instance of SubReceiverSet and so some of the complications might not arise. But there’s a danger that it would be possible to encounter the same complication using the router, e.g. if some subreceivers were added to the router and other subreceivers with the same underlying IPC channel as those added to the router were used directly.

Another complication of routing is that the router thread needs to receive messages from subchannels which originate outside that thread. So subreceivers need to be moved into the thread; in Rust terms, they need to be Send. Given that some subreceivers can be moved into the thread while other subreceivers which have not been moved into the thread can share the same underlying IPC channel, subreceivers (or at least substantial parts of their implementation) need to be Sync. To avoid polling, it must essentially be possible for a select operation on a SubReceiverSet to result in a select operation on an IpcReceiverSet comprising the underlying IpcReceiver(s).

I experimented with the situation where some subreceivers were added to the router and other subreceivers with the same underlying IPC channel as those added to the router were used directly.
This resulted in liveness and/or fairness issues when the thread using a subreceiver directly competed with the router thread: both threads would attempt to issue a select on an IpcReceiverSet. The cleanest solution initially appeared to be to make both threads depend on the router to issue the select operation. This came with some restrictions, though, such as the stand-alone subreceiver not being able to receive any more messages after the router was shut down.

A radical alternative was to restructure the router API so that it would not be possible for some subreceivers to be added to the router while other subreceivers with the same underlying IPC channel as those added to the router were used directly. This may be a reasonable restriction for Servo, because receivers tend to be added to the router soon after the receiver’s channel is created. With this redesigned router API, in which subreceivers destined for routing are hidden from the API, the above liveness and fairness problems can be side-stepped.

v0.0.5 of the ipc-channel-mux crate includes the redesigned router API. v0.0.6 improves the throughput of both subchannel receives and routing. The next step is to improve the code structure, since the module has grown considerably and some parts could do with splitting into separate modules. After that, I’ll need to see whether some of the missing features relative to ipc-channel need to be added to ipc-channel-mux before it’s ready to be tried out in Servo. [3]
1. Another possibility, if some of the IPC receivers have been disconnected, is that select can return which IPC receivers have been disconnected.
2. Crossbeam selection is a little more general: it allows the user to wait for operations to complete, each of which may be a send or a receive. An arbitrary one of the completed operations is chosen and its resultant value is returned.
3. The main functional gaps in ipc-channel-mux compared to ipc-channel are shared memory transmission and non-blocking subchannel receive.