must also handle any contention issues that might arise if the packet is blocked from
leaving the buffer for any reason.
A controller at each input port classifies each packet by examining the header to
determine the appropriate path through the switch fabric. The controller must also
perform traffic management functions.
In one time step, the small input queue must support one write and one read operation, which is convenient since the memory access time is unlikely to impose a speed bottleneck. However, in one time step, the main buffer at each output port must support N write operations and one read operation.
Assuming an N × N switch, the switch fabric (SF) must connect N input ports
to N output ports. Only a space-division N × N switch can provide simultaneous
connectivity.
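To make the timing concrete, the following minimal sketch (a toy Python model; the 4 × 4 size and all names are assumptions, not from the text) steps an ideal output-queued switch through one time slot: the space-division fabric serves every input simultaneously, so a single output buffer may have to absorb up to N writes while serving one read.

```python
from collections import deque

N = 4  # hypothetical 4 x 4 switch

input_queues = [deque() for _ in range(N)]    # small FIFOs at the input ports
output_queues = [deque() for _ in range(N)]   # main buffers at the output ports

def time_step():
    """One time step of an ideal output-queued switch."""
    # Space-division fabric: all inputs are served in the same slot, so a
    # single output queue may have to absorb up to N writes.
    for q in input_queues:
        if q:
            packet = q.popleft()                        # one read per input queue
            output_queues[packet["dest"]].append(packet)
    # Each output port transmits at most one packet per slot.
    for q in output_queues:
        if q:
            q.popleft()                                 # one read per output queue

# Worst case: every input holds a packet for output port 0, so the buffer at
# output 0 must accept N writes (and serve one read) in this single time step.
for i in range(N):
    input_queues[i].append({"src": i, "dest": 0})
time_step()
print(len(output_queues[0]))   # N - 1 = 3 packets remain after one departure
```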
The main advantages of output queuing are
1. Distributed traffic management
2. Distributed table lookup at each input port
3. Ease of implementing QoS or differentiated services support
4. Ease of implementing distributed packet scheduling at each output port
The main disadvantages of output queuing are
1. High memory speed requirements for the output queues.
2. Difficulty of implementing data broadcast or multicast, since this further slows
down the switch by compounding the HOL problem.
3. Support of broadcast and multicast requires duplicating the same data in the
buffers associated with each destination output port.
4. The HOL problem is still present since the switch retains input queues.
The switch throughput can be increased if the switching fabric can deliver more
than one packet to any output queue per time step. This can be done by increasing
the operating speed of the switch fabric, which is known as speedup. Alternatively,
the switch fabric could be augmented with duplicate paths, or a fabric could be
chosen that inherently provides more than one link to each output port. In either
case, the output queue must be able to handle the extra traffic, either by increasing
its operating speed or by providing a separate queue for each incoming link.
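As an illustration of speedup, the sketch below (again a toy model; the speedup factor S and all names are assumptions) models a fabric that, unlike the ideal case above, can deliver only one packet to each output per pass; running the fabric S times within one line-rate slot lets up to S packets reach a contended output queue.

```python
from collections import deque

N = 4   # hypothetical 4 x 4 switch
S = 2   # hypothetical speedup factor

input_queues = [deque() for _ in range(N)]
output_queues = [deque() for _ in range(N)]

def fabric_pass():
    """One pass of a constrained fabric: each output accepts at most one packet."""
    granted = set()
    for q in input_queues:
        if q and q[0]["dest"] not in granted:
            packet = q.popleft()
            granted.add(packet["dest"])
            output_queues[packet["dest"]].append(packet)

def time_step():
    for _ in range(S):      # speedup: run the fabric S times within one slot
        fabric_pass()
    for q in output_queues:
        if q:
            q.popleft()     # each output line still sends one packet per slot

# All four inputs contend for output 0.  With S = 2, two packets reach the
# output queue in this slot; the other two remain blocked at their inputs.
for i in range(N):
    input_queues[i].append({"src": i, "dest": 0})
time_step()
print(sum(len(q) for q in input_queues))   # 2 packets still waiting at the inputs
```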
As we mentioned before, output queuing requires that each output queue be able to
support one read and N write operations in one time step. This, of course, could
become a speed bottleneck given the cycle time limitations of current memory
technologies.
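A rough back-of-the-envelope calculation makes the point; the line rate, packet size, and port count below are illustrative assumptions, not figures from the text.

```python
# Hypothetical figures: 10 Gb/s line rate, 64-byte packets, 32 x 32 switch.
line_rate_bps = 10e9
packet_bits = 64 * 8
N = 32

slot_time = packet_bits / line_rate_bps     # one packet time on the line: 51.2 ns
accesses_per_slot = N + 1                   # N writes plus one read at each output buffer
required_cycle_time = slot_time / accesses_per_slot

print(f"slot time                = {slot_time * 1e9:.1f} ns")
print(f"memory cycle time needed = {required_cycle_time * 1e9:.2f} ns")   # about 1.55 ns
```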
To achieve multicast in an output queuing switch, a packet at an input buffer must
remain there until all of the multicast destination ports have received their own
copies, which happens at different time steps. Needless to say, this leads to increased
buffer occupancy, since the packet may be blocked several times before it finally
leaves the buffer. Alternatively, the packet could make use of the multicast capability
of the switching fabric, if the fabric provides one.
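A minimal sketch of the first approach, again with assumed names, keeps the multicast packet at the head of its input buffer and records which destination ports still need a copy; the buffer is released only when that set becomes empty.

```python
from collections import deque

N = 4
input_queues = [deque() for _ in range(N)]
output_queues = [deque() for _ in range(N)]

def multicast_step(inp):
    """Deliver one copy of the head-of-line multicast packet per time step."""
    if not input_queues[inp]:
        return
    packet = input_queues[inp][0]            # packet stays at the head until done
    pending = packet["pending_dests"]
    if pending:
        dest = pending.pop()                 # serve one destination port this slot
        output_queues[dest].append({"src": packet["src"], "dest": dest})
    if not pending:
        input_queues[inp].popleft()          # all copies delivered: release the buffer

# A packet destined for ports 1, 2, and 3 occupies input buffer 0 for three slots.
input_queues[0].append({"src": 0, "pending_dests": {1, 2, 3}})
for _ in range(3):
    multicast_step(0)
print(len(input_queues[0]))   # 0: the buffer is freed only after the last copy
```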