Saturday, August 3, 2013

QoS Discussion continued

IntServ is an IETF standard that provides per-flow QoS. It supports specific applications such as video streaming because each flow is controlled individually. A service model describes what the network can guarantee for a flow. An application uses an interface to describe the guarantees it needs, and the network meets those guarantees by scheduling packets. The guarantees are established at the time the application is admitted.
Each stream or flow can have a different QoS - for example, best effort, predictive or differentiated services, or strong guarantees on the level of service (real-time). The set of services supported on a specific network can be viewed as a service model. Services can be selected based on performance or cost tradeoffs. Typical service models are guaranteed service, controlled load, and best effort. Guaranteed service is the strongest offering: bandwidth, delay, and jitter are controlled, and applications can expect real-time performance. Controlled load targets applications that can adapt to network conditions for short durations and can specify their traffic characteristics and bandwidth. Best effort offers no quality of service. Both guaranteed service and controlled load require that applications be denied admission if their QoS cannot be met. Once admitted, the QoS is maintained.
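The admit-or-deny decision described above can be sketched in a few lines. This is a minimal illustration, not a real router implementation; the class name, method names, and the capacity figure are all hypothetical.

```python
# Sketch of IntServ-style admission control: a new flow is admitted only
# if its requested rate fits alongside the rates already reserved.
# All names and numbers here are illustrative.

class AdmissionController:
    def __init__(self, capacity):
        self.capacity = capacity  # assumed link capacity, e.g. in Mbps
        self.reserved = 0.0       # bandwidth promised to admitted flows

    def admit(self, requested_rate):
        """Admit the flow only if its guarantee can still be met."""
        if self.reserved + requested_rate <= self.capacity:
            self.reserved += requested_rate
            return True
        return False  # denied: the QoS guarantee could not be maintained

ac = AdmissionController(100.0)
print(ac.admit(60.0))  # True  - 60 of 100 Mbps now reserved
print(ac.admit(50.0))  # False - would exceed the link capacity
```

Once a flow is admitted, its reservation stays in place, which is how the QoS is maintained for the lifetime of the call.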
Flows are set up by sessions. A session defines the QoS being requested by the receiver; this is usually defined by a rate R and is called the R-spec. The traffic characteristics are defined in terms of a token rate r and a bucket size b and are called the T-spec. Traffic characteristics are best described by a leaky (token) bucket: the rate and the bucket size together determine the flow. RSVP is the signaling protocol used to pass the R-spec and T-spec to the routers where the reservation is required.
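A token bucket with rate r and depth b can be sketched as follows. The class and the parameter values are illustrative only; real implementations live in the kernel or on the line card.

```python
# Minimal token-bucket sketch for a T-spec with token rate r (bytes/s)
# and bucket depth b (bytes). Names are illustrative, not from any API.

class TokenBucket:
    def __init__(self, r, b):
        self.r = r          # token fill rate, bytes per second
        self.b = b          # bucket depth, bytes
        self.tokens = b     # start with a full bucket
        self.last = 0.0     # time of the last conformance check, seconds

    def conforms(self, size, now):
        """True if a packet of `size` bytes conforms at time `now`."""
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(r=1000.0, b=1500.0)
print(tb.conforms(1500, now=0.0))  # True: a full bucket covers the packet
print(tb.conforms(1500, now=0.5))  # False: only ~500 tokens refilled
```

The bucket depth b bounds the burst size, while r bounds the long-term average rate - which is exactly why the pair (r, b) characterizes a flow's traffic.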
Guaranteed service uses a token bucket filter to characterize traffic and weighted fair queuing at the routers; more on both shortly. Routers admit calls based on their R-spec and T-spec and on the resources already allocated to other calls.
Now back to weighted fair queuing, which is a scheduling technique. It assigns different priorities to data flows that share the same communication link. Think of the link as broken down into a number of communication channels, each with a varying bit rate. The sharing adapts to the instantaneous traffic demands of the data streams transferred over each channel, using statistical techniques. With a link data rate of R and N simultaneous data flows, the fair-share data rate is R/N. Weights are assigned to the data flows to denote their priorities, so data flow number i gets an average rate of R * wi / (w1 + w2 + ... + wN). Notice that if the weights are fractions that add up to one, they form a probability distribution over the link rate. A flow that sends packets faster than its rate does not affect the others. As an example of weighted fair queuing, consider CDMA spread-spectrum networking, where the weights may reflect the cost of the required energy.
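The per-flow rate formula above can be checked with a short sketch; the link rate and weights below are made-up illustration values.

```python
# Sketch of the WFQ rate formula: flow i gets R * w_i / sum(w).
# Link rate and weights are made-up values for illustration.

def wfq_rates(link_rate, weights):
    total = sum(weights)
    return [link_rate * w / total for w in weights]

R = 10.0              # link rate, e.g. 10 Mbps
weights = [5, 3, 2]   # per-flow priorities
print(wfq_rates(R, weights))  # [5.0, 3.0, 2.0]
```

Only the ratios between weights matter: [5, 3, 2] and [0.5, 0.3, 0.2] yield the same split, and with equal weights every flow falls back to the fair share R/N.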
This is different from fixed sharing of links either via time division multiplexing or frequency division multiplexing. The improvement in the link utilization is referred to as the statistical multiplexing gain.
Courtesy: lecture notes from CMU and Wikipedia.
