Design and Performance Analysis of Link-by-Link Congestion Avoidance Algorithm for Multiplexed Traffic in the Internet

The performance of the Internet is closely linked to the dominant TCP protocol, and the performance of TCP is in turn governed by its congestion control mechanism. With the increasing usage of the TCP protocol, worsening end-to-end delay and jitter are a serious concern for the QoS requirements of Internet communications, particularly for real-time applications. Queuing delay and jitter are closely related to congestion control and occur at the network layer; they are therefore analyzed at the network layer in this thesis. Datagrams at a router arrive from a number of multiplexed flows and constitute a stochastic process. To quantitatively analyze the effect of TCP multiplexing, the arrival and service processes for the multiplexed TCP and UDP datagrams at the congested router output are first modeled. The model accounts for the fractions of TCP and UDP datagrams (as they contend for resources) and for the arrival and service distributions of TCP and UDP with their respective datagram sizes. Mean queuing delay, average instantaneous queuing delay, and jitter are quantified using queuing theory and the arrival and service process model. The interesting observation is that multiplexing of TCP flows hurts the performance of fellow flows: the delay and jitter of a tagged flow are adversely affected by the fraction of TCP in the background traffic. The degradation of the average mean delay for TCP datagrams at a router carrying the highest proportion of TCP in the background traffic is as large as 400%. For jitter, the degradation for datagrams of a typical flow is more than 3-fold at the highest proportion of TCP in the background traffic. These conclusions hold for the Cubic, Reno, and Compound TCP flavors as well. Further, this thesis proposes a new approach, the Link-by-Link Congestion Avoidance (LbLCA) algorithm, which operates at the network layer.
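The effect described above can be illustrated with a standard single-server queuing sketch. The snippet below is not the thesis's exact model; it uses the classical Pollaczek-Khinchine mean-waiting-time formula for an M/G/1 queue, with the service time drawn as a two-point mixture of TCP and UDP datagram sizes, and all numeric parameter values (sizes, rates, link capacity) are illustrative assumptions.

```python
# Illustrative M/G/1 sketch: mean queuing delay at a router output link
# carrying multiplexed TCP and UDP datagrams. Parameter values are
# assumptions, not measurements from the thesis.

def mean_delay(lam, p_tcp, s_tcp=1500.0, s_udp=200.0, capacity=1.25e6):
    """Mean sojourn time (seconds) for Poisson arrivals at rate `lam`
    (datagrams/s), TCP fraction `p_tcp`, datagram sizes in bytes, and
    link capacity in bytes/s."""
    # Service time is a two-point mixture: s_tcp/C w.p. p_tcp, else s_udp/C.
    es = (p_tcp * s_tcp + (1 - p_tcp) * s_udp) / capacity            # E[S]
    es2 = (p_tcp * s_tcp**2 + (1 - p_tcp) * s_udp**2) / capacity**2  # E[S^2]
    rho = lam * es                                                   # utilization
    if rho >= 1:
        raise ValueError("unstable queue: rho >= 1")
    wait = lam * es2 / (2 * (1 - rho))   # Pollaczek-Khinchine mean wait
    return wait + es

# Raising the TCP fraction (larger datagrams) worsens the mean delay:
for p in (0.2, 0.5, 0.9):
    print(f"p_tcp={p}: {mean_delay(lam=800.0, p_tcp=p) * 1e3:.3f} ms")
```

Because TCP datagrams are typically much larger than UDP ones, increasing the TCP fraction raises both the utilization and the second moment of the service time, which is why delay degrades for all flows sharing the link.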
The LbLCA is a proactive congestion avoidance algorithm: it uses explicit feedback to prevent congestion from occurring in the first place. The novelty of the LbLCA is that no per-flow information is required, which makes it more scalable. Buffer sizes in the LbLCA depend on the mean arrival rate at the router input and on the outgoing link capacities, and are independent of the round-trip time (RTT) and the number of flows passing through the router. The buffer sizes determined by the LbLCA design are validated through extensive NS2 simulations. Performance evaluation has been carried out using NS2 simulations on typical network topologies. The comparison between TCP and the LbLCA reveals that the proposed LbLCA algorithm gives improved end-to-end delay and packet delivery ratio. The LbLCA is impartial to all flows, since it works at the network layer and therefore cannot differentiate between them. Furthermore, router buffer size design is a significant issue, as it is closely associated with the performance of the Internet. This thesis also addresses buffer size design in LbLCA using linear multiple regression, and shows that the buffer size for any core or edge router with multiple input/output ports can be predicted using this technique. Though the preliminary results of LbLCA are encouraging, further work is needed to establish that LbLCA is indeed a viable solution; this can be taken up as a continuing research activity.
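The buffer-size prediction idea can be sketched as an ordinary least-squares fit of buffer size on mean arrival rate and outgoing link capacity. This is a minimal sketch of the regression technique only; the training data below are synthetic placeholders, not results or coefficients from the thesis.

```python
# Hedged sketch: linear multiple regression of buffer size on mean arrival
# rate and outgoing link capacity. All observations below are synthetic.
import numpy as np

# Synthetic observations: (mean arrival rate in pkt/s, link capacity in
# Mb/s) -> designed buffer size in packets.
rates = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
caps = np.array([2.0, 8.0, 4.0, 10.0, 6.0])
buffers = np.array([56.0, 114.0, 142.0, 200.0, 228.0])

# Design matrix with an intercept column, fit by least squares.
X = np.column_stack([np.ones_like(rates), rates, caps])
coef, *_ = np.linalg.lstsq(X, buffers, rcond=None)

def predict_buffer(rate, cap):
    """Predict a buffer size (packets) for a new router port."""
    return float(coef @ np.array([1.0, rate, cap]))

print(f"predicted buffer for 500 pkt/s, 5 Mb/s port: "
      f"{predict_buffer(500.0, 5.0):.1f} packets")
```

In practice, one regression could be fitted per router class (core vs. edge) from the buffer sizes the LbLCA design produces, so that a buffer size for a new port configuration can be read off without re-running the full design.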
Supervisors: Lalit Mohan Patnaik and Sanjay Kumar Bose