PhD Theses (Computer Science and Engineering)
Browsing PhD Theses (Computer Science and Engineering) by Issue Date
Now showing 1 - 20 of 115
Item Euclidean Distance Transform and its Applications: Algorithms and Cellular Architectures (1999) Sudha, N
The present thesis embodies an in-depth study of the Euclidean Distance Transform (EDT) of a binary image and its applications in the areas of image processing and computer vision. The main emphasis is on designing fast, parallel algorithms that are ideally suited to VLSI implementation, specifically on cellular architectures...

Item Fast and efficient non-parametric classification and clustering methods for large data sets (2009) Babu, V Suresh
Pattern classification and clustering are two prominent pattern recognition tasks applied in various domains. Non-parametric methods are those which do not assume any model or distributional form for the data. Hence these methods are more general and can give better results provided the data set is large. The nearest neighbor classifier (NNC) and its variants, like the k-nearest neighbor classifier (k-NNC), are popular non-parametric classifiers. They show good performance and have asymptotic behavior comparable to that of the Bayes classifier. Among clustering methods, DBSCAN (Density Based Spatial Clustering of Applications with Noise) uses a density estimated non-parametrically at each point in order to derive density based clusters. DBSCAN can find arbitrarily shaped clusters (unlike methods such as k-means clustering) along with detecting noisy outliers...

Item Quality of Service Issues in Mobile Ad-hoc Networks (2009) Sarma, Nityananda
Multihop wireless networks have emerged recently as an evolution of wireless network technology and are likely to be an integral part of future communication environments. In this context, support for Quality of Service (QoS) is becoming an inherent necessity rather than an additional feature of the network.
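The k-nearest neighbor classifier (k-NNC) summarised in the classification abstract above admits a very small sketch. The hypothetical `knn_classify` below uses plain Euclidean distance and majority voting; the thesis's own variants are more refined:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (Euclidean distance), as in a basic k-NNC."""
    # Sort (point, label) pairs by distance to the query, keep the k nearest.
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

A larger k smooths the decision boundary at the cost of locality, which is why the abstract stresses that non-parametric methods need large data sets.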
Due to the lack of centralized control, the highly dynamic topology, and variable and limited shared resources, traditional approaches for supporting QoS in the Internet are not applicable to mobile ad hoc networks (MANETs). Given that quality of service provisioning in MANETs is extremely challenging and is a multi-layer problem, this thesis takes a holistic view of the QoS issue by identifying the key components of an overall MANET QoS framework. We take minimum throughput and maximum delay as the applications' QoS requirements to be supported by the QoS mechanisms developed in this dissertation. We begin by developing two dynamic priority based QoS-aware MAC protocols which are based on the legacy IEEE 802.11 DCF and hence can be integrated into existing systems without much difficulty. Our first scheme, called Priority based QoS-aware MAC Protocol (PQAMP), is designed to achieve an enhanced level of service differentiation to provide QoS for real-time traffic while maximizing network utilization in a multihop environment.

Item I/O Efficient Algorithms for Matrix Computations (2009) Mohanty, Sraban Kumar
Algorithms for large data sets, unlike in-core algorithms, have to keep the bulk of their data in secondary memory, which typically is much slower than main memory. In designing these out-of-core algorithms, the goal is therefore to minimise the number of I/Os executed. The literature is rich in efficient out-of-core algorithms for matrix computation, but very few of them are designed on the external memory model of Aggarwal and Vitter, and as such quantify their performance in terms of the number of I/Os performed. This thesis makes some contributions in that direction. We analyse some QR decomposition algorithms, and show that the I/O complexity of the tile based algorithm is asymptotically the same as that of matrix multiplication.
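To illustrate the tile based organisation mentioned above: in an out-of-core computation each tile is read and written as a unit, so the loop structure below determines how many block transfers occur. This sketch is hypothetical and purely in-memory; it only shows the tiling pattern, not the thesis's actual out-of-core implementation:

```python
def tiled_matmul(A, B, t):
    """Multiply square matrices A and B (lists of lists) tile by tile.
    In an out-of-core setting each t-by-t tile would be one unit of I/O;
    here the tiling is only simulated in memory."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, t):
        for j0 in range(0, n, t):
            for k0 in range(0, n, t):
                # One tile-tile product: C[i0:i0+t, j0:j0+t] += A_tile @ B_tile
                for i in range(i0, min(i0 + t, n)):
                    for j in range(j0, min(j0 + t, n)):
                        s = 0.0
                        for k in range(k0, min(k0 + t, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] += s
    return C
```

Choosing t so that one tile exactly fills main memory minimises the number of tile loads, which matches the tile-size result stated in the abstract.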
This algorithm, we show, performs best when the tile size is chosen so that exactly one tile fits in the main memory. We propose a constant factor improvement, as well as a new recursive cache oblivious algorithm with the same asymptotic I/O complexity. The traditional unblocked and blocked Hessenberg, tridiagonal, and bidiagonal reductions are not I/O efficient because vector-matrix operations dominate their performance. We design Hessenberg, tridiagonal, and bidiagonal reductions that use banded intermediate forms and perform only asymptotically optimal numbers of I/Os; these are the first I/O optimal algorithms for these problems. In particular, we show that known slab based algorithms for two sided reductions all have suboptimal asymptotic I/O performance, even though they have been reported to do better than the traditional algorithms on the basis of empirical evidence. We propose new tile based variants of the multishift QR and QZ algorithms that, under certain conditions on the number of shifts, have better seek and I/O complexities than all known variants. We show that techniques like rescheduling of computational steps, appropriate choice of the blocking parameters, and incorporation of more matrix-matrix operations can be used to improve the I/O and seek complexities of matrix computations...

Item Delaunay Triangulation based Spanners for MANET (2009) Satyanarayana, D.
Many position based routing protocols use the Unit Disk Graph (UDG) as the underlying network topology for routing. Due to the large number of edges in the UDG, these protocols suffer from channel contention overhead, frequent packet collisions, and heavy resource consumption. To overcome these problems, many researchers have proposed local topology control algorithms that retain only a linear number of links in the underlying network graph based on geometric neighborhood. These graphs are called geometric spanners.
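Of the spanners discussed in the abstract above, the Gabriel graph has the simplest membership test: an edge (u, v) survives only if no third point lies strictly inside the circle with diameter uv. A brute-force sketch for illustration only (quadratic per edge; the thesis works with localized, distributed constructions):

```python
import math
from itertools import combinations

def gabriel_graph(points):
    """Return index pairs (i, j) forming Gabriel graph edges: kept only
    if no third point w satisfies |pw|^2 + |qw|^2 < |pq|^2, i.e. lies
    strictly inside the circle with diameter pq."""
    edges = []
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        d2 = math.dist(p, q) ** 2
        if all(math.dist(p, w) ** 2 + math.dist(q, w) ** 2 >= d2
               for k, w in enumerate(points) if k not in (i, j)):
            edges.append((i, j))
    return edges
```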
In this thesis, we study these spanners under various network requirements such as fewer transmissions, frequent node failure, mobility, and fault tolerance. Geometric spanners like the planarized local Delaunay triangulation (PLDel), the relative neighborhood graph (RNG), and the Gabriel graph (GG), which are based on neighborhood properties, contain shorter edges. Because of these shorter edges, the number of transmissions between source and destination increases, which in turn increases the end-to-end packet delay and jitter. We present three planar constraint based geometric graphs, called constrained local Delaunay triangulation (CDT), constrained relative neighborhood graph (CRNG), and constrained Gabriel graph (CGG), to reduce the number of hops by introducing longer constraint edges.

In ad hoc networks, nodes can go down for various reasons, such as insufficient battery power, environmental effects like volcanic eruptions, cyclones, and floods, and accidents like landslides and debris. Moreover, to conserve energy, nodes can switch off their transmitters or go into sleep mode. There will be heavy packet loss if these nodes lie on any routing path. Similarly, a new node can join the network or an existing node can wake up from sleep mode. We have proposed three dynamic spanners, called dynamic local Delaunay triangulation (DLDel), dynamic relative neighborhood graph (DRNG), and dynamic Gabriel graph (DGG), which change their network topology dynamically to preserve the spanner properties and reduce heavy packet loss.

Various resource limitations and environmental constraints cause frequent link and node failures in ad hoc networks, which make the network unreliable. For example, edge disconnections occur due to buildings, walls, mountains, and obstacles between the wireless nodes. Similarly, node failures occur due to exhausted battery power, accidents, landslides, debris, volcanic eruptions, and cyclones.
The network topology should therefore be fault tolerant to handle these failures. In this thesis, we have proposed algorithms for fault tolerant versions of PLDel, RNG, and GG, called fault tolerant local Delaunay triangulation (FTLDel), fault tolerant relative neighborhood graph (FTRNG), and fault tolerant Gabriel graph (FTGG), respectively, by choosing the most stable nodes. The existing spanners assume that the nodes in the network are static. Frequent topology change due to node mobility disturbs various geometric properties of the spanner, such as neighborhood relations, spanning ratio, and planarity. Moreover, some of the edges may become invalid links and may lead to a disconnected network. In this thesis, we propose algorithms for the mobile local Delaunay triangulation (MLDel), mobile relative neighborhood graph (MRNG), and mobile Gabriel graph (MGG), to maintain their counterpart spanners PLDel, RNG, and GG, respectively, under mobility. These proposed spanners are simulated...

Item Load Balancing in Multihomed Stub Networks (2009) Sairam, Ashok Singh
While there has been steep growth in Internet bandwidth, international bandwidth prices have declined sharply. This has led to a trend where large enterprises and educational institutes continually increase their Internet bandwidth and subscribe to more than one ISP (multihoming) to increase resilience. Higher speeds will of course mean better performance, but beyond a point the improvement in performance does not match the increase in speed...

Item Efficient Algorithms and Data Structures for Massive Data Sets (2010) Alka
For many algorithmic problems, traditional algorithms that optimise the number of instructions executed prove expensive on I/Os. Novel and very different design techniques, when applied to these problems, can produce algorithms that are I/O efficient. This thesis adds to the growing chorus of such results. The computational models we use are the external memory model and the W-Stream model.
On the external memory model, we obtain the following results. (1) An I/O efficient algorithm for computing minimum spanning trees of graphs that improves on the performance of the best known algorithm. (2) The first external memory version of soft heap, an approximate meldable priority queue. (3) Hard heap, the first meldable external memory priority queue that matches the amortised I/O performance of the known external memory priority queues, while allowing a meld operation at the same amortised cost. (4) I/O efficient exact, approximate, and randomised algorithms for the minimum cut problem, which had not been explored before on the external memory model. (5) Some lower and upper bounds on I/Os for interval graphs. On the W-Stream model, we obtain the following results. (1) Algorithms for various tree problems and list ranking that match the performance of the best known algorithms and are easier to implement. (2) Pass efficient algorithms for sorting and the maximal independent set problem that improve on the best known algorithms. (3) Pass efficient algorithms for the graph problems of vertex colouring, approximate single source shortest paths, maximal matching, and approximate weighted vertex cover. (4) Lower bounds on passes for list ranking and maximal matching. We propose two variants of the W-Stream model, and design algorithms for the maximal independent set, vertex colouring, and planar graph single source shortest paths problems on those models...

Item Design of Network Intrusion Detection Systems: An Effective Alarm Generation Perspective (2011) Hubballi, Neminath
An Intrusion Detection System (IDS), hardware or software that monitors network or host activities for malicious behavior, is an indispensable component of system security. If an IDS deals with network activities (host activities) then it is called a network based IDS (host based IDS).
While signature and event based IDSs can detect known attacks only, anomaly based systems can detect both known and unknown attacks. An IDS is characterized by many parameters, namely effectiveness, efficiency, ease of use, security, transparency, interoperability, etc. The thesis focuses on effectiveness, also called effective alarm generation, for all variants of network based IDSs, namely signature based, event based, header anomaly based, and payload anomaly based systems. In a signature based IDS, most of the alarms generated are false positives, because signatures are generic and alarms are generated for all attack traffic which matches some signature, irrespective of whether the attack could successfully exploit any vulnerability. To address this issue, a false positive filtering scheme for signature based IDSs has been proposed, which correlates alarms with network context information. As an enhancement to this filter, the criticality of the application being targeted by an attack is examined before eliminating the corresponding alarm estimated to be a false positive. There is a certain class of known attacks for which signatures cannot be written, and so signature based IDSs cannot detect them. A novel event based IDS has been proposed for such attacks using the failure detection and diagnosis theory of discrete event systems. The working of the event based IDS has been illustrated on address resolution protocol based attacks. In header anomaly based IDSs, fairly high Detection Rate and Accuracy can be achieved for attacks detectable from header statistics. When the datasets to be processed by such systems are large, data summarization algorithms are applied. However, Detection Rate and Accuracy are lower in the systems that use data summarization algorithms than in those which do not. A header anomaly based IDS that handles voluminous network traffic while maintaining high Detection Rate and Accuracy has been proposed.
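As a rough illustration of the payload anomaly detection mentioned among the IDS variants above, one common formulation (not necessarily the exact model used in the thesis) profiles byte n-grams of clean training traffic and scores a payload by the fraction of n-grams never seen in training:

```python
from collections import Counter

def ngram_profile(payloads, n=3):
    """Build a relative-frequency profile of byte n-grams from
    (assumed clean) training payloads."""
    counts = Counter()
    for p in payloads:
        counts.update(p[i:i + n] for i in range(len(p) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def anomaly_score(profile, payload, n=3):
    """Fraction of the payload's n-grams unseen in training;
    higher means more anomalous."""
    grams = [payload[i:i + n] for i in range(len(payload) - n + 1)]
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in profile)
    return unseen / len(grams)
```

Impurities in the training set pollute the profile, which is the problem the impurity tolerant model in the abstract addresses with higher order n-grams and their frequencies.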
Issues of effective alarm generation are more involved in payload anomaly based IDSs than in their header based counterparts, and become more severe when the training dataset has impurities. An impurity tolerant payload anomaly based IDS has been proposed using n-gram based statistical models. Tolerance is achieved by using higher order n-grams and keeping their frequency information...

Item Some Novel Approaches to Improve Performance of Peer-to-Peer Systems (2011) Khataniar, Guruprasad
Over the years Peer-to-Peer (P2P) has become an accepted paradigm for large-scale Internet applications. A P2P system is defined as an autonomous, self-organized, scalable distributed system with a shared resource pool in which all nodes have identical capabilities and responsibilities and all communications are normally symmetric...

Item Mobile Agent Based Bio-Inspired Mechanisms for Servicing Networked Robots (2011) Godfrey, W. Wilfred
The development of middleware for networked robotics offers challenges that are often quite different from those encountered in singular robots. It calls for the design and development of a software framework that can facilitate interactions amongst the distributed, interoperable, heterogeneous robotic entities that comprise the network, and the simplification of complex robot control software systems, which in turn can ease the associated application development process. Mobile agents form one of the virtual machine based paradigms that are ideally suited to the development of such middleware. A framework that exploits all features of mobile agents to form the basic middleware is still missing. The work described in this thesis aims at the realization of an Artificial Being (AB) comprising a network of mobile robots which are serviced by mobile agents. While the robots form the physical effectors of the being, the mobile agents and the network facilitate the movement of information amongst them. The being is portrayed to use bio-inspired paradigms.
Application scenarios where mobile agents carry services in the form of programs for networked robots have been depicted. Bio-inspired mechanisms, PherCon and PherCon-C, have been proposed to provide faster servicing by the mobile agents. Both simulation and real-time experiments were carried out, and their results are presented and discussed. An increase in the number of agents through cloning can enhance the performance of such networked systems. However, uncontrolled cloning can lead to excessive consumption of system resources, including network bandwidth. The work reported in the thesis addresses this problem and proposes a novel stigmergy based method for controlling populations of heterogeneous mobile agents. As an offshoot of this research, a mobile agent framework for use in conjunction with the Internet of Things (IoT) has been proposed and its implementation described...

Item Mining Arbitrary Shaped Clusters in Large Dataset (2011) Patra, Bidyut Kumar
Cluster analysis or clustering, which groups objects in a meaningful manner, is an important task in pattern recognition, data mining, bio-informatics, image processing, information retrieval, intrusion detection, etc. To meet the demands of clustering activities in these domains, many clustering methods have been developed over the years. These clustering methods can be categorized from different perspectives. An important one is the algorithmic perspective, from which clustering methods are classified into distance based and density based methods. Naturally occurring clusters are of arbitrary shapes and different sizes. However, finding clusters of arbitrary shapes and sizes is a challenging task, especially in a large dataset. This thesis focuses on finding clusters of arbitrary shapes and sizes using both distance based and density based algorithms in large datasets. Distance based clustering methods usually find convex shaped clusters.
The classical single-link is a distance based method which can find clusters of arbitrary shapes and sizes. However, it scans the dataset many times and has quadratic time and space complexity, which prevents single-link from being applied to large datasets. To address these issues, a distance based hybrid clustering method termed l-SL has been proposed. It produces a flat clustering of arbitrarily shaped clusters in a large dataset, and works in two stages. In the first stage, the leaders clustering method is applied to the dataset to obtain many small convex shaped clusters. In the second stage, these convex shaped clusters are merged using the single-link method (given a minimum distance criterion). The proposed l-SL method scans the dataset once, and its clustering results are close to the final clustering of the single-link method. An improvement of the l-SL method has also been proposed in order to produce the same clustering results as the single-link clustering method. The proposed l-SL and its improved methods output a flat clustering of a given dataset. A new data summarization scheme termed data sphere has been proposed, which exploits the leaders method to acquire a summary of a dataset. The proposed data sphere is utilized to make the hierarchical single-link method work with large datasets, generating a hierarchy of clusterings. However, the clustering results of the data sphere based single-link method deteriorate at high compression ratios. To address this problem, a tolerance rough set theory based data summarization scheme termed rough bubble has been proposed. Tolerance rough set theory is exploited to resolve the uncertainty associated with cluster membership in the leaders method, and directional distance statistics are collected for each rough bubble. The main disadvantage of the proposed distance based l-SL, improved l-SL, data sphere, and rough bubble methods is that they cannot find arbitrarily shaped clusters in the presence of outliers and varying density clusters.
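The first stage of l-SL, the leaders method described above, can be sketched as a single scan with a distance threshold. A minimal, hypothetical version:

```python
import math

def leaders(points, tau):
    """Single-scan leaders clustering: each point joins the first
    leader within distance tau, otherwise it is promoted to a new
    leader. Returns {leader index: [member indices]}."""
    clusters = {}
    for idx, p in enumerate(points):
        for lead in clusters:
            if math.dist(points[lead], p) <= tau:
                clusters[lead].append(idx)
                break
        else:
            # No leader close enough: this point becomes a leader.
            clusters[idx] = [idx]
    return clusters
```

Each resulting cluster is roughly convex (a ball of radius tau around its leader), which is why the second stage merges them with single-link to recover arbitrary shapes.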
To address these issues, a dynamic parameter termed the Nearest Neighbor Factor (NNF) is introduced to determine the relative position of an object in its local neighborhood. A density based clustering method which uses the NNF is introduced for finding clusters of arbitrary shapes, different sizes, and variable density in a dataset...

Item Consistent Online Backup in Transactional File Systems (2012) Deka, Lipika
A consistent backup, preserving data integrity across files in a file system, is of utmost importance for the purposes of correctness and of minimizing system downtime during data recovery. With the present day demand for continuous access to data, backups have to be taken of an active file system, putting the consistency of the backup copy at risk. We propose a scheme referred to as mutual serializability to take a consistent backup of an active file system, assuming that the file system supports transactions. The scheme extends the set of conflicting operations to include read-read conflicts, and it is shown that if the backup transaction is mutually serializable with every other transaction individually, a consistent backup copy is obtained. The user transactions continue to serialize among themselves using some standard concurrency control protocol such as Strict 2PL. Starting with only reads and writes, we extend the scheme to include file operations such as directory operations, file descriptor operations, operations such as append, truncate, and rename, as well as operations that insert and delete files. We put our scheme into a formal framework to prove its correctness, and the formalization as well as the correctness proof is independent of the concurrency control protocol used to serialize the user transactions. The formally proven results are then realized by a practical implementation and evaluation of the proposed scheme.
In the practical implementation, applications run as a sequence of transactions, and under normal circumstances, when the backup program is not active, they simply use a standard concurrency control technique such as a locking or timestamp based protocol (Strict 2PL in the current implementation) to ensure consistent operations. Once the backup program is activated, all other transactions are made aware of it by a triggering mechanism, and must then serialize themselves with respect to the backup transaction as well. If at any moment a conflict arises while establishing the pairwise mutually serializable relationship, the conflicting user transaction is either aborted or paused to resolve the conflict. We ensure that the backup transaction completes without ever having to roll back, by ensuring that it reads only from committed transactions and by never choosing it as the victim when resolving a conflict. To be able to simulate the proposed technique, we designed and implemented a user space transactional file system prototype that exposes ACID semantics to all applications. We simulated the algorithms devised to realize the proposed technique and ran experiments to help tune them. The system was exercised with workloads exhibiting a wide range of access patterns, and experiments were conducted on each workload in two scenarios: one with the mutual serializability protocol enabled (thus capturing a consistent online backup) and one without (thus capturing an inconsistent online backup). The results from the two scenarios were compared to calculate the overhead incurred while capturing a consistent backup.
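The extended conflict relation at the heart of mutual serializability can be sketched as follows. This is a simplified reading under stated assumptions: the `Op` record, the transaction names, and the exact rule are illustrative, and the thesis covers many more operation types than reads and writes:

```python
from collections import namedtuple

Op = namedtuple("Op", "txn kind file")  # kind is 'r' (read) or 'w' (write)

def conflict(a, b, backup_txn="B"):
    """Operations of different transactions conflict if they touch the
    same file and either one is a write or, as the extension, one of
    the two transactions is the backup transaction, so that even
    read-read pairs order the backup against user transactions."""
    if a.txn == b.txn or a.file != b.file:
        return False
    if "w" in (a.kind, b.kind):
        return True
    return backup_txn in (a.txn, b.txn)
```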
The performance evaluation shows that workloads resembling most present day real workloads, exhibiting low inter-transactional sharing and actively accessing only a small percentage of the entire file system space, incur very little overhead (2.5% in terms of transactions conflicting wit...

Item (An) Online Semi Automated Part of Speech Tagging Technique Applied To Assamese (2013) Dutta, Pallav Kumar
Developing annotated tagged corpora for a language with limited electronic resources can be very demanding. Although Assamese is spoken by about 15 million people in the Indian state of Assam as a first language, the development of electronic resources for the language has lagged behind other Indian languages. Nor has there been much work on POS tagging for Assamese. In order to fill this gap, we have designed a POS tagger for Assamese. Our approach is to use a combination of methods to try to get good results. Further, we amortise the manual intervention over the period of tagging rather than doing all the manual work at the beginning. This allows us to start the tagging system quickly, but it also means that what we have is a semi-automatic tagger and not an automatic tagger. Our method requires only native speakers' intervention in stages other than the beginning, making the system amenable to some form of crowdsourcing with a few experts for moderation. This will enable our system to create very large tagged corpora in the language. We first create a knowledge base using a number of methods. This knowledge base is then used to automatically tag sentences in the language. This tagging uses a combination of stemming, the application of a few grammatical rules, and a bigram tagger. The tagged sentences are then shown to non-expert native speakers for verification and correction. Before starting the actual tagging process, the knowledge base was tuned by examining the results on small data sets using experts instead of native speakers.
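The bigram tagger mentioned in the abstract above can be sketched as a (previous tag, word) lookup with a back-off default. A toy version, with hypothetical tag names; the actual knowledge base combines this with stemming and grammatical rules:

```python
from collections import Counter, defaultdict

def train_bigram_tagger(tagged_sents):
    """Count (previous tag, word) -> tag from tagged training sentences."""
    model = defaultdict(Counter)
    for sent in tagged_sents:
        prev = "<s>"  # sentence-start pseudo-tag
        for word, t in sent:
            model[(prev, word)][t] += 1
            prev = t
    return model

def tag(model, words, default="NOUN"):
    """Tag each word with the most frequent tag seen in its bigram
    context; back off to a default tag for unseen contexts."""
    out, prev = [], "<s>"
    for w in words:
        counts = model.get((prev, w))
        t = counts.most_common(1)[0][0] if counts else default
        out.append((w, t))
        prev = t
    return out
```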
The design of a user friendly interface plays an important role in reducing the time native speakers take in their examination.

Item Adaptivity and Interface Design: A Human-Computer Interaction Study in E-Learning Applications (2013) Deshpande, Yogesh D
Computer based teaching-learning, or e-learning, provides more flexible methods of interaction with learning content than the traditional classroom set-up. It motivates learners towards self learning and evaluation in an open virtual environment. However, the usefulness of e-learning depends upon learner beliefs and the degree of adjustment or adaptation shown in the learner's behavior. The learning goal and the learning interface have a decisive role in influencing learner adaptations. Various researchers have addressed issues in learner adaptations to (a) the cognitive levels of learning goals and (b) the interaction environment; however, they have been addressed separately. An efficient methodology for quantifying learner adaptations and the learner's ability to familiarize with learning interfaces was also lacking. Both these shortcomings are addressed in this thesis by providing a methodology for measuring adaptations. An adaptation score that quantifies adaptations and an adaptivity score that quantifies the ability to adapt are proposed. The thesis attempts to explain the combined impact of learning task complexity and user interface design on learner adaptations in beliefs, interactions, and performance, which had not been done before. Quantitative data from e-learning interactions involving the three basic cognitive levels of learning complexity, viz. knowledge, comprehension, and application, and two types of navigation design, viz. a hierarchical horizontal menu and a non-hierarchical split menu, were analyzed. The empirical data suggest that learning task complexity (cognitive level) affects adaptations in interactions between similar tasks (task adaptation) on the same interface.
Since these task adaptations did not vary across user interfaces, they were found to be task-dependent. As a result, the cognitive load of learning can be judged by the task adaptation score and utilized to adapt the pedagogic strategy or learning content. Results of our study reveal that belief in one's own e-learning skills (self efficacy) affected adaptations in learning behavior and learning performance. On the other hand, adaptivity to the navigation design of the user interface was found to be interface-dependent and, interestingly, also influenced learning performance. The beliefs were found to mediate the adaptivity scores. Based on the results of the experiments, the thesis provides recommendations on utilizing these metrics for personalization of e-learning on the basis of the adaptations. The study examines the phenomenon of interaction between human and computer from a multidisciplinary view of Human Computer Interaction (HCI), combining computer science, behavioral science, and education.

Item Design and Development of Intrusion Detection System: A Discrete Event System Approach (2014) Barbhuiya, Ferdous Ahmed
With the rapid increase of security threats on the Internet, an Intrusion Detection System (IDS), hardware or software that monitors network or host activities for malicious behavior, is an indispensable component of network security. Of the two prevalent IDS design techniques, signature based IDSs can detect known attacks only, while anomaly based systems can detect both known and unknown attacks but generate a large number of false alarms. There are classes of attacks, like ARP based attacks, ICMP based attacks, and low rate TCP DoS attacks, which escape detection by both signature and anomaly IDSs. This thesis proposes a Discrete Event System (DES) based approach to designing IDSs for attacks across different network layers. DES models are designed for the system under normal and failure conditions, where attacks are mapped to failures.
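The mapping of attacks to failures of a DES model can be illustrated with a toy check that replays an observed event sequence against both the normal and the faulty model. The state names, events, and dict-based transition encoding here are hypothetical, not the thesis's actual models:

```python
def diagnose(normal, faulty, events, start="s0"):
    """Replay an observed event sequence on the normal and the faulty
    (attack) DES model and report which model accepts it. Each model
    is a transition map: (state, event) -> next state."""
    def accepts(model):
        state = start
        for e in events:
            if (state, e) not in model:
                return False
            state = model[(state, e)]
        return True
    ok, bad = accepts(normal), accepts(faulty)
    if ok and not bad:
        return "normal"
    if bad and not ok:
        return "attack"
    return "ambiguous"
```

A real diagnoser estimates the current state set online rather than replaying complete sequences, but the acceptance test above conveys the classification idea.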
A state estimator called a diagnoser is designed, which observes the sequences of events generated by the system to decide whether the states through which the system traverses correspond to the normal or the faulty DES model. The diagnoser acts as the IDS engine. For detecting ARP based attacks, an active probing mechanism based on ARP requests and responses is used. An active DES framework is adopted to model ARP based attacks using a controllable event (the ARP probe) which creates a difference in the sequence of events between normal and attack conditions. Next, to handle network uncertainties due to congestion, the I-diagnosis framework of DES has been adopted for detecting ICMP based attacks, where diagnosis is tested only in those sequences of states where a fault is followed by an indicator event. Redundant states of the I-diagnosis framework's diagnoser are removed, and a reduced detector is proposed to improve complexity. Further, in the induced low rate TCP DoS attack, the attack and genuine sequences of states differ only with some probability, so to detect this attack the stochastic DES framework has been adopted, where the attack case can be identified with some probability. Lastly, considering the migration from IPv4 to IPv6 addressing in the Internet, a detection mechanism for NDP based attacks in IPv6 networks is proposed. To tackle the challenge of errors in manually building complex DES models for NDP related attacks in IPv6, an LTL based DES framework is adopted. All the proposed detection mechanisms are implemented in a testbed, and the results show the effectiveness of the systems in terms of accuracy and detection rate.

Item Asymmetric Region Local Binary Patterns for Face Image Analysis (2014) Naika C. L., Shrinivasa
This thesis explores feature extraction techniques based on Local Binary Patterns (LBP) for automatic face image analysis.

Item An Architectural Framework for Seamless Hand-off between UMTS and WLAN Network (2014) Barooah, Maushumi
In recent years, cellular wireless technologies like GPRS, UMTS, and CDMA, and Wireless Local Area Network (WLAN) technologies like IEEE 802.11, have seen a quantum leap in their growth. Cellular technologies can provide data services over a wide area, but with lower data rates. WLAN technologies offer higher data rates, but over smaller areas, popularly known as hot spots. The demand for ubiquitous data service can be fulfilled if it is possible for the end-user to seamlessly roam between these heterogeneous technologies. In this thesis, a novel architectural framework is proposed, consisting of an intra-ISP network called the Switching Network (ISN), which is fused between the UMTS and WLAN networks as well as data (Internet) services, for providing seamless mobility without affecting user activities. The ISN uses MPLS and Multiprotocol BGP to switch data traffic between UMTS and IEEE 802.11 networks as the user moves. The ISN is integrated with the UMTS network at the GGSN-3G and at the Access Point for the IEEE 802.11 network. The Mobile Node considered is a high end device (e.g. a PDA or smart phone) equipped with two interfaces, one for UMTS and the other for WiFi, which can be used simultaneously. Simulation results show the improved performance of the ISN based framework over existing schemes. Most of the traffic in today's networks uses the Transmission Control Protocol (TCP) as the transport layer protocol for reliable end-to-end packet delivery. However, TCP treats packet loss as the result of network congestion, which makes it unsuitable for mobile wireless communication, where sporadic and temporary packet losses are usually due to fading, shadowing, hand-off, and other radio effects.
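The basic LBP operator underlying the face image analysis thesis above thresholds each pixel's 8 neighbours against the centre to form an 8-bit code (the asymmetric-region variants in the thesis generalise this). A plain-Python sketch, with an assumed clockwise-from-top-left bit order:

```python
def lbp_code(img, y, x):
    """8-neighbour Local Binary Pattern code of pixel (y, x):
    each neighbour >= centre contributes one bit, taken clockwise
    starting from the top-left neighbour."""
    centre = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << bit
    return code
```

Histograms of these codes over image regions are the usual LBP face descriptor.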
During a vertical handoff between different wireless technologies, the problem of end-to-end connection and reliability management for TCP becomes more severe. This thesis also evaluates the performance of TCP over the proposed ISN-based framework. The improved TCP scheme uses cross-layer interaction between the network and transport layers to estimate the TCP retransmission timeout and congestion window during handover. Simulation results establish the effectiveness of the proposed scheme. Ensuring Quality of Service (QoS) for mobile users during vertical handover between IEEE 802.11 and UMTS is another key requirement for seamless mobility and for transferring existing connections from one network to another. The QoS assurance criteria for existing connections can be affected by fluctuations in data rates when a user moves from the high-speed WLAN network to the low-speed UMTS network, even in the presence of another WLAN network in its vicinity. This can happen if the alternate WLAN network is highly loaded. Therefore, handover from a high-speed network to a low-speed network should be avoided whenever possible. The final contribution of this thesis proposes a QoS-based handover procedure that prioritizes existing connections over new connections, so that rate fluctuations due to handover can be avoided if another WLAN network exists within the range of the mobile user. Whenever the possibility of handover is detected, a pre-handover bandwidth reservation technique is used to reserve bandwidth at the alternate WLAN networks to avoid QoS degradation.
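The pre-handover reservation step lends itself to a simple admission-control sketch. The function, field names and bandwidth figures below are illustrative assumptions, not the thesis's implementation: a candidate WLAN AP admits the existing connection only if its residual capacity covers the required bandwidth; otherwise the handover falls back.

```python
def select_and_reserve(candidate_aps, required_bw):
    """Pick the first candidate AP whose residual capacity can admit the
    existing connection, and reserve bandwidth ahead of the handover."""
    for ap in candidate_aps:
        available = ap["capacity"] - ap["reserved"]
        if available >= required_bw:
            ap["reserved"] += required_bw  # pre-handover reservation
            return ap["name"]
    return None  # no lightly loaded WLAN nearby: stay on UMTS

# A highly loaded AP is skipped in favour of a lightly loaded one:
aps = [
    {"name": "wlan-A", "capacity": 20.0, "reserved": 18.5},  # highly loaded
    {"name": "wlan-B", "capacity": 20.0, "reserved": 4.0},
]
print(select_and_reserve(aps, required_bw=2.0))  # -> wlan-B
```

Reserving before the handover completes is what prevents the existing connection from seeing a rate drop at the moment of switching.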
The proposed scheme is implemented in the Qualnet network simulator and...

Item Capacity Enhancement, QoS and Rate Adaptation in IEEE 802.11s: A Performance Improvement Perspective (2014) Chakraborty, Sandip
Current deployments of wireless community and municipal-area networks provide ubiquitous connectivity to end users through a wireless mesh backbone that aims to replace wired infrastructure with wireless multi-hop connectivity. The IEEE 802.11s standard was published recently to support mesh connectivity over the widely deployed IEEE 802.11 architecture underlying Wireless Fidelity (WiFi) access networks. This thesis explores a number of research directions to optimize the mesh peering, channel access, scheduling and mesh path selection protocols for IEEE 802.11s mesh networks. The standard provides three major protocols to support mesh functionality: the Mesh Peer Management Protocol (MPM) to establish mesh connectivity and manage topology, Mesh Coordinated Channel Access (MCCA) for channel access and scheduling, and the Hybrid Wireless Mesh Protocol (HWMP) to support mesh path establishment based on link-layer characteristics. The objective of this thesis is to augment the existing protocols for better connectivity and efficient usage of resources. In a mesh network, the efficiency of the backbone network can be improved through directional communication by exploiting spatial reuse. However, the use of directional antennas imposes several new research challenges that are explored in this thesis. The first contribution of this thesis enhances the functionality of the mesh channel access and path selection protocols to support directional communication over an IEEE 802.11s mesh backbone. Though MCCA provides reservation-based channel access, the standard does not specify any mechanism for multi-class traffic services to improve the Quality of Service (QoS) for end-users.
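For context on HWMP's link-layer-aware path selection, the 802.11s default is the airtime link metric, roughly c_a = (O + B_t / r) / (1 - e_f), where O is channel access and protocol overhead, B_t a test frame length, r the link data rate and e_f the frame error rate. A minimal sketch, with the caveat that the overhead value used here is a placeholder rather than the standard's PHY-specific constant:

```python
def airtime_metric(rate_mbps, frame_error_rate,
                   overhead_us=335.0, test_frame_bits=8192):
    """Airtime cost c_a = (O + B_t / r) / (1 - e_f): expected channel
    time to deliver a test frame over this link, inflated by losses."""
    tx_time_us = test_frame_bits / rate_mbps  # bits / (Mbit/s) = microseconds
    return (overhead_us + tx_time_us) / (1.0 - frame_error_rate)

# A faster link beats a slower one, and a lossier link costs more:
print(airtime_metric(54.0, 0.1) < airtime_metric(6.0, 0.1))   # True
print(airtime_metric(54.0, 0.5) > airtime_metric(54.0, 0.1))  # True
```

HWMP sums this per-link cost along a path and picks the minimum, which is why it reflects link-layer characteristics rather than hop count.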
The next contribution in this direction provides QoS support and service differentiation for the MCCA-based channel access mechanism over a multi-interface communication paradigm. Modern wireless hardware can support multiple data rates depending on wireless channel dynamics. Consequently, the MPM protocol has been augmented to support multi-rate adaptation over the IEEE 802.11s protocol elements.

Item Stochastic Coverage and Connectivity in Heterogeneous Wireless Sensor Networks (2014) Gupta, Hari Prabhat
A wireless sensor network (WSN) consists of several tiny battery-powered sensors that can communicate with each other to monitor a Field of Interest (FoI). Coverage is acknowledged as an important metric to measure the quality of service of WSNs: it specifies how well a FoI is monitored by the WSN. Network connectivity is complementary to coverage and indicates how well the sensory data can be communicated by the sensors to the sink. This thesis studies coverage and connectivity problems in WSNs with stochastic deployment of sensors and demonstrates their applications. The main challenge in the deployment of sensors is to ensure that a minimum number of sensors is used to provide the desired level of coverage and connectivity in the FoI. The first contribution of this thesis is to estimate the critical sensor density required for a desired coverage ratio under border effects in WSNs. This work can be used to accurately determine the number of sensors required in any convex polygon-shaped FoI. In some applications of WSNs, the sensors cannot be deployed directly inside the FoI to be monitored. For example, in a canal water monitoring project, existing coverage solutions cannot be used because the sensors cannot be deployed on the water surface.
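For intuition about critical sensor density, the classical baseline ignores border effects: under uniform random (Poisson) deployment with density λ and sensing radius r, the covered fraction is p = 1 - exp(-λπr²), which can be solved for λ and scaled by the field area. A small sketch of this baseline (the thesis's first contribution is precisely to refine such estimates under border effects):

```python
import math

def sensors_required(area, sensing_radius, coverage_ratio):
    """Expected sensor count for a target coverage ratio under uniform
    random deployment, border effects ignored: solve
    p = 1 - exp(-lambda * pi * r^2) for lambda, then scale by area."""
    lam = -math.log(1.0 - coverage_ratio) / (math.pi * sensing_radius ** 2)
    return math.ceil(lam * area)

# 500 m x 500 m field, 20 m sensing radius, 95% coverage target:
print(sensors_required(500 * 500, 20.0, 0.95))  # -> 596
```

Near the field border a disk of radius r partly falls outside the FoI, so this baseline underestimates the density actually needed there, which is the gap the border-effect analysis closes.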
In the second contribution, we consider sensors deployed uniformly at random outside the FoI near its boundary and estimate the minimum number of sensors required for a desired level of coverage and connectivity. We also illustrate an application of this work in developing a traffic information acquisition system.

Item Tree Based Data Gathering from Sensors: Topology Management Sustaining QoS (2014) Chakraborty, Suchetana
...maintain both the connectivity and the sensing coverage in the network. While the first contribution focuses only on the connectivity aspect, the second contribution of the thesis considers both connectivity and coverage maintenance as design objectives. Considering irregular terrain properties and optimal positioning of the sink, the energy depletion rate gradually decreases from the sink towards the terrain periphery. Both the connectivity and the sensing coverage are therefore affected, as the nodes near the sink run out of energy sooner than the leaves of the tree rooted at the sink, thereby creating network holes. For an improved network lifetime, a gradient-based node deployment strategy has been proposed that also satisfies the initial connectivity and coverage criteria. The density of the deployed nodes follows a gradient, estimated as the ratio of the energy dissipation at any intermediate node to that of the leaf nodes in its rooted subtree. The proposed theory has been justified through worst-case analysis using sensor network calculus. As an unbalanced data gathering tree escalates the problem of uneven energy depletion in the network, a load-balanced distributed BFS tree construction scheme has been proposed. Further, to handle arbitrary node failures, a localized and cost-effective tree maintenance scheme has been introduced.
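The load-balanced BFS tree idea can be illustrated with a centralized toy version (the thesis proposes a distributed scheme; the tie-breaking rule here is an assumption): each node attaches to the candidate parent on the previous BFS level that currently has the fewest children, which spreads forwarding load across same-level parents.

```python
from collections import deque

def load_balanced_bfs_tree(adj, sink):
    """Build a BFS data-gathering tree rooted at the sink; each node
    attaches to the least-loaded eligible parent one level closer."""
    level = {sink: 0}
    queue = deque([sink])
    while queue:                      # first pass: BFS levels
        u = queue.popleft()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                queue.append(v)
    parent = {sink: None}
    children = {v: 0 for v in adj}
    for v in sorted(level, key=level.get):  # second pass: pick parents
        if v == sink:
            continue
        candidates = [u for u in adj[v] if level[u] == level[v] - 1]
        best = min(candidates, key=lambda u: children[u])
        parent[v] = best
        children[best] += 1
    return parent

adj = {
    "sink": ["a", "b"],
    "a": ["sink", "c", "d"],
    "b": ["sink", "c", "d"],
    "c": ["a", "b"],
    "d": ["a", "b"],
}
tree = load_balanced_bfs_tree(adj, "sink")
# c and d end up under different parents instead of both joining "a"
```

A plain BFS would let both c and d pick the first parent discovered, concentrating traffic and draining that node first, which is exactly the uneven-depletion problem the balanced construction mitigates.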
Finally, the characteristics and design objectives of tree-based data gathering with failure support have been explored for two different applications: road traffic monitoring and critical infrastructure monitoring. The inherent challenges in the distribution and management of a sensor network along a road require application-specific protocol support for network connectivity, sensing coverage, reliable data gathering and network lifetime improvement. The next contribution of the thesis introduces the concept of k-strip length coverage along the road, which ensures better sensing coverage for the detection of moving vehicles than conventional barrier coverage and full area coverage, both in the availability of sufficient information for statistical processing and in the number of sensors required to be active. To extend the network lifetime, every sensor follows a sleep-wakeup schedule that maintains network connectivity and k-strip length coverage. This scheduling problem is modeled as a graph optimization problem whose NP-hardness motivates the design of a centralized heuristic providing an approximate solution. The properties of the proposed centralized heuristic are then explored to design a per-node solution based on local information.
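The flavour of such a centralized heuristic for an NP-hard activation problem can be hinted at with a plain greedy set-cover sketch; the k-strip and connectivity constraints of the thesis are omitted and all names are illustrative:

```python
def greedy_active_set(sensor_cover, strips):
    """Greedy heuristic: repeatedly wake the sensor covering the most
    still-uncovered road strips, a classical approximate approach to
    the (NP-hard) minimum active-set problem."""
    uncovered = set(strips)
    active = []
    while uncovered:
        best = max(sensor_cover, key=lambda s: len(sensor_cover[s] & uncovered))
        gained = sensor_cover[best] & uncovered
        if not gained:
            break  # remaining strips cannot be covered by any sensor
        active.append(best)
        uncovered -= gained
    return active

# Strips 1..6 along the road; each sensor covers an interval of strips:
cover = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
    "s4": {1, 6},
}
print(greedy_active_set(cover, range(1, 7)))  # -> ['s1', 's3']
```

Sensors left out of the active set can sleep until the next scheduling round, which is where the lifetime extension comes from.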