eScholarship, Open Access Publications from the University of California
UC Santa Cruz Electronic Theses and Dissertations

A Benchmarking System for Mobile Ad Hoc Network Routing Protocols

Abstract

Network simulations are heavily used in the networking community to evaluate the performance of computer networks and their protocols. Simulations are often chosen over alternatives such as live experiments because they scale well under limited resources and make experiments reproducible. Many of the routing protocols designed for Mobile Ad Hoc Networks (MANETs) are evaluated solely on performance measured in these simulations, yet the simulation environments to which the protocols are exposed are often limited in scope. Only certain aspects of the routing protocols are tested, so the protocols are understood only in terms of the fabricated scenarios to which they are subjected.

We first investigate current best practices in simulation-based MANET routing protocol evaluation to examine how widespread this problem is in the networking community. We extend a prior characterization of the settings and parameters used in MANET simulations by studying the papers published in one of the premier mobile networking conferences between 2006 and 2010. We find that several configuration pitfalls persist, to which many papers fall victim; these pitfalls damage the integrity of the results, as well as of any research aimed at reproducing and extending them. We then describe the simulation “design space” of MANET routing in terms of its basic dimensions and corresponding parameters, and discuss the benchmark infrastructure we created to provide an easy-to-use solution for testing these protocols across a wide range of scenarios; a sketch of one point in this design space appears below. The following chapter looks extensively at the realistic scenarios provided with the benchmark, which serve as sample scenarios to promote modeling simulations after real-world situations and to show the flexibility of adding new scenarios. We also propose four “auxiliary” metrics to increase simulation integrity. Finally, we present results generated by the benchmarking tool and offer our concluding thoughts.
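To make the notion of a simulation “design space” concrete, the minimal sketch below models one scenario as a set of parameters commonly reported in MANET simulation studies (node count, field size, mobility model, traffic load, and so on). The class and field names are illustrative assumptions for this summary, not the benchmark's actual interface, and the default values are typical figures from the literature rather than the dissertation's settings.

```python
from dataclasses import dataclass

@dataclass
class ScenarioConfig:
    """One point in the MANET simulation design space.

    Fields mirror parameters commonly reported in MANET
    simulation studies; names and defaults are hypothetical.
    """
    num_nodes: int = 50              # network size
    area_m: tuple = (1000, 1000)     # field (width, height) in meters
    mobility_model: str = "random_waypoint"
    max_speed_mps: float = 10.0      # maximum node speed, m/s
    pause_time_s: float = 30.0       # pause between movements, seconds
    traffic_type: str = "cbr"        # constant-bit-rate flows
    num_flows: int = 10              # concurrent source-destination pairs
    packet_rate_pps: float = 4.0     # packets per second per flow
    radio_range_m: float = 250.0     # nominal transmission range
    sim_time_s: float = 900.0        # duration of each run
    num_runs: int = 30               # independent replications for statistics

# A benchmark of this kind might sweep one dimension at a time
# while holding the others fixed, e.g. varying node density:
baseline = ScenarioConfig()
density_sweep = [ScenarioConfig(num_nodes=n) for n in (25, 50, 100, 200)]
```

Enumerating the dimensions explicitly like this makes it clear when an evaluation covers only a narrow slice of the space, which is the configuration pitfall the survey above identifies.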
