Spectrum Sharing by Cognitive Radios: Opportunities and Challenges
- Tandra, Rahul
- Advisor(s): Sahai, Anant
Abstract
Under the current regulatory model of static frequency assignment, most of the spectrum is allocated while the actual usage is sparse. This seeming waste is commonly referred to as the problem of "regulatory overhead". The advent of frequency-agile cognitive radios represents a potential opportunity to improve performance and reduce this regulatory overhead. The white-space ruling of November 4, 2008, which legalized the reuse of television-band white spaces, is a first step taken by the FCC to address the issue of regulatory overhead. However, we do not yet know which regulatory changes will actually reduce this overhead.
In this thesis we focus on one fundamental aspect of this problem: sensing the spectrum for bands that are not being used for their primary purpose at the current time and location. It turns out that obtaining a suitable technical formulation for this seemingly simple problem is highly non-trivial. Traditionally, sensing problems are formulated mathematically as a binary hypothesis test between two hypotheses, "signal present" and "signal absent". In the latter part of this thesis we show that this traditional framework does not capture all the interesting dimensions of the sensing problem. However, there are several 1-bit decision problems in spectrum sharing that can be directly modeled using the hypothesis-testing framework, and our technical results are directly applicable to those problems.
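For concreteness, the traditional formulation can be written as the following standard binary hypothesis test (a textbook form; the notation y[n], x[n], w[n], and h is chosen here for illustration rather than taken from the thesis):

    % H0: primary signal absent, H1: primary signal present.
    % y[n]: received samples, x[n]: primary signal,
    % w[n]: noise plus interference, h: channel (fading) gain.
    \begin{align*}
      \mathcal{H}_0 &: \; y[n] = w[n],             & n = 1, \dots, N \\
      \mathcal{H}_1 &: \; y[n] = h\,x[n] + w[n],   & n = 1, \dots, N
    \end{align*}

The robustness results below concern detectors that must distinguish these hypotheses when the statistics of w[n] and h are only partially known.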
Within this binary hypothesis testing framework, we show that designing robust algorithms that can distinguish between the two hypotheses at low signal-to-noise ratios (SNRs) is a very hard problem. Real-world uncertainties in the noise-plus-interference process and in the fading process make spectrum sensing very challenging. In particular, we prove that there exist fundamental limits, called SNR walls, below which robust detection is impossible regardless of how many samples we take.
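To see why uncertainty induces an SNR wall for the simplest detector, consider this minimal Python sketch of an energy detector (radiometer). It assumes the standard noise-uncertainty model in which the noise variance is only known to lie in an interval [sigma2/rho, rho*sigma2]; under that model the radiometer's wall works out to (rho^2 - 1)/rho. Variable names and numbers are illustrative, not from the thesis.

    import numpy as np

    def energy_detector(y, sigma2_nominal, rho):
        # Radiometer under noise uncertainty: to control false alarms
        # for EVERY noise level in [sigma2_nominal/rho, rho*sigma2_nominal],
        # the threshold must sit at the worst-case (largest) noise power.
        test_statistic = np.mean(np.abs(y) ** 2)
        return test_statistic > rho * sigma2_nominal

    def snr_wall_radiometer(rho):
        # Below this SNR, no sample size helps: a weak signal on top of
        # the smallest possible noise is indistinguishable from the
        # largest possible noise alone, since
        # sigma2/rho + P <= rho*sigma2 whenever P/sigma2 <= (rho**2 - 1)/rho.
        return (rho ** 2 - 1) / rho

    # Example: 1 dB of noise uncertainty (rho ~ 1.26) already gives an
    # SNR wall around -3.3 dB.
    rho = 10 ** (1 / 10)
    print(10 * np.log10(snr_wall_radiometer(rho)))

The point of the sketch is that the threshold is pinned by the worst-case noise level rather than the true one, so no amount of averaging closes the gap once the signal power falls inside the uncertainty interval.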
We show that the presence of signal features in the primary signal can significantly improve the robustness of detection for the secondary user. Coherent detection algorithms that exploit commonly occurring features like pilot tones and cyclostationarity are also limited by SNR walls, although their walls are lower than those for detecting featureless signals. We also explicitly construct signals with macroscale features that can be robustly detected at arbitrarily low SNRs. These results suggest that in order to enable cognitive radio operation, the primary user must pay in terms of restrictions on its freedom to choose any possible signaling scheme.
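For contrast with the radiometer above, here is a hedged sketch of a coherent pilot detector of the kind referred to: it correlates the received samples against a known pilot tone, so the noise contribution to the test statistic averages down while the pilot contribution does not. This is why feature detectors tolerate much lower SNRs; their own SNR walls arise because channel coherence limits how long the correlation can usefully run. The pilot frequency, powers, and threshold below are illustrative assumptions.

    import numpy as np

    def pilot_detector(y, pilot, threshold):
        # Coherent (matched-filter) statistic: correlate with the known
        # pilot and normalize. The noise component shrinks like
        # 1/sqrt(N); the pilot component does not.
        statistic = np.abs(np.vdot(pilot, y)) / len(y)
        return statistic > threshold

    # Illustrative run: a complex pilot tone buried in unit-variance
    # complex Gaussian noise at -10 dB SNR, where a radiometer with
    # modest noise uncertainty would already be unreliable.
    rng = np.random.default_rng(0)
    n, f0 = 10_000, 0.01
    pilot = np.exp(2j * np.pi * f0 * np.arange(n))
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    y = np.sqrt(0.1) * pilot + noise  # pilot power 0.1 -> SNR = -10 dB
    print(pilot_detector(y, pilot, threshold=0.1))  # True: pilot found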
The SNR wall result leads to a natural question: why does a cognitive radio need to sense at extremely low SNRs? To answer it, we are forced to examine the spatial dimension of the sensing problem, which introduces tradeoffs that cannot be fully understood within the traditional hypothesis testing formulation. In this thesis, we propose a new space-time sensing framework that helps us better understand this problem and brings out tradeoffs that are otherwise not apparent. We give two new metrics, the "Fear of harmful interference" and the "Weighted probability of space-time recovered", which characterize the safety of the primary user and the performance of the secondary user, respectively. These metrics show that single-radio sensing algorithms can recover only a small fraction of the available opportunities, even if they take an infinite number of samples. The key reason is that single-radio sensors are forced to be conservative in order to protect the primary user from atypical fading events. This makes a concrete case for examining other approaches, such as collaborative sensing and multiband sensing algorithms.
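The conservatism argument can be illustrated with a toy Monte Carlo model (a deliberately simplified stand-in for the thesis's metrics, with made-up propagation numbers): a single radio cannot distinguish "far from the primary" from "nearby but deeply shadowed", so its threshold must cover atypical fades at the edge of the protected region, and many genuine opportunities beyond that edge go unrecovered.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy propagation model (illustrative numbers, not from the thesis):
    # received primary power in dB at distance d, lognormal shadowing.
    P0, alpha, sigma_shadow = 0.0, 3.0, 8.0  # dB @ d=1, path-loss exp., shadow std (dB)
    r_protect, r_max = 10.0, 30.0            # protected radius, edge of studied region
    epsilon = 0.01                           # target "fear of harmful interference"

    def rx_power_db(d, n):
        return P0 - 10 * alpha * np.log10(d) + sigma_shadow * rng.standard_normal(n)

    # The radio must still flag the primary when it sits just inside the
    # protected radius AND is in an atypical deep fade, so the sensing
    # threshold is pinned at the epsilon-quantile of the power there.
    lam = np.quantile(rx_power_db(r_protect, 200_000), epsilon)

    # Fraction of genuinely vacant locations (d > r_protect) recovered,
    # sampling locations uniformly over area.
    d = np.sqrt(rng.uniform(r_protect**2, r_max**2, 200_000))
    recovered = rx_power_db(d, d.size) < lam
    print(f"threshold = {lam:.1f} dB, fraction recovered = {recovered.mean():.2f}")

In this toy model the radio is granted a noiseless power measurement, so sample complexity is not the bottleneck; the deep-fade tail alone forces the threshold so low that much of the vacant area is declared occupied. That is the qualitative content of the single-radio limitation and the motivation for collaborative and multiband approaches.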