eScholarship
Open Access Publications from the University of California

UC Riverside Electronic Theses and Dissertations
Security and Privacy in Search Services

  • Author(s): Wang, Peng
  • Advisor(s): Ravishankar, Chinya
Abstract

In the first part of this dissertation, we show how to execute range queries securely and efficiently on encrypted databases in the cloud. Current methods provide either security or efficiency, but not both. Many schemes even reveal the ordering of encrypted tuples, which, as we show, allows adversaries to estimate plaintext values accurately.
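To see why order leakage is dangerous, consider a minimal sketch (our illustration, not the dissertation's specific attack): if the scheme reveals each ciphertext's rank and the adversary knows the plaintext domain bounds, simple rank interpolation already yields accurate estimates on roughly uniform data, without breaking the encryption itself.

```python
def estimate_from_rank(rank, n, lo, hi):
    """Estimate the plaintext at `rank` (0-based) among n order-revealed
    ciphertexts, assuming values roughly uniform over [lo, hi]."""
    return lo + (hi - lo) * (rank + 1) / (n + 1)

# 1000 salaries known to lie in [30000, 200000]: the median ciphertext's
# plaintext is estimated without decrypting anything.
print(estimate_from_rank(499, 1000, 30000, 200000))
```

The more skewed the data, the worse this naive estimator gets, but an adversary with auxiliary distributional knowledge can substitute a better quantile model in the same way.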

We present the R̂-tree, a hierarchical encrypted index that can be placed securely in the cloud and searched efficiently. It is based on a mechanism we design for encrypted halfspace range queries in ℝ^d, using Asymmetric Scalar-product Preserving Encryption (ASPE). Data owners can tune the R̂-tree parameters to achieve desired security-efficiency tradeoffs. We also present extensive experiments to evaluate R̂-tree performance. Our results show that R̂-tree queries are efficient on encrypted databases, and reveal far less information than competing methods.

In the second part, we propose a new query obfuscation scheme to protect user privacy in keyword search. Text-based search queries reveal user intent to the search engine, compromising privacy. Topical Intent Obfuscation (TIO) is a promising new approach to preserving user privacy. TIO masks topical intent by mixing real user queries with dummy queries matching various other topics. Dummy queries are generated by a Dummy Query Generation Algorithm (DGA).
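The halfspace mechanism can be illustrated with a minimal ASPE-style sketch (the matrix, dimensions, and helper names below are ours, not the dissertation's construction): a tuple p is encrypted as M^T p and a query vector w as M^{-1} w for a secret invertible matrix M, so (M^T p)·(M^{-1} w) = p^T (M M^{-1}) w = p·w, and the halfspace test w·x > b can be evaluated on ciphertexts alone.

```python
def mat_vec(M, v):
    """Multiply matrix M by column vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Secret key: an invertible matrix and its inverse (hardcoded 2x2 for brevity;
# a real deployment would use a random high-dimensional invertible matrix).
M     = [[1.0, 2.0], [0.0, 1.0]]
M_inv = [[1.0, -2.0], [0.0, 1.0]]

def encrypt_tuple(p):   # p_hat = M^T p
    return mat_vec(transpose(M), p)

def encrypt_query(w):   # w_hat = M^{-1} w
    return mat_vec(M_inv, w)

# Scalar products survive encryption, so the server can test w . p > b
# without seeing p or w in the clear.
p, w, b = [3.0, 4.0], [1.0, -2.0], -6.0
print(dot(p, w))                                    # -5.0
print(dot(encrypt_tuple(p), encrypt_query(w)))      # -5.0
print(dot(encrypt_tuple(p), encrypt_query(w)) > b)  # True: p lies in the halfspace
```

Because halfspace membership can be decided on ciphertexts, a server can traverse a hierarchical index of bounding regions without decrypting it; this is the kind of primitive on which an encrypted tree index can be built.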

We demonstrate various shortcomings in current TIO schemes, and show how to correct them. Current schemes assume that DGA details are unknown to the adversary. We argue that this assumption is flawed, and show how knowledge of DGA details can be used to construct efficient attacks on TIO schemes, using an iterative DGA as an example. Our extensive experiments on real data sets show that our attacks can flag up to 80% of dummy queries. We also propose HDGA, a new DGA that we prove immune to the DGA-semantics-based attacks we describe.
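The mixing step these attacks target can be sketched as follows (the topic pools, function names, and batch size are illustrative; the dissertation's DGAs generate dummy queries rather than drawing them from fixed lists):

```python
import random

# Hypothetical topic -> dummy-query pools (illustrative only; a real DGA
# generates fresh, fluent queries per topic rather than sampling a fixed list).
DUMMY_POOLS = {
    "sports":  ["playoff schedule tonight", "best marathon training plan"],
    "finance": ["index fund expense ratios", "mortgage refinance rates"],
    "travel":  ["cheap flights to denver", "carry on size limits"],
}

def obfuscate(real_query, k=3, rng=random):
    """Mix the real query with k dummy queries from k distinct other topics,
    shuffled so that position reveals nothing about which query is real."""
    topics = rng.sample(list(DUMMY_POOLS), k)
    batch = [rng.choice(DUMMY_POOLS[t]) for t in topics] + [real_query]
    rng.shuffle(batch)
    return batch

batch = obfuscate("symptoms of gluten intolerance")
print(len(batch))   # 4: one real query hidden among three dummy topics
```

An adversary who knows how the dummies are generated can test each query in a batch against the generator's signature, which is what the up-to-80% flagging rate above exploits; this motivates designing the DGA itself to resist such semantic analysis, as HDGA does.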
