Intelligent voice assistants and other microphone-enabled Internet of Things devices are increasingly popular but may pose grave privacy risks. As more technologies take advantage of continuous audio access, new products may emerge that passively listen to and actively analyze all conversations they hear, instead of only processing commands that follow their wake words. The research in this dissertation investigates how to develop a privacy permission system for these devices. It takes a user-centered perspective on this problem and uses empirical methods to study the research questions that can guide the design of such privacy controls:
- What are people's expectations about what information needs to be protected and in what context?
- Which privacy-enhancing techniques could be feasibly applied to limit the devices' listening? How do people perceive their trade-offs and acceptability?
- Which interfaces and affordances would allow users to express their privacy preferences and explore the implications of their choices?
This dissertation explores these questions through a combination of surveys, user studies, and prototype evaluations. Major conclusions include:
- People exhibit nuanced and heterogeneous preferences, notably in relation to other members of their households, and are especially wary of undisclosed data flows to third parties. They are most protective of financial data and other information that can cause them harm.
- Block-listing and filtering approaches may be most feasible to implement. In combination with existing techniques and privacy-friendly design choices, they can address immediate user requirements. However, more complex privacy needs must be addressed with content-based controls, which require additional research in privacy and natural language understanding.
- People appreciate the control that install-time and runtime permissions provide over their own data. However, both approaches pose user-experience challenges. Transparency-based approaches may be comparatively frictionless.