Deep-GUI: Towards Platform-Independent UI Input Generation with Deep Reinforcement Learning
- Author(s): YazdaniBanafsheDaragh, Faraz
- Advisor(s): Malek, Sam
Although many Android input generation tools with different paradigms have been proposed, many of them fail to surpass even the simplest form of testing, i.e., random testing, in terms of coverage. Moreover, almost all of these tools make specific assumptions about the environment under test. For instance, they require an XML encoding of the UI elements, or access to the source code for static analysis. This, however, is not always possible, e.g., when an application is simply a wrapper that uses a web-view to show content, or when the source code is not available. Furthermore, these assumptions prevent such tools from being applied to other platforms, such as web or iOS. In other words, unless a testing tool is truly black-box and platform-independent, its applicability is greatly compromised.
In this work, we propose Deep-GUI as a first effort towards fully cross-platform and black-box automated input generation. Using the power of deep learning, Deep-GUI learns which interactions are valid given only the application's screenshots, and therefore needs no implementation-specific information about the application under test. Moreover, since the data collection, training, and inference processes are performed independently of the platform, Deep-GUI can be used on other platforms as well. We implement Monkey++, an extension of Google Monkey that uses Deep-GUI, and show its effectiveness over Google Monkey in crawling Android applications. Furthermore, we provide evidence of Deep-GUI's ability to operate across platforms without retraining, and explore future directions in which the idea behind Deep-GUI could give rise to the next generation of automated input generation tools.
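To make the core idea concrete, the following is a minimal, hypothetical sketch of a pixels-only action-selection loop in the spirit of Deep-GUI: candidate gestures are scored directly from raw screenshot pixels, with no access to an XML UI hierarchy or source code. The `score_actions` function, the patch-based feature, and the constant weight are all placeholder assumptions standing in for a trained deep model; they are not the thesis's actual architecture.

```python
import numpy as np

# Hypothetical sketch (not the actual Deep-GUI model): score candidate
# tap locations using only raw screenshot pixels. A trained deep network
# would replace the crude patch-mean feature used here.

def score_actions(screenshot, candidate_taps, weight):
    """Return one score per candidate tap (x, y), from pixels alone."""
    h, w, _ = screenshot.shape
    scores = []
    for (x, y) in candidate_taps:
        # Crop a small patch around the candidate point, clamped to bounds.
        x0, x1 = max(0, x - 8), min(w, x + 8)
        y0, y1 = max(0, y - 8), min(h, y + 8)
        patch_mean = screenshot[y0:y1, x0:x1].mean()
        scores.append(float(patch_mean * weight))
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    screen = rng.random((64, 64, 3))          # stand-in 64x64 RGB screenshot
    taps = [(10, 10), (32, 32), (55, 20)]     # candidate gesture targets
    scores = score_actions(screen, taps, weight=1.0)
    best = taps[int(np.argmax(scores))]       # gesture the agent would issue
    print(best)
```

Because the loop consumes nothing but pixels, the same scoring model can in principle drive input generation on Android, web, or iOS, which is the platform-independence argument above.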