When Life Gives You LMs, Probe Them for Knowledge With Automatically Generated Prompts
- Author(s): Shin, Taylor, et al.
- Advisor(s): Singh, Sameer
Determining the knowledge captured by pretrained language models is an important challenge, commonly tackled by probing model representations with classifiers. However, it is difficult to design probes for semantic knowledge such as facts. Reformulating these semantic tasks as cloze tests (i.e., fill-in-the-blank problems) is a promising way to probe such knowledge, but it requires manually crafting textual prompts that elicit the desired knowledge, limiting its use. In this paper, we develop an automated, task-agnostic method that creates cloze prompts for any classification task using a gradient-guided search. Applying our technique, we find prompts demonstrating that masked language models (MLMs) have an inherent capability to perform fact retrieval and relation extraction. We show that our prompts elicit more accurate factual knowledge from MLMs than manual prompts, and further, that MLMs prompted in this way can serve as relation extractors out of the box, more effectively than recent supervised relation extraction models.
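To give a flavor of the gradient-guided search mentioned above, the following is a minimal sketch of one candidate-selection step for a single prompt token, in the HotFlip style of first-order token substitution. All names here (the toy embedding matrix `E`, the trigger id, the gradient `g`) are illustrative assumptions, not the paper's actual implementation: in practice the gradient comes from backpropagating the task loss through a frozen MLM.

```python
import numpy as np

# Toy setup: a frozen embedding matrix and a given loss gradient
# with respect to the current trigger token's embedding.
rng = np.random.default_rng(0)
vocab_size, dim = 50, 8
E = rng.normal(size=(vocab_size, dim))   # frozen embedding matrix (hypothetical)
trigger = 3                              # id of the current trigger token
g = rng.normal(size=dim)                 # dLoss/d e_trigger (assumed given)

def candidate_tokens(E, trigger, g, k=5):
    """Rank replacement tokens by a first-order estimate of the
    change in loss if the trigger token were swapped for them."""
    # Linear approximation: Loss(w') - Loss(w) ~ (e_w' - e_w) . g
    delta = (E - E[trigger]) @ g
    # Most negative delta = largest estimated loss decrease.
    return np.argsort(delta)[:k]

cands = candidate_tokens(E, trigger, g)
```

In a full search, each of the top-k candidates would be re-evaluated with a forward pass through the MLM, the best one kept, and the procedure iterated over every trigger position until the prompt converges.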