Sign languages are visual-spatial languages, articulated not only with the hands but also with facial expressions and other parts of the body. Facial expressions, in particular, play important grammatical roles, such as marking whether a sentence is a question and differentiating the meaning of a sign when acting as a quantifier. Despite their importance, technologies for recognizing and generating sign language focus mainly on manual signs, ignoring the information carried by facial expressions. In this work, we focus on collecting and annotating data to evaluate the role of facial expressions in communicating adjectives. We study how facial muscle activity expresses the intensity of manual signs and show how modeling facial action units changes the overall meaning of a sentence. We also explore how facial expressions enable a deeper understanding of sign language when included in the design of machine learning models.