
Searching for Argument-Counterargument Relationships in Vector Embedding Spaces

Abstract

Vector embedding spaces are representational structures that can capture both the similarity between items and various other semantic relationships. Current state-of-the-art embedding models can generate embedding vectors for individual words and for longer strings of text, enabling vector spaces to encode the similarity between entire documents. We investigated whether semantic relationships besides similarity are represented in three such embedding spaces, focusing on the relationship between arguments and counterarguments as a specific example. While no linear subspace captured the semantic relationship between an argument and its counterargument, we found that neural networks with a single hidden layer could partially learn the transformation between an argument's embedding and the corresponding counterargument's embedding in all three spaces. The trained models generalized across three different datasets of arguments, suggesting that these partially learned transformations apply to arguments and counterarguments in general rather than being tied to the semantic context of the models' training dataset. This approach has practical applications in designing information retrieval systems for intelligent agents and, potentially, in models of cognition that use vector embedding spaces as a representational structure.
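
The two tests described above lend themselves to a short sketch. The code below is a minimal, hypothetical illustration, not the paper's implementation: the random arrays stand in for real argument/counterargument embedding pairs, and the pair count, embedding dimension, and hidden-layer size are placeholder assumptions. Test 1 checks whether a single mean offset vector (a linear relationship) maps arguments onto their counterarguments; Test 2 trains a network with one hidden layer to learn the transformation.

```python
# Minimal sketch of the two tests, under assumed details. The paper's actual
# datasets, embedding models, and hyperparameters are not specified in the
# abstract; `arg_emb` and `counter_emb` are stand-ins for precomputed
# embeddings, one row per argument/counterargument pair.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pairs, dim = 1000, 384                       # placeholder sizes
arg_emb = rng.normal(size=(n_pairs, dim))      # stand-in argument embeddings
counter_emb = rng.normal(size=(n_pairs, dim))  # stand-in counterargument embeddings

X_train, X_test, y_train, y_test = train_test_split(
    arg_emb, counter_emb, test_size=0.2, random_state=0)

def mean_cosine(pred, true):
    """Average cosine similarity between predicted and true embeddings."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    true = true / np.linalg.norm(true, axis=1, keepdims=True)
    return float(np.mean(np.sum(pred * true, axis=1)))

# Test 1: a single linear offset. If the argument-counterargument relationship
# lived in a linear subspace, adding the mean offset vector to an argument's
# embedding would land near its counterargument's embedding.
offset = (y_train - X_train).mean(axis=0)
print("linear offset:", mean_cosine(X_test + offset, y_test))

# Test 2: a neural network with a single hidden layer, trained to map each
# argument's embedding to the corresponding counterargument's embedding.
mlp = MLPRegressor(hidden_layer_sizes=(512,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("one-hidden-layer MLP:", mean_cosine(mlp.predict(X_test), y_test))
```

On real embedding pairs, the abstract's finding corresponds to Test 1 scoring poorly while Test 2 recovers part of the transformation; with the random stand-in data here, neither test will show meaningful structure.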
