A knowledge graph represents factual information in the form of a graph, where nodes represent real-world entities such as people, places, and movies, and edges represent the relationships between these entities. Existing knowledge graphs are far from complete. Knowledge graph completion, or link prediction, refers to the task of predicting new relations (links) between entities by deriving information from the existing relations. A number of link prediction models have been proposed, several of which make probabilistic predictions about new links. These models can be rule-based methods derived from observed edges, embedding methods based on latent representations, or a combination of both. To fully model the data, these methods must capture different kinds of relational patterns, such as symmetry or inversion. Rule-based methods learn these patterns explicitly and provide an interpretable approach to predicting new edges. With embedding-based models, however, the latent nature of the embeddings makes it difficult to understand what the models capture. In this work, we explore the logical inference capabilities of knowledge graph embedding models. We experiment with various knowledge graph embedding models on synthetic datasets to identify specific properties of each model. The objective is to empirically validate the suitability of these models for learning the different relational patterns that exist in real-world knowledge graphs.
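To make these patterns concrete: a relation r is symmetric when r(x, y) implies r(y, x), and two relations r1, r2 form an inversion pair when r1(x, y) implies r2(y, x). The sketch below is a minimal, hypothetical NumPy illustration of this idea, not the experimental setup of this work: it generates synthetic triples exhibiting both patterns and shows that a RotatE-style scoring function can represent them by construction, since a rotation by pi is its own inverse (symmetry) and opposite rotations undo each other (inversion). All relation names, entity counts, and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, dim = 50, 16

# Synthetic triples: "married_to" is symmetric (r(x,y) -> r(y,x));
# "parent_of" / "child_of" form an inversion pair (r1(x,y) -> r2(y,x)).
# These relation names are hypothetical, chosen only for illustration.
triples = []
for h, t in rng.choice(n_entities, size=(100, 2)):
    if h == t:
        continue
    triples.append((h, "married_to", t))
    triples.append((t, "married_to", h))   # symmetry
    triples.append((h, "parent_of", t))
    triples.append((t, "child_of", h))     # inversion

# RotatE scores a triple by rotating the complex head embedding with a
# relation-specific phase and measuring its distance to the tail embedding.
entity_emb = rng.normal(size=(n_entities, dim)) + 1j * rng.normal(size=(n_entities, dim))
phase = {
    "married_to": np.pi * np.ones(dim),          # rotation by pi is self-inverse -> symmetry
    "parent_of": rng.uniform(0, 2 * np.pi, dim), # arbitrary rotation
}
phase["child_of"] = -phase["parent_of"]          # opposite rotation -> inversion

def score(h, r, t):
    """Negative distance after rotating h by the relation phase (higher = more plausible)."""
    return -np.linalg.norm(entity_emb[h] * np.exp(1j * phase[r]) - entity_emb[t])

# With a pi rotation, the score is invariant to swapping head and tail,
# and the inverse rotation recovers the forward score exactly.
h, t = 3, 7
print(np.isclose(score(h, "married_to", t), score(t, "married_to", h)))  # True
print(np.isclose(score(h, "parent_of", t), score(t, "child_of", h)))     # True
```

Whether a given embedding model can express such patterns in principle, and whether it actually learns them from training data, are separate questions; the synthetic-dataset experiments in this work target the latter.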