The field of machine learning has grown tremendously in the past decade. It is used across many industries, with applications ranging from high-tech security systems to medical diagnosis. To train their models, technology companies aggregate vast amounts of data from their users and run the training in large data centers. In an era where the means of machine intelligence are concentrated in the hands of a few and data privacy rights are continuously breached, the need for a more democratic and privacy-preserving approach to machine learning is pressing. Fortunately, advances in mobile computing are gradually moving computation from the cloud to the devices. Decentralized learning reinforces these advances by enabling training on decentralized data: users train models on their own local datasets and share the acquired knowledge with each other, typically by exchanging model parameters rather than raw data. This reduces the need for data sharing and leaves a degree of freedom for model personalization. Consequently, research on decentralized learning has gained pace. In this thesis, we explore decentralized learning, analyze it from a Bayesian perspective, explain the relevance of continual learning, and present empirical results for a promising decentralized learning method.
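To make the local-training-plus-knowledge-sharing idea concrete, the following minimal sketch simulates one common instantiation, gossip-style decentralized learning, in which each node takes a gradient step on its private data and then averages its parameters with its neighbors'. This is an illustrative toy, not the specific method evaluated in this thesis; the ring topology, least-squares objective, learning rate, and all names are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of gossip-style decentralized learning (an assumption,
# not the thesis's method). Each node holds a private dataset and a local
# linear model; nodes never exchange raw data, only model parameters.

rng = np.random.default_rng(0)

def local_gradient_step(w, X, y, lr=0.1):
    """One gradient step on the local least-squares loss ||Xw - y||^2 / n."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Three nodes on a ring topology: each node's neighbors are the adjacent nodes.
n_nodes, dim = 3, 5
datasets = [(rng.normal(size=(20, dim)), rng.normal(size=20))
            for _ in range(n_nodes)]
weights = [np.zeros(dim) for _ in range(n_nodes)]
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

for _ in range(50):
    # 1) Local training: each node updates its model on its own data only.
    weights = [local_gradient_step(w, X, y)
               for w, (X, y) in zip(weights, datasets)]
    # 2) Knowledge sharing: each node averages its parameters with neighbors'.
    weights = [
        np.mean([weights[i]] + [weights[j] for j in neighbors[i]], axis=0)
        for i in range(n_nodes)
    ]

# Repeated gossip rounds drive the local models toward consensus.
print("max parameter disagreement after 50 rounds:",
      max(np.linalg.norm(weights[i] - weights[0]) for i in range(n_nodes)))
```

Uniform neighbor averaging is only one design choice; other mixing weights or communication topologies trade off convergence speed against communication cost and the degree of personalization each node retains.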