Recent advances in deep learning and cloud computing enable innovations that benefit both physicians and patients. This dissertation leverages these advances to create a cloud-based analysis platform on which physicians can analyze cardiac MRI, as well as a four-tier outcome prediction machine learning model for COVID-19 patients based on their chest X-rays and metadata. The MRI analysis website is hosted on the American Heart Association's (AHA) Precision Medicine Platform (PMP) and integrates the cardiac MRI segmentation model of Karimi-Bidhendi et al. [2]. The back-end web framework was built with Python and Django, with MySQL as the database manager, providing a flexible, reliable foundation for the website as well as strong support from the AHA. The website includes an automatic end-systolic (ES) and end-diastolic (ED) detection system for each ventricle, which lets physicians upload patients' MRI DICOM files without manually selecting the files corresponding to each cardiac phase of each ventricle. Hundreds of files are processed in seconds, and a report of all segmented ED- and ES-phase images for each ventricle, together with the associated ventricular volumes, is presented immediately after processing.

For the COVID-19 outcome prediction model, 6,259 chest X-ray images from 1,771 patients seen at the UCI and UCLA Medical Centers were used to train two VGG16 models and a CheXNet model. The first VGG16 model is a convolutional neural network (CNN) that processes only the chest X-ray images; the second combines a CNN for the images, a separate deep neural network (DNN) for patient metadata such as age and BMI, and a third DNN that processes the combined output of the CNN and the metadata DNN. This combination allows both the images and the metadata to inform the model during training.
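The two-branch design described above follows a standard late-fusion pattern: the image branch reduces each X-ray to a feature vector, the metadata branch embeds the tabular inputs, and a final network maps their concatenation to the four outcome tiers. The following NumPy forward-pass sketch illustrates the idea only; all dimensions (a 512-dim image feature, a 32-dim metadata embedding) and the random weights are illustrative assumptions, not the dissertation's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, w, b):
    """A single fully connected layer followed by ReLU."""
    return np.maximum(0.0, x @ w + b)

# Stand-ins for the two branches (shapes are hypothetical):
# the CNN branch would reduce a chest X-ray to a 512-dim feature vector,
# and the metadata branch embeds inputs such as [age, BMI].
cnn_features = rng.normal(size=(1, 512))   # placeholder for VGG16 output
metadata = rng.normal(size=(1, 2))         # e.g. normalized [age, BMI]

meta_hidden = dense_relu(metadata, rng.normal(size=(2, 32)), np.zeros(32))

# Late fusion: concatenate both branches, then a final DNN produces
# logits over the four outcome tiers, softmaxed into probabilities.
fused = np.concatenate([cnn_features, meta_hidden], axis=1)
logits = fused @ rng.normal(size=(544, 4)) + np.zeros(4)
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()

print(fused.shape, probs.shape)  # (1, 544) (1, 4)
```

In a real implementation the stand-in vectors would be replaced by trained VGG16 convolutional features and a learned metadata embedding, with all weights fit jointly by backpropagation.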
The CheXNet model is tailored specifically to chest X-ray images and served as a benchmark for the VGG16 models. The image-only VGG16 model achieved 56% accuracy on the four-class prediction, compared with 59% for the image-and-metadata VGG16 model; the CheXNet model achieved 60%. These results suggest that the metadata did not significantly improve the model's performance, and that the image data alone was not informative enough to push four-tier outcome prediction for COVID-19 patients beyond roughly 60% accuracy.