Eye of Aurora
Abstract
Visual impairment is a pressing issue worldwide that many people are working to address. Our project aims to help visually impaired people know what is happening around them. To that end, we built a pair of camera glasses that describe the scene in front of the user through audio. The descriptive sentences are generated by deep learning: we trained our own image captioning models and found that VGG19 as the neural network, trained on the Flickr8k dataset, performed best. The hardware includes a camera, an earphone, a glasses frame, an LCD touch screen, batteries, and a Raspberry Pi 4. With our project, visually impaired people can truly “see” and engage with the world around them.