Eye of Aurora
Abstract
Visual impairment is a widespread issue that many people are working to address. Our project aims to help visually impaired people know what is happening around them. To that end, we built a pair of camera glasses that describe the scene in front of the user through sound. The descriptive sentences are generated by deep learning: we trained our own image captioning models and found that VGG19 as the neural network, paired with the Flickr8k dataset, performed best. The hardware includes a camera, an earphone, a glasses frame, an LCD touch screen, batteries, and a Raspberry Pi 4. With our project, visually impaired people can truly "see" and engage with the world around them.
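The abstract describes a camera-to-speech pipeline: a frame from the camera is encoded by VGG19, a caption is decoded word by word, and the sentence is played through the earphone. Below is a minimal sketch of that flow, assuming a standard merge-style Keras captioning setup; the decoder file `caption_decoder.h5`, the pickled tokenizer, the maximum caption length, and the use of the `espeak` command for text-to-speech are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the camera-to-speech pipeline (not the authors' exact code).
import pickle
import subprocess

import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.preprocessing.sequence import pad_sequences

# VGG19 without its classification head: the fc2 layer yields a
# 4096-dimensional feature vector for each captured frame.
base = VGG19(weights="imagenet")
encoder = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

decoder = load_model("caption_decoder.h5")            # assumed trained caption decoder
tokenizer = pickle.load(open("tokenizer.pkl", "rb"))  # assumed fitted tokenizer
MAX_LEN = 34                                          # assumed max caption length for Flickr8k

def extract_features(path: str) -> np.ndarray:
    """Run one camera frame through VGG19 and return its fc2 features."""
    img = img_to_array(load_img(path, target_size=(224, 224)))
    return encoder.predict(preprocess_input(img[np.newaxis]), verbose=0)

def generate_caption(features: np.ndarray) -> str:
    """Greedy word-by-word decoding from the start token until the end token."""
    text = "startseq"
    for _ in range(MAX_LEN):
        seq = pad_sequences(tokenizer.texts_to_sequences([text]), maxlen=MAX_LEN)
        probs = decoder.predict([features, seq], verbose=0)
        word = tokenizer.index_word.get(int(np.argmax(probs)))
        if word is None or word == "endseq":
            break
        text += " " + word
    return text.replace("startseq", "").strip()

def speak(sentence: str) -> None:
    """Play the caption through the earphone via the espeak CLI on the Raspberry Pi."""
    subprocess.run(["espeak", sentence], check=True)

if __name__ == "__main__":
    speak(generate_caption(extract_features("frame.jpg")))
```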