Sign language is distinct from spoken language and has no standard written form. However, the vast majority of communication technologies are designed for spoken or written languages, which excludes sign languages. Most people do not know sign language, so deaf sign language users face many communication barriers. Sign language processing can help break down these barriers. This project trains a machine learning model for American Sign Language (ASL) recognition.
This project performs real-time object detection with TensorFlow.js, using the camera to interpret sign language. The interface is built with ReactJS, with the camera input displayed at the frame rate of the display. Pre-trained Python models are stored in AWS S3 and fetched from the ReactJS app.
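To sketch how detections from a model can be drawn over the camera feed, the helpers below convert a normalized bounding box (the `[ymin, xmin, ymax, xmax]` format many TensorFlow.js detection models emit) to canvas pixel coordinates and filter detections by confidence. The function names and the 0.5 threshold are illustrative assumptions, not part of the project's actual code:

```javascript
// Hypothetical helper: convert a normalized bounding box
// [ymin, xmin, ymax, xmax] (values in 0-1) into pixel coordinates
// for drawing on a canvas overlaid on the video element.
function toPixelBox([ymin, xmin, ymax, xmax], width, height) {
  return {
    x: xmin * width,
    y: ymin * height,
    w: (xmax - xmin) * width,
    h: (ymax - ymin) * height,
  };
}

// Keep only detections whose confidence score (0-1) clears a threshold
// before drawing boxes; 0.5 is an assumed default.
function filterDetections(boxes, scores, threshold = 0.5) {
  return boxes
    .map((box, i) => ({ box, score: scores[i] }))
    .filter((d) => d.score >= threshold);
}
```

In a detection loop, each surviving box would be passed through `toPixelBox` with the video's width and height and then drawn with the canvas 2D API.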
- Identify hand gestures within American Sign Language.
- Develop a visual recognition program for ASL gestures and words.
- Enable people with disabilities to communicate using sign language; the program should recognize signs and capture the corresponding words.
Installation is simple: clone the repo, cd into the "ReactComputerVisionTemplate.main" directory, and run the following commands in order to get started:
```
npm i
npm start
```
- Real-time object detection.
- Object tracking window.
- Detection certainty level (0-100).
- AWS S3 object storage.
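The certainty feature above can be illustrated by reading off the model's most confident class and reporting its score on a 0-100 scale. The label list below is a placeholder; the real class names depend on the pre-trained model fetched from S3:

```javascript
// Hypothetical label set; the actual classes come from the
// pre-trained model stored in AWS S3.
const LABELS = ["hello", "thanks", "yes", "no", "iloveyou"];

// Pick the highest-scoring class from an array of per-class
// confidence scores (0-1) and report its certainty as 0-100.
function bestPrediction(scores, labels = LABELS) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return { label: labels[best], certainty: Math.round(scores[best] * 100) };
}
```

A detection loop would call this on each frame's score array and display the returned label and certainty next to the tracking box.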