iOS application to help the visually impaired traverse outdoor environments. The app uses a DeepLabV3+MobileNetV2 model trained on the Cityscapes dataset for semantic segmentation inference. This application is based on and inspired by blindassist-ios.
Currently, the app works only in the simulator.
- Run `TheiaModel/app.py`. The server is launched on `localhost`.
- Launch Theia in the simulator.
- Add a camera module in place of the photo picker.
- Run the model on-device using CoreML.
- Incorporate ARKit for depth data.
- Improve the algorithm that predicts the next action to perform.
- Expand to diverse environments beyond traffic/road areas.
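The next-action item above could be prototyped directly on top of the segmentation output. The sketch below is purely illustrative and is not the app's actual algorithm: it assumes Cityscapes train IDs where road (0) and sidewalk (1) count as walkable, and it picks a coarse action by comparing walkable coverage across the left, centre, and right of the bottom third of the mask (the region closest to the user).

```python
from collections import Counter

# Illustrative assumption: Cityscapes train IDs road=0 and sidewalk=1
# are "walkable"; every other class is treated as a potential obstacle.
WALKABLE = {0, 1}

def suggest_action(mask):
    """Pick a coarse action from a 2D segmentation mask (list of rows of class IDs).

    Only the bottom third of the frame is considered, split into
    left/centre/right thirds; the action favours the most walkable third.
    """
    h = len(mask)
    w = len(mask[0])
    bottom = mask[2 * h // 3:]          # rows closest to the user
    thirds = [0.0, 0.0, 0.0]
    for row in bottom:
        for x, cls in enumerate(row):
            if cls in WALKABLE:
                thirds[min(3 * x // w, 2)] += 1
    total = len(bottom) * (w / 3)       # pixels per horizontal third
    left, centre, right = (t / total for t in thirds)
    if centre > 0.5:
        return "proceed"
    if max(left, right) > 0.5:
        return "veer left" if left > right else "veer right"
    return "stop"
```

In the real app this decision would presumably be smoothed over several frames and combined with depth data rather than taken from a single mask.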