Framework : PyTorch, MediaPipe
This repository contains all the code (model, make_dataset, etc.), so you can customize it.
- conda create -n Gesture
- conda activate Gesture
- pip install -r requirements.txt
├── development_files_for_reference
│ ├── CustomDataset.ipynb
│ ├── model.ipynb
│ └── validation
│
├── images
│ └── model.png
│
├── main_data
│ ├── data_gesture_Left.csv
│ ├── data_gesture_Right.csv
│ ├── data_gesture_Turn Anticlockwise.csv
│ └── data_gesture_Turn Clockwise.csv
│
├── main_files
│ ├── CustomDataset.py
│ ├── make_dataset.ipynb
│ └── model.py
│
├── main_models
│ ├── model_dict().pt
│ └── model.pt
│
├── test.ipynb
└── train.ipynb
development_files_for_reference : Miscellaneous files used during development (feel free to delete)
main_data : Path where the collected data is stored. We collected left, right, clockwise, and counterclockwise gesture data.
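The exact CSV schema is not documented here, but MediaPipe Hands reports 21 landmarks per detected hand, so a common layout stores each frame as 63 flattened (x, y, z) values plus a class label. A minimal sketch of that layout (the `flatten_landmarks` helper and the column order are assumptions, not the repository's actual code):

```python
# Sketch: flatten one frame of MediaPipe hand landmarks into a CSV row.
# MediaPipe Hands yields 21 landmarks per hand, each with x, y, z coordinates.
# The exact column layout of this repository's CSVs is an assumption here.

def flatten_landmarks(landmarks, class_num):
    """landmarks: list of 21 (x, y, z) tuples -> flat row of 63 floats + label."""
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks, got %d" % len(landmarks))
    row = []
    for x, y, z in landmarks:
        row.extend([x, y, z])
    row.append(class_num)  # last column: gesture class label
    return row
```

One row would then hold 64 values: 63 coordinates followed by the label.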
main_files :
- CustomDataset.py : Implements a custom dataset/DataLoader as a PyTorch class.
- make_dataset.ipynb : Used to collect data. You must change `class_num` when gathering a different gesture class.
- model.py : Implements the CNN-LSTM model as a class.
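model.py implements the CNN-LSTM as a class, but its layer sizes are not shown in this README. The following is only a hedged sketch of that pattern: a 1-D convolution over the per-frame landmark features, an LSTM over time, and a linear classifier for the four gestures. All dimensions (63 features, 128 conv channels, hidden size 64) are assumptions, not the repository's actual values.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of a CNN-LSTM gesture classifier (dimensions are illustrative)."""

    def __init__(self, n_features=63, n_classes=4, hidden=64):
        super().__init__()
        # 1-D convolution over the time axis, with landmark features as channels
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 128, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(128, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, seq_len, 128)
        out, _ = self.lstm(x)                             # (batch, seq_len, hidden)
        return self.fc(out[:, -1])                        # logits from last time step
```

A forward pass on a batch of 30-frame sequences would return one logit vector per gesture class.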
main_models : Path where trained models are saved
test.ipynb : Code used to test the model
train.ipynb : Code used to train the model
- MediaPipe requires Python 3.8 - 3.11, so beware of version conflicts! (If you used requirements.txt, there is no need to worry.)
- If you use a public server rather than a local computer, the code will fail because no camera is attached.
- So data collection and testing were done on a local computer with a camera attached; only the training process ran on the school server.
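A quick stdlib check for the supported interpreter range can catch this before installation; the helper name below is illustrative:

```python
import sys

def mediapipe_python_ok(version_info=sys.version_info):
    """MediaPipe supports Python 3.8 through 3.11 (per the note above)."""
    return (3, 8) <= (version_info[0], version_info[1]) <= (3, 11)

if not mediapipe_python_ok():
    print("Warning: MediaPipe needs Python 3.8-3.11; found %d.%d"
          % sys.version_info[:2])
```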
- Early stopping was not applied here.
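Since early stopping is not applied, one could add a small patience-based stopper to train.ipynb. A minimal sketch (the class name and default thresholds are assumptions, not part of this repository):

```python
class EarlyStopper:
    """Stop training when validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Call once per epoch with the validation loss; True means stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a training loop this would be called after each validation pass, breaking out when it returns True.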