Based on a dataset from Figure Eight containing messages received during emergencies, each classified by the best response to the message. The dataset contains a total of 36 different categories, and each message can belong to several of them.
This project attempts to recognize the set of categories of each message, making the process of sending the proper help (response) more efficient in both time and resources.
This is achieved by leveraging machine learning: the project uses a Random Forest classifier to tackle the problem, reaching an accuracy as high as 80%.
The process is divided into 3 main parts:
- Data processing
Clean the data and prepare the categories so they can be consumed by the machine learning algorithms.
You can see the process in detail in the ETL notebook.
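As a rough sketch of the cleaning step (the `name-value;name-value` string format and the example labels are assumptions about the Figure Eight export, not verified against this dataset), each message's raw category string gets split into binary columns:

```python
def parse_categories(raw: str) -> dict:
    """Split a 'name-value;name-value;...' string into {name: 0 or 1}."""
    result = {}
    for pair in raw.split(";"):
        name, _, value = pair.rpartition("-")
        # clamp to binary: some exports contain stray values like 2
        result[name] = min(int(value), 1)
    return result

# hypothetical example row
print(parse_categories("related-1;request-0;offer-0"))
# → {'related': 1, 'request': 0, 'offer': 0}
```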
- Model training
This is where the magic happens: all the data is passed through a pipeline that builds the prediction model.
You can see the process in detail in the ML notebook.
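As a sketch of what such a pipeline can look like (TF-IDF features feeding one Random Forest per category via scikit-learn's MultiOutputClassifier; the parameter values here are illustrative, not the project's tuned settings):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline

def build_pipeline() -> Pipeline:
    # TfidfVectorizer turns raw messages into term-weight vectors;
    # MultiOutputClassifier fits one Random Forest per category,
    # which handles the multi-label setting (36 categories per message).
    return Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("clf", MultiOutputClassifier(
            RandomForestClassifier(n_estimators=10, random_state=0))),
    ])
```

Calling `build_pipeline().fit(messages, labels)` with `labels` as an n_samples x n_categories binary matrix trains the whole chain in one step.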
- Visualization and Prediction
This is the final part of the project: a web app where you can classify new sentences and analyse the results.
This project was made using the Pipenv tool, but a requirements.txt is also included.
To use Pipenv, run the following command:
pipenv install
Alternatively, run the following command inside your virtualenv:
pip install -r requirements.txt
Run the following commands in the project's root directory to set up the database and model, and start the server:
pipenv run ./run.sh
or, inside your virtualenv:
./run.sh
Go to http://0.0.0.0:3001/
Here is how to run each module of this project.
To generate the database, run the following command:
python data/process_data.py data/disaster_messages.csv data/disaster_categories.csv data/disaster_response.db
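After the script finishes, you can sanity-check the generated database with Python's built-in sqlite3 module (the exact table name the script creates is not stated here, so this query simply lists whatever tables exist):

```python
import sqlite3

def list_tables(db_path: str) -> list:
    """Return the names of all tables in a SQLite database file."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]
```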
To create the model, run the following command (this can take up to 20 minutes depending on the system used):
python models/train_classifier.py data/disaster_response.db models/model.pkl
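The `.pkl` extension suggests the model is serialized with pickle; a minimal loader sketch (assuming plain pickle rather than joblib) for reusing the trained pipeline elsewhere:

```python
import pickle

def load_model(path: str):
    """Load a pickled object (e.g. the trained pipeline) from disk."""
    with open(path, "rb") as f:
        return pickle.load(f)
```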
Finally, to run the Flask app, run the following command:
python app/run.py
or
cd app
python run.py
Go to http://0.0.0.0:3001/
./app - the web app folder
./app/run.py - Flask web app
./app/templates - HTML templates
./data - contains all the files related to the dataset
./data/disaster_categories.csv - CSV file with all the categories
./data/disaster_messages.csv - CSV file with all the messages
./data/process_data.py - data cleaning and database creation script
./data/disaster_response.db - the SQLite database (generated after process_data.py execution)
./models - contains all the files related to the training of the model
./models/train_classifier.py - model training and dump script
./models/model.pkl - saved model (generated after train_classifier.py execution)
./notebooks - Udacity notebooks used to test the algorithms before implementation