
Emotime

Recognizing emotional states in faces


Authors: Luca Mella, Daniele Bellavista

Contributors: Rohit Krishnan

Development Status: Experimental

Copyleft: CC-BY-NC 2013


Goal

This project aims to recognize the main facial expressions (neutral, anger, disgust, fear, joy, sadness, surprise) in image sequences, using the approaches described in the references below.

References

Listed here is some interesting material about machine learning, OpenCV, Gabor transforms, and other topics that can be useful for getting into this field:

Project Structure

src
  \-->dataset      Scripts for dataset management
  \-->facecrop     Utilities and modules for face cropping and registration
  \-->gaborbank    Utilities and modules for generating Gabor filters and image filtering
  \-->adaboost     Utilities and modules for AdaBoost training, prediction, and feature selection
  \-->svm          Utilities and modules for SVM training and prediction
  \-->detector     Multiclass detector and preprocessor
  \-->utils        String and IO utilities, CSV support, and so on
doc                Documentation (doxygen)
report             Class project report (LaTeX)
resources          Third-party resources (e.g. OpenCV Haar classifiers)
assets             Binary folder (I know, I know, it is not beautiful)
test               Some testing scripts

Build

Dependencies:

  • CMake >= 2.8
  • Python >= 2.7, < 3.0
  • OpenCV >= 2.4.5

Compiling on linux:

  • mkdir build
  • cd build
  • cmake .. ; make ; make install (now the assets folder should be populated)

Cross-compiling for windows:

  • Using CMake or the CMake GUI, select emotime as the source folder and configure.
  • If it complains about the variable OpenCV_DIR, set it to the appropriate path so that:
    • C:/path/to/opencv/dir/ contains the libraries (*.lib)
    • C:/path/to/opencv/dir/include contains the include directories (opencv and opencv2)
    • If the include directory is missing, the project will likely fail to compile due to missing references to opencv2/opencv or similar.
  • Then generate the project and compile it.
  • This was tested with Visual Studio 12, 64-bit.

Detection and Prediction

Proof-of-concept models trained on faces extracted with the cbcl1 detector are available for download, for both the 1-vs-all and many-vs-many multiclass strategies.

NOTE: Trained models for the latest version of the code are available on the v1.2 release page (deprecated). Other trained models that work better with the master branch are available here.

NOTE: watch out for illumination! At the moment, optimal results are obtained in live webcam sessions with direct illumination of the user's face. Don't worry, you are not required to blind yourself with a headlight.

_If you'd like to try emotime without any further complication, take a look at the x86_64 release (obsolete)._

Usage

Video gui:

echo "VIDEOPATH" | ./emotimevideo_cli FACEDETECTORXML (EYEDETECTORXML|none) WIDTH HEIGHT NWIDTHS NLAMBDAS NTHETAS (svm|ada) (TRAINEDCLASSIFIERSXML)+

Cam gui:

./emotimegui_cli FACEDETECTORXML (EYEDETECTORXML|none) WIDTH HEIGHT NWIDTHS NLAMBDAS NTHETAS (svm|ada) (TRAINEDCLASSIFIERSXML)+

Or using the python script:

python gui.py --cfg <dataset_configuration_path> --mode svm --eye-correction <dataset_path>
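
The NWIDTHS, NLAMBDAS and NTHETAS arguments set how many kernel sizes, wavelengths, and orientations the Gabor filter bank contains. For intuition only, here is a minimal sketch of such a bank in Python with OpenCV; the kernel sizes, wavelengths, sigma and gamma below are illustrative assumptions, not the values used in src/gaborbank.

    # Minimal Gabor-bank sketch (parameter ranges are assumptions).
    import math
    import cv2

    def build_gabor_bank(nwidths, nlambdas, nthetas):
        """Build nwidths * nlambdas * nthetas Gabor kernels."""
        bank = []
        for w in range(nwidths):
            ksize = 7 + 6 * w                      # assumed sizes: 7, 13, 19, ...
            for l in range(nlambdas):
                lambd = 4.0 + 2.0 * l              # assumed wavelengths
                for t in range(nthetas):
                    theta = t * math.pi / nthetas  # evenly spaced orientations
                    kern = cv2.getGaborKernel((ksize, ksize), ksize / 3.0,
                                              theta, lambd, 0.5)
                    bank.append(kern)
        return bank

    # Filtering a 48x48 grayscale face yields one response image per kernel;
    # responses like these are what the classifiers are trained on.
    face = cv2.resize(cv2.imread("face.png", 0), (48, 48))  # 0 = grayscale
    responses = [cv2.filter2D(face, cv2.CV_32F, k)
                 for k in build_gabor_bank(3, 5, 4)]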

Binary Release and Trained Models

If you just want to take a quick look at the project, we strongly suggest going to the release section and downloading the compiled binaries for Linux 64-bit, then:

  • download and unzip the binaries in an empty folder
  • run ./download_trained_models.sh
  • then cd assets and run ./emotimegui_cli ../resources/haarcascade_frontalface_cbcl1.xml none 48 48 3 5 4 svm ../dataset_svm_354_cbcl1_1vsallext/classifiers/svm/*

Training

After mkdir build; cd build; cmake ..; make; make install, go to the assets folder and:

  1. Initialize a dataset using:

     python datasetInit.py --cfg <CONFIGFILE> <EMPTY_DATASET_FOLDER>
    
  2. Then fill it with your images or use the Cohn-Kanade importing script:

     python datasetFillCK.py --cfg <CONFIGFILE> <DATASETFOLDER> <CKFOLDER> <CKEMOTIONFOLDER>
    
  3. Now you are ready to train models:

     python train_models.py --cfg <CONFIGFILE> --mode svm --prep-train-mode [1vsall|1vsallext] <DATASETFOLDER>
    

Dataset

The Cohn-Kanade database is one of the most widely used face databases. Its extended version (CK+) also contains FACS code labels (aka Action Units) and emotion labels (neutral, anger, contempt, disgust, fear, happy, sadness, surprise).
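
datasetFillCK.py imports the database automatically; purely for illustration, here is a hedged sketch of walking the CK+ layout to pair each image sequence with its emotion label file (directory names are assumptions based on the standard CK+ distribution).

    # Hedged sketch: pair CK+ image sequences with their emotion labels.
    import os

    CK_IMAGES = "cohn-kanade-images"  # assumed image root: SUBJECT/SHOT/*.png
    CK_EMOTIONS = "Emotion"           # assumed label root: SUBJECT/SHOT/*.txt

    def iter_labeled_shots(ck_root):
        img_root = os.path.join(ck_root, CK_IMAGES)
        emo_root = os.path.join(ck_root, CK_EMOTIONS)
        for subject in sorted(os.listdir(img_root)):
            for shot in sorted(os.listdir(os.path.join(img_root, subject))):
                label_dir = os.path.join(emo_root, subject, shot)
                if not os.path.isdir(label_dir):
                    continue  # many CK+ sequences are unlabeled: skip them
                labels = [f for f in os.listdir(label_dir) if f.endswith(".txt")]
                if labels:
                    frames = sorted(os.listdir(os.path.join(img_root, subject, shot)))
                    yield subject, shot, frames, os.path.join(label_dir, labels[0])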

Validation (old, check v1.2 release page)

A first, rough evaluation of the system's performance. The validation test involved the whole pipeline (face detector + emotion classifier), so the results should not be read as measuring the emotion classifier alone.

Of course, a finer validation should be carried out in order to evaluate the emotion classifier alone. For the sake of completeness, the reader should know that the cbcl1 face model is more a face locator than a detector: roughly speaking, it detects less but is more precise.

The following results are annotated with my personal, totally informal, evaluation after live webcam sessions.

multiclass method: 1vsAllExt
face detector:     cbcl1
eye correction:    no
width:             48
height:            48
nwidths:           3
nlambdas:          5
nthetas:           4

Sadness                   <-- Not good in live webcam sessions either
  sadness -> 67%
  surprise -> 17%
  anger -> 17%
Neutral                   <-- Good in live webcam sessions
  neutral -> 90%
  contempt -> 3%
  anger -> 3%
  fear -> 2%
  surprise -> 1%
Disgust                   <-- Good in live webcam sessions
  disgust -> 100%
Anger                     <-- Good in live webcam sessions
  anger -> 45%
  neutral -> 36%
  disgust -> 9%
  contempt -> 9%
Surprise                  <-- Good in live webcam sessions
  surprise -> 94%
  neutral -> 6%
Fear                      <-- Almost good in live webcam sessions
  fear -> 67%
  surprise -> 17%
  happy -> 17%
Contempt                  <-- Not good in live webcam sessions
  neutral -> 50%
  contempt -> 25%
  anger -> 25%
Happy                     <-- Good in live webcam sessions
  happy -> 100%
multiclass method: 1vsAll
face detector:     cbcl1
eye correction:    no
width:             48
height:            48
nwidths:           3
nlambdas:          5
nthetas:           4

Sadness                   <-- Not good in live webcam sessions either
  unknown -> 50%
  sadness -> 33%
  fear -> 17%
Neutral                   <-- Good in live webcam sessions
  neutral -> 73%
  unknown -> 24%
  surprise -> 1%
  fear -> 1%
  contempt -> 1%
Disgust                   <-- Good in live webcam sessions
  disgust -> 82%
  unknown -> 18%
Anger                     <-- Almost sufficient in live webcam sessions
  anger -> 36%
  neutral -> 27%
  unknown -> 18%
  disgust -> 9%
  contempt -> 9%
Surprise                  <-- Good in live webcam sessions
  surprise -> 94%
  neutral -> 6%
Fear                      <-- Sufficient in live webcam sessions
  fear -> 67%
  surprise -> 17%
  happy -> 17%
Contempt                  <-- Not good in live webcam sessions either
  unknown -> 100%
Happy                     <-- Good in live webcam sessions
  happy -> 100%

The main differences between the 1vsAll and 1vsAllExt modes observed in live webcam sessions concern the number of unknown states registered and the stability of the detected states. In detail, the 1vsAll multiclass method provides fewer but less noisy detections during a live webcam session, while the 1vsAllExt mode always predicts a valid state for each processed frame but can be more unstable during expression transitions.
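
As a toy illustration of the difference (not emotime's actual code): a 1-vs-all scheme can reject a frame as unknown when no binary classifier is confident, while the extended scheme always commits to the best-scoring class. The 0.5 threshold below is an arbitrary illustrative value.

    # Toy illustration of the two multiclass modes.
    def predict_1vsall(scores, threshold=0.5):
        """Return "unknown" unless some one-vs-all classifier is confident."""
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else "unknown"

    def predict_1vsallext(scores):
        """Always commit to the best-scoring class, confident or not."""
        return max(scores, key=scores.get)

    frame = {"happy": 0.31, "neutral": 0.28, "surprise": 0.08}
    print(predict_1vsall(frame))     # -> unknown (no confident classifier)
    print(predict_1vsallext(frame))  # -> happy (forced to pick a state)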

Sorry for the lack of fine tuning and detail, but this is a spare-time project at the moment. If you have any ideas or suggestions, feel free to write us!

Further Development

  • Tuning GaborBank parameters for accuracy enhancement.
  • Tuning image sizes for better real-time performance.
  • Better illumination handling: detections are good when frontal lighting is in place (keep this in mind when you use it with your camera).


Issues

Running on Windows

Hi,
I'm trying to run your code on Windows and am running into trouble. I can compile and run both the webcam and video-file versions of your code. However, when I run the code, a blank window appears (no video is shown; see the attachment for a picture of the problem). I have tried several video files with faces clearly in them. I have also used my head against a blank backdrop with a webcam, and the problem persists.

This is the line that I am using in MVS 2012:
emotimegui_cli C:/opencv/build/emotime-experimental-v1.1/resources/haarcascade_frontalface_cbc1.xml none 48 48 3 5 4 svm C:/opencv/build/emotime-experimental-v1.1/dataset_svm_354_cbcl1_1vsallext/classifiers/svm/*

If running in MVS 2012 isn't optimal, how can I run this line in a command prompt on Windows?

I have tried your instructions here: https://github.com/luca-m/emotime/releases/tag/v1.1-experimental. However, they don't work under Cygwin on Windows (possibly because ELF executables don't run on Windows machines).

Thanks.

Illumination: better GUI

Develop a better GUI that can show intermediate image-processing steps, for a better understanding of illumination issues.

OpenCV Error: Parsing error

Hi, I am trying to run your project. For some time it was working fine, but after a few tries it gives errors like the following, and I'd appreciate your help.
errors:
OpenCV Error: Parsing error (../dataset_svm_354_cbcl1_1vsallext/classifiers/svm/(0): Valid XML should start with '<?xml') in icvXMLParse, file /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/persistence.cpp, line 2252
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/persistence.cpp:2252: error: (-212) ../dataset_svm_354_cbcl1_1vsallext/classifiers/svm/(0): Valid XML should start with '<?xml' in function icvXMLParse

Aborted
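
One possibility here is that the shell glob matched a file in the classifiers folder that is not a saved OpenCV model (or is a truncated download). A small debugging sketch to spot files that do not start with an XML declaration:

    # Debugging sketch: flag classifier files that are not valid XML models.
    import glob

    for path in sorted(glob.glob("../dataset_svm_354_cbcl1_1vsallext/classifiers/svm/*")):
        with open(path) as f:
            head = f.read(5)
        if not head.startswith("<?xml"):
            print("suspect file (not XML): " + path)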

VIDIOC_QUERYMENU: Invalid argument

When I run this command, the following messages show up:
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument

and sometimes it works quite slowly.

Also, you gave a link to version 1.2, but on that page the 1vsall and 1vsallext links are not working.

Doxygen documentation

Inside INPUT, set the directory you are currently working on; otherwise the warnings will be impossible to read. Once the documentation is reasonably warning-free, commit the doc and the doxygen edit.

I cannot open GUI

I'm very interested in this project, but I encountered some problems while trying to use it. After successfully compiling the project, I have trouble opening the GUI.
I tried running the project in Visual Studio 2012, but the GUI won't appear.
Which part did I do wrong?
Thanks in advance :)

ERR: something wrong ('module' object has no attribute 'next')

Fill dataset: I'm not sure that I'm giving the right path, but it seems to work... except for the error.
Thanks

c:\workspace\emotime\assets>python datasetFillCK.py --cfg test_dataset.cfg dataset_folder C:\workspace\emotime_dataset\ck\ck_plus\cohn-kanade-images C:\workspace\emotime_dataset\ck\ck_plus\Emotion
INFO: 123 subjects found in CK+ database
INFO: Processing subject S005
INFO: Processing shot 001
INFO: Picture 'S005_001_00000011.png' has been marked as disgust
ERR: something wrong ('module' object has no attribute 'next')

ERR: something wrong ('CLAADAFIER_ADA_FOLDER')

I get an error when I try to use:
python datasetInit.py --cfg test_dataset.cfg dataset_folder
where dataset_folder is the empty folder and test_dataset.cfg is an exact copy of example_dataset.cfg.

The training and validation folders are correctly populated with subfolders, but the classifiers folder contains only svm.
It seems that the ada subfolder is not created and that the whole process stops.
Thanks.

Training: train adaboost

Review CvMLData and make it work, or load the feature matrix directly, transform it into a feature vector, and manually prepare the training set (and validation set).
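
For reference, a hedged sketch of the manual route; the CSV column layout (label first, features after) is an assumption, not the project's actual format.

    # Hedged sketch: load a CSV feature matrix and split train/validation.
    import numpy as np

    data = np.loadtxt("features.csv", delimiter=",")  # hypothetical file
    labels, feats = data[:, 0], data[:, 1:]

    rng = np.random.RandomState(0)                    # reproducible shuffle
    idx = rng.permutation(len(data))
    split = int(0.8 * len(data))                      # assumed 80/20 split
    train_x, val_x = feats[idx[:split]], feats[idx[split:]]
    train_y, val_y = labels[idx[:split]], labels[idx[split:]]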

help on emotimevideo_cli error: sample size is different

Hello
First of all, thanks for your project. I am working with emotimevideo_cli on Ubuntu 14.04 with OpenCV 2.4.9.
I am giving a multicolor mp4 video as input, but emotimevideo_cli prints "Loading 'video.mp4'", "init done", "opengl support available", and then aborts with the error below:
OpenCV Error: Sizes of input arguments do not match (the sample size is different from what has been used for training) in cvPreparePredictData
Aborted (core dumped).
I cannot resolve the problem. What do you suggest I do? Is it about the video size, color, or framerate?

GUI: gabor gui

Implement a Gabor GUI (with sliders) for better Gabor parameter tuning; a minimal sketch follows below.

These Gabor parameters may work better:

  • nwidths = 1
  • nlambdas = 9
  • nthetas = 3
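
A minimal sketch of what such a slider GUI could look like with Python and OpenCV (hypothetical; not part of emotime):

    # Hypothetical Gabor-tuning GUI with sliders (not part of emotime).
    import math
    import cv2

    WIN = "gabor tuning"

    def redraw(_=0):
        # Read slider positions, clamped so the kernel stays valid.
        lambd = max(cv2.getTrackbarPos("lambda", WIN), 1)
        sigma = max(cv2.getTrackbarPos("sigma", WIN), 1)
        theta = cv2.getTrackbarPos("theta_deg", WIN) * math.pi / 180.0
        kern = cv2.getGaborKernel((31, 31), float(sigma), theta, float(lambd), 0.5)
        kern = (kern - kern.min()) / (kern.max() - kern.min() + 1e-9)  # to [0, 1]
        cv2.imshow(WIN, cv2.resize(kern, (310, 310),
                                   interpolation=cv2.INTER_NEAREST))

    cv2.namedWindow(WIN)
    cv2.createTrackbar("lambda", WIN, 8, 30, redraw)
    cv2.createTrackbar("sigma", WIN, 4, 20, redraw)
    cv2.createTrackbar("theta_deg", WIN, 0, 180, redraw)
    redraw()
    cv2.waitKey(0)
    cv2.destroyAllWindows()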

Always Surprised

Hello!

I cloned your project and ported it to Android, so it now takes images from the camera and runs analysis on them to detect expressions. However, I keep getting "Surprised" with score = 1.0. Could you tell me what size the image needs to be (I am currently using 48x48) and whether or not the image needs to be monochrome before I pass it to the FacePreProcessor?

Thanks,
Rohit

Deploy

Deploy the project (zip + examples link + README + doxygen dox + report)

ERR: something wrong (No section: 'TRAINING')

I'm on Windows, compiling with MinGW, and the installation seems fine (the assets folder is populated). If I try

python gui.py --cfg ..\dataset_svm_354_cbcl1_1vsallext\dataset.cfg --mode svm ..\dataset_svm_354_cbcl1_1vsallext\classifiers\svm*

it doesn't work and prompts: ERR: something wrong (No section: 'TRAINING')
What's wrong? Thanks
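
A quick way to check which sections your .cfg actually defines (a debugging sketch in Python 2, matching the project's Python requirement):

    # Debugging sketch: list the sections a dataset .cfg defines.
    import ConfigParser  # Python 2 module name

    cfg = ConfigParser.ConfigParser()
    parsed = cfg.read("../dataset_svm_354_cbcl1_1vsallext/dataset.cfg")
    print(parsed)          # an empty list means the file was not found/readable
    print(cfg.sections())  # gui.py apparently expects a TRAINING section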

Doc README.md

Adjust the readme for better integration with Doxygen

Training: adaboost for feature selection

Develop a tool for selecting the most relevant features using AdaBoost instead of PCA.

Gentle Ada Boost regression trees (no neutral): a3d381c

My PC cannot run the featselect_cli utility due to lack of RAM (it requires over 3 GB of reserved RAM). Could someone with an ultracool, fine-tuned notebook run it on all the trained ada classifiers?

OpenCV Error: Bad argument (The SVM should be trained first)

Hi,
Can you help me with this error?

./emotimegui_cli ../resources/haarcascade_frontalface_cbcl1.xml none 48 48 3 5 4 svm ../dataset_svm_354_cbcl1_1vsallext/classifiers/svm/*
OpenCV Error: Bad argument (The SVM should be trained first) in CvSVM::predict, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/opencv-2.4.11/modules/ml/src/svm.cpp, line 2131
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/opencv-2.4.11/modules/ml/src/svm.cpp:2131: error: (-5) The SVM should be trained first in function CvSVM::predict

Abort trap: 6

Thanks

Windows build - opencv_contrib issue

Hey! First of all, thank you so much for making this project. Your sources and cited papers have taught me so much more than I expected to learn when I started my own.

I'm trying to build emotime on Windows, but I simply cannot figure out how to include the opencv_contrib directory in the make file. Could you help me out with that?

train_cli problem with OpenCV

I ran a brief test yesterday, but I still have a problem with this issue.

[FOLDER]
...
# Folder containing the csv files of the features for training set and validation set (one file per classifier)
TRAIN:      trainfiles
...

I'm missing the point: is "trainfiles" another folder where the CSV data should be put?
Do we create it with datasetInit.py? Because right now "trainfiles" doesn't exist...
What changes if the folder name is changed?

One last thing: in my previous test, one face-cropping test failed on a neutral sample. Is it possible that this has a bad influence on the rest?

Thanks

Access denied

I did the configuration for Windows: I configured the program using CMake and then generated it for Visual Studio 12 2013 Win64. However, when I open the solution in Visual Studio and try to run ALL_BUILD, I get "Unable to run program 'C:...\ALL_BUILD', access is denied". Any help?
