
iceddoggie / micro-expression-with-deep-learning

267 stars · 24 watchers · 102 forks · 58.43 MB

Experimentation of deep learning on the subjects of micro-expression spotting and recognition.

Python 82.76% Makefile 0.03% C 15.37% MATLAB 1.74% Shell 0.09%

micro-expression-with-deep-learning's Introduction

New work available @

https://github.com/IcedDoggie/DSSN-MER

Code Updates and some notes:

Very sorry for the late reply to most of the messages, due to industrial work commitments and rushed deadlines. Apparently os.m was missing; I found it in my local code base and added it under External Tools/tvl1flow/. Hope it helps, thank you, and I hope it benefits your research work :)

Micro-Expression-with-Deep-Learning

Experimentation of deep learning on the subjects of micro-expression spotting and recognition.

Platforms and dependencies

Ubuntu 16.04, Python 3.6, Keras 2.0.6, OpenCV 3.1.0, pandas 0.19.2, cuDNN 5110 (optional but recommended for deep learning)
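
As a quick sanity check, a small hedged snippet (not from the repo; the import names are the standard ones and assumed here) to confirm the pinned versions before running main.py:

    import sys
    import keras, cv2, pandas

    assert sys.version_info[:2] == (3, 6), "Python 3.6 expected"
    print("Keras:", keras.__version__)     # expect 2.0.6
    print("OpenCV:", cv2.__version__)      # expect 3.1.0
    print("pandas:", pandas.__version__)   # expect 0.19.2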

Download files from this url

(CASMEII with TIM applied has been removed due to licensing, so you need to apply TIM and cropping yourself; the link below is the access-request page for the database.) http://fu.psych.ac.cn/CASME/casme2-en.php

TIM-related: the TIM code can be downloaded from https://www.oulu.fi/cmvs/node/33019. Note: parameters are left at their defaults except for the TIM size.

Optical flow related: I added a script to extract optical flow features (in External_Tools/tvl1flow_3/); the original repo is https://github.com/Paul-Darius/ipol-matlab/tree/master/tvl1flow_3/Matlab

Since an LSTM is used, all samples must contain the same number of frames. Currently the code does not work on raw CASMEII; SMIC has not been tested.

Shape predictor for Facial Landmarks extraction: dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
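
For reference, a minimal usage sketch (an assumption, not repo code; the file names are placeholders) of loading the unpacked predictor and extracting the 68 landmarks from a frame:

    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # unpacked .dat

    img = cv2.imread("frame.png")                     # placeholder frame path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 1):                    # upsample once to find small faces
        shape = predictor(gray, face)
        landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        print(landmarks[:3])                          # first few (x, y) points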

VGG-16 model pretrained on the LFW dataset: https://drive.google.com/file/d/1F99D1U9rhaDHp4Re_ky_NjbdT7vmz0mr/view?usp=sharing (not available anymore; if anyone finds an equivalent one, please do not hesitate to reach out to me, thank you and sorry for the trouble.) Found one, try https://github.com/rcmalli/keras-vggface (if it does not work, let me know via email, thanks).

CASME2_Optical: https://drive.google.com/open?id=1fq_eHCLiUT9hP0npq6vkMYiO2Ka-39Mf

CASME2_STRAIN: https://drive.google.com/open?id=1-l_CtP9awfMV6pXSrBIPRiIujQLjCv9H

Running from scratch

main.py is the main control script for running the code, and there are several parameters for tuning the training. The guide is as follows:

List of parameters:

  • --train: the training script to run, e.g. train.py, train_samm_cross.py
  • --batch_size: the number of samples per batch
  • --spatial_epochs: the number of epochs for the spatial module (the VGG module)
  • --temporal_epochs: the number of epochs for the LSTM/recurrent module
  • --train_id: the name of the training run
  • --dB: the database(s) to be used
  • --spatial_size: the image resolution
  • --flag: the type of training to run; choose whether to perform Spatial Enrichment, Temporal Enrichment, or train a single module only
  • --objective_flag: choose either objective labels or emotion labels
  • --tensorboard: whether to use TensorBoard. Deprecated.

Types of flags:

  • st - spatial-temporal: train a single DB with the full VGG-LSTM
  • st4se - spatial-temporal, four-channel Spatial Enrichment: optical flow + optical strain
  • st4te - spatial-temporal, four-channel Temporal Enrichment: train optical flow and optical strain separately, each with pre-trained weights
  • st5se - flow + strain + grayscale raw image
  • st5te - flow, strain and grayscale trained separately with VGG pre-trained weights

Flags with cde appended indicate that composite database evaluation (CDE), as proposed in MEGC 2018, is used.

Deprecated/unsupported flags:

  • s - spatial only: train the VGG only. Remember to use --train './train_spatial_only.py'
  • t - temporal only: train the LSTM only
  • nofine - no finetuning: use the pre-trained weights directly

Types of scripts:

  • main.py - control script
  • train.py - training script for single DB and CDE
  • models.py - deep models
  • utilities.py - various functions for preprocessing and data loading
  • list_databases.py - loads databases and restructures data
  • train_samm_cross.py - HDE training
  • test_samm_cross.py - HDE testing
  • evaluation_matrix.py - evaluation metrics
  • labelling.py - loads labels for the designated DB
  • lbptop.py - where we created baselines using LBP-TOP
  • samm_utilities - preprocessing functions for SAMM only

Example for training temporal only (the spatial size used in the paper is 50): not supported yet, but the code is in train_temporal_only.py.

Note: for a two-layer LSTM, go to models.py and add another line in temporal_module: model.add(LSTM(3000, return_sequences=False))
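
A minimal sketch of what the resulting two-layer temporal module could look like (shapes are assumptions: 9 timesteps, 4096-d spatial features, 5 classes; the actual temporal_module in models.py may differ):

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    def temporal_module_two_layer(data_dim=4096, timesteps=9, n_exp=5):
        model = Sequential()
        model.add(LSTM(3000, return_sequences=True,
                       input_shape=(timesteps, data_dim)))  # first layer must keep the sequence
        model.add(LSTM(3000, return_sequences=False))       # the added second LSTM layer
        model.add(Dense(n_exp, activation='softmax'))
        model.compile(loss='categorical_crossentropy', optimizer='sgd',
                      metrics=['accuracy'])
        return model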

Example for training spatial only: not supported yet, but the code is in train_spatial_only.py.

Example for single db: python main.py --dB 'CASME2_Optical' --batch_size=1 --spatial_epochs=100 --temporal_epochs=100 --train_id='default_test' --spatial_size=224 --flag='st'

Example for training CDE: python main.py --dB 'CASME2_Optical' 'CASME2_Strain_TIM10' --batch_size=1 --spatial_epochs=100 --temporal_epochs=100 --train_id='default_test' --spatial_size=224 --flag='st4se'

Example for training HDE: python main.py --train './train_samm_cross' --dB 'CASME2_Optical' --batch_size=1 --spatial_epochs=100 --temporal_epochs=100 --train_id='default_test' --spatial_size=224 --flag='st4se'

python main.py --train './test_samm_cross' --dB 'SAMM_Optical' --batch_size=1 --spatial_epochs=100 --temporal_epochs=100 --train_id='default_test' --spatial_size=224 --flag='st4se'

The file structure is as follows:

  • an asterisk indicates that the folder needs to be created manually

Under the root path (/db*):

    /db*
        /subjects
            /videos
                /video_frames.png
    /Classification*
    /Result*
        /db*
    /CASME2_label_Ver_2.xls, CASME2-ObjectiveClasses.xlsx, SAMM_Micro_FACS_Codes_v2.xlsx

For example, for /CASME2_Optical:

    /CASME2_Optical
        /sub01...sub26
            /EP...
                /1...9.png
    /Classification
    /Result
        /CASME2_Optical
    /CASME2_label_Ver_2.xls, CASME2-ObjectiveClasses.xlsx, SAMM_Micro_FACS_Codes_v2.xlsx
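
Since the asterisked folders must be created by hand, here is a small hypothetical helper (not part of the repo; the root path is a placeholder) that creates them following the layout above:

    import os

    root = "/media/ice/OS/Datasets/"   # placeholder; use your own root_db_path
    db = "CASME2_Optical"

    for path in (os.path.join(root, db),
                 os.path.join(root, "Classification"),
                 os.path.join(root, "Result", db)):
        os.makedirs(path, exist_ok=True)   # safe to re-run
        print("created:", path)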

Results

Single DB

F1 : 0.4999726178

Accuracy/WAR: 0.5243902439

UAR : 0.4395928516

CDE

F1 : 0.4107312702

Accuracy/WAR: 0.57

UAR : 0.39

HDE

F1 : 0.3411487289

Accuracy/WAR: 0.4345389507

UAR : 0.3521973582

If you find this work useful, here's the paper and citation

https://arxiv.org/abs/1805.08417

@article{khor2018enriched,
  title={Enriched Long-term Recurrent Convolutional Network for Facial Micro-Expression Recognition},
  author={Khor, Huai-Qian and See, John and Phan, Raphael CW and Lin, Weiyao},
  journal={arXiv preprint arXiv:1805.08417},
  year={2018}
}

micro-expression-with-deep-learning's People

Contributors

iceddoggie, mg515


micro-expression-with-deep-learning's Issues

Need help on calculating optical strain

We found some unclear points in the script output_strain.m in the External-Tools folder. In line 103, [opticalStrain, strain_orientation, exx, eyy] = os(output), there is a function called "os", but we can't find a script named os or the definition of the function. Could you please upload the code of the os function mentioned above? Looking forward to your reply, thanks.

Some details about TIM

Hello, I used the MATLAB code to process the CASME2 data sequences, but there are still some problems with the details.

  1. How long should the processed data be? The raw data has a different number of frames for each sample; should the processed data all be the same length? What is the best length when you run the code?
  2. What is the output data type? When processing is finished, should I save the data as a .avi file or just as .jpgs? How should I name the files in order to keep the annotations?

Sorry for my poor English.

can't train CDE

I use TensorFlow 1.4.1 as the backend.
When the log shows "Beginning temporal training",
I get an error: "expected lstm_1_input to have shape (9, 4096) but got array with shape (9, 5)" at train.py line 375.
I have already downloaded "CASME2_Optical" and "CASME2_Strain_TIM10" from Google Drive.
I don't know what's wrong or why it won't run.

how to run

I do not know what this error is:
/home/lzq/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Warning! HDF5 library version mismatched error
The HDF5 header files used to compile this application do not match
the version used by the HDF5 library to which this application is linked.
Data corruption or segmentation faults may occur if the application continues.
This can happen when an application was compiled by one version of HDF5 but
linked with a different version of static or shared HDF5 library.
You should recompile the application or check your shared library related
settings such as 'LD_LIBRARY_PATH'.
You can, at your own risk, disable this warning by setting the environment
variable 'HDF5_DISABLE_VERSION_CHECK' to a value of '1'.
Setting it to 2 or higher will suppress the warning messages totally.
Headers are 1.10.1, library is 1.10.2
SUMMARY OF THE HDF5 CONFIGURATION
=================================

General Information:

               HDF5 Version: 1.10.2
              Configured on: Wed May  9 23:24:59 UTC 2018
              Configured by: root@19a5a2e5-30a4-45db-5607-0c407004981e
                Host system: x86_64-conda_cos6-linux-gnu
          Uname information: Linux 19a5a2e5-30a4-45db-5607-0c407004981e 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
                   Byte sex: little-endian
         Installation point: /home/lzq/anaconda3

Compiling Options:

                 Build Mode: production
          Debugging Symbols: no
                    Asserts: no
                  Profiling: no
         Optimization Level: high

Linking Options:

                  Libraries: static, shared

Statically Linked Executables:
LDFLAGS: -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,-rpath,/home/lzq/anaconda3/lib -L/home/lzq/anaconda3/lib
H5_LDFLAGS:
AM_LDFLAGS: -L/home/lzq/anaconda3/lib
Extra libraries: -lrt -lpthread -lz -ldl -lm
Archiver: /tmp/build/80754af9/hdf5_1525908201964/_build_env/bin/x86_64-conda_cos6-linux-gnu-ar
AR_FLAGS: cr
Ranlib: /tmp/build/80754af9/hdf5_1525908201964/_build_env/bin/x86_64-conda_cos6-linux-gnu-ranlib

Languages:

                          C: yes
                 C Compiler: /tmp/build/80754af9/hdf5_1525908201964/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc
                   CPPFLAGS: -DNDEBUG -D_FORTIFY_SOURCE=2 -O2
                H5_CPPFLAGS: -D_GNU_SOURCE -D_POSIX_C_SOURCE=200112L   -DNDEBUG -UH5_DEBUG_API
                AM_CPPFLAGS:  -I/home/lzq/anaconda3/include
                    C Flags: -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -pipe -I/home/lzq/anaconda3/include -fdebug-prefix-map=${SRC_DIR}=/usr/local/src/conda/${PKG_NAME}-${PKG_VERSION} -fdebug-prefix-map=${PREFIX}=/usr/local/src/conda-prefix
                 H5 C Flags:   -std=c99 -pedantic -Wall -Wextra -Wbad-function-cast -Wc++-compat -Wcast-align -Wcast-qual -Wconversion -Wdeclaration-after-statement -Wdisabled-optimization -Wfloat-equal -Wformat=2 -Winit-self -Winvalid-pch -Wmissing-declarations -Wmissing-include-dirs -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpacked -Wpointer-arith -Wredundant-decls -Wshadow -Wstrict-prototypes -Wswitch-default -Wswitch-enum -Wundef -Wunused-macros -Wunsafe-loop-optimizations -Wwrite-strings -finline-functions -s -Wno-inline -Wno-aggregate-return -Wno-missing-format-attribute -Wno-missing-noreturn -O
                 AM C Flags: 
           Shared C Library: yes
           Static C Library: yes


                    Fortran: yes
           Fortran Compiler: /tmp/build/80754af9/hdf5_1525908201964/_build_env/bin/x86_64-conda_cos6-linux-gnu-gfortran
              Fortran Flags: 
           H5 Fortran Flags:  -pedantic -Wall -Wextra -Wunderflow -Wimplicit-interface -Wsurprising -Wno-c-binding-type  -s -O2
           AM Fortran Flags: 
     Shared Fortran Library: yes
     Static Fortran Library: yes

                        C++: yes
               C++ Compiler: /tmp/build/80754af9/hdf5_1525908201964/_build_env/bin/x86_64-conda_cos6-linux-gnu-c++
                  C++ Flags: -fvisibility-inlines-hidden -std=c++17 -fmessage-length=0 -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -pipe -I/home/lzq/anaconda3/include -fdebug-prefix-map=${SRC_DIR}=/usr/local/src/conda/${PKG_NAME}-${PKG_VERSION} -fdebug-prefix-map=${PREFIX}=/usr/local/src/conda-prefix
               H5 C++ Flags:   -pedantic -Wall -W -Wundef -Wshadow -Wpointer-arith -Wcast-qual -Wcast-align -Wwrite-strings -Wconversion -Wredundant-decls -Winline -Wsign-promo -Woverloaded-virtual -Wold-style-cast -Weffc++ -Wreorder -Wnon-virtual-dtor -Wctor-dtor-privacy -Wabi -finline-functions -s -O
               AM C++ Flags: 
         Shared C++ Library: yes
         Static C++ Library: yes

                       Java: no

Features:

              Parallel HDF5: no
         High-level library: yes
               Threadsafety: yes
        Default API mapping: v110

With deprecated public symbols: yes
I/O filters (external): deflate(zlib)
MPE: no
Direct VFD: no
dmalloc: no
Packages w/ extra debug output: none
API tracing: no
Using memory checker: yes
Memory allocation sanity checks: no
Metadata trace file: no
Function stack tracing: no
Strict file format checks: no
Optimization instrumentation: no
Bye...

cooperate on micro expression recognition

With regard to micro-expression recognition, we hope it can be applied to real scenes. Do you have any suggestions on this? Can we cooperate? Can you give me your email address? Looking forward to your reply. Thanks.

The result did not reach the declared accuracy

Hi, I ran the example for a single DB using the data you provided:
CASME2_Optical: https://drive.google.com/open?id=1fq_eHCLiUT9hP0npq6vkMYiO2Ka-39Mf CASME2_STRAIN: https://drive.google.com/open?id=1-l_CtP9awfMV6pXSrBIPRiIujQLjCv9H.
The result is as follows:
F1-score:0.2625012255303448
WAR:0.3617886178861789
UAR:0.2455069375069375

Since this result is far from what you report, I want to know whether I have missed any important steps, or could I ask you for a trained model?
I look forward to hearing from you. Thank you very much.

NVML Shared Library Not Found

@IcedDoggie
Finally I solved the path problem,
but then I got the following error:
Namespace(batch_size=1, dB=['CASME2_Optical'], flag='st', objective_flag=1, spatial_epochs=100, spatial_size=224, temporal_epochs=100, tensorboard=False, train='./train.py', train_id='default_test')
1
CASME2_Optical
arrived
Loaded Images into the tray.
Loaded Labels into the tray.
Beginning training process.
.starting subject0
Traceback (most recent call last):
File "/Users/youyin/Documents/人工智能研究项目/Micro-Expression-with-Deep-Learning/pynvml/pynvml.py", line 644, in _LoadNvmlLibrary
nvmlLib = CDLL("libnvidia-ml.so.1")
File "/Users/youyin/anaconda3/lib/python3.6/ctypes/init.py", line 348, in init
self._handle = _dlopen(self._name, mode)
OSError: dlopen(libnvidia-ml.so.1, 6): image not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 61, in
main(args)
File "main.py", line 15, in main
train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
File "/Users/youyin/Documents/人工智能研究项目/Micro-Expression-with-Deep-Learning/train.py", line 204, in train
gpu_observer()
File "/Users/youyin/Documents/人工智能研究项目/Micro-Expression-with-Deep-Learning/utilities.py", line 520, in gpu_observer
nvmlInit()
File "/Users/youyin/Documents/人工智能研究项目/Micro-Expression-with-Deep-Learning/pynvml/pynvml.py", line 608, in nvmlInit
_LoadNvmlLibrary()
File "/Users/youyin/Documents/人工智能研究项目/Micro-Expression-with-Deep-Learning/pynvml/pynvml.py", line 646, in _LoadNvmlLibrary
_nvmlCheckReturn(NVML_ERROR_LIBRARY_NOT_FOUND)
File "/Users/youyin/Documents/人工智能研究项目/Micro-Expression-with-Deep-Learning/pynvml/pynvml.py", line 310, in _nvmlCheckReturn
raise NVMLError(ret)
pynvml.pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found

I use a MacBook Pro to run this code; is there any way to work around this problem?
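
One possible workaround (an assumption on my part, not a fix confirmed by the author): guard the NVML call so GPU monitoring is skipped when the library is missing, e.g. a sketch of a patched gpu_observer in utilities.py:

    # Hypothetical guard; the import path may differ for the repo's bundled pynvml copy.
    from pynvml import nvmlInit, NVMLError

    def gpu_observer():
        try:
            nvmlInit()
        except NVMLError:
            # No NVIDIA driver/NVML, e.g. on a MacBook Pro: skip GPU monitoring.
            print("NVML not found; skipping GPU monitoring.")
            return
        # ... original gpu_observer body would continue here ...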

needs full trained model

Hi, due to my weak PC and GPU, I can't train a full model using these params:

    parser = argparse.ArgumentParser()
    parser.add_argument('--train', type=str, default='./simple_train.py', help='Using which script to train.')
    parser.add_argument('--batch_size', type=int, default=32, help='Training Batch Size')
    parser.add_argument('--spatial_epochs', type=int, default=10, help='Epochs to train for Spatial Encoder')
    parser.add_argument('--temporal_epochs', type=int, default=40, help='Epochs to train for Temporal Encoder')
    parser.add_argument('--train_id', type=str, default="default_test", help='To name the weights of model')
    parser.add_argument('--dB', nargs="+", type=str, default='CASME2_Optical', help='Specify Database')
    parser.add_argument('--spatial_size', type=int, default=224, help='Size of image')
    parser.add_argument('--flag', type=str, default='st', help='Flags to control type of training')
    parser.add_argument('--objective_flag', type=int, default=1,
                        help='Flags to use either objective class or emotion class')
    parser.add_argument('--tensorboard', type=bool, default=False, help='tensorboard display')
    parser.add_argument('--root_db_path', type=str, default='E:/PycharmProjects/Micro-Expression-with-Deep-Learning-master/')
    args = parser.parse_args()
    print(args)

    main(args)

Could you send me those weights, which contain the temporal_model and the VGG_model?

Problem about "restructure_data" func

Hi, I think there is a mistake in list_databases.py, function restructure_data:

    Train_X_spatial = Train_X.reshape(Train_X.shape[0] * timesteps_TIM, r, w, channel)
    Test_X_spatial = Test_X.reshape(Test_X.shape[0] * timesteps_TIM, r, w, channel)

    # Extend Y labels 10 fold, so that all images have labels
    Train_Y_spatial = np.repeat(Train_Y, timesteps_TIM, axis=0)
    Test_Y_spatial = np.repeat(Test_Y, timesteps_TIM, axis=0)

    X = Train_X_spatial.reshape(Train_X_spatial.shape[0], channel, r, w)
    y = Train_Y_spatial.reshape(Train_Y_spatial.shape[0], n_exp)

Here, Train_X_spatial has dim e.g. nVid*9 x 224 x 224 x 3.
Then you use reshape to get X with dim nVid*9 x 3 x 224 x 224.
I think that reshape will mess up the image data.
X = np.transpose(Train_X_spatial, (0, 3, 1, 2)) seems to do the job correctly.
What's your opinion?
Best regards!
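
For what it's worth, a standalone check of this point (toy data, not repo code): reshape reinterprets the flat buffer, while transpose moves axes and keeps each image intact.

    import numpy as np

    x = np.arange(2 * 4 * 4 * 3).reshape(2, 4, 4, 3)   # (nVid*T, r, w, channel) toy data
    bad = x.reshape(2, 3, 4, 4)                        # scrambles pixels across channels
    good = np.transpose(x, (0, 3, 1, 2))               # channels-first, data preserved

    print(np.array_equal(bad, good))                   # False: layouts differ
    print(np.array_equal(good[0, 0], x[0, :, :, 0]))   # True: channel 0 kept intact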

Out of memory error with GTX 1080 Ti when training

I am running into an issue when trying to run the code in your repo.

I have downloaded the full source code, changed the root_db_path to where my folder with the "CASME2_Optical" folder structure is located. Then I try to run the below:
python main.py --dB 'CASME2_Optical' --batch_size=1 --spatial_epochs=1 --temporal_epochs=1 --train_id='test20' --spatial_size=224 --flag='st'
This is exactly the same command as the example in your README, except with spatial_epochs and temporal_epochs reduced for the sake of running time.

However, I always run into an OOM error at some point during the process. I have tried reducing the LSTM size in "temporal_module" in models.py from 3000 to 300, which lets it train a few more subjects before the OOM occurs. For example, just now it was about to start training subject 10 when it ran out of memory with the LSTM reduced to 300, but on the default setting of 3000, it doesn't even get past subject 3.

So I am wondering what hardware you might be using to run this, or what else I could be doing wrong?

Here is the full output of my command prompt when running this with the defaults (i.e. LSTM still at 3000) and the above-mentioned command:
full_stack_trace.txt

Any help is appreciated!

train_ram.py

Hello, author! Thank you for publishing your code, but I can't find train_ram.py. Please reply.

errors in maxpooling

Testing started at 14:37 ...
C:\ProgramData\Anaconda3\python.exe "C:\Program Files\JetBrains\PyCharm 2017.3.4\helpers\pycharm_jb_pytest_runner.py" --path C:/Users/LawLi/PycharmProjects/Micro-Expression-with-Deep-Learning-master/test_spatial_only.py
Launching py.test with arguments C:/Users/LawLi/PycharmProjects/Micro-Expression-with-Deep-Learning-master/test_spatial_only.py in C:\Users\LawLi\PycharmProjects\Micro-Expression-with-Deep-Learning-master

============================= test session starts =============================
platform win32 -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: C:\Users\LawLi\PycharmProjects\Micro-Expression-with-Deep-Learning-master, inifile:Datasets/CASME2_TIM/CASME2_TIM/
[ 9. 13. 7. 5. 19. 5. 9. 3. 15. 14. 10. 12. 8. 4. 3. 4. 36. 3.
16. 11. 2. 2. 12. 11. 7. 17.]
[ 8. 8. 9. 16. 16. 18. 23. 23. 23. 23. 25.]
Loaded Images into the tray...
Loaded Labels into the tray...
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.

test_spatial_only.py:None (test_spatial_only.py)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:1567: in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
E tensorflow.python.framework.errors_impl.InvalidArgumentError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].

During handling of the above exception, another exception occurred:
test_spatial_only.py:112: in <module>
vgg_model = VGG_16_tim(spatial_size, classes=5, channels=3)
models.py:199: in VGG_16_tim
model.add(MaxPooling2D((2, 2), strides=(2, 2)))

C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py:492: in add
output_tensor = layer(self.outputs[0])
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py:619: in __call__
output = self.call(inputs, **kwargs)
C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\pooling.py:158: in call
data_format=self.data_format)
C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\pooling.py:221: in _pooling_function
pool_mode='max')
C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3657: in pool2d
data_format=tf_data_format)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py:2142: in max_pool
name=name)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py:5045: in max_pool
data_format=data_format, name=name)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:787: in _apply_op_helper
op_def=op_def)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:3392: in create_op
op_def=op_def)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:1734: in __init__
control_input_ops)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:1570: in _create_c_op
raise ValueError(str(e))
E ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].
collected 0 items / 1 errors
=================================== ERRORS ====================================
____________________ ERROR collecting test_spatial_only.py ____________________
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:1567: in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
E tensorflow.python.framework.errors_impl.InvalidArgumentError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].

During handling of the above exception, another exception occurred:
test_spatial_only.py:112: in <module>
vgg_model = VGG_16_tim(spatial_size, classes=5, channels=3)
models.py:199: in VGG_16_tim
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py:492: in add
output_tensor = layer(self.outputs[0])
C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py:619: in __call__
output = self.call(inputs, **kwargs)
C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\pooling.py:158: in call
data_format=self.data_format)
C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\pooling.py:221: in _pooling_function
pool_mode='max')
C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3657: in pool2d
data_format=tf_data_format)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py:2142: in max_pool
name=name)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py:5045: in max_pool
data_format=data_format, name=name)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:787: in _apply_op_helper
op_def=op_def)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:3392: in create_op
op_def=op_def)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:1734: in __init__
control_input_ops)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py:1570: in _create_c_op
raise ValueError(str(e))
E ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].
------------------------------- Captured stdout -------------------------------
Datasets/CASME2_TIM/CASME2_TIM/
[ 9. 13. 7. 5. 19. 5. 9. 3. 15. 14. 10. 12. 8. 4. 3. 4. 36. 3.
16. 11. 2. 2. 12. 11. 7. 17.]
[ 8. 8. 9. 16. 16. 18. 23. 23. 23. 23. 25.]
Loaded Images into the tray...
Loaded Labels into the tray...
------------------------------- Captured stderr -------------------------------
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!
========================== 1 error in 14.94 seconds ===========================
Process finished with exit code 0

Passing the features from CNN to LSTM

When I run the code I get the following error:
*** ValueError: Error when checking input: expected lstm_1_input to have shape (9, 50176) but got array with shape (9, 5)

Where 50176 is the data_dim variable (50176 = 224*224, the photo dimensions).
And the input variable to the LSTM (features.shape) equals (237, 9, 5), where 237 is the number of video samples, so basically the features variable is the CNN's probability output for each class. Is this the way it is supposed to work?

The way I see it, the input layer to the LSTM would then have to be of size 5.
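
If the 5-dim features really are the softmax outputs, one common remedy (a sketch under that assumption, not necessarily what train.py intends) is to read features from the penultimate fully-connected layer so they match the LSTM's declared data_dim; here keras.applications' VGG16 stands in for the repo's VGG model:

    import numpy as np
    from keras.applications.vgg16 import VGG16
    from keras.models import Model

    vgg = VGG16(weights=None, input_shape=(224, 224, 3))    # stand-in for the repo's VGG
    feat = Model(inputs=vgg.input,
                 outputs=vgg.get_layer('fc2').output)       # 4096-d fc features, not softmax

    frames = np.zeros((9, 224, 224, 3), dtype='float32')    # one 9-frame clip
    print(feat.predict(frames).shape)                       # (9, 4096), not (9, 5)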

I can't run the train.py with CASME2 dataset.

There are so many errors when running train.py with the CASME2 dataset.
I use this:
python main.py --train ./train.py --dB CASME2_Optical --batch_size=1 --spatial_epochs=2 --temporal_epochs=2 --train_id='default_test' --spatial_size=224 --flag='st4se' --objective_flag=1

I have modified the static variables such as the directory paths, but I still have a problem with Train_X = np.vstack(Train_X) in the function data_loader_with_LOSO of utilities.py.

It has cost me a couple of days. I feel so upset.

How to set TIM length?

I found that most micro-expression sequences are over 20 frames from onset to offset, so why did you set it to 10? And which parameter in TIM corresponds to this length? Looking forward to your reply ~

how to download train_cae_lstm

How do I download train_cae_lstm? When I try to run this code it shows "No module named 'train_cae_lstm'". I have searched on Google but can't find how to download train_cae_lstm.

tvl1flow

libpng and libtiff are installed, but I don't know how to run "make" to generate an executable file named "tvl1flow". I tried to run it directly in the terminal but it failed. Please tell me how to solve this; I look forward to your reply!

list_databases.py

A new error:

return r, w, subjects, samples, n_exp, VidPerSubject, timesteps_TIM, data_dim, channel, table, listOfIgnoredSamples, db_home, db_images, cross_db_flag
UnboundLocalError: local variable 'r' referenced before assignment

list_databases.py: is this file missing some code?
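
One plausible cause, sketched below (an assumption, not a confirmed diagnosis): load_db only assigns r inside the branch for a recognized database name, so an unrecognized --dB string leaves r unassigned and triggers exactly this error.

    # Minimal reproduction of the same UnboundLocalError (illustrative only):
    def load_db(dB):
        if dB == 'CASME2_Optical':
            r = w = 224
        return r, w      # UnboundLocalError if dB didn't match any known name

    load_db('casme2_optical')   # note the wrong casing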

I have some problems about list_databases.py

When I run this code an error appears, and I don't know how to solve it. Can you give me some advice? Thank you very much.

File "D:/github/Micro-Expression-with-Deep-Learning-master/main.py", line 59, in

main(args)

File "D:/github/Micro-Expression-with-Deep-Learning-master/main.py", line 15, in main
train(args.batch_size, args.spatial_epochs, args.temporal_epochs, args.train_id, args.dB, args.spatial_size, args.flag, args.objective_flag, args.tensorboard)
File "D:\github\Micro-Expression-with-Deep-Learning-master\train.py", line 64, in train
r, w, subjects, samples, n_exp, VidPerSubject, timesteps_TIM, data_dim, channel, table, listOfIgnoredSamples, db_home, db_images, cross_db_flag = load_db(root_db_path, list_dB, spatial_size, objective_flag)
TypeError: 'NoneType' object is not iterable

Can you show me the path of dataset?

Hi @IcedDoggie,

I have downloaded CASME2_Optical and put it in the root path of 'Micro-Expression-with-Deep-Learning'.
I found that root_db_path = "/media/ice/OS/Datasets/" in train.py, but there is no '/media/ice/OS/Datasets' path on my machine.

Could you please give the structure of the whole Micro-Expression-with-Deep-Learning dir?
Thank you

cannot find function os for optical strain calculation

Hi,
Thanks for sharing your code. I noticed that you have answered a similar question in previous issues, and I found the "output_strain.m" file (\External-Tools\tvl1flow_3). The line for the optical strain calculation seems to be [opticalStrain, strain_orientation, exx, eyy] = os(output), but I could not find the function "os" for more detail. I would be very grateful if you could share more on this calculation.
Looking forward to hearing from you soon.
Many thanks :)

FACS

Hi, thank you for your work. Is there a Python library/package for FACS?

Pre-processing steps help

Hey!

1. TIM10 - Can you please suggest how to compute TIM10 from the original data sequences for, let's say, CASMEII? Is there any code available in the project that I could use?

2. SAMM landmark extraction - Is the code in landmark_extraction.py used for cropping the whole images down to face images only? If I understand correctly, I simply set the correct paths and run that file?

3. Optical flow - If I would like to calculate the optical flow on a sequence of images, the code is available in MATLAB in the run_my_data.m script, is that right?

Thanks a lot for your help!

Where do you get the optical strain codes?

Sorry to bother you.
I have searched for the optical strain code for a long time and tried a lot of code involving "strain",
but it is all about something else.
It would be very kind of you to tell me where you found the code.
Thank you!

environment

Sorry to bother you; can this code run on Windows in the same environment?

I ran out of this code, but the accuracy is too low.

Hi, I ran the example for a single DB with batch_size=1, epochs=70, patience=20, but the accuracy is too low (only 0.34), which is very different from your results. Can you tell me how to solve this problem? Thank you.
Looking forward to your reply!

how to download train_ram

Thank you for your code!
Can you release the source of train_ram?
ImportError: No module named 'train_ram'

Something wrong with Train_Y, and Test_Y.

I've found a similar problem when I run the code.
Train_X_shape: (237, 9, 150528)
Train_Y_shape: (243, 5)
Test_X_shape: (9, 9, 150528)
Test_Y_shape: (3, 5)
X_shape: (2133, 3, 224, 224)
y_shape: (2187, 5)
test_X_shape: (81, 3, 224, 224)
test_y_shape: (27, 5)
I guess the problem may be caused by Test_Y, since Test_Y's shape is always (3, 5) whatever the ignored files are. Another thing worth mentioning: 243 - 237 = 6 while 9 - 3 = 6, so Train_Y (243) seems to contain 6 samples that belong in Test_Y.
Can you give me some suggestions? Looking forward to your reply.

how to run

Hello, I have read a lot of your code. Could you briefly introduce your project? How do I run your code? What is your final result?

Use of TIM

I downloaded the TIM code from the link you gave, but there are only three .m files in it, and I don't know how to use them. Can you tell me? Thank you.
