mousavisajad / sleepeegnet

SleepEEGNet: Automated Sleep Stage Scoring with Sequence to Sequence Deep Learning Approach

Python 85.32% Shell 14.68%
sleep-stage-scoring eeg deep-learning sequence-to-sequence cnn rnn tensorflow biomedical biosignals

sleepeegnet's People

Contributors: mousavisajad

sleepeegnet's Issues

Python 2.x EOL

Dear Sir,
I'm a final-year student doing a project in the same field of study. Since Python 2.7 reached end of life on 1 January 2020, installing the Python 2.7 libraries now produces warnings that the Python version is deprecated. How can I reproduce your results when running the source code you provided?

the loss function is inconsistent with that in the paper

Hi, I love what you do and your open-source work, and I have a question about the loss function.
I read your paper, which says the loss functions are MFE and MSFE, but I found that the code is:

    for i in range(logits.get_shape().as_list()[-1]):  # [128, None, 7]
        class_fill_targets = tf.fill(tf.shape(targets), i)  # [?, ?]
        weights_i = tf.cast(tf.equal(targets, class_fill_targets), "float")  # [?, ?]
        loss_is.append(tf.contrib.seq2seq.sequence_loss(logits, targets, weights_i, average_across_batch=False))

I googled this function; it computes a cross-entropy loss. The weights_i parameter is my other question: for example, when i = 7, what do the weights represent?
If I have misunderstood something, could you help me clarify it? Thanks a lot!
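For reference, one way to read the loop is that each entry of loss_is is a per-class error term, which can then be combined into the MFE and MSFE losses defined in the paper. A minimal TF 1.x sketch of that combination (an illustration of the definitions only, not the repository's confirmed implementation; the shapes assume sequence_loss is called with average_across_batch=False as above):

    import tensorflow as tf  # TF 1.x, matching the repo's tf.contrib usage

    per_class_losses = tf.stack(loss_is)            # [n_classes, batch]: masked cross-entropy per class
    fe = tf.reduce_mean(per_class_losses, axis=-1)  # FE_i: mean error on class i
    mfe = tf.reduce_mean(fe)                        # MFE  = mean over classes of FE_i
    msfe = tf.reduce_mean(tf.square(fe))            # MSFE = mean over classes of FE_i squared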

Issue with using Beam Search Algorithm

When setting use_beamsearch_decode=True, it raises ValueError: t must have statically known rank.

File "seq2seq_sleep_sleep-EDF.py", line 755, in main
run_program(hparams,FLAGS)
File "seq2seq_sleep_sleep-EDF.py", line 607, in run_program
logits, pred_outputs, loss, optimizer,dec_states = build_whole_model(hparams,char2numY,inputs,targets, dec_inputs, keep_prob_)
File "seq2seq_sleep_sleep-EDF.py", line 385, in build_whole_model
logits, pred_outputs,dec_states = build_network(hparams,char2numY,inputs, dec_inputs, keep_prob_)
File "seq2seq_sleep_sleep-EDF.py", line 295, in build_network
encoder_state = tf.contrib.seq2seq.tile_batch(encoder_state, multiplier=hparams.beam_width)
File "/home/Student/s4518991/.local/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py", line 126, in tile_batch
return nest.map_structure(lambda t_: _tile_batch(t_, multiplier), t)
File "/home/Student/s4518991/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 515, in map_structure
structure[0], [func(*x) for x in entries],
File "/home/Student/s4518991/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 515, in
structure[0], [func(*x) for x in entries],
File "/home/Student/s4518991/.local/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py", line 126, in
return nest.map_structure(lambda t_: _tile_batch(t_, multiplier), t)
File "/home/Student/s4518991/.local/lib/python3.6/site-packages/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py", line 85, in _tile_batch
raise ValueError("t must have statically known rank")
ValueError: t must have statically known rank
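The error comes from tf.contrib.seq2seq.tile_batch requiring every tensor in encoder_state to have a statically known rank. One possible workaround, offered only as an assumption and not a verified fix for this repository, is to give each state tensor an explicit static shape before tiling; num_units below is a hypothetical name for the encoder's hidden size:

    import tensorflow as tf
    from tensorflow.python.util import nest

    def _ensure_static_rank(t, num_units):
        # give tensors with unknown rank an explicit [batch, units] shape
        if t.shape.ndims is None:
            t.set_shape([None, num_units])
        return t

    encoder_state = nest.map_structure(lambda t: _ensure_static_rank(t, num_units), encoder_state)
    encoder_state = tf.contrib.seq2seq.tile_batch(encoder_state, multiplier=hparams.beam_width)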

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 2050: invalid continuation byte

Traceback (most recent call last):
File "prepare_physionet.py", line 226, in
main()
File "prepare_physionet.py", line 110, in main
reader_raw.read_header()
File "/home/tangw/Desktop/SleepEEGNet/SleepEEGNet_tensorflow/dhedfreader.py", line 90, in read_header
self.header = h = edf_header(self.file)
File "/home/tangw/Desktop/SleepEEGNet/SleepEEGNet_tensorflow/dhedfreader.py", line 46, in edf_header
assert f.read(8) == '0 '
File "/home/tangw/anaconda3/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe4 in position 2050: invalid continuation byte
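EDF headers are plain ASCII, but some Sleep-EDF recording fields contain stray non-ASCII bytes, which breaks Python 3's default UTF-8 (or locale) decoding; the 'gbk' variant of this error reported below arises for the same reason. A common workaround, stated here as a suggestion rather than a confirmed fix for this repo, is to open the EDF file with an encoding that accepts every byte value; the variable name is illustrative:

    # in prepare_physionet.py, where the PSG/hypnogram files are opened:
    f = open(edf_fname, 'r', encoding='latin-1')  # latin-1 maps all 256 byte values, so decoding never fails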

parameter settings

Could you disclose the parameter settings used to achieve the performance reported in the paper? The settings in the code only achieve much lower performance.

Verifying on different dataset and packaging

First of all: Great work! Nice to see people working on it and publishing their code open source.

Secondly: I would like to encourage you to verify your results on different datasets.
So far the biggest problem in sleep research is not to find an algorithm that works well on sleep-edfx but one that works 'in the wild'. This can only be verified by running on different datasets as well.
(The Sleep-EDFx is a very old dataset and absolutely overused, creating a bias in itself in AI/Sleep-research.)

For some dataset suggestions see https://github.com/skjerns/AutoSleepScorer#appendix-sleep-datasets ; some of them are easily available, while for others you need to apply (but it should not be difficult).

As a further step, if testing on other datasets shows good generalization, I encourage you to publish some pre-trained weights. That way they could be included in open-source sleep software such as visbrain.

Hi. I have a question.

Hello. I'm a graduate student in Korea.
I would like to reproduce the reported results (accuracy: 84.26%, kappa coefficient: 0.79).
However, changing the number of epochs, changing the batch_size, changing the optimizer type, and turning bidirectional on or off all produce low results.
Please give me some advice. Thank you!

ValueError: use_beamsearch_decode=True

I set use_beamsearch_decode=True and got the following error:
ValueError: t must have statically known rank

Use tf.where in 2.0, which has the same broadcast rule as np.where
Traceback (most recent call last):
File "/home/ubuntu/Code/SleepEEGNet/seq2seq_sleep_sleep-EDF-test1.py", line 772, in
tf.app.run()
File "/home/ubuntu/anaconda3/envs/tf1/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/ubuntu/anaconda3/envs/tf1/lib/python3.7/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/home/ubuntu/anaconda3/envs/tf1/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "/home/ubuntu/Code/SleepEEGNet/seq2seq_sleep_sleep-EDF-test1.py", line 770, in main
run_program(hparams,FLAGS)
File "/home/ubuntu/Code/SleepEEGNet/seq2seq_sleep_sleep-EDF-test1.py", line 622, in run_program
logits, pred_outputs, loss, optimizer,dec_states = build_whole_model(hparams,char2numY,inputs,targets, dec_inputs, keep_prob_)
File "/home/ubuntu/Code/SleepEEGNet/seq2seq_sleep_sleep-EDF-test1.py", line 390, in build_whole_model
logits, pred_outputs,dec_states = build_network(hparams,char2numY,inputs, dec_inputs, keep_prob_)
File "/home/ubuntu/Code/SleepEEGNet/seq2seq_sleep_sleep-EDF-test1.py", line 294, in build_network
encoder_state = tf.contrib.seq2seq.tile_batch(encoder_state, multiplier=hparams.beam_width)
File "/home/ubuntu/anaconda3/envs/tf1/lib/python3.7/site-packages/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py", line 126, in tile_batch
return nest.map_structure(lambda t_: _tile_batch(t_, multiplier), t)
File "/home/ubuntu/anaconda3/envs/tf1/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 515, in map_structure
structure[0], [func(*x) for x in entries],
File "/home/ubuntu/anaconda3/envs/tf1/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 515, in
structure[0], [func(*x) for x in entries],
File "/home/ubuntu/anaconda3/envs/tf1/lib/python3.7/site-packages/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py", line 126, in
return nest.map_structure(lambda t_: _tile_batch(t_, multiplier), t)
File "/home/ubuntu/anaconda3/envs/tf1/lib/python3.7/site-packages/tensorflow/contrib/seq2seq/python/ops/beam_search_decoder.py", line 85, in _tile_batch
raise ValueError("t must have statically known rank")
ValueError: t must have statically known rank

ImportError: No module named layers.core

Hi, MousaviSajad. Thanks for contributing this code.

Could I ask which version of TensorFlow was used for this project? I got the following error while running this script.

python seq2seq_sleep_sleep-EDF.py --data_dir data_2013/eeg_fpz_cz --output_dir output_2013 --n_folds 20

The error info:
File "seq2seq_sleep_sleep-EDF.py", line 17, in
from tensorflow.python.layers.core import Dense
ImportError: No module named layers.core

Best wishes
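For context, tensorflow.python.layers.core is importable on recent TF 1.x releases, and the repo's use of tf.contrib also rules out TF 2.x (tf.contrib was removed in 2.0). A quick, generic check of the installed version (not an officially stated requirement from the authors):

    import tensorflow as tf

    print(tf.__version__)                               # the tf.contrib / seq2seq code implies a 1.x release
    from tensorflow.python.layers.core import Dense     # importable on TF 1.x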

ValueError: need at least one array to concatenate

Hello,
following https://github.com/genaris/deepsleepnet I made some changes to run this code on Python 3.7. After resolving some warnings about package compatibility, this error remains:

(Python f) F:\PhD\5th semester\Deep Learning\project\Python f>python seq2seq_sleep_sleep-EDF.py --data_dir data_2013/eeg_fpz_cz --output_dir output_2013 --n_folds 6
2021-02-11 14:47:47.456383

========== [Fold-0] ==========

Load training set:

Load Test set:

Training set: n_subjects=0
Number of examples = 0
Traceback (most recent call last):
File "seq2seq_sleep_sleep-EDF.py", line 757, in
tf.app.run()
File "F:\PhD\5th semester\Deep Learning\project\Python f\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
_sys.exit(main(argv))
File "seq2seq_sleep_sleep-EDF.py", line 755, in main
run_program(hparams,FLAGS)
File "seq2seq_sleep_sleep-EDF.py", line 500, in run_program
X_train, y_train, X_test, y_test = data_loader.load_data(seq_len=hparams.max_time_step)
File "F:\PhD\5th semester\Deep Learning\project\Python f\dataloader.py", line 160, in load_data
self.print_n_samples_each_class(np.hstack(label_train),self.classes)
File "F:\PhD\5th semester\Deep Learning\project\Python f\lib\site-packages\numpy\core\shape_base.py", line 340, in hstack
return _nx.concatenate(arrs, 1)
ValueError: need at least one array to concatenate

It seems that the data is not being loaded. I would appreciate it if you could let me know the solution.
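Since the log shows n_subjects=0 and Number of examples = 0, the loader most likely found no .npz files under the given --data_dir (for example because prepare_physionet.py was not run, or the files sit in a different folder). A generic sanity check, not part of the repository:

    import glob, os

    data_dir = "data_2013/eeg_fpz_cz"                      # the path passed to --data_dir
    npz_files = glob.glob(os.path.join(data_dir, "*.npz"))
    print(len(npz_files), "npz files found")               # should be > 0 after running prepare_physionet.py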

The overall accuracy is calculated as binary classification

the acc is calculated as:
ACC = (TP + TN) / (TP + FP + FN + TN)
ACC_macro = np.mean(ACC)

Which means that if

y_true = [0, 1, 2]
y_predict = [0, 0, 0]

then the per-class (one-vs-rest) accuracy is 33.3% for class 0 and 66.6% for classes 1 and 2, so the macro average is about 55.5% even though only one of the three predictions is correct.

Could you please give some reference for the correctness of this calculation?
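To make the discrepancy concrete, here is a small generic NumPy/scikit-learn illustration (not the repository's evaluation script) comparing the per-class one-vs-rest accuracy used above with the conventional multi-class accuracy:

    import numpy as np
    from sklearn.metrics import accuracy_score, multilabel_confusion_matrix

    y_true = np.array([0, 1, 2])
    y_pred = np.array([0, 0, 0])

    # one 2x2 confusion matrix per class: [[TN, FP], [FN, TP]]
    mcm = multilabel_confusion_matrix(y_true, y_pred)
    per_class_acc = (mcm[:, 0, 0] + mcm[:, 1, 1]) / mcm.sum(axis=(1, 2))  # (TN + TP) / total
    print(per_class_acc)                    # [0.333 0.667 0.667]
    print(per_class_acc.mean())             # 0.556 -- the macro value produced by ACC_macro
    print(accuracy_score(y_true, y_pred))   # 0.333 -- conventional multi-class accuracy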

inter patient paradigm

Hello Sajad,

thanks for sharing your awesome project, I appreciate it! I ran your code for 20-fold CV, and it appears that the train and test files are NOT from different individuals. In my log example below, individuals 405 and 412 appear in both the training and the test set. This corresponds to the intra-patient paradigm, which leads to higher scores.

In your paper for SleepEEGNet you claim

"comparisons presented in this study are based on the inter-patient paradigm"

You defined the inter-patient paradigm as

"the epochs sets for test and training come from different individuals"

Could you please clarify?

Regards
Farna

Load training set:
Loading data_2013/eeg_fpz_cz\SC4091E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4042E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4141E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4142E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4062E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4022E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4181E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4072E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4131E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4041E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4012E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4162E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4081E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4152E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4182E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4192E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4172E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4092E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4021E0.npz ...
Loading data_2013/eeg_fpz_cz\SC**412**2E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4112E0.npz ...
Loading data_2013/eeg_fpz_cz\SC**405**1E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4001E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4071E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4061E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4082E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4191E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4032E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4002E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4011E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4111E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4151E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4161E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4171E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4101E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4102E0.npz ...
Loading data_2013/eeg_fpz_cz\SC4031E0.npz ...
 
Load Test set:
Loading data_2013/eeg_fpz_cz\SC**405**2E0.npz ...
Loading data_2013/eeg_fpz_cz\SC**412**1E0.npz ...
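For reference, a minimal sketch of an inter-patient (subject-wise) split that keeps all recordings of a subject on the same side, grouping on the subject number encoded in the Sleep-EDF file name SC4ssNE0; this is an illustration only, not the repository's n_folds splitting logic:

    import os
    from glob import glob
    from sklearn.model_selection import GroupKFold

    files = sorted(glob(os.path.join("data_2013/eeg_fpz_cz", "*.npz")))
    subjects = [os.path.basename(f)[3:5] for f in files]   # "SC4ssNE0.npz" -> subject "ss"

    gkf = GroupKFold(n_splits=20)
    for fold, (train_idx, test_idx) in enumerate(gkf.split(files, groups=subjects)):
        train_subj = {subjects[i] for i in train_idx}
        test_subj = {subjects[i] for i in test_idx}
        assert not train_subj & test_subj                  # no individual appears in both sets
        print("fold", fold, len(train_idx), "train files,", len(test_idx), "test files")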

UnicodeDecodeError: 'gbk' codec can't decode byte 0xe4 in position 2050: illegal multibyte sequence

I get the following when I run the code:

Traceback (most recent call last):
File "D:/report/SleepEEGNet-master/prepare_physionet.py", line 226, in
main()
File "D:/report/SleepEEGNet-master/prepare_physionet.py", line 110, in main
reader_raw.read_header()
File "D:\report\SleepEEGNet-master\dhedfreader.py", line 90, in read_header
self.header = h = edf_header(self.file)
File "D:\report\SleepEEGNet-master\dhedfreader.py", line 49, in edf_header
h['local_subject_id'] = f.read(80).strip()
UnicodeDecodeError: 'gbk' codec can't decode byte 0xe4 in position 2050: illegal multibyte sequence

Why is the output length fixed?

From what I know, a seq2seq model's output length is variable, but when I print the pred_out shape, I find that every predicted output is (batch, 11, 3000). Can you help me understand this? I am quite confused.

Memory limited

Hi,
thanks for your paper and code, which helped me a lot.
I tried to run the code, but it stopped at
'X_train, y_train, X_test, y_test = data_loader.load_data(seq_len=hparams.max_time_step)'
with a message that I don't have enough memory.
My PC runs Ubuntu 18.04 with 16 GB of RAM, and I also set a 16 GB swap area, but the problem persists.
Is there any way to solve this without installing more memory?
Thanks
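One possible mitigation, offered as a generic suggestion rather than a change endorsed by the authors: the extracted signals are float64 by default, so down-casting each subject's arrays to float32 before they are stacked roughly halves peak memory. The "x"/"y" keys are assumed to match what prepare_physionet.py writes:

    import numpy as np

    def load_subject_float32(npz_path):
        with np.load(npz_path) as f:
            x = f["x"].astype(np.float32)   # EEG epochs
            y = f["y"].astype(np.int32)     # sleep-stage labels
        return x, y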
