
yerevann / mimic3-benchmarks

773 stars · 39 watchers · 321 forks · 16.9 MB

Python suite to construct benchmark machine learning datasets from the MIMIC-III 💊 clinical database.

Home Page: https://arxiv.org/abs/1703.07771

License: MIT License

Python 100.00%
machine-learning benchmark clinical-data deep-learning

mimic3-benchmarks's People

Contributors

ast0414, gitter-badger, hrant-khachatrian, hrayrhar, jgc128, kwonoh, natny, partizanos, saqibm128, turambar, vadim0x60


mimic3-benchmarks's Issues

Import of mimic3benchmark module fails

Trying to run through the readme on Ubuntu:

~/git/mimic3-benchmarks$ python scripts/extract_subjects.py /data/mimic3/csv/ /data/mimic3/csv/
Traceback (most recent call last):
  File "scripts/extract_subjects.py", line 5, in <module>
    from mimic3benchmark.mimic3csv import *
ImportError: No module named mimic3benchmark.mimic3csv

It works if I move extract_subjects.py to the main folder; the scripts probably need relative imports, or the repository root needs to be on PYTHONPATH.
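For reference, a minimal workaround in the same spirit (another user further down this page solves it by exporting PYTHONPATH): put the repository root on sys.path at the top of the script, before the package import. A hedged sketch, assuming the script lives in scripts/ one level below the repo root:

import os
import sys

# make the mimic3benchmark package importable when running
# scripts/extract_subjects.py from the repository root
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from mimic3benchmark.mimic3csv import *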

item_id_to_variable_map, variable_ranges.csv inconsistencies

Currently, variable names are inconsistent between the two files. For example, itemid_to_variable_map.csv has a variable called "Heart Rate" while variable_ranges.csv has a variable called "Heart rate". This may not seem like much, but it is enough for the clean functions and range-checking functions to fail to recognize the same variable; a new column called "Heart rate" is created and populated solely with the impute value (i.e., the assumed normal heart rate of 86 bpm).

Because of #25, the issue is not too important in the current implementation, but it should be considered if remove_outliers_for_variables is ever run on the data.
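For reference, a hedged consistency check between the two resource files; the VARIABLE column names below are assumptions, so adjust them to the actual headers:

import pandas as pd

var_map = pd.read_csv('itemid_to_variable_map.csv')
ranges = pd.read_csv('variable_ranges.csv')
map_names = set(var_map['VARIABLE'].dropna())    # column name is an assumption
range_names = set(ranges['VARIABLE'].dropna())   # column name is an assumption

# report names that differ only by letter case, e.g. "Heart Rate" vs "Heart rate"
for name in sorted(map_names - range_names):
    hits = [r for r in range_names if r.lower() == name.lower()]
    if hits:
        print('case mismatch:', repr(name), '<->', hits)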

Issues with running logistic regression python script

Hello,

I used the latest code to re-process the benchmarks, but I'm still getting an error. This is what I get when I try to run logistic regression for decompensation and length-of-stay:

(ICU_env) oggi2@oggi2-Precision-Tower-7910:/media/oggi2/DATA2/kgp_new/mimic3-benchmarks/mimic3models/length_of_stay/logistic$ python2 -u main_cf.py
Namespace(features='all', period='all')
==> reading data and extracting features
Traceback (most recent call last):
  File "main_cf.py", line 70, in <module>
    (train_X, train_y, train_actual) = read_and_extract_features(train_reader, chunk_size)
  File "main_cf.py", line 54, in read_and_extract_features
    ret = common_utils.read_chunk(reader, read_chunk_size)
  File "/media/oggi2/DATA2/kgp_new/mimic3-benchmarks/mimic3models/common_utils.py", line 31, in read_chunk
    ret = reader.read_next()
  File "/media/oggi2/DATA2/kgp_new/mimic3-benchmarks/mimic3benchmark/readers.py", line 35, in read_next
    return self.read_example(to_read_index)
  File "/media/oggi2/DATA2/kgp_new/mimic3-benchmarks/mimic3benchmark/readers.py", line 196, in read_example
    raise ValueError("Index must be from 0 (inclusive) to number of lines (exclusive).")
ValueError: Index must be from 0 (inclusive) to number of lines (exclusive).

And this is the error that I get for logistic regression on in-hospital mortality and phenotyping:

(ICU_env) oggi2@oggi2-Precision-Tower-7910:/media/oggi2/DATA2/kgp_new/mimic3-benchmarks/mimic3models/phenotyping/logistic$ python2 -u main.py
Namespace(features='all', period='all')
==> reading data and extracting features
Traceback (most recent call last):
  File "main.py", line 46, in <module>
    (train_X, train_y) = read_and_extract_features(train_reader)
  File "main.py", line 36, in read_and_extract_features
    ret = common_utils.read_chunk(reader, reader.get_number_of_examples())
  File "/media/oggi2/DATA2/kgp_new/mimic3-benchmarks/mimic3models/common_utils.py", line 36, in read_chunk
    data["header"] = data["header"][0]
KeyError: 'header'

Thanks!

Modifying on a copy

Running preprocessing.clean_events, specifically this line

events.VALUE.ix[idx] = clean_fn(events.ix[idx])

gives back an error suggesting that we may be modifying a copy of events instead of events itself. The function returns events, the original parameter passed in; however, if we modify a copy instead, then the original events object is essentially untouched during this process.

To summarize, this may be problematic, as it suggests that the clean functions may not actually return their results correctly.
http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
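A minimal sketch of the usual fix, per the pandas docs above and assuming idx is a boolean mask or index over events: collapse the chained .ix indexing into a single .loc assignment so the write is guaranteed to hit the original frame:

# hedged rewrite of the flagged line; .loc writes to `events` itself, not a copy
events.loc[idx, 'VALUE'] = clean_fn(events.loc[idx])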

validate_events fails for subjects without events.csv files extracted.

For some of the subjects who don't have an events.csv file extracted, the validate_events step fails.
Error message:
Traceback (most recent call last):
  File "scripts/validate_events.py", line 94, in <module>
    main()
  File "scripts/validate_events.py", line 48, in main
    dtype={'HADM_ID': str, "ICUSTAY_ID": str})
  File "/usr/lib64/python2.7/dist-packages/pandas/io/parsers.py", line 655, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/usr/lib64/python2.7/dist-packages/pandas/io/parsers.py", line 405, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/usr/lib64/python2.7/dist-packages/pandas/io/parsers.py", line 764, in __init__
    self._make_engine(self.engine)
  File "/usr/lib64/python2.7/dist-packages/pandas/io/parsers.py", line 985, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/usr/lib64/python2.7/dist-packages/pandas/io/parsers.py", line 1605, in __init__
    self._reader = parsers.TextReader(src, **kwds)
  File "pandas/_libs/parsers.pyx", line 394, in pandas._libs.parsers.TextReader.__cinit__ (pandas/_libs/parsers.c:4209)
  File "pandas/_libs/parsers.pyx", line 710, in pandas._libs.parsers.TextReader._setup_parser_source (pandas/_libs/parsers.c:8873)
IOError: File data/root/78819/events.csv does not exist
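A hedged sketch of a guard for this case, to go inside the per-subject loop of validate_events.py (subjects_root_path comes from the script's arguments; subject is a hypothetical loop variable):

import os
import pandas as pd

events_path = os.path.join(subjects_root_path, subject, 'events.csv')
if not os.path.exists(events_path):
    continue  # skip subjects for which no events.csv was extracted
events = pd.read_csv(events_path, dtype={'HADM_ID': str, 'ICUSTAY_ID': str})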

[Error] "No such file '../../data/multitask/train_listfile.csv' "

Hi,

First of all, great work! We are looking forward to contributing our own benchmarks and tests!

Regarding the error I'm receiving: it appears that I successfully ran the scripts throughout the README.

However, when I try to run one of the models such as:
python -u main.py --network lstm_2layer --dim 512 --mode train --batch_size 8 --log_every 30

I receive the following error:
IOError: [Errno 2] No such file or directory: '../../data/multitask/train_listfile.csv'

I checked to make sure that I am in the mimic3models/phenotyping/ directory. Also, as the error states, I do not have the listed file inside the mimic3-benchmarks-master/data directory.

Here is the I/O from the terminal if that helps:
bwimbp:mimic3-benchmarks-master bwi$ source activate py27
(py27) bwimbp:mimic3-benchmarks-master bwi$ export PYTHONPATH=$PYTHONPATH:/Users/bwi/Documents/AD-LSTM-Benchmark/mimic3-benchmarks-master
(py27) bwimbp:mimic3-benchmarks-master bwi

$ python scripts/extract_subjects.py /Users/bwi/Documents/AD-LSTM-Benchmark/MIMIC/physionet.org/works/MIMICIIIClinicalDatabaseDemo/files/version_1_4 data/root/
START: 136 129 100
int64
REMOVE ICU TRANSFERS: 126 120 94
REMOVE MULTIPLE STAYS PER ADMIT: 115 115 90
/Users/bwi/anaconda/envs/py27/lib/python2.7/site-packages/pandas/core/indexing.py:179: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._setitem_with_indexer(indexer, value)
REMOVE PATIENTS AGE < 18: 114 114 89
SUBJECT 89 of 89...DONE!
SUBJECT 89 of 89...DONE!
processing CHARTEVENTS: ROW 100000 of 263201376...last write (104) 667 rows for processing CHARTEVENTS: ROW 200000 of 263201376...last write (298) 25438 rows foprocessing CHARTEVENTS: ROW 300000 of 263201376...last write (72(py27) bwimbp:mimic3-benchmarks-master bwi$ python scripts/validate_events.py data/root/
Namespace(subjects_root_path='data/root/')
processed 1 / 89
('n_events', 747793, 'emptyhadm', 13184, 'noicustay', 45336, 'recovered', 45336, 'couldnotrecover', 0, 'icustaymissinginstays', 1200, 'nohadminstay', 86481)
(py27) bwimbp:mimic3-benchmarks-master bwi$ python scripts/extract_episodes_from_subjects.py data/root/
Subject 10006: reading...got 1 stays, 19 diagnoses, 1164 events...cleaning and converting to time series.../Users/bwi/anaconda/envs/py27/lib/python2.7/site-packages/pandas/core/indexing.py:179: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._setitem_with_indexer(indexer, value)
/Users/bwi/Documents/AD-LSTM-Benchmark/mimic3-benchmarks-master/mimic3benchmark/preprocessing.py:157: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
v.ix[idx] = np.nan
/Users/bwi/Documents/AD-LSTM-Benchmark/mimic3-benchmarks-master/mimic3benchmark/preprocessing.py:149: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
v.ix[idx] = np.nan
extracting separate episodes... 206504 DONE!
Subject 10011: reading...got 1 stays, 6 diagnoses, 12486 events...cleaning and converting to time series...extracting separate episodes... 232110 DONE!
Subject 10013: reading...got 1 stays, 8 diagnoses, 2348 events...cleaning and converting to time series...extracting separate episodes... 264446 DONE!
Subject 10017: reading...got 1 stays, 14 diagnoses, 2063 events...cleaning and converting to time series...extracting separate episodes... 204881 DONE!
Subject 10019: reading...got 1 stays, 12 diagnoses, 2717 events...cleaning and converting to time series...extracting separate episodes... 228977 DONE!
Subject 10026: reading...got 1 stays, 7 diagnoses, 3909 events...cleaning and converting to time series...extracting separate episodes... 277021 DONE!
Subject 10027: reading...got 1 stays, 12 diagnoses, 10688 events...cleaning and converting to time series...extracting separate episodes... 286020 DONE!
Subject 10029: reading...got 1 stays, 21 diagnoses, 3381 events...cleaning and converting to time series...extracting separate episodes... 226055 DONE!
Subject 10032: reading...got 1 stays, 8 diagnoses, 3215 events...cleaning and converting to time series...extracting separate episodes... 267090 DONE!
Subject 10033: reading...got 1 stays, 12 diagnoses, 1221 events...cleaning and converting to time series...extracting separate episodes... 254543 DONE!
Subject 10035: reading...got 1 stays, 3 diagnoses, 2019 events...cleaning and converting to time series...extracting separate episodes... 296804 DONE!
Subject 10036: reading...got 1 stays, 12 diagnoses, 1735 events...cleaning and converting to time series...extracting separate episodes... 227834 DONE!
Subject 10038: reading...got 1 stays, 20 diagnoses, 3873 events...cleaning and converting to time series...extracting separate episodes... 235482 DONE!
Subject 10040: reading...got 1 stays, 9 diagnoses, 3780 events...cleaning and converting to time series...extracting separate episodes... 272047 DONE!
Subject 10042: reading...got 1 stays, 9 diagnoses, 10130 events...cleaning and converting to time series.../Users/bwi/anaconda/envs/py27/lib/python2.7/site-packages/pandas/core/generic.py:2999: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self[name] = value
extracting separate episodes... 258147 DONE!
Subject 10043: reading...got 1 stays, 8 diagnoses, 2555 events...cleaning and converting to time series...extracting separate episodes... 266122 DONE!
Subject 10044: reading...got 1 stays, 9 diagnoses, 4902 events...cleaning and converting to time series...extracting separate episodes... 270154 DONE!
Subject 10046: reading...got 1 stays, 3 diagnoses, 1979 events...cleaning and converting to time series...extracting separate episodes... 213289 DONE!
Subject 10056: reading...got 1 stays, 8 diagnoses, 1305 events...cleaning and converting to time series...extracting separate episodes... 285789 DONE!
Subject 10059: reading...got 2 stays, 21 diagnoses, 14133 events...cleaning and converting to time series...extracting separate episodes... 215460/Users/bwi/Documents/AD-LSTM-Benchmark/mimic3-benchmarks-master/mimic3benchmark/subject.py:41: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
events['HOURS'] = (events.CHARTTIME - dt).apply(lambda s: s / np.timedelta64(1, 's')) / 60./60
248755 DONE!
Subject 10061: reading...got 1 stays, 9 diagnoses, 25884 events...cleaning and converting to time series...extracting separate episodes... 223177 DONE!
Subject 10064: reading...got 1 stays, 4 diagnoses, 505 events...cleaning and converting to time series...extracting separate episodes... 231809 DONE!
Subject 10067: reading...got 1 stays, 8 diagnoses, 73 events...cleaning and converting to time series...extracting separate episodes... 236674 DONE!
Subject 10069: reading...got 1 stays, 8 diagnoses, 16658 events...cleaning and converting to time series...extracting separate episodes... 290490 DONE!
Subject 10074: reading...got 1 stays, 11 diagnoses, 851 events...cleaning and converting to time series...extracting separate episodes... 224021 DONE!
Subject 10076: reading...got 1 stays, 15 diagnoses, 11258 events...cleaning and converting to time series...extracting separate episodes... 201006 DONE!
Subject 10088: reading...got 3 stays, 55 diagnoses, 10799 events...cleaning and converting to time series...extracting separate episodes... 256345 249695 277403 DONE!
Subject 10089: reading...got 1 stays, 5 diagnoses, 3066 events...cleaning and converting to time series...extracting separate episodes... 246080 DONE!
Subject 10090: reading...got 1 stays, 7 diagnoses, 1949 events...cleaning and converting to time series...extracting separate episodes... 295741 DONE!
Subject 10093: reading...got 1 stays, 10 diagnoses, 1539 events...cleaning and converting to time series...extracting separate episodes... 251573 DONE!
Subject 10094: reading...got 2 stays, 19 diagnoses, 10284 events...cleaning and converting to time series...extracting separate episodes... 243600 273347 DONE!
Subject 10098: reading...got 1 stays, 5 diagnoses, 1400 events...cleaning and converting to time series...extracting separate episodes... 262670 DONE!
Subject 10101: reading...got 1 stays, 7 diagnoses, 1086 events...cleaning and converting to time series...extracting separate episodes... 293280 DONE!
Subject 10102: reading...got 1 stays, 12 diagnoses, 3135 events...cleaning and converting to time series...extracting separate episodes... 223870 DONE!
Subject 10104: reading...got 1 stays, 8 diagnoses, 934 events...cleaning and converting to time series...extracting separate episodes... 204201 DONE!
Subject 10106: reading...got 1 stays, 5 diagnoses, 2091 events...cleaning and converting to time series...extracting separate episodes... 217960 DONE!
Subject 10111: reading...got 1 stays, 9 diagnoses, 10494 events...cleaning and converting to time series...extracting separate episodes... 263934 DONE!
Subject 10112: reading...got 1 stays, 9 diagnoses, 4694 events...cleaning and converting to time series...extracting separate episodes... 224063 DONE!
Subject 10114: reading...got 1 stays, 9 diagnoses, 3502 events...cleaning and converting to time series...extracting separate episodes... 234989 DONE!
Subject 10117: reading...got 1 stays, 5 diagnoses, 982 events...cleaning and converting to time series...extracting separate episodes... 214861 DONE!
Subject 10119: reading...got 2 stays, 14 diagnoses, 17327 events...cleaning and converting to time series...extracting separate episodes... 247686 205589 DONE!
Subject 10124: reading...got 1 stays, 16 diagnoses, 4053 events...cleaning and converting to time series...extracting separate episodes... 261764 DONE!
Subject 10126: reading...got 1 stays, 13 diagnoses, 88206 events...cleaning and converting to time series...extracting separate episodes... 249805 DONE!
Subject 10130: reading...got 1 stays, 8 diagnoses, 2124 events...cleaning and converting to time series...extracting separate episodes... 241562 DONE!
Subject 10132: reading...got 1 stays, 9 diagnoses, 1181 events...cleaning and converting to time series...extracting separate episodes... 292910 DONE!
Subject 40124: reading...got 2 stays, 14 diagnoses, 7570 events...cleaning and converting to time series...extracting separate episodes... 279554 269173 DONE!
Subject 40177: reading...got 1 stays, 19 diagnoses, 1445 events...cleaning and converting to time series...extracting separate episodes... 285750 DONE!
Subject 40204: reading...got 1 stays, 14 diagnoses, 855 events...cleaning and converting to time series...extracting separate episodes... 285369 DONE!
Subject 40277: reading...got 1 stays, 10 diagnoses, 1539 events...cleaning and converting to time series...extracting separate episodes... 219013 DONE!
Subject 40286: reading...got 1 stays, 10 diagnoses, 2006 events...cleaning and converting to time series...extracting separate episodes... 238399 DONE!
Subject 40310: reading...got 1 stays, 17 diagnoses, 8757 events...cleaning and converting to time series...extracting separate episodes... 204132 DONE!
Subject 40456: reading...got 1 stays, 9 diagnoses, 2240 events...cleaning and converting to time series...extracting separate episodes... 242790 DONE!
Subject 40503: reading...got 1 stays, 14 diagnoses, 1068 events...cleaning and converting to time series...extracting separate episodes... 293429 DONE!
Subject 40595: reading...got 1 stays, 24 diagnoses, 12066 events...cleaning and converting to time series...extracting separate episodes... 276601 DONE!
Subject 40601: reading...got 1 stays, 11 diagnoses, 2564 events...cleaning and converting to time series...extracting separate episodes... 279529 DONE!
Subject 40612: reading...got 1 stays, 11 diagnoses, 4008 events...cleaning and converting to time series...extracting separate episodes... 231005 DONE!
Subject 40655: reading...got 1 stays, 18 diagnoses, 1163 events...cleaning and converting to time series...extracting separate episodes... 220016 DONE!
Subject 40687: reading...got 1 stays, 14 diagnoses, 1698 events...cleaning and converting to time series...extracting separate episodes... 279183 DONE!
Subject 41795: reading...got 2 stays, 38 diagnoses, 17160 events...cleaning and converting to time series...extracting separate episodes... 216185 293178 DONE!
Subject 41914: reading...got 1 stays, 31 diagnoses, 17864 events...cleaning and converting to time series...extracting separate episodes... 256338 DONE!
Subject 41976: reading...got 14 stays, 251 diagnoses, 41135 events...cleaning and converting to time series...extracting separate episodes... 285272 205170 253931 234541 265505 285353 263095 213315 209797 242680 280943 291067 216493 267267 DONE!
Subject 41983: reading...got 1 stays, 16 diagnoses, 2427 events...cleaning and converting to time series...extracting separate episodes... 283875 DONE!
Subject 42033: reading...got 1 stays, 7 diagnoses, 1005 events...cleaning and converting to time series...extracting separate episodes... 256542 DONE!
Subject 42066: reading...got 1 stays, 10 diagnoses, 5326 events...cleaning and converting to time series...extracting separate episodes... 244243 DONE!
Subject 42075: reading...got 1 stays, 17 diagnoses, 18326 events...cleaning and converting to time series...extracting separate episodes... 298685 DONE!
Subject 42135: reading...got 2 stays, 34 diagnoses, 30076 events...cleaning and converting to time series...extracting separate episodes... 210164 281609 DONE!
Subject 42199: reading...got 1 stays, 15 diagnoses, 6311 events...cleaning and converting to time series...extracting separate episodes... 274509 DONE!
Subject 42231: reading...got 1 stays, 13 diagnoses, 1177 events...cleaning and converting to time series...extracting separate episodes... 254635 DONE!
Subject 42275: reading...got 1 stays, 16 diagnoses, 880 events...cleaning and converting to time series...extracting separate episodes... 290478 DONE!
Subject 42292: reading...got 1 stays, 18 diagnoses, 1302 events...cleaning and converting to time series...extracting separate episodes... 277238 DONE!
Subject 42302: reading...got 1 stays, 23 diagnoses, 1005 events...cleaning and converting to time series...extracting separate episodes... 251281 DONE!
Subject 42321: reading...got 1 stays, 18 diagnoses, 2645 events...cleaning and converting to time series...extracting separate episodes... 201204 DONE!
Subject 42346: reading...got 2 stays, 40 diagnoses, 2136 events...cleaning and converting to time series...extracting separate episodes... 223285 279721 DONE!
Subject 42367: reading...got 1 stays, 19 diagnoses, 38700 events...cleaning and converting to time series...extracting separate episodes... 250305 DONE!
Subject 42412: reading...got 1 stays, 21 diagnoses, 1841 events...cleaning and converting to time series...extracting separate episodes... 241992 DONE!
Subject 42430: reading...got 1 stays, 10 diagnoses, 2919 events...cleaning and converting to time series...extracting separate episodes... 210474 DONE!
Subject 42458: reading...got 1 stays, 6 diagnoses, 897 events...cleaning and converting to time series...extracting separate episodes... 219307 DONE!
Subject 43779: reading...got 1 stays, 11 diagnoses, 1004 events...cleaning and converting to time series...extracting separate episodes... 229194 DONE!
Subject 43798: reading...got 1 stays, 15 diagnoses, 9854 events...cleaning and converting to time series...extracting separate episodes... 243229 DONE!
Subject 43827: reading...got 1 stays, 14 diagnoses, 1889 events...cleaning and converting to time series...extracting separate episodes... 243238 DONE!
Subject 43879: reading...got 1 stays, 8 diagnoses, 1246 events...cleaning and converting to time series...extracting separate episodes... 264258 DONE!
Subject 43881: reading...got 2 stays, 34 diagnoses, 3360 events...cleaning and converting to time series...extracting separate episodes... 214180 243123 DONE!
Subject 43909: reading...got 1 stays, 9 diagnoses, 232 events...cleaning and converting to time series...extracting separate episodes... 297782 DONE!
Subject 43927: reading...got 1 stays, 11 diagnoses, 1865 events...cleaning and converting to time series...extracting separate episodes... 290513 DONE!
Subject 44083: reading...got 3 stays, 14 diagnoses, 8196 events...cleaning and converting to time series...extracting separate episodes... 265615 282640 286428 DONE!
Subject 44154: reading...got 1 stays, 8 diagnoses, 1246 events...cleaning and converting to time series...extracting separate episodes... 217724 DONE!
Subject 44212: reading...got 1 stays, 23 diagnoses, 55533 events...cleaning and converting to time series...extracting separate episodes... 239396 DONE!
Subject 44222: reading...got 1 stays, 11 diagnoses, 1130 events...cleaning and converting to time series...extracting separate episodes... 238186 DONE!
Subject 44228: reading...got 1 stays, 11 diagnoses, 5257 events...cleaning and converting to time series...extracting separate episodes... 217992 DONE!
(py27) bwimbp:mimic3-benchmarks-master bwi$ cd mimic3models
(py27) bwimbp:mimic3models bwi$ cd phenotyping
(py27) bwimbp:phenotyping bwi$ python -u main.py --network lstm_2layer --dim 512 --mode train --batch_size 8 --log_every 30
Namespace(batch_norm=True, batch_size=8, dim=512, dropout=0.0, epochs=100, l1=0, l2=0, load_state='', log_every=30, mode='train', network='lstm_2layer', prefix='', save_every=1, shuffle=True, small_part=False, timestep='0.8')
Traceback (most recent call last):
File "main.py", line 48, in
listfile='../../data/phenotyping/train_listfile.csv')
File "/Users/bwi/Documents/AD-LSTM-Benchmark/mimic3-benchmarks-master/mimic3benchmark/readers.py", line 242, in init
Reader.init(self, dataset_dir, listfile)
File "/Users/bwi/Documents/AD-LSTM-Benchmark/mimic3-benchmarks-master/mimic3benchmark/readers.py", line 13, in init
with open(listfile_path, "r") as lfile:
IOError: [Errno 2] No such file or directory: '../../data/phenotyping/train_listfile.csv'
(py27) bwimbp:phenotyping bwi$ cd ..
(py27) bwimbp:mimic3models bwi$ cd mult*
(py27) bwimbp:multitask bwi$ python -u main.py --network lstm --dim 1024 --mode train --batch_size 8 --log_every 30 --ihm_C 0.02
Namespace(batch_norm=True, batch_size=8, decomp_C=1.0, dim=1024, dropout=0.0, epochs=100, ihm_C=0.02, imputation='previous', l1=0, l2=0, load_state='', log_every=30, los_C=1.0, mode='train', nbins=10, network='lstm', partition='custom', ph_C=1.0, prefix='', save_every=1, shuffle=True, small_part=False, timestep='1.0')
Traceback (most recent call last):
File "main.py", line 58, in
listfile='../../data/multitask/train_listfile.csv')
File "/Users/bwi/Documents/AD-LSTM-Benchmark/mimic3-benchmarks-master/mimic3benchmark/readers.py", line 304, in init
Reader.init(self, dataset_dir, listfile)
File "/Users/bwi/Documents/AD-LSTM-Benchmark/mimic3-benchmarks-master/mimic3benchmark/readers.py", line 13, in init
with open(listfile_path, "r") as lfile:
IOError: [Errno 2] No such file or directory: '../../data/multitask/train_listfile.csv'
(py27) bwimbp:multitask bwi$

Thank you for any pointers in the right direction!

My best,
Michael
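A note for anyone hitting the same wall: extract_subjects.py, validate_events.py, and extract_episodes_from_subjects.py only populate data/root/; the per-task directories and their train_listfile.csv files are produced by the task-generation scripts plus the train/validation split. A hedged sketch of the missing steps, assuming the phenotyping and multitask scripts follow the same naming pattern as scripts/create_in_hospital_mortality.py used elsewhere on this page:

python scripts/create_phenotyping.py data/root/ data/phenotyping/
python scripts/create_multitask.py data/root/ data/multitask/
python -m mimic3models.split_train_val data/phenotyping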

Add benchmark dataset to derived data repository

We just launched a repo to host derived data from MIMIC.

A static snapshot of the code is hosted here: https://physionet.org/physiotools/mimic-code/
A static snapshot of the data is hosted on PhysioNetWorks - everyone who has access to MIMIC will have access to the data: https://physionet.org/works/MIMICIIIDerivedDataRepository/

It would be good to make a GitHub release of the code and upload that zip plus the generated data to the PNW repository; this should make it easier for people to use the benchmarks. It would also be worth tracking this repo with Zenodo so that releases are automatically assigned a DOI.

KeyError while running python script

Hello,

I'm also trying to run the logistic regression Python script on the in-hospital-mortality benchmark data. However, it is giving me a KeyError on 'capillary refill rate'. Does it have something to do with the data?

(ICU_env) oggi2@oggi2-Precision-Tower-7910:/media/oggi2/DATA2/cim_icu/icu_mimic3/mimic3-benchmarks/mimic3models/in_hospital_mortality/logistic$ python2 -u main.py --l2 --C 0.001
Namespace(C=0.001, epochs=100, features='all', l2=True, period='all')
==> reading data and extracting features
Traceback (most recent call last):
  File "main.py", line 48, in <module>
    (train_X, train_y) = read_and_extract_features(train_reader)
  File "main.py", line 42, in read_and_extract_features
    X = common_utils.extract_features_from_rawdata(chunk, header, args.period, args.features)
  File "/media/oggi2/DATA2/cim_icu/icu_mimic3/mimic3-benchmarks/mimic3models/common_utils.py", line 25, in extract_features_from_rawdata
    data = [convert_to_dict(X, header, channel_info) for X in chunk]
  File "/media/oggi2/DATA2/cim_icu/icu_mimic3/mimic3-benchmarks/mimic3models/common_utils.py", line 16, in convert_to_dict
    if (len(channel_info[channel]['possible_values']) != 0):
KeyError: 'capillary refill rate'

Thank you!
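The traceback suggests the channel name read from the data header ('capillary refill rate') and the key in channel_info disagree only in letter case, much like the Heart Rate / Heart rate issue above. A hedged sketch of a defensive lookup inside convert_to_dict, assuming channel_info keys are capitalized like "Capillary refill rate":

# case-insensitive lookup; channel and channel_info come from convert_to_dict
channel_lookup = {key.lower(): value for key, value in channel_info.items()}
info = channel_lookup[channel.lower()]  # raises KeyError only for truly unknown names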

step: python scripts/extract_episodes_from_subjects.py data/root/ failed.

Hi,

I tried to follow your steps one by one and got an error:

Subject 48304: reading...got 1 stays, 18 diagnoses, 3331 events...cleaning and converting to time series...Exception in clean_events: clean_fio2 '>' not supported between instances of 'str' and 'float'


values:      SUBJECT_ID  HADM_ID  ICUSTAY_ID           CHARTTIME  ITEMID VALUE  \
131       48304   162550      273042 2128-04-01 04:00:00  223835    60   
215       48304   162550      273042 2128-03-31 20:00:00  223835   100   
266       48304   162550      273042 2128-04-01 08:00:00  223835    60   
310       48304   162550      273042 2128-04-01 11:00:00  223835    50   
327       48304   162550      273042 2128-03-31 22:00:00  223835    60   
343       48304   162550      273042 2128-04-01 00:00:00  223835    60   
395       48304   162550      273042 2128-04-01 12:00:00  223835    40   
464       48304   162550      273042 2128-04-01 16:00:00  223835    40   
521       48304   162550      273042 2128-04-02 05:00:00  223835    40   
561       48304   162550      273042 2128-04-02 08:00:00  223835    40   
591       48304   162550      273042 2128-04-01 21:00:00  223835    40   
647       48304   162550      273042 2128-04-02 09:00:00  223835   100   
655       48304   162550      273042 2128-04-02 10:00:00  223835    40   
716       48304   162550      273042 2128-04-02 01:00:00  223835    40   
773       48304   162550      273042 2128-04-02 16:00:00  223835    40   

    VALUEUOM                  VARIABLE           MIMIC_LABEL  
131           Fraction inspired oxygen  Inspired O2 Fraction  
215           Fraction inspired oxygen  Inspired O2 Fraction  
266           Fraction inspired oxygen  Inspired O2 Fraction  
310           Fraction inspired oxygen  Inspired O2 Fraction  
327           Fraction inspired oxygen  Inspired O2 Fraction  
343           Fraction inspired oxygen  Inspired O2 Fraction  
395           Fraction inspired oxygen  Inspired O2 Fraction  
464           Fraction inspired oxygen  Inspired O2 Fraction  
521           Fraction inspired oxygen  Inspired O2 Fraction  
561           Fraction inspired oxygen  Inspired O2 Fraction  
591           Fraction inspired oxygen  Inspired O2 Fraction  
647           Fraction inspired oxygen  Inspired O2 Fraction  
655           Fraction inspired oxygen  Inspired O2 Fraction  
716           Fraction inspired oxygen  Inspired O2 Fraction  
773           Fraction inspired oxygen  Inspired O2 Fraction

I am on Python 3.6, which may be the problem. Which Python version do you use?
When I retry this step, I don't get the same error; it simply does not add the other episodes as far as I can see (I only have episode 7320).
I executed the following steps hoping it would at least partly work (I only realized later that I had only one episode). Maybe that's the problem?

I would appreciate any hints :) 
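One hedged hint: on Python 3, CHARTEVENTS values arrive as strings, so the numeric '>' comparison inside clean_fio2 fails. Coercing VALUE to numeric first is the usual fix; a minimal sketch (the percentage-to-fraction rescaling rule is an assumption, based on the FiO2 discussion elsewhere on this page):

import pandas as pd

v = pd.to_numeric(events.VALUE, errors='coerce')  # strings -> floats, bad values -> NaN
is_pct = v > 1.0                    # values recorded as percentages, e.g. 60 or 100
v.loc[is_pct] = v[is_pct] / 100.0   # rescale to fractions, e.g. 0.6 or 1.0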

No default for phenotype_definitions

python extract_subjects.py /data/mimic3/csv/ /data/mimic3/csv/ fails:

Traceback (most recent call last):
  File "extract_subjects.py", line 60, in <module>
    phenotypes = add_hcup_ccs_2015_groups(diagnoses, yaml.load(open(args.phenotype_definitions, 'r')))
TypeError: coercing to Unicode: need string or buffer, NoneType found

I need to specify the phenotype definitions for it to work: python extract_subjects.py /data/mimic3/csv /data/mimic3/csv -p resources/hcup_ccs_2015_definitions.yaml. I think it would be fair to use this resource file as the default when there is no user input.
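A hedged sketch of that default, assuming extract_subjects.py parses its arguments with argparse; the relative location of resources/ is also an assumption:

import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--phenotype_definitions', '-p',
                    # default to the bundled definitions, resolved relative to this script
                    default=os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                         'resources', 'hcup_ccs_2015_definitions.yaml'),
                    help='YAML file with the HCUP CCS 2015 groupings')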

List Files

I have one small recommendation: would it be possible to list the .csv files needed?

cannot import name _time_distributed_dense

When trying to run the multitask model I get an import error regarding _time_distributed_dense. I believe it was removed in Keras 2.0.0, as per this Stack Overflow post:
https://stackoverflow.com/questions/45631235/importerror-cannot-import-name-timedistributeddense-in-keras

File "main.py", line 5, in <module>
    from mimic3models import keras_utils
  File "/content/mimic3-benchmarks/mimic3models/keras_utils.py", line 11, in <module>
    from keras.layers.recurrent import _time_distributed_dense
ImportError: cannot import name _time_distributed_dense
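Two common workarounds, hedged: pin Keras below 2.0, or, where the helper was only used to project every timestep through a dense layer, switch to the public wrapper. A minimal sketch of the latter (x and num_units are placeholders; custom layers that called the helper inside their own call() need more surgery):

from keras.layers import Dense, TimeDistributed

y = TimeDistributed(Dense(num_units))(x)  # applies the same Dense to every timestep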

Fraction of inspired oxygen is different after python3 support commit.

#23 introduced some differences in the created datasets. The difference is present only in the Fraction of inspired oxygen column (about 50 stays affected). All the differences have the same form: before the commit the value was 0.00x, and after the commit it is 0.x.
The difference is probably caused by this change: d1345ff#diff-1d490cd0d31a8ee697543d2ab47815e8L141. It seems this change actually fixes something, because 0.00x values for fraction of inspired oxygen are very strange; usually it is 0.x.

No dataset created

All the scripts execute successfully, but when I create the dataset, there is no data in any of the folders.
Specifically, I run

python scripts/create_in_hospital_mortality.py data/root/ data/in-hospital-mortality/

The script outputs 0 records, and all the listfiles are empty.

Does anyone know what might be going wrong?

I see plenty of files and records under data/root/, so I assume it is this last step that goes wrong.

mimic3benchmark/mimic3csv.py repeated/copied lines

Lines 189 and onward are repeated; moreover, they appear to have been copied from inside a for loop but indented so that they sit outside the loop. As a result, this portion of the code may not work.

Shouldn't line 184 and 185 be placed AFTER line 186 and line 187?

In fact, when the code reaches this section I get errors about rowno being used before assignment.

Issue with the data

Hi,
I am new to this area. Could you kindly suggest how to download the MIMIC data? Is it free of cost?

On Windows?

Hrant et al., thanks so much for writing this amazing gateway paper. I am very new to machine learning. Has anyone tried executing this code on Windows yet? I've had multiple issues (from unzipping on Windows, to Theano on Windows, etc.), but most were fixable with a powerful search engine and common sense. However, I am not sure which of those is failing me now:
I finished training the 2-layer LSTM phenotyping model on my personal instance of MIMIC-III for 30 epochs and have the various states stored in the state folder. Do I now rename one of them to best_model.state and drop it in the phenotyping folder? I did so, but got an EOFError when I tried to cPickle.load them. Kindly help!

Minor issue in preprocessing demographics

I believe the following is a better way of processing the demographics:

e_map = {'ASIAN': 1,
         'BLACK': 2,
         'CARIBBEAN ISLAND': 2,
         'HISPANIC': 3,
         'SOUTH AMERICAN': 3,
         'WHITE': 4,
         'MIDDLE EASTERN': 4,
         'PORTUGUESE': 4,
         'AMERICAN INDIAN': 5,
         'NATIVE HAWAIIAN': 6,
         'UNABLE TO OBTAIN': 0,
         'PATIENT DECLINED TO ANSWER': 0,
         'UNKNOWN': 0,
         '': 0}

Your preprocessing ignores American Indians and Native Hawaiians, and also does not treat Caribbean Islanders, South Americans, or Middle Easterners properly.

Class weights

I ran your main script in test mode with the best parameters for the standard LSTM. I noticed that the metrics for the two classes were very different. Did you not apply class weights? If not, why not?

Levels of Granularity Inconsistency

It is sometimes hard to follow the exact granularity at which each stage of the data analysis works. For example, split_train_test.py works at the granularity of subjects to create the test and train partitions, whereas the validation partitioning occurs at the level of episodes.

In addition, validate_events.py works on the assumption that each icustay_id is associated with one hadm_id. I ran the following SQL query on the query builder from MIT:

SELECT count(distinct(subject_id, hadm_id, icustay_id)) FROM icustays where hadm_id in (SELECT hadm_id from (SELECT hadm_id, count(hadm_id) from icustays group by hadm_id having count(hadm_id) > 1) as foo)

There are at least 7006 hospital admissions where multiple ICU stays are associated with a single hadm_id. validate_events.py currently assumes a one-to-one mapping between hadm_id and icustay_id, meaning that it will remove events unnecessarily due to what it believes to be inconsistent icustay_ids.
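For reference, a hedged version of the same check in pandas against the ICUSTAYS table (the CSV path is an assumption):

import pandas as pd

stays = pd.read_csv('ICUSTAYS.csv')
stays_per_admission = stays.groupby('HADM_ID')['ICUSTAY_ID'].nunique()
print((stays_per_admission > 1).sum(), 'admissions with more than one ICU stay')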

itemids

Hello again. I was trying to run python extract_subjects.py. However, events don't seem to be processed unless the itemids_file flag is set to the location of a CSV of itemids.

Is this missing documentation, or is there a problem with how I ran extract_subjects.py?

F1 scores

Hello again,

I have noticed that in the file metrics.py there is code to compute precision and recall. However, you seem to be converting the inputs to binary arrays:
metrics.precision_score(y_true > 0.5, predictions > 0.5, average="micro")
If I need to compute F1 scores, is this the way to go? Why can't I use the raw values of y and ŷ?

Thanks!
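For reference, a hedged answer sketch: precision, recall, and F1 are defined over hard label assignments, so probabilistic outputs have to be thresholded first; raw scores only feed ranking metrics such as AUROC. Computing F1 in the same style as the quoted metrics.py line:

from sklearn import metrics

# threshold probabilities into hard labels, then compute micro-averaged F1
f1 = metrics.f1_score(y_true > 0.5, predictions > 0.5, average='micro')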

CSV file names

When I download the MIMIC files from PhysioNetWorks, the CSVs have names like PATIENTS.csv rather than PATIENTS_DATA_TABLE.csv. Does this mean some other preprocessing should happen before running extract_subjects?

remove_outliers_for_variable

I cannot find an instance where remove_outliers_for_variable is actually used in the current code base. This is unfortunate, as there is an excellent dataset of valid ranges for variables that is not integrated into the codebase.

Is this actually the case, or did I miss something? I did a full-text search of my cloned repo for the exact function name but cannot find any call site; I also see that read_variable_ranges is imported but never called.

why did you move from lasagne to keras?

What improvements did you notice? I also saw that the LSTM shrank from 512 to 256 units. Can you please tell me why, so I don't have to go through the code in depth?

speeding up training times

I'm training the models on an Azure VM with 64 cores available, but training only uses one core from what I can tell. Is there an option to make the training multi-core?

Update README

Start filling out details of README to make it easier for folks to build the benchmarks, as well as contribute.

Duplicate values in _timeseries files

Almost all timeseries files generated by extract_episodes_from_subjects.py contain duplicate rows. Can you confirm this is a bug? We can ignore them or try to fix the code.

An example from subject_id = 11926:
(screenshot of the duplicate rows omitted)
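Until the root cause is found, a hedged workaround is to drop exact duplicates after loading (the file path here is hypothetical):

import pandas as pd

ts = pd.read_csv('data/root/11926/episode1_timeseries.csv')
ts = ts.drop_duplicates()  # removes rows that are identical in every column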

Generator Error(s) when trying to train multitask model

I have been getting a variety of errors when attempting to train the multitask models. The first one was this:

  File "main.py", line 141, in <module>
    shuffle=True)
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3models/multitask/utils.py", line 27, in __init__
    ret = common_utils.read_chunk(reader, N)
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3models/common_utils.py", line 31, in read_chunk
    ret = reader.read_next()
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3benchmark/readers.py", line 35, in read_next
    return self.read_example(to_read_index)
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3benchmark/readers.py", line 336, in read_example
    (X, header) = self._read_timeseries(name)
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3benchmark/readers.py", line 306, in _read_timeseries
    return (np.stack(ret), header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/shape_base.py", line 353, in stack
    raise ValueError('all input arrays must have the same shape')
ValueError: all input arrays must have the same shape

I figured it might be a few bad files, so I added a try/except statement to skip those. That seemed to solve that problem, but then I got another error:

  File "main.py", line 141, in <module>
    shuffle=True)
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3-benchmarks/mimic3-benchmarks/mimic3models/multitask/utils.py", line 45, in __init__
    self._preprocess_single(Xs[i], ts[i], ihms[i], decomps[i], loss[i], phenos[i])
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3-benchmarks/mimic3-benchmarks/mimic3models/multitask/utils.py", line 68, in _preprocess_single
    X = self.discretizer.transform(X, end=max_time)[0]
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3-benchmarks/mimic3-benchmarks/mimic3models/preprocessing.py", line 216, in transform
    write(data, bin_id, channel, row[j], begin_pos)
  File "/content/mimic3-benchmarks/mimic3-benchmarks/data/mimic3-benchmarks/mimic3-benchmarks/mimic3-benchmarks/mimic3models/preprocessing.py", line 196, in write
    data[bin_id, begin_pos[channel_id]] = float(value)
ValueError: could not convert string to float: NEG
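A hedged guard for the second error, sketched around the line the traceback points at: non-numeric lab strings such as 'NEG' cannot be discretized as floats, so skip them instead of crashing.

# in mimic3models/preprocessing.py, inside write(); hedged sketch
try:
    data[bin_id, begin_pos[channel_id]] = float(value)
except ValueError:
    pass  # leave values like 'NEG' unset so downstream imputation handles them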

[error]AttributeError: 'int' object has no attribute 'ndim'

Hi,
thanks for sharing the code.
When I use Python 3 to run this model:
python3 -um mimic3models.in_hospital_mortality.main --network mimic3models/keras_models/lstm.py --dim 16 --timestep 1.0 --depth 2 --dropout 0.3 --mode train --batch_size 8 --output_dir mimic3models/in_hospital_mortality/
the following error happens:

==> training
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/media/bigdate/software/MIMIC-III/mimic3-benchmarks/mimic3models/in_hospital_mortality/main.py", line 154, in <module>
    batch_size=args.batch_size)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1593, in fit
    batch_size=batch_size)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1430, in _standardize_user_data
    exception_prefix='target')
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 70, in _standardize_input_data
    data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data]
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 70, in <listcomp>
    data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data]
AttributeError: 'int' object has no attribute 'ndim'

Thinking the preprocessed data might contain some mistake, I re-ran:
python3 -m mimic3benchmark.scripts.create_in_hospital_mortality data/root/ data/in-hospital-mortality/
python3 -m mimic3models.split_train_val data/in-hospital-mortality
but the same error occurred again.
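One hedged guess at the cause: Keras 1.x-era code sometimes passes plain Python lists of labels, which newer Keras versions reject when they probe .ndim. Converting the targets to arrays before fit() is the usual fix (the variable names here are assumptions):

import numpy as np

train_y = np.array(train_y, dtype='int32')  # hypothetical target variables
val_y = np.array(val_y, dtype='int32')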

itemid_to_variable_map.csv status

I note that extract_episodes_from_subjects.py keeps only the variables whose STATUS == 'ready' in itemid_to_variable_map.csv:

var_map = var_map.ix[(var_map.STATUS == 'ready')]

I don't know what the STATUS column in itemid_to_variable_map.csv means.

And I wonder whether I can use some more variables, such as Alanine aminotransferase, whose status is 'verify' in itemid_to_variable_map.csv.

Thanks.
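If you do decide to experiment with the extra variables, a hedged sketch in the same .ix style as the quoted line; the 'verify' rows are presumably mapped but not yet hand-checked, so treat the results with care:

# also keep rows marked 'verify' (assumption: they are mapped but unverified)
var_map = var_map.ix[var_map.STATUS.isin(['ready', 'verify'])]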

Advice for project..

@Harhro94 @Hrant-Khachatrian

I would love your advice on a conference paper that we are working on here at Oak Ridge National Laboratory.

We would love to contribute a time-series anomaly-detection benchmark for ICU events. Basically, the idea is that we would predict what the next "event" will be for each patient stay, including lab values, diagnoses, and demographic information. After the model is fully trained, we would compare this prediction to what actually happened in the time series. If there is a larger-than-expected error, we would flag this event (preferably something specific in the event) as an anomaly. This technique is detailed here.

I imagine this should be fairly straightforward; however, I was curious to get your advice based on your experience creating LSTMs for this dataset in particular.

Thank you!

My best,
Michael

P.S- We will of course be citing your fantastic work, here.

Suggestion

Thank you for the code; it made it a lot easier for me to use the MIMIC data. A couple of suggestions:

  • Maybe add a verbosity parameter, because runs sure take more time with all the printing.
  • A good addition would be a progress bar for each task (I think tqdm does this quite easily); see the sketch below.
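A hedged sketch of the tqdm suggestion, assuming a per-subject loop like the one in extract_episodes_from_subjects.py (process_subject is a hypothetical stand-in for the loop body):

from tqdm import tqdm

for subject_dir in tqdm(subject_directories, desc='Extracting episodes'):
    process_subject(subject_dir)  # the progress bar replaces the per-subject prints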

Can't find zero.normalizer pickle file

Hi, nice work first of all!
After preparing the train and test sets for in-hospital mortality, I got an error like this:
File "E:/PythonProject/mimic3-benchmarks/mimic3models/in_hospital_mortality/main.py", line 58, in <module> normalizer.load_params(normalizer_state)
File "E:\PythonProject\mimic3-benchmarks\mimic3models\preprocessing.py", line 215, in load_params with open(load_file_path, "rb") as load_file:
OSError: [Errno 22] Invalid argument: 'E:/PythonProject/mimic3-benchmarks/mimic3models/in_hospital_mortality\\ihm_ts1.0.input_str:previous.start_time:zero.normalizer'

I followed the instructions in "Building a benchmark", but I didn't get the zero.normalizer file.
Can anyone help me?

Add DB/SQL support

Is there a way to support reading the data directly from a database instead of from CSV files? Our lab hosts the MIMIC-III data in an SQL database, so it would be very helpful for us.

Perhaps an optional argument that takes connection information and passes it to the functions in mimic3csv.py? Or maybe some kind of check for a global config file that sets up the connection to the MIMIC-III database?

I assume that the dataframe returned from the CSV would be the same as the dataframe from pd.read_sql("SELECT * FROM <table>", connection)?
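Roughly, yes; a hedged sketch of the idea, assuming a local PostgreSQL install with the standard mimiciii schema. One caveat: pd.read_sql returns lowercase column names from PostgreSQL, so they may need upper-casing to match what the CSV readers produce.

import pandas as pd
from sqlalchemy import create_engine

# connection string is an assumption; adjust user, password, host, and database
engine = create_engine('postgresql://user:password@localhost:5432/mimic')
patients = pd.read_sql('SELECT * FROM mimiciii.patients', engine)
patients.columns = patients.columns.str.upper()  # align with the CSV column names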

In-Hospital Mortality -- Number of patients in datasets

Hi, I've run your scripts to generate the datasets for the in-hospital-mortality task, but I've found that there are substantially fewer patients in the train/val/test listfiles than reported in the paper. In the paper (Section 2.1, at the end) it's reported that there are 42,276 patients in the dataset. However, the numbers I get are:

3236 test_listfile.csv
14681 train_listfile.csv
3222 val_listfile.csv
21139 total

Is there an error in the paper or have I done something wrong?

Is there a pre-trained model?

Hi,

Is there a pre-trained model? I'm interested in using the hidden states for some other neural network models.

Thanks.

decompensation benchmark python script issue

Hello,

I'm running the Python script for logistic regression on the decompensation benchmark data, and I'm getting this error:

(ICU_env) oggi2@oggi2-Precision-Tower-7910:/media/oggi2/DATA2/cim_icu/icu_mimic3/mimic3-benchmarks/mimic3models/decompensation/logistic$ python2 -u main.py
Namespace(epochs=100, features='all', period='all')
==> reading data and extracting features
Traceback (most recent call last):
  File "main.py", line 58, in <module>
    (train_X, train_y) = read_and_extract_features(train_reader, chunk_size)
  File "main.py", line 44, in read_and_extract_features
    ret = common_utils.read_chunk(reader, read_chunk_size)
  File "/media/oggi2/DATA2/cim_icu/mimic3-benchmarks/mimic3models/common_utils.py", line 31, in read_chunk
    ret = reader.read_next()
  File "/media/oggi2/DATA2/cim_icu/mimic3-benchmarks/mimic3benchmark/readers.py", line 33, in read_next
    return self.read_example(to_read_index)
  File "/media/oggi2/DATA2/cim_icu/mimic3-benchmarks/mimic3benchmark/readers.py", line 84, in read_example
    (X, header) = self._read_timeseries(name, t)
  File "/media/oggi2/DATA2/cim_icu/mimic3-benchmarks/mimic3benchmark/readers.py", line 58, in _read_timeseries
    return (np.stack(ret), header)
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/shape_base.py", line 353, in stack
    raise ValueError('all input arrays must have the same shape')
ValueError: all input arrays must have the same shape

What could be the issue?
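One hedged way to narrow it down: np.stack fails when the rows of a timeseries file have inconsistent widths, so scan for ragged files first (the path is an assumption):

import csv
import glob

for fn in glob.glob('data/decompensation/train/*_timeseries.csv'):
    with open(fn) as f:
        widths = {len(row) for row in csv.reader(f)}
    if len(widths) > 1:
        print(fn, 'has inconsistent row widths:', sorted(widths))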

Range

My research involves exploring computational architectures, in particular neural networks, to find, express, and extract information that improves clinical decision making and patient outcomes by analysing ICU variables.

I am currently working on a subset of the medical data (MIMIC-III) and would like to know if you have the average normal ranges for the variables used in the database. For clarity, the ranges I am referring to are the normal ranges (that is, for patients not in the ICU); for example, the normal range for temperature is 97°F (36.1°C) to 99°F (37.2°C).

I am unable to obtain reliable ranges for all the variables from a single source, and I am hoping to obtain the ranges directly from your team: as you are the source of the data, this reduces any discrepancies.

Negative hours in _timeseries files

The Hours field in _timeseries files sometimes contains negative values. Is this fine? Does it denote measurements taken before some "start" time that might be useful for predictions?

Example: subject_id=11782

(screenshot of the negative Hours rows omitted)
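If the pre-ICU measurements turn out to be unwanted for a given task, a hedged one-line filter (the Hours column name is an assumption and may be spelled differently in your files):

ts = ts[ts['Hours'] >= 0]  # keep only measurements taken after ICU admission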

diagnosis_labels

Hi, thanks for your good work. We want to use your code to process our own EHR dataset.
We would really like to know how the diagnosis_labels list in mimic3-benchmarks-master/mimic3benchmark/preprocessing.py was generated.
Another question: for a different dataset, do we need to regenerate the files in mimic3-benchmarks-master/mimic3benchmark/resources?
Looking forward to your answer. Thanks.
