thuml / time-series-library

A Library for Advanced Deep Time Series Models.

License: MIT License

Python 44.63% Shell 47.57% Jupyter Notebook 7.80%
deep-learning time-series

time-series-library's Introduction

Time Series Library (TSlib)

TSlib is an open-source library for deep learning researchers, especially for deep time series analysis.

We provide a neat code base to evaluate advanced deep time series models or develop your own model, covering five mainstream tasks: long- and short-term forecasting, imputation, anomaly detection, and classification.

🚩News (2024.04) Many thanks for the great work from frecklebars. The famous sequential model Mamba has been included in our library. See this file; note that you need to install mamba_ssm with pip first.
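For reference, a minimal install sketch for the dependency mentioned above (mamba_ssm has its own CUDA and PyTorch requirements, so check its documentation if the install fails):
pip install mamba_ssm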

🚩News (2024.03) Given the inconsistent look-back lengths used across papers, we split the long-term forecasting leaderboard into two categories: Look-Back-96 and Look-Back-Searching. We recommend that researchers read TimeMixer, which includes both look-back settings in its experiments for scientific rigor.

🚩News (2023.10) We added an implementation of iTransformer, which is the state-of-the-art model for long-term forecasting. The official code and complete scripts of iTransformer can be found here.

🚩News (2023.09) We added a detailed tutorial for TimesNet and this library, which is quite friendly to beginners of deep time series analysis.

🚩News (2023.02) We released TSlib as a comprehensive benchmark and code base for time series models, extending our previous GitHub repository Autoformer.

Leaderboard for Time Series Analysis

As of March 2024, the top three models for the five tasks are:

  • Long-term Forecasting (Look-Back-96): 🥇 iTransformer, 🥈 TimeMixer, 🥉 TimesNet
  • Long-term Forecasting (Look-Back-Searching): 🥇 TimeMixer, 🥈 PatchTST, 🥉 DLinear
  • Short-term Forecasting: 🥇 TimesNet, 🥈 Non-stationary Transformer, 🥉 FEDformer
  • Imputation: 🥇 TimesNet, 🥈 Non-stationary Transformer, 🥉 Autoformer
  • Classification: 🥇 TimesNet, 🥈 Non-stationary Transformer, 🥉 Informer
  • Anomaly Detection: 🥇 TimesNet, 🥈 FEDformer, 🥉 Autoformer

Note: We will keep updating this leaderboard. If you have proposed an advanced model, you can send us your paper/code link or raise a pull request. We will add it to this repo and update the leaderboard as soon as possible.

Models compared in this leaderboard. ☑ means that the model's code has already been included in this repo.

  • TimeMixer - TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting [ICLR 2024] [Code].
  • TSMixer - TSMixer: An All-MLP Architecture for Time Series Forecasting [arXiv 2023] [Code]
  • iTransformer - iTransformer: Inverted Transformers Are Effective for Time Series Forecasting [ICLR 2024] [Code].
  • PatchTST - A Time Series is Worth 64 Words: Long-term Forecasting with Transformers [ICLR 2023] [Code].
  • TimesNet - TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis [ICLR 2023] [Code].
  • DLinear - Are Transformers Effective for Time Series Forecasting? [AAAI 2023] [Code].
  • LightTS - Less Is More: Fast Multivariate Time Series Forecasting with Light Sampling-oriented MLP Structures [arXiv 2022] [Code].
  • ETSformer - ETSformer: Exponential Smoothing Transformers for Time-series Forecasting [arXiv 2022] [Code].
  • Non-stationary Transformer - Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting [NeurIPS 2022] [Code].
  • FEDformer - FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting [ICML 2022] [Code].
  • Pyraformer - Pyraformer: Low-complexity Pyramidal Attention for Long-range Time Series Modeling and Forecasting [ICLR 2022] [Code].
  • Autoformer - Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting [NeurIPS 2021] [Code].
  • Informer - Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting [AAAI 2021] [Code].
  • Reformer - Reformer: The Efficient Transformer [ICLR 2020] [Code].
  • Transformer - Attention is All You Need [NeurIPS 2017] [Code].

See our latest paper [TimesNet] for the comprehensive benchmark. We will release a real-time updated online version soon.

Newly added baselines. We will add them to the leaderboard after a comprehensive evaluation.

  • Mamba - Mamba: Linear-Time Sequence Modeling with Selective State Spaces [arXiv 2023] [Code]
  • SegRNN - SegRNN: Segment Recurrent Neural Network for Long-Term Time Series Forecasting [arXiv 2023] [Code].
  • Koopa - Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors [NeurIPS 2023] [Code].
  • FreTS - Frequency-domain MLPs are More Effective Learners in Time Series Forecasting [NeurIPS 2023] [Code].
  • TiDE - Long-term Forecasting with TiDE: Time-series Dense Encoder [arXiv 2023] [Code].
  • FiLM - FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting [NeurIPS 2022][Code].
  • MICN - MICN: Multi-scale Local and Global Context Modeling for Long-term Series Forecasting [ICLR 2023][Code].
  • Crossformer - Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting [ICLR 2023][Code].

Usage

  1. Install Python 3.8. For convenience, execute the following command:
pip install -r requirements.txt
  2. Prepare data. You can obtain the well-preprocessed datasets from [Google Drive] or [Baidu Drive], then place the downloaded data in the folder ./dataset. Here is a summary of supported datasets.

  3. Train and evaluate the model. We provide the experiment scripts for all benchmarks under the folder ./scripts/. You can reproduce the experiment results as in the following examples (a sketch of the underlying run.py call appears after this list):
# long-term forecast
bash ./scripts/long_term_forecast/ETT_script/TimesNet_ETTh1.sh
# short-term forecast
bash ./scripts/short_term_forecast/TimesNet_M4.sh
# imputation
bash ./scripts/imputation/ETT_script/TimesNet_ETTh1.sh
# anomaly detection
bash ./scripts/anomaly_detection/PSM/TimesNet.sh
# classification
bash ./scripts/classification/TimesNet.sh
  4. Develop your own model.
  • Add the model file to the folder ./models. You can follow ./models/Transformer.py. (A minimal skeleton is sketched after this list.)
  • Include the newly added model in the Exp_Basic.model_dict of ./exp/exp_basic.py.
  • Create the corresponding scripts under the folder ./scripts.
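Each script above ultimately wraps a call to run.py. The following is a hedged sketch of such a call for long-term forecasting on ETTh1; the flag names come from the argument dumps quoted in the issues below, while the paths and values are illustrative rather than the exact settings of the official scripts:

# illustrative direct call; see ./scripts/long_term_forecast/ETT_script/ for the real settings
python -u run.py \
  --task_name long_term_forecast --is_training 1 \
  --root_path ./dataset/ETT-small/ --data_path ETTh1.csv \
  --model_id ETTh1_96_96 --model TimesNet --data ETTh1 --features M \
  --seq_len 96 --label_len 48 --pred_len 96 \
  --e_layers 2 --d_layers 1 --enc_in 7 --dec_in 7 --c_out 7 \
  --d_model 32 --d_ff 32 --top_k 5 --des 'Exp' --itr 1

For step 4, a minimal model skeleton is sketched below. It assumes the forward signature visible in the tracebacks quoted later on this page (x_enc, x_mark_enc, x_dec, x_mark_dec) and configs fields matching the run.py arguments; it is a sketch of the convention, not the repo's official template:

# models/MyModel.py -- hedged skeleton of a custom TSlib model
import torch.nn as nn

class Model(nn.Module):
    """A deliberately simple per-channel linear forecaster."""

    def __init__(self, configs):
        super().__init__()
        self.task_name = configs.task_name        # e.g. 'long_term_forecast'
        self.pred_len = configs.pred_len
        # map the look-back window to the prediction horizon, channel by channel
        self.projection = nn.Linear(configs.seq_len, configs.pred_len)

    def forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec, mask=None):
        # x_enc: [batch, seq_len, channels]
        out = self.projection(x_enc.permute(0, 2, 1)).permute(0, 2, 1)
        return out                                # [batch, pred_len, channels]

The new class would then be registered in Exp_Basic.model_dict and driven by a script under ./scripts that mirrors an existing one.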

Citation

If you find this repo useful, please cite our paper.

@inproceedings{wu2023timesnet,
  title={TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis},
  author={Haixu Wu and Tengge Hu and Yong Liu and Hang Zhou and Jianmin Wang and Mingsheng Long},
  booktitle={International Conference on Learning Representations},
  year={2023},
}

Contact

If you have any questions or suggestions, feel free to contact us or describe them in Issues.

Acknowledgement

This project is supported by the National Key R&D Program of China (2021YFB1715200).

This library is constructed based on the following repos:

All the experiment datasets are public, and we obtain them from the following links:

All Thanks To Our Contributors

time-series-library's People

Contributors

alienishi, ayushrakesh, bhargavshirin, cennn, chucktg, eltociear, farukhs52, frecklebars, guokaku, haniejalili, htg17, imanmousaei, jdleaverton, jurij-ch, kashif, lichenyu20, liyiersan, lss-1138, mico3o, mohitd404, rs-labhub, shivam250702, shresthasurav, wenweithu, wuhaixu2016, zdandsomsp


time-series-library's Issues

Transform raw data

Hello, the paper mentions transforming the raw data, but I don't understand how this transformation is done. Could you explain it?

Why not FiLM

FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
https://github.com/DAMO-DI-ML/NeurIPS2022-FiLM
I wonder why there is no comparison with FiLM in your paper. I think this type of orthogonal-basis-projection model is also a pretty good method for long-term time series forecasting.

Out of GPU memory

Hello, after replacing the dataset with my own, I ran into an out-of-GPU-memory problem. The paper mentions a single 24 GB TITAN, while my server has two 12 GB TITANs. Roughly how much GPU memory did your experiments consume?

Is there tf implementation?

Hi,

We would like to try the TimesNet model, but our production environment is based on TensorFlow and TF Serving.
If a TF implementation were also available, that would be great.

ValueError: source code string cannot contain null bytes when running TimesNet_ETTh1.sh

Hello, I set up the environment from the requirements.txt you provided and wanted to run the long-term forecasting experiment on the ETTh1 dataset with your code. I unpacked your dataset into a dataset folder I created; unpacking added an extra all_datasets directory level, so I modified the root_path setting in TimesNet_ETTh1.sh. But when I ran bash ./scripts/long_term_forecast/ETT_script/TimesNet_ETTh1.sh as instructed, I got the following error:
Traceback (most recent call last):
File "run.py", line 4, in
from exp.exp_long_term_forecasting import Exp_Long_Term_Forecast
ValueError: source code string cannot contain null bytes
How can this be resolved?

No module named 'reformer_pytorch'

I failed to run the code because there is no module named 'reformer_pytorch'.
I found there is "from reformer_pytorch import LSHSelfAttention" in SelfAttention_Family.py
and there is no file related to reformer_pytorch.
Could you give me a solution?

Is x_dec processed twice in MICN?

File /storage/thumlbased/TimeSeriesLibrary/models/MICN.py:161, in Model.forecast(self, x_enc, x_mark_enc, x_dec, x_mark_dec)
159 zeros = torch.zeros([x_dec.shape[0], self.pred_len, x_dec.shape[2]], device=x_enc.device)
160 seasonal_init_dec = torch.cat([seasonal_init_enc[:, -self.seq_len:, :], zeros], dim=1)
--> 161 dec_out = self.dec_embedding(seasonal_init_dec, x_mark_dec)

exp_long_term_forecasting.py seems to apply a similar treatment already; is this duplicated?

About reshape

Hello, thank you for the TimesNet network. After reading the paper, I have a question: for a T × C time series, does the reshape treat the C channels as a whole, or are the channels split so that C1, C2, ..., Cn are reshaped separately? (For example, EEG-like signals where the horizontal axis is time points and the vertical axis is different channels.)

adjustment function in utils.tools

Could you please explain the idea of using ground truth labels in predictions before quality estimation?
Does this approach have any target leaks?
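For context, the adjustment helper is commonly understood to implement the standard "point adjustment" evaluation protocol for anomaly detection. The sketch below is a reconstruction under that assumption, not a copy of the repo's code; it also shows where the target-leak concern comes from, since the ground-truth segment boundaries are used at evaluation time:

import numpy as np

def point_adjust(gt, pred):
    # If any point inside a labelled anomaly segment is predicted as anomalous,
    # the whole segment is counted as detected.
    gt, pred = np.asarray(gt).copy(), np.asarray(pred).copy()
    in_detected_segment = False
    for i in range(len(gt)):
        if gt[i] == 1 and pred[i] == 1 and not in_detected_segment:
            in_detected_segment = True
            j = i - 1
            while j >= 0 and gt[j] == 1:  # back-fill the part of the segment before the first hit
                pred[j] = 1
                j -= 1
        elif gt[i] == 0:
            in_detected_segment = False
        if in_detected_segment:
            pred[i] = 1
    return gt, pred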

How to use my own dataset?

First of all, thank you very much for sharing this work. I have a few questions.
1. I want to replace the ETTh dataset with my own dataset (univariate input predicting a univariate target; its features are shown in the screenshot below). What do I need to do?
(screenshot attached)
I tried adjusting some parameters: I changed features from M to S in TimesNet_ETTh1.sh, run.py, and data_loader.py, and then directly dropped my own dataset in, which produced the following error.
(screenshot attached)

How to use the TimesNet code?

Hello, as a student I would like to practice with your published code, and I have a few questions.
1. # long-term forecast bash ./scripts/long_term_forecast/ETT_script/TimesNet_ETTh1.sh
I followed the instructions to run the script, but it didn't seem to respond and showed ERROR:gpu_init.cc(481)] Passthrough is not supported, GL is disabled, ANGLE is.
2. The load-data function in the sktime package does not seem to work. Is there a better solution?
3. I sincerely want to ask whether you could guide me on how to start running the TimesNet model from the paper.

What is the actual meaning of each step in positionEmbedding?

A very nice idea! Could you explain what each step of the positionEmbedding code below actually does?
pe = torch.zeros(max_len, d_model).float()  # table of positional encodings, shape [max_len, d_model]
pe.require_grad = False  # not updated during training
position = torch.arange(0, max_len).float().unsqueeze(1)  # positions 0..max_len-1, shape [max_len, 1]
div_term = (torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model)).exp()  # 1 / 10000^(2i/d_model)
pe[:, 0::2] = torch.sin(position * div_term)  # sine on even dimensions
pe[:, 1::2] = torch.cos(position * div_term)  # cosine on odd dimensions

Extracting periods after transforming (embedding) the data

The raw $T \times C$ time series is first embedded into $T \times d_{model}$, and then period extraction and subsequent operations are performed.
Have you run experiments that extract the periods directly without the embedding? Is the embedding necessary and effective for the model?

The scripts can't reach high performance?

For classification:
python -u run.py --task_name classification --is_training 1 --root_path ./dataset/EthanolConcentration/ --model_id EthanolConcentration --model TimesNet --data UEA --e_layers 3 --batch_size 16 --d_model 32 --d_ff 32 --top_k 3 --des 'Exp' --itr 1 --learning_rate 0.001 --train_epochs 30 --patience 10 --devices 0 --loss Cross Entropy

Namespace(task_name='classification', is_training=1, model_id='EthanolConcentration', model='TimesNet', data='UEA', root_path='./dataset/EthanolConcentration/', data_path='EthanolConcentration_TRAIN.ts', features='M', target='OT', freq='h', checkpoints='./checkpoints/', seq_len=96, label_len=48, pred_len=96, seasonal_patterns='Monthly', mask_rate=0.25, anomaly_ratio=0.25, top_k=3, num_kernels=6, enc_in=7, dec_in=7, c_out=7, d_model=32, n_heads=8, e_layers=3, d_layers=1, d_ff=32, moving_avg=25, factor=1, distil=True, dropout=0.1, embed='timeF', activation='gelu', output_attention=False, num_workers=0, itr=1, train_epochs=30, batch_size=16, patience=10, learning_rate=0.001, des='Exp', loss='CrossEntropy', lradj='type1', use_amp=False, use_gpu=True, gpu=0, use_multi_gpu=False, devices='0', p_hidden_dims=[128, 128], p_hidden_layers=2)
Use GPU: cuda:0

start training : classification_EthanolConcentration_TimesNet_UEA_ftM_sl96_ll48_pl96_dm32_nh8_el3_dl1_df32_fc1_ebtimeF_dtTrue_Exp_0>>>
261
TRAIN 261
263
TEST 263
263
TEST 263
Epoch: 1 cost time: 9.020669221878052
Epoch: 1, Steps: 17 | Train Loss: 7.638 Vali Loss: 4.421 Vali Acc: 0.240 Test Loss: 4.436 Test Acc: 0.240
Validation loss decreased (inf --> -0.239544). Saving model ...
Epoch: 2 cost time: 4.3697826862335205
Epoch: 2, Steps: 17 | Train Loss: 3.181 Vali Loss: 2.245 Vali Acc: 0.259 Test Loss: 2.181 Test Acc: 0.259
Validation loss decreased (-0.239544 --> -0.258555). Saving model ...
Epoch: 3 cost time: 4.335063219070435
Epoch: 3, Steps: 17 | Train Loss: 1.939 Vali Loss: 2.468 Vali Acc: 0.251 Test Loss: 2.484 Test Acc: 0.251
EarlyStopping counter: 1 out of 10
Epoch: 4 cost time: 4.311454772949219
Epoch: 4, Steps: 17 | Train Loss: 2.317 Vali Loss: 3.222 Vali Acc: 0.251 Test Loss: 3.222 Test Acc: 0.251
EarlyStopping counter: 2 out of 10
Epoch: 5 cost time: 4.3489673137664795
Epoch: 5, Steps: 17 | Train Loss: 1.622 Vali Loss: 1.773 Vali Acc: 0.228 Test Loss: 1.793 Test Acc: 0.228
EarlyStopping counter: 3 out of 10
Updating learning rate to 6.25e-05
Epoch: 6 cost time: 4.338350772857666
Epoch: 6, Steps: 17 | Train Loss: 1.099 Vali Loss: 1.676 Vali Acc: 0.274 Test Loss: 1.699 Test Acc: 0.274
Validation loss decreased (-0.258555 --> -0.273764). Saving model ...
Epoch: 7 cost time: 4.292100429534912
Epoch: 7, Steps: 17 | Train Loss: 1.017 Vali Loss: 1.671 Vali Acc: 0.255 Test Loss: 1.643 Test Acc: 0.255
EarlyStopping counter: 1 out of 10
Epoch: 8 cost time: 4.2485511302948
Epoch: 8, Steps: 17 | Train Loss: 1.015 Vali Loss: 1.627 Vali Acc: 0.228 Test Loss: 1.623 Test Acc: 0.228
EarlyStopping counter: 2 out of 10
Epoch: 9 cost time: 4.330702066421509
Epoch: 9, Steps: 17 | Train Loss: 0.980 Vali Loss: 1.650 Vali Acc: 0.289 Test Loss: 1.642 Test Acc: 0.289
Validation loss decreased (-0.273764 --> -0.288973). Saving model ...
Epoch: 10 cost time: 4.375368595123291
Epoch: 10, Steps: 17 | Train Loss: 0.974 Vali Loss: 1.710 Vali Acc: 0.224 Test Loss: 1.715 Test Acc: 0.224
EarlyStopping counter: 1 out of 10
Updating learning rate to 1.953125e-06
Epoch: 11 cost time: 4.352377891540527
Epoch: 11, Steps: 17 | Train Loss: 0.981 Vali Loss: 1.668 Vali Acc: 0.247 Test Loss: 1.680 Test Acc: 0.247
EarlyStopping counter: 2 out of 10
Epoch: 18, Steps: 17 | Train Loss: 0.934 Vali Loss: 1.632 Vali Acc: 0.247 Test Loss: 1.628 Test Acc: 0.247
EarlyStopping counter: 9 out of 10
Epoch: 19 cost time: 4.405331134796143
Epoch: 19, Steps: 17 | Train Loss: 0.949 Vali Loss: 1.649 Vali Acc: 0.247 Test Loss: 1.660 Test Acc: 0.247
EarlyStopping counter: 10 out of 10
Early stopping
testing : classification_EthanolConcentration_TimesNet_UEA_ftM_sl96_ll48_pl96_dm32_nh8_el3_dl1_df32_fc1_ebtimeF_dtTrue_Exp_0<<<
263
TEST 263
test shape: torch.Size([263, 4]) torch.Size([263, 1])
accuracy:0.2889733840304182

python -u run.py --task_name classification --is_training 1 --root_path ./dataset/EthanolConcentration/ --model_id EthanolConcentration --model TimesNet --data UEA --e_layers 3 --batch_size 16 --d_model 32 --d_ff 32 --top_k 3 --des 'Exp' --itr 1 --learning_rate 0.001 --train_epochs 30 --patience 10 --devices 0 --loss CrossEntropy --data_path EthanolConcentration_TRAIN.ts
Args in experiment:
Namespace(task_name='classification', is_training=1, model_id='EthanolConcentration', model='TimesNet', data='UEA', root_path='./dataset/EthanolConcentration/', data_path='EthanolConcentration_TRAIN.ts', features='M', target='OT', freq='h', checkpoints='./checkpoints/', seq_len=96, label_len=48, pred_len=96, seasonal_patterns='Monthly', mask_rate=0.25, anomaly_ratio=0.25, top_k=3, num_kernels=6, enc_in=7, dec_in=7, c_out=7, d_model=32, n_heads=8, e_layers=3, d_layers=1, d_ff=32, moving_avg=25, factor=1, distil=True, dropout=0.1, embed='timeF', activation='gelu', output_attention=False, num_workers=0, itr=1, train_epochs=30, batch_size=16, patience=10, learning_rate=0.001, des='Exp', loss='CrossEntropy', lradj='type1', use_amp=False, use_gpu=True, gpu=0, use_multi_gpu=False, devices='0', p_hidden_dims=[128, 128], p_hidden_layers=2)
Use GPU: cuda:0
261
TRAIN 261
263
Epoch: 3, Steps: 17 | Train Loss: 1.939 Vali Loss: 2.470 Vali Acc: 0.247 Test Loss: 2.487 Test Acc: 0.247
EarlyStopping counter: 1 out of 10
Epoch: 4 cost time: 4.327386856079102
Epoch: 4, Steps: 17 | Train Loss: 2.315 Vali Loss: 3.216 Vali Acc: 0.247 Test Loss: 3.215 Test Acc: 0.247
EarlyStopping counter: 2 out of 10
Epoch: 5 cost time: 4.256535053253174
Epoch: 5, Steps: 17 | Train Loss: 1.625 Vali Loss: 1.791 Vali Acc: 0.232 Test Loss: 1.814 Test Acc: 0.232
EarlyStopping counter: 3 out of 10
Updating learning rate to 6.25e-05
Epoch: 6 cost time: 4.306419372558594
Epoch: 6, Steps: 17 | Train Loss: 1.105 Vali Loss: 1.668 Vali Acc: 0.274 Test Loss: 1.690 Test Acc: 0.274
Validation loss decreased (-0.258555 --> -0.273764). Saving model ...
Epoch: 7 cost time: 4.484087705612183
Epoch: 7, Steps: 17 | Train Loss: 1.013 Vali Loss: 1.669 Vali Acc: 0.262 Test Loss: 1.641 Test Acc: 0.262
EarlyStopping counter: 1 out of 10
Epoch: 8 cost time: 4.230833053588867
Epoch: 8, Steps: 17 | Train Loss: 1.012 Vali Loss: 1.628 Vali Acc: 0.232 Test Loss: 1.624 Test Acc: 0.232
EarlyStopping counter: 2 out of 10
Epoch: 9 cost time: 4.222209453582764
Epoch: 9, Steps: 17 | Train Loss: 0.979 Vali Loss: 1.649 Vali Acc: 0.285 Test Loss: 1.641 Test Acc: 0.285
Validation loss decreased (-0.273764 --> -0.285171). Saving model ...
Epoch: 10 cost time: 4.276477813720703
Epoch: 10, Steps: 17 | Train Loss: 0.974 Vali Loss: 1.710 Vali Acc: 0.224 Test Loss: 1.715 Test Acc: 0.224
EarlyStopping counter: 1 out of 10
Updating learning rate to 1.953125e-06
Epoch: 11 cost time: 4.274434328079224
Epoch: 11, Steps: 17 | Train Loss: 0.981 Vali Loss: 1.668 Vali Acc: 0.247 Test Loss: 1.680 Test Acc: 0.247
EarlyStopping counter: 2 out of 10
Epoch: 12 cost time: 4.337649345397949
Epoch: 12, Steps: 17 | Train Loss: 0.942 Vali Loss: 1.661 Vali Acc: 0.240 Test Loss: 1.635 Test Acc: 0.240
EarlyStopping counter: 3 out of 10
Epoch: 13 cost time: 4.211060523986816
Epoch: 13, Steps: 17 | Train Loss: 0.937 Vali Loss: 1.646 Vali Acc: 0.255 Test Loss: 1.647 Test Acc: 0.255
EarlyStopping counter: 4 out of 10
Epoch: 14 cost time: 4.213907718658447
Epoch: 14, Steps: 17 | Train Loss: 0.942 Vali Loss: 1.641 Vali Acc: 0.255 Test Loss: 1.634 Test Acc: 0.255
EarlyStopping counter: 5 out of 10
Epoch: 15 cost time: 4.28999137878418
Epoch: 15, Steps: 17 | Train Loss: 0.930 Vali Loss: 1.650 Vali Acc: 0.251 Test Loss: 1.651 Test Acc: 0.251
EarlyStopping counter: 6 out of 10
Updating learning rate to 6.103515625e-08
Epoch: 16 cost time: 4.301165580749512
Epoch: 16, Steps: 17 | Train Loss: 0.935 Vali Loss: 1.638 Vali Acc: 0.251 Test Loss: 1.649 Test Acc: 0.251
EarlyStopping counter: 7 out of 10
Epoch: 17 cost time: 4.311629295349121
Epoch: 17, Steps: 17 | Train Loss: 0.927 Vali Loss: 1.644 Vali Acc: 0.251 Test Loss: 1.661 Test Acc: 0.251
EarlyStopping counter: 8 out of 10
Epoch: 18 cost time: 4.277967214584351
Epoch: 18, Steps: 17 | Train Loss: 0.934 Vali Loss: 1.633 Vali Acc: 0.251 Test Loss: 1.628 Test Acc: 0.251
EarlyStopping counter: 9 out of 10
Epoch: 19 cost time: 4.347052574157715
Epoch: 19, Steps: 17 | Train Loss: 0.949 Vali Loss: 1.650 Vali Acc: 0.251 Test Loss: 1.660 Test Acc: 0.251
EarlyStopping counter: 10 out of 10
Early stopping
testing : classification_EthanolConcentration_TimesNet_UEA_ftM_sl96_ll48_pl96_dm32_nh8_el3_dl1_df32_fc1_ebtimeF_dtTrue_Exp_0<<<
263
TEST 263
test shape: torch.Size([263, 4]) torch.Size([263, 1])
accuracy:0.28517110266159695

[Dataset] How to feed my own dataset

Hi, nice job!
I am confused about how to feed my own data into the models, such as TimesNet. Should I write my own dataloader?

Is there any possibility that the library could handle an original pandas DataFrame directly?

TimesNet runs slowly as num_kernels grows

Hi,

Great work and thanks for releasing so many benchmark models.
On my own dataset, I find that the training speed is decent when num_kernels is set to 1. However, when I increase num_kernels to 6 (the default number in the Inception net), training becomes extremely slow. FYI, I use torch 1.11 and an NVIDIA A100 GPU. There are two questions:

  1. Do you have any idea why the Inception net runs slowly with 6 kernels? Does this happen in your tasks?
  2. Can I just use 1 or 2 kernels in time series analysis? I didn't find an ablation study about num_kernels in your paper (maybe I missed it).

ERROR:gpu_init.cc(481)] Passthrough is not supported, GL is disabled, ANGLE is

Hello, I have a few questions.

First, the error in the title: it occurred when I ran TimesNet_ETTh1.sh, as shown in the screenshot below. At the same time, VS Code opened automatically but stayed stuck on that screen.

Second, my torch version is 1.11.1+cu113, and the packages in the requirements installed without problems, although some versions are higher than required and I am not sure whether that matters. I ran TimesNet_ETTh1.sh as described in the documentation without adding any data. I downloaded the dataset you provided, but I am not sure which path it should be placed in, or whether the program locates it automatically. If I want to test my own dataset, which folder should it go in?

Finally, here is how I ran the code; please correct me if anything is wrong, because I really want to get it running. After the preparation was done (environment configured and dataset downloaded), I put everything in the same folder (the path is shown in the screenshot below), and then ran the TimesNet script, but it failed with the error above. I hope you can help; looking forward to your reply.

(screenshots attached)

How to train, test, and forecast my own datasets?

Dear Authors,
Thanks for your work and the open-source code.

Here is a question I would like to ask you in all honesty. My datasets do not contain a 'Date' column because they are long recordings sampled at a certain sample rate (sr). Each recording may only last a few tens of minutes, but it actually contains minutes * 60 * sr samples. How should I apply my own dataset to FEDformer?

I would be grateful if you could answer my questions.
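One possible workaround, sketched here as an assumption rather than an official recommendation, is to synthesize a timestamp column from the sample rate before handing the CSV to the data loaders (file and column names are hypothetical):

import pandas as pd

sr = 100  # samples per second (assumed sample rate)
df = pd.read_csv("my_recording.csv")  # hypothetical file containing only the measured channels
df.insert(0, "date", pd.date_range("2023-01-01", periods=len(df), freq=pd.Timedelta(seconds=1 / sr)))
df.to_csv("my_recording_with_dates.csv", index=False)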

CUDA not available.

EDIT: I resolved this issue by removing "export CUDA_VISIBLE_DEVICES=2" from the bash script. Didn't see that! Sorry for crowding the issue board.

I cannot get this to use the GPU. I can run other pytorch projects on my GPU, but this one does not detect CUDA.

When running the following code in my terminal:

import torch
print("CUDA AVAILABLE: ")
print(torch.cuda.is_available())
print("\nCUDA VERSION: ")
print(torch.version.cuda)
print("\nCUDA DEVICE: ")
print(torch.cuda.get_device_name(torch.cuda.current_device()))

I get the output:

CUDA AVAILABLE:
True

CUDA VERSION:
11.3

CUDA DEVICE:
NVIDIA GeForce RTX 2070 SUPER

If I put that exact same code within run.py directly after importing modules, I get this output:

CUDA AVAILABLE:
False

CUDA VERSION:
11.3

CUDA DEVICE:
Traceback (most recent call last):
  File "run.py", line 17, in <module>
    print(torch.cuda.get_device_name(torch.cuda.current_device()))
  File "C:\Users\admin\anaconda3\envs\TN2\lib\site-packages\torch\cuda\__init__.py", line 481, in current_device
    _lazy_init()
  File "C:\Users\admin\anaconda3\envs\TN2\lib\site-packages\torch\cuda\__init__.py", line 216, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

Why drop_last is set to True when generating the dataloader?

It appears that drop_last is set to True in data_provider.py for both the training and test settings when performing long-term forecasting and imputation tasks. This seems confusing, as it may lead to inaccurate test results when the batch size is large and the dataset is relatively small.
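A small self-contained illustration of the concern (not the repo's data_provider code): with drop_last=True, the final incomplete batch is silently discarded, so some samples never contribute to the test metrics.

import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.arange(10, dtype=torch.float32).unsqueeze(-1))  # 10 samples
loader = DataLoader(ds, batch_size=4, drop_last=True)
print(sum(batch[0].shape[0] for batch in loader))  # prints 8, not 10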

experiment results

I'm using CUDA 11.4 and PyTorch 1.11.0+cu115. With the provided script configurations, my test results are slightly better than FEDformer but worse than DLinear, and cannot match the results in the paper. Could this be caused by the runtime environment and torch version?

Choice of frequency

Hello, in a multivariate time series each individual series should have its own frequency, but from the source code the dominant frequency is selected after averaging twice. Could this affect the selection of the different periods of the time series?

Error in univariate forecasting

Hello, following the approach in #42, I tried to feed my own data into the model for univariate forecasting and got the following error:

Traceback (most recent call last):
  File "/content/drive/MyDrive/Time-Series-Library/run.py", line 145, in <module>
    exp.train(setting)
  File "/content/drive/MyDrive/Time-Series-Library/exp/exp_long_term_forecasting.py", line 135, in train
    outputs = self.model(batch_x, batch_x_mark, dec_inp, batch_y_mark)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/drive/MyDrive/Time-Series-Library/models/TimesNet.py", line 203, in forward
    dec_out = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
  File "/content/drive/MyDrive/Time-Series-Library/models/TimesNet.py", line 112, in forecast
    enc_out = self.enc_embedding(x_enc, x_mark_enc)  # [B,T,C]
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/drive/MyDrive/Time-Series-Library/layers/Embed.py", line 124, in forward
    x = self.value_embedding(
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/drive/MyDrive/Time-Series-Library/layers/Embed.py", line 41, in forward
    x = self.tokenConv(x.permute(0, 2, 1)).transpose(1, 2)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/conv.py", line 255, in forward
    return F.conv1d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
RuntimeError: Given groups=1, weight of size [16, 7, 3], expected input[32, 1, 98] to have 7 channels, but got 1 channels instead

It looks like a model dimension mismatch, but I don't know how to fix it.

Below is my sample data:
(image attached)

By the way, does the raw data need missing-value preprocessing?

Runtime error caused by run.py not being contained within idiom "if __name__ == '__main__':"

Running any of the bash scripts results in the following error:


RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Wrapping the entirety of run.py in the "if __name__ == '__main__':" idiom fixes this issue, as sketched below.
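A hedged sketch of that fix (the main() name is illustrative; the point is simply to guard the module-level code so spawn-based multiprocessing, the default on Windows, can re-import run.py safely):

def main():
    ...  # the argument parsing and experiment loop originally at module level in run.py

if __name__ == '__main__':
    main()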

Is there an example of forecasting multiple time series?

Hello, if my dataset contains multiple time series, for example forecasting future weather for several regions at the same time, is there an example I can refer to? ETTh1 and many of the other datasets seem to contain only a single time series, whereas my dataset is structured like this:
ts_id,date,temperature
1,2022-01-01,23
2,2022-01-01,25
1,2022-01-02,24
2,2022-01-02,26
where each ts_id represents a different time series.
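One way to adapt such long-format data to the single-file, one-column-per-variable layout used by datasets like ETTh1 is to pivot it with pandas. This is only a sketch based on the example rows above; the file names are hypothetical:

import pandas as pd

df = pd.read_csv("weather_long.csv")  # columns: ts_id, date, temperature
wide = df.pivot(index="date", columns="ts_id", values="temperature")
wide.columns = [f"region_{c}" for c in wide.columns]  # one column per series
wide.reset_index().to_csv("weather_wide.csv", index=False)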

Period selection and the length setting for the 1D-to-2D transform

Hello, thank you very much for this solid and inspiring work.
While studying the paper and the code, I ran into the following questions and hope you can help clarify:
Q1: In FFT_for_Period, there is a step frequency_list[0] = 0. What is the purpose/benefit of setting this to 0?
Q2: When transforming from 1D to 2D, the code computes length = (((self.seq_len + self.pred_len) // period) + 1) * period. I don't quite understand why self.pred_len is also included here.
Looking forward to your reply, thanks!
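For readers without the code at hand, the FFT-based period selection being asked about looks roughly like the sketch below, reconstructed from the identifiers in the question; it may differ from the repository's exact implementation:

import torch

def FFT_for_Period(x, k=2):
    # x: [batch, time, channels]
    xf = torch.fft.rfft(x, dim=1)
    # average the amplitude spectrum over batch and channels
    frequency_list = abs(xf).mean(0).mean(-1)
    frequency_list[0] = 0  # zero out the DC component so it is never selected as a "period"
    _, top_list = torch.topk(frequency_list, k)
    top_list = top_list.detach().cpu().numpy()
    period = x.shape[1] // top_list  # convert the dominant frequencies into period lengths
    return period, abs(xf).mean(-1)[:, top_list]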

Confusion about the classification model

When Exp is classification, should the input data be [batch_size, seq_length, channel]?
Why is there an extra x_mark_enc when Exp is classification, and how should it be understood?
If I want to use torch.ones() to generate simple data to test the classification path, how should it be generated?
A bit confused; thanks in advance for the authors' help.
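As a rough illustration of the shapes involved (an assumption about the classification path, not verified against the repo), dummy inputs could be built like this:

import torch

batch, seq_len, channels = 4, 96, 3       # illustrative sizes
x = torch.ones(batch, seq_len, channels)  # series values: [batch_size, seq_length, channel]
x_mark = torch.ones(batch, seq_len)       # padding mask: 1 for real timesteps, 0 for padding
# under this assumption, the classification call would look like model(x, x_mark, None, None)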

experiment results

I tried to reproduce the experiment results on the M4 datasets, but I get pretty bad performance, far from the performance reported in the paper. I simply ran the short-term forecasting scripts for the M4 datasets, so I want to ask whether there is something wrong in the scripts, or how I can get the right results.

About TimesNet

In the forecasting task of TimesNet, a linear layer is used to align the temporal dimension to seq_len + pred_len before the TimesBlock module extracts the intraperiod and interperiod features. If I remove this step, extract the intraperiod and interperiod features directly from the seq_len sequence, and then use a linear layer to output the pred_len sequence, the model performance becomes very poor. How was this design decided, and how should it be explained?

About short-term forecasting

Hello! TimesNet works very well for me on long-term time series forecasting. How can I use my own dataset with the TimesNet model for short-term forecasting?
What exactly do I need to modify in the code for this?

python 3.6 or 3.8

I followed the instructions, installed Python 3.6 on Windows, and tried pip install -r requirements.txt,
but it reported that 1.22.4 Requires-Python >= 3.8.
So I recreated a Python 3.8 environment and successfully installed all the required packages. I'm wondering which version I should install, and whether there will be any problems using Python 3.8?
Thanks

TimesNet classification task

What does "pred_len" mean in TimesBlock? Why are the inputs of TimesBlock padded to a length of "seq_len + pred_len" (not seq_len)?

TimesBlock output is 0 in TimesNet

Hello, when using TimesNet on ECG signals I ran into a problem:
after passing through one TimesBlock layer, the output is 0, so the second TimesBlock cannot correctly apply the Fourier transform to find the period values.
Have you run into a similar situation when working with similar datasets?

Problems running the model on Windows

Hello authors:
After setting up the environment on Windows, the following error occurs:
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

Is there any way to fix this, or does it have to be run on Linux?

about PatchTST

According to the paper, PatchTST has a self-supervised pre-training procedure. Will your team add it soon? Even if not, thank you very much!
