
Releases: WenjieDu/PyPOTS

v0.8.1 🪲 Fix model saving issues

26 Sep 05:56
cc28737

We fixed two model-saving issues that occurred under certain conditions:

  1. calling .save() could overwrite an existing model file even though the argument overwrite defaults to False;
  2. model_saving_strategy="best" did not work: PyPOTS still saved every model that improved on the previous best;
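To illustrate the intended behavior after the fix, here is a minimal sketch (not PyPOTS's actual implementation; the function and class names are hypothetical) of saving that honors overwrite=False and a "best"-only saving strategy:

```python
import os


def save_model(state: bytes, path: str, overwrite: bool = False) -> None:
    # Refuse to clobber an existing file unless the caller opts in.
    if os.path.exists(path) and not overwrite:
        raise FileExistsError(f"{path} exists; pass overwrite=True to replace it")
    with open(path, "wb") as f:
        f.write(state)


class BestKeeper:
    """Save only when the validation loss improves on the best seen so far."""

    def __init__(self) -> None:
        self.best_loss = float("inf")

    def maybe_save(self, loss: float, state: bytes, path: str) -> bool:
        if loss >= self.best_loss:
            return False  # not a new best: the "best" strategy skips saving
        self.best_loss = loss
        save_model(state, path, overwrite=True)
        return True
```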


Full Changelog: v0.8...v0.8.1

v0.8 🚀 New models

13 Sep 09:41

We bring you new models ModernTCN (ICLR 2024), TimeMixer (ICLR 2024), and TEFN in this release ;-)

Kudos to our new contributors Eljas (@eroell) and Tianxiang (@ztxtech)!


Full Changelog: v0.7.1...v0.8

v0.7.1 Fix missing load_specific_dataset()

27 Jul 02:14
3c1d7d0

Previously we removed the pypots.data.load_specific_datasets package since all the preprocessing functions have been gathered and are now managed in BenchPOTS. The removal caused some incompatibility (see #474), so we have added it back in this minor version. It will still be deprecated in the near future, and we encourage users to adopt BenchPOTS for dataset preprocessing, which now supports 170+ public time series datasets. In addition, we

  1. added a visualization function to plot attention-weight maps. 👍 Kudos to Anshu @gugababa for his contribution;
  2. deprecated setup.py and added pyproject.toml to configure the project;
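As a rough idea of what an attention-weight map contains, here is an illustrative numpy sketch (not the actual pypots visualization API):

```python
import numpy as np


def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def attention_map(q: np.ndarray, k: np.ndarray) -> np.ndarray:
    # Scaled dot-product attention weights over time steps: each row sums
    # to 1 and can be rendered as a heatmap, e.g. with matplotlib's imshow.
    d_k = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d_k))
```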

What's Changed

  • Visualize attention matrix in SAITS by @gugababa in #302
  • Add attention map visualization func by @WenjieDu in #475
  • Gather requirements in one dir by @WenjieDu in #477
  • Add toml config and gather dependency files by @WenjieDu in #478
  • Add pyproject.toml, gather dependency files, and fix flake8 with toml config file by @WenjieDu in #480
  • Fix missing load_specific_dataset(), update testing_daily workflow, release v0.7.1 by @WenjieDu in #481


Full Changelog: v0.7...v0.7.1

v0.7 New Algos & Bug Fix

21 Jul 17:22
b103df4

Summary of updates in the v0.7 release:

  1. included ImputeFormer [KDD'24]. Kudos👍 to @tongnie, the author of ImputeFormer;
  2. implemented Lerp (linear interpolation), thanks👍 to @colesussmeier;
  3. added TCN as an imputation model, with the SAITS embedding and training methodology applied;
  4. fixed a minor bug in RevIN for POTS data;
  5. fixed model saving failing when model_saving_strategy is set to "better";
  6. added the function pypots.data.utils.inverse_sliding_window to help restore time series samples sliced by the sliding_window function;
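The windowing round trip can be sketched as follows, assuming non-overlapping windows (an illustrative version only; the actual pypots functions may also handle other stride settings):

```python
import numpy as np


def sliding_window(series: np.ndarray, window: int) -> np.ndarray:
    # Slice a [T, D] series into non-overlapping [n, window, D] samples,
    # dropping any trailing steps that do not fill a whole window.
    n = len(series) // window
    return series[: n * window].reshape(n, window, series.shape[-1])


def inverse_sliding_window(samples: np.ndarray) -> np.ndarray:
    # Concatenate [n, window, D] samples back into a [n * window, D] series.
    n, window, d = samples.shape
    return samples.reshape(n * window, d)
```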


Full Changelog: v0.6...v0.7

v0.6 🔥🪭 Nine New Models

18 Jun 09:12

In v0.4 and v0.5, PyPOTS brought you new models. Now, let's fan🪭 the flame🔥 in v0.6!

  1. Non-stationary Transformer, Pyraformer, Reformer, SCINet, RevIN, Koopa, MICN, TiDE, and StemGNN are included in this new release;
  2. another new PyPOTS Ecosystem library, BenchPOTS, has been released; it supports preprocessing pipelines for 170 public time series datasets for benchmarking machine learning on POTS data;
  3. added the argument verbose to mute all info-level logging;
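The effect of such a verbose switch can be sketched with Python's standard logging module (illustrative only; the flag name is the only thing taken from the release note):

```python
import logging


def get_logger(verbose: bool = True) -> logging.Logger:
    # With verbose=False, raise the threshold so info-level messages are
    # muted while warnings and errors still come through.
    logger = logging.getLogger("pypots_demo")
    logger.setLevel(logging.INFO if verbose else logging.WARNING)
    return logger
```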

👍 Kudos to our new contributor @LinglongQian.

Please refer to the changelog below for more details.

What's Changed

  • Implement Non-stationary Transformer as an imputation model by @WenjieDu in #388
  • Implement Pyraformer as an imputation model by @WenjieDu in #389
  • Add Nonstationary Transformer and Pyraformer, update docs by @WenjieDu in #390
  • Treat keyboard interruption during training as a warning, and update the docs by @WenjieDu in #391
  • Add SCINet modules and implement it as an imputation model by @WenjieDu in #406
  • Add RevIN modules and implement it as an imputation model by @WenjieDu in #407
  • Add Koopa modules and implement it as an imputation model by @WenjieDu in #403
  • Add MICN modules and implement it as an imputation model by @WenjieDu in #401
  • Update docs and references by @WenjieDu in #410
  • Add TiDE modules and implement it as an imputation model by @WenjieDu in #402
  • Add Koopa, SCINet, RevIN, MICN and TiDE, and update the docs by @WenjieDu in #412
  • Add StemGNN modules and implement it as an imputation model by @WenjieDu in #415
  • Add GRU-D as an imputation model by @WenjieDu in #417
  • Update README and docs by @WenjieDu in #420
  • Implement StemGNN and GRU-D as an imputation model by @WenjieDu in #421
  • Update set_random_seed() by @WenjieDu in #423
  • Enable tuning new added models by @WenjieDu in #424
  • ETSformer hyperparameters mismatch during NNI tuning by @LinglongQian in #425
  • Fix ETSformer tuning bug, and release v0.6rc1 by @WenjieDu in #427
  • Add arg verbose to control logging by @WenjieDu in #428
  • Add Reformer as an imputation model by @WenjieDu in #433
  • Add Reformer, add option version to control training log, and add benchpots as a dependency by @WenjieDu in #434
  • Raise the minimum support python version to v3.8 by @WenjieDu in #436
  • Fix linting error by @WenjieDu in #437

Full Changelog: v0.5...v0.6

v0.6 RC

28 May 16:42
e6d7c9f

This pre-release version is for public beta testing.

v0.5 🔥 New Models & Features

06 May 14:22
046c52f

Here is a summary of this new version's changelog:

  1. the modules of iTransformer, FiLM, and FreTS are included in PyPOTS, and all three have been implemented as imputation models in this version;
  2. CSDI is implemented as a forecasting model;
  3. MultiHeadAttention can now work with all attention operators in PyPOTS;
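The idea of a replaceable attention operator can be sketched as follows (a plain-numpy illustration, not PyPOTS's torch implementation):

```python
import numpy as np


def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def vanilla_attention(q, k, v):
    # Standard scaled dot-product attention.
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v


def multi_head_attention(q, k, v, attn_op=vanilla_attention):
    # The attention operator is a plain callable, so variants (e.g.
    # probabilistic-sparse or autocorrelation-style operators) can be
    # swapped in without touching the multi-head wrapper itself.
    return attn_op(q, k, v)
```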

What's Changed

  • Fix failed doc building, fix a bug in gene_random_walk(), and refactor unit testing configs by @WenjieDu in #355
  • Implement CSDI as a forecasting model by @WenjieDu in #354
  • Update the templates by @WenjieDu in #356
  • Implement forecasting CSDI and update the templates by @WenjieDu in #357
  • Update README by @WenjieDu in #359
  • Update docs by @WenjieDu in #362
  • Implement FiLM as an imputation model by @WenjieDu in #369
  • Implement FreTS as an imputation model by @WenjieDu in #370
  • Implement iTransformer as an imputation model by @WenjieDu in #371
  • Add iTransformer, FreTS, FiLM by @WenjieDu in #372
  • Fix failed CI testing on macOS with Python 3.7 by @WenjieDu in #373
  • Add SaitsEmbedding, fix failed CI on macOS with Python3.7, and update docs by @WenjieDu in #374
  • Fix error in gene_random_walk by @WenjieDu in #375
  • Try to import torch_geometric only when init Raindrop by @WenjieDu in #381
  • Enable all attention operators to work with MultiHeadAttention by @WenjieDu in #383
  • Fix a bug in gene_random_walk, import pyg only when initing Raindrop, and make MultiHeadAttention work with all attention operators by @WenjieDu in #384
  • Refactor code and update docstring by @WenjieDu in #385
  • Add the Chinese version of the README file by @Justin0388 in #386
  • Refactor code and update docs by @WenjieDu in #387

New Contributors

We would also like to thank Sijia @phoebeysj, Haitao @CowboyH, and Dewang @aizaizai1989 for their help in polishing the Chinese README.

Full Changelog: v0.4.1...v0.5

v0.4.1 🚧 Refactor&Modularization

17 Apr 16:04
cb1ae37

In this refactoring version, we

  1. applied the SAITS loss function to the imputation models newly added in v0.4 (Crossformer, PatchTST, DLinear, ETSformer, FEDformer, Informer, and Autoformer), and added the arguments MIT_weight and ORT_weight to them so users can balance the multi-task learning;
  2. modularized all neural network models and put their modules into the package pypots.nn.modules;
  3. removed deprecated metric functions (e.g. pypots.utils.metrics.cal_mae, which has been replaced by pypots.utils.metrics.calc_mae);
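The ORT/MIT weighting can be sketched like this (an illustrative numpy version, assuming the usual SAITS definitions of the two loss terms):

```python
import numpy as np


def masked_mae(pred, target, mask):
    # Mean absolute error over positions where mask == 1 only.
    return np.abs(pred - target)[mask == 1].mean()


def saits_loss(recon, target, observed_mask, indicating_mask,
               ORT_weight: float = 1.0, MIT_weight: float = 1.0) -> float:
    # ORT: reconstruction error on originally observed values.
    # MIT: imputation error on artificially masked-out values.
    ort = masked_mae(recon, target, observed_mask)
    mit = masked_mae(recon, target, indicating_mask)
    return ORT_weight * ort + MIT_weight * mit
```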

What's Changed

  • Apply SAITS loss to newly added models and update the docs by @WenjieDu in #346
  • Modularize neural network models by @WenjieDu in #348
  • Modularize NN models, remove deprecated metric funcs, and update docs by @WenjieDu in #349
  • Remove pypots.imputation.locf.modules and add assertions for BTTF by @WenjieDu in #350
  • Test building package during CI by @WenjieDu in #353
  • Avoid the import error MessagePassing not defined by @WenjieDu in #351

Full Changelog: v0.4...v0.4.1

v0.4 🔥 New models

09 Apr 13:46
eb03a15
In this version, we

  1. applied the SAITS embedding strategy to Crossformer, PatchTST, DLinear, ETSformer, FEDformer, Informer, and Autoformer to make them applicable to POTS data as imputation methods;
  2. fixed a bug in the USGAN loss function;
  3. gathered several Transformer embedding methods into the package pypots.nn.modules.transformer.embedding;
  4. added the attribute best_epoch to NN models to record the best epoch number and log it after model training;
  5. made the self-attention operator replaceable in the class MultiHeadAttention for Transformer models;
  6. renamed the argument d_inner of all models in previous versions to d_ffn, for unified argument naming and easier understanding;
  7. removed the deprecated functions save_model() and load_model() from all NN model classes; they are now replaced by save() and load();
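Recording the best epoch boils down to bookkeeping like the following sketch (illustrative only, not PyPOTS's actual training loop):

```python
def best_epoch(val_losses):
    # Return the index of the epoch with the lowest validation loss, which
    # is the kind of value the new best_epoch attribute reports; -1 means
    # no epoch has been evaluated yet.
    best, best_loss = -1, float("inf")
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best, best_loss = epoch, loss
    return best
```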

What's Changed

  • Removing deprecated functions by @WenjieDu in #318
  • Add Autoformer as an imputation model by @WenjieDu in #320
  • Removing deprecated save_model and load_model, adding the imputation model Autoformer by @WenjieDu in #321
  • Simplify MultiHeadAttention by @WenjieDu in #322
  • Add PatchTST as an imputation model by @WenjieDu in #323
  • Renaming d_inner into d_ffn by @WenjieDu in #325
  • Adding PatchTST, renaming d_innner into d_ffn, and refactoring Autofomer by @WenjieDu in #326
  • Add DLinear as an imputation model by @WenjieDu in #327
  • Add ETSformer as an imputation model by @WenjieDu in #328
  • Add Crossformer as an imputation model by @WenjieDu in #329
  • Add FEDformer as an imputation model by @WenjieDu in #330
  • Add Crossformer, Autoformer, PatchTST, DLinear, ETSformer, FEDformer as imputation models by @WenjieDu in #331
  • Refactor embedding package, remove the unused part in Autoformer, and update the docs by @WenjieDu in #332
  • Make the self-attention operator replaceable in Transformer by @WenjieDu in #334
  • Add informer as an imputation model by @WenjieDu in #335
  • Speed up testing procedure by @WenjieDu in #336
  • Add Informer, speed up CI testing, and make self-attention operator replaceable by @WenjieDu in #337
  • debug USGAN by @AugustJW in #339
  • Fix USGAN loss function, and update the docs by @WenjieDu in #340
  • Add the attribute best_epoch to record the best epoch num by @WenjieDu in #342
  • Apply SAITS embedding strategy to new added models by @WenjieDu in #343
  • Release v0.4, apply SAITS embedding strategy to the newly added models, and update README by @WenjieDu in #344

Full Changelog: v0.3.2...v0.4

v0.3.2 🐞 Bugfix

19 Mar 09:24
a0470b2
In this version, we

  1. fixed an issue that stopped us from running Raindrop on multiple CUDA devices;
  2. added Mean and Median as naive imputation methods;
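Conceptually, these naive imputers fill each feature's missing values with that feature's mean or median, e.g. (an illustrative numpy sketch, not the pypots classes themselves):

```python
import numpy as np


def naive_impute(x: np.ndarray, strategy: str = "mean") -> np.ndarray:
    # Fill NaNs feature-by-feature with the column mean or median,
    # computed over the observed values only.
    x = x.copy()
    fill = np.nanmean(x, axis=0) if strategy == "mean" else np.nanmedian(x, axis=0)
    rows, cols = np.where(np.isnan(x))
    x[rows, cols] = fill[cols]
    return x
```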

What's Changed

  • Refactor LOCF, fix Raindrop on multiple cuda devices, and update docs by @WenjieDu in #308
  • Remind how to display the figs rather than invoking plt.show() by @WenjieDu in #310
  • Update the docs and requirements by @WenjieDu in #311
  • Fixing some bugs, updating the docs and requirements by @WenjieDu in #312
  • Make CI workflows only test with Python v3.7 and v3.11 by @WenjieDu in #313
  • Update the docs and release v0.3.2 by @WenjieDu in #314
  • Add mean and median as imputation methods, and update docs by @WenjieDu in #317

Full Changelog: v0.3.1...v0.3.2