update README
Dong Zhou authored and you-n-g committed Jul 30, 2021
1 parent a7c41b6 commit da1f4db
Showing 5 changed files with 58 additions and 44 deletions.
3 changes: 3 additions & 0 deletions examples/benchmarks/README.md
@@ -25,6 +25,7 @@ The numbers shown below demonstrate the performance of the entire `workflow` of
| TCTS (Xueqing Wu, et al.)| Alpha360 | 0.0485±0.00 | 0.3689±0.04| 0.0586±0.00 | 0.4669±0.02 | 0.0816±0.02 | 1.1572±0.30| -0.0689±0.02 |
| Transformer (Ashish Vaswani, et al.)| Alpha360 | 0.0141±0.00 | 0.0917±0.02| 0.0331±0.00 | 0.2357±0.03 | -0.0259±0.03 | -0.3323±0.43| -0.1763±0.07 |
| Localformer (Juyong Jiang, et al.)| Alpha360 | 0.0408±0.00 | 0.2988±0.03| 0.0538±0.00 | 0.4105±0.02 | 0.0275±0.03 | 0.3464±0.37| -0.1182±0.03 |
| TRA (Hengxu Lin, et al.)| Alpha360 | 0.0500±0.00 | 0.3966±0.04 | 0.0594±0.00 | 0.4856±0.03 | 0.1000±0.02 | 1.3425±0.31 | -0.0845±0.02 |

## Alpha158 dataset
| Model Name | Dataset | IC | ICIR | Rank IC | Rank ICIR | Annualized Return | Information Ratio | Max Drawdown |
@@ -43,6 +44,8 @@ The numbers shown below demonstrate the performance of the entire `workflow` of
| TabNet (Sercan O. Arik, et al.)| Alpha158 | 0.0383±0.00 | 0.3414±0.00| 0.0388±0.00 | 0.3460±0.00 | 0.0226±0.00 | 0.2652±0.00| -0.1072±0.00 |
| Transformer (Ashish Vaswani, et al.)| Alpha158 | 0.0274±0.00 | 0.2166±0.04| 0.0409±0.00 | 0.3342±0.04 | 0.0204±0.03 | 0.2888±0.40| -0.1216±0.04 |
| Localformer (Juyong Jiang, et al.)| Alpha158 | 0.0355±0.00 | 0.2747±0.04| 0.0466±0.00 | 0.3762±0.03 | 0.0506±0.02 | 0.7447±0.34| -0.0875±0.02 |
| TRA (Hengxu Lin, et al.)| Alpha158 (with selected 20 features) | 0.0440±0.00 | 0.3592±0.03 | 0.0500±0.00 | 0.4256±0.02 | 0.0747±0.03 | 1.1281±0.49 | -0.0813±0.03 |
| TRA (Hengxu Lin, et al.)| Alpha158 | 0.0474±0.00 | 0.3653±0.03 | 0.0573±0.00 | 0.4494±0.02 | 0.0770±0.02 | 1.1342±0.38 | -0.0852±0.03 |

- The selected 20 features are based on the feature importance of a lightgbm-based model.
- The base model of DoubleEnsemble is LGBM.
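The IC and Rank IC columns above are correlations between model scores and subsequent returns. As a rough illustrative sketch (not qlib's own implementation, which computes these cross-sectionally per trading day and then averages), the two metrics can be computed like this:

```python
import numpy as np

def ic(pred, ret):
    """Information Coefficient: Pearson correlation between scores and returns."""
    return float(np.corrcoef(pred, ret)[0, 1])

def rank_ic(pred, ret):
    """Rank IC: Spearman correlation, i.e. the IC computed on ranks."""
    ranks = lambda a: np.argsort(np.argsort(a))
    return ic(ranks(pred), ranks(ret))

pred = np.array([0.3, -0.1, 0.8, 0.2])     # toy cross-sectional scores
ret = np.array([0.01, -0.02, 0.05, 0.00])  # toy next-period returns
daily_rank_ic = rank_ic(pred, ret)
```

ICIR and Rank ICIR are then the mean of the daily series divided by its standard deviation.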
93 changes: 52 additions & 41 deletions examples/benchmarks/TRA/README.md
@@ -1,53 +1,77 @@
# Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport

This code provides a PyTorch implementation for TRA (Temporal Routing Adaptor), as described in the paper [Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport](http://arxiv.org/abs/2106.12950).
Temporal Routing Adaptor (TRA) is designed to capture multiple trading patterns in the stock market data. Please refer to [our paper](http://arxiv.org/abs/2106.12950) for more details.

* TRA (Temporal Routing Adaptor) is a lightweight module that consists of a set of independent predictors for learning multiple patterns as well as a router to dispatch samples to different predictors.
* We also design a learning algorithm based on Optimal Transport (OT) to obtain the optimal sample to predictor assignment and effectively optimize the router with such assignment through an auxiliary loss term.
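As a toy illustration of this design (hypothetical names and shapes, not the actual `qlib.contrib.model.pytorch_tra` module), the predictor-set-plus-router idea can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 3, 5                         # number of predictors, feature dimension
W = rng.normal(size=(K, D))         # one independent linear predictor per row
router_w = rng.normal(size=(K, D))  # toy router scoring each predictor for a sample

def predict(x):
    """Dispatch sample x to the predictor the router scores highest."""
    k = int(np.argmax(router_w @ x))  # routing decision
    return float(W[k] @ x), k         # prediction and chosen predictor index

x = rng.normal(size=D)
y_hat, chosen = predict(x)
```

In the real model the router is trained jointly with the predictors, with Optimal Transport supplying the target sample-to-predictor assignment through an auxiliary loss.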

If you find our work useful in your research, please cite:
```
@inproceedings{HengxuKDD2021,
author = {Hengxu Lin and Dong Zhou and Weiqing Liu and Jiang Bian},
title = {Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport},
booktitle = {Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery \& Data Mining},
series = {KDD '21},
year = {2021},
publisher = {ACM},
}
@article{yang2020qlib,
title={Qlib: An AI-oriented Quantitative Investment Platform},
author={Yang, Xiao and Liu, Weiqing and Zhou, Dong and Bian, Jiang and Liu, Tie-Yan},
journal={arXiv preprint arXiv:2009.11189},
year={2020}
}
```

## Usage (Recommended)

**Update**: `TRA` has been moved to `qlib.contrib.model.pytorch_tra` to support other `Qlib` components like `qlib.workflow` and `Alpha158/Alpha360` dataset.

Please follow the official [doc](https://qlib.readthedocs.io/en/latest/component/workflow.html) to use `TRA` with `workflow`. Here we also provide several example config files:

- `workflow_config_tra_Alpha360.yaml`: running `TRA` with `Alpha360` dataset
- `workflow_config_tra_Alpha158.yaml`: running `TRA` with `Alpha158` dataset (with feature subsampling)
- `workflow_config_tra_Alpha158_full.yaml`: running `TRA` with `Alpha158` dataset (without feature subsampling)

The performances of `TRA` are reported in [Benchmarks](https://github.com/microsoft/qlib/tree/main/examples/benchmarks).

## Usage (Not Maintained)

This section is used to reproduce the results in the paper.

### Requirements

- Install `Qlib` main branch

### Running

We attach our running scripts for the paper in `run.sh`.

You can directly run from Qlib command `qrun`:
```
qrun configs/config_alstm.yaml
```

Setting different parameters is also allowed. See codes in `example.py`:
```
python example.py --config_file configs/config_alstm.yaml
```

Here we trained TRA on a pretrained backbone model, so the `*_init.yaml` configs should be run before TRA's scripts.

### Results

After running the scripts, you can find result files in path `./output`:

* `info.json` - config settings and result metrics.
* `log.csv` - running logs.
* `model.bin` - the model parameter dictionary.
* `pred.pkl` - the prediction scores and output for inference.

Evaluation metrics reported in the paper:
| Methods | MSE| MAE| IC | ICIR | AR | AV | SR | MDD |
|-------|-------|------|-----|-----|-----|-----|-----|-----|
|Linear|0.163|0.327|0.020|0.132|-3.2%|16.8%|-0.191|32.1%|
|LightGBM|0.160(0.000)|0.323(0.000)|0.041|0.292|7.8%|15.5%|0.503|25.7%|
|MLP|0.160(0.002)|0.323(0.003)|0.037|0.273|3.7%|15.3%|0.264|26.2%|
@@ -61,21 +85,8 @@ After running the scripts, you can find result files in path `./output`:

A more detailed demo for our experiment results in the paper can be found in `Report.ipynb`.

## Common Issues

For help or issues using TRA, please submit a GitHub issue.

Sometimes the loss may become `NaN`. Check the `epsilon` parameter in the Sinkhorn algorithm: adjusting `epsilon` according to the scale of the inputs is important.
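To see why `epsilon` matters, here is a minimal generic Sinkhorn sketch (illustrative only, not TRA's actual code): the Gibbs kernel `exp(-cost / epsilon)` underflows to zero when `epsilon` is tiny relative to the cost scale, and the subsequent divisions then produce `inf` and `NaN`.

```python
import numpy as np

def sinkhorn(cost, epsilon=0.1, n_iters=100):
    """Entropy-regularized optimal transport with uniform marginals (generic sketch)."""
    n, m = cost.shape
    K = np.exp(-cost / epsilon)  # Gibbs kernel; underflows if epsilon is too small
    r, c = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan diag(u) @ K @ diag(v)

rng = np.random.default_rng(0)
cost = rng.random((4, 3)) + 0.5  # cost entries in [0.5, 1.5)

P = sinkhorn(cost, epsilon=0.1)  # well-scaled: columns sum to the marginal 1/3

# Same costs rescaled 1000x with a tiny epsilon: exp(-cost/epsilon) underflows
# to all zeros, the scaling divisions blow up, and the plan is full of NaN.
bad = sinkhorn(cost * 1000.0, epsilon=1e-3)
```

Rescaling `epsilon` proportionally to the cost magnitude (or normalizing the costs) keeps the kernel in a representable range.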
2 changes: 1 addition & 1 deletion examples/benchmarks/TRA/workflow_config_tra_Alpha158.yaml
@@ -80,7 +80,7 @@ task:
early_stop: 10
smooth_steps: 5
seed: 0
- logdir: output/Alpha158/router
+ logdir:
lamb: 1.0
rho: 1.0
transport_method: router
@@ -74,7 +74,7 @@ task:
early_stop: 10
smooth_steps: 5
seed: 0
- logdir: output/Alpha158_full/router
+ logdir:
lamb: 1.0
rho: 1.0
transport_method: router
2 changes: 1 addition & 1 deletion examples/benchmarks/TRA/workflow_config_tra_Alpha360.yaml
@@ -73,7 +73,7 @@ task:
max_steps_per_epoch: 100
early_stop: 10
smooth_steps: 5
- logdir: output/Alpha360/router
+ logdir:
seed: 0
lamb: 1.0
rho: 1.0
