
Add two transformer models via upload #508
Merged: 18 commits merged into microsoft:main on Jul 22, 2021
Conversation

@yingtaoluo (Contributor) commented Jul 13, 2021

Add a naive transformer model and an improved transformer model.

Description

Tested successfully with Python 3.6/3.7/3.8 and PyTorch 1.12/1.2.

The naive transformer implemented here for financial time series prediction follows the paper "Attention Is All You Need". Given an input of shape (N, T, F), it consists of:

  1. An embedding layer that maps the input (N, T, F) to a representation (N, T, F');
  2. A positional encoding layer that adds sinusoidal positional encodings;
  3. An encoder consisting of several encoding layers, each of which uses a self-attention layer as its computing module (a function of query, key, and value);
  4. A decoder consisting of an MLP (or a Linear layer) that maps the representation at the last time step (N, 1, F') to the output (N, 1), as sketched below.
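
Below is a minimal, illustrative PyTorch sketch of this four-step structure (shapes follow the description above). The layer sizes, `nhead`, and the use of `nn.TransformerEncoder` with `batch_first=True` (PyTorch >= 1.9) are assumptions made for illustration; they are not taken from the actual `pytorch_transformer.py` in this PR.

```python
# Minimal sketch of the naive transformer described above (illustrative, not the PR code).
# Shapes: N = batch size, T = sequence length, F = input features, d_model = F'.
import math
import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    """Adds the sinusoidal positional encoding from 'Attention Is All You Need'."""

    def __init__(self, d_model: int, max_len: int = 1000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)                # (max_len, 1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)

    def forward(self, x):                                            # x: (N, T, d_model)
        return x + self.pe[: x.size(1)]


class NaiveTransformer(nn.Module):
    def __init__(self, n_feat: int, d_model: int = 64, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(n_feat, d_model)            # step 1: (N, T, F) -> (N, T, F')
        self.pos = PositionalEncoding(d_model)             # step 2: add positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)  # step 3: self-attention stack
        self.head = nn.Linear(d_model, 1)                  # step 4: last time step -> scalar

    def forward(self, x):                                   # x: (N, T, F)
        h = self.encoder(self.pos(self.embed(x)))           # (N, T, d_model)
        return self.head(h[:, -1, :]).squeeze(-1)           # (N,)
```

For example, `NaiveTransformer(n_feat=158)` would map an Alpha158 batch of shape (N, T, 158) to one scalar prediction per sample.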

The improved transformer is a simple self-designed transformer (based on the paper "SLGT: Self-adaptive Local-global aware Transformer for Sequential Recommendation", which has been submitted to a conference and will be available on arXiv soon). Localformer introduces 1-dimensional convolutional layers alongside the encoder layers as a locality inductive bias to supplement the long-range self-attention module, updating the sequence representation locally at each time step. Specifically, the input to each encoder (self-attention) layer is the original input plus (+) the output of passing that input through an extra 1-D convolutional layer. For example, if the encoder originally contains three self-attention layers, attn-attn-attn, it now becomes conv-attn-conv-attn-conv-attn. After the transformer module, a GRU is added to further aggregate the representation with a sequential inductive bias (provided by the RNN layers). A sketch of this structure is given below.
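
To make the conv-attn interleaving and the trailing GRU concrete, here is a minimal sketch under the same shape conventions. The kernel size, layer count, and the reuse of `nn.TransformerEncoderLayer` are illustrative assumptions rather than the actual `LocalformerModel` hyperparameters, and positional encoding is omitted for brevity.

```python
# Minimal sketch of the Localformer idea described above (illustrative, not the PR code).
import torch
import torch.nn as nn


class LocalformerSketch(nn.Module):
    def __init__(self, n_feat: int, d_model: int = 64, nhead: int = 4, num_layers: int = 3):
        super().__init__()
        self.embed = nn.Linear(n_feat, d_model)
        # One 1-D convolution per encoder layer: the local inductive bias.
        self.convs = nn.ModuleList(
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1) for _ in range(num_layers)
        )
        # One self-attention (encoder) layer per stage: the global module.
        self.attns = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True) for _ in range(num_layers)
        )
        self.gru = nn.GRU(d_model, d_model, batch_first=True)  # sequential inductive bias
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                                    # x: (N, T, F)
        h = self.embed(x)                                     # (N, T, d_model)
        for conv, attn in zip(self.convs, self.attns):
            # conv-attn stage: feed the attention layer the original input plus
            # its 1-D convolution over the time axis (local residual mixing).
            local = conv(h.transpose(1, 2)).transpose(1, 2)   # Conv1d expects (N, C, T)
            h = attn(h + local)
        h, _ = self.gru(h)                                    # aggregate with a GRU
        return self.head(h[:, -1, :]).squeeze(-1)             # predict from the last time step
```

With three stages this realizes the conv-attn-conv-attn-conv-attn pattern described above, followed by the GRU aggregation.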

Motivation and Context

This PR adds two Transformer-based models for users to choose from, in addition to the base models that Qlib already contains. The Localformer reaches an information ratio of 1.47 on Alpha158 (excess return without cost), which is fairly high. The improved transformer adds convolution and an RNN to supplement the inductive bias, which is simple but effective.

How Has This Been Tested?

  • Pass the test by running qrun benchmarks/Transformer/workflow_config_localformer_Alpha158.yaml under the upper directory of qlib; to switch models, 'workflow_config_localformer_Alpha158.yaml' only needs its model class line changed to 'task: model: class: LocalformerModel' or 'task: model: class: TransformerModel'.
  • The performances of the two models are described above.

Screenshots of Test Results (if appropriate):

Transformer Results on Alpha158:
{'IC': 0.03186587768611013,
'ICIR': 0.2556910881045764,
'Rank IC': 0.04735251936658551,
'Rank ICIR': 0.388378955424602}

'The following are analysis results of the excess return without cost.'
risk
mean 0.000309
std 0.004209
annualized_return 0.077839
information_ratio 1.164993
max_drawdown -0.106215

'The following are analysis results of the excess return with cost.'
risk
mean 0.000126
std 0.004209
annualized_return 0.031707
information_ratio 0.474567
max_drawdown -0.131948

Transformer Results on Alpha360:
{'IC': 0.011659216755690713,
'ICIR': 0.07383408561758713,
'Rank IC': 0.03505118059955821,
'Rank ICIR': 0.2453042675836217}
'The following are analysis results of the excess return without cost.'
risk
mean 0.000026
std 0.005318
annualized_return 0.006658
information_ratio 0.078865
max_drawdown -0.104203

Localformer Results on Alpha158:
{'IC': 0.037426503365732174,
'ICIR': 0.28977883455541603,
'Rank IC': 0.04659889541774283,
'Rank ICIR': 0.373569340092482}

'The following are analysis results of the excess return without cost.'
risk
mean 0.000381
std 0.004109
annualized_return 0.096066
information_ratio 1.472729
max_drawdown -0.094917

'The following are analysis results of the excess return with cost.'
risk
mean 0.000213
std 0.004111
annualized_return 0.053630
information_ratio 0.821711
max_drawdown -0.113694

Localformer Results on Alpha360:
{'IC': 0.03766845905185995,
'ICIR': 0.26793394150788935,
'Rank IC': 0.0530091645633088,
'Rank ICIR': 0.40090294387953357}
'The following are analysis results of the excess return without cost.'
risk
mean 0.000131
std 0.004943
annualized_return 0.033129
information_ratio 0.422228
max_drawdown -0.127502

Types of changes

  • Fix bugs
  • Add new feature
  • Update documentation

Add a naive transformer model and an improved transformer model.
@ghost commented Jul 13, 2021

CLA assistant check
All CLA requirements met.

@you-n-g (Collaborator) commented Jul 14, 2021

@yingtaoluo It looks great! Thanks so much!

Please check the errors in the CI.

These suggestions may be useful for you.

Would you mind adding more docs about your model and including your PyTorch version in requirements.txt, like the other models?

Thanks.

@yingtaoluo (Contributor, Author)

I have cleared these errors with Black and have added the YAML files and requirements.txt. I have also expanded the docs about the models. Please contact me at any time if there is other work that needs to be done. :}

@you-n-g (Collaborator) commented Jul 18, 2021

@yingtaoluo

Thanks for your contribution.
I'm trying to run your models 20 times... (If you have complete results, please send them directly to me.)

Besides, please try to make the commit messages meaningful. Otherwise, the pull request will be squashed.

@you-n-g (Collaborator) commented Jul 18, 2021

@yingtaoluo
I made some modifications to the auto backtest scripts. Please merge it.
https://github.com/yingtaoluo/qlib/pull/1

I'm testing them with the following commands (you can try them as well).

python run_all_model.py 1 localformer Alpha158 --qlib_uri "<your qlib file path>" --wait_when_err True
python run_all_model.py 1 localformer Alpha360 --qlib_uri "<your qlib file path>" --wait_when_err True
python run_all_model.py 1 transformer Alpha158 --qlib_uri "<your qlib file path>" --wait_when_err True
python run_all_model.py 1 transformer Alpha360 --qlib_uri "<your qlib file path>" --wait_when_err True

@yingtaoluo (Contributor, Author) left a review comment

I have reviewed the code and haven't found any errors.

@yingtaoluo (Contributor, Author)

@yingtaoluo
I made some modifications to the auto backtest scripts. Please merge it.
yingtaoluo#1

I'm testing them with the following commands (you can try them as well).

python run_all_model.py 1 localformer Alpha158 --qlib_uri "<your qlib file path>" --wait_when_err True
python run_all_model.py 1 localformer Alpha360 --qlib_uri "<your qlib file path>" --wait_when_err True
python run_all_model.py 1 transformer Alpha158 --qlib_uri "<your qlib file path>" --wait_when_err True
python run_all_model.py 1 transformer Alpha360 --qlib_uri "<your qlib file path>" --wait_when_err True

Thank you! I have merged.

@yingtaoluo (Contributor, Author)

@yingtaoluo

Thanks for your contribution.
I'm trying to run your models 20 times... (If you have complete results, please send them directly to me.)

Besides, please try to make the commit messages meaningful. Otherwise, the pull request will be squashed.

Sure. The only problem is that I may not have enough running time to do the 20 runs; I might not be able to finish that until weeks from now. Could you, if convenient, run the 20 trials? I appreciate it.

@you-n-g (Collaborator) commented Jul 20, 2021

@yingtaoluo
Here are my results from 20 runs.
Do you have any questions about them?

| Model Name | Dataset | IC | ICIR | Rank IC | Rank ICIR | Annualized Return | Information Ratio | Max Drawdown |
|---|---|---|---|---|---|---|---|---|
| Localformer | Alpha158 | 0.0355±0.00 | 0.2747±0.04 | 0.0466±0.00 | 0.3762±0.03 | 0.0506±0.02 | 0.7447±0.34 | -0.0875±0.02 |
| Transformer | Alpha158 | 0.0274±0.00 | 0.2166±0.04 | 0.0409±0.00 | 0.3342±0.04 | 0.0204±0.03 | 0.2888±0.40 | -0.1216±0.04 |
| Localformer | Alpha360 | 0.0408±0.00 | 0.2988±0.03 | 0.0538±0.00 | 0.4105±0.02 | 0.0275±0.03 | 0.3464±0.37 | -0.1182±0.03 |
| Transformer | Alpha360 | 0.0141±0.00 | 0.0917±0.02 | 0.0331±0.00 | 0.2357±0.03 | -0.0259±0.03 | -0.3323±0.43 | -0.1763±0.07 |

If it is OK, please add them to the following links.

And add your paper references and descriptions to the benchmark README after you publish it.

Thanks

@you-n-g (Collaborator) commented Jul 20, 2021

The above numbers are the results of the following commands after removing the fixed seed in your YAML files.

python run_all_model.py 20 localformer Alpha158 --qlib_uri "~/repos/libs/qlib/" --wait_when_err True
python run_all_model.py 20 localformer Alpha360 --qlib_uri "~/repos/libs/qlib/" --wait_when_err True
python run_all_model.py 20 transformer Alpha158 --qlib_uri "~/repos/libs/qlib/" --wait_when_err True
python run_all_model.py 20 transformer Alpha360 --qlib_uri "~/repos/libs/qlib/" --wait_when_err True

Add the performance of transformer and localformer.
Add transformer and localformer (SLGT) models for financial time series prediction to the Quant Model Zoo.
@yingtaoluo (Contributor, Author)

@yingtaoluo
Here are my results from 20 runs.
Do you have any questions about them?

| Model Name | Dataset | IC | ICIR | Rank IC | Rank ICIR | Annualized Return | Information Ratio | Max Drawdown |
|---|---|---|---|---|---|---|---|---|
| Localformer | Alpha158 | 0.0355±0.00 | 0.2747±0.04 | 0.0466±0.00 | 0.3762±0.03 | 0.0506±0.02 | 0.7447±0.34 | -0.0875±0.02 |
| Transformer | Alpha158 | 0.0274±0.00 | 0.2166±0.04 | 0.0409±0.00 | 0.3342±0.04 | 0.0204±0.03 | 0.2888±0.40 | -0.1216±0.04 |
| Localformer | Alpha360 | 0.0408±0.00 | 0.2988±0.03 | 0.0538±0.00 | 0.4105±0.02 | 0.0275±0.03 | 0.3464±0.37 | -0.1182±0.03 |
| Transformer | Alpha360 | 0.0141±0.00 | 0.0917±0.02 | 0.0331±0.00 | 0.2357±0.03 | -0.0259±0.03 | -0.3323±0.43 | -0.1763±0.07 |

If it is OK, please add them to the following links.

And add your paper references and descriptions to the benchmark README after you publish it.

Thanks

I have added the results to the two links. I will add the paper reference after publication. Thank you again.

@you-n-g (Collaborator) commented Jul 21, 2021

@yingtaoluo Please merge the main branch to fix the CI error.

Thanks

@you-n-g (Collaborator) commented Jul 22, 2021

OK. I'll merge this branch first and then solve the CI problem on the main branch.

@yingtaoluo It's really a great job! Thanks so much!

@you-n-g merged commit 025b1dc into microsoft:main on Jul 22, 2021
@yingtaoluo (Contributor, Author)

Thank you for patiently guiding me through every step!

@you-n-g (Collaborator) commented Jul 22, 2021

@yingtaoluo
You're welcome to join the contributor list.
Looking forward to your contributions in the future! :D

@you-n-g (Collaborator) commented Jan 26, 2022

@yingtaoluo
Is your paper published?
If your paper is published, you can add the paper link and more details about your code.
I think it will help more Qlib users learn about your work.
