- This repository contains the experimental code and APIs for MTSA.
- All code is implemented in Python + TensorFlow.
- MTSA_API: the API for the universal and unified Multi-Mask Tensorized Self-Attention (MTSA) TensorFlow implementation and its stacking version (a toy sketch of masked self-attention follows this list).
- Proj_SNLI_mtsa: Implementation of MTSA on the Stanford Natural Language Inference (SNLI) dataset.
- Proj_NLI_Trans_mtsa: Implementation of MTSA on SNLI and MultiNLI for transfer learning.
- Proj_SRL_mtsa: Implementation of MTSA on the Semantic Role Labeling task (CoNLL-05 dataset).
- Proj_TREC_mtsa: Implementation of MTSA on the TREC question-type classification dataset.
- Proj_SST_mtsa: Implementation of MTSA on the 5-class Stanford Sentiment Treebank.
- Proj_SentCls_mtsa: Implementation of MTSA on sentence classification datasets (e.g., CR, MPQA and SUBJ).
- Proj_NMT: Implementation of MTSA on neural machine translation.
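For orientation, here is a minimal, heavily simplified sketch of masked self-attention in TensorFlow 1.x. It is not the repository's MTSA implementation (that lives in MTSA_API, with its multi-mask and tensorized extensions); the function and variable names below are illustrative assumptions only.

```python
import tensorflow as tf

def toy_masked_self_attention(x, mask, scope='toy_attn'):
    """x: [batch, seq_len, dim] float32; mask: [batch, seq_len] bool.

    Plain scaled dot-product self-attention with padding masking;
    MTSA itself goes further (multi-mask, tensorized scores).
    """
    with tf.variable_scope(scope):
        dim = x.get_shape().as_list()[-1]
        q = tf.layers.dense(x, dim, name='query')
        k = tf.layers.dense(x, dim, name='key')
        v = tf.layers.dense(x, dim, name='value')
        # pairwise alignment scores: [batch, seq_len, seq_len]
        scores = tf.matmul(q, k, transpose_b=True) / (dim ** 0.5)
        # push padded key positions toward -inf before the softmax
        key_mask = tf.expand_dims(tf.cast(mask, tf.float32), 1)
        scores += (1.0 - key_mask) * -1e9
        weights = tf.nn.softmax(scores)  # softmax over the last axis
        return tf.matmul(weights, v)     # [batch, seq_len, dim]
```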
Requirements:
- Python2 (for Proj_SRL_mtsa) and Python3 (for all remaining projects)
- tensorflow>=1.2
- tqdm
- nltk (with Models/punkt)

The framework described below applies to all projects except Proj_SRL_mtsa.
PROJECT FILE TREE:
ROOT
--dataset[d]
----glove[d]
----$task_dataset_name$[d]
--src[d]
----model[d]
------template.py[f]
------$model_name$.py[f]
----nn_utils[d]
----utils[d]
------file.py[f]
------nlp.py[f]
------record_log.py[f]
------time_counter.py[f]
----dataset.py[f]
----evaluator.py[f]
----graph_handler.py[f]
----perform_recorder.py[f]
--result[d]
----processed_data[d]
----model[d]
------$model_specific_dir$[d]
--------ckpt[d]
--------log_files[d]
--------summary[d]
--------answer[d]
--configs.py[f]
--$task$_main.py[f]
--$task$_log_analysis.py[f]
The result dir will appear after the first run. Every file [f] and directory [d] is detailed as follows:
./configs.py: performs parameter parsing and the definition/declaration of global variables, e.g., parameter definitions and default values; name definitions (for the train/dev/test data, model, processed_data, ckpt, etc.); and directory definitions (for data, result, $model_specific_dir$, etc.) together with the generation of their corresponding paths.
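As a hedged sketch of what such a config module usually looks like (the flag names and directory logic below are illustrative assumptions, not the repo's exact flags):

```python
import argparse
import os

parser = argparse.ArgumentParser()
# hypothetical flags; the real configs.py defines its own set
parser.add_argument('--network_type', type=str, default='mtsa')
parser.add_argument('--dropout', type=float, default=0.7)
parser.add_argument('--word_embedding_length', type=int, default=300)
args = parser.parse_args()

# derive a $model_specific_dir$-style name from the chosen parameters
model_specific_dir = 'nt_%s_do_%s' % (args.network_type, args.dropout)
ckpt_dir = os.path.join('result', 'model', model_specific_dir, 'ckpt')
summary_dir = os.path.join('result', 'model', model_specific_dir, 'summary')
```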
./$task$_main.py: the main entry Python script used to run the project.
./$task$_log_analysis.py: provides a function to analyze the log files produced during training.
./dataset/: the directory containing the datasets for the current project.
- ./dataset/glove: contains the pre-trained GloVe file.
- ./dataset/$task_dataset_name$/: the dataset dir for the current task; it is introduced concretely in each project dir.
- ./src/dataset.py: a class to process the raw data from the dataset, including tokenization, token-dictionary generation, data digitization, and neural-network data generation. It also provides some methods: generate_batch_sample_iter for random mini-batch iteration (see the sketch after this list), get_statistic for sentence-length statistics, and an interface for deleting training samples with overly long sentences.
- ./src/evaluator.py: a class for model evaluation.
- ./src/graph_handler.py: a class for handling the graph: session initialization, summary saving, model restoring, etc. (see the sketch after this list).
- ./src/perform_recorder.py: a class to save the top-n dev-accuracy model checkpoints for future loading.
- ./src/model/: the dir containing the TensorFlow model files.
- ./src/model/template.py: an abstract Python class including the network placeholders, global TensorFlow tensor variables, the TF loss function, the TF accuracy function, EMA for the learnable variables and summaries (see the sketch after this list), the training operation, feed-dict generation, and the training-step function.
- ./src/model/$model_name$.py: the main TF neural network model, which implements the abstract interface build_network and extends template.py.
- ./src/nn_utils/: a package containing various TensorFlow layers implemented by the repo author.
- ./src/utils/file.py: file I/O functions.
- ./src/utils/nlp.py: natural language processing functions.
- ./src/utils/record_log.py: a log recorder class, plus a corresponding instance used throughout the current project.
- ./src/utils/time_counter.py: a time counter class to collect the training time; note that this excludes data preparation and counts only the time spent in training steps.
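The random mini-batch iteration mentioned for ./src/dataset.py can be pictured with this hedged sketch (the real method is attached to the Dataset class and yields digitized samples; the names here are illustrative):

```python
import random

def generate_batch_sample_iter(samples, batch_size, shuffle=True):
    """Yield random mini-batches from a list of samples (illustrative only)."""
    indices = list(range(len(samples)))
    if shuffle:
        random.shuffle(indices)  # randomize order once per epoch
    for start in range(0, len(indices), batch_size):
        yield [samples[i] for i in indices[start:start + batch_size]]
```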
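The duties of ./src/graph_handler.py map onto standard TensorFlow 1.x plumbing; the following is a hedged sketch (the class and method names are assumptions, not the repo's exact API):

```python
import tensorflow as tf

class GraphHandlerSketch(object):
    """Session init, summary writing, and checkpoint restoring."""

    def __init__(self, ckpt_dir, summary_dir):
        self.ckpt_dir = ckpt_dir
        self.saver = tf.train.Saver(max_to_keep=3)  # build after the graph
        self.sess = tf.Session()
        self.sess.run(tf.global_variables_initializer())
        self.writer = tf.summary.FileWriter(summary_dir, self.sess.graph)

    def restore(self):
        ckpt = tf.train.latest_checkpoint(self.ckpt_dir)
        if ckpt is not None:
            self.saver.restore(self.sess, ckpt)

    def add_summary(self, summary, global_step):
        self.writer.add_summary(summary, global_step)
```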
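The EMA kept by ./src/model/template.py over the learnable variables follows the usual TensorFlow pattern; here is a hedged, self-contained sketch (the decay value, toy loss, and optimizer are illustrative assumptions):

```python
import tensorflow as tf

w = tf.get_variable('w', shape=[10], initializer=tf.zeros_initializer())
loss = tf.reduce_sum(tf.square(w - 1.0))  # toy scalar loss

opt = tf.train.AdadeltaOptimizer(learning_rate=0.5)
minimize_op = opt.minimize(loss)

# keep shadow (moving-average) copies of all trainable variables and
# update them after every optimizer step
ema = tf.train.ExponentialMovingAverage(decay=0.999)
with tf.control_dependencies([minimize_op]):
    train_op = tf.group(ema.apply(tf.trainable_variables()))
```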
./result/: a dir to place the results.
- ./result/processed_data/: a dir to place the Dataset instances in pickle format. The file name is generated by get_params_str in ./configs.py according to the related parameters (a hypothetical sketch of such a helper follows this list).
- ./result/model/$model_specific_dir$/: the name of this dir is generated by get_params_str in ./configs.py according to the related parameters, so that the results for one combination of parameters are saved together. In other words, each combination of parameters is associated with its own dir.
- ./result/model/$model_specific_dir$/ckpt/: a dir to save the top-n model checkpoints.
- ./result/model/$model_specific_dir$/log_files/: a dir to save the log files.
- ./result/model/$model_specific_dir$/summary/: a dir to save the TensorBoard summaries and TensorFlow graph meta files.
- ./result/model/$model_specific_dir$/answer/: a dir to save extra prediction results for some of these projects.
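A name-from-parameters helper in the spirit of get_params_str might look like this hypothetical sketch (the real function in ./configs.py may select different fields and use different formatting):

```python
def get_params_str(params, keys):
    """Join selected parameter names and values into a stable string."""
    return '_'.join('%s_%s' % (k, params[k]) for k in keys)

# e.g. -> 'network_type_mtsa_dropout_0.7'
name = get_params_str({'network_type': 'mtsa', 'dropout': 0.7},
                      ['network_type', 'dropout'])
```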
The details of hyper-parameter parsing are elaborated in configs.py.
The full paper can be viewed here. Please cite this paper if the paper or its code is useful for your work:
@inproceedings{shen2019tensorized,
Author = {Shen, Tao and Zhou, Tianyi and Long, Guodong and Jiang, Jing and Zhang, Chengqi},
Booktitle = {Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
Pages = {1256--1266},
Title = {Tensorized Self-Attention: Efficiently Modeling Pairwise and Global Dependencies Together},
Year = {2019}
}
Please feel free to open an issue if you encounter any problem or bug.