IPM

This repo accompanies the paper "Capturing Semantics for Imputation with Pre-trained Language Models" (ICDE 2021).

File Structure

  • src: Source code of the algorithms.
  • data: Dataset source files.
    • In the example data, 10 denotes the missing rate (10%), and a total of 5 missing datasets are generated from the original data (see the sketch after this list).
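
The repository itself does not ship a generator for these missing datasets. The snippet below is a minimal sketch of one way to produce them, assuming the original data is a CSV file and that missing cells are stored as empty values; the file names, paths, and the make_missing helper are illustrative, not part of the repo.

import numpy as np
import pandas as pd

def make_missing(df, attrs, rate, seed):
    # Blank out a `rate` fraction of the cells in each attribute (hypothetical helper).
    rng = np.random.default_rng(seed)
    out = df.copy()
    for attr in attrs:
        mask = rng.random(len(out)) < rate
        out.loc[mask, attr] = np.nan
    return out

original = pd.read_csv("data/example/example.csv")  # hypothetical original file
for i in range(5):  # 5 independently generated missing datasets
    missing = make_missing(original, ["category", "brand"], rate=0.10, seed=i)
    # "10" in the file name denotes the 10% missing rate
    missing.to_csv(f"data/example/example_10_{i}.csv", index=False)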

Requirements

  • torch>=1.1.0
  • pandas>=0.25.3
  • numpy>=1.19.5
  • Pretrained language models can be found on Hugging Face (see the sketch below for one way to download them).
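
The example commands below point --model_name_or_path at a local directory (/pretrained_models/roberta-base). One way to populate such a directory is via the Hugging Face transformers library, as in this sketch; the target path is an assumption and can be any directory readable by the scripts.

from transformers import AutoModel, AutoTokenizer

# Download roberta-base from the Hugging Face hub and save it locally,
# so that --model_name_or_path can point to the local directory.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
tokenizer.save_pretrained("/pretrained_models/roberta-base")
model.save_pretrained("/pretrained_models/roberta-base")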

Usage

Before running IPM-Multi or IPM-Binary, users need to specify the categorical attributes to impute in ipm_multi.py / ipm_binary.py. For example:

ATTR_DICT = {
    'example':['category', 'brand']
}

where "example" is the dataset name, ['category', 'brand'] is the list of to-impute categorical attributes names.

In our experiments, the to-impute attributes are as follows:

ATTR_DICT = {
    "walmart":["category", "brand"],
    "amazon":["category", "brand"],
    "restaurant":["city"],
    "buy":["manufacturer"],
    "housing":["region", "price", "sqfeet", "beds", "baths", "laundry_options", "parking_options", "lat", "long", "state"],
    "phone":["brand"],
    "zomato":["location"],
    "flipkart":["brand"]
}

To use IPM-Multi in the paper, run this command:

python /IPM/src/ipm_multi.py --model_type=roberta --model_name_or_path=/pretrained_models/roberta-base --data_dir="./data/example" --dataset="example" --train_batch_size=32 --eval_batch_size=32 --max_seq_length=75 --num_epochs=5 --seed=22

To use IPM-Binary in the paper, run this command:

python /IPM/src/ipm_binary.py --model_type=roberta --model_name_or_path=/pretrained_models/roberta-base --data_dir="./data/example" --dataset="example" --train_batch_size=32 --eval_batch_size=32 --max_seq_length=75 --num_epochs=5 --seed=1234

Important Arguments

  • --data_dir : Directory of data files.
  • --dataset : Name of the dataset.
  • --model_name_or_path : Pretrained language model name or path.
  • --model_type : Type of the pretrained model. Default is roberta.
  • --do_lower_case : Whether to transform inputs into lowercase. Default is True.
  • --max_seq_length : Maximum length of the input sequence.
  • --train_batch_size : Batch size for training.
  • --eval_batch_size : Batch size for evaluation.
  • --num_epochs : Number of epochs for training.
  • --neg_num : Number of negative examples. Default is 3 (see the example below).
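
For instance, to train with 5 negative examples instead of the default 3, the IPM-Binary command above can be extended with --neg_num, keeping all other values as in the example (a sketch, assuming the flag is accepted by ipm_binary.py as the argument list suggests):

python /IPM/src/ipm_binary.py --model_type=roberta --model_name_or_path=/pretrained_models/roberta-base --data_dir="./data/example" --dataset="example" --train_batch_size=32 --eval_batch_size=32 --max_seq_length=75 --num_epochs=5 --neg_num=5 --seed=1234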

Acknowledgements

We would like to thank the following public repo, from which we borrowed various utilities.

Citation

If you use this code for your research, please consider citing:

@inproceedings{DBLP:conf/icde/MeiSFYFL21,
  author    = {Yinan Mei and
               Shaoxu Song and
               Chenguang Fang and
               Haifeng Yang and
               Jingyun Fang and
               Jiang Long},
  title     = {Capturing Semantics for Imputation with Pre-trained Language Models},
  booktitle = {{ICDE}},
  pages     = {61--72},
  publisher = {{IEEE}},
  year      = {2021}
}
