
Merge master into new_MT_branch. #977

Merged: 30 commits, Dec 18, 2019

Commits on Sep 19, 2019

  1. Readme update for bert npi paper (#915)

    * Update README.md
    
    * minor fix
    
    * Typo fix
    
    * typo fix
    HaokunLiu authored and sleepinyourhat committed Sep 19, 2019
    SHA: 10fb192

Commits on Sep 20, 2019

  1. Fixing index problem & minor pytorch_transformers_interface cleanup (#916)
    
    * update boundry func with offsets
    
    * update tasks that use indexes
    
    * remove outdated temporary fix
    HaokunLiu authored and sleepinyourhat committed Sep 20, 2019
    SHA: c36b74e

Commits on Sep 23, 2019

  1. Commit 706b652

Commits on Oct 1, 2019

  1. QA-SRL (#716)

    * Initial QASRL
    
    * Updated pred writing for QASRL
    
    * Add validation shuffle to QASRL
    
    * Remove tqdm, modify class check in preds
    
    * qasrl rebase cleanup
    
    * Update QA-SRL to new repo changes
    
    * Removing src
    
    * QASRL Cleanup
    
    * updating to new model format
    
    * csv to tsv
    
    * QASRL update
    zphang authored and sleepinyourhat committed Oct 1, 2019
    SHA: b19ca78
  2. Implementing Data Parallel (#873)

    * implemented data parallel
    
    * black style
    
    * Resolve last of merge marks
    
    * deleting irrelevant logs
    
    * adding new way to get attribute
    
    * updating to master
    
    * torch.Tensor -> torch.tensor for n_exs
    
    * black style
    
    * black style
    
    * Merge master
    
    * adapting other tasks to multiple GPU"
    
    * adding helper function for model attributes
    
    * adding get_model_attribute to main.py
    
    * deleting unecessary n_inbput for span_module
    
    * black style
    
    * revert comment change
    
    * fixing batch size keys
    
    * opt_params -> optimizer_params
    
    * Remove extraneous cahnges
    
    * changed n_exs to one-liner
    
    * adapting args.cuda to multi-GPU setting
    
    * adding use_cuda variable
    
    * Fixing parsing for case of args.cuda=subset
    
    * fixing tests
    
    * fixing nits, cleaning up parse_cuda function
    
    * additional nit
    
    * deleted extra space
    
    * Revert nit
    
    * refactoring into get_batch_size
    
    * removing use_cuda
    
    * adding options.py
    
    * removing use_cuda in tests, deleting extra changes
    
    * change cuda default
    
    * change parse_cuda_list_args import
    
    * jiant.options -> jiant.utils.options
    
    * change demo.conf cuda setting
    
    * fix bug -> make parse_cuda return int if only one gpu
    
    * fix bug
    
    * fixed tests
    
    * revert test_retokenize change
    
    * cleaning up code
    
    * adding addiitonal jiant.options
    
    * Separating cuda_device = int case with multiple cuda_device case
    
    * deleting remains of uses_cuda
    
    * remove time logging
    
    * remove use_cuda from evaluate
    
    * val_interval -> validation_interval
    
    * adding cuda comment to tutorial
    
    * fixed typo
    Yada Pruksachatkun committed Oct 1, 2019
    SHA: 2553c2d

Commits on Oct 2, 2019

  1. replace correct_sent_indexing with non inplace version (#921)

    * replace correct_sent_indexing with non inplace version
    
    * Update modules.py
    
    * Update modules.py
    HaokunLiu authored and Yada Pruksachatkun committed Oct 2, 2019
    SHA: 7508bea

Commits on Oct 7, 2019

  1. Abductive NLI (aNLI) (#922)

    * anli
    
    * anli fix
    
    * Adding aNLI link, additional test/dev warning
    zphang committed Oct 7, 2019
    SHA: 9d4baf3

Commits on Oct 10, 2019

  1. SocialIQA (#924)

    * black style
    
    * adding SocialQA
    
    * black style
    
    * black style
    
    * fixed socialQA task
    
    * black style
    
    * Update citation
    
    * Nit
    
    * senteval
    
    * socialIQA naming
    
    * reverse unnecessary add
    Yada Pruksachatkun committed Oct 10, 2019
    SHA: 254dc37

Commits on Oct 16, 2019

  1. Fixing bug with restoring checkpoint with two gpus + cleaning CUDA parsing related code (#928)
    
    * black style
    
    * remove
    
    * cleaning up code around cuda-parsing
    
    * adding defaulting to -1 if there is no cuda devices detected
    
    * fixing nits, throw error instead of log warning for cuda not found
    Yada Pruksachatkun committed Oct 16, 2019
    SHA: f0ef3f7
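The commit messages above ("make parse_cuda return int if only one gpu", "defaulting to -1 if there is no cuda devices detected") imply a parser along the following lines. The function name appears in the commit log, but the body here is an assumed sketch, not jiant's actual implementation:

```python
def parse_cuda_list_args(cuda_arg):
    """Normalize a cuda argument such as "0", "0,1", or "" into the form
    the trainer expects: a single int for one device, a list of ints for
    several, and -1 when no device is specified or detected."""
    if cuda_arg in (None, "", -1, "-1"):
        return -1
    devices = [int(d) for d in str(cuda_arg).split(",")]
    # Returning a bare int for the single-device case matches the
    # "return int if only one gpu" bug fix described above.
    return devices[0] if len(devices) == 1 else devices
```

Under this scheme `"0"` parses to `0`, `"0,1"` to `[0, 1]`, and an empty value to `-1` (CPU / no device).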

Commits on Oct 17, 2019

  1. Updating CoLA inference script (#931)

    zphang authored and Yada Pruksachatkun committed Oct 17, 2019
    SHA: 8f46d4f

Commits on Oct 21, 2019

  1. Adding Senteval Tasks (#926)

    * black style
    
    * adding initial senteval, senteval preprocessing script
    
    * black
    
    * adding senteval to registry
    
    * fixing bigram-shift
    
    * adding label_namespace arg, fixing the ksenteval tasks
    
    * revert extra changes
    
    * black style
    
    * change name -> senteval-probing
    
    * fixing senteval-probing tasks
    
    * renamed senteval -> sentevalprobing
    
    * delete extra imports
    
    * black style
    
    * renaming files and cleaning up preprocessing code
    
    * nit
    
    * black
    
    * deleting pdb
    
    * Senteval -> SE shorthand
    
    * fixing code style
    Yada Pruksachatkun committed Oct 21, 2019
    SHA: 303a733

Commits on Oct 22, 2019

  1. Speed up retokenization (#935)

    * black style
    
    * pre-loading tokenizer before retokenization function
    Yada Pruksachatkun committed Oct 22, 2019
    SHA: 787e78b
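The speedup above comes from constructing the tokenizer once and reusing it, instead of rebuilding it inside the retokenization function on every call. A generic sketch of the pattern, with a dict standing in for an expensive real tokenizer (the names and cache mechanism are illustrative assumptions, not jiant's code):

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def get_tokenizer(tokenizer_name):
    """Build a tokenizer once per name; later calls hit the cache.
    The dict below stands in for an expensive construction step
    (vocab/merges loading) in a real tokenizer library."""
    return {"name": tokenizer_name, "tokenize": str.split}


def retokenize(text, tokenizer_name):
    # Fetched from the cache rather than rebuilt per call, which is
    # where the speedup comes from.
    tokenizer = get_tokenizer(tokenizer_name)
    return tokenizer["tokenize"](text)
```

Because `lru_cache` memoizes by argument, `get_tokenizer("x")` returns the same object every time, so the per-example cost drops to a dictionary lookup.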

Commits on Oct 26, 2019

  1. Scitail (#943)

    * scitail
    
    * Scitail
    
    * Scitail
    
    * update Scitail, removed config
    
    * update Scitail, removed config
    phu-pmh authored and Yada Pruksachatkun committed Oct 26, 2019
    SHA: 2ed6802
  2. Add corrected data stastistics (#941)

    Thanks to #936, we've discovered errors in our data statistics reporting in the edge probing paper. This table contains the corrected values. As there is more space here, the full (unrounded) values are reported instead. This was generated by a script that read the stats.tsv file and the diff vs. the paper should match my comment on the issue yesterday.
    pitrack authored and Yada Pruksachatkun committed Oct 26, 2019
    SHA: fb1eec1
  3. CommonsenseQA+hellaswag (#942)

    * add commonsenseqa task
    
    * add hellaswag task
    
    * dabug
    
    * from #928
    
    * add special tokens to CommensenseQA input
    
    * format
    
    * revert irrelevant change
    
    * Typo fix
    
    * delete
    
    * rename stuff
    
    * Update qa.py
    
    * black
    HaokunLiu authored and Yada Pruksachatkun committed Oct 26, 2019
    SHA: 1d40f23

Commits on Oct 27, 2019

  1. fix name (#945)

    HaokunLiu authored and Yada Pruksachatkun committed Oct 27, 2019
    SHA: 3b07a5e

Commits on Nov 3, 2019

  1. CCG update (#948)

    * generalize ccg to other transformer models
    
    * debug
    
    * I don't know who broke this at what time, but let's just fix it here now
    HaokunLiu authored and Yada Pruksachatkun committed Nov 3, 2019
    SHA: 347f743

Commits on Nov 5, 2019

  1. Fixing senteval-probing preprocessing (#951)

    * Copying configs from superglue
    
    * adding senteval probing config commands
    
    * adding meta-script for transfer and probing exps
    
    * Adding meta bash script fixed
    
    * give_permissions script
    
    * small fix transfer_analysis.sh (#946)
    
    model_*.th might indicate several models; fixed to model_*.best.th
    
    * lr_patience fix
    
    * target_task training -> pretrain training
    
    * adding edgeprobing configs and command
    
    * adding edge probing conf
    
    * fix load_target_train bug
    
    * add hyperparameter sweeping
    
    * val_interval change
    
    * adding sweep function
    
    * Task specific val_intervals
    
    * add reload_vocab to hyperparameter sweep
    
    * adding batch_size specification
    
    * fixing senteval-word-content
    
    * fixing senteval preprocess script
    
    * revert extra delete
    
    * remove extra files
    
    * black format
    
    * black formatting trainer.py
    
    * remove load_data()
    
    * removing extra changes
    Yada Pruksachatkun committed Nov 5, 2019
    SHA: 41abe5f

Commits on Nov 6, 2019

  1. Adding tokenizer alignment function (#953)

    * Copying configs from superglue
    
    * adding senteval probing config commands
    
    * adding meta-script for transfer and probing exps
    
    * Adding meta bash script fixed
    
    * give_permissions script
    
    * small fix transfer_analysis.sh (#946)
    
    model_*.th might indicate several models; fixed to model_*.best.th
    
    * lr_patience fix
    
    * target_task training -> pretrain training
    
    * adding edgeprobing configs and command
    
    * adding edge probing conf
    
    * fix load_target_train bug
    
    * add hyperparameter sweeping
    
    * val_interval change
    
    * adding sweep function
    
    * Task specific val_intervals
    
    * add reload_vocab to hyperparameter sweep
    
    * adding batch_size specification
    
    * fixing senteval-word-content
    
    * fixing senteval preprocess script
    
    * revert extra delete
    
    * remove extra files
    
    * black format
    
    * black formatting trainer.py
    
    * remove load_data()
    
    * removing extra changes
    
    * adding alignment mapping function
    
    * fix comment nits
    
    * comment nit
    
    * adding example of token_alignment
    Yada Pruksachatkun committed Nov 6, 2019
    SHA: 98b1dc8
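The "alignment mapping function" added above maps between two tokenizations of the same text (e.g. whitespace tokens vs. model subwords), which is what lets span indices computed on one tokenization be transferred to the other. A character-offset sketch of the idea, not the actual jiant function:

```python
def align_tokens(src_tokens, tgt_tokens):
    """Given two tokenizations covering the same character sequence
    (spaces ignored), return a list mapping each target-token index to
    the source-token index that covers its first character."""
    def char_starts(tokens):
        starts, pos = [], 0
        for tok in tokens:
            starts.append(pos)
            pos += len(tok)
        return starts, pos

    src_starts, src_len = char_starts(src_tokens)
    tgt_starts, tgt_len = char_starts(tgt_tokens)
    assert src_len == tgt_len, "tokenizations must cover the same characters"
    # For each target token, pick the last source token starting at or
    # before the target token's first character.
    return [max(i for i, c in enumerate(src_starts) if c <= s)
            for s in tgt_starts]
```

For example, aligning `["playing", "chess"]` against the subword split `["play", "ing", "chess"]` yields `[0, 0, 1]`: both subword pieces of "playing" map back to source token 0.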

Commits on Nov 8, 2019

  1. Function words probing (#949)

    * add nli prob task template
    
    * Create acceptablity_probing.py
    
    * specify nli probing tasks
    
    * port acceptablity probing tasks
    
    * add directory name
    
    * debug
    
    * debug
    
    * format
    
    * black
    
    * revert unintended change
    HaokunLiu authored and Yada Pruksachatkun committed Nov 8, 2019
    SHA: d769338

Commits on Nov 9, 2019

  1. CosmosQA (#952)

    * misc run scripts
    
    * cosmosqa
    
    * cosmosqa
    
    * cosmosqa
    
    * cosmosqa run
    
    * cleaned up repo
    
    * cleaned up repo
    
    * reformatted
    phu-pmh authored and Yada Pruksachatkun committed Nov 9, 2019
    SHA: 8af068d

Commits on Nov 10, 2019

  1. qqp fix (#956)

    zphang authored and Yada Pruksachatkun committed Nov 10, 2019
    SHA: 1ee0d95

Commits on Nov 12, 2019

  1. QAMR + QA-SRL Update (#932)

    * qamr
    
    * tokenization
    
    * temp qamr
    
    * qamr
    
    * QASRL
    
    * Undo slicing
    
    * quick hack to bypass bad qasrl examples
    
    * f1 em fix
    
    * tokenization fixes
    
    * average
    
    * New tokenization aligner
    
    * update example counts
    
    * Cleanup
    
    * Typography
    zphang authored and Yada Pruksachatkun committed Nov 12, 2019
    SHA: 2a9230b

Commits on Nov 14, 2019

  1. Set _unk_id in Roberta module (#959)

    Currently the `_unk_id` for Roberta is not set correctly, which triggers the assertion error on line 118.
    njjiang authored and sleepinyourhat committed Nov 14, 2019
    SHA: 39b234e
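The fix above sets `_unk_id` from the model's own vocabulary rather than leaving a stale default, so the downstream assertion no longer fires. The shape of the problem can be pictured with this self-contained sketch (the class, method, and vocab lookup are illustrative assumptions, not the module's actual code; it also mirrors the earlier "non inplace" `correct_sent_indexing` change):

```python
class EmbedderModule:
    def __init__(self, vocab):
        # Look the unknown-token id up in the tokenizer's vocab instead of
        # hard-coding it; RoBERTa's unknown token differs from BERT's.
        self._unk_id = vocab.get("<unk>")
        assert self._unk_id is not None, "_unk_id not set for this vocab"

    def correct_sent_indexing(self, token_ids):
        # Map out-of-vocabulary placeholders (here: None) to _unk_id,
        # returning a new list rather than mutating the input in place.
        return [self._unk_id if i is None else i for i in token_ids]
```

If `_unk_id` were left unset (`None`), the constructor's assertion would fail immediately, which is the guard the commit message refers to.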

Commits on Nov 15, 2019

  1. Fixing load_target_train_checkpoint with mixing setting (#960)

    * adding loading for mix
    
    * black style
    Yada Pruksachatkun committed Nov 15, 2019
    SHA: 7dc9965

Commits on Nov 20, 2019

  1. Commit daec5cf

Commits on Nov 22, 2019

  1. CCG update (#955)

    * generalize ccg to other transformer models
    
    * debug
    
    * I don't know who broke this at what time, but let's just fix it here now
    
    * ccg lazy iterator
    
    * debug
    
    * clean up
    
    * debug
    
    * debug ccg, minor cleanup
    HaokunLiu authored and Yada Pruksachatkun committed Nov 22, 2019
    SHA: 8a059b8

Commits on Nov 23, 2019

  1. add adversarial_nli tasks (#966)

    pyeres authored and sleepinyourhat committed Nov 23, 2019
    SHA: c181273

Commits on Nov 27, 2019

  1. Update README.md

    sleepinyourhat committed Nov 27, 2019
    SHA: 42b389f

Commits on Dec 9, 2019

  1. Citation fix

    sleepinyourhat committed Dec 9, 2019
    SHA: 18ca100