Fix tasks circular import #1296
Conversation
Force-pushed from d2522e9 to 4cea118
Codecov Report
@@              Coverage Diff                              @@
##        js/feature/easy_add_model     #1296      +/-     ##
=============================================================
+ Coverage                   49.82%    49.83%    +0.01%
=============================================================
  Files                         163       162        -1
  Lines                       11166     11170        +4
=============================================================
+ Hits                         5563      5567        +4
  Misses                       5603      5603
Continue to review full report at Codecov.
jiant/tasks/evaluate/core.py (Outdated)
import jiant.tasks.lib.templates.squad_style.core as squad_style
import jiant.tasks.lib.templates.squad_style.utils as squad_style_utils

from jiant.tasks.retrieval import AbductiveNliTask
As discussed, could we remove the individual imports and just do `import jiant.tasks.retrieval as tasks_retrieval`?
Switched to this style of import.
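For reference, a minimal sketch of the import-style change discussed above. The alias name `tasks_retrieval` and the attribute-access pattern are inferred from the review comment, not copied from the final diff:

```python
# Before (individual class imports in jiant/tasks/evaluate/core.py):
# from jiant.tasks.retrieval import AbductiveNliTask

# After: import the retrieval module once and refer to tasks through it,
# so evaluate/core.py no longer names individual task classes at import time.
import jiant.tasks.retrieval as tasks_retrieval

task_cls = tasks_retrieval.AbductiveNliTask
```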
Force-pushed from 4cea118 to 7e6a26b
* Update to Transformers v4.3.3 (#1266)
* use default return_dict in taskmodels and remove hidden state context manager in models.
* return hidden states in output of model wrapper
* Switch to task model/head factories instead of embedded if-else statements (#1268)
* Use jiant transformers model wrapper instead of if-else. Use taskmodel and head factory instead of if-else.
* switch to ModelArchitectures enum instead of strings
* Refactor get_output_from_encoder() to be member of JiantTaskModel (#1283)
* refactor getting output from encoder to be member function of jiant model
* switch to explicit encode() in jiant transformers model
* fix simple runscript test
* update to tokenizer 0.10.1
* Add tests for flat_strip() (#1289)
* add flat_strip test
* add list to test cases flat_strip
* mlm_weights(), feat_spec(), flat_strip() if-else refactors (#1288)
* moves remaining if-else statements to jiant model or replaces with model agnostic method
* switch from jiant_transformers_model to encoder
* fix bug in flat_strip()
* Move tokenization logic to central JiantModelTransformers method (#1290)
* move model specific tokenization logic to JiantTransformerModels
* implement abstract methods for JiantTransformerModels
* fix tasks circular import (#1296)
* Add DeBERTa (#1295)
* Add DeBERTa with sanity test
* fix tasks circular import
* [WIP] add deberta tests
* Revert "fix tasks circular import" (this reverts commit f924640)
* deberta tests passing with transformers 6472d8
* switch to deberta-v2
* fix get_mlm_weights_dict() for deberta-v2
* update to transformers 4.5.0
* mark deberta test_export as slow
* Update test_tokenization_normalization.py
* add guide to add a model
* fix test_export_model tests
* minor pytest fixes (add num_labels for rte, overnight flag fix)
* bugfix for simple api notebook
* bugfix for #1310
* bugfix for #1306: simple api notebook path name
* squad running
* 2nd bugfix for #1310: not all tasks have num_labels property
* simple api notebook back to roberta-base
* run test matrix for more steps to compare to master
* save last/best model test fix

Co-authored-by: Jesse Swanson <js11133@nyu.edu>
Importing `BatchMixin` in `primary` previously imported all tasks, resulting in a circular import with `tokenization_normalization()`. This PR fixes the circular imports by removing `import *` in the tasks `__init__.py`.
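To make the failure mode concrete, here is a minimal sketch of how a wildcard import in a package `__init__.py` can close an import cycle, and how dropping it breaks the cycle. The package layout (`pkg`, `pkg.core`, `pkg.heavy_tasks`, `pkg.utils`) is a hypothetical stand-in, not jiant's actual module structure:

```python
# pkg/__init__.py -- BEFORE (broken):
#     from pkg.heavy_tasks import *        # eagerly imports every task module
#
# pkg/heavy_tasks.py imports pkg.utils, and pkg.utils imports a name from the
# package itself (e.g. `from pkg import BatchMixin`), which re-enters
# pkg/__init__.py while it is still initializing -> circular import /
# "partially initialized module" error.

# pkg/__init__.py -- AFTER (fixed): no wildcard; re-export only the lightweight
# core class, which does not depend on any task module.
from pkg.core import BatchMixin  # noqa: F401

# Call sites that need concrete tasks now import the submodule explicitly
# instead of relying on the package-level wildcard:
# import pkg.heavy_tasks as heavy_tasks
```

The key point is that the package `__init__.py` no longer pulls in the task modules at import time, so utility modules can import `BatchMixin` from the package without re-triggering the task imports.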