Conversation
are we prepared to require torch 1.8 or higher?
Probably not, I was just checking with another version because of Roller's note about version 1.7.
requirements.txt
Outdated
@@ -50,4 +49,4 @@ Unidecode==1.1.1
urllib3>=1.26.5
websocket-client==0.56.0
websocket-server==0.4
jsonlines==1.2.0
can we put this back in alphabetical order?
Pseudo-alphabetical order, lol. There are some intentional non-monotonic places.
OK, yeah, I think we might've broken some stuff with the order wrong... sorry @mojtaba-komeili, could you just revert it to what it was before?
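(Aside: a throwaway sketch one could run locally to spot out-of-order entries; a strict check like this would also flag the intentional non-monotonic places mentioned above, so it is illustrative only.)

# Throwaway check: print adjacent requirements.txt entries that are not in
# case-insensitive alphabetical order. Illustrative only.
with open('requirements.txt') as f:
    names = [line.split('=')[0].split('>')[0].strip() for line in f if line.strip()]

for prev, cur in zip(names, names[1:]):
    if prev.lower() > cur.lower():
        print(f'out of order: {prev!r} then {cur!r}')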
tests/nightly/gpu/test_bb2.py
Outdated
@testing_utils.skipUnlessGPU
@unittest.skipIf(LOCAL, "Skipping Test because its slow and mem intensive")
@unittest.skipUnless(TRANSFORMER_INSTALLED, "Needs transformer, not installed.")
this is a bit funny because our GPU tests should always have transformers?
Yeah, I'm positive this should not be necessary.
The policy is usually to officially support the past two versions of PyTorch... so yeah, requiring 1.8+ is viable now, but AFAIK there's nothing we have that's 1.7-incompatible. If we're going to do that, I'd suggest a different PR to carve out 1.8/1.9 tests, and then rebase this.
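(A sketch of what such a carve-out could look like; the skip_unless_torch helper and the 1.8 bound are hypothetical, not something from this PR.)

import unittest

import torch
from packaging import version  # assumption: the packaging library is available


def skip_unless_torch(min_version: str):
    """Hypothetical decorator: skip a test unless torch >= min_version."""
    # Strip local build suffixes like '+cu111' before comparing.
    installed = version.parse(torch.__version__.split('+')[0])
    return unittest.skipUnless(
        installed >= version.parse(min_version),
        f'Needs torch >= {min_version}, found {torch.__version__}',
    )


@skip_unless_torch('1.8')
class TestTorch18OnlyFeatures(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)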
if TRANSFORMER_INSTALLED:
    SEARCH_QUERY_MODEL = ZOO_MEMORY_DECODER
    PERSONA_SUMMARY_MODEL = ZOO_QUERY_GENERATOR
    ZOO_BB2 = 'zoo:blenderbot2/blenderbot2_400M/model'
    ZOO_BB2_3B = 'zoo:blenderbot2/blenderbot2_3B/model'
    SEARCH_SERVER = '<SERVER_API>'
    common_opt = {
        'model': 'projects.blenderbot2.agents.blenderbot2:BlenderBot2RagAgent',
        # rag args
        'init_opt': 'arch/bart_large',
        'generation_model': 'bart',
        'retriever_debug_index': 'compressed',
        'label_truncate': 128,
        'text_truncate': 512,
        'batchsize': 4,
        'fp16': True,
        'model_parallel': True,
        # train args
        'task': 'convai2,wizard_of_wikipedia',
        'num_examples': 8,
    }


def _test_bb2_rag(retrieval_method: KnowledgeAccessMethod, **kwargs):
    opt = copy.deepcopy(common_opt)
    opt['knowledge_access_method'] = retrieval_method.value
    opt.update(dict(kwargs))
    print(' '.join([f'--{k} {v}' for k, v in opt.items()]))
    testing_utils.eval_model(opt, skip_test=True)
    torch.cuda.empty_cache()


def _test_bb2_fid(retrieval_method: KnowledgeAccessMethod, **kwargs):
    opt = copy.deepcopy(common_opt)
    opt['model'] = 'projects.blenderbot2.agents.blenderbot2:BlenderBot2FidAgent'
    opt['knowledge_access_method'] = retrieval_method.value
    opt.update(dict(kwargs))
    testing_utils.eval_model(opt, skip_test=True)
    torch.cuda.empty_cache()
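(For orientation, a sketch of how these helpers get called from actual test methods; the class and test names below are made up, and KnowledgeAccessMethod.MEMORY_ONLY is an assumption about the enum's members.)

# Illustrative only: invoking the helpers above from a test case.
@testing_utils.skipUnlessGPU
class TestBB2RagSketch(unittest.TestCase):
    def test_rag_memory_only(self):
        # MEMORY_ONLY is an assumed enum member; ZOO_BB2 is defined above.
        _test_bb2_rag(KnowledgeAccessMethod.MEMORY_ONLY, model_file=ZOO_BB2)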
why is this protected?
Because if we can't import projects.blenderbot2, we don't have constants such as ZOO_MEMORY_DECODER, so we need to skip everything here.
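(For readers following the thread: the guard being described looks roughly like the sketch below; the exact import paths are assumptions, not copied from the real test file.)

# Sketch of the guard described above; the import paths are assumptions.
try:
    from projects.blenderbot2.agents.sub_modules import (
        ZOO_MEMORY_DECODER,
        ZOO_QUERY_GENERATOR,
    )

    TRANSFORMER_INSTALLED = True
except ImportError:
    # Without the transformers dependency, projects.blenderbot2 (and the zoo
    # constants it defines) cannot be imported, so every test in this module
    # has to be skipped.
    TRANSFORMER_INSTALLED = False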
Actually, does CircleCI even run the tests in nightly/gpu if it is not running the GPU tests (the ones that have transformer)? I tried to debug by running pytest locally with transformer not installed, and they failed. But now I'm thinking maybe it doesn't work like that.
This marker runs the nightly gpu tests: https://github.com/facebookresearch/ParlAI/blob/master/.circleci/config.yml#L388
You can see that the deps under torchgpu1.7 are installed, which include transformers: https://github.com/facebookresearch/ParlAI/blob/master/.circleci/config.yml#L95
Patch description
Resolving the issues with the failing unit tests on CircleCI that were introduced after merging BB2 and its dependent projects: Personal Knowledge and Wizard of Internet.