
[Fix] fix generalized attention fp16 #1036

Merged: 4 commits into open-mmlab:master on May 23, 2021

Conversation

AronLin (Contributor) commented May 19, 2021

Motivation

Fix open-mmlab/mmdetection#1241.

The function get_position_embedding in GeneralizedAttention returns float32 values even in fp16 mode.

Modification

  • Cast the return values of the get_position_embedding function so that they match the input dtype in fp16 mode (see the sketch below).
  • Add fp16 unit tests for GeneralizedAttention with attention_type='1111'.
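
For illustration, here is a minimal sketch of the failure mode and the kind of cast applied. The helper below is a simplified stand-in with hypothetical names, not the actual mmcv code; the real change lives in mmcv/cnn/bricks/generalized_attention.py.

```python
# Minimal sketch (hypothetical helper, not the actual mmcv code): position
# embeddings built from torch.arange(...).float() are always float32, which
# clashes with fp16 feature maps unless the result is cast back.
import torch

def get_position_embedding(h, w, feat_dim, dtype=torch.float32):
    pos = torch.arange(h * w).float().unsqueeze(1)    # float32 by construction
    freq = torch.arange(feat_dim // 2).float()
    inv_freq = 1.0 / (10000 ** (2 * freq / feat_dim))
    emb = torch.cat([torch.sin(pos * inv_freq),
                     torch.cos(pos * inv_freq)], dim=1)
    return emb.to(dtype)  # the fix: cast to the caller's dtype

x = torch.randn(1, 16, 8, 8).half()                   # fp16 feature map
emb = get_position_embedding(8, 8, 16, dtype=x.dtype)
assert emb.dtype == torch.float16                     # consistent with fp16 mode
```

Without the cast, multiplying the float32 embedding against fp16 query/key projections raises a dtype-mismatch error under the old fp16 wrapper, which is what the added unit tests guard against.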

Comments

This bug does not exist with PyTorch >= 1.6.0 and mmcv >= 1.3.2: when torch.cuda.amp.autocast is applied, the mixed dtypes are handled automatically, so no error occurs there.
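
As a sketch of why autocast masks the issue (plain PyTorch, assuming a CUDA device; not mmcv code): inside an autocast region, matmul operands of mixed precision are cast to a common dtype automatically.

```python
# Sketch, assuming a CUDA device: under torch.cuda.amp.autocast
# (PyTorch >= 1.6), mixing a float32 embedding with fp16 features no longer
# errors, because autocast casts the matmul inputs itself.
import torch

if torch.cuda.is_available():
    q = torch.randn(64, 16, device='cuda').half()   # fp16 queries
    emb = torch.randn(16, 64, device='cuda')        # float32 embedding
    with torch.cuda.amp.autocast():
        out = q @ emb                               # no dtype-mismatch error
    assert out.dtype == torch.float16
```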

BC-breaking (Optional)

No

codecov bot commented May 20, 2021

Codecov Report

Merging #1036 (a4e751d) into master (b8c09f3) will increase coverage by 0.02%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #1036      +/-   ##
==========================================
+ Coverage   65.30%   65.32%   +0.02%     
==========================================
  Files         154      154              
  Lines        9891     9895       +4     
  Branches     1801     1801              
==========================================
+ Hits         6459     6464       +5     
- Misses       3099     3100       +1     
+ Partials      333      331       -2     
Flag        Coverage Δ
unittests   65.32% <100.00%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                             Coverage Δ
mmcv/cnn/bricks/generalized_attention.py   92.81% <100.00%> (ø)
mmcv/runner/hooks/logger/tensorboard.py    34.37% <0.00%> (-1.11%) ⬇️
mmcv/runner/hooks/logger/pavi.py           71.01% <0.00%> (+0.42%) ⬆️
mmcv/runner/hooks/logger/mlflow.py         81.25% <0.00%> (+0.60%) ⬆️
mmcv/runner/hooks/logger/wandb.py          69.69% <0.00%> (+0.94%) ⬆️
mmcv/runner/hooks/logger/base.py           70.87% <0.00%> (+1.94%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update b8c09f3...a4e751d.

@xvjiarui xvjiarui self-requested a review May 20, 2021 14:58
@ZwwWayne ZwwWayne merged commit 5be9593 into open-mmlab:master May 23, 2021
@AronLin AronLin deleted the fixGA branch May 23, 2021 07:47
Development

Successfully merging this pull request may close these issues.

Error when using the faster_rcnn_fpn_attention_1111_dcn_1x.py with the fp16 mode.