
[doc] Summary of the models fixes #6511

Merged (2 commits into huggingface:master) Aug 17, 2020

Conversation

stas00 (Contributor) commented Aug 16, 2020

  • improve readability - typos, punctuation, clearer sentences

2 questions:

  1. https://huggingface.co/transformers/model_summary.html#t5

I think the target part of the example is incorrect: it seems to be missing "cute".

Before:

For instance, if we have the sentence “My dog is very cute .”, and we decide to remove the token dog, is and cute, the input becomes “My very .” and the target is “ dog is . ”

Proposed change:

For instance, if we have the sentence “My dog is very cute .”, and we decide to remove the tokens: "dog", "is" and "cute", the encoder input becomes “My very .” and the target input becomes “ dog is cute .”
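To make the proposed wording concrete, here is a minimal sketch of how T5-style span corruption pairs up the encoder input and the target. The `<extra_id_n>` sentinel tokens are T5's convention for marking dropped spans; the helper name and structure below are illustrative, not the actual transformers implementation.

```python
# Illustrative sketch of T5-style span corruption (NOT the library's code):
# each dropped span is replaced by one sentinel token in the encoder input,
# and the target lists the sentinels followed by the tokens they replaced.

def span_corrupt(tokens, drop_indices):
    """Return (encoder_input, target) strings for the given dropped positions."""
    drop = set(drop_indices)
    enc_input, target = [], []
    sentinel = 0
    i = 0
    while i < len(tokens):
        if i in drop:
            # Start of a dropped span: one sentinel stands in for the whole span.
            tok = f"<extra_id_{sentinel}>"
            enc_input.append(tok)
            target.append(tok)
            while i < len(tokens) and i in drop:
                target.append(tokens[i])  # the span itself goes to the target
                i += 1
            sentinel += 1
        else:
            enc_input.append(tokens[i])
            i += 1
    target.append(f"<extra_id_{sentinel}>")  # final sentinel closes the target
    return " ".join(enc_input), " ".join(target)

tokens = ["My", "dog", "is", "very", "cute", "."]
enc, tgt = span_corrupt(tokens, drop_indices=[1, 2, 4])
print(enc)  # My <extra_id_0> very <extra_id_1> .
print(tgt)  # <extra_id_0> dog is <extra_id_1> cute <extra_id_2>
```

Note that "dog is" is adjacent, so it collapses into a single sentinel, while "cute" gets its own; the target does contain "cute", matching the proposed correction above.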

At https://huggingface.co/transformers/model_summary.html#full-vs-sparse-attention

  • in the "LSH attention" section:

"The attention mask is modified to mask the current token (except at the first position) because it will give a query and key equal (so very similar to each other). "

It's missing a word at the end of the sentence. Is it "equal attention"?
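Whatever the missing word is, the masking behavior the sentence describes can be sketched: in LSH attention a token's key equals its own query, so attending to itself would dominate, and the diagonal is masked out except at the first position (which would otherwise have nothing to attend to). The function below is an illustrative sketch under those assumptions, not Reformer's actual code.

```python
import numpy as np

def lsh_self_mask(seq_len):
    """Boolean attention mask (True = allowed): forbid attending to self,
    except at position 0, per the LSH attention description."""
    mask = np.ones((seq_len, seq_len), dtype=bool)
    np.fill_diagonal(mask, False)  # mask the current token everywhere...
    mask[0, 0] = True              # ...except at the first position
    return mask

print(lsh_self_mask(4).astype(int))
# [[1 1 1 1]
#  [1 0 1 1]
#  [1 1 0 1]
#  [1 1 1 0]]
```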

  • Also, in the "Local attention" section just after the previous one, it says:

"This is shown in Figure 2d of the paper"

but there is no link or name of the paper it's referring to.
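For readers without the figure, the local-attention idea the section describes can be sketched as a banded mask: each token attends only to neighbors within a fixed window. The symmetric window below is an assumption for illustration; actual local-attention implementations (e.g. Longformer's) differ in detail.

```python
import numpy as np

def local_attention_mask(seq_len, window):
    """Boolean mask where token i may attend to token j iff |i - j| <= window."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

print(local_attention_mask(5, 1).astype(int))
# [[1 1 0 0 0]
#  [1 1 1 0 0]
#  [0 1 1 1 0]
#  [0 0 1 1 1]
#  [0 0 0 1 1]]
```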

@sgugger

codecov bot commented Aug 16, 2020

Codecov Report

Merging #6511 into master will decrease coverage by 0.43%.
The diff coverage is 0.00%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #6511      +/-   ##
==========================================
- Coverage   80.37%   79.94%   -0.44%     
==========================================
  Files         156      156              
  Lines       28058    28058              
==========================================
- Hits        22552    22431     -121     
- Misses       5506     5627     +121     
Impacted Files Coverage Δ
src/transformers/configuration_utils.py 95.91% <ø> (-0.69%) ⬇️
src/transformers/generation_tf_utils.py 86.21% <ø> (-0.26%) ⬇️
src/transformers/generation_utils.py 96.94% <ø> (ø)
src/transformers/trainer.py 37.84% <0.00%> (ø)
src/transformers/modeling_tf_openai.py 22.58% <0.00%> (-72.26%) ⬇️
src/transformers/modeling_tf_flaubert.py 24.53% <0.00%> (-63.81%) ⬇️
src/transformers/tokenization_pegasus.py 45.31% <0.00%> (-50.00%) ⬇️
src/transformers/tokenization_marian.py 66.66% <0.00%> (-32.50%) ⬇️
src/transformers/tokenization_reformer.py 51.66% <0.00%> (-30.00%) ⬇️
src/transformers/tokenization_auto.py 95.55% <0.00%> (-2.23%) ⬇️
... and 10 more

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 24107c2...78bd9e3. Read the comment docs.

@JetRunner JetRunner merged commit 49d8076 into huggingface:master Aug 17, 2020
@stas00 stas00 deleted the doc-models branch August 17, 2020 14:57
Zigur pushed a commit to Zigur/transformers that referenced this pull request Oct 26, 2020
* [doc] Summary of the models fixes

* correction
fabiocapsouza added a commit to fabiocapsouza/transformers that referenced this pull request Nov 15, 2020