Hi,
Can anyone explain why the code snippet in interact.py (underlined in red in the screenshot) is necessary?
As far as I know, the logits returned by OpenAIGPTLMHeadModel have the shape (batch_size, sequence_length, vocab_size).
Why is only the last token of the output sequence taken as the predicted next token?
Moreover, why do we have to generate the output text iteratively when the model itself returns logits for the full sequence rather than for a single token?
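For context, here is a minimal sketch of the kind of greedy decoding loop I understand interact.py to implement (this is my own reconstruction, not the repo's code, and it assumes a recent transformers release where the forward pass returns an output object with a .logits attribute). My understanding is that logits[:, i, :] is the model's distribution over the token at position i+1, so every position except the last merely re-predicts tokens already present in the input; only logits[:, -1, :] predicts a genuinely new token, which is why generation has to be iterative:

```python
import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTLMHeadModel

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
model.eval()

# Encode a prompt: shape (1, seq_len)
input_ids = tokenizer.encode("hello how are", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):  # generate up to 20 new tokens
        # logits has shape (batch_size, sequence_length, vocab_size)
        logits = model(input_ids).logits
        # Only the last position predicts a token not already in the input
        next_token_logits = logits[0, -1, :]
        # Greedy choice for simplicity (interact.py samples with top-k/top-p instead)
        next_token = torch.argmax(next_token_logits).unsqueeze(0).unsqueeze(0)
        # Append the new token and feed the extended sequence back in
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```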