
Error when running step (9) in fine_tuning function! #8

Open
abdelrahmaan opened this issue Jan 3, 2022 · 7 comments

Comments

@abdelrahmaan

I get this error:
RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)


elmadany commented Jan 3, 2022

Hi @abdelrahmaan

Please provide the full error.

@abdelrahmaan
Author

Hi @elmadany
Here you are.

[INFO] step (9) start fine_tuning
Epoch: 0%| | 0/5 [00:00<?, ?it/s]

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 report_df = fine_tuning(config)

10 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2042 # remove once script supports set_grad_enabled
2043             _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2044 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2045
2046

RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)


elmadany commented Jan 3, 2022

This error seems to be due to the torch version; the training code has not been updated.
I recommend using the Hugging Face Trainer instead. Here are some example notebooks:
https://github.com/huggingface/transformers/tree/master/notebooks
First, select the notebook that matches your task, then use "UBC-NLP/MARBERT" as the model name.
I hope this helps.
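[Editor's note: this error typically means the token indices reach the embedding layer as floats instead of integers. A minimal sketch of the dtype issue and the usual fix, using a toy embedding layer rather than the repository's actual training code:]

```python
import torch

# A toy embedding layer standing in for the model's word embeddings.
emb = torch.nn.Embedding(num_embeddings=100, embedding_dim=8)

# Indices that arrive as floats (e.g. built from a numpy float array)
# trigger: "Expected tensor for argument #1 'indices' to have one of the
# following scalar types: Long, Int; but got ... FloatTensor instead".
float_ids = torch.tensor([[1.0, 5.0, 7.0]])

# Casting the IDs to long before the forward pass resolves it.
out = emb(float_ids.long())
print(out.shape)  # torch.Size([1, 3, 8])
```

In a training loop, the equivalent change is to cast the batch's input IDs (e.g. `input_ids = input_ids.long()`) before passing them to the model.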


nboudad commented Mar 9, 2022

I get the same error!
@abdelrahmaan, did you find a solution?

@abdelrahmaan
Author

I fine-tuned MARBERT using ktrain.
Try using that.

@nboudad

nboudad commented Mar 10, 2022

Thanks @abdelrahmaan
I fixed the issue by pinning the package versions:
!pip install GPUtil pytorch_pretrained_bert transformers==4.12.2 sentencepiece==0.1.96

@Laratomeh

Laratomeh commented Dec 26, 2022

Hello @elmadany
Could you please tell me how to print the classification report in this code? It is urgent.
https://colab.research.google.com/drive/1M0ls7EPUi1dwqIDh6HNfJ5y826XvcgGX?usp=sharing
I want to know the F1 score for each label.
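[Editor's note: for anyone with the same question, per-label F1 scores can be printed with scikit-learn's `classification_report`. A minimal sketch, assuming you have already collected the true and predicted labels from the evaluation loop (the label values here are illustrative):]

```python
from sklearn.metrics import classification_report

# Illustrative labels; in the notebook these would be the test-set
# labels and the model's predictions from the evaluation loop.
y_true = [0, 1, 1, 2, 2, 0]
y_pred = [0, 1, 0, 2, 2, 1]

# The report lists precision, recall, and F1 for each label,
# plus accuracy and macro/weighted averages.
print(classification_report(y_true, y_pred, digits=4))
```

Passing `output_dict=True` instead returns the same numbers as a nested dict, which is convenient if you want to extract a single label's F1 programmatically.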
