Error while trying the regression model
C:\Users\XTEND\anaconda3\envs\ftorch_gpu\python.exe "C:/Program Files/JetBrains/PyCharm Community Edition 2022.3.2/plugins/python-ce/helpers/pydev/pydevd.py" --multiprocess --qt-support=auto --client 127.0.0.1 --port 63293 --file C:\Users\XTEND\PycharmProjects\Regression_Wav2vec\regression_model_train.py
Connected to pydev debugger (build 223.8617.48)
Dataset({
features: ['name', 'path', 'emotion'],
num_rows: 6925
})
Dataset({
features: ['name', 'path', 'emotion'],
num_rows: 1732
})
A regression problem with 3 items: [0, 1, 2]
C:\Users\XTEND\anaconda3\envs\ftorch_gpu\lib\site-packages\transformers\configuration_utils.py:380: UserWarning: Passing gradient_checkpointing to a config initialization is deprecated and will be removed in v5 Transformers. Using model.gradient_checkpointing_enable() instead, or if you are using the Trainer API, pass gradient_checkpointing=True in your TrainingArguments.
warnings.warn(
regression
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
Ignored unknown kwarg option normalize
The target sampling rate: 16000
Map: 100%|██████████| 100/100 [00:01<00:00, 60.78 examples/s]
Map: 100%|██████████| 100/100 [00:01<00:00, 56.42 examples/s]
Some weights of Wav2Vec2ForSpeechClassification were not initialized from the model checkpoint at lighteternal/wav2vec2-large-xlsr-53-greek and are newly initialized: ['classifier.out_proj.weight', 'classifier.out_proj.bias', 'classifier.dense.bias', 'classifier.dense.weight', 'wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'wav2vec2.encoder.pos_conv_embed.conv.parametrizations.weight.original1']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
0%| | 0/60 [00:00<?, ?it/s]C:\Users\XTEND\anaconda3\envs\ftorch_gpu\lib\site-packages\torch\amp\autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn(
C:\Users\XTEND\anaconda3\envs\ftorch_gpu\lib\site-packages\torch\utils\checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
warnings.warn(
Traceback (most recent call last):
File "C:\Users\XTEND\anaconda3\envs\ftorch_gpu\lib\contextlib.py", line 153, in exit
self.gen.throw(typ, value, traceback)
File "C:\Users\XTEND\anaconda3\envs\ftorch_gpu\lib\site-packages\accelerate\accelerator.py", line 988, in accumulate
yield
File "C:\Users\XTEND\anaconda3\envs\ftorch_gpu\lib\site-packages\transformers\trainer.py", line 1892, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "C:\Users\XTEND\PycharmProjects\Regression_Wav2vec\regression_model_train.py", line 456, in training_step
self.scaler.scale(loss).backward()
AttributeError: 'CTCTrainer' object has no attribute 'scaler'
python-BaseException
0%| | 0/60 [00:18<?, ?it/s]
Process finished with exit code -1073741510 (0xC000013A: interrupted by Ctrl+C)
In the code above, replace CTCTrainer with the standard transformers Trainer class. The tutorial is outdated for newer versions of accelerate and transformers: its custom training_step still calls self.scaler, which recent Trainer releases no longer define.
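A minimal sketch of that swap, assuming the variable names used in the tutorial script (model, training_args, data_collator, train_dataset, eval_dataset, compute_metrics, processor); the stock Trainer handles mixed precision internally, so no scaler attribute is involved:

```python
from transformers import Trainer

# Minimal sketch, not the tutorial's exact code: use the stock Trainer instead of
# the custom CTCTrainer. model, training_args, data_collator, train_dataset,
# eval_dataset, compute_metrics and processor are assumed to be the objects
# already created earlier in regression_model_train.py.
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    tokenizer=processor.feature_extractor,
)
trainer.train()
```

If you prefer to keep the custom training_step, changing the failing line from self.scaler.scale(loss).backward() to self.accelerator.backward(loss) should also work on recent versions: the Trainer now delegates gradient scaling to accelerate, which is why the scaler attribute no longer exists.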