Training Evaluation Display on VSCode #22694

Open · 2 of 4 tasks
sciencecw opened this issue Apr 10, 2023 · 16 comments
Comments

@sciencecw

sciencecw commented Apr 10, 2023

System Info

  • macOS Ventura 13.2
  • VSCode 1.77.1
    • Chromium 102.0.5005.196
    • Jupyter extension v2023.3.1000892223
  • Transformers 4.26.1

Who can help?

Not sure. Please let me know if this is a VSCode issue.

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb
Run the notebook (I commented out the parts pushing to hub)

Expected behavior

The table of metrics produced during the evaluation phase of training fails to show up as an HTML object in VSCode. There seems to be no similar issue on Colab or AWS.

Currently, the output looks like this (repeated once for each evaluation run during training):

```
0.3564084804084804
{'eval_loss': 1.6524937152862549, 'eval_f1': 0.3564084804084804, 'eval_accuracy': 0.36, 'eval_runtime': 4.6151, 'eval_samples_per_second': 10.834, 'eval_steps_per_second': 1.517, 'epoch': 0.26}
***** Running Evaluation *****
  Num examples = 50
  Batch size = 8
{'loss': 1.6389, 'learning_rate': 3.611111111111111e-05, 'epoch': 0.28}
```

@sgugger
Collaborator

sgugger commented Apr 10, 2023

We had specifically excluded VSCode in the past, as the widgets were not working properly there. Could you try installing from source and see whether commenting out those two lines results in a nice training display?
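
For reference, a minimal sketch of the environment-variable angle, assuming the exclusion mentioned here is the VSCODE_PID check inside transformers' is_in_notebook() detection (this is an assumption about which two lines are meant, not confirmed in this thread):

```python
# Hedged sketch: transformers chooses between the console and the notebook
# (HTML) progress display via is_in_notebook(), which historically bailed out
# when the VSCODE_PID environment variable was set. Unsetting it *before* the
# transformers import may re-enable the HTML table; side effects untested.
import os

os.environ.pop("VSCODE_PID", None)  # must run before importing transformers

from transformers.utils import is_in_notebook

print(is_in_notebook())  # expect True inside a Jupyter kernel
```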

@sciencecw
Author

What do you mean by install from source?

@sciencecw
Author

I installed the package from source. I can see the table formatted correctly now, but it stops updating after the first evaluation.
[screenshot: Screenshot 2023-04-11 at 8.40.34 PM — the table renders once, then stops updating]

I guess that is the widget problem you're referring to. Is there a workaround for people on VSCode so it doesn't print out a thousand lines of evaluation? Like hiding the printout and retrieving evaluation stats after training is done?

@sgugger
Collaborator

sgugger commented Apr 12, 2023

You can filter the log level of the printed information with transformers.utils.logging.set_verbosity_warning() (to suppress info-level messages such as the evaluation logs).
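
To illustrate this suggestion, and the question above about retrieving the stats after training, a minimal sketch; `trainer` is assumed to be an already-configured `Trainer` like the one in the linked notebook:

```python
# Silence info-level logs during training, then read the accumulated
# metrics afterwards from the trainer state.
from transformers.utils import logging

logging.set_verbosity_warning()  # hides "***** Running Evaluation *****" etc.
trainer.train()

# Every logged dict (training loss, eval metrics, ...) accumulates here:
eval_records = [e for e in trainer.state.log_history if "eval_loss" in e]
for record in eval_records:
    print(record["epoch"], record["eval_loss"])
```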

@MichalWeisman

I have also encountered this problem, and for procedural reasons, I cannot install from source.
It would be very helpful if this issue could be addressed, please :)

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot closed this as completed Jun 2, 2023
@lainisourgod

My trainer output looks very bad:
[screenshot: raw dict output from Trainer]
```python
from transformers import Trainer, TrainingArguments

# batch_size, num_epochs, model, dataset and compute_metrics are defined
# earlier in the notebook.
args = TrainingArguments(
    "pokemon-habitat",
    evaluation_strategy="epoch",
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=num_epochs,
    use_mps_device=True,
)

trainer = Trainer(
    model,
    args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
```

transformers: 4.30.2

@JackismyShephard

I am having the exact same issues as @lainisourgod

@davies-w
Contributor

davies-w commented May 6, 2024

I've just started having this issue in a VSCode environment. It was working fine, then suddenly stopped and started printing out the raw dict again. It may have started after I silenced some warnings using:

```python
from transformers import logging as transformers_logging
transformers_logging.set_verbosity_error()
```

@Joacodef

I am also having the exact same issue as @lainisourgod; it looks terrible.

@LysandreJik
Member

cc @muellerzr

@muellerzr muellerzr reopened this Jun 3, 2024
@huggingface huggingface deleted a comment from github-actions bot Jun 28, 2024
@huggingface huggingface deleted a comment from github-actions bot Jul 23, 2024
@amyeroberts
Collaborator

Gentle ping @muellerzr

@huggingface huggingface deleted a comment from github-actions bot Aug 19, 2024
@huggingface huggingface deleted a comment from github-actions bot Sep 16, 2024
@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@HERIUN

HERIUN commented Nov 21, 2024

Same problem in VSCode; this is all I see:
[screenshot: raw metrics dict]

transformers == 4.46.3
evaluate == 0.4.3

@Makankn

Makankn commented Nov 22, 2024

Same problem here as well.

@Rocketknight1
Member

Reopening since this seems to still be happening! If anyone in the community wants to try debugging it, the relevant class is NotebookProgressBar.
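
For anyone picking this up, a minimal debugging sketch that drives the class by hand in a notebook cell, assuming the constructor and update() signatures from transformers/utils/notebook.py (adjust if your version differs):

```python
# Sketch: exercise NotebookProgressBar directly in a VSCode notebook cell to
# check whether the HTML display updates at all, independent of the Trainer.
import time

from transformers.utils.notebook import NotebookProgressBar

bar = NotebookProgressBar(total=50)
for step in range(1, 51):
    bar.update(step, comment=f"step {step}")  # should redraw the HTML widget
    time.sleep(0.05)
```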
