The training and validation loss dropped to -0.6, and it stopped dropping #1271
Comments
Hello, sorry for the late response! Cheers
There are a couple of predefined trainers with more epochs, see https://github.com/MIC-DKFZ/nnUNet/blob/b4e97fe38a9eb6728077678d4850c41570a1cb02/nnunetv2/training/nnUNetTrainer/variants/training_length/nnUNetTrainer_Xepochs.py. You can invoke these trainers using the -tr flag, e.g. nnUNetv2_train DATASET_NAME_OR_ID UNET_CONFIGURATION FOLD -tr nnUNetTrainer_8000epochs
I have also encountered the situation where the loss is negative. May I ask if you have solved it? What is the basis for a negative loss value?
@FabianIsensee, @dojoh I have the same question: why is the loss shown as negative?
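For context on the negative values: nnU-Net's default training loss combines cross-entropy with a soft Dice term, and the Dice term is the negative of the soft Dice score, so it lies in [-1, 0] and can pull the total loss below zero. A minimal single-class sketch of that term (plain-Python, hypothetical helper name, not nnU-Net's actual batched-tensor implementation):

```python
def soft_dice_loss(pred_probs, target, eps=1e-5):
    """Soft Dice loss for one class: the negative soft Dice score, in [-1, 0].

    pred_probs: predicted foreground probabilities per voxel (flattened list).
    target:     binary ground-truth labels per voxel (flattened list).
    """
    intersect = sum(p * t for p, t in zip(pred_probs, target))
    denom = sum(pred_probs) + sum(target)
    # Perfect overlap -> Dice score 1 -> loss -1; no overlap -> loss near 0.
    return -(2.0 * intersect + eps) / (denom + eps)

# A perfect prediction drives the Dice term to -1, which is why the
# combined Dice + cross-entropy loss can go below zero.
print(soft_dice_loss([0., 1., 1., 0.], [0., 1., 1., 0.]))  # close to -1.0
```

So a loss plateauing around -0.6 is not a bug: it indicates the Dice overlap is high and training has largely converged.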
|
|
Hello FabianIsensee,
I am using nnU-Net to train on my own data. After 1000 epochs of training, both my training loss and validation loss are around -0.6 and no longer decrease. What is going on?
Below is my training process: