Thank you for this great library for continual learning datasets.
I wanted to ask about label mapping when we are at task 2 of class-incremental training. Let's say I am training on CIFAR100 with an increment of 10 classes per task. The first task will have labels [0,1,2,3,4,5,6,7,8,9]. For the second task, the labels are [10,11,12,13,14,15,16,17,18,19]. Do you map these to [0,1,2,3,4,5,6,7,8,9], or use some loss function other than cross-entropy? If you map them, can you please point out the relevant piece of code? And how do you handle this at inference time?
Thanks in advance.
Hi @Areeb2735 , thanks for your question.
We do not remap the class values in class-incremental learning, but we usually extend the output layer size.
You can use the loss function of your choice to train.
As long as you do not need the task index for inference, you can do whatever you want and stay within the "class incremental" framework.
Have a nice day
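To illustrate "extending the output layer" while keeping the global labels 10-19 untouched, here is a toy sketch in plain Python. The `extend_head` helper is hypothetical (it is not part of continuum's API); it models the classifier head as a list of per-class weight rows, and in a real PyTorch model you would instead replace the final `nn.Linear` with a wider one and copy the old weights over.

```python
import random

def extend_head(weight_rows, n_new, feat_dim, seed=0):
    """Grow a classifier head for new classes: keep the rows of the
    old classes intact and append freshly initialised rows.
    Toy model: the head is a list of weight rows, one per class."""
    rng = random.Random(seed)
    new_rows = [[rng.uniform(-0.1, 0.1) for _ in range(feat_dim)]
                for _ in range(n_new)]
    return weight_rows + new_rows

# Task 1: 10 classes, global labels 0-9 -> head has 10 output units.
head = extend_head([], 10, feat_dim=4)
old_rows = [row[:] for row in head]

# Task 2: global labels 10-19 arrive unchanged; we only add 10 more
# output units, so cross-entropy can be used directly on labels 0-19.
head = extend_head(head, 10, feat_dim=4)

assert len(head) == 20          # 20 classes known so far
assert head[:10] == old_rows    # old class weights were not touched
```

At inference time the model then simply predicts over all units seen so far, with no task index required.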
And this makes sense when you are using a rehearsal buffer. What if we do not use the data from the previous task? I guess we would be required to make a new classifier, and then we would need to map the labels to between 0 and 9. Please do let me know if I am wrong.
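For the case described above (a fresh per-task classifier without rehearsal), the global-to-local mapping could be sketched like this. The `remap_labels` helper is a hypothetical illustration, not something provided by continuum:

```python
def remap_labels(labels, classes_in_task):
    """Map the global class ids seen in this task to a local 0..K-1
    range, so a fresh K-way head can be trained with cross-entropy."""
    global_to_local = {c: i for i, c in enumerate(sorted(classes_in_task))}
    return [global_to_local[y] for y in labels]

# Task 2 of CIFAR100 with increment 10: global labels are 10-19.
task2_labels = [12, 10, 19, 15]
local = remap_labels(task2_labels, classes_in_task=range(10, 20))
# local == [2, 0, 9, 5]
```

Note that with a separate head per task you would then need the task index at inference time to pick the right head, which moves you from class-incremental to task-incremental learning.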