
Validation and test sets seem to be the same? #14

@koushiksrivats

Dear Authors,
Thanks for the great work.

I was going through your code to understand how each of the four datasets is used for cross-dataset training and testing.

Observations:

  1. As per the code in the data_merge file, you are loading the target dataset as your test set (which seems fine).

  2. However, you also seem to be using the target dataset as your validation set: after every epoch you compute the HTER and AUC scores on it and keep the epoch with the best score on the target dataset (see the training file for reference). Example: take the OCIM protocol. For every epoch, you appear to train on O, C, and I, test on M, and choose the epoch with the best score on M. A sketch of what I understand is shown after this list.
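To make sure I am reading the code correctly, here is a minimal sketch of the selection loop I am describing for the O&C&I -> M setting. All names here (train_one_epoch, compute_hter_auc, oci_loaders, m_loader) are placeholders of mine, not identifiers from your repository:

```python
import random

def train_one_epoch(model, source_loaders):
    """Stand-in for one epoch of training on the source domains O, C, I."""
    pass

def compute_hter_auc(model, loader):
    """Stand-in for the per-epoch HTER / AUC evaluation."""
    return random.uniform(0.0, 50.0), random.uniform(0.5, 1.0)

model, oci_loaders, m_loader = object(), [], []   # placeholders for illustration
best_hter, best_epoch = float("inf"), -1

for epoch in range(100):
    train_one_epoch(model, oci_loaders)             # train on O + C + I
    hter, auc = compute_hter_auc(model, m_loader)   # evaluate on the *target* M
    if hter < best_hter:                            # keep the checkpoint that
        best_hter, best_epoch = hter, epoch         # scores best on M itself
```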

Questions:

  • Is this approach valid? Shouldn't there be a validation set drawn from O, C, and I that is used for evaluation during training, with the finally chosen model then tested on M to compute the HTER and AUC scores? (A sketch of what I mean follows below.)
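For contrast, this is the kind of protocol I was expecting, reusing the stand-ins from the sketch above; split_validation is likewise a hypothetical helper of mine, not something from your code:

```python
def split_validation(loaders, ratio=0.1):
    """Hypothetical helper: hold out a fraction of the O+C+I data as validation."""
    return loaders, loaders   # stand-in; a real split would partition the data

oci_train, oci_val = split_validation(oci_loaders, ratio=0.1)

best_hter, best_epoch = float("inf"), -1
for epoch in range(100):
    train_one_epoch(model, oci_train)               # train on the O+C+I train split
    hter, _ = compute_hter_auc(model, oci_val)      # select on source-domain val
    if hter < best_hter:
        best_hter, best_epoch = hter, epoch

# M is touched only once, after model selection is finished
final_hter, final_auc = compute_hter_auc(model, m_loader)
```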

I kindly request you to clarify this.

Thanks
