I might be missing something, but there seems to be a mismatch between the label ordering when preparing the training data vs. when doing inference in the provided Jupyter notebook.
In the 'datasets.py' file, in the 'transform_labels' method, the order is: "dict_labels = {'positive': 0, 'neutral':1, 'negative':2}"
But in the notebook: "labels = {0:'neutral', 1:'positive', 2:'negative'}"
In both cases, 'negative'=2, but 'positive' and 'neutral' are switched.
Is there a reason for this that I am missing?
Thank you!
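For illustration, here is a minimal Python sketch (using only the two dictionaries quoted above) that shows how indices 0 and 1 map to different classes in the two files:

```python
# Mapping used when preparing the training data ('transform_labels' in 'datasets.py')
train_labels = {'positive': 0, 'neutral': 1, 'negative': 2}

# Mapping used when decoding predictions in the notebook
infer_labels = {0: 'neutral', 1: 'positive', 2: 'negative'}

# Compare what each index means in the two mappings
for idx in range(3):
    train_name = next(k for k, v in train_labels.items() if v == idx)
    print(idx, train_name, infer_labels[idx])
# 0 positive neutral   <- disagree
# 1 neutral  positive  <- disagree
# 2 negative negative  <- agree
```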
Thanks for pointing it out. The transform_labels method in dataset.py is used to prepare a different dataset (financialPhraseBankDataset), and it is not used by the inference notebook code. If you use our notebook, I assume you want the model fine-tuned on the analyst-tone data, where the labels are {0:'neutral', 1:'positive', 2:'negative'}.
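To make the distinction concrete, here is a minimal sketch of decoding predictions with the notebook's label map. It assumes a Hugging Face-style sequence-classification checkpoint; "your-analyst-tone-checkpoint" is a placeholder, not the actual model name from the repo.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Label map used by the inference notebook (analyst-tone fine-tuned model)
labels = {0: 'neutral', 1: 'positive', 2: 'negative'}

# Placeholder checkpoint name; substitute the analyst-tone fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("your-analyst-tone-checkpoint")
model = AutoModelForSequenceClassification.from_pretrained("your-analyst-tone-checkpoint")

inputs = tokenizer("Earnings beat expectations this quarter.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the predicted class index with the notebook's label map
pred = logits.argmax(dim=-1).item()
print(labels[pred])
```

The transform_labels mapping in datasets.py applies only to the financialPhraseBankDataset preparation path, so the two dictionaries never need to agree.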