Hello, thanks for sharing the code of TNet.
However, I noticed an error in the "read" function of TNet. Specifically, in utils.py:
words.append(t.strip(end))
target_words.append(t.strip(end))
Using t.strip(end) causes an error, because strip treats its argument as a set of characters to remove from both ends: for example, the output of 'nicki/n'.strip('/n') is 'icki' rather than 'nicki'.
When I use t[:-2] in place of t.strip(end):
words.append(t[:-2])
target_words.append(t[:-2])
I find that the best accuracy and F1-score reported in your paper cannot be achieved on the three datasets (Laptop, REST, TWITTER).
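For reference, here is a minimal sketch of the pitfall, runnable on its own (the token 'nicki/n' and the tag '/n' are taken from the example above; removesuffix assumes Python 3.9+):

token = 'nicki/n'
tag = '/n'

# Pitfall: strip treats its argument as a character set, so it removes
# any run of '/' or 'n' from BOTH ends; the leading 'n' of 'nicki' is lost.
print(token.strip(tag))         # -> 'icki'

# The fix above: drop exactly the last two characters (the sentiment tag).
print(token[:-2])               # -> 'nicki'

# Equivalent, more explicit alternative (Python 3.9+):
print(token.removesuffix(tag))  # -> 'nicki'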
Yes, you are right. This pre-processing error indeed affects the final performance (a 1-2% accuracy drop). You can refer to issue 4 for more information.
Given this pre-processing error and the fact that Theano is no longer maintained, I highly recommend using ABSA-PyTorch, a PyTorch-based implementation of many ABSA models including TNet, for reproduction.