Hello everybody (@nandana, @gaetangate),

I am trying to use GenRL; however, I am encountering some problems for which I kindly ask for your support.
I am mainly having problems with the `knowledge_val.py` file. Could you share the cache files, or explain how I can produce them? I mean the files required by the `--val_cache` parameter of `knowledge_val.py`.
Executing the commands in the GenRL README, it is impossible for me to run `knowledge_val.py`, because the file passed as `--model_output` (`data/lcquad2/lcquad1_test_GenRL.json`) has the keys `q_id`, `question`, etc., which differ from the required `id`, `text`, etc. Where am I going wrong?
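For now I have worked around this with a small conversion script that simply renames the keys before running `knowledge_val.py`. This is only my guess at the expected format (the target field names `id` and `text`, and the mapping of the remaining keys, are assumptions on my side):

```python
import json

# My own guess at the key mapping expected by knowledge_val.py; please correct me if it is wrong.
KEY_MAP = {"q_id": "id", "question": "text"}

with open("data/lcquad2/lcquad1_test_GenRL.json", encoding="utf-8") as f:
    records = json.load(f)

# Rename the keys of every record, leaving any unknown keys untouched.
converted = [{KEY_MAP.get(k, k): v for k, v in record.items()} for record in records]

with open("data/lcquad2/lcquad1_test_GenRL_converted.json", "w", encoding="utf-8") as f:
    json.dump(converted, f, ensure_ascii=False, indent=2)
```

Is this the right idea, or are these fields supposed to come from the cache files instead?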
Thank you in advance.
Sincerely,
Andrea
Hi @nandana,

Sorry for bothering you again, do you have any updates?

Could you also share the code for the dataset construction? In the paper you mention a ranking strategy based on word embeddings; could you share the preprocessing code as well?
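To check that I am on the same page, is the ranking step roughly something like the sketch below? This is purely my own guess (averaged word vectors plus cosine similarity); the `embed` helper, the `vectors` lookup, and the candidate list are placeholders, not from your code.

```python
import numpy as np

def embed(text, vectors):
    # Average the word vectors of the tokens we have embeddings for (my assumption).
    dim = len(next(iter(vectors.values())))
    vecs = [vectors[w] for w in text.lower().split() if w in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def rank_candidates(question, candidates, vectors):
    # Sort candidate strings by cosine similarity to the question embedding.
    q_vec = embed(question, vectors)
    return sorted(candidates, key=lambda c: cosine(q_vec, embed(c, vectors)), reverse=True)
```

If the actual preprocessing differs (different embeddings, tokenization, or scoring), even a pointer to the script or a short description would already help a lot.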
Thanks again for your brilliant work :)
Kind regards,
Andrea