Model is predicting empty string for custom python dataset #124
Comments
Hi @Tamal-Mondal,
Did you also re-train the model after updating the config? I see that you get about F1=0.50 in the training logs.
Uri
Thanks @urialon for the quick reply. Yes, I started training from scratch after making the config changes. In the case of "training-logs-2", I was still getting output like "the|the|the|the". I started getting empty predictions (see training-logs-3) from step 3, i.e. when I applied more data cleaning steps. One more thing: after applying so many data-cleaning constraints (no punctuation, no numbers, etc.), my training dataset shrank to 1.6k examples; I am not sure whether this small amount of training data can be the issue (I think the results still should not be this bad).
Regards,
Tamal Mondal
Hi @urialon, sorry to bother you again. I still haven't understood the problem with my approach and am waiting for your reply. If you could take a look and suggest something, it would be a great help.
Thanks & Regards,
Tamal Mondal
Hey @Tamal-Mondal,
The small number of examples can definitely be the issue. You can try to train on the python150k dataset first and, after convergence, train on the additional 1,600 examples.
As an orthogonal idea: in another project, we have recently released a multi-lingual model called PolyCoder (paper: https://arxiv.org/pdf/2202.13169.pdf, code: https://github.com/VHellendoorn/Code-LMs).
Best,
Uri
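For reference, here is a minimal sketch of the two-stage schedule suggested above, written as a Python driver script. The flag names (--data/--test/--save/--load) follow the code2seq README, but all paths and the checkpoint name are hypothetical, and whether --load can be combined with --data to continue training depends on the code2seq version, so treat this as an assumption rather than a verified recipe.

```python
import subprocess

# Stage 1: pre-train on the large python150k dataset (paths are placeholders).
subprocess.run([
    "python3", "code2seq.py",
    "--data", "data/python150k/python150k",
    "--test", "data/python150k/python150k.val.c2s",
    "--save", "models/python150k/model",
], check=True)

# Stage 2: after convergence, continue training on the small (~1.6k examples) custom
# dataset, starting from the pre-trained checkpoint. "model_best" is a placeholder;
# use whatever checkpoint file stage 1 actually wrote.
subprocess.run([
    "python3", "code2seq.py",
    "--data", "data/custom-python/custom-python",
    "--test", "data/custom-python/custom-python.val.c2s",
    "--save", "models/custom-python/model",
    "--load", "models/python150k/model_best",
], check=True)
```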
No problem @urialon, thanks for the suggestions. I will try and let you know.
Hi @urialon, Here are some updates on this issue.
Original: Get|default|session|or|create|one|with|a|given|config
Predicted (top-1): Get|a
As you can see, those predictions are way too short, and this is after convergence (in just 17 epochs). I changed the config for summarization as you suggested in some previous issues. I think the problem can still be the dataset size, target summary length, etc. (do let me know if you have any other observations). I am attaching the logs.
Thanks & Regards,
Tamal Mondal
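For context, this is roughly the kind of config change being referred to: a hedged sketch assuming code2seq's Config class and the field names from its config.py (e.g. MAX_TARGET_PARTS); the values below are illustrative assumptions, not the exact settings used in this thread.

```python
from config import Config  # code2seq's config.py

def get_summarization_config(args):
    """Start from the default code2seq config and relax the target-side limits
    so the decoder can emit docstring-length summaries instead of short method names."""
    config = Config.get_default_config(args)
    # Assumed values for illustration; tune for your own dataset.
    config.MAX_TARGET_PARTS = 30          # default targets are short method-name subtoken sequences
    config.TARGET_VOCAB_MAX_SIZE = 30000  # natural-language summaries need a larger target vocabulary
    return config
```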
Yes, this sounds correct!
Oh yes, that can definitely be the issue.
You can try to train on the python150k first, and after convergence --
train on the additional 1600 examples.
Best,
Uri
Hi @urialon,
As mentioned in one of the previous issues, I am trying to train and test Code2Seq for the code summarization task on our own Python dataset. I am able to train the model, but the predictions don't seem to be correct. This issue seems similar to #62, which was also not fully resolved. Following are the things that I have tried:
1. The first time, I trained with the default config; after a couple of epochs, the predicted text for all cases was like "the|the|the|the|the|the".
2. Following the suggestions in #17 (Code Captioning Task) and #45 (reproducing the code documentation results from the paper), I updated the model config to make it suitable for predicting longer sequences. The predictions were still similar, but the length of the predicted texts varied, which is probably because I changed MAX_TARGET_PARTS in the config.
3. Next, I followed the suggestions in #62 (Empty hypothesis when periods are included in dataset) and made sure that there are no extra delimiters (",", "|", and " "), no punctuation or numbers, and no non-alphanumeric characters (using a str.isalpha() check over both the docs and the paths), and removed extra pipes (||); a sketch of this cleaning step is shown after this list. This time the hypothesis was empty for all validation data points, as in #62.
4. To check whether there is an issue with my setup, I trained the model on the python150k dataset; it trains properly there, so I am assuming it is a dataset-related issue.
5. I have also observed that during the first 1 or 2 epochs there is some text in the predictions, but with more epochs the predictions become empty for all data points.
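Here is a minimal sketch of the cleaning described in step 3, for reference. It only illustrates the str.isalpha() filtering over docstring tokens and pipe-joined context tokens; it is not the exact preprocessing script used here, and the .c2s file handling around it is omitted.

```python
import re

def clean_target(docstring: str) -> str:
    """Turn a raw docstring into a pipe-separated target, keeping only
    purely alphabetic tokens (no punctuation, no numbers)."""
    tokens = [t for t in re.split(r"\W+", docstring) if t.isalpha()]
    return "|".join(tokens)

def clean_subtokens(token: str) -> str:
    """Drop non-alphabetic pieces of a pipe-joined token so that no empty
    pieces (extra pipes, '||') survive."""
    parts = [p for p in token.split("|") if p.isalpha()]
    return "|".join(parts)

# Hypothetical usage on one raw docstring:
print(clean_target("Get default session, or create one with a given config."))
# -> Get|default|session|or|create|one|with|a|given|config
```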
Here are some of the training logs during my experiments.
training-logs-1.txt
training-logs-2(config change).txt
training-logs-3(alnum).txt
Thanks & Regards,
Tamal Mondal