Hi Denny, thank you for your interest in our reconstruction work. I am the first author of this paper and will try to answer your questions.
I feel like "adequacy" is a somewhat strange description of what the authors try to optimize. Wouldn't "coverage" be more appropriate?
Adequacy and/or fluency evaluations are regularly employed for assessing the quality of machine translation. Adequacy measures how much of the meaning expressed in the source is also expressed in the target translation. It is well known that NMT favors fluent but inadequate translations, which suffer not only from coverage problems (e.g., over-translation and under-translation) but also from mis-translation (e.g., wrong sense or unusual usage) and spurious translation (i.e., translation segments without any counterpart in the source). So "adequacy" covers more failure modes than "coverage" alone.
In Table 1, why does BLEU score still decrease when length normalization is applied? The authors don't go into detail on this.
As shown in Table 1, likelihood with length normalization favors long translations, which may suffer from over-translation problems and thus lower BLEU.
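To make this concrete, here is a minimal sketch of why dividing the log-likelihood by candidate length can rank a longer, over-translated hypothesis above a shorter, adequate one. The per-token log-probabilities are made-up illustrative numbers, not values from the paper:

```python
# Hypothetical per-token log-probabilities for two beam candidates.
# Repetition loops in NMT often receive high per-token probability,
# which is what makes this failure mode possible.
short_adequate = [-0.5, -0.4, -0.6, -0.5]          # 4 tokens, faithful
long_overtranslated = short_adequate + [-0.4] * 4  # 4 extra repeated tokens

def likelihood(logps):
    return sum(logps)

def length_normalized(logps):
    return sum(logps) / len(logps)

# Plain likelihood pays for every extra token: the short candidate wins.
assert likelihood(short_adequate) > likelihood(long_overtranslated)  # -2.0 > -3.6

# Length normalization averages per token, so confident extra tokens
# stop hurting: the over-translated candidate now wins.
assert length_normalized(long_overtranslated) > length_normalized(short_adequate)  # -0.45 > -0.50
```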
The training curves are a bit confusing/missing. I would've liked to see a standard training curve that shows the MLE objective loss and the finetuning with reconstruction objective side-by-side.
We try to show that the increase in translation performance is indeed due to the improvement of reconstruction over time. We care more about the improvement of translation performance, since that is the ultimate goal of NMT.
The training procedure is somewhat confusing. They say "We further train the model for 10 epochs" with the reconstruction objective, but then "we use a trained model at iteration 110k". I'm assuming they do early stopping at 110k * 80 = 8.8M steps. Again, would've liked to see the loss curves for this, not just BLEU curves.
During training, we validate the translation performance every 10K iterations and select the model that yields the best performance on the validation set. This is the standard procedure for selecting a well-trained NMT model.
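For readers unfamiliar with this convention, a minimal sketch of the selection loop follows. The helpers (`train_steps`, `translate`, `bleu`, `save`) and data variables are hypothetical placeholders, not functions from our codebase:

```python
# Sketch of checkpoint selection by validation BLEU.
VALIDATE_EVERY = 10_000  # validate every 10K training iterations

def select_best_checkpoint(model, valid_src, valid_ref, max_iters=200_000):
    best_bleu, best_ckpt = float("-inf"), None
    for it in range(VALIDATE_EVERY, max_iters + 1, VALIDATE_EVERY):
        train_steps(model, n=VALIDATE_EVERY)           # 10K more iterations
        hyps = [translate(model, s) for s in valid_src]
        score = bleu(hyps, valid_ref)                  # dev-set BLEU
        if score > best_bleu:                          # keep the best model so far
            best_bleu, best_ckpt = score, f"model_iter{it}.npz"
            save(model, best_ckpt)
    return best_ckpt, best_bleu
```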
Again, we care more about the ultimate goal of NMT -- translation performance measured in BLEU scores.
I would've liked to see model performance on more "standard" NMT datasets like EN-FR and EN-DE, etc.

We only test on the Chinese-English translation task, which uses the same data as in our previous works Modeling Coverage for Neural Machine Translation and Context Gates for Neural Machine Translation.
Is there perhaps a smarter way to do reconstruction iteratively, by looking at what's missing from the reconstructed output? Training the reconstructor with MLE has some of the same drawbacks as training a standard enc-dec with MLE and teacher forcing.
Really good point! There should be a better way to model the reconstruction (e.g., focusing only on the wrong parts). We will study this in the future.
MLE favors fluent but inadequate translations and is thus not an optimal objective for NMT. It is necessary to introduce a better objective, such as sentence-level BLEU (Shen et al., 2016), an auxiliary reconstruction objective (this work), or a coverage penalty (as in the GNMT paper).
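For reference, a minimal sketch of how such an auxiliary reconstruction term can be combined with the likelihood objective. The module names (`encoder`, `decoder`, `reconstructor`), shapes, and interpolation weight are illustrative assumptions, not the paper's exact implementation:

```python
# Sketch of a joint objective: translation likelihood plus a weighted
# reconstruction term that tries to decode the source sentence back
# from the decoder's hidden states.
LAMBDA = 1.0  # reconstruction weight, a hyperparameter tuned on dev data

def joint_loss(src, tgt, encoder, decoder, reconstructor):
    enc_states = encoder(src)
    # Decoder returns its hidden states and log P(tgt | src).
    dec_states, logp_translation = decoder(tgt, enc_states)
    # Reconstructor scores log P(src | decoder hidden states).
    logp_reconstruction = reconstructor(src, dec_states)
    # Training maximizes both terms, i.e., minimizes their negative sum.
    return -(logp_translation + LAMBDA * logp_reconstruction)
```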