Freezing Trigram Phrase models yields inconsistent results #3326
Comments
This issue also leads to inconsistent results in trigram models saved & reloaded from disk.
Thanks for reporting. Are you interested in figuring out the cause? All code lives in the phrases module, and is fairly straightforward.
Yes, the issue is that phrases are stored as strings joined by the delimiter, so a later pass cannot tell which delimiters were inserted by an earlier one. A hacky workaround is to use a different delimiter for each stage, but then you end up with phrasegram keys like 'chief-executive_officer'. I think the easiest fix is probably to switch to making the phrase keys tuples. I should have time to work on it and put in a PR next week.
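For illustration, a rough sketch of that per-stage delimiter workaround might look like the following; the toy corpus and thresholds are assumptions, not values from the issue:

```python
# Sketch of the per-stage delimiter workaround (toy corpus and thresholds
# are illustrative assumptions). The trigram pass uses '-' so a later split
# can tell the two passes apart, at the cost of mixed keys like
# 'chief-executive_officer'.
from gensim.models.phrases import Phrases

sentences = [["chief", "executive", "officer", "pay"]] * 10

bigram = Phrases(sentences, min_count=1, threshold=0.1, delimiter="_")
trigram = Phrases(bigram[sentences], min_count=1, threshold=0.1, delimiter="-")

print(trigram[bigram[["chief", "executive", "officer", "pay"]]])
# Exact output depends on the corpus; trigram keys mix '-' and '_'.
```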
Looking a bit further into the code, changing the phrase keys to tuples would not be completely trivial, as this is a component of the model that is serialized on save, so some backwards-compatibility code would also be needed. I also noted this comment in the code. Do you have any thoughts on whether I should refactor the
Thanks for looking into this. IIRC we went for strings to save on RAM; tuples introduce a lot of memory overhead. These "phrases" models are memory-hungry by the nature of what they do (but see also #1654). But if
Any idea why? I don't remember how all this works any more :( Can't we simply split on all delimiters?
I guess the fundamental problem is that if you have 'chief_executive_officer' you don't know whether the underlying tokens are 'chief_executive' and 'officer', or 'chief' and 'executive_officer'. You could score on all sub-components (after stripping out connector words), but that would mean new scoring functions that work flexibly on 2 or more words. For example, training a Phrases model over existing bigrams can yield valid bigrams (words newly paired because the previous bigram pass has changed their individual frequencies), trigrams (a bigram paired with a word) and 4-grams (two bigrams paired). The issue with
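A tiny illustration of the ambiguity described in that comment (the merge histories are hypothetical):

```python
# Both hypothetical merge histories collapse to the same delimiter-joined
# key, so splitting on '_' cannot recover which merge happened in which pass.
histories = [
    ["chief_executive", "officer"],   # 'chief executive' merged first
    ["chief", "executive_officer"],   # 'executive officer' merged first
]
for tokens in histories:
    key = "_".join(tokens)
    print(key, "->", key.split("_"))
# Both lines print: chief_executive_officer -> ['chief', 'executive', 'officer']
```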
Problem description
Applying a trigram Phrases model yields different results after freeze().
Code to reproduce
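The original snippet is not preserved in this extract; a minimal sketch along these lines (toy corpus and thresholds are assumptions) compares the full and frozen trigram models:

```python
# Minimal sketch, not the reporter's original script: the toy corpus and
# thresholds are assumptions chosen so that phrases actually form.
from gensim.models.phrases import Phrases

sentences = [
    ["the", "chief", "executive", "officer", "resigned"],
    ["a", "chief", "executive", "officer", "spoke"],
] * 10

# First pass: bigram model; second pass: trigram model over bigrammed text.
bigram = Phrases(sentences, min_count=1, threshold=0.1, delimiter="_")
trigram = Phrases(bigram[sentences], min_count=1, threshold=0.1, delimiter="_")

sentence = ["the", "chief", "executive", "officer", "resigned"]
print(trigram[bigram[sentence]])            # full (trainable) trigram model
print(trigram.freeze()[bigram[sentence]])   # frozen model -- may differ
```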
Output
Further Info
I think the issue lies in export_phrases. When split is called, it cannot distinguish between '_'s added by the bigram model and '_'s added by the trigram model.

Gives:
Versions