Yes, that’s a good solution! We can generate and send the representation and
tokens; ideally, we send them as a single event.
An alternative would be to surface a language detection plus tokenization
endpoint outside of the paywall. That would let us package the data with the
original message as normal, making delivery more efficient and faster for the
recipient. In that case, we’d be assuming a language learner will eventually
receive the message.
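To make the “one event” idea concrete, here’s a rough sketch of how the client could call a combined detect-plus-tokenize endpoint and attach the result to the outgoing message. The endpoint path, payload, and field names here are assumptions for illustration, not the real API:

```ts
// Hypothetical shapes; the endpoint path and field names are illustrative only.
interface DetectAndTokenizeResponse {
  langCode: string;        // detected language, e.g. "es"
  representation: string;  // normalized text used for tokenization
  tokens: string[];        // tokens derived from the representation
}

interface OutgoingMessageEvent {
  messageId: string;
  originalSent: string;
  langCode?: string;
  representation?: string;
  tokens?: string[];
}

// Build the single event: call the (assumed) unauthenticated detect+tokenize
// endpoint and attach its output to the message before sending.
async function buildMessageEvent(messageId: string, text: string): Promise<OutgoingMessageEvent> {
  const event: OutgoingMessageEvent = { messageId, originalSent: text };
  try {
    const res = await fetch("/api/detect-and-tokenize", {  // hypothetical path
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text }),
    });
    if (res.ok) {
      const data: DetectAndTokenizeResponse = await res.json();
      event.langCode = data.langCode;
      event.representation = data.representation;
      event.tokens = data.tokens;
    }
  } catch {
    // If detection fails, send the bare message; the recipient-side fallback
    // discussed below can still fill in the representation later.
  }
  return event;
}
```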
-
Related issue: #1723
Some messages do not have an originalSent representation or a langCode. The main cause is messages sent by unsubscribed users: we don't run grammar_lite for these messages, so their langCode is never detected.
Because of this, when another (subscribed) user opens the toolbar for one of these messages, they can't click on tokens. Tokens cannot be generated because there's no representation to generate them from, and a new representation cannot be generated because the language code is unknown.
We could run language detection in this case, assuming the language detection endpoint still works (it isn't currently used by the client). That would allow us to generate a representation with tokens, as sketched below.
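A rough sketch of what that fallback could look like when a subscribed user opens the toolbar for such a message; the endpoint paths and field names are assumptions, not the real API:

```ts
// Hypothetical fallback for messages missing originalSent/langCode (e.g. sent
// by unsubscribed users). Endpoint paths and field names are assumptions.
interface StoredMessage {
  id: string;
  text: string;
  langCode?: string;
  representation?: string;
  tokens?: string[];
}

async function ensureRepresentation(msg: StoredMessage): Promise<StoredMessage> {
  if (msg.representation && msg.langCode) return msg; // nothing to do

  // 1. Detect the language (assumes the existing detection endpoint still works).
  const detectRes = await fetch("/api/detect-language", {  // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: msg.text }),
  });
  if (!detectRes.ok) return msg; // leave untouched; toolbar stays disabled

  const { langCode } = await detectRes.json();

  // 2. Generate a representation plus tokens now that the language is known.
  const repRes = await fetch("/api/representations", {     // hypothetical path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messageId: msg.id, text: msg.text, langCode }),
  });
  if (!repRes.ok) return msg;

  const { representation, tokens } = await repRes.json();
  return { ...msg, langCode, representation, tokens };
}
```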
Do you have any thoughts on this, @wcjord?