Hi,

I have tried writing the dictionary to a gensim word2vec-format file and then loading that, but it throws an error about converting a string to a float. Is there a standard way to load a custom pre-trained embedding (not created through gensim) that exists as a Python dictionary using torchtext?

Thanks,
The format of your custom_embeddings.txt file needs to be the token followed by the value of each dimension of its embedding, all separated by single spaces. For example, here are three tokens with 20-dimensional embeddings (all ones, purely as an example):
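```
myword1 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
myword2 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
myword3 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
```

Once the dictionary has been written out in that layout, it can be read back with torchtext's generic `Vectors` class. Here is a minimal sketch, assuming the legacy `torchtext.vocab.Vectors` API; `embedding_dict`, the token names, and `custom_embeddings.txt` are placeholder names for this example, not anything from your actual code:

```python
from torchtext.vocab import Vectors

# Hypothetical pre-trained embeddings held as a plain Python dictionary:
# token -> list of floats. Names and values here are just for illustration.
embedding_dict = {
    "myword1": [1.0] * 20,
    "myword2": [1.0] * 20,
    "myword3": [1.0] * 20,
}

# Write each token and its values on one line, separated by single spaces.
# Note there is no word2vec-style "<vocab_size> <dim>" header line; a header
# like that (or a binary word2vec file) may be what triggers the
# string-to-float conversion error when loaded as plain text.
with open("custom_embeddings.txt", "w", encoding="utf-8") as f:
    for token, vector in embedding_dict.items():
        values = " ".join(str(v) for v in vector)
        f.write(f"{token} {values}\n")

# Load the file with torchtext's generic Vectors class; the cache directory
# is where torchtext stores its serialized .pt copy of the vectors.
custom_embeddings = Vectors(name="custom_embeddings.txt", cache=".")

print(custom_embeddings.dim)               # 20
print(custom_embeddings["myword1"].shape)  # torch.Size([20])
```

These loaded vectors can then be passed wherever torchtext expects pre-trained vectors, e.g. when building a field's vocabulary with `build_vocab(..., vectors=custom_embeddings)`.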