In a previous PR we adjusted the code in `helper_embedding.py`.
Note the commented-out larger transformer model.
```python
import os

import tensorflow_hub as hub

# DAN model, lighter; the "A" stands for "averaging"; download and unzip
# https://tfhub.dev/google/universal-sentence-encoder/4
model = hub.load(os.path.join(os.getenv('DIR_DATA_EXTERNAL'), 'universal-sentence-encoder_4'))
# Transformer model, more performant; runs on GPU, if available
# model = hub.load('data/external/universal-sentence-encoder-large_5')
```
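For context, once loaded via `hub.load` both variants expose the same callable interface, so swapping one for the other shouldn't require downstream changes (a quick illustration, not code from the repo):

```python
# Both USE variants map a batch of strings to 512-dimensional embeddings
embeddings = model(["How do I renew my passport?"])
print(embeddings.shape)  # (1, 512)
```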
Both @avisionh and I noticed that the recommendations from the downstream model in `04_annoy_recommend_content.py` are a bit rubbish!
This contrasts with those produced by @whojammyflip when using his GPU and the larger model.
The main difference is the version and size of the Universal Sentence Encoder model.
At a minimum we should document this in the script, or change the default behaviour and recommend running on a GPU (in the cloud); the latter would require additional work. A sketch of one possible approach follows below.
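If we did change the default behaviour, one rough sketch (untested; the `USE_LARGE_MODEL` environment variable is hypothetical and not in the repo) would be to gate the model choice on GPU availability:

```python
import os

import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical opt-in flag; only use the heavier transformer when a GPU is present
if os.getenv('USE_LARGE_MODEL') and tf.config.list_physical_devices('GPU'):
    # Transformer-based USE: better recommendations, but slow on CPU
    model = hub.load('data/external/universal-sentence-encoder-large_5')
else:
    # Default DAN model: fast on CPU, but weaker downstream recommendations
    model = hub.load(os.path.join(os.getenv('DIR_DATA_EXTERNAL'), 'universal-sentence-encoder_4'))
```

Keeping the DAN model as the fallback preserves current behaviour for CPU-only users while making the quality trade-off explicit in the code.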