raise ConnectionError: jetson-container offline #590
Hi @malocker, yes, the models should automatically be cached under the shared data directory. Can you check from outside the container whether they actually get downloaded there? You can also try specifying their actual path instead of the Hugging Face repo/model name; then it won't attempt to download them. I think it still checks the repos for updates and compares checksums, so maybe that is what it was complaining about.
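As a quick way to do that sanity check from the host, you can scan the mounted data directory for downloaded weight files. A minimal sketch (the `/data/models` layout is taken from this thread; `find_cached_models` is a hypothetical helper, not part of jetson-containers):

```python
from pathlib import Path
import tempfile

# Common weight-file extensions used by Hugging Face / GGUF checkpoints.
WEIGHT_SUFFIXES = {".safetensors", ".bin", ".gguf"}

def find_cached_models(root):
    """Return the directories under `root` that contain model weight files."""
    root = Path(root)
    if not root.is_dir():
        return []
    return sorted({f.parent for f in root.rglob("*") if f.suffix in WEIGHT_SUFFIXES})

# Demo with a throwaway directory standing in for /data/models:
with tempfile.TemporaryDirectory() as tmp:
    fake = Path(tmp) / "clip" / "ViT-L-14-336px"
    fake.mkdir(parents=True)
    (fake / "model.safetensors").touch()
    print(find_cached_models(tmp))  # one entry: the clip/ViT-L-14-336px directory
```

On a real Jetson you would call `find_cached_models("/data/models")` (or wherever jetson-containers mounts its data volume) and confirm the VILA and CLIP checkpoints show up.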
Hi Dustin, yes, the models exist in the /data/models folder, but it keeps looking to the Hugging Face Hub before starting. When I try to pass the folder location as a parameter (--model /folder/VILA1.5-3b) I get huggingface_hub.errors: repo id must use alphanumeric chars. Where do you specify the path: when running the container, or in one of the config files? Thank you
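For context on that error: a Hub repo id must look like `org/name` with a restricted character set, so an absolute filesystem path fails validation before any download is attempted. A common workaround is to branch on whether the argument is an existing directory before handing it to the loader. A sketch under those assumptions (`resolve_model` is a hypothetical helper; NanoLLM does not ship it):

```python
import os
import re
import tempfile

# Rough approximation of the Hub's "org/name" repo-id shape;
# letters, digits, '_', '.', '-' on each side of a single '/'.
REPO_ID_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def resolve_model(arg):
    """Classify a --model argument as a local directory or a hub repo id."""
    if os.path.isdir(arg):
        return ("local", os.path.abspath(arg))
    if REPO_ID_RE.match(arg):
        return ("hub", arg)
    raise ValueError(f"{arg!r} is neither an existing directory nor a repo id")

print(resolve_model("Efficient-Large-Model/Llama-3-VILA1.5-8B"))
with tempfile.TemporaryDirectory() as tmp:
    print(resolve_model(tmp))
```

The "local" branch is where you would pass the path straight to the model loader instead of letting it go through Hub repo-id validation.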
@malocker try changing the model string to the local path. You will want to edit that source inside the container, or clone an external copy of the NanoLLM sources and mount it into the container like this: https://www.jetson-ai-lab.com/agent_studio.html#dev-mode
If that change isn't effective, keep drilling down and replacing the model string with the path: https://github.com/dusty-nv/NanoDB/blob/f8df95db3ac29098d2957628c8ee1fdd9f12b125/nanodb/nanodb.py#L42
I changed 'ViT-L/14@336px' on this line to the directory of the CLIP model (it should have been downloaded under /data/models/clip): https://github.com/dusty-nv/NanoLLM/blob/28fa5499e40f74c5a36883770584b0bc9fe03e76/nano_llm/agents/video_query.py#L100
Thanks for the update. I made the change in video_query.py with the direct folder and model name, without success for me:

```python
model=None if self.db_share_embed else '/data/models/clip/ViT-L/14@336px',
```

Here is the command I am using to run the container. It works, but only online. Is there any parameter I can pass to force it to look in the models folder instead of downloading? Txs

```shell
jetson-containers run $(autotag nano_llm) \
  python3 -m nano_llm.agents.video_query --api=mlc \
    --model Efficient-Large-Model/Llama-3-VILA1.5-8B \
    --max-context-len 256 --max-new-tokens 32 \
    --video-input /dev/video0 \
    --video-output webrtc://@:8554/output \
    --nanodb /data/nanodb/coco/2017
```
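On the "force it to look in the models folder" question: the Hugging Face libraries honor the `HF_HUB_OFFLINE` and `TRANSFORMERS_OFFLINE` environment variables, which make `from_pretrained` resolve from the local cache only instead of hitting the network. A sketch (how best to pass env vars through `jetson-containers run` is an assumption here; Docker's `-e` flag is the underlying mechanism):

```python
import os

# These must be set before transformers / huggingface_hub are imported,
# e.g. in the shell or via `docker run -e HF_HUB_OFFLINE=1 ...`.
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: never hit the network
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: cache-only resolution

# With these set, a model already in the cache loads without any DNS or
# network access, and a missing one fails fast instead of retrying
# huggingface.co until MaxRetryError.
print(os.environ["HF_HUB_OFFLINE"], os.environ["TRANSFORMERS_OFFLINE"])
```

Note that the cache inside the container has to be on the mounted data volume (as discussed above) for this to survive container restarts.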
Hi Dustin,
Great job on Live Llava 2.0 (VILA + Multimodal NanoDB) for Jetson Orin!
Is it possible to run the jetson-container fully offline instead of downloading from Hugging Face every time?
I tried to commit the container while running, but when I unplug the network cable it fails with this error:

```
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError('HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14-336/resolve/main/model.safetensors (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object
```

Txs
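The traceback shows the CLIP loader retrying huggingface.co even though the weights may already be cached. One defensive pattern is to attempt the online load and fall back to cache-only mode on a network error. A minimal sketch with a stand-in loader (`load_with_offline_fallback` and `fake_load` are hypothetical; real code would call something like `CLIPModel.from_pretrained(name, local_files_only=...)`):

```python
def load_with_offline_fallback(load, name):
    """Try an online load first; on a network error, retry cache-only."""
    try:
        return load(name, local_files_only=False)
    except ConnectionError:
        # Caveat: requests.exceptions.ConnectionError subclasses OSError, not
        # the builtin ConnectionError, so real code should catch the requests
        # exception (or OSError) as well.
        return load(name, local_files_only=True)

# Demo with a fake loader that "fails" unless asked for cache-only files:
def fake_load(name, local_files_only=False):
    if not local_files_only:
        raise ConnectionError("Max retries exceeded with url: ...")
    return f"loaded {name} from cache"

print(load_with_offline_fallback(fake_load, "openai/clip-vit-large-patch14-336"))
# → loaded openai/clip-vit-large-patch14-336 from cache
```

Setting the offline environment variables up front is simpler when you know the machine is disconnected; this fallback shape is for containers that are sometimes online and sometimes not.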