
bug: Dropping GGUF metadata when creating model.yaml for Model Downloaded via URL #3558

Closed
Van-QA opened this issue Sep 5, 2024 · 4 comments · Fixed by #3725
Assignees
Labels
category: model hub (Built in models, latest HF models page) · P0: critical (Mission critical) · type: bug (Something isn't working)

Comments

@Van-QA
Contributor

Van-QA commented Sep 5, 2024

Describe the bug
When downloading an HF GGUF model via URL import from the model hub, the model uses default settings instead of the correct ones: the context length is fixed at 2048, the prompt template is incorrect, and the stop word is wrong. Importing the GGUF file directly does not exhibit these issues.

Steps to reproduce

  1. Download the HF GGUF model from the model hub using URL import.
  2. Check the settings for context length, prompt template, and stop word.
  3. Compare the settings with the correct settings for the GGUF model.

Expected behavior
The HF GGUF model downloaded via URL import should have the correct settings for context length, prompt template, and stop word, matching the settings when GGUF is imported directly.

Additional context
The issue seems to be specific to downloading the HF GGUF model via URL import from the model hub. Importing GGUF directly does not exhibit the same issue with default settings.

Comparing the two model.json files; the GGUF direct import is on the right:
(screenshot)

@Van-QA Van-QA added the type: bug Something isn't working label Sep 5, 2024
@Van-QA Van-QA changed the title bug: [DESCRIPTION] bug: Incorrect Settings for HF GGUF Model Downloaded via URL Import Sep 5, 2024
@Van-QA Van-QA moved this to Planning in Jan & Cortex Sep 5, 2024
@0xSage
Contributor

0xSage commented Sep 6, 2024

@Van-QA what do you mean by "correct settings"?

There are a few sources of settings, in order of increasing priority:

  1. Defaults for the Jan application (determined by us as a fallback)
  2. Settings from the GGUF binary metadata (determined by the original model author)
  3. Settings we manually set in model.yamls (which should override the above)
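The precedence above amounts to a layered merge, where each later source overrides the earlier ones. A minimal sketch (the function, key names, and values here are hypothetical illustrations, not Jan's actual implementation):

```python
def resolve_model_settings(app_defaults, gguf_metadata, model_yaml):
    """Merge setting layers; later (higher-priority) sources win."""
    settings = {}
    for layer in (app_defaults, gguf_metadata, model_yaml):
        # Skip keys a layer leaves unset so they don't mask lower layers.
        settings.update({k: v for k, v in layer.items() if v is not None})
    return settings

# Illustration of the reported bug: URL import effectively passed an
# empty gguf_metadata layer, so app defaults (e.g. ctx_len 2048) won.
app_defaults = {"ctx_len": 2048, "prompt_template": "{prompt}", "stop": []}
gguf_metadata = {"ctx_len": 8192, "prompt_template": "<|user|>{prompt}<|assistant|>"}
model_yaml = {"stop": ["<|end|>"]}

resolved = resolve_model_settings(app_defaults, gguf_metadata, model_yaml)
# ctx_len and prompt_template come from GGUF metadata; stop from model.yaml
```

With the fix, the URL-import path should populate the middle layer from the downloaded file's GGUF metadata rather than leaving it empty.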

@0xSage 0xSage moved this from Planning to Need Investigation in Jan & Cortex Sep 6, 2024
@Van-QA Van-QA assigned louis-jan and unassigned Van-QA Sep 6, 2024
@Van-QA Van-QA moved this from Need Investigation to Planning in Jan & Cortex Sep 6, 2024
@Van-QA
Contributor Author

Van-QA commented Sep 6, 2024

hi @0xSage, currently, when importing from an HF URL, Jan only applies No. 1 (default settings) without considering No. 2 (author settings)

@0xSage 0xSage changed the title bug: Incorrect Settings for HF GGUF Model Downloaded via URL Import bug: Dropping GGUF metadata when creating model.yaml for Model Downloaded via URL Sep 6, 2024
@imtuyethan imtuyethan added the P1: important Important feature / fix label Sep 18, 2024
@imtuyethan imtuyethan added this to the v0.5.5 milestone Sep 18, 2024
@imtuyethan imtuyethan moved this from Planning to Scheduled in Jan & Cortex Sep 18, 2024
@imtuyethan imtuyethan added P0: critical Mission critical category: model running category: model hub Built in models, latest HF models page and removed P1: important Important feature / fix labels Sep 18, 2024
@imtuyethan imtuyethan moved this from Review + QA to Completed in Jan & Cortex Oct 1, 2024
@imtuyethan
Contributor

LGTM

(screenshot, 2024-10-01 8:02 PM)
