
remove the need for the config to be in the subfolder #2044

Merged: 8 commits merged into main from fix on Oct 10, 2024
Conversation

echarlaix (Collaborator)

No description provided.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@@ -380,27 +381,24 @@ def from_pretrained(
)
model_id, revision = model_id.split("@")

config_folder = (
subfolder if find_files_matching_pattern(model_id, cls.config_name)[0].parent == subfolder else ""

@tomaarsen (Member), Oct 9, 2024

find_files_matching_pattern may return an empty list, causing a crash here

(Edit: I now see this PR is still in draft mode, oops)
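The crash scenario can be sketched with a small standalone helper (names here are illustrative, not the actual optimum API; `matching_files` stands in for the result of find_files_matching_pattern(model_id, cls.config_name)):

```python
from pathlib import Path

# Hypothetical sketch of the guard being suggested: indexing [0] on an empty
# match list would raise IndexError, so fail with a clearer error instead.
def resolve_config_folder(matching_files: list[Path], subfolder: str, config_name: str) -> str:
    if not matching_files:
        raise FileNotFoundError(f"{config_name} not found in the repository")
    # Fall back to the repo root when the config is not in the requested subfolder
    return subfolder if str(matching_files[0].parent) == subfolder else ""

print(resolve_config_folder([Path("openvino/config.json")], "openvino", "config.json"))  # openvino
print(resolve_config_folder([Path("config.json")], "openvino", "config.json"))           # "" (repo root)
```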

@tomaarsen (Member)

I did some testing with this branch and wrote about it here: huggingface/optimum-intel#923 (comment)


@echarlaix echarlaix marked this pull request as ready for review October 9, 2024 14:31
@IlyasMoutawwakil (Member) left a comment

LGTM!

@tomaarsen (Member) commented Oct 9, 2024

@echarlaix

Edit: This seems to be related to the integration between optimum-intel and this PR!

I'm testing some more, and I think this is resulting in some issues with local models (a clone of this model), although perhaps the issue could be from optimum-intel as well. Here's a way to reproduce:

import tempfile
from pathlib import Path

from huggingface_hub import HfApi
from optimum.intel import OVModelForFeatureExtraction

model_id = "sentence-transformers-testing/stsb-bert-tiny-openvino"

# hub model
ov_model = OVModelForFeatureExtraction.from_pretrained(model_id, subfolder="openvino", file_name="openvino_model.xml")
print("Hub model:", ov_model)

# local model
api = HfApi()
with tempfile.TemporaryDirectory() as tmpdirname:
    local_dir = Path(tmpdirname) / "model"
    api.snapshot_download(repo_id=model_id, local_dir=local_dir)

    ov_model = OVModelForFeatureExtraction.from_pretrained(local_dir, subfolder="openvino", file_name="openvino_model.xml")
    print("Local model:", ov_model)

This was run with optimum at the fix branch and optimum-intel at main as of writing.

Outputs:

config.json not found in the specified subfolder openvino. Using the top level config.json.
Compiling the model to CPU ...
Hub model: <optimum.intel.openvino.modeling.OVModelForFeatureExtraction object at 0x0000028403872410>
config.json not found in the specified subfolder openvino. Using the top level config.json.
Traceback (most recent call last):
  File "c:\code\sentence-transformers\demo_optimum_intel_crash.py", line 19, in <module>
    ov_model = OVModelForFeatureExtraction.from_pretrained(local_dir, subfolder="openvino", file_name="openvino_model.xml")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tom\.conda\envs\sentence-transformers\Lib\site-packages\optimum\intel\openvino\modeling_base.py", line 465, in from_pretrained
    return super().from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tom\.conda\envs\sentence-transformers\Lib\site-packages\optimum\modeling_base.py", line 436, in from_pretrained
    return from_pretrained_method(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tom\.conda\envs\sentence-transformers\Lib\site-packages\optimum\intel\openvino\modeling_base.py", line 381, in _from_pretrained
    model = cls.load_model(model_cache_path, quantization_config=quantization_config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tom\.conda\envs\sentence-transformers\Lib\site-packages\optimum\intel\openvino\modeling_base.py", line 232, in load_model
    core.read_model(file_name.resolve(), file_name.with_suffix(".bin").resolve())
  File "C:\Users\tom\.conda\envs\sentence-transformers\Lib\site-packages\openvino\runtime\ie_api.py", line 500, in read_model
    return Model(super().read_model(model, weights))
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Exception from src/inference/src/cpp/core.cpp:90:
Check 'util::directory_exists(path) || util::file_exists(path)' failed at src/frontends/common/src/frontend.cpp:117:
FrontEnd API failed with GeneralFailure:
ir: Could not open the file: "C:\Users\tom\AppData\Local\Temp\tmpqsvchnyg\model\openvino_model.xml"

Note that it's trying to load "C:\Users\tom\AppData\Local\Temp\tmpqsvchnyg\model\openvino_model.xml", whereas the model is actually in "C:\Users\tom\AppData\Local\Temp\tmpqsvchnyg\model\openvino\openvino_model.xml", i.e. the openvino subfolder.

The issue is only for local models, as remote loading still seems to work fine.

Edit: I believe the issue originates here:
https://github.com/huggingface/optimum-intel/blob/41d93a17ea4dfd0880f80ac25925e51e3622ac20/optimum/intel/openvino/modeling_base.py#L515
Which might have to be:

            if subfolder:
                model_cache_path = model_path / subfolder / file_name
            else:
                model_cache_path = model_path / file_name

Although I'm not really sure why this didn't pop up earlier.
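The suggested branch can be sketched as a standalone helper (the function name is illustrative, not the actual optimum-intel API), showing the path each branch produces:

```python
from pathlib import Path

# Illustrative sketch of the suggested fix: only insert the subfolder
# segment into the cache path when a subfolder was actually requested.
def resolve_model_cache_path(model_path: Path, subfolder: str, file_name: str) -> Path:
    if subfolder:
        return model_path / subfolder / file_name
    return model_path / file_name

print(resolve_model_cache_path(Path("model"), "openvino", "openvino_model.xml"))
print(resolve_model_cache_path(Path("model"), "", "openvino_model.xml"))
```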


@echarlaix (Collaborator, Author)

Thanks for taking the time to test it @tomaarsen. The issue here does indeed seem to come from the fact that subfolder is ignored for local models; a fix can be added in _cached_file along the lines of what you're suggesting. This is not directly related to this PR but can be fixed in a follow-up PR. Let me know if you'd like to take care of it!

@tomaarsen (Member)

I created a quick PR for OpenVINO on the optimum-intel side: huggingface/optimum-intel#933
As far as I can tell, everything should be working now: remote, local, ONNX & OpenVINO. I do have one small nit, see below


Comment on lines +393 to +395
logger.info(
    f"{cls.config_name} not found in the specified subfolder {subfolder}. Using the top level {cls.config_name}."
)
@tomaarsen (Member), Oct 9, 2024

Edit: Nevermind, I think we should remove the logger.setLevel(logging.INFO) calls that exist in two places in optimum. Otherwise I always see, e.g.:

Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
    - default: The default ONNX variant.

***** Exporting submodel 1/1: MPNetModel *****
Using framework PyTorch: 2.5.0.dev20240807+cu121
config.json not found in the specified subfolder onnx. Using the top level config.json.
Compiling the model to CPU ...
See the original comment here:

Suggested change:

-    logger.info(
-        f"{cls.config_name} not found in the specified subfolder {subfolder}. Using the top level {cls.config_name}."
-    )
+    logger.debug(
+        f"{cls.config_name} not found in the specified subfolder {subfolder}. Using the top level {cls.config_name}."
+    )

Any chance I can convince you to reduce this to debug?

I recognize that it's taken from

logger.info(
    f"config.json not found in the specified subfolder {subfolder}. Using the top level config.json."
)

where info is used, but I feel like the setting is a bit different. This new info will be triggered every time someone loads an ONNX or OpenVINO model with Sentence Transformers (as those automatically save their modeling files in subfolders away from the configuration).

Another reason that I'd like it to be reduced to debug is because (for some reason?) the info-level logs are shown by default in Optimum. Or do I just have a weird/non-normal setup? I don't think I'm setting set_verbosity_info() or TRANSFORMERS_VERBOSITY.

<Logger optimum (INFO)>
<Logger transformers (WARNING)>
<Logger accelerate (WARNING)>

I'm usually getting quite a lot of logs from optimum already.
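As a user-side workaround in the meantime (assuming the library logger is named "optimum", as the <Logger optimum (INFO)> repr above suggests), the verbosity can be lowered with the standard logging module:

```python
import logging

# Raise the "optimum" logger's threshold so info-level messages (such as the
# "config.json not found in the specified subfolder" message) are suppressed.
logging.getLogger("optimum").setLevel(logging.WARNING)
print(logging.getLogger("optimum").getEffectiveLevel())  # 30 (WARNING)
```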

@tomaarsen (Member)

The TemporaryDirectory not cooperating on Windows is such an annoying bug. I wrote this workaround for it: https://github.com/UKPLab/sentence-transformers/blob/master/tests/utils.py

@echarlaix echarlaix merged commit 1b5a63d into main Oct 10, 2024
44 of 48 checks passed
@echarlaix echarlaix deleted the fix branch October 10, 2024 09:52