Make pipeline able to load processor (#32514)
```diff
@@ -28,7 +28,9 @@
 from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
 from ..models.auto.image_processing_auto import IMAGE_PROCESSOR_MAPPING, AutoImageProcessor
 from ..models.auto.modeling_auto import AutoModelForDepthEstimation, AutoModelForImageToImage
+from ..models.auto.processing_auto import PROCESSOR_MAPPING, AutoProcessor
 from ..models.auto.tokenization_auto import TOKENIZER_MAPPING, AutoTokenizer
+from ..processing_utils import ProcessorMixin
 from ..tokenization_utils import PreTrainedTokenizer
 from ..utils import (
     CONFIG_NAME,
```
```diff
@@ -556,6 +558,7 @@ def pipeline(
     tokenizer: Optional[Union[str, PreTrainedTokenizer, "PreTrainedTokenizerFast"]] = None,
     feature_extractor: Optional[Union[str, PreTrainedFeatureExtractor]] = None,
     image_processor: Optional[Union[str, BaseImageProcessor]] = None,
+    processor: Optional[Union[str, ProcessorMixin]] = None,
     framework: Optional[str] = None,
     revision: Optional[str] = None,
     use_fast: bool = True,
```
```diff
@@ -571,11 +574,19 @@ def pipeline(
     """
     Utility factory method to build a [`Pipeline`].

-    Pipelines are made of:
+    A pipeline consists of:

-        - A [tokenizer](tokenizer) in charge of mapping raw textual input to token.
-        - A [model](model) to make predictions from the inputs.
-        - Some (optional) post processing for enhancing model's output.
+        - One or more components for pre-processing model inputs, such as a [tokenizer](tokenizer),
+          [image_processor](image_processor), [feature_extractor](feature_extractor), or [processor](processors).
+        - A [model](model) that generates predictions from the inputs.
+        - Optional post-processing steps to refine the model's output, which can also be handled by processors.

+    <Tip>
+    While there are such optional arguments as `tokenizer`, `feature_extractor`, `image_processor`, and `processor`,
+    they shouldn't be specified all at once. If these components are not provided, `pipeline` will try to load
+    required ones automatically. In case you want to provide these components explicitly, please refer to a
+    specific pipeline in order to get more details regarding what components are required.
+    </Tip>

     Args:
         task (`str`):
```
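The Tip added above says the four preprocessing arguments should not all be supplied at once. As an illustration only, the constraint could be expressed as a small guard; this helper is hypothetical and is not part of the PR or of transformers:

```python
# Hypothetical guard illustrating the Tip above: refuse calls that pass
# every preprocessing component explicitly. Not part of the PR.
def given_preprocessors(tokenizer=None, feature_extractor=None,
                        image_processor=None, processor=None):
    """Return the names of explicitly supplied components, refusing all four."""
    supplied = [name for name, value in (
        ("tokenizer", tokenizer),
        ("feature_extractor", feature_extractor),
        ("image_processor", image_processor),
        ("processor", processor),
    ) if value is not None]
    if len(supplied) == 4:
        raise ValueError("Do not specify tokenizer, feature_extractor, "
                         "image_processor and processor all at once.")
    return supplied
```

Each concrete pipeline documents which subset it actually needs, so a real check would be per-pipeline rather than a blanket rule like this.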
```diff
@@ -644,6 +655,25 @@ def pipeline(
         `model` is not specified or not a string, then the default feature extractor for `config` is loaded (if it
         is a string). However, if `config` is also not given or not a string, then the default feature extractor
         for the given `task` will be loaded.
+        image_processor (`str` or [`BaseImageProcessor`], *optional*):
+            The image processor that will be used by the pipeline to preprocess images for the model. This can be a
+            model identifier or an actual image processor inheriting from [`BaseImageProcessor`].
+
+            Image processors are used for Vision models and multi-modal models that require image inputs. Multi-modal
+            models will also require a tokenizer to be passed.
+
+            If not provided, the default image processor for the given `model` will be loaded (if it is a string). If
+            `model` is not specified or not a string, then the default image processor for `config` is loaded (if it is
+            a string).
+        processor (`str` or [`ProcessorMixin`], *optional*):
+            The processor that will be used by the pipeline to preprocess data for the model. This can be a model
+            identifier or an actual processor inheriting from [`ProcessorMixin`].
+
+            Processors are used for multi-modal models that require multi-modal inputs, for example, a model that
+            requires both text and image inputs.
+
+            If not provided, the default processor for the given `model` will be loaded (if it is a string). If `model`
+            is not specified or not a string, then the default processor for `config` is loaded (if it is a string).
         framework (`str`, *optional*):
             The framework to use, either `"pt"` for PyTorch or `"tf"` for TensorFlow. The specified framework must be
             installed.
```
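The docstrings above all describe the same fallback order for a missing component: an explicit argument wins, then a string `model` identifier, then a string `config` identifier. A self-contained sketch of that order (hypothetical helper, not code from the diff):

```python
from typing import Optional, Union

def resolve_source(explicit: Optional[Union[str, object]],
                   model: Optional[Union[str, object]],
                   config: Optional[Union[str, object]]) -> Optional[object]:
    """Pick where to load a component from, per the documented fallback order."""
    if explicit is not None:
        return explicit          # an instance or identifier passed by the user
    if isinstance(model, str):
        return model             # fall back to the model identifier
    if isinstance(config, str):
        return config            # then to the config identifier
    return None                  # nothing to infer from
```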
```diff
@@ -905,13 +935,17 @@ def pipeline(

     model_config = model.config
     hub_kwargs["_commit_hash"] = model.config._commit_hash
-    load_tokenizer = type(model_config) in TOKENIZER_MAPPING or model_config.tokenizer_class is not None
+    load_tokenizer = (
+        type(model_config) in TOKENIZER_MAPPING
+        or model_config.tokenizer_class is not None
+        or isinstance(tokenizer, str)
+    )
     load_feature_extractor = type(model_config) in FEATURE_EXTRACTOR_MAPPING or feature_extractor is not None
     load_image_processor = type(model_config) in IMAGE_PROCESSOR_MAPPING or image_processor is not None
+    load_processor = type(model_config) in PROCESSOR_MAPPING or processor is not None

     # Check that pipeline class required loading
     load_tokenizer = load_tokenizer and pipeline_class._load_tokenizer
     load_feature_extractor = load_feature_extractor and pipeline_class._load_feature_extractor
     load_image_processor = load_image_processor and pipeline_class._load_image_processor
+    load_processor = load_processor and pipeline_class._load_processor

     # If `model` (instance of `PretrainedModel` instead of `str`) is passed (and/or same for config), while
     # `image_processor` or `feature_extractor` is `None`, the loading will fail. This happens particularly for some
```

> **Review comment (on lines +944 to +948):** For backward compatibility, we can control with
>
> **Reply:** Piggy-backing on the comment above, this is likely something we want to highlight very clearly in each pipeline's documentation.
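The hunk above gates each component in two stages: a component is wanted if the model's config type is registered for it or it was passed explicitly, and it is loaded only if the pipeline class also opts in via a `_load_*` class attribute. That logic can be sketched with stand-in types (`DummyConfig`, `DummyPipeline`, and this `PROCESSOR_MAPPING` are illustrative stand-ins, not the transformers objects):

```python
class DummyConfig:
    """Stand-in for a model config whose type is registered for processors."""

# Stand-in for transformers' real config-type -> processor mapping.
PROCESSOR_MAPPING = {DummyConfig}

class DummyPipeline:
    _load_processor = True  # the pipeline class opts in to processor loading

def should_load_processor(model_config, processor, pipeline_class):
    # Stage 1: the config type is registered, or a processor was passed explicitly.
    wanted = type(model_config) in PROCESSOR_MAPPING or processor is not None
    # Stage 2: the pipeline class must also allow loading a processor.
    return wanted and pipeline_class._load_processor
```

The same two-stage shape applies to the tokenizer, feature extractor, and image processor flags.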
```diff
@@ -1074,6 +1108,31 @@ def pipeline(
         if not is_pyctcdecode_available():
             logger.warning("Try to install `pyctcdecode`: `pip install pyctcdecode")

+    if load_processor:
+        # Try to infer processor from model or config name (if provided as str)
+        if processor is None:
+            if isinstance(model_name, str):
+                processor = model_name
+            elif isinstance(config, str):
+                processor = config
+            else:
+                # Impossible to guess what is the right processor here
+                raise Exception(
+                    "Impossible to guess which processor to use. "
+                    "Please provide a processor instance or a path/identifier "
+                    "to a processor."
+                )
+
+        # Instantiate processor if needed
+        if isinstance(processor, (str, tuple)):
+            processor = AutoProcessor.from_pretrained(processor, _from_pipeline=task, **hub_kwargs, **model_kwargs)
+        if not isinstance(processor, ProcessorMixin):
+            raise TypeError(
+                "Processor was loaded, but it is not an instance of `ProcessorMixin`. "
+                f"Got type `{type(processor)}` instead. Please check that you specified "
+                "correct pipeline task for the model and model has processor implemented and saved."
+            )
+
     if task == "translation" and model.config.task_specific_params:
         for key in model.config.task_specific_params:
             if key.startswith("translation"):
```
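The control flow added in this hunk has three steps: infer an identifier when no processor was given, instantiate via `AutoProcessor.from_pretrained`, then verify the result's type. The same shape can be mirrored in isolation with stand-ins; `TinyProcessor` and the fake `AutoProcessor` below are illustrative only, not the transformers classes:

```python
class ProcessorMixin:
    """Stand-in for transformers' ProcessorMixin base class."""

class TinyProcessor(ProcessorMixin):
    """Stand-in processor returned by the fake AutoProcessor below."""

class AutoProcessor:
    @staticmethod
    def from_pretrained(identifier):
        # Pretend every identifier resolves to a valid processor.
        return TinyProcessor()

def load_processor(processor=None, model_name=None, config=None):
    # Step 1: infer an identifier when no processor was given, as in the diff.
    if processor is None:
        if isinstance(model_name, str):
            processor = model_name
        elif isinstance(config, str):
            processor = config
        else:
            raise ValueError("Impossible to guess which processor to use.")
    # Step 2: instantiate from an identifier.
    if isinstance(processor, (str, tuple)):
        processor = AutoProcessor.from_pretrained(processor)
    # Step 3: verify the resulting type before handing it to the pipeline.
    if not isinstance(processor, ProcessorMixin):
        raise TypeError("Loaded object is not a ProcessorMixin.")
    return processor
```

The final `isinstance` check matters because `from_pretrained` on an arbitrary repo can resolve to something other than a processor; failing early with a `TypeError` gives a clearer signal than a downstream attribute error.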
@@ -1099,4 +1158,7 @@ def pipeline( | |
if device is not None: | ||
kwargs["device"] = device | ||
|
||
if processor is not None: | ||
kwargs["processor"] = processor | ||
|
||
return pipeline_class(model=model, framework=framework, task=task, **kwargs) |
> **Review comment:** There are now a few overlapping inputs. I believe it would be nice to highlight somewhere visible (like in the documentation above) which attribute is necessary for what: at no point should a user specify all four of them, for example.
>
> **Reply:** I added a separate Note section to highlight that we should not provide all types of processors at once (e712717), and refer to a specific pipeline in case one would like to provide them explicitly. For each specific pipeline, only the required processor args are configured in the docs section via the docs decorator, e.g. `transformers/src/transformers/pipelines/image_classification.py`, line 53 in e782e95.
>
> **Reply:** Also, updated the pipeline doc to a more relevant one in 6996695.