Chameleon: add model #31534

Merged · 86 commits · Jul 17, 2024
Changes from 72 commits
Commits
e536f6a
Merge pull request #9 from huggingface/update
molbap Mar 4, 2024
ef8c0fb
Merge branch 'main' of github.com:huggingface/new-model-addition
ArthurZucker Mar 30, 2024
387e5f5
Chameleon model integration
jacobkahn Feb 15, 2024
cd8f271
fix 7B, again. mask away image tokens
Mar 26, 2024
63b819f
Apply suggestions from code review
gante Apr 1, 2024
7bde0ce
remove pretrained_config_map
gante Apr 1, 2024
f7aff2c
make fixup passing up to utils/check_config_docstrings.py; vqgan move…
gante Apr 1, 2024
cfa92be
remove tokenizer (use llama's); remove codechameleon tests
gante Apr 1, 2024
9d21d04
a few copied from statements and minor changes
gante Apr 1, 2024
84ff660
copied from in ChameleonModel
gante Apr 1, 2024
91631ae
some copies in ChameleonForCausalLM
gante Apr 1, 2024
13390fa
a few more copies
gante Apr 1, 2024
60297d8
VQModel moved to ChameleonModel (as opposed to being in the processor)
gante Apr 2, 2024
61bc3f3
ChameleonProcessor ready
gante Apr 2, 2024
c899136
Merge branch 'main' of github.com:huggingface/transformers into main
ArthurZucker Apr 17, 2024
23b12a3
Merge branch 'main' of github.com:huggingface/transformers
ArthurZucker May 9, 2024
4e4a957
Merge branch 'main' of github.com:huggingface/new-model-addition
ArthurZucker May 9, 2024
23966c3
Merge remote-tracking branch 'origin' into dev
zucchini-nlp Jun 13, 2024
b2bed85
Fix chameleon weights convert
Jun 18, 2024
627ec7a
update conversion script
zucchini-nlp Jun 21, 2024
ae2f23b
clean-up processing
zucchini-nlp Jun 21, 2024
74c1454
update modeling a bit
zucchini-nlp Jun 21, 2024
a4a5a0a
Merge remote-tracking branch 'upstream/main' into dev
zucchini-nlp Jun 21, 2024
c94a245
update
zucchini-nlp Jun 21, 2024
cd182dc
update (throws error...)
zucchini-nlp Jun 21, 2024
371243e
correct conversion ready
zucchini-nlp Jun 24, 2024
6e5b207
fix tests
zucchini-nlp Jun 24, 2024
98d9fbf
fix docs
zucchini-nlp Jun 24, 2024
e1e7227
docs
zucchini-nlp Jun 24, 2024
a359ba9
ve swin norm
zucchini-nlp Jun 24, 2024
480a1f5
fix device for vocab map
zucchini-nlp Jun 24, 2024
155a3d3
add normalization
zucchini-nlp Jun 24, 2024
8e60f9e
update
zucchini-nlp Jun 24, 2024
4f7507f
update script with rope rotations
zucchini-nlp Jun 26, 2024
6d8f410
final fix on model conversion
zucchini-nlp Jun 26, 2024
73eff54
add slow tests
zucchini-nlp Jun 26, 2024
1b95699
more info in docs
zucchini-nlp Jun 26, 2024
187003e
fix repo consistency tests
zucchini-nlp Jun 26, 2024
49aac09
fix repo tests
zucchini-nlp Jun 26, 2024
37a5225
fix-copies
zucchini-nlp Jun 26, 2024
21d2b1d
hope this will make CI happy
zucchini-nlp Jun 26, 2024
29af48a
fix for 30b model
zucchini-nlp Jun 27, 2024
139d326
Update docs/source/en/index.md
zucchini-nlp Jun 27, 2024
9e6fd3a
Update docs/source/en/model_doc/chameleon.md
zucchini-nlp Jun 27, 2024
c2a69cb
Update src/transformers/models/chameleon/modeling_chameleon.py
zucchini-nlp Jun 27, 2024
6ce8c58
Update docs/source/en/model_doc/chameleon.md
zucchini-nlp Jun 27, 2024
64fdc93
Update docs/source/en/model_doc/chameleon.md
zucchini-nlp Jun 27, 2024
1ddb564
Update docs/source/en/model_doc/chameleon.md
zucchini-nlp Jun 27, 2024
d9ca009
Update docs/source/en/model_doc/chameleon.md
zucchini-nlp Jun 27, 2024
b8c7669
Update src/transformers/models/auto/configuration_auto.py
zucchini-nlp Jun 27, 2024
43b99af
Update src/transformers/models/chameleon/image_processing_chameleon.py
zucchini-nlp Jun 27, 2024
5a23713
Update src/transformers/models/chameleon/image_processing_chameleon.py
zucchini-nlp Jun 27, 2024
76223cf
Update src/transformers/models/chameleon/image_processing_chameleon.py
zucchini-nlp Jun 27, 2024
1f9eddc
Update src/transformers/models/chameleon/image_processing_chameleon.py
zucchini-nlp Jun 27, 2024
54cfa90
Update src/transformers/models/chameleon/modeling_chameleon.py
zucchini-nlp Jun 27, 2024
6577be7
Update src/transformers/models/chameleon/processing_chameleon.py
zucchini-nlp Jun 27, 2024
8aba509
Update src/transformers/models/chameleon/processing_chameleon.py
zucchini-nlp Jun 27, 2024
143c068
Update tests/models/chameleon/test_modeling_chameleon.py
zucchini-nlp Jun 27, 2024
d8e91ad
Update tests/models/chameleon/test_modeling_chameleon.py
zucchini-nlp Jun 27, 2024
2bd006f
Update tests/models/chameleon/test_modeling_chameleon.py
zucchini-nlp Jun 27, 2024
ff2f3b1
address comments
zucchini-nlp Jun 27, 2024
ea95c5c
remove assertion in conversion script
zucchini-nlp Jun 27, 2024
dbbc48f
add image processor test
zucchini-nlp Jun 27, 2024
16bdcdc
not copied
zucchini-nlp Jun 27, 2024
a14c5ef
port changes for qk layernorm
zucchini-nlp Jul 9, 2024
46e72d0
Merge remote-tracking branch 'upstream/main' into dev
zucchini-nlp Jul 9, 2024
1c5c729
fix-copies
zucchini-nlp Jul 9, 2024
2813c82
read token decorator for tests
zucchini-nlp Jul 10, 2024
719ff2a
[run-slow] chameleon
zucchini-nlp Jul 10, 2024
f5438b3
one more read-token
zucchini-nlp Jul 10, 2024
473b55e
address some comments
zucchini-nlp Jul 11, 2024
fa669c3
qk norm changes
zucchini-nlp Jul 11, 2024
c9b392e
tests and repo check
zucchini-nlp Jul 11, 2024
f636865
moved rope permutations to conversion, YAY!
zucchini-nlp Jul 11, 2024
86f9c86
fix past kv check
zucchini-nlp Jul 12, 2024
44706af
docs
zucchini-nlp Jul 12, 2024
65b1596
Merge remote-tracking branch 'upstream/main' into dev
zucchini-nlp Jul 12, 2024
ccc8352
layernorm done!
zucchini-nlp Jul 12, 2024
6e10890
let's be consistent in naming
zucchini-nlp Jul 12, 2024
fd3844a
Merge remote-tracking branch 'upstream/main' into dev
zucchini-nlp Jul 16, 2024
f01dd54
fix slow tests
zucchini-nlp Jul 16, 2024
08f0b1f
weird thing with slow CI, but let's see
zucchini-nlp Jul 16, 2024
80e702d
once more try
zucchini-nlp Jul 16, 2024
97e9a85
remove past-kv as tuple following llama
zucchini-nlp Jul 16, 2024
b78dd3c
ignore
zucchini-nlp Jul 16, 2024
3e1d22e
style
zucchini-nlp Jul 16, 2024
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -326,6 +326,8 @@
title: CamemBERT
- local: model_doc/canine
title: CANINE
- local: model_doc/chameleon
title: chameleon
- local: model_doc/codegen
title: CodeGen
- local: model_doc/code_llama
1 change: 1 addition & 0 deletions docs/source/en/index.md
@@ -88,6 +88,7 @@ Flax), PyTorch, and/or TensorFlow.
| [ByT5](model_doc/byt5) | ✅ | ✅ | ✅ |
| [CamemBERT](model_doc/camembert) | ✅ | ✅ | ❌ |
| [CANINE](model_doc/canine) | ✅ | ❌ | ❌ |
| [Chameleon](model_doc/chameleon) | ✅ | ❌ | ❌ |
| [Chinese-CLIP](model_doc/chinese_clip) | ✅ | ❌ | ❌ |
| [CLAP](model_doc/clap) | ✅ | ❌ | ❌ |
| [CLIP](model_doc/clip) | ✅ | ✅ | ✅ |
194 changes: 194 additions & 0 deletions docs/source/en/model_doc/chameleon.md
@@ -0,0 +1,194 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Chameleon

## Overview

The Chameleon model was proposed in [Chameleon: Mixed-Modal Early-Fusion Foundation Models](https://arxiv.org/abs/2405.09818v1) by the Meta AI Chameleon Team. Chameleon is a vision-language model that uses vector quantization to tokenize images, which enables the model to generate multimodal output. The model takes images and text as input, including in an interleaved format, and generates textual responses; the image generation module has not been released yet.


The abstract from the paper is the following:

*We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training
approach from inception, an alignment recipe, and an architectural parameterization tailored for the
early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range
of tasks, including visual question answering, image captioning, text generation, image generation, and
long-form mixed modal generation. Chameleon demonstrates broad and general capabilities, including
state-of-the-art performance in image captioning tasks, outperforms Llama-2 in text-only tasks while
being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image
generation, all in a single model. It also matches or exceeds the performance of much larger models,
including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal
generation evaluation, where either the prompt or outputs contain mixed sequences of both images and
text. Chameleon marks a significant step forward in a unified modeling of full multimodal documents*


<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/chameleon_arch.png"
alt="drawing" width="600"/>

<small> Chameleon incorporates a vector quantizer module to transform images into discrete tokens. This also enables image generation using an autoregressive transformer. Taken from the <a href="https://arxiv.org/abs/2405.09818v1">original paper.</a> </small>

This model was contributed by [joaogante](https://huggingface.co/joaogante) and [RaushanTurganbay](https://huggingface.co/RaushanTurganbay).
The original code can be found [here](https://github.com/facebookresearch/chameleon).


## Usage tips

- We advise users to use `padding_side="left"` for batched generation, as it leads to more accurate results. Simply make sure to set `processor.tokenizer.padding_side = "left"` before generating (see the sketch after these tips).

- Note that Chameleon was tuned for safety alignment. If the model refuses to answer, consider asking a more concrete question instead of an open-ended one.

- Chameleon generates in chat format, which means the generated text is always the "assistant's turn". You can switch to plain text completion by passing `return_for_text_completion=True` when calling the processor.

> [!NOTE]
> The Chameleon implementation in Transformers uses a special image token to indicate where to merge image embeddings. Rather than adding a new token, it reuses one of the reserved tokens: `<reserved08707>`.

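A minimal sketch of batched, text-only generation with left padding, following the tips above (this assumes the `"meta-chameleon"` checkpoint name used in the snippets below, and that the processor accepts text-only input):

```python
from transformers import ChameleonProcessor, ChameleonForCausalLM
import torch

processor = ChameleonProcessor.from_pretrained("meta-chameleon")
processor.tokenizer.padding_side = "left"  # left padding gives more accurate batched generation

model = ChameleonForCausalLM.from_pretrained("meta-chameleon", torch_dtype=torch.float16, device_map="auto")

# Two text-only prompts of different lengths, so padding is needed
prompts = ["What is a chameleon?", "Describe the colors of a sunset."]
inputs = processor(text=prompts, padding=True, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=40)
print(processor.batch_decode(output, skip_special_tokens=True))
```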
## Usage example

### Single image inference

Here's how to load the model and perform inference in half-precision (`torch.float16`):

```python
from transformers import ChameleonProcessor, ChameleonForCausalLM
import torch
from PIL import Image
import requests

processor = ChameleonProcessor.from_pretrained("meta-chameleon")
model = ChameleonForCausalLM.from_pretrained("meta-chameleon", torch_dtype=torch.float16, device_map="auto")

# prepare image and text prompt
url = "https://bjiujitsu.com/wp-content/uploads/2021/01/jiu_jitsu_belt_white_1.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "What color is the belt in this image?<image>"

inputs = processor(prompt, image, return_tensors="pt").to(model.device)

# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```

### Multi image inference

Chameleon can perform inference with multiple images as input, where the images belong either to the same prompt or to different prompts (in batched inference). Here is how you can do it:

```python
from transformers import ChameleonProcessor, ChameleonForCausalLM
import torch
from PIL import Image
import requests

processor = ChameleonProcessor.from_pretrained("meta-chameleon")
model = ChameleonForCausalLM.from_pretrained("meta-chameleon", torch_dtype=torch.float16, device_map="auto")

# Get three different images
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image_stop = Image.open(requests.get(url, stream=True).raw)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image_cats = Image.open(requests.get(url, stream=True).raw)

url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"
image_snowman = Image.open(requests.get(url, stream=True).raw)

# Prepare a batched prompt, where the first one is a multi-image prompt and the second is not
prompts = [
"What do these images have in common?<image><image>",
"<image>What is shown in this image?"
]

# We can simply feed the images in the order they have to be used in the text prompts
# Each "<image>" token consumes one image, leaving the rest for the subsequent "<image>" tokens
inputs = processor(text=prompts, images=[image_stop, image_cats, image_snowman], padding=True, return_tensors="pt").to(model.device)

# Generate
generate_ids = model.generate(**inputs, max_new_tokens=50)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
```

## Model optimization

### Quantization using Bitsandbytes

The model can be loaded in 8-bit or 4-bit precision, greatly reducing memory requirements while maintaining the performance of the original model. First install bitsandbytes (`pip install bitsandbytes`) and make sure you have access to a CUDA-compatible GPU. Then simply change the snippet above to:

```python
import torch

from transformers import ChameleonForCausalLM, BitsAndBytesConfig

# specify how to quantize the model
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
)

model = ChameleonForCausalLM.from_pretrained("meta-chameleon", quantization_config=quantization_config, device_map="auto")
```
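If you prefer 8-bit quantization instead, a minimal variant under the same assumptions (`load_in_8bit` is the standard bitsandbytes flag in Transformers):

```python
from transformers import ChameleonForCausalLM, BitsAndBytesConfig

# 8-bit uses a bit more memory than 4-bit but usually stays closer to full precision
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = ChameleonForCausalLM.from_pretrained("meta-chameleon", quantization_config=quantization_config, device_map="auto")
```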

### Use Flash-Attention 2 and SDPA to further speed-up generation

The model supports both Flash-Attention 2 and PyTorch's [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA), which can be enabled for faster generation. SDPA is the default when you load the model. If you want to switch to Flash Attention 2, first make sure to install flash-attn; refer to the [original repository](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above to:

```python
import torch

from transformers import ChameleonForCausalLM

model_id = "meta-chameleon"  # matches the checkpoint name used in the snippets above
model = ChameleonForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
attn_implementation="flash_attention_2"
).to(0)
```
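Conversely, SDPA (the default) can be requested explicitly through the same `attn_implementation` argument; a short sketch, assuming the same checkpoint name as above:

```python
import torch

from transformers import ChameleonForCausalLM

model = ChameleonForCausalLM.from_pretrained(
    "meta-chameleon",
    torch_dtype=torch.float16,
    attn_implementation="sdpa",  # explicit, though SDPA is already the default
).to(0)
```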

## ChameleonConfig

[[autodoc]] ChameleonConfig

## ChameleonVQConfig

[[autodoc]] ChameleonVQConfig

## ChameleonProcessor

[[autodoc]] ChameleonProcessor

## ChameleonImageProcessor

[[autodoc]] ChameleonImageProcessor
- preprocess

## ChameleonModel

[[autodoc]] ChameleonModel
- forward

## ChameleonForCausalLM

[[autodoc]] ChameleonForCausalLM
- forward

## ChameleonForSequenceClassification

[[autodoc]] ChameleonForSequenceClassification
- forward

## ChameleonForQuestionAnswering

[[autodoc]] ChameleonForQuestionAnswering
- forward
2 changes: 2 additions & 0 deletions docs/source/en/perf_infer_gpu_one.md
@@ -39,6 +39,7 @@ FlashAttention-2 is experimental and may change considerably in future versions.
FlashAttention-2 is currently supported for the following architectures:
* [Bark](https://huggingface.co/docs/transformers/model_doc/bark#transformers.BarkModel)
* [Bart](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartModel)
* [Chameleon](https://huggingface.co/docs/transformers/model_doc/chameleon#transformers.ChameleonModel)
* [Cohere](https://huggingface.co/docs/transformers/model_doc/cohere#transformers.CohereModel)
* [Dbrx](https://huggingface.co/docs/transformers/model_doc/dbrx#transformers.DbrxModel)
* [DistilBert](https://huggingface.co/docs/transformers/model_doc/distilbert#transformers.DistilBertModel)
@@ -198,6 +199,7 @@ For now, Transformers supports SDPA inference and training for the following architectures:
* [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer#transformers.ASTModel)
* [Bart](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartModel)
* [Bert](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertModel)
* [Chameleon](https://huggingface.co/docs/transformers/model_doc/chameleon#transformers.ChameleonModel)
* [Cohere](https://huggingface.co/docs/transformers/model_doc/cohere#transformers.CohereModel)
* [Dbrx](https://huggingface.co/docs/transformers/model_doc/dbrx#transformers.DbrxModel)
* [DeiT](https://huggingface.co/docs/transformers/model_doc/deit#transformers.DeiTModel)
28 changes: 28 additions & 0 deletions src/transformers/__init__.py
@@ -249,6 +249,11 @@
"CanineConfig",
"CanineTokenizer",
],
"models.chameleon": [
"ChameleonConfig",
"ChameleonProcessor",
"ChameleonVQVAEConfig",
],
"models.chinese_clip": [
"ChineseCLIPConfig",
"ChineseCLIPProcessor",
@@ -1124,6 +1129,7 @@
_import_structure["models.bit"].extend(["BitImageProcessor"])
_import_structure["models.blip"].extend(["BlipImageProcessor"])
_import_structure["models.bridgetower"].append("BridgeTowerImageProcessor")
_import_structure["models.chameleon"].append("ChameleonImageProcessor")
_import_structure["models.chinese_clip"].extend(["ChineseCLIPFeatureExtractor", "ChineseCLIPImageProcessor"])
_import_structure["models.clip"].extend(["CLIPFeatureExtractor", "CLIPImageProcessor"])
_import_structure["models.conditional_detr"].extend(
@@ -1606,6 +1612,15 @@
"load_tf_weights_in_canine",
]
)
_import_structure["models.chameleon"].extend(
[
"ChameleonForCausalLM",
"ChameleonModel",
"ChameleonPreTrainedModel",
"ChameleonProcessor",
"ChameleonVQVAE",
]
)
_import_structure["models.chinese_clip"].extend(
[
"ChineseCLIPModel",
@@ -4879,6 +4894,11 @@
CanineConfig,
CanineTokenizer,
)
from .models.chameleon import (
ChameleonConfig,
ChameleonProcessor,
ChameleonVQVAEConfig,
)
from .models.chinese_clip import (
ChineseCLIPConfig,
ChineseCLIPProcessor,
@@ -5795,6 +5815,7 @@
from .models.bit import BitImageProcessor
from .models.blip import BlipImageProcessor
from .models.bridgetower import BridgeTowerImageProcessor
from .models.chameleon import ChameleonImageProcessor
from .models.chinese_clip import (
ChineseCLIPFeatureExtractor,
ChineseCLIPImageProcessor,
@@ -6242,6 +6263,13 @@
CaninePreTrainedModel,
load_tf_weights_in_canine,
)
from .models.chameleon import (
ChameleonForCausalLM,
ChameleonModel,
ChameleonPreTrainedModel,
ChameleonProcessor,
ChameleonVQVAE,
)
from .models.chinese_clip import (
ChineseCLIPModel,
ChineseCLIPPreTrainedModel,
1 change: 1 addition & 0 deletions src/transformers/models/__init__.py
@@ -42,6 +42,7 @@
byt5,
camembert,
canine,
chameleon,
chinese_clip,
clap,
clip,
2 changes: 2 additions & 0 deletions src/transformers/models/auto/configuration_auto.py
@@ -55,6 +55,7 @@
("bros", "BrosConfig"),
("camembert", "CamembertConfig"),
("canine", "CanineConfig"),
("chameleon", "ChameleonConfig"),
("chinese_clip", "ChineseCLIPConfig"),
("chinese_clip_vision_model", "ChineseCLIPVisionConfig"),
("clap", "ClapConfig"),
@@ -328,6 +329,7 @@
("byt5", "ByT5"),
("camembert", "CamemBERT"),
("canine", "CANINE"),
("chameleon", "Chameleon"),
("chinese_clip", "Chinese-CLIP"),
("chinese_clip_vision_model", "ChineseCLIPVisionModel"),
("clap", "CLAP"),
1 change: 1 addition & 0 deletions src/transformers/models/auto/image_processing_auto.py
@@ -59,6 +59,7 @@
("blip", ("BlipImageProcessor",)),
("blip-2", ("BlipImageProcessor",)),
("bridgetower", ("BridgeTowerImageProcessor",)),
("chameleon", ("ChameleonImageProcessor",)),
("chinese_clip", ("ChineseCLIPImageProcessor",)),
("clip", ("CLIPImageProcessor",)),
("clipseg", ("ViTImageProcessor", "ViTImageProcessorFast")),
4 changes: 4 additions & 0 deletions src/transformers/models/auto/modeling_auto.py
@@ -55,6 +55,7 @@
("bros", "BrosModel"),
("camembert", "CamembertModel"),
("canine", "CanineModel"),
("chameleon", "ChameleonModel"),
("chinese_clip", "ChineseCLIPModel"),
("chinese_clip_vision_model", "ChineseCLIPVisionModel"),
("clap", "ClapModel"),
@@ -443,6 +444,7 @@
("blenderbot-small", "BlenderbotSmallForCausalLM"),
("bloom", "BloomForCausalLM"),
("camembert", "CamembertForCausalLM"),
("chameleon", "ChameleonForCausalLM"),
("code_llama", "LlamaForCausalLM"),
("codegen", "CodeGenForCausalLM"),
("cohere", "CohereForCausalLM"),
@@ -850,6 +852,7 @@
("bloom", "BloomForSequenceClassification"),
("camembert", "CamembertForSequenceClassification"),
("canine", "CanineForSequenceClassification"),
("chameleon", "ChameleonForSequenceClassification"),
("code_llama", "LlamaForSequenceClassification"),
("convbert", "ConvBertForSequenceClassification"),
("ctrl", "CTRLForSequenceClassification"),
@@ -942,6 +945,7 @@
("bloom", "BloomForQuestionAnswering"),
("camembert", "CamembertForQuestionAnswering"),
("canine", "CanineForQuestionAnswering"),
("chameleon", "ChameleonForQuestionAnswering"),
("convbert", "ConvBertForQuestionAnswering"),
("data2vec-text", "Data2VecTextForQuestionAnswering"),
("deberta", "DebertaForQuestionAnswering"),
1 change: 1 addition & 0 deletions src/transformers/models/auto/processing_auto.py
@@ -51,6 +51,7 @@
("blip", "BlipProcessor"),
("blip-2", "Blip2Processor"),
("bridgetower", "BridgeTowerProcessor"),
("chameleon", "ChameleonProcessor"),
("chinese_clip", "ChineseCLIPProcessor"),
("clap", "ClapProcessor"),
("clip", "CLIPProcessor"),
7 changes: 7 additions & 0 deletions src/transformers/models/auto/tokenization_auto.py
@@ -107,6 +107,13 @@
),
),
("canine", ("CanineTokenizer", None)),
(
"chameleon",
(
"LlamaTokenizer" if is_sentencepiece_available() else None,
"LlamaTokenizerFast" if is_tokenizers_available() else None,
),
),
("chinese_clip", ("BertTokenizer", "BertTokenizerFast" if is_tokenizers_available() else None)),
(
"clap",