diff --git a/docs/source/en/index.md b/docs/source/en/index.md
index 1c90fd71b3d..1374b27ab45 100644
--- a/docs/source/en/index.md
+++ b/docs/source/en/index.md
@@ -359,8 +359,8 @@ Flax), PyTorch, and/or TensorFlow.
| [ViTMAE](model_doc/vit_mae) | ✅ | ✅ | ❌ |
| [ViTMatte](model_doc/vitmatte) | ✅ | ❌ | ❌ |
| [ViTMSN](model_doc/vit_msn) | ✅ | ❌ | ❌ |
-| [VitPose](model_doc/vitpose) | ✅ | ❌ | ❌ |
-| [VitPoseBackbone](model_doc/vitpose_backbone) | ✅ | ❌ | ❌ |
+| [ViTPose](model_doc/vitpose) | ✅ | ❌ | ❌ |
+| [ViTPoseBackbone](model_doc/vitpose_backbone) | ✅ | ❌ | ❌ |
| [VITS](model_doc/vits) | ✅ | ❌ | ❌ |
| [ViViT](model_doc/vivit) | ✅ | ❌ | ❌ |
| [Wav2Vec2](model_doc/wav2vec2) | ✅ | ✅ | ✅ |
diff --git a/docs/source/en/model_doc/vitpose.md b/docs/source/en/model_doc/vitpose.md
index 361f8e30c75..4fbead04ea8 100644
--- a/docs/source/en/model_doc/vitpose.md
+++ b/docs/source/en/model_doc/vitpose.md
@@ -10,24 +10,28 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->
-# VitPose
+# ViTPose
## Overview
-The VitPose model was proposed in [ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation](https://arxiv.org/abs/2204.12484) by Yufei Xu, Jing Zhang, Qiming Zhang, Dacheng Tao. VitPose employs a standard, non-hierarchical [Vision Transformer](https://arxiv.org/pdf/2010.11929v2) as backbone for the task of keypoint estimation. A simple decoder head is added on top to predict the heatmaps from a given image. Despite its simplicity, the model gets state-of-the-art results on the challenging MS COCO Keypoint Detection benchmark.
+The ViTPose model was proposed in [ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation](https://arxiv.org/abs/2204.12484) by Yufei Xu, Jing Zhang, Qiming Zhang, Dacheng Tao. ViTPose employs a standard, non-hierarchical [Vision Transformer](vit) as a backbone for the task of keypoint estimation. A simple decoder head is added on top to predict the heatmaps from a given image. Despite its simplicity, the model achieves state-of-the-art results on the challenging MS COCO Keypoint Detection benchmark. The model was further improved in [ViTPose++: Vision Transformer for Generic Body Pose Estimation](https://arxiv.org/abs/2212.04246), where the authors add
+a mixture-of-experts (MoE) module to the ViT backbone and pre-train on more data, which further improves performance.
The abstract from the paper is the following:
*Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking the advantages of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art.*
-![vitpose-architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitpose-architecture.png)
+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vitpose-architecture.png" alt="drawing" width="600"/>
+
+<small> ViTPose architecture. Taken from the <a href="https://arxiv.org/abs/2204.12484">original paper.</a> </small>
This model was contributed by [nielsr](https://huggingface.co/nielsr) and [sangbumchoi](https://github.com/SangbumChoi).
The original code can be found [here](https://github.com/ViTAE-Transformer/ViTPose).
## Usage Tips
-ViTPose is a so-called top-down keypoint detection model. This means that one first uses an object detector, like [RT-DETR](rt_detr.md), to detect people (or other instances) in an image. Next, ViTPose takes the cropped images as input and predicts the keypoints.
+ViTPose is a so-called top-down keypoint detection model. This means that one first uses an object detector, like [RT-DETR](rt_detr.md), to detect people (or other instances) in an image. Next, ViTPose takes the cropped images as input and predicts the keypoints for each of them.
```py
import torch
@@ -36,11 +40,7 @@ import numpy as np
from PIL import Image
-from transformers import (
- AutoProcessor,
- RTDetrForObjectDetection,
- VitPoseForPoseEstimation,
-)
+from transformers import AutoProcessor, RTDetrForObjectDetection, VitPoseForPoseEstimation
device = "cuda" if torch.cuda.is_available() else "cpu"
@@ -51,7 +51,7 @@ image = Image.open(requests.get(url, stream=True).raw)
# Stage 1. Detect humans on the image
# ------------------------------------------------------------------------
-# You can choose detector by your choice
+# You can choose any detector of your choice
person_image_processor = AutoProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
person_model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365", device_map=device)
@@ -89,9 +89,50 @@ pose_results = image_processor.post_process_pose_estimation(outputs, boxes=[pers
image_pose_result = pose_results[0] # results for first image
```
+### ViTPose++ models
-### Visualization for supervision user
-```py
+The best [checkpoints](https://huggingface.co/collections/usyd-community/vitpose-677fcfd0a0b2b5c8f79c4335) are those of the [ViTPose++ paper](https://arxiv.org/abs/2212.04246). ViTPose++ models employ a so-called [Mixture-of-Experts (MoE)](https://huggingface.co/blog/moe) architecture for the ViT backbone, resulting in better performance.
+
+The ViTPose+ checkpoints use 6 experts, hence 6 different dataset indices can be passed.
+An overview of the various dataset indices is provided below:
+
+- 0: [COCO validation 2017](https://cocodataset.org/#overview) dataset, using an object detector that gets 56 AP on the "person" class
+- 1: [AiC](https://github.com/fabbrimatteo/AiC-Dataset) dataset
+- 2: [MPII](https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/software-and-datasets/mpii-human-pose-dataset) dataset
+- 3: [AP-10K](https://github.com/AlexTheBad/AP-10K) dataset
+- 4: [APT-36K](https://github.com/pandorgan/APT-36K) dataset
+- 5: [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody) dataset
+
+Pass the `dataset_index` argument in the forward pass of the model to indicate which experts to use for each example in the batch. Example usage is shown below:
+
+```python
+image_processor = AutoProcessor.from_pretrained("usyd-community/vitpose-plus-base")
+model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-plus-base", device_map=device)
+
+inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)
+
+dataset_index = torch.tensor([0], device=device) # must be a tensor of shape (batch_size,)
+
+with torch.no_grad():
+    outputs = model(**inputs, dataset_index=dataset_index)
+```
+
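+The `dataset_index` tensor is expected to contain one entry per person crop in the batch. A minimal sketch, reusing `image`, `person_boxes`, `model` and `image_processor` from the snippets above (assigning every crop to the COCO expert is just an illustration):
+
+```python
+inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)
+
+# the processor creates one cropped input per bounding box,
+# so the batch size equals the number of detected persons
+num_crops = inputs["pixel_values"].shape[0]
+dataset_index = torch.full((num_crops,), 0, device=device)  # expert 0 = COCO
+
+with torch.no_grad():
+    outputs = model(**inputs, dataset_index=dataset_index)
+
+pose_results = image_processor.post_process_pose_estimation(outputs, boxes=[person_boxes])
+```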
+
+### Visualization
+
+To visualize the various keypoints, one can leverage the `supervision` [library](https://github.com/roboflow/supervision) (requires `pip install supervision`):
+
+```python
import supervision as sv
xy = torch.stack([pose_result['keypoints'] for pose_result in image_pose_result]).cpu().numpy()
@@ -119,8 +160,9 @@ annotated_frame = vertex_annotator.annotate(
)
```
-### Visualization for advanced user
-```py
+Alternatively, one can visualize the keypoints using [OpenCV](https://opencv.org/) (requires `pip install opencv-python`):
+
+```python
import math
import cv2
@@ -223,26 +265,18 @@ pose_image
```
-### MoE backbone
-
-To enable MoE (Mixture of Experts) function in the backbone, user has to give appropriate configuration such as `num_experts` and input value `dataset_index` to the backbone model. However, it is not used in default parameters. Below is the code snippet for usage of MoE function.
+## Resources
-```py
->>> from transformers import VitPoseBackboneConfig, VitPoseBackbone
->>> import torch
-
->>> config = VitPoseBackboneConfig(num_experts=3, out_indices=[-1])
->>> model = VitPoseBackbone(config)
+A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTPose. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
->>> pixel_values = torch.randn(3, 3, 256, 192)
->>> dataset_index = torch.tensor([1, 2, 3])
->>> outputs = model(pixel_values, dataset_index)
-```
+- A demo of ViTPose on images and video can be found [here](https://huggingface.co/spaces/hysts/ViTPose-transformers).
+- A notebook illustrating inference and visualization can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViTPose/Inference_with_ViTPose_for_human_pose_estimation.ipynb).
## VitPoseImageProcessor
[[autodoc]] VitPoseImageProcessor
- preprocess
+ - post_process_pose_estimation
## VitPoseConfig
diff --git a/src/transformers/models/auto/configuration_auto.py b/src/transformers/models/auto/configuration_auto.py
index 659fc441512..8ad7899dd04 100644
--- a/src/transformers/models/auto/configuration_auto.py
+++ b/src/transformers/models/auto/configuration_auto.py
@@ -650,8 +650,8 @@
("vit_msn", "ViTMSN"),
("vitdet", "VitDet"),
("vitmatte", "ViTMatte"),
- ("vitpose", "VitPose"),
- ("vitpose_backbone", "VitPoseBackbone"),
+ ("vitpose", "ViTPose"),
+ ("vitpose_backbone", "ViTPoseBackbone"),
("vits", "VITS"),
("vivit", "ViViT"),
("wav2vec2", "Wav2Vec2"),
diff --git a/src/transformers/models/vitpose/convert_vitpose_to_hf.py b/src/transformers/models/vitpose/convert_vitpose_to_hf.py
index f151adebbce..b1e55628a31 100644
--- a/src/transformers/models/vitpose/convert_vitpose_to_hf.py
+++ b/src/transformers/models/vitpose/convert_vitpose_to_hf.py
@@ -15,6 +15,8 @@
"""Convert VitPose checkpoints from the original repository.
URL: https://github.com/vitae-transformer/vitpose
+
+Notebook to get the original logits: https://colab.research.google.com/drive/1QDX_2POTpl6JaZAV2WIFjuiqDsDwiqMZ?usp=sharing.
"""
import argparse
@@ -43,34 +45,63 @@
}
MODEL_TO_FILE_NAME_MAPPING = {
+ # VitPose models, simple decoder
"vitpose-base-simple": "vitpose-b-simple.pth",
+ # VitPose models, classic decoder
"vitpose-base": "vitpose-b.pth",
+ # VitPose models, COCO-AIC-MPII
"vitpose-base-coco-aic-mpii": "vitpose_base_coco_aic_mpii.pth",
+ # VitPose+ models
+ "vitpose-plus-small": "vitpose+_small.pth",
"vitpose-plus-base": "vitpose+_base.pth",
+ "vitpose-plus-large": "vitpose+_large.pth",
+ "vitpose-plus-huge": "vitpose+_huge.pth",
}
def get_config(model_name):
- num_experts = 6 if "plus" in model_name else 1
- part_features = 192 if "plus" in model_name else 0
+ if "plus" in model_name:
+ num_experts = 6
+ if "small" in model_name:
+ part_features = 96
+ out_indices = [12]
+ elif "base" in model_name:
+ part_features = 192
+ out_indices = [12]
+ elif "large" in model_name:
+ part_features = 256
+ out_indices = [24]
+ elif "huge" in model_name:
+ part_features = 320
+ out_indices = [32]
+ else:
+ raise ValueError(f"Model {model_name} not supported")
+ else:
+ num_experts = 1
+ part_features = 0
-    backbone_config = VitPoseBackboneConfig(out_indices=[12], num_experts=num_experts, part_features=part_features)
    # size of the architecture
    if "small" in model_name:
-        backbone_config.hidden_size = 768
-        backbone_config.intermediate_size = 2304
-        backbone_config.num_hidden_layers = 8
-        backbone_config.num_attention_heads = 8
+        hidden_size = 384
+        num_hidden_layers = 12
+        num_attention_heads = 12
    elif "large" in model_name:
-        backbone_config.hidden_size = 1024
-        backbone_config.intermediate_size = 4096
-        backbone_config.num_hidden_layers = 24
-        backbone_config.num_attention_heads = 16
+        hidden_size = 1024
+        num_hidden_layers = 24
+        num_attention_heads = 16
    elif "huge" in model_name:
-        backbone_config.hidden_size = 1280
-        backbone_config.intermediate_size = 5120
-        backbone_config.num_hidden_layers = 32
-        backbone_config.num_attention_heads = 16
+        hidden_size = 1280
+        num_hidden_layers = 32
+        num_attention_heads = 16
+    else:
+        # default ViT-base dimensions (e.g. for "vitpose-base-simple")
+        hidden_size = 768
+        num_hidden_layers = 12
+        num_attention_heads = 12
+
+    backbone_config = VitPoseBackboneConfig(
+        out_indices=out_indices,
+        hidden_size=hidden_size,
+        num_hidden_layers=num_hidden_layers,
+        num_attention_heads=num_attention_heads,
+        num_experts=num_experts,
+        part_features=part_features,
+    )
use_simple_decoder = "simple" in model_name
@@ -155,9 +186,7 @@ def prepare_img():
@torch.no_grad()
-def write_model(model_path, model_name, push_to_hub, check_logits=True):
- os.makedirs(model_path, exist_ok=True)
-
+def write_model(model_name, model_path, push_to_hub, check_logits=True):
# ------------------------------------------------------------
# Vision model params and config
# ------------------------------------------------------------
@@ -236,20 +265,27 @@ def write_model(model_path, model_name, push_to_hub, check_logits=True):
filepath = hf_hub_download(repo_id="nielsr/test-image", filename="vitpose_batch_data.pt", repo_type="dataset")
original_pixel_values = torch.load(filepath, map_location="cpu")["img"]
+ # we allow for a small difference in the pixel values due to the original repository using cv2
assert torch.allclose(pixel_values, original_pixel_values, atol=1e-1)
dataset_index = torch.tensor([0])
with torch.no_grad():
+ print("Shape of original_pixel_values: ", original_pixel_values.shape)
+ print("First values of original_pixel_values: ", original_pixel_values[0, 0, :3, :3])
+
# first forward pass
- outputs = model(pixel_values, dataset_index=dataset_index)
+ outputs = model(original_pixel_values, dataset_index=dataset_index)
output_heatmap = outputs.heatmaps
+ print("Shape of output_heatmap: ", output_heatmap.shape)
+ print("First values: ", output_heatmap[0, 0, :3, :3])
+
# second forward pass (flipped)
# this is done since the model uses `flip_test=True` in its test config
- pixel_values_flipped = torch.flip(pixel_values, [3])
+ original_pixel_values_flipped = torch.flip(original_pixel_values, [3])
outputs_flipped = model(
- pixel_values_flipped,
+ original_pixel_values_flipped,
dataset_index=dataset_index,
flip_pairs=torch.tensor([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]]),
)
@@ -261,6 +297,7 @@ def write_model(model_path, model_name, push_to_hub, check_logits=True):
pose_results = image_processor.post_process_pose_estimation(outputs, boxes=boxes)[0]
if check_logits:
+ # Simple decoder checkpoints
if model_name == "vitpose-base-simple":
assert torch.allclose(
pose_results[1]["keypoints"][0],
@@ -272,6 +309,7 @@ def write_model(model_path, model_name, push_to_hub, check_logits=True):
torch.tensor([8.66642594e-01]),
atol=5e-2,
)
+ # Classic decoder checkpoints
elif model_name == "vitpose-base":
assert torch.allclose(
pose_results[1]["keypoints"][0],
@@ -283,6 +321,7 @@ def write_model(model_path, model_name, push_to_hub, check_logits=True):
torch.tensor([8.8235235e-01]),
atol=5e-2,
)
+ # COCO-AIC-MPII checkpoints
elif model_name == "vitpose-base-coco-aic-mpii":
assert torch.allclose(
pose_results[1]["keypoints"][0],
@@ -294,6 +333,18 @@ def write_model(model_path, model_name, push_to_hub, check_logits=True):
torch.tensor([8.69966745e-01]),
atol=5e-2,
)
+ # VitPose+ models
+ elif model_name == "vitpose-plus-small":
+ assert torch.allclose(
+ pose_results[1]["keypoints"][0],
+ torch.tensor([398.1597, 181.6902]),
+ atol=5e-2,
+ )
+ assert torch.allclose(
+ pose_results[1]["scores"][0],
+ torch.tensor(0.9051),
+ atol=5e-2,
+ )
elif model_name == "vitpose-plus-base":
assert torch.allclose(
pose_results[1]["keypoints"][0],
@@ -305,18 +356,43 @@ def write_model(model_path, model_name, push_to_hub, check_logits=True):
torch.tensor([8.75046968e-01]),
atol=5e-2,
)
+ elif model_name == "vitpose-plus-large":
+ assert torch.allclose(
+ pose_results[1]["keypoints"][0],
+ torch.tensor([398.1409, 181.7412]),
+ atol=5e-2,
+ )
+ assert torch.allclose(
+ pose_results[1]["scores"][0],
+ torch.tensor(0.8746),
+ atol=5e-2,
+ )
+ elif model_name == "vitpose-plus-huge":
+ assert torch.allclose(
+ pose_results[1]["keypoints"][0],
+ torch.tensor([398.2079, 181.8026]),
+ atol=5e-2,
+ )
+ assert torch.allclose(
+ pose_results[1]["scores"][0],
+ torch.tensor(0.8693),
+ atol=5e-2,
+ )
else:
raise ValueError("Model not supported")
print("Conversion successfully done.")
- # save the model to a local directory
- model.save_pretrained(model_path)
- image_processor.save_pretrained(model_path)
+ if model_path is not None:
+ os.makedirs(model_path, exist_ok=True)
+ model.save_pretrained(model_path)
+ image_processor.save_pretrained(model_path)
if push_to_hub:
print(f"Pushing model and image processor for {model_name} to hub")
- model.push_to_hub(f"danelcsb/{model_name}")
- image_processor.push_to_hub(f"danelcsb/{model_name}")
+ # we created a community organization on the hub for this model
+ # maintained by the Transformers team
+ model.push_to_hub(f"usyd-community/{model_name}")
+ image_processor.push_to_hub(f"usyd-community/{model_name}")
def main():
@@ -330,16 +406,13 @@ def main():
help="Name of the VitPose model you'd like to convert.",
)
parser.add_argument(
- "--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model directory."
+ "--pytorch_dump_folder_path", default=None, type=str, help="Path to store the converted model."
)
parser.add_argument(
"--push_to_hub", action="store_true", help="Whether or not to push the converted model to the 🤗 hub."
)
parser.add_argument(
- "--push_to_hub",
- default=True,
- type=bool,
- help="Whether to check the logits of public converted model to the 🤗 hub. You can disable when using custom model.",
+ "--check_logits", action="store_false", help="Whether or not to verify the logits of the converted model."
)
args = parser.parse_args()
diff --git a/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py b/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py
index d3f05798837..d89f95e26b5 100644
--- a/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py
+++ b/src/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py
@@ -393,7 +393,7 @@ class VitPoseBackbonePreTrainedModel(PreTrainedModel):
supports_gradient_checkpointing = True
_no_split_modules = ["VitPoseBackboneEmbeddings", "VitPoseBackboneLayer"]
- def _init_weights(self, module: Union[nn.Linear, nn.Conv2d, nn.LayerNorm]) -> None:
+ def _init_weights(self, module: Union[nn.Linear, nn.Conv2d, nn.LayerNorm, VitPoseBackboneEmbeddings]) -> None:
"""Initialize the weights"""
if isinstance(module, (nn.Linear, nn.Conv2d)):
# Upcast the input in `fp32` and cast it back to desired `dtype` to avoid
diff --git a/tests/models/vitpose/test_modeling_vitpose.py b/tests/models/vitpose/test_modeling_vitpose.py
index 1c33b6cf367..73129956a3d 100644
--- a/tests/models/vitpose/test_modeling_vitpose.py
+++ b/tests/models/vitpose/test_modeling_vitpose.py
@@ -239,9 +239,7 @@ def default_image_processor(self):
@slow
def test_inference_pose_estimation(self):
image_processor = self.default_image_processor
- model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-base-simple")
- model.to(torch_device)
- model.eval()
+ model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-base-simple", device_map=torch_device)
image = prepare_img()
boxes = [[[412.8, 157.61, 53.05, 138.01], [384.43, 172.21, 15.12, 35.74]]]
@@ -284,9 +282,7 @@ def test_inference_pose_estimation(self):
@slow
def test_batched_inference(self):
image_processor = self.default_image_processor
- model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-base-simple")
- model.to(torch_device)
- model.eval()
+ model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-base-simple", device_map=torch_device)
image = prepare_img()
boxes = [