
Conversation

@dependabot dependabot bot commented on behalf of github Aug 12, 2024

Bumps the pytorch group with 13 updates in the /pytorch directory:

| Package | From | To |
| --- | --- | --- |
| accelerate | 0.32.1 | 0.33.0 |
| peft | 0.11.1 | 0.12.0 |
| protobuf | 5.27.2 | 5.27.3 |
| tokenizers | 0.19.1 | 0.20.0 |
| transformers | 4.42.4 | 4.44.0 |
| torchvision | 0.16.0.post2+cxx11.abi | 0.19.0 |
| intel-extension-for-pytorch | 2.1.30+xpu | 2.3.100+cpu |
| torch | 2.1.0.post2+cxx11.abi | 2.4.0 |
| torchaudio | 2.1.0.post2+cxx11.abi | 2.4.0 |
| oneccl-bind-pt | 2.1.300+xpu | 2.3.0+cpu |
| setuptools | 71.1.0 | 72.1.0 |
| jupyterlab | 4.3.0a2 | 4.3.0b0 |
| neural-compressor | 2.6 | 3.0 |

Updates accelerate from 0.32.1 to 0.33.0

Release notes

Sourced from accelerate's releases.

v0.33.0: MUSA backend support and bugfixes

A small release this month, focused on added backend support and bug fixes:

What's Changed

New Contributors

Full Changelog: huggingface/accelerate@v0.32.1...v0.33.0

Commits
  • 28a3b98 Release: v0.33.0
  • 415eddf feat(ci): add pip caching in CI (#2952)
  • 2308576 Properly handle Params4bit in set_module_tensor_to_device (#2934)
  • a5a3e57 Add torch.float8_e4m3fn format dtype_byte_size (#2945)
  • 0af1d8b delete CCL env var setting (#2927)
  • d16d737 Improve test reliability for Accelerator.free_memory() (#2935)
  • 7a5c231 Consider pynvml available when installed through the nvidia-ml-py distributio...
  • 4f02bb7 Fix import test (#2931)
  • 709fd1e Hotfix PyTorch Version Installation in CI Workflow for Minimum Version Matrix...
  • f4f1260 Correct loading of models with shared tensors when using accelerator.load_sta...
  • Additional commits viewable in compare view

Updates peft from 0.11.1 to 0.12.0

Release notes

Sourced from peft's releases.

v0.12.0: New methods OLoRA, X-LoRA, FourierFT, HRA, and much more

Highlights


New methods

OLoRA

@tokenizer-decode added support for a new LoRA initialization strategy called OLoRA (#1828). With this initialization option, the LoRA weights are initialized to be orthonormal, which promises to improve training convergence. Similar to PiSSA, this can also be applied to models quantized with bitsandbytes. Check out the accompanying OLoRA examples.
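For reference, a minimal sketch of enabling this option (the base model and target modules below are illustrative):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# "olora" selects the orthonormal initialization described above;
# the model name and target modules are illustrative
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = LoraConfig(init_lora_weights="olora", target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()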

X-LoRA

@EricLBuehler added the X-LoRA method to PEFT (#1491). This is a mixture of experts approach that combines the strength of multiple pre-trained LoRA adapters. Documentation has yet to be added but check out the X-LoRA tests for how to use it.

FourierFT

@Phoveran, @zqgao22, @Chaos96, and @DSAILatHKUST added discrete Fourier transform fine-tuning to PEFT (#1838). This method promises to match LoRA in terms of performance while reducing the number of parameters even further. Check out the included FourierFT notebook.
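A rough sketch, assuming the new config class is FourierFTConfig and that n_frequency sets the number of learnable spectral coefficients (both assumptions based on this release; target modules are illustrative):

from peft import FourierFTConfig, get_peft_model

# FourierFTConfig / n_frequency are assumptions from the 0.12 release notes;
# target modules are illustrative, base as in the OLoRA sketch above
config = FourierFTConfig(n_frequency=1000, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)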

HRA

@DaShenZi721 added support for Householder Reflection Adaptation (#1864). This method bridges the gap between low rank adapters like LoRA on the one hand and orthogonal fine-tuning techniques such as OFT and BOFT on the other. As such, it is interesting for both LLMs and image generation models. Check out the HRA example on how to perform DreamBooth fine-tuning.

Enhancements

  • IA³ now supports merging of multiple adapters via the add_weighted_adapter method thanks to @alexrs (#1701); see the sketch after this list.
  • Call peft_model.get_layer_status() and peft_model.get_model_status() to get an overview of the layer/model status of the PEFT model. This can be especially helpful when dealing with multiple adapters or for debugging purposes. More information can be found in the docs (#1743).
  • DoRA now supports FSDP training, including with bitsandbytes quantization, aka QDoRA (#1806).
  • VeRA has been extended by @dkopi to support targeting layers with different weight shapes (#1817).
  • @kallewoof added the possibility of ephemeral GPU offloading. For now, this is only implemented for loading DoRA models, which can be sped up considerably for big models at the cost of a bit of extra VRAM (#1857).
  • Experimental: It is now possible to tell PEFT to use your custom LoRA layers through dynamic dispatching. Use this, for instance, to add LoRA layers for thus far unsupported layer types without the need to first create a PR on PEFT (but contributions are still welcome!) (#1875).
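A rough sketch of such a merge, assuming two IA³ adapters are already loaded on the model (adapter names and weights are illustrative):

# merge two already-loaded IA³ adapters into a new one;
# adapter names and weights are illustrative
model.add_weighted_adapter(
    adapters=["adapter_a", "adapter_b"],
    weights=[0.7, 0.3],
    adapter_name="merged",
)
model.set_adapter("merged")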

Examples

Changes

Casting of the adapter dtype

Important: If the base model is loaded in float16 (fp16) or bfloat16 (bf16), PEFT now autocasts adapter weights to float32 (fp32) instead of using the dtype of the base model (#1706). This requires more memory than previously but stabilizes training, so it's the more sensible default. To prevent this, pass autocast_adapter_dtype=False when calling get_peft_model, PeftModel.from_pretrained, or PeftModel.load_adapter.
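For example, opting out of the autocast looks like this (the model and adapter paths are illustrative):

import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# keep adapter weights in bf16 instead of autocasting them to fp32;
# the model and adapter paths are illustrative
base = AutoModelForCausalLM.from_pretrained("some/base-model", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "some/adapter", autocast_adapter_dtype=False)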

Adapter device placement

The logic of device placement when loading multiple adapters on the same model has been changed (#1742). Previously, PEFT would move all adapters to the device of the base model. Now, only the newly loaded/created adapter is moved to the base model's device. This allows users to have more fine-grained control over the adapter devices, e.g. allowing them to offload unused adapters to CPU more easily.

PiSSA

... (truncated)

Commits
  • e6cd24c Release v0.12.0 (#1946)
  • 05f57e9 PiSSA, OLoRA: Delete initial adapter after conversion instead of the active a...
  • 2ce83e0 FIX Decrease memory overhead of merging (#1944)
  • ebcd079 [WIP] ENH Add support for Qwen2 (#1906)
  • ba75bb1 FIX: More VeRA tests, fix tests, more checks (#1900)
  • 6472061 FIX Prefix tuning Grouped-Query Attention (#1901)
  • e02b938 FIX PiSSA & OLoRA with rank/alpha pattern, rslora (#1930)
  • 5268495 FEAT Add HRA: Householder Reflection Adaptation (#1864)
  • 2aaf9ce ENH Sync LoRA tp_layer methods with vanilla LoRA (#1919)
  • a019f86 FIX sft script print_trainable_parameters attr lookup (#1928)
  • Additional commits viewable in compare view

Updates protobuf from 5.27.2 to 5.27.3

Commits
  • 7cc670c Updating version.json and repo version numbers to: 27.3
  • 67d7298 Merge pull request #17617 from protocolbuffers/cp-utf8-ascii
  • e20cb7a Remove /utf-8 flag added in #14197
  • c9839cb Merge pull request #17473 from protocolbuffers/cp-revert-hack
  • 8a579c1 Downgrade CMake to 3.29 to workaround Abseil issue.
  • ba3e7d7 Revert workaround for std::mutex issues on github windows runners.
  • 861be78 Merge pull request #17331 from protocolbuffers/cp-cp
  • c1ec82f Merge pull request #17232 from simonberger/bugfix/php-ext-persistent-global-c...
  • aec8a76 Upgrade macos-11 tests to macos-12
  • 4e3b4f0 Use explicit names of our large runners
  • Additional commits viewable in compare view

Updates tokenizers from 0.19.1 to 0.20.0

Release notes

Sourced from tokenizers's releases.

Release v0.20.0: faster encode, better python support

This release is focused on performance and user experience.

Performance:

First off, we did a bit of benchmarking and found some room for improvement! With a few minor changes (mostly #1587), here is what we get on Llama3 running on a g6 instance on AWS (https://github.com/huggingface/tokenizers/blob/main/bindings/python/benches/test_tiktoken.py):

Python API

We shipped better deserialization errors in general, and support for __str__ and __repr__ for all objects. This makes debugging a lot easier; see this:

>>> from tokenizers import Tokenizer;
>>> tokenizer = Tokenizer.from_pretrained("bert-base-uncased");
>>> print(tokenizer)
Tokenizer(version="1.0", truncation=None, padding=None, added_tokens=[{"id":0, "content":"[PAD]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":100, "content":"[UNK]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":101, "content":"[CLS]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":102, "content":"[SEP]", "single_word":False, "lstrip":False, "rstrip":False, ...}, {"id":103, "content":"[MASK]", "single_word":False, "lstrip":False, "rstrip":False, ...}], normalizer=BertNormalizer(clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True), pre_tokenizer=BertPreTokenizer(), post_processor=TemplateProcessing(single=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0)], pair=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0), Sequence(id=B, type_id=1), SpecialToken(id="[SEP]", type_id=1)], special_tokens={"[CLS]":SpecialToken(id="[CLS]", ids=[101], tokens=["[CLS]"]), "[SEP]":SpecialToken(id="[SEP]", ids=[102], tokens=["[SEP]"])}), decoder=WordPiece(prefix="##", cleanup=True), model=WordPiece(unk_token="[UNK]", continuing_subword_prefix="##", max_input_chars_per_word=100, vocab={"[PAD]":0, "[unused0]":1, "[unused1]":2, "[unused2]":3, "[unused3]":4, ...}))
>>> tokenizer
Tokenizer(version="1.0", truncation=None, padding=None, added_tokens=[{"id":0, "content":"[PAD]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":100, "content":"[UNK]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":101, "content":"[CLS]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":102, "content":"[SEP]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}, {"id":103, "content":"[MASK]", "single_word":False, "lstrip":False, "rstrip":False, "normalized":False, "special":True}], normalizer=BertNormalizer(clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=True), pre_tokenizer=BertPreTokenizer(), post_processor=TemplateProcessing(single=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0)], pair=[SpecialToken(id="[CLS]", type_id=0), Sequence(id=A, type_id=0), SpecialToken(id="[SEP]", type_id=0), Sequence(id=B, type_id=1), SpecialToken(id="[SEP]", type_id=1)], special_tokens={"[CLS]":SpecialToken(id="[CLS]", ids=[101], tokens=["[CLS]"]), "[SEP]":SpecialToken(id="[SEP]", ids=[102], tokens=["[SEP]"])}), decoder=WordPiece(prefix="##", cleanup=True), model=WordPiece(unk_token="[UNK]", continuing_subword_prefix="##", max_input_chars_per_word=100, vocab={"[PAD]":0, "[unused0]":1, "[unused1]":2, ...}))

The pre_tokenizer.Sequence and normalizer.Sequence are also more accessible now:

from tokenizers import normalizers
norm = normalizers.Sequence([normalizers.Strip(), normalizers.BertNormalizer()])
norm[0]                    # index into the sequence to inspect a component
norm[1].lowercase = False  # and mutate its attributes in place

What's Changed

... (truncated)

Commits
  • a5adaac version 0.20.0
  • a8def07 Merge branch 'fix_release' of github.com:huggingface/tokenizers into branch_v...
  • fe50673 Fix CI
  • b253835 push cargo
  • fc3bb76 update dependencies
  • bfd9cde Perf improvement 16% by removing offsets. (#1587)
  • bd27fa5 add deserialize for pre tokenizers (#1603)
  • 56c9c70 Tests + Deserialization improvement for normalizers. (#1604)
  • 49dafd7 Fix strip python type (#1602)
  • bded212 Support None to reset pre_tokenizers and normalizers, and index sequences (...
  • Additional commits viewable in compare view

Updates transformers from 4.42.4 to 4.44.0

Release notes

Sourced from transformers's releases.

Release v4.44.0: End to end compile generation!!! Gemma2 (with assisted decoding), Codestral (Mistral for code), Nemotron, Efficient SFT training, CPU Offloaded KVCache, torch export for static cache

This release comes a bit early in our cycle because we wanted to ship important and requested models along with improved performance for everyone!

All of these are included with examples in the awesome https://github.com/huggingface/local-gemma repository! 🎈 We tried to share examples of what is now possible with all the shipped features! Kudos to @gante, @sanchit-gandhi and @xenova

💥 End-to-end generation compile

Generate: end-to-end compilation #30788 by @gante: model.generate now supports compiling! There are a few limitations, but here is a small snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import copy

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")

# compile generate
compiled_generate = torch.compile(model.generate, fullgraph=True, mode="reduce-overhead")

# compiled generate does NOT accept parameterization except a) model inputs b) a generation config
generation_config = copy.deepcopy(model.generation_config)
generation_config.pad_token_id = model.config.eos_token_id

model_inputs = tokenizer(["Write a poem about the market crashing in summer"], return_tensors="pt")
model_inputs = model_inputs.to(model.device)

output_compiled = compiled_generate(**model_inputs, generation_config=generation_config)
print(output_compiled)

⚡ 3 to 5x compile speedup (compilation time 👀 not runtime)

  • 3-5x faster torch.compile forward compilation for autoregressive decoder models #32227 by @fxmarty. As documented on the PR, this makes the whole generation a lot faster when you re-use the cache! You can see this when you run model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

🪶 Offloaded KV cache: offload the cache to CPU when you are GPU poooooor 🚀

  • Offloaded KV Cache #31325 by @n17s: just set cache_implementation="offloaded" in your generation call or GenerationConfig, for example:

from transformers import GenerationConfig

gen_config = GenerationConfig(
    cache_implementation="offloaded",
    # other generation options, e.g. num_beams=4, num_beam_groups=2, num_return_sequences=4,
    # diversity_penalty=1.0, max_new_tokens=50, early_stopping=True
)
outputs = model.generate(inputs["input_ids"], generation_config=gen_config)

📦 Torch export for static cache

The PyTorch team gave us a great gift: you can now use torch.export, directly compatible with ExecuTorch! Find examples here.

... (truncated)

Commits

Updates torchvision from 0.16.0.post2+cxx11.abi to 0.19.0

Release notes

Sourced from torchvision's releases.

Torchvision 0.19 release

Highlights

Encoding / Decoding images

Torchvision is extending its encoding/decoding capabilities. For this version, we added a GIF decoder which is available as torchvision.io.decode_gif(raw_tensor), torchvision.io.decode_image(raw_tensor), and torchvision.io.read_image(path_to_image).

We also added support for jpeg GPU encoding in torchvision.io.encode_jpeg(). This is 10X faster than the existing CPU jpeg encoder.

Read more on the docs!
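A minimal sketch of both features ("animation.gif" is a hypothetical local file):

import torch
from torchvision.io import read_file, decode_gif, encode_jpeg

# decode a GIF (new in 0.19); the file path is hypothetical
frames = decode_gif(read_file("animation.gif"))  # (N, 3, H, W) for multi-frame GIFs
img = frames[0] if frames.ndim == 4 else frames

# GPU jpeg encoding (0.19): encode_jpeg now accepts CUDA tensors
if torch.cuda.is_available():
    jpeg_bytes = encode_jpeg(img.cuda())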

Stay tuned for more improvements coming in the next versions. We plan to improve jpeg GPU decoding, and add more image decoders (webp in particular).

Resizing according to the longest edge of an image

It is now possible to resize images by setting torchvision.transforms.v2.Resize(max_size=N): this will resize the longest edge of the image exactly to max_size, making sure the image dimensions don't exceed this value. Read more on the docs!
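A quick sketch (assuming size=None is what selects the longest-edge mode):

import torch
from torchvision.transforms import v2

# resize so the longest edge is exactly 512; size=None enables this mode (assumption)
resize = v2.Resize(size=None, max_size=512)
img = torch.randint(0, 256, (3, 600, 400), dtype=torch.uint8)
print(resize(img).shape)  # longest edge scaled to 512, here (3, 512, 341)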

Detailed changes

Bug Fixes

  • [datasets] SBDataset: Only download noval file when image_set='train_noval' (#8475)
  • [datasets] Update the download url in class EMNIST (#8350)
  • [io] Fix compilation error when there is no libjpeg (#8342)
  • [reference scripts] Fix use of cutmix_alpha in classification training references (#8448)
  • [utils] Allow K=1 in draw_keypoints (#8439)

New Features

  • [io] Add decoder for GIF images (decode_gif(), decode_image(), read_image()) (#8406, #8419)
  • [transforms] Add GaussianNoise transform (#8381)

Improvements

  • [transforms] Allow v2 Resize to resize longer edge exactly to max_size (#8459)
  • [transforms] Add min_area parameter to SanitizeBoundingBox (#7735)
  • [transforms] Make adjust_hue() work with numpy 2.0 (#8463)
  • [transforms] Enable one-hot-encoded labels in MixUp and CutMix (#8427)
  • [transforms] Create kernel on-device for transforms.functional.gaussian_blur (#8426)
  • [io] Adding GPU acceleration to encode_jpeg (10X faster than CPU encoder) (#8391)
  • [io] read_video: accept BytesIO objects on pyav backend (#8442)
  • [io] Add compatibility with FFMPEG 7.0 (#8408)
  • [datasets] Add extra to install gdown (#8430)
  • [datasets] Support encoded RLE format for COCO segmentations (#8387)
  • [datasets] Added binary cat vs dog classification target type to Oxford pet dataset (#8388)
  • [datasets] Return labels for FER2013 if possible (#8452)
  • [ops] Force use of torch.compile on deterministic roi_align implementation (#8436)
  • [utils] Add float support to utils.draw_bounding_boxes() (#8328)

... (truncated)

Commits

Updates intel-extension-for-pytorch from 2.1.30+xpu to 2.3.100+cpu

Updates torch from 2.1.0.post2+cxx11.abi to 2.4.0

Release notes

Sourced from torch's releases.

PyTorch 2.4: Python 3.12, AOTInductor freezing, libuv backend for TCPStore

PyTorch 2.4 Release Notes

  • Highlights
  • Tracked Regressions
  • Backward incompatible changes
  • Deprecations
  • New features
  • Improvements
  • Bug Fixes
  • Performance
  • Documentation
  • Developers
  • Security

Highlights

We are excited to announce the release of PyTorch® 2.4! PyTorch 2.4 adds support for the latest version of Python (3.12) for torch.compile. AOTInductor freezing gives developers running AOTInductor more performance-based optimizations by allowing the serialization of MKLDNN weights. As well, a new default TCPStore server backend utilizing libuv has been introduced, which should significantly reduce initialization times for users running large-scale jobs. Finally, a new Python Custom Operator API makes it easier than before to integrate custom kernels into PyTorch, especially for torch.compile.
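As a minimal sketch of the new Python Custom Operator API (the mylib namespace and the op itself are illustrative):

import torch

# register a custom op; the "mylib::mul_scalar" name is illustrative
@torch.library.custom_op("mylib::mul_scalar", mutates_args=())
def mul_scalar(x: torch.Tensor, factor: float) -> torch.Tensor:
    return x * factor

# a "fake" implementation lets torch.compile reason about output shapes
@mul_scalar.register_fake
def _(x, factor):
    return torch.empty_like(x)

compiled = torch.compile(lambda t: mul_scalar(t, 2.0), fullgraph=True)
print(compiled(torch.ones(3)))  # tensor([2., 2., 2.])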

This release is composed of 3661 commits and 475 contributors since PyTorch 2.3. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.4. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.

... (truncated)

Commits

Updates torchaudio from 2.1.0.post2+cxx11.abi to 2.4.0

Release notes

Sourced from torchaudio's releases.

TorchAudio 2.4.0 Release

This release is compatible with PyTorch 2.4. There are no new features added.

This release contains 2 fixes:

TorchAudio 2.3.1 Release

This release is compatible with PyTorch 2.3.1 patch release. There are no new features added.

TorchAudio 2.3.0 Release

This release is compatible with PyTorch 2.3.0 patch release. There are no new features added.

This release contains minor documentation and code quality improvements (#3734, #3748, #3757, #3759)

TorchAudio 2.2.2 Release

This release is compatible with PyTorch 2.2.2 patch release. There are no new features added.

TorchAudio 2.2.1 Release

This release is compatible with PyTorch 2.2.1 patch release. There are no new features added.

TorchAudio 2.2.0 Release

New Features

Bug Fixes

Recipe Updates

TorchAudio 2.1.2 Release

This is a patch release, which is compatible with PyTorch 2.1.2. There are no new features added.

v2.1.1

This is a minor release, which is compatible with PyTorch 2.1.1 and includes bug fixes, improvements and documentation updates.

Bug Fixes

  • Cherry-pick 2.1.1: Fix WavLM bundles (#3665)
  • Cherry-pick 2.1.1: Add back compression level in i/o dispatcher backend (#3666)
Commits

Updates oneccl-bind-pt from 2.1.300+xpu to 2.3.0+cpu

Updates setuptools from 71.1.0 to 72.1.0

Changelog

Sourced from setuptools's changelog.

v72.1.0

Features

  • Restore the tests command and deprecate access to the module. (#4519) (#4520)

v72.0.0

Deprecations and Removals

  • The test command has been removed. Users relying on 'setup.py test' will need to migrate to another test runner or pin setuptools before this version. (#931)
Commits
  • 441799f Bump version: 72.0.0 → 72.1.0
  • 59aff44 Merge pull request #4522 from pypa/feature/graceful-drop-tests
  • c437aaa Restore the tests command and deprecate access to the module.
  • a6726b9 Add celery and requests to the packages that test integration. Ref #4520
  • 5e1b3c4 Bump version: 71.1.0 → 72.0.0
  • 4c0b9f3 Merge pull request #4458 from pypa/debt/remove-test-command
  • be8e3a0 Merge pull request #4507 from pypa/docs/4483-install-core-extra
  • 99d2c72 Add documentation clarifying how to reliably install setuptools with its depe...
  • 63c89f9 👹 Feed the hobgoblins (delint).
  • c405ac1 Merge branch 'main' into debt/remove-test-command
  • See full diff in compare view

Updates jupyterlab from 4.3.0a2 to 4.3.0b0

Release notes

Sourced from jupyterlab's releases.

v4.3.0b0

(Full Changelog)

Enhancements made

Bugs fixed

Maintenance and upkeep improvements

... (truncated)

Changelog

Sourced from jupyterlab's changelog.

4.3.0b0

(Full Changelog)

Enhancements made



Updates `accelerate` from 0.32.1 to 0.33.0
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](huggingface/accelerate@v0.32.1...v0.33.0)

Updates `peft` from 0.11.1 to 0.12.0
- [Release notes](https://github.com/huggingface/peft/releases)
- [Commits](huggingface/peft@v0.11.1...v0.12.0)

Updates `protobuf` from 5.27.2 to 5.27.3
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Changelog](https://github.com/protocolbuffers/protobuf/blob/main/protobuf_release.bzl)
- [Commits](protocolbuffers/protobuf@v5.27.2...v5.27.3)

Updates `tokenizers` from 0.19.1 to 0.20.0
- [Release notes](https://github.com/huggingface/tokenizers/releases)
- [Changelog](https://github.com/huggingface/tokenizers/blob/main/RELEASE.md)
- [Commits](huggingface/tokenizers@v0.19.1...v0.20.0)

Updates `transformers` from 4.42.4 to 4.44.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.42.4...v4.44.0)

Updates `torchvision` from 0.16.0.post2+cxx11.abi to 0.19.0
- [Release notes](https://github.com/pytorch/vision/releases)
- [Commits](https://github.com/pytorch/vision/commits/0.19.0)

Updates `intel-extension-for-pytorch` from 2.1.30+xpu to 2.3.100+cpu

Updates `torch` from 2.1.0.post2+cxx11.abi to 2.4.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/commits/v2.4.0)

Updates `torchaudio` from 2.1.0.post2+cxx11.abi to 2.4.0
- [Release notes](https://github.com/pytorch/audio/releases)
- [Commits](https://github.com/pytorch/audio/commits/v2.4.0)

Updates `oneccl-bind-pt` from 2.1.300+xpu to 2.3.0+cpu

Updates `setuptools` from 71.1.0 to 72.1.0
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](pypa/setuptools@v71.1.0...v72.1.0)

Updates `jupyterlab` from 4.3.0a2 to 4.3.0b0
- [Release notes](https://github.com/jupyterlab/jupyterlab/releases)
- [Changelog](https://github.com/jupyterlab/jupyterlab/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jupyterlab/jupyterlab/compare/@jupyterlab/lsp@4.3.0-alpha.2...@jupyterlab/lsp@4.3.0-beta.0)

Updates `neural-compressor` from 2.6 to 3.0
- [Release notes](https://github.com/intel/neural-compressor/releases)
- [Commits](intel/neural-compressor@v2.6...v3.0)

---
updated-dependencies:
- dependency-name: accelerate
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: peft
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: pytorch
- dependency-name: tokenizers
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: transformers
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: torchvision
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: intel-extension-for-pytorch
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: torch
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: torchaudio
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: oneccl-bind-pt
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pytorch
- dependency-name: setuptools
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: pytorch
- dependency-name: jupyterlab
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: pytorch
- dependency-name: neural-compressor
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: pytorch
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels Aug 12, 2024
@github-actions

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

OpenSSF Scorecard


Scanned Manifest Files

@dependabot dependabot bot commented on behalf of github Aug 14, 2024

Looks like these dependencies are updatable in another way, so this is no longer needed.

@dependabot dependabot bot closed this Aug 14, 2024
@dependabot dependabot bot deleted the dependabot/pip/pytorch/pytorch-88c828daf6 branch August 14, 2024 16:50
jitendra42 pushed a commit to jitendra42/ai-containers that referenced this pull request Oct 23, 2024
* add masking support

* update docs

* re-disable compare perf

* enable masking in the action

* integrate with actions

* use .actions.json instead of input passthrough

* remove python edits