Bump the pip group across 3 directories with 4 updates #1900

Open
wants to merge 1 commit into base: main

Conversation

dependabot[bot]

@dependabot dependabot bot commented on behalf of github Nov 19, 2024

Bumps the pip group with 3 updates in the / directory: gradio, torch and langchain-community.
Bumps the pip group with 3 updates in the /reqs_optional directory: gradio, torch and langchain-community.
Bumps the pip group with 2 updates in the /spaces/demo directory: torch and transformers.

Updates gradio from 4.44.0 to 5.5.0

Release notes

Sourced from gradio's releases.

gradio@5.5.0

Features

Fixes

Changelog

Sourced from gradio's changelog.

5.5.0

Features

Fixes

5.4.0

Features

Fixes

... (truncated)

Commits
  • b5eaba1 chore: update versions (#9874)
  • fa5d433 Do not load code in gr.NO_RELOAD in the reload mode watch thread (#9886)
  • b6725cf Lite auto-load imported modules with pyodide.loadPackagesFromImports (#9726)
  • e10bbd2 Fix live interfaces for audio/image streaming (#9883)
  • dcfa7ad Enforce meta key present during preprocess in FileData payloads (#9898)
  • 7d77024 Fix dataframe height increasing on scroll (#9892)
  • 4d90883 Allows selection of directories in File Explorer (#9835)
  • 6c8a064 Ensure non-form elements are correctly positioned when scale is applied (#9882)
  • a1582a6 Lite worker refactoring (#9424)
  • f109497 Fix frontend errors on ApiDocs and RecordingSnippet (#9786)
  • Additional commits viewable in compare view
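Since this PR moves gradio across a major version (4.44.0 → 5.5.0), a quick smoke test can confirm that a minimal app still launches after the upgrade. The sketch below is illustrative and not code from this repository:

```python
# Illustrative smoke test for the gradio 4.44.0 -> 5.5.0 bump (not from this repository).
import gradio as gr

def echo(message: str) -> str:
    # Trivial handler; gr.Interface keeps the same basic signature across 4.x and 5.x.
    return message

demo = gr.Interface(fn=echo, inputs="text", outputs="text", title="Upgrade smoke test")

if __name__ == "__main__":
    demo.launch()  # confirm the app starts and renders under gradio 5
```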

Updates torch from 2.2.1 to 2.5.1

Release notes

Sourced from torch's releases.

PyTorch 2.5.1: bug fix release

This release is meant to fix the following regressions:

Besides the regression fixes, the release includes several documentation updates.

See release tracker pytorch/pytorch#132400 for additional information.

PyTorch 2.5.0 Release, SDPA CuDNN backend, Flex Attention

PyTorch 2.5 Release Notes

  • Highlights
  • Backwards Incompatible Change
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes
  • Performance
  • Documentation
  • Developers
  • Security

Highlights

We are excited to announce the release of PyTorch® 2.5! This release features a new CuDNN backend for SDPA, enabling speedups by default for users of SDPA on H100s or newer GPUs. As well, regional compilation of torch.compile offers a way to reduce the cold start up time for torch.compile by allowing users to compile a repeated nn.Module (e.g. a transformer layer in LLM) without recompilations. Finally, TorchInductor CPP backend offers solid performance speedup with numerous enhancements like FP16 support, CPP wrapper, AOT-Inductor mode, and max-autotune mode. This release is composed of 4095 commits from 504 contributors since PyTorch 2.4. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.5. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page. As well, please check out our new ecosystem projects releases with TorchRec and TorchFix.

| Beta | Prototype |
| --- | --- |
| CuDNN backend for SDPA | FlexAttention |
| torch.compile regional compilation without recompilations | Compiled Autograd |
| TorchDynamo added support for exception handling & MutableMapping types | Flight Recorder |
| TorchInductor CPU backend optimization | Max-autotune Support on CPU with GEMM Template |
| | TorchInductor on Windows |
| | FP16 support on CPU path for both eager mode and TorchInductor CPP backend |
| | Autoload Device Extension |
| | Enhanced Intel GPU support |

*To see a full list of public feature submissions click here.

BETA FEATURES

[Beta] CuDNN backend for SDPA

The cuDNN "Fused Flash Attention" backend was landed for torch.nn.functional.scaled_dot_product_attention. On NVIDIA H100 GPUs this can provide up to 75% speed-up over FlashAttentionV2. This speedup is enabled by default for all users of SDPA on H100 or newer GPUs.
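A minimal sketch (not from this PR) of calling SDPA and, on CUDA hardware, restricting dispatch to the cuDNN backend to confirm it is used; the tensor shapes and dtypes are illustrative:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# (batch, heads, seq_len, head_dim) query/key/value tensors; shapes are illustrative.
q, k, v = (torch.randn(2, 8, 128, 64) for _ in range(3))

# Default dispatch: PyTorch picks the fastest eligible backend automatically.
out = F.scaled_dot_product_attention(q, k, v)

# On an H100-class GPU, dispatch can be restricted to the cuDNN backend;
# that backend expects half-precision CUDA tensors.
if torch.cuda.is_available():
    qh, kh, vh = (t.cuda().half() for t in (q, k, v))
    with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
        out_cudnn = F.scaled_dot_product_attention(qh, kh, vh)
```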

[Beta] torch.compile regional compilation without recompilations

Regional compilation without recompilations, via torch._dynamo.config.inline_inbuilt_nn_modules, which defaults to True in 2.5+. This option allows users to compile a repeated nn.Module (e.g. a transformer layer in an LLM) without recompilations. Compared to compiling the full model, it can yield smaller compilation latencies at the cost of 1%-5% performance degradation.
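A minimal sketch (not from this PR) of the regional-compilation pattern described above: each repeated block is compiled once via nn.Module.compile() instead of compiling the full model; the module sizes are illustrative:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One repeated transformer-style block; compiled regionally instead of the whole model."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        x = x + self.attn(x, x, x, need_weights=False)[0]
        return x + self.mlp(x)

class Model(nn.Module):
    def __init__(self, depth: int = 4, dim: int = 256):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))
        for block in self.blocks:
            block.compile()  # regional compilation: identical blocks reuse one compiled artifact

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

out = Model()(torch.randn(2, 64, 256))
```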

... (truncated)

Commits

Updates langchain-community from 0.2.6 to 0.2.19

Release notes

Sourced from langchain-community's releases.

langchain-community==0.2.19

Changes since langchain-community==0.3.6

  • community: release 0.2.19 (#28057)
  • community: patch graphqa chains (CVE-2024-8309) (#28050)
  • langchain,community[patch]: release with bumped core (#27854)
  • Added mapping to fix CI for #langchain-aws:227. (#27114)
  • community: poetry lock for cffi dep (#26674)
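The 0.2.19 release above carries the CVE-2024-8309 patch for the graph-QA chains. A minimal sketch (an assumption, not part of this PR) of asserting at startup that the installed version is at least the patched one:

```python
# Sketch (assumption, not from this PR): fail fast if the installed langchain-community
# predates the CVE-2024-8309 patch released as 0.2.19 on the 0.2 line.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("langchain-community"))
if installed < Version("0.2.19"):
    raise RuntimeError(
        f"langchain-community {installed} predates the CVE-2024-8309 patch; upgrade to >=0.2.19"
    )
```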

langchain-community==0.2.18

Changes since langchain-community==0.3.5

  • langchain,community[patch]: release with bumped core (#27854)
  • Added mapping to fix CI for #langchain-aws:227. (#27114)
  • community: poetry lock for cffi dep (#26674)

Commits

Bumps the pip group with 3 updates in the / directory: [gradio](https://github.com/gradio-app/gradio), [torch](https://github.com/pytorch/pytorch) and [langchain-community](https://github.com/langchain-ai/langchain).
Bumps the pip group with 3 updates in the /reqs_optional directory: [gradio](https://github.com/gradio-app/gradio), [torch](https://github.com/pytorch/pytorch) and [langchain-community](https://github.com/langchain-ai/langchain).
Bumps the pip group with 2 updates in the /spaces/demo directory: [torch](https://github.com/pytorch/pytorch) and [transformers](https://github.com/huggingface/transformers).


Updates `gradio` from 4.44.0 to 5.5.0
- [Release notes](https://github.com/gradio-app/gradio/releases)
- [Changelog](https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md)
- [Commits](https://github.com/gradio-app/gradio/compare/gradio@4.44.0...gradio@5.5.0)

Updates `torch` from 2.2.1 to 2.5.1
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v2.2.1...v2.5.1)

Updates `langchain-community` from 0.2.6 to 0.2.19
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](langchain-ai/langchain@langchain-community==0.2.6...langchain-community==0.2.19)

Updates `gradio` from 4.44.0 to 5.5.0
- [Release notes](https://github.com/gradio-app/gradio/releases)
- [Changelog](https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md)
- [Commits](https://github.com/gradio-app/gradio/compare/gradio@4.44.0...gradio@5.5.0)

Updates `torch` from 2.2.1 to 2.5.1
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v2.2.1...v2.5.1)

Updates `langchain-community` from 0.2.6 to 0.2.19
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](langchain-ai/langchain@langchain-community==0.2.6...langchain-community==0.2.19)

Updates `torch` from 2.0.0 to 2.2.0
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v2.2.1...v2.5.1)

Updates `transformers` from 4.28.1 to 4.38.0
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.28.1...v4.38.0)

---
updated-dependencies:
- dependency-name: gradio
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: torch
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: langchain-community
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: gradio
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: torch
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: langchain-community
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: torch
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Nov 19, 2024