
How to upgrade indirect dependencies? #4986

Open
chinesedfan opened this issue Nov 23, 2017 · 72 comments

@chinesedfan

chinesedfan commented Nov 23, 2017

Do you want to request a feature or report a bug?

Feature.

What is the current behavior?
yarn upgrade ignores indirect dependencies, so users can't upgrade them in yarn.lock. If I missed something, please tell me.

If the current behavior is a bug, please provide the steps to reproduce.

  • In a new empty project, run yarn add is-alphanumerical@1.0.0
    • 2 indirect dependencies, is-alphabetical and is-decimal, will be installed and saved in yarn.lock
    • the latest version of is-alphabetical is currently 1.0.1; suppose a newer version, say 1.0.2, is released (to test, you can publish 2 test packages yourself, or change is-alphabetical to 1.0.0 in yarn.lock; **I know modifying yarn.lock directly is not a regular operation**)
  • No matter which of the following commands is used, yarn always says All of your dependencies are up to date
    • yarn upgrade is-alphabetical
    • yarn upgrade-interactive
    • yarn upgrade-interactive is-alphabetical

What is the expected behavior?
yarn upgrade also supports indirect dependencies.

Please mention your node.js, yarn and operating system version.
Node 8.9.0
yarn 1.3.2
OSX 10.12.6

@ghost ghost assigned rally25rs Nov 23, 2017
@ghost ghost added the triaged label Nov 23, 2017
@milesj

milesj commented Nov 23, 2017

@chinesedfan Have you tried yarn upgrade-interactive?

@chinesedfan
Author

@milesj Yes, it gives the same result. I have also updated the reproduction steps in the issue description.

@rally25rs
Contributor

rally25rs commented Nov 23, 2017

That is because yarn add is-alphanumerical@1.0.0 sets your package.json to exactly version 1.0.0 as you requested.

yarn upgrade respects your package.json semver range, and since you specified exactly version 1.0.0, it won't offer to upgrade to other versions.

You could resolve this a couple ways:

  • yarn upgrade --latest will ignore the semver range and use whatever is tagged as latest in the registry.
  • change package.json to accept a version range like ^1.0.0, then run yarn upgrade (you might have to run yarn install first to get it to update the lock file for the changed range); see the snippet after this list
  • explicitly specify a version to upgrade, like yarn upgrade is-alphanumerical@1.0.1 or yarn upgrade is-alphanumerical@^1.0.0
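
For the second option, the only change needed is the version range in package.json. A minimal sketch, reusing the package from the example above (version numbers illustrative):

{
  "dependencies": {
    "is-alphanumerical": "^1.0.0"
  }
}

After that, yarn install followed by yarn upgrade should be free to move within the ^1.0.0 range.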

Edit:

Sorry, I just noticed there are different package names. alphanumerical and alphabetical look the same at a glance :)

Right, since there is no upgrade for is-alphanumerical, the dependency tree isn't traversed any deeper to handle its transitive dependencies.

You could try adding the --force flag and see if that makes it check the subdependencies. Otherwise I think you are right; there isn't an easy way to do that other than yarn remove is-alphanumerical and yarn add is-alphanumerical.

@chinesedfan
Author

@rally25rs Thanks for your reply! I tested 2 more cases.

  • yarn upgrade is-alphabetical --force doesn't work, either.
  • yarn upgrade is-alphanumerical will upgrade ALL of its subdependencies even if it is already the latest.
    • But if I just want to upgrade a specific subdependency, that is still not very convenient.

@OneCyrus

Yes, this is a major problem with yarn at the moment, and it's already under discussion in #2394.

@rally25rs
Contributor

Duplicate of #2394

@AlexWayfer

Please re-open: it's not a duplicate.

#2394 describes duplication of the meck-test-bb package (an indirect dependency):

I got two copies of meck-test-bb

This issue is about the ability to upgrade an indirect dependency (somehow).

@chinesedfan
Author

@rally25rs Forgive me for pinging.

@AlexWayfer

@rally25rs, you've closed both issues, but they are not duplicates of each other; that's wrong. Please give us the ability to upgrade indirect dependencies!

@rally25rs
Contributor

Sorry, there was some confusion over on the other issue. I originally thought #2394 was asking for a way to upgrade a transitive dep using a --deep flag, or something like that, so I had marked this issue as a duplicate of it. After re-reading #2394, I think it is about something different, so it is not a duplicate of this like I originally thought.

@rally25rs rally25rs reopened this Jun 2, 2018
@rally25rs rally25rs removed the triaged label Jun 2, 2018
@alex-thewsey-ibm

alex-thewsey-ibm commented Jun 6, 2018

+1 for this feature request. Also an example for anybody dumb like me who needs to upgrade a specific indirect dependency manually in the interim:

Given that the explicit dependency jsonwebtoken has resolved the implicit dependency jws@^3.0.0 to the vulnerable jws@3.1.4, and you need it to instead resolve to the patched 3.1.5:

Delete the jws entry (e.g. the one below) from yarn.lock and re-run yarn. The indirect dependency and any affected packages will be updated without touching other things (on yarn v1.3 at least).

jws@^3.0.0, jws@^3.1.4:
  version "3.1.4"
  resolved "https://registry.npmjs.org/jws/-/jws-3.1.4.tgz#f9e8b9338e8a847277d6444b1464f61880e050a2"
  dependencies:
    base64url "^2.0.0"
    jwa "^1.1.4"
    safe-buffer "^5.0.1"

Edit: Punctuation
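
To double-check the result after re-running yarn, a quick sketch (assuming the ^3.0.0 / ^3.1.4 ranges now resolve to the patched 3.1.5):

# after deleting the jws block from yarn.lock
yarn install               # regenerates the deleted entry at the newest version the ranges allow
yarn list --pattern jws    # confirm only the patched version is resolved
yarn why jws               # shows which packages pull it in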

alangpierce added a commit to alangpierce/babel that referenced this issue Jul 7, 2018
upath 1.0.4 doesn't support node 10, and this was fixed in later versions
( anodynos/upath#15 ), but Babel's yarn.lock pinned the
version to 1.0.4. Normal Babel builds use `yarn --ignore-engines` to work around
this, but I have a script that clones and builds a number of repositories and
assumes that all of them can run `yarn` after cloning without error. Upgrading
upath fixes plain `yarn` on node 10 so I don't need to add a special case when
cloning/building Babel.

As described in yarnpkg/yarn#4986 (comment) ,
I did this by deleting the yarn.lock entry and re-running `yarn`. (It's a little
tricky to upgrade since it's an indirect dependency.)

I believe this means that it should be safe to remove `--ignore-engines` from
travis.yml and Makefile, but I left those alone since arguably `--ignore-engines`
is nice in that it protects against future similar issues.
alangpierce added a commit to alangpierce/babel that referenced this issue Jul 7, 2018
@mkutny

mkutny commented Jul 26, 2018

@alex-thewsey-ibm, thanks for the workaround!

Worked on yarn v1.7.

@Subtletree

Thanks, worked on Yarn 1.9.2.

@joelpurra

It might help to nudge yarn with selective dependency resolutions, even if it's only for a single dependency. Thanks to @remolueoend for the hint!
https://yarnpkg.com/lang/en/docs/selective-version-resolutions/

From the docs:

{
  "name": "project",
  "version": "1.0.0",
  "dependencies": {
    "left-pad": "1.0.0",
    "c": "file:../c-1",
    "d2": "file:../d2-1"
  },
  "resolutions": {
    "d2/left-pad": "1.1.1",
    "c/**/left-pad": "1.1.2"
  }
}
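
Applied to the jws example from earlier in this thread, a minimal sketch (the ^3.1.5 range is an assumption; any range that forces the patched release works):

{
  "resolutions": {
    "**/jws": "^3.1.5"
  }
}

Run yarn install afterwards so the lockfile is rewritten to match the forced resolution.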

@matthewtusker

If you are using an old stable version (e.g. 1.22.x): I didn't find a better solution than https://medium.com/@ayushya/upgrading-javascript-packages-deep-dependencies-using-yarn-8b5983d5fb6b

This works far better than any of the solutions listed here. yarn up --recursive loader-utils upgrades loader-utils, but it also upgraded a bunch of other stuff and still left two disparate versions. Deleting the lines from the lock file and then running yarn install installs a single version.

@pftg

pftg commented Jan 4, 2023

If you are using an old stable version (e.g. 1.22.x): I didn't find a better solution than https://medium.com/@ayushya/upgrading-javascript-packages-deep-dependencies-using-yarn-8b5983d5fb6b

This works far better than any of the solutions listed here. yarn up --recursive loader-utils upgrades loader-utils, but it also upgraded a bunch of other stuff and still left two disparate versions. Deleting the lines from the lock file and then running yarn install installs a single version.

@matthewtusker Here is a script that I used to do the same: #4986 (comment)
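
For anyone scripting the lockfile edit themselves, a rough sketch of the same idea (not the linked script; it assumes a v1 yarn.lock, whose records are separated by blank lines, and an unscoped package name):

pkg=jws
awk -v pkg="$pkg" '
  BEGIN { RS = ""; ORS = "\n\n" }     # blank-line-separated blocks become records
  $0 !~ ("^\"?" pkg "@") { print }    # keep every block except the ones for the given package
' yarn.lock > yarn.lock.tmp && mv yarn.lock.tmp yarn.lock
yarn install                          # let yarn re-resolve the removed entries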

@microJ

microJ commented Mar 20, 2023

If you are using an old stable version (e.g. 1.22.x): I didn't find a better solution than https://medium.com/@ayushya/upgrading-javascript-packages-deep-dependencies-using-yarn-8b5983d5fb6b

Works for me.
I simply wanted to upgrade to react-native@0.63.5, and running yarn remove react-native && yarn add react-native@0.63.5 worked well for me.

@safydy

safydy commented Mar 23, 2023

The combination of yarn remove & yarn add did work for me as a workaround.

yarn remove myPackage; yarn add myPackage

@jrochkind

jrochkind commented Mar 23, 2023

the combination of yarn remove & yarn add

I do not believe that works for indirect dependencies -- dependencies not specifically listed in your package.json. That's what this issue is about; it's in the title, "how to upgrade indirect dependencies."

If it does work for indirect dependencies, please confirm, and what yarn version you are using.

I think this zombie issue has probably become unproductive and incoherent at this point.

@gonadarian

gonadarian commented Apr 8, 2023

I use the following "clean" approach (without touching yarn.lock) in such situations.

If, as described in the original example, we have:

  • yarn add is-alphanumerical@1.0.0
    • installs is-alphabetical v1.0.1

I would do these 3 steps:

  1. yarn add is-alphabetical@1.0.2
    • gives a duplicated is-alphabetical, with both the original v1.0.1 and the new v1.0.2 in yarn.lock
  2. npx yarn-deduplicate
    • deduplicates to only the latest is-alphabetical v1.0.2
  3. yarn remove is-alphabetical
    • cleans up package.json
  4. profit!
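
The same steps as a single copy-paste sequence (a sketch; 1.0.2 is the hypothetical newer release from the original example, and yarn-deduplicate is a third-party tool for yarn v1 lockfiles):

yarn add is-alphabetical@1.0.2
npx yarn-deduplicate            # collapse the duplicated is-alphabetical entries onto the newest one
yarn remove is-alphabetical     # clean package.json again; yarn rewrites the lockfile as part of this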

wolfd added a commit to wolfd/streamlit that referenced this issue May 30, 2023
As mentioned in
https://blog.streamlit.io/six-tips-for-improving-your-streamlit-app-performance/
memory usage struggles in the browser if you have large ranges:

> Due to implementation details, high-cardinality sliders don't suffer
> from the serialization and network transfer delays mentioned earlier,
> but they will still lead to a poor user experience (who needs to
> specify house prices up to the dollar?) and high memory usage. In my
> testing, the example above increased RAM usage by gigabytes until the
> web browser eventually gave up (though this is something that should
> be solvable on our end. We'll look into it!)

This was caused by a bug in react-range, which I fixed last year.
tajo/react-range#178

At the time, I had figured it would get picked up by a random yarn
upgrade and didn't worry too much about it.
But, apparently yarn doesn't really have an easy way of doing upgrades
of transitive dependencies (see yarnpkg/yarn#4986)?
I took the suggestion of someone in that thread to delete the entry and
let yarn regenerate it.

Some technical details about the react-range fix from the original
commit message (the "application" is a streamlit app):

> We have an application that uses react-range under the hood, and we
> noticed that a range input was taking 2GB of RAM on our machines. I
> did some investigation and found that regardless of whether the marks
> functionality was being used, refs were being created for each
> possible value of the range.

> We have some fairly huge ranges (we're using the input to scrub a
> video with potential microsecond accuracy), and can imagine that
> other people are affected by the previous behavior. This change
> should allow us to continue using large input ranges without
> incurring a memory penalty.
wolfd added a commit to wolfd/streamlit that referenced this issue May 30, 2023
vdonato pushed a commit to streamlit/streamlit that referenced this issue May 31, 2023
@romainmenke

We switched the entire release strategy of postcss-preset-env to accommodate yarn, but in hindsight I think this was a mistake. We should have left the issue as-is, since it wasn't ours to fix.

Our current setup :

  • build a dependency graph of all parts of postcss-preset-env
  • check if any part needs to be released
  • release only those parts that don't themselves depend on unreleased parts
  • update all downstream dependents with the newly released part's version
  • go to step 1

We repeat this until the entire graph has been updated and released.

Obviously a small change in a deeply buried part (e.g. a tokenizer) escalates to ±50 releases of packages that didn't themselves have any code changes.

This in turn creates a lot of noise for our users.

We don't like this solution and ideally we wouldn't have to do stuff like this.

We are only doing this because yarn fails to correctly update transitive dependencies, leading to conflicts when peer dependencies are used.

No other package manager is affected by this issue.

icazevedo pushed a commit to vtex/faststore that referenced this issue Oct 4, 2023
## What's the purpose of this pull request?

This PR upgrades protobufjs sub-dependency to incorporate the fix
introduced in
https://github.com/protobufjs/protobuf.js/releases/tag/protobufjs-v7.2.4,
which was a vulnerability in the package.

## How does it work?

Updated the version `yarn.lock` uses to resolve the package.

## How to test it?

Check yarn.lock. Alternatively, install the packages and see the
installed version in node_modules.
## References

https://github.com/protobufjs/protobuf.js/releases/tag/protobufjs-v7.2.4
yarnpkg/yarn#4986 (comment)
GHSA-h755-8qp9-cq85
eric-skydio pushed a commit to eric-skydio/streamlit that referenced this issue Dec 20, 2023
fflaten added a commit to fflaten/docs that referenced this issue Jan 2, 2024
bravo-kernel pushed a commit to pester/docs that referenced this issue Jan 3, 2024
* Bump Docusaurus to 3.0.1 with deps

clsx and prism-react-renderer upgraded
to same version used by Docusaurus 3.0.1

* Disable react-table debug log

Logged "Creating Table Instance..." during build

* Upgrade vulnerable transitive dependencies

Had to remove and add @Docusaurus deps
again due to yarnpkg/yarn#4986
zyxue pushed a commit to zyxue/streamlit that referenced this issue Mar 22, 2024
zyxue pushed a commit to zyxue/streamlit that referenced this issue Apr 16, 2024
@rjimenezda

Hey there.

I am in the process of updating a big workspace where there are many vulnerabilities that would be solved trivially by semver.

An example: I have vitest ^1.6.0 installed which, when it was added to the repo, resolved its vite dependency to 5.0.8 (vitest 1.6.0 depends on vite ^5.0.0).

I thought removing the dependency and reinstalling it would upgrade to the latest vite, but if you do so, the exact same version of the indirect dependency (vite) is installed.

I would expect there to be a simple way to tell yarn to upgrade a dependency and its dependency tree, or at the very least to let me do a clean install. I even tried removing vitest 1.6.0 from the local cache, to no avail.

If I initialize a different yarn project and install vitest@^1.6.0 it installs the latest compatible vite version, as semver indicates.

Any workarounds? How does yarn decide to install the exact same version even if I removed it from yarn lock and the cache?

@ClementValot

@rjimenezda Quick idea for a workaround: you may be able to leverage yarn dedupe

yarn add vite@latest
yarn dedupe
yarn remove vite

This would install the latest vite version, then deduplicate the vite dependency along semver to keep only the latest, then remove the explicit dependency.

Also, have you checked whether somewhere in your project you have another transitive dependency on vite that prevents upgrading to something higher than 5.0.8? yarn why vite should help in that case.
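
If you only want to collapse the duplicated vite entries rather than deduplicate the whole tree, yarn dedupe also accepts patterns on yarn 2+; a sketch:

yarn add vite@latest
yarn dedupe vite      # only deduplicate entries matching the vite pattern
yarn remove vite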

@rjimenezda

@ClementValot that does work! However, in workspaces it is kind of a mess, because it updates all vite uses across the monorepo, not just the one you are currently in, and we wanted to do these updates gradually.

Also, it still feels incredibly convoluted. I'm still kind of baffled that deleting and reinstalling does not recalculate the most updated subdependencies, while on a fresh yarn project it does. I guess it's an optimization, but I can't figure out how to work around it.

Regarding the second one, no, it does not seem like it. There is a dependency in the middle between vitest and vite, which is vite-node, but yarn why doesn't give any hint as to why it would not work.

@rjimenezda

I think I figured out why vite is not upgraded in that particular case. I'll just type it out in case someone is in the same boat as I am.

Since there are several projects in the monorepo that depend on vitest => vite-node => vite@^5.0.0, it seems that for yarn a given vite@^5.0.0 range must always be resolved the same way. There is no way to say that this particular vite@^5.0.0, in this particular dependency chain in this workspace, should use version X or Y.

That means, if I want to upgrade vite as an indirect dependency in the monorepo (because of vulnerabilities), I must upgrade it across all projects that depend on the same specified range.

I guess this is a very core design choice that probably has heaps of advantages, it's just kind of annoying for our very particular use case.

@robertknight

I wrote a dumb tool to automate the process of editing yarn.lock to remove the existing resolutions and then re-running yarn install: https://github.com/robertknight/yarn-update-indirect.

npm install yarn-update-indirect
yarn-update-indirect some-transitive-dep
