How to upgrade indirect dependencies? #4986
@chinesedfan Have you tried …
@milesj Yes, it gives the same result, and I also updated the reproduction steps in the issue description.
That is because …

You could resolve this a couple of ways: …

Edit: Sorry, I just noticed those are different package names. Right, since there is no upgrade for … You could try adding the …
@rally25rs Thanks for your reply! I tested two more cases. …
Yes, this is a major problem with Yarn at the moment, and it's already under discussion in #2394.
Duplicate of #2394.
Please re-open: it's not a duplicate. #2394 describes duplication of … This issue describes just the ability to upgrade an indirect dependency (somehow).
@rally25rs Forgive me for pinging.
@rally25rs, please: you've closed two issues that don't duplicate each other, which is wrong. Give us the ability to upgrade indirect dependencies, please!
Sorry, there was some confusion over on the other issue. I originally thought #2394 was asking for a way to upgrade a transitive dep using a …
+1 for this feature request. Also, an example for anybody like me who needs to upgrade a specific indirect dependency manually in the interim: given an explicit dependency that pulls it in, delete the indirect dependency's entry from `yarn.lock` and re-run `yarn`; the entry is regenerated with the newest version that still satisfies the declared range.
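For concreteness, a minimal sketch of that workaround, using `is-alphabetical` from the original report as the indirect dependency (the lockfile entry shown is illustrative, not copied from a real project):

```sh
# Illustrative yarn.lock entry for the indirect dependency (yarn v1 format):
#
#   is-alphabetical@^1.0.0:
#     version "1.0.0"
#     resolved "https://registry.yarnpkg.com/is-alphabetical/-/is-alphabetical-1.0.0.tgz"
#
# Step 1: delete that whole entry from yarn.lock in any text editor.
# Step 2: re-run the install; yarn recreates the entry with the newest
#         version that still satisfies the ^1.0.0 range.
yarn
# Optional: confirm the refreshed resolution.
grep -A1 'is-alphabetical@' yarn.lock
```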
upath 1.0.4 doesn't support node 10, and this was fixed in later versions (anodynos/upath#15), but Babel's yarn.lock pinned the version to 1.0.4. Normal Babel builds use `yarn --ignore-engines` to work around this, but I have a script that clones and builds a number of repositories (including Babel) and assumes that all of them can run `yarn` after cloning without error. Upgrading upath fixes plain `yarn` on node 10, so I don't need to add a special case when cloning/building Babel.

As described in yarnpkg/yarn#4986 (comment), I did this by deleting the yarn.lock entry and re-running `yarn`. (It's a little tricky to upgrade since it's an indirect dependency.)

I believe this means that it should be safe to remove `--ignore-engines` from travis.yml and Makefile, but I left those alone, since arguably `--ignore-engines` is nice in that it protects against future similar issues.
@alex-thewsey-ibm, thanks for the workaround! Worked on Yarn v1.7.
Thanks, this worked on Yarn 1.9.2.
It might help to nudge yarn with selective dependency resolutions. From the docs:

```json
{
  "name": "project",
  "version": "1.0.0",
  "dependencies": {
    "left-pad": "1.0.0",
    "c": "file:../c-1",
    "d2": "file:../d2-1"
  },
  "resolutions": {
    "d2/left-pad": "1.1.1",
    "c/**/left-pad": "1.1.2"
  }
}
```
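A quick way to check that a resolution took effect (a sketch; `left-pad` is the package from the docs example above):

```sh
# Re-install so the resolutions field is applied, then inspect the result.
yarn install
yarn why left-pad   # lists why each installed copy of left-pad exists
```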
Upgrade the `natives` indirect dependency using the method described at yarnpkg/yarn#4986 (comment). See also: isaacs/natives#14
This works far better than any of the solutions listed here.
@matthewtusker Here is a script that I used to do the same: #4986 (comment)
Works for me.
The combination of …
I do not believe that works for indirect dependencies, i.e. dependencies not specifically listed in your package.json. That's what this issue is about; it's in the title, "how to upgrade indirect dependencies." If it does work for indirect dependencies, please confirm and say which Yarn version you are using. I think this zombie issue has probably become unproductive and incoherent at this point.
I use the following "clean" approach (without touching …). If, as described in the original example, we have: …

I would do these 3 steps: …
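The exact steps were lost in this copy of the thread, but a commonly cited three-step sequence that avoids hand-editing `yarn.lock` looks like this (a sketch under that assumption, using the docs' `left-pad` as a stand-in; not necessarily the author's exact commands):

```sh
# Step 1: temporarily promote the indirect dependency to a direct one,
#         pinned at the version you want (it must still satisfy the
#         ranges declared by its parents).
yarn add left-pad@1.1.2
# Step 2: verify that every dependent now resolves to the new version.
yarn why left-pad
# Step 3: drop the temporary direct dependency; the upgraded resolution
#         stays in yarn.lock because the transitive range still matches it.
yarn remove left-pad
```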
As mentioned in https://blog.streamlit.io/six-tips-for-improving-your-streamlit-app-performance/ memory usage struggles in the browser if you have large ranges:

> Due to implementation details, high-cardinality sliders don't suffer from the serialization and network transfer delays mentioned earlier, but they will still lead to a poor user experience (who needs to specify house prices up to the dollar?) and high memory usage. In my testing, the example above increased RAM usage by gigabytes until the web browser eventually gave up (though this is something that should be solvable on our end. We'll look into it!)

This was caused by a bug in react-range, which I fixed last year: tajo/react-range#178. At the time, I had figured it would get picked up by a random yarn upgrade and didn't worry too much about it. But apparently yarn doesn't really have an easy way of doing upgrades of transitive dependencies (see yarnpkg/yarn#4986)? I took the suggestion of someone in that thread to delete the entry and let yarn regenerate it.

Some technical details about the react-range fix from the original commit message (the "application" is a streamlit app):

> We have an application that uses react-range under the hood, and we noticed that a range input was taking 2GB of RAM on our machines. I did some investigation and found that regardless of whether the marks functionality was being used, refs were being created for each possible value of the range.
>
> We have some fairly huge ranges (we're using the input to scrub a video with potential microsecond accuracy), and can imagine that other people are affected by the previous behavior. This change should allow us to continue using large input ranges without incurring a memory penalty.
We switched the entire release strategy of … Our current setup: …

We repeat this until the entire graph has been updated and released. Obviously a small change in a deeply buried part (e.g. a tokenizer) escalates into roughly 50 releases of packages that didn't themselves have any code changes. This in turn creates a lot of noise for our users. We don't like this solution, and ideally we wouldn't have to do things like this. We are only doing it because … No other package manager is affected by this issue.
## What's the purpose of this pull request?

This PR upgrades the protobufjs sub-dependency to incorporate the fix introduced in https://github.com/protobufjs/protobuf.js/releases/tag/protobufjs-v7.2.4, which addressed a vulnerability in the package.

## How does it work?

Updated the version `yarn.lock` uses to resolve the package.

## How to test it?

Check yarn.lock. Alternatively, install the packages and see the installed version in node_modules.

## References

https://github.com/protobufjs/protobuf.js/releases/tag/protobufjs-v7.2.4
yarnpkg/yarn#4986 (comment)
GHSA-h755-8qp9-cq85
* Bump Docusaurus to 3.0.1, with deps clsx and prism-react-renderer upgraded to the same versions used by Docusaurus 3.0.1
* Disable react-table debug log ("Creating Table Instance..." was logged during build)
* Upgrade vulnerable transitive dependencies; had to remove and add @docusaurus deps again due to yarnpkg/yarn#4986
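A sketch of that remove-and-re-add dance (the exact package list is an assumption; `@docusaurus/core` and `@docusaurus/preset-classic` are the usual entry points):

```sh
# Removing and re-adding the packages forces yarn to re-resolve their
# whole subtree instead of reusing the old locked transitive versions.
yarn remove @docusaurus/core @docusaurus/preset-classic
yarn add @docusaurus/core@3.0.1 @docusaurus/preset-classic@3.0.1
```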
Hey there. I am in the process of updating a big workspace where there are many vulnerabilities that would be solved trivially by semver. An example: I have vitest ^1.6.0 installed, which, whenever it was added to the repo, resolved its vite dependency to 5.0.8 (vitest 1.6.0 depends on vite ^5.0.0). I thought removing the dependency and reinstalling it would upgrade to the latest vite, but if you do so, the exact same version of the indirect dependency (vite) is installed.

I would expect there to be a simple way to tell yarn to upgrade a dependency and its dependency tree, or at the very least to let me do a clean install. I even tried removing vitest 1.6.0 from the local cache, to no avail. If I initialize a different yarn project and install vitest@^1.6.0, it installs the latest compatible vite version, as semver indicates.

Any workarounds? How does yarn decide to install the exact same version even if I removed it from yarn.lock and the cache?
@rjimenezda Quick idea for a workaround: you may be able to leverage …

That would install the latest vite version, then deduplicate the vite dependency along semver to keep only the latest, then remove the explicit dependency. Also, have you checked whether somewhere in your project you have another transitive dependency on vite which prevents upgrading to something higher than 5.0.8?
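The command sequence itself was lost in this copy, but based on the description above it presumably looks something like this (a sketch for Yarn 2+, where `yarn dedupe` exists; `vite` is the package under discussion):

```sh
# Install the newest vite as a temporary direct dependency.
yarn add vite@latest
# Collapse duplicate vite entries onto the highest version that
# satisfies each range (dedupe's default "highest" strategy).
yarn dedupe vite
# Drop the temporary direct dependency; the upgraded resolution remains.
yarn remove vite
```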
@ClementValot That does work! However, in workspaces it is kind of a mess, because it updates all vite uses across the monorepo, not just the one you are currently on, and we wanted to do these updates gradually. Also, it still feels incredibly convoluted. I'm still kind of baffled that deleting and reinstalling does not recalculate the most up-to-date subdependencies, while on a fresh yarn project it does. I guess it's an optimization, but I can't figure out how to work around it.

Regarding the second question: no, it does not seem like it. There is a dependency in the middle of vitest and vite, which is vite-node, but yarn why doesn't give any hint as to why it would not work.
I think I figured out why vite is not upgraded in that particular case. I'll just type it out in case someone is in the same boat as I am. Since there are several projects in the monorepo that depend on vitest => vite-node => vite@^5.0.0, it seems like, for yarn, a given range like vite@^5.0.0 must always be resolved the same way. There is no way to say that this particular vite@^5.0.0 of this particular dependency chain in this workspace should use version X or Y. That means that if I want to upgrade vite as an indirect dependency in the monorepo (because of vulnerabilities), I must upgrade it across all projects that depend on the same specified range. I guess this is a very core design choice that probably has heaps of advantages; it's just kind of annoying for our very particular use case.
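That matches how the lockfile is keyed: entries are stored per `name@range` descriptor, not per dependency chain, so one range gets exactly one resolution project-wide. A sketch (the entry shown is illustrative of Yarn 2+ lockfile shape):

```sh
# Yarn 2+ keys lockfile entries by descriptor ("name@range"), e.g.:
#
#   "vite@npm:^5.0.0":
#     version: 5.0.8
#
# Every workspace or transitive dependent asking for vite@^5.0.0 shares
# that single entry, so upgrading it upgrades all of them at once.
grep -A1 '"vite@npm:' yarn.lock   # inspect the shared entry
```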
I wrote a dumb tool to automate the process of editing …
**Do you want to request a feature or report a bug?**

Feature.

**What is the current behavior?**

`yarn upgrade` ignores indirect dependencies, so users can't upgrade them in `yarn.lock`. If I missed something, please tell me.

**If the current behavior is a bug, please provide the steps to reproduce.**

1. `yarn add is-alphanumerical@1.0.0`; its dependencies `is-alphabetical` and `is-decimal` will be installed and saved in `yarn.lock`.
2. `is-alphabetical` is 1.0.1 now; pretend another new version, say 1.0.2, was released. (To test, you can release 2 test packages by yourself, or modify `is-alphabetical` to be 1.0.0 in `yarn.lock`. **I know modifying yarn.lock directly is not a regular operation.**)
3. `yarn` always says `All of your dependencies are up to date`.

**What is the expected behavior?**

`yarn upgrade` also supports indirect dependencies.

**Please mention your node.js, yarn and operating system version.**

Node 8.9.0, yarn 1.3.2, OSX 10.12.6.