CI and development environment overhaul #2298
Conversation
What the heck is pixi? This https://pixi.sh/latest/ ? Is this mature/stable? It doesn't have a Wikipedia page and I have never heard of it...
Yes, that link is correct. It is a relatively new package manager; it was released in the summer of last year (they had a release party at EuroSciPy 2023, which Reimar and I attended). It is developed by the same people who were/are behind mamba, so for what it's worth there is some credibility behind it.
I haven't worked with cargo/rust in a while but yes, I think they took a lot of inspiration from that and there are many similarities in how they work.
Hard agree.
Yes, the mss-feedstock and therefore the conda-forge package remains the same (and installable through conda, mamba, pixi or whatever else might work with conda-forge packages, with the usual caveat that conda-forge packages cannot be tested and might break at any point in time due to the install-time dependency resolution - nothing new from this PR and a gripe I have with the entire conda packaging model).
I recently gave a lightning talk at the Barcamp in Karlsruhe. Sources: There is also a talk by Wolf at PyCon DE in mid-April.
With an up-to-date conda you get the same speed as with mamba, see
Speed isn't the issue that conda has, in my opinion. The issue is that it is impossible to define a development environment in its entirety right next to the source code (i.e. inside of a git repository). Mamba has the same issue, because its interface is basically the same as conda's. Cargo fixes this for the rust world with its Cargo.lock file pinning dependencies from crates.io, PDM or poetry (or others) can do the same for python on top of PyPI, nix can do it independently of the programming language using pinned references to nixpkgs, and pixi does it on top of the conda packaging ecosystem (also basically language independent). Conda or mamba simply don't provide this feature, so speed is irrelevant in that case.

(Although, using a lock file moves the dependency resolution to the step of creating the lock file, so that at install time there is no dependency resolution necessary at all. A no-op is strictly faster than any solver conda could possibly integrate.)
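To make the lock-file point concrete, here is a tiny illustrative fragment (assumed package names and pins, not taken from this PR): the manifest keeps loose constraints, the solver runs only when the lock file is created or updated, and a later install just downloads what the lock file lists.

# pixi.toml - loose constraints, resolved once when the lock file is (re)created
[dependencies]
numpy = "<2"
netcdf4 = "*"
# the generated pixi.lock then records exact versions and builds,
# so installing the environment needs no solver run at all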
While trying to differentiate the packages in #2390 I will likely end up with a conda requirements.d/mswms.txt, requirements.d/mscolab.txt and likely a requirements.txt for the whole. I am currently looking for the jinja syntax to read a requirements.txt file into the current meta.yaml, which I prefer to use until we can migrate to e.g. rattler-build etc. on conda-forge too. The one in the feedstock is not identical because of the cross compilation. Without any extra hassle I want to be able to redo a build with conda build locally. But keeping that directory is then independent of switching, as a first step, to a requirements.txt instead of the current transformation from a meta.yaml. With the jinja2 syntax there is then also no duplication.
nice!
Wolf recently announced a speedup using conda-forge from prefix-dev.
https://x.com/wolfvollprecht/status/1863590394992193803
Maybe we should move all docs etc. to pixi too; there is also an issue for that.
@ReimarBauer this is still WIP, I have only rebased so far and didn't yet update the pixi.toml file to reflect all of the changes since end of March or work on the documentation or anything. I am also looking into
I know that it is in WIP state, but I like it especially because of the recent problems, which perfectly illustrate what disadvantages we have if we don't do it. These are benevolent comments ;)
This removes the docker image logic from the CI setup and instead replaces it with a setup based on pixi. As part of that it moves the dependency specification from localbuild/meta.yaml to pixi.toml and pixi.lock. This turns the MSS repository into a single source of truth for both the application code as well as the development environment (whereas the latter was previously only specified in the docker images, and not reproducible in any way).

Setting up a development environment is as simple as installing pixi and running `pixi shell` (or `pixi run <cmd>`, or `pixi install` to just create the environment, etc.). This environment will, by construction, be the same that is used in the CI as well (modulo platform differences).

There is a new workflow that periodically (once a week on Monday) recreates the pixi lockfile and opens a PR for that update. The checks in that PR essentially serve as a replacement for the previous scheduled runs to ensure that no dependency update breaks MSS. Merging that PR is a manual step that can be done just as with any other PR and would then update the environment on the given target branch. This is essentially what was previously the (manual) triggering of a docker image creation.

Including new dependencies can be done with `pixi add`, which will also automatically add the dependency to the (pre-existing) lockfile. This means dependency additions can be part of the PR that necessitates them and they won't affect the entire environment (as they previously did, where they would trigger a full image rebuild) but instead just add that new package to the existing specification.
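For readers unfamiliar with pixi, the following is a minimal, hypothetical pixi.toml sketch (not the actual manifest from this PR; package names, pins and the task are illustrative only) showing where the dependency specification and runnable tasks live once they move out of localbuild/meta.yaml:

[project]
name = "mss"
channels = ["conda-forge"]
platforms = ["linux-64", "osx-64", "win-64"]

[dependencies]
# runtime dependencies, resolved from conda-forge and pinned exactly in pixi.lock
netcdf4 = "*"
numpy = "<2"

[tasks]
# tasks run inside the managed environment, e.g. `pixi run test`
test = "pytest"

With such a manifest, `pixi shell` and `pixi run <cmd>` operate on the environment described here, and `pixi add <package>` updates both this file and pixi.lock in one step.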
- name: Create or update pull request
  uses: peter-evans/create-pull-request@v7
  with:
    token: ${{ secrets.PAT }}
This is assuming that the currently-configured PAT has the right permissions to create a PR against this repository.
OK, I updated it in some places; end of year, we will see.
setup.py (outdated)
@@ -44,7 +44,7 @@
 console_scripts.append('msidp = mslib.msidp.idp:main')

 setup(
-    name="mss",
+    name="mslib",
This name change was necessary because of a name collision with the mss package on PyPI, which we also depend on. Without it the editable install of this MSS into the pixi environment was impossible.
I am not clear about the consequences. Does this mean that we would have to call ourselves MSLIB on conda-forge and within the project? I would then need to initiate a migration; it also has implications for publications.
If it all comes down to a renaming everywhere, then open-mss fits well with gh and open-mss.org.
The other mss project is called python-mss on conda-forge. If we choose a new name, we should do so carefully because "Mission Support System" is a very generic term.
open-mss?
This is only the name of the python package defined by this setup.py file. It does not mean that we have to change the package name anywhere else AFAIU. PyPI should be the only package registry that actually enforces matching package names in the python package and the project on the index, if it even does that.
I've called it mslib because that is the name of the main module inside, but I agree it isn't a nice name. open-mss would sound good to me. python-mss is called that on conda-forge because all pure-python packages should get a python- prefix there AFAIK.
Let us change that name to open-mss already. Then we have it in use there, and if at some point someone again tries to create a PyPI package it won't need another renaming.
We can change the inner name later anyway.
I changed it. We should reserve the open-mss name on PyPI though, so we don't run into this again in the future.
yes, I do
https://test.pypi.org/project/open-mss/
Just learned that a globally installed pixi has precedence:
(venv) (base) reimarbauer@MacBook-Pro-von-Reimar venv % bin/mss
open-mss is a conda-forge package. You can install it with pixi.
Get pixi from https://pixi.sh/latest/ for your operating system.
pixi global install mss
Usage:
msui -h
mswms -h
mscolab -h
- conda: https://conda.anaconda.org/conda-forge/noarch/pyshp-2.3.1-pyhd8ed1ab_1.conda
- conda: https://conda.anaconda.org/conda-forge/noarch/pysocks-1.7.1-pyha55dd90_7.conda
- conda: https://conda.anaconda.org/conda-forge/noarch/pytest-8.3.4-pyhd8ed1ab_1.conda
- conda: https://conda.anaconda.org/conda-forge/noarch/pytest-cov-6.0.0-pyhd8ed1ab_1.conda
Does this mean we always have all testing and development tools installed on a developer's and a user's system?
Not really. pixi {install,shell,run,...} should only actually download the packages for the default environment, if you don't request the dev environment. Likewise for the other ones (addons, tutorials). The lock file only contains the combined environment resolution result.
Users aren't affected anyway since they do not interact with this pixi project at all.
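As a rough illustration of that separation (a hedged sketch, not a copy of this PR's actual pixi.toml; it assumes the test tooling lives in a pixi feature named dev), the dev tools can be declared in their own feature and are only downloaded when an environment containing that feature is requested:

[feature.dev.dependencies]
pytest = "*"
pytest-cov = "*"

[environments]
# `pixi install`/`pixi run` use "default"; the dev tools are only fetched
# for e.g. `pixi run -e dev pytest`
default = { solve-group = "default" }
dev = { features = ["dev"], solve-group = "default" }

The shared solve-group keeps both environments resolved together, which matches the single combined resolution result recorded in the lock file.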
metpy = "*"
multidict = "*"
netcdf4 = "*"
numpy = "<2"
After the merge on develop it is >2.
>=2, actually
I added some ideas/questions.
We have implemented demodata as a database for testing. On the first call of pytest a set of demodata is stored in a /tmp/mss* folder. If you have installed gitpython, a postfix of the revision head is added.

pixi run -e dev msui
Setup MSWMS server |
Please also change the calls in the MSWMS and MSColab sections as well.
I wanted to send a patch, but somehow I did something wrong: "There was an error committing your changes: File could not be edited".
Next time we meet I need to be brought up to speed.
https://iffmd.fz-juelich.de/IENAfBSeRhuFDO5VIN2s3Q#
Then we have everything relevant here updated.
The error doesn't read like anything I've ever encountered from git. At which point did you get it?
Anyway, I've made those changes.
I haven't tried again today; this was the path:
problem.mp4
Below that button I had the option to send a PR or edit directly; the PR gave that error.
I tried the whole process on my Mac, also for the servers, not only the UI.
A few commands should be changed in the development.rst for the servers:
https://iffmd.fz-juelich.de/IENAfBSeRhuFDO5VIN2s3Q#
We should use open-mss from now on in the setup.py.
Off-topic: I like the new grouping of the CI results, it makes it much easier to see what failed at a glance...
Great!
Thx
I have several technical questions, which we may address in today's meeting. Generally this seems like a step in a good direction!
This (mostly) implements the ideas I've outlined in #2160. I wanted to substantiate what I had in mind, because I think there was some confusion there about what these ideas would entail.
I consider this to be a massive simplification of the CI setup, while retaining all required functionality (or even improving it). The additions count looks large due to the pixi.lock file, but that file is automatically generated. Outside of it this is a net negative in code size.
Fixes #2160.