Producing attestations fails when attempting publishing from unsupported reusable workflows (when enabled) #283
FYI reusable workflows are still not properly supported on the PyPI side. Some things happen to work almost by accident. There's a tracking issue somewhere in the Warehouse repo discussing this. cc @woodruffw |
This worked for me when publishing to Test PyPI, but I got the same error when the GHA tries to publish to real PyPI. Test PyPI:
PyPI:
This is a work around for pypa/gh-action-pypi-publish#283
Thanks for the report. It would be useful to have a link to the log. And it would be even more useful if the job were restarted in debug mode.
This is the link to the failed job: https://github.com/ietf-tools/svgcheck/actions/runs/11604651979/job/32313751640 |
Thanks! I wonder where this is coming from…

```
Error in sitecustomize; set PYTHONVERBOSE for traceback:
ModuleNotFoundError: No module named 'coverage'
Error in sitecustomize; set PYTHONVERBOSE for traceback:
ModuleNotFoundError: No module named 'coverage'
```
Looks like that's from a leftover old test file. It's unrelated to this issue.
Thanks for the details @kesara! This one is weird: that error is coming from

From a quick look, I think what happened here is that I forgot to add a skip for attestations in

Edit: Yep, that's what it looks like. I'll work on a fix now.
This fixes a bug that I accidentally introduced with attestations support: `twine upload` learned the difference between distributions and attestations, but `twine check` didn't. As a result, `twine check dist/*` would fail with an `InvalidDistribution` error whenever attestations are present in the dist directory, like so:

```
Checking dist/svgcheck-0.9.0.tar.gz: PASSED
Checking dist/svgcheck-0.9.0.tar.gz.publish.attestation: ERROR InvalidDistribution: Unknown distribution format: 'svgcheck-0.9.0.tar.gz.publish.attestation'
```

This fixes the behavior of `twine check` by having it skip attestations in the input list, like it does with `.asc` signatures. To do this, I reused the `_split_inputs` helper that was added with pypa#1095, meaning that `twine upload` and `twine check` now have the same input splitting/filtering logic.

See pypa/gh-action-pypi-publish#283 for some additional breakage context.

Signed-off-by: William Woodruff <william@yossarian.net>
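The splitting/filtering idea described in that fix can be sketched roughly like this. This is a hypothetical illustration, not twine's actual `_split_inputs` implementation; the function name and suffix handling are assumptions based on the behavior described above:

```python
from pathlib import Path

def split_inputs(paths):
    """Partition dist/* entries into distributions, .asc signatures,
    and .publish.attestation files, so that checks and uploads only
    treat real distributions as distributions.
    (Illustrative sketch, not twine's real helper.)"""
    dists, signatures, attestations = [], [], []
    for path in paths:
        name = Path(path).name
        if name.endswith(".publish.attestation"):
            attestations.append(path)
        elif name.endswith(".asc"):
            signatures.append(path)
        else:
            dists.append(path)
    return dists, signatures, attestations

files = [
    "dist/svgcheck-0.9.0.tar.gz",
    "dist/svgcheck-0.9.0.tar.gz.publish.attestation",
    "dist/svgcheck-0.9.0.tar.gz.asc",
]
dists, sigs, atts = split_inputs(files)
print(dists)  # ['dist/svgcheck-0.9.0.tar.gz']
```

With this kind of partitioning, `twine check dist/*` only ever sees the sdist/wheel entries, so attestation files no longer trigger `InvalidDistribution`.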
pypa/twine#1172 should fix the behavior observed by @kesara! For @efriis: @webknjaz is right that reusable workflow support is currently unfortunately limited on PyPI. My colleague @facutuesca and I are currently working on improving that, but in the mean time most users should stick to non-reusable workflows for Trusted Publishing. |
Got it. Would the recommendation be to combine the release and test release workflows into just a single one? I believe we separated them to make sure the test release step could never publish to pypi.org. When we registered our publishers, we locked the test.pypi entries to _test_release.yml, and the pypi entries to _release.yml.
appreciate the quick look @woodruffw ! |
@efriis I think you can keep them in separate files, and make

That way you should be able to keep the scope for the PyPI Trusted Publisher to just
For visibility: pypi/warehouse#11096 |
@efriis you can achieve this by naming the environments |
@kesara I'm confused, since it appears under the step running this action. And it's running in a container. Are you saying that you're leaving something on disk that ends up in the container as well?

Oh, wait… You're running the build within the same job as publishing. Giving OIDC privileges to your build tool chain is insecure and may lead to various impersonation / privilege elevation scenarios. Using trusted publishing this way is discouraged.

FWIW, this suggests that you may be leaving various incompatible artifacts in the

Try following the guide closely, only keeping the download step and the upload step in the publishing job: https://packaging.python.org/en/latest/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/#publishing-the-distribution-to-pypi. Perhaps using a clean environment would help you avoid the issue of illegal files in
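The separation described here, build without OIDC privileges, publish with nothing but a download and an upload, might be sketched like this. Job names, artifact names, and action versions are illustrative assumptions based on the linked guide, not the reporter's actual workflow:

```yaml
jobs:
  build:
    # No id-token permission here: the build tool chain never sees OIDC.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: actions/upload-artifact@v4
        with:
          name: dists
          path: dist/

  publish:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # OIDC is granted only to this minimal job
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dists
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Because the publish job starts from a clean runner and only downloads the built artifact, nothing else can end up in `dist/` to confuse the upload step.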
@kesara if what William suggests is correct, https://github.com/marketplace/actions/pypi-publish#disabling-metadata-verification might be a workaround too. |
@woodruffw so is this report caused only by reusable workflows? Can you confirm that the logs are consistent with this explanation, so I can go ahead and close the issue? |
Possibly related issue: pypa/gh-action-pypi-publish#283.
@webknjaz Thanks. I'll review publishing workflow. |
This is a work around for pypa/gh-action-pypi-publish#283
Yep, that's correct -- what's happening here is that attestations are produced against the "build config" identity, which is the top-level initiating workflow. In non-reusable cases that might be something like
pypdfium2 was also hit by the same issue as @ThiefMaster. Breakage in the publish step is quite disturbing for project maintainers TBH. |
TestPyPI isn't intended to be used in serial with PyPI -- it's an ephemeral index that you can use to test your workflows. One common pattern I've seen with TestPyPI is to publish to it on In terms of resources: |
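The push-vs-release split described above could be sketched like this. The trigger conditions and job layout are illustrative assumptions about the pattern, not a workflow from this thread (artifact download steps omitted for brevity):

```yaml
jobs:
  publish-testpypi:
    # Exercise the publishing machinery on ordinary pushes to main.
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/

  publish-pypi:
    # Only real, tagged releases ever reach PyPI.
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - uses: pypa/gh-action-pypi-publish@release/v1
```

In this shape the two uploads are never in serial for the same artifact: TestPyPI validates the workflow continuously, and PyPI is hit only by the release trigger.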
Hmm, I guess the reason why we're doing that in pypdfium2 is to make sure all wheel tags are valid / accepted by PyPI (we're setting custom tags by overloading

I understand this isn't currently intentional, but I'm not yet convinced guarding the real PyPI with a prior TestPyPI upload is per se a bad idea. Consider the trouble it would cause to upload a broken release directly to PyPI.
There are at least two reasons why it's a bad idea:
Furthermore, as a corollary to (1): there's no guarantee that TestPyPI performs exactly the same (or even compatibly similar) upload checks as PyPI. This happens to be currently the case, but relying on it as a stronger version of
I agree that it's not ideal, but I'm curious whether (or why) the existing yank/deletion processes are insufficient for your purposes. |
There's a pretty big disparity between how most users think of TestPyPI and how the packaging maintainers think of it (this is not the first instance I've noticed of this). Most users think of it as a way to test the full release process in a sandbox before doing the final release. If this isn't what it is, you should really consider making it into that, because that's what's useful to people. The principal issue is that pushing to PyPI is a destructive process, since releases cannot be deleted once they are pushed, so it's useful to have something to run a "test deployment" on (I admit the current TestPyPI doesn't actually completely work for this, because releases cannot be deleted from it either, but it's better than nothing).
If you have "upload to test pypi" then "upload to pypi" as sequential steps in a single job, then as long as uploading to test PyPI passes, you won't have time to verify it before it moves on to uploading to PyPI. If uploading to test PyPI fails, it would also fail on PyPI, so you're not gaining anything from the check. If you want to use it as a true "upload to test, then check things before uploading to PyPI" flow, then you would use two separate jobs, each with their own approval environment, which is exactly what the solution to this problem is too.
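The two-job shape with approval environments might look roughly like this. Environment names and action versions are placeholder assumptions; each environment would be configured in the repository settings to require a manual review:

```yaml
jobs:
  publish-testpypi:
    runs-on: ubuntu-latest
    environment: testpypi   # first approval gate
    permissions:
      id-token: write
    steps:
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/

  publish-pypi:
    needs: publish-testpypi
    runs-on: ubuntu-latest
    environment: pypi       # second, separate approval before the real upload
    permissions:
      id-token: write
    steps:
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Because the second job waits for its own environment approval, there is a real window to inspect the TestPyPI result before anything reaches PyPI, and each Trusted Publisher can be scoped to its own job/environment.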
I was also in this camp (and may have unintentionally advocated it by using the pattern in Flask), but even before this issue I was starting to realize that it's not actually helping the way I thought it was. |
No, if uploading to testpypi fails, our workflow stops there. You don't need two separate jobs for that. |
Right, but it would also fail uploading to PyPI, so there's no reason to see if it fails on test pypi first. There's also no guarantee that test pypi has the same version deployed as pypi, so it may not even fail in the same way. Test PyPI also doesn't have the same uptime/performance specs as PyPI, so it can fail even if the actual release would succeed. I understand why people think failing a test pypi upload before trying pypi is good, I also thought that, but it's not actually helping. |
Yes there is, to notice and fix the issue before actually uploading to PyPI, to avoid having a broken release & tag lying around, which is dirty and confusing to downstreams. |
This is not true. I've had testpypi fail because of a broken build, which saved me because it never got to the real upload phase. The issue (IIRC) was that one release file that was uploaded was fine, and "created" the release, but a subsequent file had some issue. Of course, as noted, there's no way to delete releases from testpypi, so after doing this I had to temporarily disable the testpypi to make the actual release work, so it's still not actually ideal for this particular use-case. |
(And in case it wasn't clear: if PyPI actually let you delete and redo broken releases, this wouldn't be nearly as big of an issue. I've never understood why there has always been such strong pushback against allowing that.) |
Exactly, same here.
In this case, I disagree. It would be really confusing and insecure to allow re-uploading deleted releases on PyPI. A good testing ground is the better solution IMHO - then there's no need to open that pandora's box. |
Random idea: Why not have atomic publishing? That way either the whole release fails or succeeds, but you'd never end up with something partially published where just some files are missing. This would also help with cases where a release runs into quota limits (which happened e.g. for an |
If the upload partially succeeds, but some of the files were bad, you can use

I've had a release partially succeed too. In fact, it got past Test PyPI, then failed on PyPI. But I could fix it and keep going. Test PyPI didn't help. |
Yes, this is essentially the rationale: immutable files are an important security invariant on the index itself. I agree that the community would benefit from a better automatable testing ground, which TestPyPI itself can probably never be. I don't have super strong opinions on what that should be, other than that
We're currently working on that, stay tuned 🙂 |
Have a dedicated build distribution job, and split the publish-to-TestPyPI and publish-to-PyPI jobs, to work around the attestation file issue. Xref pypa/gh-action-pypi-publish#283
Howdy! We just had to turn off release attestations to get our releases running in langchain-ai/langchain. Haven't had a chance to dig deeper into attestation configuration in order to see what we need to fix, and thought I'd file an issue in case others run into the same thing!
langchain-ai/langchain#27765
Release Workflow
We run releases from the two workflow files edited in ^ that PR
Error
We were seeing errors in our releases, e.g. in this workflow run: https://github.com/langchain-ai/langchain/actions/runs/11602468120/job/32307568692
Configuration of test release - 2 main things that look weird are `/legacy/` and `repository_url` (we configure `repository-url` per docs)

Logs - partially redacted
Temporary Fix

langchain-ai/langchain#27765

We turned off attestations with `attestations: false`.
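For reference, the temporary fix is a single input on the publish step. This is a sketch of the shape only; the surrounding step details are assumptions, not the actual langchain workflow:

```yaml
- uses: pypa/gh-action-pypi-publish@release/v1
  with:
    attestations: false  # skip producing/uploading PEP 740 attestations
```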