Combining job outputs with masking leads to empty output #1498
Comments
Hi @danielmarbach, per the documentation:
Please also look at the discussion, which may provide more context for your issue. And please let me know if that solves your problem 😊
I'm going to close out this issue until we hear back from you; please let us know if you are still seeing this issue!
@nikola-jokic for some reason I didn't receive your first ping, or I must have missed it in my inbox. So if I understood you correctly, it is not possible to share secrets between jobs? At the moment what we have to do is: the step that creates the secret encrypts it and uploads it as a temporary artifact; the other job then downloads the secret, decrypts it, and promotes it again, masked, into the current job.
No problem @danielmarbach. Yes, you are right. They will be discarded on the runner. You can use secrets as described here; if secrets have to be set programmatically, you are essentially doing by hand what the Actions workflow does for you.
To give you some more insight: normally we would want to do everything we can as part of one job. Yet in this specific case we are setting up a server cluster that requires a few nodes to work. That is an expensive operation, and we are working against a limited set of resources. By having the setup of the cluster in the same matrix build job, we would set up the cluster per matrix entry, which would quickly lead to resource exhaustion.

To accommodate that, we have created a setup job that creates the cluster. The matrix builds wait for the setup job to be completed. There is also a cleanup job that runs after all the matrix builds have run or things have failed. Both the matrix jobs and the cleanup job need information about how and where to access the cluster, in order to connect to it and eventually destroy it again after the run. We want to avoid having this information leaked. Hence we were hoping to "just mask the dynamic secrets" and then share them with the other jobs. I can understand the design and architectural reasons why that is not allowed (or supported). It just means that for any such scenario you are basically forced to reinvent the wheel like we did: here is the encryption step we ended up using, here the artifact upload, and then the explicit download and decrypt steps.
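For readers landing here later, this is roughly what that workaround looks like as a workflow. A minimal sketch only, not the exact steps linked above: the connection-string value, the `ENCRYPTION_PASSPHRASE` repository secret, the artifact name, and the action versions are all illustrative.

```yaml
jobs:
  setup:
    runs-on: ubuntu-latest
    steps:
      - name: Provision cluster and encrypt its connection string
        run: |
          # Stand-in for the real provisioning step.
          CONNECTION="server=10.0.0.1;user=admin;pass=$(openssl rand -hex 16)"
          echo "$CONNECTION" > connection.txt
          gpg --batch --yes --pinentry-mode loopback \
            --passphrase "${{ secrets.ENCRYPTION_PASSPHRASE }}" \
            --symmetric --cipher-algo AES256 connection.txt
      - name: Upload the encrypted value as an artifact
        uses: actions/upload-artifact@v2
        with:
          name: cluster-connection
          path: connection.txt.gpg

  tests:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - name: Download the encrypted value
        uses: actions/download-artifact@v2
        with:
          name: cluster-connection
      - name: Decrypt and re-mask before use
        run: |
          gpg --batch --yes --pinentry-mode loopback \
            --passphrase "${{ secrets.ENCRYPTION_PASSPHRASE }}" \
            --decrypt --output connection.txt connection.txt.gpg
          CONNECTION=$(cat connection.txt)
          # Re-register the value as a mask in this job, then expose it via env.
          echo "::add-mask::$CONNECTION"
          echo "CLUSTER_CONNECTION=$CONNECTION" >> "$GITHUB_ENV"
```

Only the static passphrase lives in repository secrets; the dynamic value travels encrypted at rest in the artifact and is re-masked in every job that consumes it.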
Hey @danielmarbach, this seems perfectly valid! Could you please post your feedback on the GitHub Feedback site, which is actively monitored? Using the forum ensures that we route your problem to the correct team. 😊 And thank you for the explanation. Please include that in your feedback!
Here we go: community/community#13082
Not only secrets: anything, if masked, cannot be referenced from another job. E.g., when I use amazon-ecr-login I get a registry value back, but if I define the registry as an output for another job to use, the full registry string is masked, and when the other job references it the value comes through empty. This is a BUG, not only the enhancement request in community/community#13082.
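To make the failure mode concrete, here is a minimal sketch of the pattern described above (action versions and region are illustrative): the `registry` output contains the masked AWS account ID, so the runner refuses to pass it across the job boundary and the downstream job sees an empty string.

```yaml
jobs:
  login:
    runs-on: ubuntu-latest
    outputs:
      registry: ${{ steps.ecr.outputs.registry }}  # contains the masked account ID
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v1

  build:
    needs: login
    runs-on: ubuntu-latest
    steps:
      # The masked output is dropped by the runner, so this prints ''.
      - run: echo "registry is '${{ needs.login.outputs.registry }}'"
```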
Suffering from the same AWS troubles, I found this thread. Later I found the answer, so I'm returning here to post what I found: see
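The link the commenter found is not preserved in this thread. One commonly cited resolution for this particular AWS case, an assumption on my part rather than a confirmed match for their link, is to tell configure-aws-credentials not to mask the account ID, so the registry string no longer contains a masked substring:

```yaml
- uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
    # Assumed fix: with the account ID unmasked, the registry output from
    # amazon-ecr-login can cross the job boundary. Only do this if your
    # account ID is not considered sensitive.
    mask-aws-account-id: 'no'
```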
Describe the bug
When combining job outputs with masking, the output is empty when used in another job.
To Reproduce
Steps to reproduce the behavior:
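The original reproduction snippet is not preserved here; the following is a reconstructed minimal workflow, based on the description, that shows the behavior (the secret value, and the `::set-output` syntax in use at the time of runner 2.284.0, are illustrative):

```yaml
name: masked-output-repro
on: push
jobs:
  producer:
    runs-on: ubuntu-latest
    outputs:
      value: ${{ steps.gen.outputs.value }}
    steps:
      - id: gen
        run: |
          SECRET="supersecret"           # stand-in for a dynamically created secret
          echo "::add-mask::$SECRET"     # mask it in the logs
          echo "::set-output name=value::$SECRET"

  consumer:
    needs: producer
    runs-on: ubuntu-latest
    steps:
      # Expected: the masked secret is available here. Actual: empty string.
      - run: echo "value is '${{ needs.producer.outputs.value }}'"
```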
Expected behavior
The secret should be available in the downstream job (masked in the logs), not empty.
Runner Version and Platform
Version of your runner? Current runner version: '2.284.0'
OS of the machine running the runner? Linux
What's not working?
Please include error messages and screenshots.
Job Log Output
https://github.com/danielmarbach/GithubActionsWorkflowSharingSpike/runs/4258032273?check_suite_focus=true
Runner and Worker's Diagnostic Logs
If applicable, add relevant diagnostic log information. Logs are located in the runner's `_diag` folder. The runner logs are prefixed with `Runner_` and the worker logs are prefixed with `Worker_`. Each job run correlates to a worker log. All sensitive information should already be masked out, but please double-check before pasting here.