Tracking temporary access #346
Comments
Whatever reminds us to keep it updated would be best. A single issue is likely to fade into oblivion. How about 4) creating a section for it in the README in the secrets repo? We can remind people of the policy and have the list of temporary accesses there. Also, I suggest that if we have to give access to non-collaborators, we take those machines out of the pool and redeploy them / add new ones in their stead.
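For illustration, that README section could be as simple as a short policy note plus a table; all names, machines, and dates below are invented, not actual grants:

```markdown
## Temporary access

Access to CI machines must be time-limited and revoked once the
investigation is done. Current temporary grants:

| Who      | Machine           | Granted    | Expires    | Sponsor  |
| -------- | ----------------- | ---------- | ---------- | -------- |
| @someone | test-example-aix1 | 2016-05-01 | 2016-05-08 | @sponsor |
```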
Redeploying can take some time from the build team, and in cases where we have limited resources (e.g. the AIX box for now) it could potentially affect builds. Is the problem that containment per user ID under the OS is not strong enough? I was thinking that we'd give access by creating an individual user ID, and then fully deleting the user/directories when cleaning up.
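A minimal sketch of that create-then-delete flow, assuming a Linux build machine and root privileges (AIX would need mkuser/rmuser instead); this is an illustration, not an existing Build WG script:

```python
import subprocess
from pathlib import Path

def grant_temp_access(username: str, pubkey: str, expires: str) -> None:
    """Create a temporary user whose account expires on `expires` (YYYY-MM-DD)."""
    # -m creates a home directory; -e sets an OS-level account expiry as a backstop.
    subprocess.run(["useradd", "-m", "-e", expires, username], check=True)
    ssh_dir = Path("/home") / username / ".ssh"
    ssh_dir.mkdir()
    ssh_dir.chmod(0o700)
    auth_keys = ssh_dir / "authorized_keys"
    auth_keys.write_text(pubkey.strip() + "\n")
    auth_keys.chmod(0o600)
    # Hand ownership to the temporary user (assumes the per-user group
    # that useradd creates by default on most distros).
    subprocess.run(["chown", "-R", f"{username}:{username}", str(ssh_dir)], check=True)

def revoke_temp_access(username: str) -> None:
    """Fully delete the user and their home directory once they're done."""
    subprocess.run(["userdel", "-r", username], check=True)
```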
Isolating by user might be feasible in theory, but I don't think that we have configured the build machines with that goal in mind, so additional work might be required. In cases where redeployment is undesirable (e.g. AIX) we should probably simply avoid giving access to non-collaborators.
Depending on the main concern, another option might be to set up separate machines which are not connected to the CI. While that's not an option for AIX in the short term, I think it would be later on, and it could be feasible for other platforms.
I've temporarily given out access to test machines before. I did so by asking for a pubkey and telling the user what to think about (since the machine might be running a job), such as copying their own environment, not downloading and installing stuff, etc. This has been to collaborators, and I think the trust has been just fine. I've also revoked their key when they said they were done (note: no one has had access for days, just <24h-ish). I guess we could record this in a log somewhere, but I think the mechanism works well enough.
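A rough sketch of that pubkey flow: append the visitor's key to the shared account's authorized_keys with a tag so it can be found and stripped again at revocation. The path and the tagging convention are assumptions for illustration, not an existing tool:

```python
from pathlib import Path

AUTH_KEYS = Path.home() / ".ssh" / "authorized_keys"

def add_key(pubkey: str, who: str) -> None:
    """Append a key, tagged so it can be located at revocation time."""
    with AUTH_KEYS.open("a") as f:
        f.write(f"{pubkey.strip()} temp-access:{who}\n")

def revoke_key(who: str) -> None:
    """Remove every key previously tagged for this visitor."""
    lines = AUTH_KEYS.read_text().splitlines()
    kept = [line for line in lines if not line.endswith(f"temp-access:{who}")]
    AUTH_KEYS.write_text("\n".join(kept) + "\n")
```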
The process I've used follows what @jbergstroem mentions, except that access has been longer than 24 hours. Part of what I want is to make sure we clean up periodically. The other case I'd like to cover is one where we have a collaborator who can vouch for the person. For example, if there is a platform-specific issue that only occurs on the machines in the CI (which did occur for AIX due to a missing APAR), I'd like to be able to grant access to team members here at IBM who may not be collaborators, as I trust them to do the right thing. It would still be nice to have a solution for non-collaborators as well, as that allows them to investigate/resolve issues across platforms instead of potentially just the one platform they have access to.
Proposal discussion is being covered here: #354
I believe this is now documented in https://github.com/nodejs/build/blob/master/doc/process/special_access_to_build_resources.md
In some cases we need to grant temporary access to CI test machines so that people can investigate specific issues. For example, providing access to pLinux or AIX machines that people don't normally have access to. I could even see this being useful for the core platforms like Linux, so that somebody who only has a Mac can investigate a Linux issue related to a change they want to make.
It would be nice to automate this, but in the meantime I'm thinking we should at least track access so that we don't forget to clean up afterwards (a sketch of such an expiry check follows below).
I can think of a few ways off the top of my head:
..
At this point I prefer 1).
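Whichever option wins, the mechanism boils down to a periodic reminder. As a sketch, assuming grants are recorded as simple CSV rows of user,machine,expires (the format is an assumption, nothing agreed on), a small check could flag anything past its agreed end date:

```python
import csv
import sys
from datetime import date

def overdue_grants(path: str):
    """Yield (user, machine, expires) for every grant whose expiry has passed."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if date.fromisoformat(row["expires"]) < date.today():
                yield row["user"], row["machine"], row["expires"]

if __name__ == "__main__":
    # Usage: python check_access.py grants.csv (run from cron or CI).
    for user, machine, expires in overdue_grants(sys.argv[1]):
        print(f"OVERDUE: revoke {user}'s access to {machine} (expired {expires})")
```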