Nomad version
Nomad v1.2.3 (a79efc8)
Operating system and Environment details
Linux amd64
Issue
When specifying the job -> group -> network -> dns -> servers option, resolv.conf is created with 640 permissions, preventing non-root processes from reading it.
Reproduction steps
Create a job with the following group network stanza
network {
  dns {
    servers = ["169.254.127.1"]
  }
}
Run the job, exec into the task (I used a docker task), and run ls -l /etc/resolv.conf to observe the permissions.
On the Nomad host, the file in the alloc working directory also appears to have the wrong permissions, so I don't think it's an issue with Docker doing the bind mount:
-rw-r----- 1 root root 46 Jan 11 02:13 /var/nomad/alloc/818cc9c5-c179-d1be-02f2-09255a798662/task/resolv.conf
Expected Result
The permissions of resolv.conf are normally 644
Actual Result
The permissions of resolv.conf are 640
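For reference, the permission bits requested when a file is created are filtered through the process umask, which is one way a requested 0644 can end up as 0640 on disk. A minimal, hypothetical Go sketch of that effect on Linux (not Nomad's code; the file name and umask value are made up for illustration):

// Hypothetical demo, not Nomad code: the mode requested when a file is
// created is masked by the process umask, so a requested 0644 can land
// on disk as 0640 (rw-r-----) when the umask is 0027.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	syscall.Umask(0o027) // a restrictive umask, as a service manager might set

	f, err := os.OpenFile("resolv.conf.demo", os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	f.Close()

	info, err := os.Stat("resolv.conf.demo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%o\n", info.Mode().Perm()) // prints 640: 0644 &^ 0027
}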
As an aside, it looks like servers can't be templated with node metadata (e.g. ${meta.foobar}). Is this intentional? If not, I can throw in a feature request for that too.
Hi @nvx. Yeah, that definitely looks like a bug in the shared drivers code (ref mount.go#L72-L82), where we're not setting the permissions at all, so the mode just comes from the umask.
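A minimal sketch of the general direction such a fix could take, purely illustrative and not the actual Nomad change: write the file, then set the mode explicitly with chmod, which is not subject to the umask. The writeResolvConf helper and the file contents below are made up for illustration:

// Hypothetical sketch, not the actual Nomad patch.
package main

import (
	"fmt"
	"os"
)

// writeResolvConf writes the generated resolv.conf and then sets 0644
// explicitly. The mode passed to os.WriteFile is filtered by the umask,
// but os.Chmod is not, so the file ends up world-readable regardless of
// the client's umask.
func writeResolvConf(path string, contents []byte) error {
	if err := os.WriteFile(path, contents, 0o644); err != nil {
		return err
	}
	return os.Chmod(path, 0o644)
}

func main() {
	if err := writeResolvConf("resolv.conf.demo", []byte("nameserver 169.254.127.1\n")); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}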
As an aside, it looks like servers can't be templated with node metadata (e.g. ${meta.foobar}). Is this intentional? If not, I can throw in a feature request for that too.
Most of the rest of the network block can't be interpolated with client data because we use it for scheduling (e.g. we can't interpolate a static port on a client, because we'd need to know the value for the scheduler to determine it's safe to put on that client).
But the dns block defaults to the client so it should be safe to interpolate. That's just an oversight. I'll open a new issue for that so that it doesn't get lost when we fix the bug described here.