File and folder permissions are screwed when executed via Jobber #191
That's interesting... It looks like there's a difference in the umask. I'll look into this. In the meantime, you should be able to work around this by explicitly setting the permissions in your script.
Thanks for having a look into this. The command that fails is a pg_dump call: -Fd dumps the database into a directory specified via -f; that directory is created by pg_dump itself and needs to be deleted before every execution. The only workaround is to run the jobs as root, since I can't change the permissions while pg_dump is creating the directory, and it fails right away with a "permission denied" error.
Line 170 in d7276b2:
's/0177/0077/'
Thanks. This is exactly the same issue: Where to set default file permissions (umask)? (post no. 9). Looks like 077 would do the trick, or is that what 's/0177/0077/' refers to?
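For context on why that one-digit change matters: umask 0177 masks out the owner's execute bit, so directories created by Jobber's jobs come out drw------- and can't be entered; 0077 masks only group/other bits. A quick sketch:

```shell
#!/bin/sh
# Show how umask 0177 vs 0077 affects a newly created directory.
cd "$(mktemp -d)"

umask 0177        # masks the owner's execute bit as well
mkdir with-0177   # 0777 & ~0177 = 0600 -> drw-------

umask 0077        # masks only group/other bits
mkdir with-0077   # 0777 & ~0077 = 0700 -> drwx------

ls -ld with-0177 with-0077
```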
Thanks, @zelenin. I'm not sure what I was thinking when I set that umask...
Could you please implement this little fix? I would like to run my jobs via a normal user and not root. Thanks in advance.
Sure.
Maybe a new release?
Done. Release v1.3.3 has been made.
Could you update the documentation or share your Dockerfile? Looks like you've introduced a new parameter since 1.3.2. I now get the following error if I use exec /usr/local/libexec/jobberrunner ${configfile} in my entrypoint script:
Must specify exactly one of -u and -p
If I use exec /usr/local/libexec/jobberrunner -u /tmp/jobber_socket ${configfile}, jobber list reports that Jobber doesn't seem to be running for the user. It seems to work if I use exec /usr/local/libexec/jobberrunner -u /var/jobber/${EUID}/cmd.sock ${configfile}.
Ah crap.
… On Aug 28, 2018, at 9:16 AM, Christian Kotte ***@***.***> wrote:
Could you update the documentation or share your Dockerfile. Looks like you've introduced new parameter since 1.3.2. I now get the following error if I use exec /usr/local/libexec/jobberrunner ${configfile} in my entrypoint script:
Must specify exactly one of -u and -p
Usage: /usr/local/libexec/jobberrunner [flags] JOBFILE_PATH
Flags:
-h help
-p uint
path TCP socket on which to receive commands
-q string
quit socket path
-u string
path to Unix socket on which to receive commands
-v version
If I use exec /usr/local/libexec/jobberrunner -u /tmp/jobber_socket ${configfile}, I get this error:
bash-4.4$ jobber list
Jobber doesn't seem to be running for user jobberuser.
(No socket at /var/jobber/1000/cmd.sock.): stat /var/jobber/1000/cmd.sock: no such file or directory
or
bash-4.4$ jobber list
Jobber doesn't seem to be running for user root.
(No socket at /var/jobber/0/cmd.sock.): stat /var/jobber/0/cmd.sock: no such file or directory
It seems to work if I use exec /usr/local/libexec/jobberrunner -u /var/jobber/${EUID}/cmd.sock ${configfile}
By the way, I get an error when compiling 1.3.3. This is the output during the build:
This is the Dockerfile:
The issue is also not fixed with this version. Am I using 1.3.2, or 1.3.3 with the wrong version output? Could you share your Dockerfile, please?
It looks like your Dockerfile is checking out the wrong branch... although I don't immediately see the problem in the Dockerfile. In any case, please use dshearer/jobber:1-alpine3.7 from now on. In this image, replace /home/jobberuser/.jobber with your own jobfile. |
Please compile the sources.
I also cannot use your Docker image. If it shows 1.3.3 for you, can you show how you install Jobber in Docker, please?
Oh I see what happened. When I made the v1.3.3 tag, I based it on master rather than the maint-1.3 branch. Sorry about that. It is fixed now. |
OK, I can now check out the tag. I didn't test my earlier recommendation. The default umask on e.g. Alpine differs from the one Jobber sets. It's also not documented that Jobber sets the umask, and it's misleading if you compare the output of a script that was triggered by Jobber with the output of the same script run via bash.
Sounds like I should just stop setting the umask. |
It looks like cron does not set the umask... I'm not sure where it comes from in cron. For now, I will set the umask to 0002. The user can control the umask by setting it in the script that Jobber executes. E.g.:
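The example that followed is not preserved here; a minimal sketch of the idea (the 0022 value and paths are assumptions):

```shell
#!/bin/sh
# Minimal sketch: set the umask at the top of the script Jobber runs,
# so files and directories it creates get the expected permissions.
umask 0022

dir="$(mktemp -d)/backup"
mkdir "$dir"            # -> drwxr-xr-x (0755)
touch "$dir/dump.sql"   # -> -rw-r--r-- (0644)
ls -ld "$dir"
```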
jobberrunner: (#191) Stop setting umask
If I run my backup scripts via Jobber, the permissions of created files and folders are not correct: it looks like only rw is set for the owner, even for directories.
I can still schedule my jobs via root, but they fail when I run them as a normal user account because of a permission issue: the directory is missing the x (execute) bit.
The jobs create a database dump to a directory with pg_dump and zip it via gpg-zip:
Script executed without Jobber
directory:
drwx------ 2 root root 4096 Apr 24 19:06 name-db
file:
-rw-r--r-- 1 root root 85497987 Apr 24 19:06 name-20180424-190638.tar.gz.gpg
Script executed via Jobber
directory:
drw------- 2 root root 4096 Apr 24 19:04 name-db
file:
-rw------- 1 root root 85498005 Apr 24 19:04 name-20180424-190424.tar.gz.gpg
I use Jobber 1.3.2 in a custom Docker container with Alpine Linux 3.7. This doesn't happen with the same Dockerfile and Jobber 1.2 with exec jobberd instead of exec /usr/local/libexec/jobberrunner ${configfile}.