
Docker Compose mounts named volumes as 'root' exclusively #3270

Closed
nschoe opened this issue Apr 5, 2016 · 46 comments

@nschoe

nschoe commented Apr 5, 2016

It's about named volumes (so no "data volume container", no "volumes-from") and docker-compose.yml.

The goal here is to use docker-compose to manage two services 'appserver' and 'server-postgresql' in two separate containers and use the "volumes:" docker-compose.yml feature to make data from service 'server-postgresql' persistent.

The Dockerfile for 'server-postgresql' looks like this:

FROM        ubuntu:14.04
MAINTAINER xxx

RUN apt-get update && apt-get install -y [pgsql-needed things here]
USER        postgres
RUN         /etc/init.d/postgresql start && \
            psql --command "CREATE USER myUser PASSWORD 'myPassword';" && \
            createdb -O diya diya
RUN         echo "host all  all    0.0.0.0/0  md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN         echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
CMD         ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]

And the docker-compose.yml looks like this:

version: '2'
services:
    appserver:
        build: appserver
        depends_on:
            - server-postgresql
        links:
            - "server-postgresql:serverPostgreSQL"
        ports:
            - "1234"
            - "1235"
        restart: on-failure:10
    server-postgresql:
        build: serverPostgreSQL
        ports:
            - "5432"
        volumes:
            - db-data:/volume_data
        restart: on-failure:10
volumes:
    db-data:
        driver: local

Then I start everything with docker-compose up -d, enter my server-postgresql container with docker-compose exec server-postgresql bash, and a quick ls does reveal /volume_data. I then cd into it, try touch testFile, and get "Permission denied". Which is normal, because a quick ls -l shows that /volume_data is owned by root:root.

Now what I think is happening is that since I have USER postgres in the Dockerfile, when I run docker-compose exec I am logged in as user 'postgres' (and the postgresql daemon runs as user 'postgres' as well, so it won't be able to write to /volume_data).
This is confirmed because when I run docker-compose exec --user root server-postgresql bash instead and retry cd /volume_data and touch testFile, it does work (it's not a permission error between the host and the container, as is sometimes the case when the container mounts a host folder; this is a typical unix permission error, because /volume_data is mounted as 'root:root' while user 'postgres' is trying to write).
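The check described above can be sketched as follows (service name taken from the compose file below; this assumes the compose project is already up):

```shell
# Sketch of the permission check: the volume is owned by root:root,
# so writes fail as 'postgres' but succeed as root.
docker-compose exec server-postgresql ls -ld /volume_data
docker-compose exec server-postgresql touch /volume_data/testFile        # Permission denied
docker-compose exec --user root server-postgresql touch /volume_data/testFile  # works
```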

So there should be a way in docker-compose.yml to mount named volumes as a specific user, something like:

version: '2'
services:
    appserver:
        [...]
    server-postgresql:
        [...]
        volumes:
            - db-data:/volume_data:myUser:myGroup
        [...]
volumes:
    db-data:
        driver: local

The only dirty workaround I can think of is to remove the USER postgres directive from the Dockerfile and change the ENTRYPOINT so that it points to a custom "init_script.sh" (which would run as 'root' since USER postgres was removed). This script would change the permissions of /volume_data so that 'postgres' can write to it, then switch to user 'postgres' and execute the postgresql daemon (in the foreground). But this is actually very dirty, because it couples the Dockerfile and docker-compose.yml in a non-standard way (the runtime ENTRYPOINT would rely on the fact that a mounted volume is made available by docker-compose.yml).
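For reference, a minimal sketch of that workaround (the script name and paths come from the description above; it would replace the CMD, with USER postgres removed from the Dockerfile):

```shell
#!/bin/sh
# init_script.sh -- runs as root because USER postgres was removed.
set -e

# Give the postgres user ownership of the volume mounted by docker-compose.
chown -R postgres:postgres /volume_data

# Drop privileges and run the daemon in the foreground, as the original CMD did.
exec su postgres -c "/usr/lib/postgresql/9.3/bin/postgres \
    -D /var/lib/postgresql/9.3/main \
    -c config_file=/etc/postgresql/9.3/main/postgresql.conf"
```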

@dnephin

dnephin commented Apr 5, 2016

I don't think this is supported by the docker engine, so there's no way we can support it in Compose until it is added to the API. However I don't think it's necessary to add this feature. You can always chown the files to the correct user:

version: '2'
services:
  web:
    image: alpine:3.3
    volumes: ['random:/path']

volumes:
  random:
$ docker-compose run web sh
/ # touch /path/foo
/ # ls -l /path
total 0
-rw-r--r--    1 root     root             0 Apr  5 16:11 foo
/ # chown postgres:postgres /path/foo
/ # ls -l /path
total 0
-rw-r--r--    1 postgres postgres         0 Apr  5 16:11 foo
/ # 
$ docker-compose run web sh
/ # ls -l /path
total 0
-rw-r--r--    1 postgres postgres         0 Apr  5 16:11 foo

The issue you're facing is about initializing a named volume. This is admittedly not something that is handled by Compose (because it's somewhat out of scope), but you can easily use the docker cli to initialize a named volume before running docker-compose up.
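A sketch of that pre-initialization step (volume and image names are illustrative; Compose prefixes volume names with the project name):

```shell
# Create the named volume Compose would otherwise create, fix its
# ownership once with a throwaway container, then start the project.
docker volume create myproject_db-data
docker run --rm -v myproject_db-data:/volume_data ubuntu:14.04 \
    chown -R postgres:postgres /volume_data
docker-compose -p myproject up -d
```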

@nschoe
Author

nschoe commented Apr 6, 2016

I was indeed not sure whether this was a Docker or a Compose problem, sorry if I misfiled it.
Is this planned in the Docker API? Should I file an issue there?

I understand the possibility of manually logging in to the container and chown-ing the volume for the 'postgres' user. But the thing is, in my case, I am using Compose so I can immediately instantiate new containers for new clients (docker-compose -p client_name up): Compose will create a custom network client_name_default, it will create the containers client_name_appserver_1 and client_name_server-postgresql_1, and more importantly, it will create the volume client_name_db-data. All of which I don't have to do manually, so it can be run by the script that handles client registration.

With the solution you described (manually "logging in" to the container with sh and chown-ing the volume), I can't have a simple procedure for adding new clients; it must be taken care of by hand.

This is why I think this feature should be implemented. In the Docker API, we can specify ro or rw (read-only or read-write) permissions when mounting a volume; I think we should be able to specify user:group as well.

What do you think, does my request make sense?

@nschoe
Author

nschoe commented Apr 6, 2016

Actually I come with news: it seems what I am trying to achieve is doable, but I don't know whether this is a feature or a bug. Here is what I changed:

In my Dockerfile, before changing to user 'postgres' I added these:

# ...
RUN  mkdir /volume_data
RUN  chown postgres:postgres /volume_data

USER postgres
# ...

What this does is create a directory /volume_data and change its permissions so that user 'postgres' can write on it.
This is the Dockerfile part.

Now I haven't changed anything in the docker-compose.yml: docker-compose still creates the named volume directory_name_db-data and mounts it on /volume_data, and the permissions have persisted!
Which means that now, I have my Named Volume mounted on pre-existing directory /volume_data with the permissions preserved, so 'postgres' can write to it.

So is this the intended behavior, or a breach of security? (It does serve me in this case, though!)

@dnephin

dnephin commented Apr 6, 2016

I believe this was added in Docker 1.10.x so that named volumes would be initialized from the first container that used them. I think it's expected behaviour.
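A quick way to check that behaviour (image name is illustrative; this assumes the image chowns the directory at build time, as in the comment above):

```shell
# A brand-new named volume is seeded from the first container that mounts it,
# content and ownership included (Docker >= 1.10).
docker volume create demo-db-data
docker run --rm -v demo-db-data:/volume_data my-postgres-image \
    ls -ldn /volume_data   # should show the postgres uid/gid, not 0:0
```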

@marcelmfs

I'm also using named volumes with ownership set in the Dockerfile, and in Compose I'm adding user: postgres so that even PID 1 is owned by the non-root user.

@Hubbitus

docker-compose supports driver_opts for volumes to provide options.
It would be very good to see options like chmod and chown there, even for the local driver.
And I would especially like them to also be applied to locally created host directories when they are not present on start.

@imaia

imaia commented Jul 7, 2017

Related (to some extent) moby/moby#28499

@jcberthon

Has someone already opened an issue on the Moby project?

The answer from @dnephin does not work here. The problem is that because we are running the container as a standard user, the chown and chmod commands fail: the volume is owned by root, and a standard user cannot change its permissions.

@dragon788

@jcberthon the suggested method is to start the container with root as the initial user and then put the USER instruction AFTER the chown/chmod, so that it is basically "dropping privileges".

@micheljung

That's fine if you are in control of the Docker image, but if you're using existing images it's not really an option.

@jcberthon

@dragon788 and @micheljung, I solved my problem.

Actually the real issue was that in my Dockerfile, I declared a VOLUME and then modified the ownership and permissions of the files in that volume; those changes are lost. By simply moving the VOLUME declaration to the end of the Dockerfile (or removing it, as it is optional), my problem is solved and the permissions are correct.

So the mistake was:

FROM blabla
RUN do stuff
VOLUME /vol
RUN useradd foo && chown -R foo /vol
USER foo
CMD ["blabla.sh"]

The chown in the example Dockerfile above is lost during the build because we declare VOLUME before it. When running the container, dockerd copies into the named volume the content of /vol as it was at the VOLUME declaration (so with root ownership). Therefore the running processes cannot modify the files or change their permissions, so even forcing a chown in the blabla.sh script cannot work.

By changing the file to:

FROM blabla
RUN do stuff
RUN useradd foo && chown -R foo /vol
USER foo
VOLUME /vol
CMD ["blabla.sh"]

the problem is solved.

@renanwilliam

@jcberthon could you please share how you bind your volume /vol to the host system in your docker-compose.yml?

@egeexyz

egeexyz commented Dec 8, 2017

I am working with Docker on Fedora (so, SELinux enabled) and none of the above-mentioned methods worked for me. Ideally, I want to run applications in my containers under the context of a user (not root), but this volume issue is a blocker to that.

The only workaround that works for me is to eliminate my application user and run/own everything as the root user.

stefjoosten pushed a commit to AmpersandTarski/RAP that referenced this issue Dec 15, 2017
The recipe followed here is inspired by docker/compose#3270
@jcberthon

jcberthon commented Dec 15, 2017

Hi @renanwilliam and @egee-irl

I've been using the above-mentioned solution on several OSes, incl. Fedora 26 and CentOS 7 (both with SELinux enforcing), and Ubuntu 16.04, 17.10 and Raspbian 9 (all three with AppArmor activated), on a mixture of amd64 and armhf platforms.

So as I said, I've now moved the VOLUME ... declaration to the end of my Dockerfile, but you can remove it altogether; it is not needed. What I also usually do is fix the user ID when creating the user in the Dockerfile (e.g. useradd -u 8002 -o foo). Then I can simply reuse that UID on the host to give proper permissions to the folder.

So the next step is to create the counterpart of the /vol directory on the host; let's say it is /opt/mycontainer1/vol, so that's:

$ sudo mkdir -p /opt/mycontainer1/vol
$ sudo chown -R 8002 /opt/mycontainer1/vol
$ sudo chmod 0750 /opt/mycontainer1/vol

Then when running the container as user foo, it will be able to write to the /opt/mycontainer1/vol directory. Something like:

$ sudo -u docker-adm docker run --name mycontainer1 -v /opt/mycontainer1/vol:/vol mycontainer1-img

On SELinux-based hosts, you might want to add the :z or :Z option to the volume so that Docker will label the folder appropriately. The difference between z and Z is that the lowercase z will label the volume so that potentially all containers on this host could be allowed by SELinux to access that directory (but obviously only if you bind mount it to another container), whereas the uppercase Z will label it so that only that specific container will be able to access the directory. So on Fedora with SELinux you might want to try:

$ sudo -u docker-adm docker run --name mycontainer1 -v /opt/mycontainer1/vol:/vol:Z mycontainer1-img

Update: you can check my repo at https://github.com/jcberthon/unifi-docker where I'm using this method and explaining how to configure the host and run your container. I hope this can help further in solving your problems.

@jcberthon

Btw, I apologise @renanwilliam for the long delay in replying to you. I don't have much free time at this end of the year...

@villasv

villasv commented Feb 6, 2018

So, long story short for the impatient:

RUN mkdir /volume_data
RUN chown postgres:postgres /volume_data

Creating the volume directory beforehand and a chown solves it, because the volume will preserve the permissions of the preexisting directory.

@colbygk

colbygk commented Feb 8, 2018

This is a poor workaround, as it is non-obvious (doing a chown in a Dockerfile and then inheriting that ownership during the mount). Exposing owner and group control in docker-compose and the docker CLI would be the path of least surprise for unix-style commands.

@jcberthon

@villasv

A small tip: merge the two RUN ... instructions into one; this avoids creating extra layers and is a best practice. So your two lines become:

RUN mkdir /volume_data && chown postgres:postgres /volume_data

But beware (as I mentioned in a comment above) that you need to run the RUN ... command above before declaring the volume with VOLUME ... (or not declare the volume at all). If you change ownership after declaring the volume (as I did), those changes are not recorded and are lost.

@jcberthon

@colbygk it would indeed be handy, but that's not how Linux works. Docker uses the Linux mount namespace to create separate single-root directory hierarchies (/ and subfolders), but AFAIK there is currently no user/group mapping or permission overriding in the Linux mount namespace. Those "mounts" inside a container (and that includes bind-mounted volumes) live on a file system on the host (unless you use another Docker volume plugin, of course), and that file system goes through the Linux VFS layer, which does all the file permission checks. There could even be some MAC on the host (e.g. SELinux, AppArmor, etc.) which could interfere with a container accessing files within the container. You can encounter similar issues with chroot: you can bind-mount folders within the chroot, and processes running inside the chroot environment might have the wrong effective UID/GID to access files in the bind mount.

Simple Linux (and actually Unix) rules apply to the container. The trick is to see and understand the possibilities and limits of Linux namespaces today; then it becomes clearer how to solve problems such as this issue. I solved it entirely using classical Unix commands.

@colbygk

colbygk commented Feb 14, 2018

@jcberthon Thank you for your thoughtful response:

I would argue that this should be pushed into the plugin layer as you suggest, and could therefore become part of the generic volume handler plugin that ships with Docker. It seems very un-cloud/container-like to me to force an external resource (external to a particular container) to adhere to essentially static relationships defined in the image the container is derived from.

There are other examples of this exact sort of uid/gid mapping that occurs in other similar areas of "unix":

Please correct me if I am wrong: openzfs/zfs#4177 appears to have been opened by the lead of LXC/LXD as an issue about ZFS on Linux not correctly providing UID/GID information to allow mapping into a container, in almost exactly the way we are discussing here. Looking closely at openzfs/zfs#4177, it appears that the zfs volume type could already support this uid/gid mapping between namespaces, but does not expose the controls to do so.

ChubaOraka added a commit to ChubaOraka/aliascheck that referenced this issue Aug 22, 2022
… default non-root user `node`

Error Log:
client    | [Error: EACCES: permission denied, open '/usr/src/app/client/.next/package.json'] {
client    |   errno: -13,
client    |   code: 'EACCES',
client    |   syscall: 'open',
client    |   path: '/usr/src/app/client/.next/package.json'
client    | }

References:
node.js - How to keep node_modules inside container while using docker-compose and a non-root user? - Stack Overflow
https://stackoverflow.com/questions/49941708/how-to-keep-node-modules-inside-container-while-using-docker-compose-and-a-non-r
https://stackoverflow.com/a/49952703
https://stackoverflow.com/questions/49941708/how-to-keep-node-modules-inside-container-while-using-docker-compose-and-a-non-r/49952703#49952703

Docker Compose mounts named volumes as 'root' exclusively · Issue #3270 · docker/compose · GitHub
docker/compose#3270

Stuck with a Problem of access permission denied · Issue #8908 · vercel/next.js · GitHub
vercel/next.js#8908
vercel/next.js#8908 (comment)

What is the best way to use NextJS with docker? · Discussion #16995 · vercel/next.js · GitHub
vercel/next.js#16995

reactjs - error An unexpected error occurred: "EACCES: permission denied - Stack Overflow
https://stackoverflow.com/questions/52713928/error-an-unexpected-error-occurred-eacces-permission-denied

docker - Dockerfile with nextjs and puppteer permission denied - Stack Overflow
https://stackoverflow.com/questions/64565062/dockerfile-with-nextjs-and-puppteer-permission-denied
[RUN chown -Rh $user:$user /home/node]
https://stackoverflow.com/a/64922590
https://stackoverflow.com/questions/64565062/dockerfile-with-nextjs-and-puppteer-permission-denied/64922590#64922590
[More relevant for production build, suggests we modify next.config.js by adding a distDir directive:
module.exports = {
      distDir: 'build',
      serverRuntimeConfig: {
        // Will only be available on the server side
        apiUrl: 'http://signup-api:1984'
      },
      publicRuntimeConfig: {
        // Will be available on both server and client
        apiUrl: 'http://localhost:1984'
      }
    }
]
@lolmaus

lolmaus commented Aug 25, 2022

What is the easiest workaround?

@giraffesyo

One workaround is to have an init container mount the same named volume, chown the folder, and then make the service depend on that container completing successfully. See the code below.

services:
  my-service-init:
    image: node:16
    user: root
    group_add:
      - '1000'
    volumes:
      - my-service-recover:/tmp/recover_data
    command: chown -R node:1000 /tmp/recover_data
  my-service:
    restart: always
    user: node
    group_add:
      - '1000'
    volumes:
      - my-service-recover:/tmp/recover_data
    depends_on:
      my-service-init:
        condition: service_completed_successfully
volumes:
  my-service-recover:

@lolmaus

lolmaus commented Sep 13, 2022

This article resolved the issue for me:
https://jtreminio.com/blog/running-docker-containers-as-current-host-user/

Here's my TL/DR summary of the article:
https://stackoverflow.com/a/73499592/901944
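The gist of that approach, as a sketch (not the article's exact code): run the container with the host user's uid and gid, so anything it writes to a mounted volume is owned by you on the host.

```shell
# $(id -u) and $(id -g) are resolved on the host before docker starts,
# so the container process runs with the invoking user's uid/gid.
docker run --rm --user "$(id -u):$(id -g)" \
    -v "$PWD":/work -w /work alpine:3.3 touch created-by-me
ls -l created-by-me   # owned by the current host user, not root
```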

Cryogenics-CI pushed a commit to cloudfoundry/backup-and-restore-sdk-release that referenced this issue Oct 4, 2022
Cryogenics-CI added a commit to cloudfoundry/backup-and-restore-sdk-release that referenced this issue Oct 25, 2022
* Experiment running system-tests for mysql inside containers

[#183164582]

Signed-off-by: Fernando Naranjo <fnaranjo@vmware.com>

* Build database-backup-restore and fix echo commands

[#183164582]

Also replace some additional commands performed on BOSH VM

* Ensure we invoke `bash -c` for every `go.Command`

[#183164582]

Running os commands using goexec is not identical to running
those commands in a shell such as bash.
Some examples of known things to require a bash shell:
- Pipes
- Redirects

Signed-off-by: Fernando Naranjo <fnaranjo@vmware.com>

* Create workflow to run MySQL tests on PRs

[#183164582]

Signed-off-by: Fernando Naranjo <fnaranjo@vmware.com>

* Generate TLS certificates and shared them across containers

[#183164582]

Mysql service is the one in charge of creating the certificates. To make these
generated certificates available to the container running the tests we use a
shared volume named `mysql-certs` which we mount in both containers.

* Support testing several MYSQL versions using GHAction matrix

[#183164582]

For this we need to pass the MYSQL version we want to test as a Docker build arg

* Remove from matrix failing unsupported versions of MYSQL

[#183164582]

There are chances that the versions removed could also work
with our current release but they need some packages not included by default in
the Dockerimage. Doing the extra effort to make the pass is irrelevant to the
BOSH release itself, since the filesystem is not identical to what will be included
in the VM.

We might need to investigate how to reduce this filesystem mismatch if possible.

Signed-off-by: Fernando Naranjo <fnaranjo@vmware.com>

* WIP try to improve some docs-type things

[#183164582]

Signed-off-by: Gareth Smith <sgareth@vmware.com>

* Remove unneded folder intialization steps

[#183164582]

They seem to have inconsistent behaviour on different operating systems

* Restore folder initialization and add it to system-mysql Dockerfile too

[#183164582]

* Print database logs in case of test failures

[#183164582]

* Reorganize VOLUME declarations to overcome a possible bug in Docker

[#183164582]

docker/compose#3270 (comment)

* Replace chmod with chown to better express the intent of the operation

[#183164582]

* Revert "Replace chmod with chown to better express the intent of the operation"

This reverts commit e102885.

It seems `chmod root:root` counteracts `chmod mysql:mysql` causing a race condition
between the two containers. It seems we are better off using `chmod 777` here.

* Improve debugging experience by mounting the whole repo instead of a subfolder

[#183164582]

By doing this any error message in the logs would point to the exact path where
the problem occurred instead of a docker-mounted path in an arbitrary location.

This change pursues reducing the knowledge burden introduced by the use of Docker
by trying to make the Docker integration as thin and transparent as possible.
Ideally, not requiring any Docker knowledge to be able to work with the codebase.

* Improve unit-tests debug experience by mounting full repo instead of subfolder

[#183164582]

By doing this any error message in the logs would point to the exact path where
the problem occurred instead of a docker-mounted path in an arbitrary location.

This change pursues reducing the knowledge burden introduced by the use of Docker
by trying to make the Docker integration as thin and transparent as possible.
Ideally, not requiring any Docker knowledge to be able to work with the codebase.

* Refactor system-db-tests to use a similar approach to unit-tests

[#183164582]

Some goals of this refactor were:
- Don't have an explicit ENTRYPOINT in Dockerfile, specify it in docker-compose
- Move most of the boilerplate required by the tests to the tests folder
- Remove hardcoded paths in boilerplate environment variables required by tests
- Make scripts/run-system-db-tests-mysqsl more self-container and self-documenting
- Flatten the path to start adding tests for other databases such as Postgres
- Flatten the path to add some tests for testing the version detection logic

* Simplify TestRunner.Dockerfile and `backup`/`restore` invocations

[#183164582]

Original tests made use of two auxiliary scripts `backup`/`restore` to
set the required environment variables every time they were used.
Since these variables are static and never change we added these vars
to the `run-system-db-tests-mysql.bash` script itself and invoke the
`database-backup-restore` binary directly instead from the tests.

* Reduce certs handling in `Dockerfile` and `docker-compose.yml`

[#183164582]

By aglutinating the three previous variables each one pointing to
a different certificate into a single variable pointing to a folder
we should contain the three files.
These files are expected to always be named the same so there isn't
a need to specify the full path for each of them individually.

* Fix errors introduced in latest commit

[#183164582]

Some of the environment variables which previously we thought were
not needed explicitly by the tests were in fact required for mysql
to successfully establish a connection using TLS certificates.

* Add MySQL5.7 binary as it is used to determine version of db

[#183164582]

By looking at the source-code we determined that mysql5.7 binary
is used to connect with the mysql database and ask for its version.
Only later, the corresponding binary will be choosed and used.

Signed-off-by: Fernando Naranjo <fnaranjo@vmware.com>

* Add tests for different versions of MariaDB

[#183164582]

In this case we didn't copy the binary from the mariadb Docker
image because it had unsatisfied dependencies (such as libssl3)

* Trigger PR workflows

* Show database logs on error to facilitate debugging

[#183164582]

* Workaround permission issues when specifying Volume in Dockerfile

[#183164582]

* Try triggering tests again

[#183164582]

* Use chmod 777 both in TestsRunner and BackingDB Dockerfiles

[#183164582]

* Fix mysql5.7-debian tests and make mysql tests more precise

[#183164582]

Since bbr-sdk-release always uses 5.7 at least once (to query the db version)
we always need to specify MYSQL_CLIENT_5_7_PATH.
Since we are using GitHub matrixes to tests several MySQL versions using the
same scripts, it seems difficult to use the MYSQL_VERSION variable to download
only the required binaries since we will need some conditional logic to populate
the right ENV variables.

For that reason this commit simply downloads the latest MySQL 8.0 and MySQL 5.7
binaries and populates the corresponding ENV variables every time, without caring
which specific MYSQL_VERSION we are currently testing.

* Add Postgres system tests

[#183164582]

* Fix typo in Postgres system tests workflow

[#183164582]

* Test compilation against diff stemcells and improve system tests

[#183164582]

To test compilation against diff stemcells we use multistage Docker containers
to download bosh-lite warden tarballs and using the rootfs included in the tarball
to create a container FROM scratch.

Inside that dockerized stemcell we run the release packaging scripts which allows
testing compilation against different stemcell using GHActions matrix strategies.

Now that we are able to compile the release directly in GHActions, we can improve
system tests by leveraging the binaries built on-the-fly from the blobs.
This approach has huge benefits compared to the previous one:
- Tests are more true to reality
- Bosh blobs are fully tested (instead of testing just the GO code)
- Compilation is being tested against different stemcells

* Use `backup` and `restore` scripts from the /var/vcap/jobs folder

[#183164582]

Also, compiled database-backup-restorer using its packaging script
instead of manually running golang during the `run-system-db-tests` scripts
and refactored tests to use the binary located in /var/vcap/packages

* Reduce verbosity of the logs

[#183164582]

* Add ubuntu-xenial tests except for MySQL 8.0 which is unsupported

[#183164582]

* Run postgres_tls and postgres_mutual_tls test suites in containers

[#183164582]

Signed-off-by: Fernando Naranjo <fnaranjo@vmware.com>

* Fix Postgres TLS and MUTUAL_TLS test suites

[#183164582]

* Ensure system-db-postgres-backing-db database gets cleaned up after each job

[#183164582]

* Fix ENABLE_TLS var not being passed to docker-compose run commands

[#183164582]

All tests were running the basic Postgres tests instead of TLS and MUTUAL

* Run dockerize-release as a separate step from the tests

[#183164582]

This change will help us differentiate how much time is spent in the actual tests versus
compiling the release itself. This should serve as a first step towards optimizing builds

* Prevent running Postgres TLS and MTLS tests for 9.4-alpine

[#183164582]

postgres:9.4-alpine doesn't have the openssl package installed by default,
which causes the database to error before starting and prevents the tests from running.

Testing Postgres 9.4 without TLS/MTLS should be enough for now to ensure this
version is supported. Patching postgres Dockerfile to add openssl is possible
but might not be needed or desirable since Postgres 9.4 reached end of general support in February 2020

* Disable 9.4-alpine for Mutual TLS tests, but not for normal tests

[#183164582]

* Move dockerfiles from .github/actions to ci/

[#183164582]

Signed-off-by: Diego Lemos <dlemos@vmware.com>

* Restore tests and guard new BOSH-less functionality under a flag

[#183164582]

* Redirect stderr to stdout when running with flag RUN_TESTS_WITHOUT_BOSH

[#183164582]

The tests expected error messages to be available in stdout, probably because when
running the commands by SSHing into a BOSH VM the errors are retrieved from stdout.

Signed-off-by: Fernando Naranjo <fnaranjo@vmware.com>
Signed-off-by: Gareth Smith <sgareth@vmware.com>
Signed-off-by: Diego Lemos <dlemos@vmware.com>
Co-authored-by: Gareth Smith <sgareth@vmware.com>
Co-authored-by: Diego Lemos <dlemos@vmware.com>
Co-authored-by: Cryogenics CI Bot <mapbu-cryogenics@groups.vmware.com>
@ksaito1125

I solved it without modifying the Dockerfile or the entrypoint.

Mount the named volume at /path with uid 70 using the compose.yml below.

Create the named volume and change its owner

Make sure the named volume does not already exist, then create it (here $VOL_NAME is random):

$ echo $VOL_NAME
random
$ docker volume ls | grep $VOL_NAME
$ docker volume create $VOL_NAME
random
$

Change the owner of the named volume:

$ docker run -it --rm -v $VOL_NAME:/aaa alpine chown 70:70 /aaa

Specify external in the volumes section of compose.yml

Declare the volume as external, as shown below:

$ cat compose.yml 
version: '2'
services:
  web:
    image: alpine:3.3
    volumes: ['random:/path']
    command: ['tail', '-f', '/dev/null']

volumes:
  random:
    external: true

Start compose and check the owner.

$ docker compose up -d
[+] Running 1/1
 ⠿ Container dynamodb-perm-test-web-1  Started 
$ docker compose exec web ls -la /path
total 8
drwxr-xr-x    2 postgres postgres      4096 Oct 28 05:41 .
drwxr-xr-x    1 root     root          4096 Oct 28 05:46 ..
$ docker compose exec web grep 70 /etc/passwd
postgres:x:70:70::/var/lib/postgresql:/bin/sh
$

Although named volumes are part of Docker, I believe they should be managed separately from the container, just like any other external storage.
Therefore, I think it is better not to change volume ownership from the Dockerfile or the entrypoint.

@iven

iven commented Nov 21, 2022

ndeloof's solution helped me resolve the same issue. Note that only the short volume syntax works; the long syntax still mounts volumes as root, even if you chowned the directory before mounting.
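For reference, the two syntaxes look like this (service and volume names here are illustrative, not from the original comment):

```yaml
services:
  web:
    image: alpine:3.3
    volumes:
      # short syntax: picks up the ownership set by a prior chown
      - mydata:/path
      # long syntax: reported above to end up owned by root instead
      # - type: volume
      #   source: mydata
      #   target: /path

volumes:
  mydata:
    external: true
```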

@fabpico

fabpico commented Jun 12, 2023

So, long story short for the impatient:

RUN mkdir /volume_data
RUN chown postgres:postgres /volume_data

Creating the volume directory beforehand and chowning it solves the problem, because the volume preserves the permissions of the preexisting directory.
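A minimal Dockerfile sketch of this pattern (the image, path, and user are illustrative; it assumes the postgres user already exists in the base image):

```dockerfile
FROM postgres:15

# Create the mount point and hand it to the postgres user
# *before* any volume is attached to it.
RUN mkdir -p /volume_data \
 && chown postgres:postgres /volume_data

# A named volume first mounted here, while empty, copies the
# preexisting ownership, so the directory stays owned by postgres.
VOLUME /volume_data
```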

This stops working when the files are in WSL: the owner becomes "xfs" and it seems to be unchangeable.

@raphaelauv

raphaelauv commented Aug 24, 2023

thanks @giraffesyo

I made a slightly more complete example where the original file is not yet in the volume:

  my-service-init:
    image: node:16
    user: root
    group_add:
      - '1000'
    volumes:
      - ./cred.json:/opt/conf/cred.json:ro
      - my-service-recover:/opt/something/conf/
    command: bash -c "ls -la /opt/conf/ && ls -la /opt/something/conf/ && cp /opt/conf/cred.json /opt/something/conf/cred.json && chown node /opt/something/conf/cred.json && chmod 755 /opt/something/conf/cred.json && ls -la /opt/something/conf/"

giving ->

total 12
drwxr-xr-x 2 root root 4096 Aug 24 08:22 .
drwxr-xr-x 1 root root 4096 Aug 24 08:22 ..
-rw------- 1 1000 1000  336 Aug 24 07:42 cred.json
total 8
drwxr-xr-x 2 node root 4096 Aug 24 06:56 .
drwxrwxr-x 1 node root 4096 Aug 24 06:56 ..
total 12
drwxr-xr-x 2 node root 4096 Aug 24 08:22 .
drwxrwxr-x 1 node root 4096 Aug 24 06:56 ..
-rwxr-xr-x 1 node root  336 Aug 24 08:22 cred.json

@mogul

mogul commented Oct 2, 2024

I want to draw attention to this elegant solution which is fully encapsulated in the compose file: You can use restart: "no" with a one-shot container that just changes the named volume's ownership, and use depends_on: to have the other container(s) that use the volume wait until that operation is complete. Worked example here.

@nathanweeks

@mogul While it seems unlikely a data race would occur in practice, to guarantee that the chown command specified in the entrypoint of the files-init service (in the Stack Overflow example) has completed before the service-logs service starts, I think a service_completed_successfully condition is needed (as illustrated in the depends_on long syntax), i.e.:

    service-logs:
        image: alpine
        depends_on:
            files-init:
              condition: service_completed_successfully
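Putting the one-shot chown container and the service_completed_successfully condition together, a minimal sketch of the whole pattern might look like this (the service names, images, path, and uid are illustrative, not taken from the linked example):

```yaml
services:
  files-init:
    image: alpine
    restart: "no"  # one-shot: run once, then exit
    volumes:
      - app-data:/data
    # hand the named volume to uid 1000 before anyone else uses it
    entrypoint: ["chown", "-R", "1000:1000", "/data"]

  app:
    image: alpine
    user: "1000:1000"
    volumes:
      - app-data:/data
    command: ["tail", "-f", "/dev/null"]
    depends_on:
      files-init:
        # wait until the chown container has exited successfully
        condition: service_completed_successfully

volumes:
  app-data:
```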

mogul added a commit to mogul/spiff-arena that referenced this issue Oct 3, 2024
@mogul

mogul commented Oct 3, 2024

I think a service_completed_successfully condition would be needed (as illustrated in the depends_on long syntax)?

Great point!💡

I tested that that works locally, and I've updated that in the PR that I pointed to as my worked example.🙇‍♂️
