Raspbian / RaspberryPiOS no longer allowing user Pi #542

Open
NossieUK opened this issue Apr 9, 2022 · 28 comments · May be fixed by #553

Comments

@NossieUK

NossieUK commented Apr 9, 2022

I know that the script relies on Pi existing as a user ....

Just wondering if any moves have been made to change this to allow other users, given the news that the default pi user is going away?

@Paraphraser

I am reasonably sure that there are no actual dependencies on /home/pi (rather than $HOME) or things like chown pi:pi (rather than chown $USER:$USER) left. There were a few in gcgarner days but I think they've all been nailed as various eyeballs have passed over them.

There are quite a few "bad examples" ("bad" since 2020-04-04) in the documentation which should probably be turned into "better" examples. But those don't affect anything.

The assumption of $HOME/IOTstack is a bit more deep-seated.

Quite a few templates have things like PUID=1000 which implicitly assume that 1000=$USER. Nothing will break if 1000 is not known to /etc/passwd, or if it's known but doesn't map to the ID of the logged-in user. Those are usually about making sure that sub-folders of ~/IOTstack/volumes/<container>/ can be read/written without needing sudo. It's possible that something should be done about those but I don't think they break anything.

With IOTstack, the "user pi" at "/home/pi" with "ID 1000" has always been more about testing. Those being the defaults (until now) they got a lot more testing than any "roll your own". Stating those assumptions explicitly has always been directed more towards getting each user to at least ask themselves the question, "if I've rolled my own environment, is it possible that that explains my current problem?"

I might turn out to be wrong but we'll only really know if/when issues are opened pointing to specific incompatibilities. Fingers crossed...

@Paraphraser

The answer to the ID=1000 question is that if you select a different username (eg "me" instead of "pi") in Raspberry Pi Imager, that user (ie "me") gets ID=1000. So that means all the PUID=1000 continue to have the same semantics.
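
A quick way to confirm which IDs are actually in effect for the logged-in user (nothing IOTstack-specific, just the standard id command):

$ id -un   # username
$ id -u    # numeric user ID (1000 for the first user created)
$ id -g    # numeric group ID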

@simonmcnair

Ideally, for me, there wouldn't be any hardcoding. It's fine to assume that the IOTstack folder exists in $HOME, but there shouldn't be any presumption about the user being 1000, or that you're running on a Pi. It should be detected or enumerated. I would go so far as to say that, in the case of Bluetooth for instance, it should be enumerable and an env flag set for the remainder of the session.

Yes, I know this makes it more work, and more difficult, it's just my opinion.

@Paraphraser

Well, any of us saying "shouldn't" is true. But there are just so many containers that assume ID=1000 that it becomes a practical question of how to adapt them.

Every individual user is entirely free to declare, "I don't want to use 1000, I want to use X". But it is then up to that person to follow through and make sure that none of the containers they want to run depends on 1000 and, if it does, fix it.
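
A rough first pass at that follow-through, assuming the standard ~/IOTstack location, is just to search the compose file for hard-coded IDs:

$ grep -n "1000" ~/IOTstack/docker-compose.yml
# every match is a candidate for fixing (or for confirming it's harmless)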

@simonmcnair

Possibly the other answer is to create a user and/or group called IOTStack and assign it all to that user/group. Yes, this abstracts everything away from the Raspberry Pi defaults, but the more abstraction there is, the less dependent you are on third-party changes?

@Paraphraser

It's probably time for a somewhat longer response to your question.

First, a bit of conceptual background to set the scene.

The web is full of information about processes that have been "containerised" along with examples of how to use those containers in a docker environment. I'm sure you've seen them. In general, they break down into:

  1. docker run ...
  2. sample service definitions to use in a docker-compose.yml
  3. both of the above.

If you took a random sample of group #2 off the web, you'd find a whole bunch of different approaches. In particular, examples differ in:

  1. how persistent storage is handled; and
  2. which ports are mapped.

If you concatenated a random sample of group #2 service definitions into a docker-compose.yml and told docker-compose to bring up that collection of containers (a "stack"), you'd probably wind up with persistent storage scattered all over your Pi's file system, a reasonable chance of port conflicts as containers contended for the same external port numbers, plus a non-zero probability that a few containers simply would not start because they make assumptions that don't hold.

IOTstack is not a "system", as such. It's a set of conventions which are designed to give you a better-than-even chance that a group of containers you select from the menu will work out-of-the-box. In particular, IOTstack's conventions:

  1. place every container's persistent storage in a sub-folder of ~/IOTstack/volumes/;

  2. minimise the likelihood of port conflicts by not allocating the same external port to two service definitions unless it is unavoidable;

    An example of an unavoidable port conflict is PiHole and AdGuardHome. Both are going to want external port 53. It makes no sense to run two ad-blockers. The IOTstack menu won't prevent you from selecting both containers but docker-compose will grizzle when you try to bring up the stack.

  3. create, to the maximum extent possible, the conditions where a container's assumptions hold on first launch.

    Earlier incarnations of IOTstack used scripts named directoryfix.sh to try to preset assumptions. Those scripts were only ever run by the menu and only then under certain conditions. Those scripts have been replaced, progressively, with Dockerfiles which give containers that might misbehave on first launch the "smarts" to self-repair.

A key thing to keep in mind is that we (IOTstack) still have to work within the rules and constraints imposed by docker and docker-compose.

After a clean clone from GitHub, there is no ~/IOTstack/volumes folder nor any sub-folders. The first time you bring up a stack, docker-compose creates ~/IOTstack/volumes plus the left-hand-side of each volume map. Using the WireGuard service-definition as an example:

volumes:
- ./volumes/wireguard:/config

docker-compose will do the equivalent of:

$ sudo mkdir -p ~/IOTstack/volumes/wireguard

On first launch, /home, /home/pi and /home/pi/IOTstack will exist (so they won't be touched) but both volumes and wireguard will be created and owned by root.

When the WireGuard container starts, it does the equivalent (from inside the container) of:

# chown -R $PUID:$PGID /config

The container is running as root so these changes propagate to the external file system. I assume the logic is to minimise the need for the user to use sudo when manipulating files in the container's persistent storage area.

Some containers also downgrade their privileges. Two examples that I can think of are mosquitto and pihole but those use user IDs of 1883 and 999, respectively, not 1000.
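
If you want to see this for yourself, listing the volumes folder with numeric owners makes the pattern obvious (assuming the standard path):

$ ls -ln ~/IOTstack/volumes
# the third and fourth columns are the numeric owner and group IDs (eg 0, 1000, 1883, 999)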

The obvious next question is, where does the WireGuard container get PUID and PGID? Answer, via environment variables contained in its service definition:

$ cat ~/IOTstack/.templates/wireguard/service.yml
wireguard:
  container_name: wireguard
  image: ghcr.io/linuxserver/wireguard
  restart: unless-stopped
  environment:
  - PUID=1000
  - PGID=1000
  - TZ=Etc/UTC
  - SERVERURL=your.dynamic.dns.name
  - SERVERPORT=51820
  - PEERS=laptop,phone,tablet
  - PEERDNS=auto
  # - PEERDNS=172.30.0.1
  - ALLOWEDIPS=0.0.0.0/0
  ports:
  - "51820:51820/udp"
  volumes:
  - ./volumes/wireguard:/config
  - /lib/modules:/lib/modules:ro
  cap_add:
  - NET_ADMIN
  - SYS_MODULE
  sysctls:
  - net.ipv4.conf.all.src_valid_mark=1

That's the template. In a running system the menu would have copied the template into ~/IOTstack/docker-compose.yml and then, hopefully, the user customised it before bringing the stack up.

At this point, it's probably a good idea to get an appreciation for the extent of this general pattern:

$ find ~/IOTstack/.templates -name "service.yml" -exec grep -H "=1000" {} \;
/home/pi/IOTstack/.templates/prometheus/service.yml:    - IOTSTACK_UID=1000
/home/pi/IOTstack/.templates/prometheus/service.yml:    - IOTSTACK_GID=1000
/home/pi/IOTstack/.templates/plex/service.yml:    - PUID=1000
/home/pi/IOTstack/.templates/plex/service.yml:    - PGID=1000
/home/pi/IOTstack/.templates/gitea/service.yml:    - USER_UID=1000
/home/pi/IOTstack/.templates/gitea/service.yml:    - USER_GID=1000
/home/pi/IOTstack/.templates/homebridge/service.yml:    - PGID=1000
/home/pi/IOTstack/.templates/homebridge/service.yml:    - PUID=1000
/home/pi/IOTstack/.templates/domoticz/service.yml:    - PUID=1000
/home/pi/IOTstack/.templates/domoticz/service.yml:    - PGID=1000
/home/pi/IOTstack/.templates/qbittorrent/service.yml:      - PUID=1000
/home/pi/IOTstack/.templates/qbittorrent/service.yml:      - PGID=1000
/home/pi/IOTstack/.templates/transmission/service.yml:    - PUID=1000
/home/pi/IOTstack/.templates/transmission/service.yml:    - PGID=1000
/home/pi/IOTstack/.templates/syncthing/service.yml:      - PUID=1000
/home/pi/IOTstack/.templates/syncthing/service.yml:      - PGID=1000
/home/pi/IOTstack/.templates/heimdall/service.yml:    - PUID=1000
/home/pi/IOTstack/.templates/heimdall/service.yml:    - PGID=1000
/home/pi/IOTstack/.templates/homer/service.yml:    - UID=1000
/home/pi/IOTstack/.templates/homer/service.yml:    - GID=1000
/home/pi/IOTstack/.templates/n8n/service.yml:#            - PGID=1000
/home/pi/IOTstack/.templates/n8n/service.yml:#            - PUID=1000
/home/pi/IOTstack/.templates/nextcloud/service.yml:    - PUID=1000
/home/pi/IOTstack/.templates/nextcloud/service.yml:    - PGID=1000
/home/pi/IOTstack/.templates/wireguard/service.yml:  - PUID=1000
/home/pi/IOTstack/.templates/wireguard/service.yml:  - PGID=1000
/home/pi/IOTstack/.templates/mariadb/service.yml:    - PUID=1000
/home/pi/IOTstack/.templates/mariadb/service.yml:    - PGID=1000
/home/pi/IOTstack/.templates/python/service.yml:    - IOTSTACK_UID=1000
/home/pi/IOTstack/.templates/python/service.yml:    - IOTSTACK_GID=1000
/home/pi/IOTstack/.templates/blynk_server/service.yml:    - IOTSTACK_UID=1000
/home/pi/IOTstack/.templates/blynk_server/service.yml:    - IOTSTACK_GID=1000

Why the variety of xID, PxID, USER_xID and IOTSTACK_xID? In the case of the ones with an IOTSTACK_ prefix, it's to avoid any semantic collisions. As for the others: beats me. Just is.

Your next thought is likely to be, "why not abstract all those hard-coded 1000 to the ID of the user running docker-compose?" Good idea. Shame it doesn't work. Let me demonstrate.

Let's go back to the WireGuard service definition and add another environment variable:

- SIMONMCNAIR=example

Now I'll tell docker-compose to apply that:

$ UP wireguard
[+] Running 1/1
 ⠿ Container wireguard  Started  5.1s

And now I'll prove that PUID, PGID and SIMONMCNAIR are available within the container:

$ docker exec wireguard bash -c 'echo "PUID=$PUID, PGID=$PGID, SIMONMCNAIR=$SIMONMCNAIR"'
PUID=1000, PGID=1000, SIMONMCNAIR=example

I assume you know that UID and EUID are available to the shell and, in the case of the default user on a Raspberry Pi, are going to have the value 1000:

$ echo "UID=$UID, EUID=$EUID"
UID=1000, EUID=1000

So let's try this pattern in the compose file:

- SIMONMCNAIR=${UID}

Apply that:

$ UP wireguard
WARN[0000] The "UID" variable is not set. Defaulting to a blank string. 
[+] Running 1/1
 ⠿ Container wireguard  Started  5.1s
$ docker exec wireguard bash -c 'echo "PUID=$PUID, PGID=$PGID, SIMONMCNAIR=$SIMONMCNAIR"'
PUID=1000, PGID=1000, SIMONMCNAIR=

The same thing happens if I substitute EUID. That's because UID and EUID are set by the shell but not exported, so docker-compose never sees them. Let's try setting up our own variable and exporting it:

$ export MYUID=$UID
$ echo "MYUID=$MYUID"
MYUID=1000

Editing:

- SIMONMCNAIR=${MYUID}

Testing:

$ UP wireguard
[+] Running 1/1
 ⠿ Container wireguard  Started  5.2s
$ docker exec wireguard bash -c 'echo "PUID=$PUID, PGID=$PGID, SIMONMCNAIR=$SIMONMCNAIR"'
PUID=1000, PGID=1000, SIMONMCNAIR=1000

That works. As a strategy for getting the current UID into containers, you would only need to do something like add export MYUID=$UID to your ~/.profile. And, fairly obviously, we could generalise the scheme by adopting a convention like:

- PUID=${MYUID:-1000}
- PGID=${MYUID:-1000}
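
(If you wanted to experiment with that convention, a minimal sketch would be the following, assuming the PUID/PGID lines in the compose file have been edited as above:)

$ echo 'export MYUID=$(id -u)' >> ~/.profile
$ source ~/.profile
$ cd ~/IOTstack
$ docker-compose config | grep -E "PUID|PGID"
# the resolved values should now show your actual ID rather than the 1000 fallback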

Some comments:

  1. Changing someone's .profile or .bashrc is beyond the scope of IOTstack. We could recommend it but we can't enforce it.
  2. A scheme where 1000 was the default would obviously work on systems that didn't define MYUID, provided the UID really was 1000; however, you'd have the same problem as now if another user ID was in effect. In other words, it lacks a certain something as a bullet-proof solution.
  3. There are corner cases where .profile or .bashrc don't run (eg cron) so that lacks a certain something too. You could set MYUID=1000 in ~/IOTstack/.env. The menu could do this too but it would be a bit of a problem to track down if the ID were to change without the menu being re-run. Don't ask me why Docker decided to make that file invisible. It's a trap for the unwary.
  4. The output from the find command above is just the examples that are exposed in IOTstack templates. A lot of containers support environment variable keys for passing the user and group ID and default to 1000 internally if the key is omitted. For IOTstack to be able to claim "we are ID agnostic", we would need to figure out the answer for every container and inherit the maintenance problem every time a container's maintainer decided to change something, or whenever a new container was added to IOTstack. IMO, that's a lot of work for not much actual gain.

Bottom line: ID=1000 is where it's at.

We are reasonably sure that the actual username "pi" does not matter. But we are equally reasonably sure that s/he who futzes with ID=1000 will find her/himself neck deep in repair work.

Should it be that way? Perhaps not. But IOTstack isn't responsible for this state of affairs. You really need to discuss the problem with the maintainer of every container that needs to know the ID of the external user.

Does that answer your question?


By the way, I like to explore questions of this kind in a reasonable amount of detail because, for every person who asks something like this, there are bound to be plenty of others who are wondering about it. I reckon it's better to try to cover off as much of the how, what, when, where and why as possible.

Plus, everything I write is implicitly tied to my own limited knowledge and questionable assumptions. I'm no expert. You, or someone else following this thread, may well spot the flies in my logical ointment, find a solution, and we all benefit. Go for it!

@simonmcnair

Thank you for generously taking the time to explain the logic. I guess sometimes things are just the way they are and it would take a larger amount of effort to try and change it, than to go along with the current. Thanks again for your time. I appreciate it.

@simonmcnair

I've been doing some reading, and investigation, and nothing I'm going to state here detracts from your excellent synopsis above.

Apparently a good way of getting the current userid is export TEST=$(id -u)

in a dockerfile (which is not appropriate for most of this) it can be done, I believe, as follows:

RUN export currentuserid=$(id -u);echo $currentuserid;

I think another fair perspective is that a user ID is either arbitrary (which is harder to handle programmatically) or standard, i.e. root or pi (which is worse from a security perspective).

My view would be to create an IOTuser and an IOTgroup and set everything to that (no different to how most daemons act). That would also mean uninstallation would be easier, the locations of files could be prescribed definitively and, from a security perspective, you'd get minimum rights and a smaller attack surface.

I would also be inclined to include a random password generator, generate passwords for each service and drop them, and the full urls into a reference file which could then be used to import in to a password manager and/or book mark manager.

At this point the conversation has descended into conjecture, design and opinion, for which I apologise.

Thanks for tolerating my blathering.
:-)

@ukkopahis

Any problems with a solution to:

  • change all templates like this
    - PUID=${IOTSTACK_UID:-1000}
    - PGID=${IOTSTACK_GID:-1000}
  • when the menu is executed: add the definitions to ~/IOTstack/.env:
    IOTSTACK_UID=$(id --user)
    IOTSTACK_GID=$(id --group)

I can implement, update docs and make the PR.
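
(For concreteness, a minimal sketch of that second step, assuming it's done from a shell and simply appended to the file — a real implementation would need to guard against duplicate entries:)

$ cat >> ~/IOTstack/.env <<EOF
IOTSTACK_UID=$(id --user)
IOTSTACK_GID=$(id --group)
EOF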

@simonmcnair

I, personally, would prompt/test to create an IOTStack user and group with a random pwd (logged) but I'm sure you both know better than me (newb). Perhaps give the option to run as current user if an additional user is not desired.
I would not assume anything, and would always define things explicitly to prevent misinterpretation and increase security. I would never use 1000 for anything, ever, as it would assume that the first user created exists and/or that the current user wants to inherit ownership.
It's easy for me to get on my high horse and not do the work though, so apologies for the presumption.

Thanks for the effort you've all put in, I appreciate it.

@ukkopahis

prompt/test to create an IOTStack user and group with a random pwd (logged)

The new Raspberry Pi OS installer already prompts for a username and password.

Perhaps give the option to run as current user if an additional user is not desired.

The user ID used in containers is just for convenience, so that your main login can access files they create without using sudo. It's nothing to do with security. Regardless of what you choose, the service will work.

@SimonMcN

That will change if you ever implement a Samba service or something similar? Either way, if you say it's fine, it's fine :-)

@Paraphraser

@ukkopahis the only "problem" is the containers that do support some form of ID environment variable but where we (IOTstack) haven't yet captured it in our templates. Those will still default to 1000.

I've only really noticed this in passing so don't quote me if I'm mis-remembering but I think it's the LinuxServer builds which do that.

I'm not sure this matters in the scheme of things. After all, we get the occasional "WTF?" question about 1883 for Mosquitto which does no real harm so, if the current ID is not 1000 and there are containers assuming it is, the worst that is going to happen is the need to use sudo when mucking about in volumes. Like that's an exception to the rule...

On balance, I'd say "go for it". It will capture a reasonable number of cases out-of-the-box and be a standard we can work towards.

@Paraphraser

Apparently a good way of getting the current userid is export TEST=$(id -u)

in a dockerfile (which is not appropriate for most of this) it can be done, I believe, as follows:

RUN export currentuserid=$(id -u);echo $currentuserid;

I think you might misunderstand what is going on inside Dockerfiles. When a RUN command executes, you're inside the container. The "current user" is whatever is in effect at that point in the Dockerfile. In most cases, that's root. In others it's a user defined within the container, most commonly by the upstream container's maintainer.

Node-RED is a good example. At the point of the FROM statement in the Dockerfile, the user in effect is called "node-red" and, guess what, that name is mapped to ID=1000:

$ docker exec nodered grep "^node-red" /etc/passwd
node-red:x:1000:1000:Linux User,,,:/usr/src/node-red:/bin/ash

See how that ID=1000 keeps popping up. It's a persistent little beast.

If you look at IOTstack's Dockerfile for Node-RED, you'll see this kind of pattern (this is my actual Dockerfile for Node-RED - each IOTstack user's is going to be different):

FROM nodered/node-red:latest-14
USER root
RUN apk update && apk add --no-cache eudev-dev mosquitto-clients bind-tools tcpdump tree
USER node-red
RUN npm install \
  node-red-node-pi-gpiod \
  node-red-dashboard \
  node-red-contrib-influxdb \
  node-red-contrib-boolean-logic \
  node-red-node-tail \
  node-red-configurable-ping \
  node-red-node-email

Running apk needs to be done as root but the user in effect at that point is inherited from the image specified in the FROM statement. As I said above, it's "node-red". We need to change to "root" for the apk to run, and then change it back again. Running id -u is going to get either 0 or 1000, depending on where it's run in that sequence. But even if it's 1000, it is the container's 1000, not the Pi's 1000. You could be using ID=1234 outside the container but the container will still see 1000.

It doesn't matter which container you are talking about. Running id -u in a Dockerfile will always get its answer from the container, not the hosting Pi.
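
(Easy to demonstrate from the host — assuming the Node-RED container built from the Dockerfile above is running:)

$ docker exec nodered id -u
1000
$ id -u    # the host's answer is whatever ID you logged in with; the container's answer doesn't change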

Plus, the majority of containers come "as is" from DockerHub. It's only a subset where we use Dockerfiles so, even if this did actually work, it would not be a universal solution.

Which would also mean that uninstallation would be easier...

Well, "uninstallation" is both "easy" and "somewhat convoluted", depending on what you want to achieve.

It's pretty easy for whatever user you define:

$ sudo rm -rf ~/IOTstack

Bang. All gone.

However, although it's simple to nuke the current user's IOTstack folder, you are still left with container images. Those exist outside of $HOME. You need to run docker images to find out what they are, then docker rmi to get rid of each one. There are other nooks and crannies to clean out too if you want to do a thorough job but that doesn't detract from the material point that getting rid of all that baggage is independent of whatever username and user ID you choose.
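
(A sketch of that clean-up — the image name is just an example:)

$ docker images
$ docker rmi nodered/node-red:latest-14   # repeat for each unwanted image
$ docker system prune                     # optional: stopped containers, unused networks, dangling images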

At this point the conversation has descended into conjecture, design and opinion, for which I apologise.

Thanks for tolerating my blathering.

I don't mind any of that. Discussion is how we get to solutions.

@ukkopahis

ukkopahis commented Apr 28, 2022

Regardless of what you choose, the service will work.

This got me thinking. In actuality this statement should be hedged with "as long as the chosen user-id matches the owner of the existing files in volumes". It's rare (and a bit risky) for a container's entrypoint script to do the required 'chown' operations for a changed user-id. It's therefore the user's responsibility to match the container PUID to the file owners of volumes/<service>/*.

As such, when we remove the "User has the user ID 1000" assumption from 'Getting started'-page, we can't change the PUID of existing containers even if they are 'wrongly' defaulting to PUID=1000. Or if we do such a change later, we'll have to add an entrypoint script or a post-build migration script to menu.sh that fixes volumes/service/* file ownerships.

As an example, assume: a) an existing service defaults to PUID=1000; b) the logged-in user rob has ID 999:

  1. Follows: files in /volumes/service/* have an owner-id of 1000, and aren't easily accessible by rob.
  2. Also follows: changing docker service to PUID=999 would make the service have problems accessing/changing /volumes/service/*-files that still have owner-id=1000.

Therefore the pull-request I'm working on, and all future added services, must set the user-id ("PUID") correctly in their service.yml from the get-go (if the container supports it). Changing it later is a migration headache that isn't worth it.
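
(For the record, the migration being avoided would look something like the following — purely illustrative, reusing the rob/999 example and a hypothetical service name:)

$ docker-compose stop service
$ sudo chown -R 999:999 ~/IOTstack/volumes/service
$ docker-compose up -d service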

@Paraphraser

A first-time user deploying IOTstack on a Pi where the ID is not 1000 is probably not too much of a concern because it's probably safe to assume they're running the menu, so fixes can be applied.

I've been wondering more about the migrating user. Running IOTstack now with ID 1000. Takes a backup, rebuilds a Pi with a different ID, then restores the backup. Not sure what the IOTstack restore scripts do but IOTstackBackup passes the "same owner" flag to tar so everything will come back with the wrong ID.

I'm leaning towards dealing with this entire scenario via caveat rather than by code. A sentiment along the lines of

those dumb enough to try it get to solve their own fang-sucking problems

if you get my drift...

@ukkopahis

ukkopahis commented Apr 28, 2022

When considering the troubles of accidentally using the wrong UID, I think it's better to not have a default, but use mandatory variables instead:

- PUID=${IOTSTACK_UID:?IOTSTACK_UID must be defined in .env}
- PGID=${IOTSTACK_GID:?IOTSTACK_GID must be defined in .env}

Then, if by some magic, the end-user manages to delete the .env-file, instead of risking a jolly file-ownership-mess we get a clean error:

$ docker-compose up -d  --force-recreate pihole
ERROR: Missing mandatory value for "environment" option interpolating ['WEBPASSWORD=%randomAdminPassword%', 'INTERFACE=eth0', 'IOTSTACK_UID=${IOTSTACK_UID:?IOTSTACK_UID must be defined in .env}'] in service "pihole": IOTSTACK_UID must be defined in .env

which hopefully alerts the user to the .env file missing due to a problem in their backup, restore, cloning, etc...

@ukkopahis

rebuilds a Pi with a different ID

Self inflicted, unforced problem. Redo with better judgment. Practice makes perfect ;)

I'll add instruction about this to docs.

ukkopahis linked a pull request Apr 29, 2022 that will close this issue
@simonmcnair

Newb question if that's okay. Why not, as part of the script, chown current $uid:$groupid ~/IOTstack/Volumes if the folder exists ? It won't apply to any externally mounted data but it would be assumed that IOTStack 'owns' the files in that directory ?

This is just me learning, this has been a great help to my understanding so far

@simonmcnair

Newb question if that's okay. Why not, as part of the script, chown current $uid:$groupid ~/IOTstack/Volumes if the folder exists ? It won't apply to any externally mounted data but it would be assumed that IOTStack 'owns' the files in that directory ?

This is just me learning, this has been a great help to my understanding so far

Oh, I guess where the docker image doesn't have an option to specify the user or group in the compose file, it will default to 1000 and the chown will possibly break that container. Got it.

@Paraphraser

It's not necessarily safe to chown everything in volumes to the current user. Some containers will barf if you change their ownership on them.

Next point. There is no way (yet) of hooking a user script to docker-compose so you either have to do it in the menu or supply a script that the user just has to know to run. Not everybody uses the menu (I don't) and the directoryfix.sh scripts I mentioned before are an example of something the user just has to know to run. That basically didn't work - we were forever telling people "just run ...". The Dockerfile approach is the solution.

The chown operations that Dockerfiles do are limited to the container's persistent storage. That's the whole idea of containers - they are contained. But, again, the uid/gid that an internal script uses in a chown can't come from the container. The only real way of getting that info in is what we've been talking about here.

@ukkopahis

IOTstackBackup passes the "same owner" flag to tar so everything will come back with the wrong ID

Not sure how my brain managed to "autocorrect" this as: "preserves owners and permissions, why are you stating the obvious?" 🤣

Anyway, I suspect wrong file owners may cause permission problems in certain corner cases, especially now that we're abandoning the ID=1000 assumption. I'd consider changing it. Did you have a reason for using that flag?

@ukkopahis

ukkopahis commented Apr 29, 2022

Though I'm torn on if it'd be better to add to docker-compose.yml:

env_file: docker-compose.iotstack.env

And have all defaults added into the explicitly declared file. I don't like how .env is just implicitly found and a hidden file. And 'badly' named, with no clear indication it's for Docker.

This would have to be added to every service, though.

@Paraphraser

Did you have a reason for using that flag?

Well, yes. So that the contents of ~/IOTstack/volumes come back "as is". The general backup (everything non-database) includes the compose file and services so, if the ID was 1000 on the backup, it'll be 1000 on the restore just as a by-product of what is needed for the contents of ~/IOTstack/volumes to be restored faithfully. But, now that I think about it, it's really no big deal to explicitly do a chown on the compose file and services.
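
(That explicit chown would be nothing more than something like this, run as the logged-in user — a sketch:)

$ sudo chown -R "$USER:$USER" ~/IOTstack/docker-compose.yml ~/IOTstack/services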

@Paraphraser

I wholeheartedly agree with your comments on .env. Serious "what on earth was I thinking at the time?" going on there.

Although it isn't hidden, I also dislike env.yml in the .templates folder. It contains the networking definitions so it should have a name reflecting its purpose.

Personally, I have my compose file structured like this:

version: '3.6'

networks:

  default:
    driver: bridge
    ipam:
      driver: default

  nextcloud:
    driver: bridge
    internal: true
    ipam:
      driver: default

services:

  portainer-ce:

…

(I do it that way so I can easily cat service definitions I want to test onto the end of my compose file.)

Anyway, that version line at the top is hard-coded into the menu (old and new). It might be better if:

  1. env.yml was renamed to something like docker-compose-header.yml with content:

    version: '3.6'
    
    networks:
    
      default:
        driver: bridge
        ipam:
          driver: default
    
      nextcloud:
        driver: bridge
        internal: true
        ipam:
          driver: default
    
  2. Both old and new menus were changed to copy that "as is" to the start of the compose file.

  3. All knowledge of a compose file version number was removed from both menus.

A wrinkle on the theme might be to first:

$ cp -n ~/IOTstack/.templates/docker-compose-header.yml ~/IOTstack/services/.

and then prepend ~/IOTstack/services/docker-compose-header.yml to the compose file. That way, anyone who wants to make permanent changes to either the version number or networking structure has a straightforward mechanism which won't collide with pulls.

In theory, the version directive is deprecated but I've noticed people complaining that things break if it's omitted (I think it was something to do with cgroup rules).

@Paraphraser

Though I'm torn on if it'd be better to add to docker-compose.yml

So, that would be a candidate for this "header" file too, right?

@ukkopahis

ukkopahis commented Apr 30, 2022

Did you have a reason for using that (same owner) flag?

Well, yes. So that the contents of ~/IOTstack/volumes come back "as is". The general backup (everything non-database) includes the compose file and services so, if the ID was 1000 on the backup, it'll be 1000 on the restore just as a by-product of what is needed for the contents of ~/IOTstack/volumes to be restored faithfully. But, now that I think about it, it's really no big deal to explicitly do a chown on the compose file and services.

Not telling you how to handle this, but if restoring a backup changes file owner from the original 1883 to 1000, some breakage might occur.

From man tar:

       -p, --preserve-permissions, --same-permissions
              extract information about file permissions (default for superuser)

       --no-same-owner
              Extract files as yourself (default for ordinary users).
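
(In other words, something along these lines — a sketch, with a placeholder archive name:)

$ sudo tar -xz --same-owner -f backup.tar.gz    # keeps the original numeric owners (the default for root)
$ tar -xz --no-same-owner -f backup.tar.gz      # extracted files end up owned by whoever runs tar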

I also dislike env.yml in the .templates folder.

Yup, but it might contain other stuff besides networks at some point in the future. It could be renamed, e.g. base-docker-compose.yml; it's the starting point into which all the service.yml files are merged. But I don't think renaming it is worth the effort at the moment.

Personally, I have my compose file structured 'networks:' before 'services:'

That's a good idea, might pick it up for #505

@ukkopahis

ukkopahis commented May 1, 2022

env_file: docker-compose.iotstack.env

Was a nice dream, but doesn't work. Docker docs state:

You can set default values for environment variables using a .env file, which Compose automatically looks for in project directory (parent folder of your Compose file). Values set in the shell environment override those set in the .env file.

And by my tests, defining an env_file doesn't add variables that can be used for variable substitution in the environment: section; only the default .env does. Docker logic :(
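
(A minimal illustration of the distinction — the file and variable names are just examples:)

# docker-compose.iotstack.env — values defined here do reach the container's environment...
IOTSTACK_UID=1000

# ...but they are NOT available for ${...} interpolation in docker-compose.yml itself:
wireguard:
  env_file: docker-compose.iotstack.env
  environment:
  - PUID=${IOTSTACK_UID:-1000}   # interpolated only from the shell environment or the default .env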
