This repository has been archived by the owner on Oct 4, 2024. It is now read-only.

Issues Upgrading a Container to 3.0.0 #351

Closed
gilesknap opened this issue May 16, 2022 · 24 comments

@gilesknap
Owner

Hi folks, just posting some hints here in case anyone stumbles into the same problems that I did while upgrading to the new Docker version.

  1. If you are deleting your old OAuth credentials and creating new ones, don't forget to delete .gphotos.token at the root of the storage folder, or you will get a message about invalid credentials (see the snippet after this list).

  2. This is properly documented at https://gilesknap.github.io/gphotos-sync/main/tutorials/installation.html#execute-in-a-container, but if you are using Docker you will need --net=host and -it for the auth URL to show; otherwise the app will be stuck on the following message:

    05-15 22:32:30 INFO version: 3.0.0, database schema version 5.7

    After your new .gphotos.token file is generated, the flags above are no longer necessary.

  3. Speaking of which, version 3.0.0 is not on Docker Hub (will it be published there as well?), so you'll need the full GitHub Container Registry URL to run the latest version:

    docker run --rm -v $CONFIG:/config -v $STORAGE:/storage --net=host -it ghcr.io/gilesknap/gphotos-sync /storage
    
  4. This may be pretty specific, but port 8080 is already in use on my system and it was not practical to stop the software that is using it. I tried removing the --net=host flag and using something like -p 98080:8080, but port 8080 seems to be hard-coded in the auth logic, and simply replaying the request meant for port 8080 on a different port didn't work for me. Luckily I figured out that I could generate .gphotos.token on another machine and simply copy the file over.
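
To make point 1 concrete, a minimal sketch, assuming $STORAGE is the same storage folder that is mounted at /storage in the docker run command above:

    # remove the stale token before re-authenticating with the new OAuth credentials
    rm "$STORAGE/.gphotos.token"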

I hope that this is useful to other people.

Originally posted by @aaccioly in #341 (comment)

@gilesknap
Copy link
Owner Author

TODO: I will add a command line argument to specify the hostname and port for the redirect URL. This would allow you to override the port on localhost, or even run on a headless server, as long as the container can be routed to from the workstation that is running the browser.
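
For example, something along these lines (the flag name and port value are only illustrative at this point):

    # pick a free port for the OAuth redirect instead of the hard-coded 8080
    docker run --rm -v $CONFIG:/config -v $STORAGE:/storage --net=host -it \
        ghcr.io/gilesknap/gphotos-sync --port 9080 /storage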

@aaccioly

Thanks @gilesknap.
Once the port is configurable, and if it's possible, it would be great to update the documentation and drop the usage of --net=host in favour of just publishing the port (e.g., -p "8080:8080").

@tdimarzio

Came here to say that https://registry.hub.docker.com/r/gilesknap/gphotos-sync contains version 2.14.2 (by gilesknap, updated 2 years ago), but it looks like you've already identified that above. Will the Docker Hub version be upgraded to 3.0.0, or, to put it another way, will Docker Hub be synchronized with ghcr.io/gilesknap/gphotos-sync?

@gilesknap
Owner Author

Well, I was intending to take down the Docker Hub image and have GitHub only, but I've had quite a few votes for Docker Hub, so I'll add it to the CI.

@jdvp

jdvp commented May 17, 2022

Not too familiar with Docker, but once I got everything set up in my container and completed the OAuth prompt, I was redirected to a URL like http://localhost:8080/?state=xxxx, but it says it can't be loaded and the CLI prompt stays up (Please visit this URL to authorize this application: https://accounts.google.com...).

I tried with both --net=host and -p 8080:8080 but neither seemed to work (stuck on the CLI, the redirect says it can't be loaded, and no .gphotos.token file is created). I'm thinking this is probably a configuration issue on my end, but I'm not sure where to get started since it's a fresh Docker install (doing this on my Mac to transfer the token file to my existing Synology, which I'm upgrading from v2) and I have nothing running on 8080 in the first place.

Anyone else have this issue and find a way to solve it? Thanks

EDIT: I was able to get it to complete by going into the container's command line, installing curl in the container (apt-get update; apt-get install curl), and then calling curl directly in the container (curl "http://localhost:8080/?state=...").

@gilesknap
Owner Author

@jdvp it just sounds like your container was not sharing the port with the host. What platform are you on, and how are you running the container?

@gilesknap
Owner Author

@aaccioly I agree with changing the docs as you suggest, but I'm working on the change and I can't get publishing the port alone to work.

This works:

docker run -it -v storage:/storage -v/home/giles/.config/gphotos-sync:/config --net=host  gphotos-sync --host localhost --port 8989 /storage

But the browser can't find the redirect URL when I use:

docker run -it -v storage:/storage -v/home/giles/.config/gphotos-sync:/config -p8989:8989  gphotos-sync --host localhost --port 8989 /storage

Did anyone make gphotos-sync 3.0.0 work with -p8080:8080? If so, did you need to do anything else to make it work?

@gilesknap
Owner Author

gilesknap commented May 17, 2022

Hmmm, also Google does not seem to like any hostname or IP that is not localhost, making the --host argument useless.

This restriction is probably due to the use of HTTP.

[screenshot of Google's error message]

@aaccioly

@aaccioly I agree with changing the docs as you suggest, but I'm working on the change and I can't get publishing the port alone to work.

This works:

docker run -it -v storage:/storage -v/home/giles/.config/gphotos-sync:/config --net=host  gphotos-sync --host localhost --port 8989 /storage

But the browser can't find the redirect URL when I use:

docker run -it -v storage:/storage -v/home/giles/.config/gphotos-sync:/config -p8989:8989  gphotos-sync --host localhost --port 8989 /storage

Did anyone make gphotos-sync 3.0.0 work with -p8080:8080? If so, did you need to do anything else to make it work?

Hi @gilesknap, how are you building the image?
If it's based on a Dockerfile, you need to expose a port (e.g. EXPOSE 8080) for binding.

@gilesknap
Owner Author

gilesknap commented May 17, 2022

@aaccioly ah right. So I also tried --expose 8080 on the CLI. Is that not the same thing? It did not work either (I am building with a Dockerfile and am not doing EXPOSE currently).

@gilesknap
Owner Author

Just tried adding the EXPOSE but still no joy.

@aaccioly

Just to verify.

You have:

  1. Added an EXPOSE 8080 directive to the Dockerfile
  2. Built a new image
  3. Run a container with the new image using -p 8080:8080

All 3 steps, in order, are needed to make it happen.
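
In other words, something along these lines (the image tag and paths are just examples taken from the commands above):

    # 1. the Dockerfile now contains: EXPOSE 8080
    # 2. rebuild the image
    docker build -t gphotos-sync:latest .
    # 3. run it with the port published
    docker run -it -v storage:/storage -v /home/giles/.config/gphotos-sync:/config \
        -p 8080:8080 gphotos-sync /storage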

@gilesknap
Owner Author

gilesknap commented May 17, 2022

Yes - exactly that sequence.


Step 14/16 : EXPOSE 8080
 ---> Running in ff9fe8f96392
Removing intermediate container ff9fe8f96392
 ---> 2a747114f4f6
Step 15/16 : ENTRYPOINT ["gphotos-sync"]
 ---> Running in 4bd73775bccd
Removing intermediate container 4bd73775bccd
 ---> 18bea4315af3
Step 16/16 : CMD ["--version"]
 ---> Running in 6903f883c542
Removing intermediate container 6903f883c542
 ---> 2edd34d950ae
Successfully built 2edd34d950ae
Successfully tagged gphotos-sync:latest
(.venv) (dockerhub) [giles@ws1 .devcontainer]$ docker run -it -v storage2:/storage -v/home/giles/.config/gphotos-sync:/config -p8080:8080   gphotos-sync   /storage 
05-17 11:21:14 WARNING  gphotos-sync 3.0.1.dev2+g9de540f.d20220517 2022-05-17 11:21:14.392040 
Please visit this URL to authorize this application: https://accounts.google.com/o
....

@aaccioly

As for the custom host, apparently there's a redirect_uri parameter that one can pass to Google's OAuth server: https://developers.google.com/identity/protocols/oauth2/native-app#step-2:-send-a-request-to-googles-oauth-2.0-server

Having said that, host support is not as much of a necessity as a custom port. It's a nice-to-have for sure, but its absence is not as inconvenient as the hard-coded port.
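
For reference, the consent URL that gphotos-sync prints already carries the redirect target as a redirect_uri query parameter; roughly (values elided, and the exact parameter order may differ):

    https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=...&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2F&scope=...&state=...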

@aaccioly

aaccioly commented May 17, 2022

Yes - exactly that sequence.

Interesting. Maybe try moving the EXPOSE directive to the bottom of the Dockerfile, as in this example: https://docs.docker.com/get-started/02_our_app/

@gilesknap
Owner Author

gilesknap commented May 17, 2022

My reading of the docs says that -p should work without EXPOSE, and it's not working even with EXPOSE at the end, so there must be another issue.

For the moment I'm going to do a release that has two features:

  • pushes to Docker Hub
  • adds --port, which allows people to overcome issues like the one @aaccioly had with 8080
  • (it will also have the currently useless --host, but maybe someone can work out a way to use it)

@gilesknap
Owner Author

I have released 3.0.3 with the above features.

I did not change the docs to use -p8080:8080 because it's not working for me and I don't know why. But I'm not aware of any major issue with using --net=host for just one invocation.
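
In other words, the flow would be something like this (mirroring the commands from the top of this issue; only the first, authentication, run needs host networking and a TTY):

    # one-off: authenticate with host networking so the browser can reach the redirect URL
    docker run --rm -v $CONFIG:/config -v $STORAGE:/storage --net=host -it \
        ghcr.io/gilesknap/gphotos-sync /storage

    # every sync after that: the token already exists, so --net=host and -it are not needed
    docker run --rm -v $CONFIG:/config -v $STORAGE:/storage \
        ghcr.io/gilesknap/gphotos-sync /storage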

@aaccioly

aaccioly commented May 17, 2022

Hi @gilesknap, it's more than fine to split this into two releases.
It's good to keep the EXPOSE in your Dockerfile anyway, as several popular Docker admin tools use the directive to work out which ports should be mapped.

The primary reason why --net=host is not ideal is security. With --net=host there's no isolation between the container's network and the host's.

Now that I think about it, this may be the culprit:

flow.run_local_server(open_browser=False)
run_local_server may be binding to localhost by design. The container's localhost is not the host's localhost unless --net=host is enabled.

What happens if you pass 0.0.0.0 as a parameter to this function?
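
As an illustration of the container-vs-host localhost point (using a stock python:3 image purely as a stand-in, not gphotos-sync itself):

    # bound to the container's loopback: the published port is unreachable from the host
    docker run --rm -p 8080:8080 python:3 python -m http.server 8080 --bind 127.0.0.1
    # curl http://localhost:8080/ on the host fails (connection reset/refused)

    # bound to all interfaces: the published port works
    docker run --rm -p 8080:8080 python:3 python -m http.server 8080 --bind 0.0.0.0
    # curl http://localhost:8080/ on the host returns a directory listing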

Finally, why not use run_console directly and bypass all of this? Or is run_console precisely what is being deprecated by Google?

Kind regards,

@jdvp

jdvp commented May 17, 2022

@jdvp it just sounds like your container was not sharing the port with the host. What platform are you on, and how are you running the container?

I was doing the setup on Mac, but this documentation from Docker seems to imply that using --net=host doesn't work on Mac:
https://docs.docker.com/network/host/

The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server

It was fine after getting a shell in the container and hitting the redirect URL manually, so it's not a big issue on my end, but it was a bit tricky to figure out since I didn't know that was an option.

@aaccioly

aaccioly commented May 17, 2022

I was doing the setup on Mac, but this documentation from Docker seems to imply that using --net=host doesn't work on Mac: https://docs.docker.com/network/host/

The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server

Good catch @jdvp!
Did you use docker exec to run curl or wget inside the container itself to finish the credentials setup?

If so, do you mind sharing the command for the benefit of everyone else? If you are right and --net=host doesn't work on Windows and Mac, I predict that several users will need it now that the latest image has been pushed to Docker Hub.

@aaccioly

aaccioly commented May 17, 2022

Apparently --net=host really doesn't work on Windows / Mac: docker/roadmap#238

@gilesknap, just making you aware of it in case we get a mob of angry users now that the Docker Hub image has been updated.

@jdvp

jdvp commented May 17, 2022

@aaccioly, I just did it again to make sure I had it right.

Basically, in one terminal I ran the command as mentioned in the docs (with the flags to not actually pull anything, because I am moving my token to another machine):

CONFIG=$HOME/Downloads/gphotos/config  
STORAGE=$HOME/Downloads/gphotos/storage
docker run --rm -v $CONFIG:/config -v $STORAGE:/storage --net=host -it ghcr.io/gilesknap/gphotos-sync /storage --skip-files --skip-albums --skip-index  

This told me the URL to navigate to in my browser:

WARNING  gphotos-sync 3.0.3 2022-05-17 17:00:24.921562 
Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=....

I then went to the Docker Desktop app and clicked the CLI button (I think this is the same as finding the container and running exec; for me this seemed to run docker exec -it b57df7accbf880b0fxxxx... /bin/sh).

[screenshot of the container's CLI button in Docker Desktop]

This opened a new terminal window, and I tried to curl the redirect URL that I got from the OAuth flow:

 curl "http://localhost:8080/?state=..."

This gave an error about curl not being installed:

/bin/sh: 1: curl: not found

So I installed curl with

apt-get update; apt-get install curl

After the install finished, I tried curling again and got the success message:

The authentication flow has completed. You may close this window.
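
Condensed, the whole workaround is roughly this (the container id and the state value are placeholders):

    # terminal 1: start the auth run and leave it waiting on the redirect
    docker run --rm -v $CONFIG:/config -v $STORAGE:/storage --net=host -it \
        ghcr.io/gilesknap/gphotos-sync /storage --skip-files --skip-albums --skip-index

    # terminal 2: open a shell in the running container and replay the redirect there
    docker exec -it <container-id> /bin/sh
    apt-get update; apt-get install curl
    curl "http://localhost:8080/?state=..."   # paste the full redirect URL from the browser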

@gilesknap
Owner Author

Hi @gilesknap, it's more than fine to split this into two releases. It's good to keep the EXPOSE in your Dockerfile anyway, as several popular Docker admin tools use the directive to work out which ports should be mapped.

The primary reason why --net=host is not ideal is security. With --net=host there's no isolation between the container's network and the host's.

Now that I think about it, this may be the culprit:

flow.run_local_server(open_browser=False)

run_local_server may be binding to localhost by design. The container's localhost is not the host's localhost unless --net=host is enabled.
What happens if you pass 0.0.0.0 as a parameter to this function?

Finally, why not use run_console directly and bypass all of this? Or is run_console precisely what is being deprecated by Google?

Kind regards,

I'm of the opinion that it is binding to localhost by design: it only uses HTTP for the connection and passes the secret token over that connection.

I tried 0.0.0.0 and got:
[screenshot of the error]

You can try it yourself with 3.0.8: the command-line arg --host is passed through to run_local_server, but the only values that don't get the above error are 127.0.0.1 and localhost.

The first thing I tried was run_console, and it still uses the old Google-hosted redirect URI (oauth:2.0:oob), which no longer works with unverified apps. According to the docs you linked, the other option is a domain that you host; this would be fine for a vendor who is paying for API bandwidth, but not for this case.

This, in combination with --net=host not working on Mac and Windows, is a bit of a showstopper. I think I'll have to put @jdvp's workaround in the docs. I can make it easier by installing curl in the image build.

I have to say I tried the curl approach earlier today and was getting an error about a missing scope. I'll need to do more investigation (and get someone with a Mac to try out the approach).

@gilesknap
Owner Author

Oh wait, @jdvp is on Mac; it's Windows that has not been tried. I can do that.
