Remove external OTP dependency from deploy hook Synology_DSM.sh #4646
Conversation
Also adapt to DSM 7's API improvements. See #2727 for further details.
@Neilpang Hello, thank you so much for this project. Could this PR be merged into the master branch and released soon? It's a real problem that Synology's DSM OS doesn't ship oathtool and its related dependencies. Synology users really need this to auto-deploy certs for accounts that have two-step authentication enabled.
The SYNO_TOTP_SECRET is completely removed in this PR; how do the existing users renew their certs? If you make changes to the code, keep compatibility with the existing users. Imagine that the existing users are just using a cronjob to renew the cert: we must make sure the existing certs can be renewed automatically without human involvement. There is only one exception: the API changed and the existing code cannot work anymore, so we have to change the code.
This is intentional, because it's one of the two reasons for the whole change: getting rid of storing the OTP secret for good - just like you can via the web UI by ticking "Remember this device", but via Synology's official API instead of the UI.
Due to the change in the login procedure - made for enhanced comfort, as well as for security, since the previous requirement to store the secret in plain text on disk is gone - that's simply impossible.
Again, this is impossible in this case.
Yes, Synology changed the API upon release of DSM 7; they only kept the legacy login method for backward compatibility. Nonetheless, I'll happily update the FAQ / wiki as soon as this PR gets merged - just as I updated the deploy API plug-in's file header with improved docs regarding its usage:
# Usage:
# 1. export SYNO_Username="adminUser"
# 2. export SYNO_Password="adminPassword"
# Optional exports (shown values are the defaults):
# - export SYNO_Certificate="" to replace a specific certificate via description
# - export SYNO_Scheme="http"
# - export SYNO_Hostname="localhost"
# - export SYNO_Port="5000"
# - export SYNO_Device_Name="CertRenewal" - required for skipping 2FA-OTP
# - export SYNO_Device_ID="" - required for skipping 2FA-OTP
# 3. acme.sh --deploy --deploy-hook synology_dsm -d example.com
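To illustrate the "shown values are the defaults" note in the header above, here is a minimal sketch of how such optional variables can fall back to their documented defaults. The `:-` parameter-expansion logic is my illustration of the idea, not acme.sh's actual code:

```shell
# Sketch only: optional SYNO_* variables fall back to the documented
# defaults when they are not exported (acme.sh's real code may differ).
SYNO_Scheme="${SYNO_Scheme:-http}"
SYNO_Hostname="${SYNO_Hostname:-localhost}"
SYNO_Port="${SYNO_Port:-5000}"

echo "Deploying via ${SYNO_Scheme}://${SYNO_Hostname}:${SYNO_Port}"
```

With none of the optional variables exported, this prints `Deploying via http://localhost:5000`, matching the defaults listed above.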
I agree with @Eagle3386: this is a breaking change, but it's for good reason. The legacy method is problematic to set up and maintain, and it breaks frequently. There's no guarantee that it will continue to be supported; in fact, it will likely be removed in a future major update of DSM. The new API is the way forward, and supporting it requires breaking changes.
I have merged the new 'deploy/synology_dsm.sh' manually and tried using it. I think it's acceptable for existing Synology DSM users. The old method of deploying certs via SYNO_DID or SYNO_TOTP_SECRET with a two-step-authentication-enabled account is far from easy. The new one is much simpler than before:
Just like logging in via the DSM web UI, and you only run it once.
I'd like to clarify that the mentioned OTP code prompt needs to be answered only once - any future execution of the deploy script then utilizes the received device ID for the actual deployment of the certificates. This fulfills 3 requirements at once:
I see. The trusted device ID is stored, and that's convenient for automatic deployment after the certs are renewed.
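The store-once/reuse-forever behaviour described above can be sketched in plain shell. This is a hypothetical illustration, not acme.sh's actual code: `fake_syno_login` stands in for Synology's real auth API, which would return the device ID after a successful OTP login.

```shell
# Hypothetical sketch of the one-time OTP flow: the first run performs the
# OTP login and persists the device ID; every later run reuses it silently.
CONF="$(mktemp)"

fake_syno_login() {
  # Stand-in for Synology's auth API returning a trusted device ID.
  echo "device-id-12345"
}

deploy() {
  if grep -q '^SYNO_Device_ID=' "$CONF"; then
    echo "reusing stored device ID, no OTP needed"
  else
    echo "SYNO_Device_ID=$(fake_syno_login)" >> "$CONF"
    echo "OTP login done, device ID saved"
  fi
}

first_run="$(deploy)"    # asks for the OTP once, saves the device ID
second_run="$(deploy)"   # fully unattended from now on
echo "$first_run"
echo "$second_run"
rm -f "$CONF"
```

This is exactly what makes unattended cronjob renewals possible after a single manual deployment.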
Exactly! 😎 @Neilpang Anything further to address from my side, or can you, given the explanations/reasoning above, accept this PR?
If the cert was first issued with the old credentials, it MUST be renewed with the same credentials too. I understand you want to get rid of the old API, but compatibility is the FIRST rule. Imagine you configured a server with acme.sh and it has run well for years, but one day the SSL cert is not renewed in time - what would you think? Please make your code:
While I disagree with your compatibility point (if Synology changed the API tomorrow, user complaints would be pretty much the same as those caused by this PR), I consider your suggestion of issuing a warning a valid alternative. I'll edit my PR accordingly within the next couple of days. However, let me be clear regarding support of the "old methods" you mentioned: I will not support any additional Docker or
@Neilpang We never know when Synology might pull the legacy API; as far as I can tell, they've never committed to an EOL date for it. Like @Eagle3386 said, if Synology breaks it, users would be in the same mess, so it would be better to do it now. I don't think there are very many production Synology systems using the current method, as it is so weird and fragile - I know I wouldn't. Most likely it's homelab stuff, and if someone can figure out the existing script, they shouldn't have much difficulty figuring out the new one.
If the legacy API is ever closed, then it's not our fault. You can modify the usage wiki to describe only your new credentials and usage, so that anyone from now on uses the new method. But we must support renewing certs that were issued via the old methods. So, I think I'm clear:
@Neilpang Finally had some free time to complete this, but found that even the old method using OTP was calling the Synology API wrong: acme.sh/deploy/synology_dsm.sh Lines 105 to 111 in 41b6aeb
It's wrong, because
However, since you requested that old setups should continue "as is", I reintroduced these misuses of the API - complaints due to errors will be completely redirected to you! 🤪 Also, there's neither a
Split "[ && ]" into "[ ] && [ ]" to make ShellCheck happy
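For context on that ShellCheck fix: POSIX `test`/`[` has no `&&` operator inside the brackets, so the conjunction has to be written as two separate tests joined by the shell's own `&&`. A minimal sketch of the corrected form:

```shell
# Broken (POSIX sh rejects it with a syntax error; ShellCheck flags it):
#   if [ "$a" = "1" && "$b" = "2" ]; then ...
# Fixed: two separate test commands combined with the shell's && operator.
a=1
b=2
result="no match"
if [ "$a" = "1" ] && [ "$b" = "2" ]; then
  result="both matched"
fi
echo "$result"
```

The fixed form also short-circuits: the second `[ ]` only runs when the first succeeds.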
Any idea when this is likely to see the light of day? The latest version I see is from 9th June. How are the instructions likely to change - will it still be shell-based? I found the instructions for determining SYNO_DID and SYNO_TOTP_SECRET completely confusing, the latter being possibly browser/OS-dependent.
I would indeed!
@alidaf Please see "Deploy the certificate to Synology DSM" for updated instructions on how to automate deployment with 2FA enabled. @jaydee73 Same as for @alidaf; however, regarding already running setups: yes, they're covered as well, but instructions for those are intentionally dropped completely. Regarding the command line parameters: Regarding your linked guide: it uses a Docker container, while the wiki page - the old just as the new one - talks about
@Eagle3386 Thanks for the input. I'll try to work through it in the coming days. One thing already popped up: you wrote that after deleting the two lines (DID and TOTP), we should follow the wiki instructions. You mean from top to bottom? Point 3 says "acme.sh --deploy --deploy-hook synology_dsm -d example.com". Do we really have to go through this deploy stuff again, even if we already have an existing certificate? If so, how can we make sure that the script does not create a new (additional!) certificate next to the existing one? And on top of that, we have a new issue (#4721) saying that the 3.0.7 Synology hook is broken. Are you aware of this? Is this because of your updated stuff?
Holy moly, this was not what I wanted to achieve. I'm sorry. That probably happens when one (= me!) is not using his native language. So, very briefly: I definitely didn't want to step on your toes. Quite the opposite - I'm grateful for your help. It doesn't matter to me if there are 2 steps, 20 steps or 200 steps to be done. It's simply that I want to make sure (in advance!) to do the right steps, and maybe also help others with the same questions. This wasn't meant to be a complaint at all. You are totally right, step 3 is only about deploying. My bad, I should have paid better attention. Also thanks for clarifying the other issue; it wasn't clear to me that he still sits on DSM 6. Sorry again! I hope your pulse is slowly getting back to normal... :)
Yeah, that helped to clarify a lot & settle things back to normal - water under the bridge! 😉 Also, I'm here to help with further questions, if any new ones arise.
Strange... I made a new comment some hours ago, but for whatever reason it isn't visible... ok, I'll try again:

Yesterday I tried to implement the new method on my already running system. Unfortunately, I failed. I deleted the two old conf-file lines (SYNO_DID and SYNO_TOTP_SECRET) and added the two new lines (device name & device ID) to my conf file. Small info: I am using the official acme.sh Docker container. Every time (I have tried several times...) I executed the script with the deploy hook (directly on the shell), it still used the old method and stated that it is deprecated. No asking for an OTP code. Restarting the container also didn't change anything.

Then I started from scratch and installed a new container. Issuing (again directly on the shell) went without problems. Deploy also acted as expected. I was asked for an OTP code, entered it, and the certificate was successfully deployed to the Syno. So far so good. Nevertheless, two questions arose:

In general, my conf file indeed does "work". Yesterday I also added Gotify and executed some "export" commands directly in the shell. They were correctly added to my conf file. So, yes, some new help would be appreciated. Big thanks!
@jaydee73 a. I think it's expected behaviour. The script now only checks whether "SYNO_Device_ID" is defined, even if a "SYNO_Device_Name" already exists. It doesn't matter unless you manually change "SYNO_Device_Name" so that it no longer matches "SYNO_Device_ID". b. Yes, it might be trouble when redeployment happens, because the script will ask for "SYNO_Device_ID". Running the "export" command adds the env variable to the currently running shell, to make sure the new script can load it for deployment. Basically, the device ID is generated by Syno's API when the script successfully passes 2FA authentication, unless you already have one. If "SYNO_Device_ID" is not defined in advance, the script will add
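The point about `export` is worth illustrating: only exported variables are inherited by child processes (such as a subsequent acme.sh run), while a plain assignment stays local to the current shell. A self-contained sketch:

```shell
# A plain assignment is NOT visible to a child process...
SYNO_Device_Name="CertRenewal"
before_export="$(sh -c 'printf %s "$SYNO_Device_Name"')"

# ...but after exporting it, the child process can read it.
export SYNO_Device_Name
after_export="$(sh -c 'printf %s "$SYNO_Device_Name"')"

echo "before export: [$before_export]"
echo "after export:  [$after_export]"
```

The first line prints an empty value; the second prints `CertRenewal`. This is why the wiki tells you to `export` the SYNO_* variables before invoking the deploy hook.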
Thanks, @wyxls for explaining most of the stuff already. 👍🏻 @jaydee73, to make it absolutely clear:
Thanks for all your support & patience, guys. But in the end, it didn't work out for me... again... I really followed all your steps: nothing added to the conf file, deleted the two lines, manually ran acme.sh for deploy, entered the TOTP code and the device name. The script successfully deployed the certificate. But again, no device ID was written to my conf file after deployment. This seems to be rocket science, or at least my very basic knowledge isn't nearly enough to get it going... :-( I'll probably wait for some other guys who are also using this script in Docker. Maybe they'll have the same problem, or maybe it's working for them and they have an additional hint for me. Until then, I'll stick with the "old" and less secure "DID method". Thanks again, especially to Eagle3386.
Hold on here for a second, please.
They put humans onto the moon with far less code, let alone computational power - if certificate deployment / its configuration file update is more complex, then we're officially outta space here! 🥳🤣
Please provide us with a link to the container info page as well as the debug log snippet from after deployment into DSM succeeded, but prior to the deploy script finishing its work. I'm thinking of either a permission problem (as stated above) or maybe a container configuration problem & want to analyze this further.
You're welcome & thanks for your acknowledgement. 😇
I will provide the information you need this evening. One piece of info in advance: several other "hooks" (is this the right expression?) can write (and do so) into the conf file: --set-default-ca
Hopefully we can figure this out together.
That's strange, because as you can see here: acme.sh/deploy/synology_dsm.sh Lines 160 to 163 in 56cf93d
The device name & ID are saved just like the username & password - all of them via acme.sh's _savedeployconf function.
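To illustrate the idea behind a `_savedeployconf`-style helper (this is a simplified sketch, not acme.sh's actual implementation): it persists key/value pairs into the per-domain conf file, replacing an existing entry for the same key instead of appending a duplicate.

```shell
# Simplified sketch of persisting deploy settings into a per-domain conf
# file; acme.sh's real _savedeployconf also handles encoding and quoting.
DOMAIN_CONF="$(mktemp)"

save_deploy_conf() {  # usage: save_deploy_conf KEY VALUE
  key="$1"
  value="$2"
  # Drop any existing line for this key, then append the new value.
  grep -v "^${key}=" "$DOMAIN_CONF" > "${DOMAIN_CONF}.tmp" || true
  mv "${DOMAIN_CONF}.tmp" "$DOMAIN_CONF"
  echo "${key}='${value}'" >> "$DOMAIN_CONF"
}

save_deploy_conf SYNO_Username "adminUser"
save_deploy_conf SYNO_Device_ID "device-id-12345"
save_deploy_conf SYNO_Device_ID "device-id-67890"  # replaces, no duplicate

saved="$(cat "$DOMAIN_CONF")"
echo "$saved"
rm -f "$DOMAIN_CONF"
```

After the three calls, the file contains one `SYNO_Username` line and one `SYNO_Device_ID` line holding the latest value.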
However, the script not only works on my DSM 7.2, but on many others as well: there are/were 3 users with issues thus far - 2 were solved within a day or so (1 was due to an outdated DSM, the other to calling the wrong API path on DSM 6) - hence your issue is the only remaining one. Can you - just for testing, of course - remove the lines for username & password as well, then re-run the deployment script (you might need to re-run
Because if they reappear within the file, it might be a value issue, but if not, then there might be a more serious issue here…
I'm not exactly sure which page you mean by this? This one? https://github.com/acmesh-official/acme.sh/wiki/Run-acme.sh-in-docker ?
I hope this is the right part. Of course, I have the full debug-3 log if you need more to analyze:
This "http restart failed" message I get every time. I don't know if this is relevant. Nevertheless, the certificate is deployed into Syno and is visible in the GUI. Also, the script says "success" at the end.
Done. I deleted the user & password lines from the file and re-ran the script. The username & password lines were NOT written into the file. In fact (in both deploy runs I've done), NO
I meant something like https://registry.hub.docker.com/_/nextcloud. Or is it done - though not mentioned anywhere - like with any other Docker container & you have to use environment variables for properly configuring our environment variables, i.e.
Yes & no. 😅 No, because I also need to look at the part prior to
Yes, because:
Is nothing to immediately bother yourself with - it might just be that
So, you're on DSM 7. Good to know, thanks!
As already stated above, that one failed message is nothing to worry about - the Docker container simply can't restart DSM's Nginx instance, so never mind. Besides, that
This clearly proves that it's not a bug in my code changes, but a general issue of how the "Docker way" handles the configuration file.
Both are located in a folder called acme.sh in the root directory of the container. The folder is mounted from a local directory on my NAS into the container. In general (I probably hinted at this already somewhere...), I mainly used a tutorial to get this whole thing up and running: https://www.christosgeo.com/2022/02/03/renew-lets-encrypt-certificates-on-synology-using-acme-sh/ (please don't sentence me for that ;-) ).
You are probably right.
Here we go (I deleted my password and changed the domain name stuff in the log. Hopefully everything else is not security sensitive...):
Will reply more extensively tomorrow, but for now: remove your real domain & especially device ID from the log. |
Done. But couldn't find any line where my pw is still readable... |
I've thought a little bit about the whole thing and I had an idea... As mentioned, I do have a conf file in the acme.sh folder. In this file, I added all the stuff previously, and the scripts (acme, DNS script, notify script) also wrote into this file. I thought "maybe the Syno script writes into another conf file" (as there weren't any errors while executing the "_savedeployconf" command). So I went into the shell and searched for *.conf files anywhere else... tadaaa... there is another conf file in the certificate folder (/acme.sh/test3.mydomain.de_ecc) named test3.mydomain.de.conf. Opening this conf file showed several SYNO_ lines, and also the device ID line (and username, and password...). Until now, I had never looked into the certificate folder, and for that reason I've never seen the second conf file... :-( What do I learn from this? I don't exactly know... Is this expected behaviour? Can acme.sh use both conf files? If so, my problem wouldn't be a problem at all, as the device ID line is there, just in another conf file. Maybe this is helpful for you...
First of all: Of course, there's a second, separate configuration file apart from ACME.sh's one - how else could you deploy several certificates to several hosts with crucial information differing between at least 2 such hosts?! Regarding the rest of your previous message(s):
{
"desc":"mydomain.de",
# … Left out for brevity…
"is_default":false,
"issuer":{"common_name":"R3","country":"US","organization":"Let's Encrypt"},
"key_types":"ECC",
# … Left out for brevity…
"signature_algorithm":"sha256WithRSAEncryption",
"subject":{"common_name":"mydomain.de","sub_alt_name":["*.mydomain.de","mydomain.de"]},
# … Left out for brevity…
},
{
"desc":"vaultwarden",
# … Left out for brevity…
"is_default":false,
"issuer":{"common_name":"R3","country":"US","organization":"Let's Encrypt"},
"key_types":"",
# … Left out for brevity…
"signature_algorithm":"sha256WithRSAEncryption",
"subject":{"common_name":"*.mydomain.de","sub_alt_name":["*.mydomain.de"]},
# … Left out for brevity…
},
{
"desc":"Synology QuickConnect Certificate",
# … Left out for brevity…
"is_default":false,
"issuer":{"common_name":"R3","country":"US","organization":"Let's Encrypt"},
"key_types":"RSA/ECC",
# … Left out for brevity…
"signature_algorithm":"sha256WithRSAEncryption",
"subject":
{
"common_name":"nas-*****-ds720.direct.quickconnect.to",
"sub_alt_name":["*.nas-name-ds720.direct.quickconnect.to","nas-****-ds720.direct.quickconnect.to"]
},
# … Left out for brevity…
}
which is basically just wrong, because Let's Encrypt's R3 instance can't issue any ECC certificate, as is well explained here: https://letsencrypt.org/certificates/
But apart from that, as everything works as expected, I'm finally closing this (already closed) PR as there's nothing wrong - just 2 files, and that's how it's supposed to be.
@Eagle3386 Regarding your point:
Just to clarify: I will NOT need to enter a new TOTP code in, e.g., ~2 months from now when the renewal script runs? The reason I ask is that I assumed the login would expire after some time and I would need to manually re-enter another TOTP code (like how DSM prompts if you haven't logged in in a while). But please correct me if I'm wrong -- I just don't want to be surprised later and find out my Task Scheduler failed during the renewal!
No, you're not re-entering a TOTP code at any time after once running a manual deployment and entering a TOTP code then. However, as mentioned in the deploy hook script's issue: currently, there's a bug in Synology's API (they're aware of it, but "can't" give an ETA for a fix) which breaks this & hence you've got to manually deploy every time.
@Eagle3386 Thanks for clarifying. It's unfortunate to hear that a bug on Synology's side is preventing us from being able to automatically renew our certs. I found your comment in this thread, which is the one I'm assuming you're referring to? And just to make sure I understand for the next renewal -- I'm assuming I can re-run this renew script, it'll prompt for the TOTP code, I enter it (like I did the first time to issue the cert), and then it should push into DSM -- correct?
Btw, I also wanted to say thanks for all your clear and direct communication in these threads, especially when you contacted Synology's support directly and shared their responses. You've really helped me understand how all this works much better!
Nope. Besides, the script is designed to work with a working Synology API, not a broken / bugged one. Hence, you've got 3 options:
You're welcome. I'm just glad it was helpful to people other than me, too.
As there was a new DSM release a few days ago (DSM 7.2.1-69057 Update 5), do you have any information on whether the bug was fixed in this version? Or has anyone tested this already?
As the release notes state, that update isn't even targeting that bug.
I had this working for months, and now my certificates have expired. The script runs, apparently successfully, but the certificates are still out of date. There was a recent DSM update - did this break something?