[BUG] - local deploy cannot be guaranteed to work without having access to a domain #1707
Comments
Looking through our workflow, we actually don't need the domain before stage 04. Meaning, we could accept an empty string in case of a local deployment to mean "just use the load balancer IP". Shouldn't be terribly hard to implement if we want that.
Does this mean that you suggest we just make the "domain" optional? And if so, we just use what the cloud/docker/kind gives us? This could make a lot of sense, and we just tell the user: hey, this is your URL; set the domain if you want something different. Also, I think your issue with the three points describes well the solutions that exist, with each not being perfect. I could see how making the domain optional and using the IP/CNAME generated from stage 04 would be enough to get the user started, and would certainly be a better experience than trying to set up DNS initially.
Yes, in case we are doing a local deploy. It doesn't really make sense for cloud deploy, does it?
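For illustration, "make the domain optional" could be as small as normalizing an empty value at the config level. A minimal sketch, assuming a hypothetical DomainConfig fragment; this is not Nebari's actual schema:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DomainConfig:
    """Hypothetical config fragment; Nebari's real schema will differ."""

    # None (or "") means: no domain supplied, just use the load balancer IP.
    domain: Optional[str] = None

    def __post_init__(self) -> None:
        # Normalize an empty string (e.g. from `nebari init --domain=""`)
        # to None so later stages only have to check one sentinel value.
        if self.domain is not None and not self.domain.strip():
            self.domain = None


print(DomainConfig(domain="").domain)             # None -> fall back to the LB IP
print(DomainConfig(domain="example.com").domain)  # "example.com"
```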
Yes. My idea is, in case no domain is supplied, we skip lines 107 to 131 (in 9915a6d) and just use lines 104 to 105 (in 9915a6d) going forward. The last messages after a successful deploy tell the user how to reach the cluster, and thus we don't need to do anything there.
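Roughly, the branch described above might look like the sketch below. The function name, the stage_outputs dict, and its keys are placeholders for whatever stage 04 actually reports back; this is not the code in deploy.py:

```python
def resolve_cluster_address(domain, stage_outputs, disallow_prompt=False):
    """Return the address users should use to reach the cluster (sketch).

    `stage_outputs` stands in for whatever stage 04 reports back,
    e.g. {"load_balancer_address": "172.18.1.100"}.
    """
    load_balancer_address = stage_outputs["load_balancer_address"]

    if not domain:
        # No domain supplied: skip the "please create a DNS record" step
        # entirely and use the load balancer address going forward.
        return load_balancer_address

    if not disallow_prompt:
        # Domain supplied: keep today's behaviour and ask the user to point
        # a DNS record at the load balancer before continuing.
        input(
            f"Please create a DNS record {domain} -> {load_balancer_address} "
            "and press Enter to continue..."
        )
    return domain
```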
Summary after an offline discussion with @costrouc
This is unnecessarily strict. If you are doing a cloud deploy, the load balancer will give us the publicly accessible IP as well. Meaning, we can make the domain optional in all cases, which is nice.
Closing due to #1803 (comment) and because #1833 has landed.
Describe the bug
Related to #1703. TL;DR: our guide tells users to fix /etc/hosts, but the deployed cluster doesn't see this change. There are three ways around this:

1. Use --domain=... with a real domain that points to 172.18.1.100. This is what we do for github-actions.nebari.dev to get our CI going: nebari/.github/workflows/kubernetes_test.yaml, line 95 in daecbcf.
2. Use --domain=172.18.1.100.nip.io during nebari init. With this, users no longer need to have a domain available, but they still need access to the internet. Plus, we add an external dependency, since our guide then depends on nip.io being available (see the resolution check after this list).
3. Use --domain=172.18.1.100. It is not as pretty as having a URL, but probably good enough for a local deploy. The major advantage is that this can be done even without internet access, making it the most reliable and easiest-to-use option of the three. Doing this, the user also doesn't need to fix /etc/hosts.
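For context on option 2: nip.io is a wildcard DNS service, so any name of the form <ip>.nip.io resolves back to the embedded IP, which is also why it needs a working internet connection. A quick check (requires internet access):

```python
import socket

# 172.18.1.100.nip.io resolves to 172.18.1.100 via nip.io's wildcard DNS,
# but only if public DNS is reachable -- the external dependency noted above.
print(socket.gethostbyname("172.18.1.100.nip.io"))  # expected: 172.18.1.100
```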
Regardless of what we decide we want to use, options 2 and 3 (as well as our current docs) make an assumption that doesn't hold in 100% of the cases: that the cluster IP is fixed at 172.18.1.100.

However, we don't know that. It should work most of the time, but I know, for example, that it doesn't work for @costrouc. Maybe he can fill in some details on when this won't be true?
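One way to see which address the load balancer actually received, instead of assuming 172.18.1.100, is to ask Kubernetes directly. A sketch only; the service and namespace names passed at the bottom are placeholders and depend on the deployment:

```python
import json
import subprocess


def load_balancer_address(service: str, namespace: str) -> str:
    """Read the external IP (or hostname) of a LoadBalancer service via kubectl."""
    raw = subprocess.check_output(
        ["kubectl", "get", "svc", service, "-n", namespace, "-o", "json"]
    )
    status = json.loads(raw)["status"]["loadBalancer"]
    # Cloud providers may hand back a hostname (CNAME) instead of an IP;
    # this raises if the load balancer has not been provisioned yet.
    ingress = status["ingress"][0]
    return ingress.get("ip") or ingress.get("hostname")


# Placeholder names -- substitute whatever your ingress controller uses.
print(load_balancer_address("ingress-controller", "dev"))
```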
Internally, we only know the correct IP after stage 4 (nebari/nebari/deploy.py, lines 102 to 104 in daecbcf). Meaning, setting it upfront during nebari init --domain is not guaranteed to work.

Expected behavior
I think we should allow setting no domain upfront and, instead of asking the user to set up their DNS when --disallow-prompt is not set (nebari/nebari/deploy.py, lines 128 to 131 in daecbcf), just use the IP we get from the load balancer.
OS and architecture in which you are running Nebari
Linux
How to Reproduce the problem?
Follow the local deployment how-to in an environment where the load balancer doesn't use 172.18.1.100 as its IP.

Command output
No response
Versions and dependencies used.
No response
Compute environment
None
Integrations
No response
Anything else?
No response