Rethink the nginx vhost fragments system #256
In my experience using other […], they create a vhost file in […]. They don't take care of cleanups or deletions, so you have to add additional remove tasks for old vhost configurations if, for instance, you no longer need some vhost proxy configuration you were using in the past. They create a […]. As we want to configure some domain with several […]
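As an illustration of the extra work this causes, a cleanup like the following has to be written by hand; a minimal sketch, assuming a Debian-style `sites-enabled` layout and a hypothetical `old-service.conf` vhost name:

```yaml
# Hypothetical manual cleanup that such roles force on users: removing a
# vhost that is no longer wanted, since the role itself never deletes files.
# Paths and names are illustrative, not taken from any specific role.
- name: Remove obsolete vhost configuration
  file:
    path: /etc/nginx/sites-enabled/old-service.conf
    state: absent
  notify: reload nginx  # assumes a "reload nginx" handler exists in the play
```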
Adding an overall label to each call into the […]
Backwards compatibility has been manually handled for roles created here, so the failure should only apply to calls from other custom roles Signed-off-by: Peter Ansell <p_ansell@yahoo.com>
Great @ansell. I think this can fix one of the issues with this generator: https://github.com/vjrj/generator-living-atlas, when you configure several standalone services on the same machine and the same domain.
I return to this enhancement. Let me give some samples of our current situation in nodes like Austria (production) and Tanzania (demo). Both nodes are using a few VMs (2 and 3 respectively). With the current `nginx_vhost` implementation we need to use different hostnames for each service even if the services are on the same host. This is counter-intuitive and error-prone. For instance, an error in Austria was caused by the use of the same hostname for […].

The usual strategy in web servers for configuring vhosts is to key them on the service name; let's say https://serverdomain.org is configured in […], and this […].

My proposal to solve this is that every service role configures a common […] per domain. For instance, if on some hosts we use […], or […]; see the sketch below for the general idea.
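A minimal sketch of the proposal, with hypothetical role names and parameters (`hostname`, `context_path` are illustrative guesses, not necessarily the real role interface):

```yaml
# Hypothetical: two service roles declare the same domain, and each one
# contributes only its own location fragment to a shared vhost file such as
# /etc/nginx/sites-available/serverdomain.org, instead of each role needing
# to own a separate hostname.
- hosts: vm1
  roles:
    - { role: collectory, hostname: serverdomain.org, context_path: /collectory }
    - { role: ala_hub,    hostname: serverdomain.org, context_path: /ala-hub }
```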
So every role that needs a […]. Vhost deletions are not handled by Ansible, only the addition and modification of vhosts (like other vhost plugins do). For us, since we are using the LA ansible generator, this enhancement is quite important because it will let us install a complete national node with all basic services (including CAS, Spatial and https) across multiple hosts without errors. An example of the kind of command we are using:
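(A representative sketch with hypothetical inventory and playbook names, not the exact command.)

```sh
ansible-playbook -i inventories/node-demo.ini node-demo.yml -u ubuntu
```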
For other portals it will permit us to execute tags or standalone playbooks on a host without nginx issues.
The biggest issue right now for nginx server/location configurations is the requirement to be able to deploy to the same domain name from different playbooks at different times.
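A minimal sketch of that requirement, with hypothetical role parameters: two playbooks, run at different times, both target https://serverdomain.org, and neither may clobber the fragments the other one created:

```yaml
# playbook-a.yml -- run first, puts app-a behind serverdomain.org/app-a
- hosts: webserver
  roles:
    - { role: nginx_vhost, hostname: serverdomain.org, context_path: /app-a }

# playbook-b.yml -- run weeks later, adds app-b on the same domain; it must
# append its own fragment without removing or duplicating app-a's fragment.
- hosts: webserver
  roles:
    - { role: nginx_vhost, hostname: serverdomain.org, context_path: /app-b }
```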
The second biggest issue is that we are not declaring the domain name(s) in the original call to the `webserver` role (which calls the `nginx` role), only in calls to the `nginx_vhost` role. This could be improved, but there is no urgency at this point so it is fairly low priority.

The third biggest issue is the sporadic use of `tags` to isolate parts of a playbook for partial use, in which case we can't guarantee that a previous element will always have been called. Novice users of tags may think the system is broken, because we appear to support tags even though they are always going to break in strange and unusual ways when used by non-experts, due to the complex dependencies between roles and the way `nginx_vhost` is pulled in by applications along with the domain.

Some users also merge multiple applications that could easily be supported on separate domain names into single playbooks, which makes them more cumbersome to support than they should be. Unfortunately there is no technical direction at this point to fix those issues, so they are also a complicating factor in managing the nginx configuration at a higher level than the `nginx_vhost` + `vhost_fragments` persistence level.

In the multiple-playbooks case, we can't touch any of the previous nginx configuration for fear of removing, or duplicating, an essential element.
In the `nginx_vhost` case, it is too late to clean up previous fragments automatically at that stage, as a previous call to `nginx_vhost` may have already modified the current fragments.

Using `include` to pull in separate parts of the configuration would make it cleaner, but it wouldn't fix the requirements that we need to allow for in this case.

The current requirement is for users who know they are deploying from a single playbook to set up the vhost fragments they want to clear in their inventory file, which is picked up by the high-level `nginx` role (see the sketch below). This step is optional, as it can't be implemented any other way at this point without users having other complaints, which we would then need to verify individually.
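The variable below is a hypothetical stand-in for whatever the `nginx` role actually reads; it only shows the shape of the opt-in:

```yaml
# group_vars/all.yml (hypothetical variable name): fragments that the
# high-level nginx role is allowed to clear before roles re-create them.
# Safe only when everything for these domains is deployed from one playbook.
vhost_fragments_to_clear:
  - serverdomain.org
  - spatial.serverdomain.org
```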
This issue is open for suggestions on how to improve the overall nginx setup while keeping to the current (and likely future) requirements of different playbooks, and partial executions.
cc @vjrj