Do not expose additional ports #331
Conversation
E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1578/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-90.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1137/, NOTEBOOK TEST RESULTS

E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1579/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-126.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1138/, NOTEBOOK TEST RESULTS

E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1581/, Result: success, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-101.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1139/, NOTEBOOK TEST RESULTS
Some CanarieAPI configs that were modified should be ported to https://github.com/bird-house/birdhouse-deploy/blob/master/birdhouse/optional-components/canarie-api-full-monitoring/config/canarie-api/canarie_api_full_monitoring.py.template if not already defined in the list of public endpoints.
grafana:
  url: http://grafana:3000
  title: Grafana
  public: true
  c4i: false
  type: api
  sync_type: api
prometheus:
  url: http://prometheus:9090
  title: Prometheus
  public: true
  c4i: false
  type: api
  sync_type: api
alertmanager:
  url: http://alertmanager:9093
  title: AlertManager
  public: true
  c4i: false
  type: api
  sync_type: api
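For context, these entries appear to point at the internal container addresses (grafana:3000, prometheus:9090, alertmanager:9093) rather than at ports published on the host. A minimal sketch of what the matching compose definition looks like once nothing is published on the host (hypothetical and abbreviated, not the actual docker-compose fragment from this repository; the image name is an assumption):

  services:
    grafana:
      image: grafana/grafana
      # no "ports:" mapping; the proxy and Magpie/Twitcher reach it only as
      # http://grafana:3000 on the default compose network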
Just curious if this works (UI-wise, everything responds/reacts correctly)?
Have you tried validating access to those endpoints?
As of yesterday, no... I was still working on it, but everything should be working now. I'm not too familiar with what these pages should necessarily look like, so if @tlvu can take a look and let me know if anything looks off, I'd appreciate it.
- url: http://${PAVICS_FQDN}:8083/twitcher/ows/proxy/thredds
+ url: http://thredds:8080/twitcher/ows/proxy/thredds
Will need to check with @tlvu.
There was an issue regarding THREDDS that needed the full ${PAVICS_FQDN_PUBLIC}${TWITCHER_PROTECTED_PATH} to be defined as Magpie's URL so that Twitcher could redirect to the right location. Not sure if this impacts the catalog browsing/listing/access or not.
I actually didn't even mean to make this change since the catalog is being deprecated. It must have slipped in from the cherry-pick from the other branch.
I'm ok to either revert this change or leave it as is (since the catalog is deprecated).
If it works with thredds:8080, then it is better to change it. It is guaranteed to fail with ${PAVICS_FQDN}:8083 after changes are applied.
birdhouse/config/flyingpigeon/config/canarie-api/canarie_api_monitoring.py.template (outdated, resolved)
birdhouse/config/hummingbird/config/canarie-api/canarie_api_monitoring.py.template (outdated, resolved)
- proxy_pass http://${PAVICS_FQDN}:8087;
+ proxy_pass http://geoserver:8080/geoserver/;
That bypasses the whole Magpie/Twitcher auth. It should be adjusted to use the protected proxy location while we are modifying this.
Yes, I didn't make any changes to which services were behind Twitcher in this PR. As we discussed in #328, that feels like a change that should happen in a different PR.
Note that the monitoring routes were an exception to that rule, since putting those behind Twitcher was the only way I could think of to avoid exposing the monitoring ports while still protecting those routes.
Yes. It's ok if done through a subsequent PR. I just want to make sure we don't forget about it.
To Do: #333
- proxy_pass http://${PAVICS_FQDN}:8800/jupyter/;
+ proxy_pass http://jupyterhub:8000/jupyter/;
Bypasses the Magpie/Twitcher auth.
Is JupyterHub sufficient by itself to perform the Magpie login through the UI?
Yes, Jupyter can handle its own authentication by communicating with Magpie directly. I'd love to figure out how to put JupyterHub behind Twitcher eventually, but I spent a few days trying to figure it out and didn't come up with a good solution, so I put it back on the to-do list for later.
(Also, I didn't make any changes to which services are behind Twitcher in this PR, see my comment above.)
To Do: #334
Bypasses the Magpie/Twitcher auth.
@fmigneault FYI, Jupyter has never been behind Twitcher since day one; it has its own authentication.
- proxy_pass http://${PAVICS_FQDN}:9000/;
+ proxy_pass http://portainer:9000/;
Bypasses Magpie/Twitcher auth.
Yes... Portainer does manage its own authorization, so this is not necessarily insecure, but we could make it more secure by putting it behind Twitcher as well. I'd be open to double-protecting Portainer since it's such a powerful tool.
(Also, I didn't make any changes to which services are behind Twitcher in this PR, see my comment above.)
To Do: #335
E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1589/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-90.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1143/, NOTEBOOK TEST RESULTS

E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1590/, Result: success, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-126.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1144/, NOTEBOOK TEST RESULTS
Let me try it again today. I had some trouble logging into Grafana last time but did not have time to investigate.
… prometheus and alertmanager
PAVICS_FQDN_PUBLIC is the preferred public hostname over PAVICS_FQDN anyway, because we removed all external ports.
E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1922/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-46.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1247/, NOTEBOOK TEST RESULTS

E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1923/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-69.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1248/, NOTEBOOK TEST RESULTS

E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1924/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-20.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1249/, NOTEBOOK TEST RESULTS
@mishaschwartz We have a problem. All the Grafana graphs are broken in this PR. I have ensured they worked properly before updating to this PR. I honestly do not want to delay this PR any longer since we have way too many open already. How about you undo only the part about Grafana, keeping Prometheus and Alertmanager; those are working fine. The rest is fine too. Grafana happens to already have its own authentication, so that's not insecure, which is the spirit of this PR.
I managed to find a fix for the Grafana dashboard, but now I am losing stats for 4 of the graphs. It's as if Prometheus is unable to access node-exporter, which makes no sense since it is still able to get some stats from it. I still think we might want to roll back the changes for all the monitoring components so we can merge this PR. I do not have time to look more into those 4 broken graphs (Load, Unused Disk Space, Available Memory, Disk I/O) currently.
E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1927/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-46.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1250/, NOTEBOOK TEST RESULTS
@tlvu this looks like a problem caused by the fact that the node-exporter is defined as:

node-exporter:
  image: quay.io/prometheus/node-exporter:v1.0.0
  container_name: node-exporter
  volumes:
    - /:/host:ro,rslave
  network_mode: "host"
  pid: "host"
  command: --path.rootfs=/host
  restart: always

which makes it inaccessible to other containers (in this case it needs to be visible to prometheus).
Then this is even more weird because if I vaguely remember …
The data from cadvisor was always working; just the data from node-exporter was not accessible to prometheus. See the fix here: d4ee23f. Unfortunately it means that this one port (9100) is available outside of the docker network, but it's a requirement of node-exporter and there's nothing we can do about that if we want to use it.
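For illustration, here is a minimal sketch of the kind of Prometheus scrape configuration this implies. This is an assumption about the general shape of the fix, not the actual content of d4ee23f, and the target address is a hypothetical placeholder:

  scrape_configs:
    # node-exporter runs with network_mode: "host", so it is not reachable as
    # node-exporter:9100 on the compose network; prometheus has to scrape it
    # through the host itself on the one port left open (9100).
    - job_name: node-exporter
      static_configs:
        - targets:
            - "${PAVICS_FQDN}:9100"   # hypothetical placeholder for the docker host address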
@tlvu Here is what my test machine's Grafana looks like now:
E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1931/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-69.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1252/, NOTEBOOK TEST RESULTS
@mishaschwartz thanks for finding this. I confirm your fix worked on my side as well.
All good for me, just a little update needed for the changelog.
Merge away when ready.
- Do not expose additional ports:
  - Docker compose no longer exposes any container ports outside the default network except for ports 80 and 443 from the proxy container. This ensures that ports that are not intended for external access are not exposed to the wider internet even if firewall rules are not set correctly.
Add a note that node-exporter binds to host networking, so its port is also exposed.
E2E Test Results
DACCS-iac Pipeline Results: Build URL: http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1933/, Result: failure, BIRDHOUSE_DEPLOY_BRANCH: remove-external-ports, DACCS_CONFIGS_BRANCH: master, PAVICS_E2E_WORKFLOW_TESTS_BRANCH: master, PAVICS_SDI_BRANCH: master, DESTROY_INFRA_ON_EXIT: true, PAVICS_HOST: https://host-140-69.rdext.crim.ca
PAVICS-e2e-workflow-tests Pipeline Results: Tests URL: http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1253/, NOTEBOOK TEST RESULTS
Forgot to say: please bump minor and not patch on this one, since anyone relying on those open ports internally will have to change their habits/scripts.
@mishaschwartz Yes please, I would appreciate it if you could bump it. I've seen the request related to the bump but I won't be able to work on DACCS until next week. Thanks a lot!
Thanks @mishaschwartz!
Overview
Docker compose no longer exposes any container ports outside the default network except for ports 80 and 443 from the proxy container. This ensures that ports that are not intended for external access are not exposed to the wider internet even if firewall rules are not set correctly.
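As a rough illustration of the resulting layout (hypothetical and abbreviated; the real per-component compose files are more involved and the image names shown are assumptions):

  services:
    proxy:
      image: nginx
      ports:
        - "80:80"      # only the proxy publishes ports on the host
        - "443:443"
    jupyterhub:
      image: jupyterhub/jupyterhub
      # no "ports:" mapping; reachable only as http://jupyterhub:8000 on the
      # default compose network, and from outside only through the proxy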
Note that if the monitoring component is used then port 9100 will be exposed from the node-exporter container. This is because this container must be run on the host machine's network and unfortunately there is no known workaround that would not require this port to be exposed on the host machine.
Changes
Non-breaking changes
Breaking changes
Related Issue / Discussion
Additional Information
Links to other issues or sources.