
💡 [Feature] Protected ports by design #222

Closed
Tracked by #26
fmigneault opened this issue Nov 12, 2021 · 2 comments · Fixed by #331
Labels: enhancement (New feature or request), security (Issues or features related to security concerns)

fmigneault (Collaborator) commented Nov 12, 2021

Description

The current docker-compose definitions rely on service ports being accessible on the host: incoming HTTP(S) requests go through the HTTPS /twitcher/ows/proxy endpoint, which validates the user's access/permissions, and are then redirected to the targeted service's port when access is allowed.

Up to this point, access to those ports has been restricted by external firewall rules that disallow incoming requests from targeting a port directly (only 443 is reachable from outside), while internal network resolution can still use them.

Although this approach works, it remains a security concern and a possible attack vector.
It is also problematic for cases where authentication should be required but can be bypassed by accessing the port directly.
For security reasons, I will not provide more details here on how to take advantage of this, but I can provide examples in private.

Furthermore, the current definitions do not make it clear that firewall rules are mandatory to protect resources.
Looking at component names and descriptions (e.g. all-public-access, database-external-ports, secure-thredds, etc.), everything suggests that the platform secures access by itself, when in fact nothing is protected unless the platform sits behind proper firewall control.

Firewall rules are good practice to limit the chance of unintentional access leaks, but the platform's design shouldn't rely solely on them to secure its own resources, given that securing them is precisely what it advertises.

Feature

The services should not bind host ports on the server (e.g. `network: default`).
Instead, they should take advantage of docker networks to restrict access to them as strictly as possible.

For example, /twitcher/ows/proxy/flyingpigeon would redirect (after Twitcher/Magpie validation) to flyingpigeon:8093 instead of <localhost>:8093, and the `ports: "8093:8093"` entry in docker-compose would be removed. An optional component such as https://github.com/bird-house/birdhouse-deploy/blob/master/birdhouse/optional-components/database-external-ports/docker-compose-extra.yml could be provided for debugging purposes, where the proxy would map all ports to their relevant <service>:<port>.

A network dedicated to this purpose (e.g. `network: services-access`) should also be used, to keep HTTPS proxy redirects distinct from incoming HTTPS requests and other internal networks. In other words, service-port redirects should not mingle with the networks used for database connections, the docker proxy, etc.
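A minimal docker-compose sketch of the idea (service, image, and network names are illustrative, not the actual birdhouse-deploy definitions):

```yaml
networks:
  services-access: {}   # dedicated network for proxy -> service redirects only

services:
  proxy:
    image: nginx:stable          # illustrative; stands in for the actual proxy
    ports:
      - "80:80"
      - "443:443"                # only the proxy publishes host ports
    networks:
      - default
      - services-access

  flyingpigeon:
    image: birdhouse/flyingpigeon   # illustrative image name
    # no `ports: "8093:8093"` entry: the service is only reachable as
    # flyingpigeon:8093 by containers on the services-access network
    networks:
      - services-access
```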

Concerned Organizations

fmigneault added the enhancement and security labels on Nov 12, 2021
tlvu (Collaborator) commented Nov 12, 2021

> For example, /twitcher/ows/proxy/flyingpigeon would redirect (after Twitcher/Magpie validation) to flyingpigeon:8093 instead of <localhost>:8093, and the `ports: "8093:8093"` entry in docker-compose would be removed.

Yeah I am okay with this proposal.

In fact, when I started working on the platform, I never understood why <localhost>:8093 was used instead of flyingpigeon:8093. That is probably why we have a hardcoded `docker-compose restart proxy` at the end of every `./pavics-compose.sh up`.

The only possible advantage I can see for using <localhost>:8093 is to limit destruction and re-creation of intermediate containers. E.g. if we have the dependency chain proxy -> twitcher -> magpie -> flyingpigeon and we re-deploy a new flyingpigeon, only the flyingpigeon container has to be destroyed and recreated; twitcher and magpie are untouched, and proxy is only restarted by `./pavics-compose.sh up`.

Using flyingpigeon:8093 with the `links` directive in docker-compose.yml, all 4 containers would have to be destroyed and recreated. Then again, containers are ephemeral, so frequent destruction and re-creation is not a problem.
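For what it's worth, a shared user-defined network would avoid the legacy `links` directive entirely: Docker's embedded DNS resolves service names on such networks, so re-deploying flyingpigeon would not cascade to the other containers. A rough sketch (image names are illustrative):

```yaml
# No "links:" needed: on a user-defined network, Docker's embedded DNS
# resolves "flyingpigeon" by service name, so proxy/twitcher/magpie hold
# no hard reference to the flyingpigeon container and are not recreated
# when it is re-deployed.
services:
  twitcher:
    image: pavics/twitcher          # illustrative image name
    networks:
      - services-access
  flyingpigeon:
    image: birdhouse/flyingpigeon   # illustrative image name
    networks:
      - services-access
networks:
  services-access: {}
```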

The only problem is that we would lose older container logs. For the proxy container, since the canarie-api stats are handled by the same container and not persisted to disk, we would also lose the stats.

So if we solve the problems above and do not find other issues, I see no reason not to implement this idea.

matprov (Collaborator) commented Nov 12, 2021

> The only problem is that we would lose older container logs. For the proxy container, since the canarie-api stats are handled by the same container and not persisted to disk, we would also lose the stats.

With log aggregation coming to the platform, this concern will disappear for the services whose logs are kept inside the logging system, as they will remain accessible even after a container is destroyed. The question to ask would rather be about the retention time of each service's logs, but that is not the question to answer here. The important thing is that losing logs is not a dealbreaker.

> Then again, containers are ephemeral, so frequent destruction and re-creation is not a problem.

Exactly! Again, not a dealbreaker here.

mishaschwartz added a commit that referenced this issue Aug 10, 2023
## Overview

Docker compose no longer exposes any container ports outside the default
network except for ports 80 and 443 from the proxy container. This
ensures that ports that are not intended for external access are not
exposed to the wider internet even if firewall rules are not set
correctly.

Note that if the `monitoring` component is used then port 9100 will be
exposed from the `node-exporter` container. This is because this
container must be run on the host machine's network and unfortunately
there is no known workaround that would not require this port to be
exposed on the host machine.
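
In docker-compose terms, the published ports after this change boil down to roughly the following (a sketch, not the exact file):

```yaml
services:
  proxy:
    ports:
      - "80:80"
      - "443:443"      # the only host ports published by default

  # With the optional monitoring component, node-exporter runs on the
  # host network, so port 9100 is necessarily reachable on the host:
  node-exporter:
    network_mode: host

  # All other services define no "ports:" entries and are reachable
  # only through the proxy on the compose networks.
```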

## Changes

**Non-breaking changes**

- Changes all internal URL references to components

**Breaking changes**

- removes the lb_flyingpigeon endpoint because it requires direct access
to an exposed port

## Related Issue / Discussion

- Resolves #222

## Additional Information


- This does not make any changes to the components that are deprecated
in #311