
HTTP proxy mode for egress-router #13586

Merged: 1 commit into openshift:master on Jun 9, 2017

Conversation

@danwinship (Contributor) commented Mar 30, 2017:

This adds an HTTP/HTTPS proxy mode to egress-router. If you don't specify an EGRESS_DESTINATION, then instead of acting as an iptables-based reflector, it will run squid and act as an HTTP proxy (to any destination).

I wanted to use something like tinyproxy, but that's not currently shipped as part of RHEL. It looked like squid and Apache were the only options; if anyone knows of a smaller HTTP proxy that is in RHEL, let me know. (haproxy is not an option; it only does reverse proxying.)

It doesn't have any configurability at the moment, but some could be added (e.g., limiting destination addresses?). We'd want to do that via generic environment variables in the pod spec, not by letting the user provide their own squid.conf, because we don't want to be tied to squid necessarily.

Also, maybe this should be explicitly configured somehow (EGRESS_ROUTER_MODE=http-proxy?) rather than being implicit if you don't specify EGRESS_DESTINATION. Or maybe it should be a separate image entirely?

@openshift/networking PTAL
Trello Card: https://trello.com/c/9RHw27K7

@danwinship (author) commented on the diff:

FYI:

# This is a mostly-transparent HTTP/HTTPS proxy; it only exists to
# redirect requests, not to authenticate them in any way.

http_port 8080
@danwinship (author):

8080 seemed like the most "generic HTTP proxy" port to me? I didn't want to use 3128, because that's explicitly squid's port.

A reviewer (Contributor):

8080 is fine, but 3128 wouldn't be bad either. FoxyProxy and the Android proxy config both default to 3128.

# redirect requests, not to authenticate them in any way.

http_port 8080
http_access allow all
@danwinship (author):

"Allow all HTTP requests from all sources"


http_port 8080
http_access allow all
cache deny all
@danwinship (author):

"Don't cache any requests"

http_access allow all
cache deny all
access_log none all
debug_options ALL,0
@danwinship (author):

"Don't log every request/response"
"Print only critical errors (level 0) in all log domains"

The container ends up logging

WARNING: Cannot write log file: /var/log/squid/cache.log
/var/log/squid/cache.log: No such file or directory
         messages will be sent to 'stderr'.

twice at startup. (It shows up in oc logs for the pod.) I couldn't find any way to get it to just log to stderr without complaining about it...

A reviewer (Contributor):

It's not the end of the world, but you could try /proc/self/fd/2 if the message really annoys you. I'd probably just live with the error, though.

@danwinship (author):

Yeah, that doesn't work either. (Same error.)

@knobunc (Contributor) left a comment:

LGTM, thanks Dan!

@danwinship (author) commented:

(pushed an update with the addition of shutdown_lifetime 0 to cause squid to exit immediately when you delete the pod, rather than lingering 30 seconds)
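
Taken together, the directives discussed in this review suggest a squid.conf along these lines (a reconstruction from the fragments quoted above, not necessarily the verbatim generated file):

```
# This is a mostly-transparent HTTP/HTTPS proxy; it only exists to
# redirect requests, not to authenticate them in any way.
http_port 8080          # generic proxy port (3128 is explicitly squid's)
http_access allow all   # allow all HTTP requests from all sources
cache deny all          # don't cache any requests
access_log none all     # don't log every request/response
debug_options ALL,0     # print only critical errors (level 0) in all domains
shutdown_lifetime 0     # exit immediately on SIGTERM, don't linger 30 seconds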

@danwinship changed the title from "HTTP proxy mode for egress-router" to "[DO NOT MERGE] HTTP proxy mode for egress-router" on Mar 30, 2017
@danwinship (author) commented:

We should implement this as two separate containers, with only the "setup" container being privileged.

# bash to react to SIGTERM if we explicitly set a trap for it, except
# that bash doesn't process signal traps while it is waiting for a
# process to finish, and we have to be waiting for a process to finish
# because there's no way to sleep forever within bash.
A reviewer (Contributor):

I have a slight feeling that this leaves the tail around as a zombie ... why not

trap "exit" TERM
while true; do
    sleep 60
done

@danwinship (author) replied:

It doesn't leave a zombie (note that this is existing code, it's just getting re-indented). But yeah, a loop-and-sleep would be more obvious anyway...

I'm also thinking about making the setup be done in an initContainer, and then run origin-pod as the "main" container. (Or is that "cheating" because in the future origin-pod might do something other than just sit and wait for a SIGTERM?)
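
For what it's worth, the loop-and-sleep variant suggested above can be sketched as a standalone script (the 60-second interval is shortened here purely so the trap fires quickly in testing; the interval is a trade-off between wakeup frequency and shutdown latency):

```shell
#!/bin/bash
# Wait forever, but exit promptly on SIGTERM. bash only runs the trap
# after the foreground sleep returns, so the sleep interval bounds how
# long shutdown can take after the pod is deleted.
trap "exit 0" TERM
while true; do
    sleep 1
done
```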

A reviewer (Contributor):

Using an init container doesn't sound like a bad idea to me, but I'm not sure I can be authoritative on that.

@imcsk8 (Contributor) commented Apr 3, 2017:

You could touch /var/log/squid/cache.log in the Dockerfile in order to avoid the warning message

@danwinship (author) replied:

> You could touch /var/log/squid/cache.log in the Dockerfile in order to avoid the warning message

No, then it would actually log to that file; I want it to log to stderr, so that any errors will show up in "oc logs".

@danwinship (author) commented Apr 26, 2017:

Rebased on top of #13837. Now it uses two containers (a privileged setup container and an unprivileged squid container), is much more configurable, and has a unit test.

If you wanted to test it:

    apiVersion: v1
    kind: Pod
    metadata:
      name: egress-http
      labels:
        name: egress-http
      annotations:
        pod.network.openshift.io/assign-macvlan: "true"
        pod.beta.kubernetes.io/init-containers: '[
            {
                "name": "egress-init-container",
                "image": "openshift/origin-egress-router",
                "imagePullPolicy": "IfNotPresent",
                "env": [
                    { "name": "EGRESS_SOURCE", "value": "172.17.0.99" },
                    { "name": "EGRESS_GATEWAY", "value": "172.17.0.1" },
                    { "name": "EGRESS_ROUTER_MODE", "value": "http-proxy" }
                ],
                "securityContext": { "privileged": true }
            }
        ]'
    spec:
      containers:
      - name: egress-proxy
        image: openshift/origin-egress-http-proxy
        imagePullPolicy: IfNotPresent
        env:
        - name: EGRESS_HTTP_PROXY_DESTINATION
          value: |
            !*.example.com
            *

(That means "allow connections anywhere except example.com and its subdomains". See the unit test for more syntax examples; I'll file a docs bug once this is finalized. Note that once the kube 1.6 rebase completes, you'll be able to specify the init container normally rather than as an annotation.)
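
As an illustration of what such a destination list could turn into, squid's stock ACL syntax can express the same policy. This is a hypothetical hand-written equivalent, not the configuration the image actually generates:

```
# Deny example.com and all of its subdomains, allow everything else.
# (A leading dot in a dstdomain ACL matches the domain and its subdomains.)
acl blocked_dst dstdomain .example.com
http_access deny blocked_dst
http_access allow all
```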

@danwinship (author) commented:

Just to clarify: this is reviewable; it's marked do-not-merge only because it's built on top of #13837, which isn't merged yet. (Only the last commit is new in this PR.)

@danwinship changed the title from "[DO NOT MERGE] HTTP proxy mode for egress-router" to "HTTP proxy mode for egress-router" on May 17, 2017
@dcbw (Contributor) commented May 27, 2017:

LGTM

@openshift-bot openshift-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 27, 2017
@knobunc (Contributor) commented May 30, 2017:

[merge]

@knobunc (Contributor) commented May 31, 2017:

This is not a test failure... it's me forgetting the merge state. I'll re-merge this on Monday.

@openshift-bot:

[Test]ing while waiting on the merge queue

@openshift-bot openshift-bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 5, 2017
Commit message:

Update the base egress-router image and add a new egress-http-proxy
image that runs squid to proxy HTTP connections to some or all
destinations.
@danwinship (author) commented:

flake #14229, [test]

@openshift-bot:

Evaluated for origin test up to 351b252

@openshift-bot:

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/1943/) (Base Commit: 09f08bd)

@openshift-bot:

Evaluated for origin merge up to 351b252

@openshift-bot commented Jun 9, 2017:

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/936/) (Base Commit: 2458531) (Image: devenv-rhel7_6333)

@openshift-bot openshift-bot merged commit cc2ed8f into openshift:master Jun 9, 2017
@danwinship danwinship deleted the egress-router-proxy branch June 10, 2017 14:59