contour serve should take its configuration from a configuration file #1130

Closed
davecheney opened this issue May 29, 2019 · 4 comments · Fixed by #1253

davecheney (Contributor) commented May 29, 2019

contour serve currently takes all its configuration from command line flags. As of the end of May, it looks like this

% contour serve --help
usage: contour serve [<flags>]

Serve xDS API traffic

Flags:
  --help                         Show context-sensitive help (also try --help-long and --help-man).
  --incluster                    use in cluster configuration.
  --kubeconfig="/Users/dfc/.kube/config"  
                                 path to kubeconfig (if not in running inside a cluster)
  --xds-address="127.0.0.1"      xDS gRPC API address
  --xds-port=8001                xDS gRPC API port
  --stats-address="0.0.0.0"      Envoy /stats interface address
  --stats-port=8002              Envoy /stats interface port
  --debug-http-address="127.0.0.1"  
                                 address the debug http endpoint will bind to
  --debug-http-port=6060         port the debug http endpoint will bind to
  --http-address="0.0.0.0"       address the metrics http endpoint will bind to
  --http-port=8000               port the metrics http endpoint will bind to
  --envoy-http-access-log="/dev/stdout"  
                                 Envoy HTTP access log
  --envoy-https-access-log="/dev/stdout"  
                                 Envoy HTTPS access log
  --envoy-external-http-port=80  External port for HTTP requests
  --envoy-external-https-port=443  
                                 External port for HTTPS requests
  --envoy-service-http-address="0.0.0.0"  
                                 Kubernetes Service address for HTTP requests
  --envoy-service-https-address="0.0.0.0"  
                                 Kubernetes Service address for HTTPS requests
  --envoy-service-http-port=8080  
                                 Kubernetes Service port for HTTP requests
  --envoy-service-https-port=8443  
                                 Kubernetes Service port for HTTPS requests
  --use-proxy-protocol           Use PROXY protocol for all listeners
  --ingress-class-name=INGRESS-CLASS-NAME  
                                 Contour IngressClass name
  --ingressroute-root-namespaces=INGRESSROUTE-ROOT-NAMESPACES  
                                 Restrict contour to searching these namespaces for root ingress routes

In addition to there being a lot of configuration flags that affect the state of a contour serve process, flags can only express values that act like simple strings or numbers. Configuration such as a JSON format for HTTP logs (#624) would be unwieldy to express as a CLI flag.

I propose that the majority of these configuration flags move into a configuration file which must be provided for contour serve to start, e.g.

% contour serve -c /path/to/config

The format of the configuration file is not important, but it would probably be JSON or YAML. If it were a gRPC (protobuf) version of the same, that would be consistent with Envoy's configuration file and would simplify parsing and validation of the configuration file.

It is imperative that whatever configuration format is chosen, it must support inline comments.
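
As a sketch only, assuming a YAML flavour: every key shown below is hypothetical and nothing here is implemented; it simply illustrates the shape of the idea, including the inline comments.

# Hypothetical contour serve configuration file. All keys are illustrative only.
xds-address: 127.0.0.1          # xDS gRPC API address
xds-port: 8001                  # xDS gRPC API port
stats-address: 0.0.0.0          # Envoy /stats interface address
stats-port: 8002                # Envoy /stats interface port
use-proxy-protocol: false       # use PROXY protocol for all listeners
ingressroute-root-namespaces:   # structured values become possible
- projectcontour-roots
- team-a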

Why not a config map?

I want to call out specifically why I'm advocating for a file on disk, not a config map. This is for several reasons.

  1. Developer experience. A file on disk makes it simple to run contour serve locally which is a key part of the development inner loop. If you have to connect to a running k8s cluster to obtain Contour's configuration that adds friction to the process.
  2. Smoke testing; if you change a config map, every instance sees the change immediately. With a config file you can deploy a canary against a new revision of the configuration file without affecting existing instances.
  3. Configuration lifecycle. Many of the current CLI flags affect the lifecycle of the whole Contour process. If they change, we need to restart Contour to reflect those changes. At the moment as Contour's flags live in the Deployment object, editing those causes the process to restart. If we moved away from CLI flags to something that Contour could watch via the API server that raises questions about when a configuration change would be actioned. Should Contour apply those configuration changes on the fly? This design would force Contour into a mode where it reloaded its configuration internally. From experience, this is a difficult pattern to get right, and best avoided if possible.

Storing configuration on disk has a very well defined lifecycle: change the config file, restart the process to pick up the changes. That applies equally in a local deployment as it does inside k8s. Once #881 is done the cost of restarting a Contour process will drop significantly.

To be clear, we may use a config map to hold the contents of the config file for deployment, but during deployment the config file will be mounted in a volume, not referenced directly.
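
For concreteness, a deployment along these lines might look something like the sketch below. The ConfigMap name, mount path, image tag, and the -c flag spelling are all assumptions at this point, not the final design.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: contour
spec:
  selector:
    matchLabels:
      app: contour
  template:
    metadata:
      labels:
        app: contour
    spec:
      containers:
      - name: contour
        image: gcr.io/heptio-images/contour:master
        command: ["contour"]
        args: ["serve", "-c", "/config/contour.yaml"]  # config file read from the mounted volume
        volumeMounts:
        - name: contour-config
          mountPath: /config
      volumes:
      - name: contour-config
        configMap:
          name: contour  # ConfigMap holding the contents of contour.yaml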

davecheney added the kind/feature, help wanted, priority/important-longterm, blocked/needs-design, and blocked labels on May 29, 2019
davecheney removed the help wanted and blocked labels on May 30, 2019
davecheney added this to the 0.14.0 milestone on May 31, 2019
davecheney (Contributor, Author) commented:

Thinking about this some more: while it would be beneficial to have the lifecycles of Contour and Envoy separated, it isn't mandatory in order to deliver this feature.

Removing the blocked tag; this is tentatively assigned to 0.14.

davecheney (Contributor, Author) commented:

Hello.

This issue is my major contribution to the 0.14 cycle. This is an update for anyone subscribed to this issue or waiting on #1124 or #1053.

The design of this feature is straightforward: add a configuration file as an additional source of configuration information for the contour serve subcommand. The primary driver of this feature is to have a place to put the configuration for JSON access logs (#1053), as it is believed that there is no reasonable default value for a JSON access log format.

In implementing this feature I have hit the following issues.

User Experience

Currently contour serve takes configuration from two sources: command line flags and environment variables. The latter is implemented by defining a CLI flag and configuring kingpin to source the default value from an environment variable. In practice this works quite well, as Kubernetes currently does not permit environment variables to be used in deployment manifests; that is, you cannot write

spec:
  containers:
  - command: ["contour"]
    args:
    - "serve"
    - "--envoy-service-address=$CONTOUR_POD_IP"

($CONTOUR_POD_IP is a made up environment variable, I’ve no idea if it exists in the real world)

However, if we define the --envoy-service-address flag to take its default value from the $CONTOUR_POD_IP environment variable, the desired result is achieved.

This arrangement is not without downsides; env vars are somewhat hidden compared to explicit command line arguments. However, there is a clear order of precedence: env vars are overridden by command line flags. Period.

If we add a configuration file into the mix, this increases the number of places a piece of configuration may be defined to three, and increases the number of possible orderings in which one configuration form overrides another to 3 factorial (six).

I'm faced with a decision: now that we have a configuration file, do I move configuration flags into the file? Not doing so would create a bizarre order-of-precedence problem. Arguably the operator would want the precedence order to be (lowest to highest)

env vars => config file => command line flags.

If we defined the precedence order to be

env vars => command line flags => config file, then the config file as defined overrides both the hidden parameters in the environment and the explicit ones in the deployment.

So, clearly there is an argument that fields in the configuration file supplant command line flags. If that is so, then what do we do about the command line flags that previously took their values from the environment? Does the configuration file represent a template that contour processes with respect to the current environment on startup? Possibly, but that is unusual and surprising.

The situation I find myself in is that the transition from env vars/CLI flags to a config file will probably be incomplete. All contour users will have to adapt to a model where some configuration is defined in CLI flags, and other configuration is placed inside a configuration file. This is an unsatisfying result.

Configuration file maintenance

Just as envoy requires a configuration file to start, shortly so shall contour. The question for a cluster administrator is “where does that configuration file come from?”.

Classical Kubernetes dictates that a configuration file's contents are stored in a ConfigMap and then "mounted" as a file in a temporary volume. This is in fact how the earliest versions of contour bootstrapped Envoy, before contour bootstrap existed. The difficulty with this approach is that the contents of a single file on disk must be represented as a literal value for a single YAML key. As the contents of the configuration file will likely be some form of YAML or JSON, we have a nesting and quoting problem. This is further exacerbated by the need to embed a quoted JSON template describing the desired JSON access log format. People who are used to shell meta quoting hijinx will be shuffling uncomfortably in their chairs at this point.
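
To illustrate the layering, here is a sketch of what such a ConfigMap might look like. The key names inside contour.yaml are hypothetical, and the %...% tokens stand in for Envoy access log command operators; the point is that the config file is YAML embedded as a literal string inside more YAML, with quoted JSON embedded inside that.

apiVersion: v1
kind: ConfigMap
metadata:
  name: contour
data:
  contour.yaml: |
    # the entire config file is the literal value of a single ConfigMap key
    accesslog-format: json
    accesslog-json-template: '{"ts":"%START_TIME%","method":"%REQ(:METHOD)%","code":"%RESPONSE_CODE%"}'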

To be continued

To make it clear, if you’ve had the courage to read this far, optional JSON logging is a feature that we plan for 0.15. That’s not being bumped. However, I wanted to give everyone subscribed to this issue an understanding of the problems that I’m considering as I work through the implementation of this feature.

Thank you for your time.

stevesloka (Member) commented:

In practice this works quite well as kubernetes currently does not permit environment variables to be used in deployment manifests

You are able to reference ENV variables from inside a deployment manifest. You can even have those ENV vars sourced from a Secret or ConfigMap, so they are dynamic based upon the Secret/ConfigMap that is referenced. We used to do this until it got hardcoded, so if a user changed the ports that Contour served on, Envoy's deployment manifest would pick that up dynamically: https://github.com/heptio/contour/blob/6125ee57b585ba92f8885838be7b450a5503340c/examples/ds-hostnet-split/03-envoy.yaml#L66-L69
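
A minimal sketch of that pattern follows; the ConfigMap name and key are made up for illustration. Environment variables in a pod spec can be populated via valueFrom (from the downward API, a Secret, or a ConfigMap) and then referenced in args using $(VAR_NAME) expansion.

spec:
  containers:
  - name: contour
    command: ["contour"]
    args: ["serve", "--envoy-service-http-port=$(ENVOY_HTTP_PORT)"]  # $(VAR) is expanded by the kubelet
    env:
    - name: CONTOUR_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP   # downward API
    - name: ENVOY_HTTP_PORT
      valueFrom:
        configMapKeyRef:
          name: contour-ports       # hypothetical ConfigMap
          key: http-port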

Other projects like kind use a similar pattern with configuration files. They allow specific command line args to be passed, but most custom pieces are tucked away in the configuration file. They specify a YAML file with a version and tie the configuration to that version: https://kind.sigs.k8s.io/docs/user/quick-start#configuring-your-kind-cluster
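
For reference, a kind configuration file follows roughly this shape; the exact apiVersion string varies between kind releases, so treat this as approximate.

kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3  # version string differs between kind releases
nodes:
- role: control-plane
- role: worker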

davecheney (Contributor, Author) commented:

You are able to reference ENV variables from inside a deployment manifest.

WOW. TIL. Is this a recent change? I have a memory from a year ago that if you wanted to use env vars in a deployment command line, the best way was to wrap your command in a shell script that did the expansion for you.

davecheney added a commit to davecheney/contour that referenced this issue Jul 19, 2019
Fixes projectcontour#1130

This PR adds the ability to supply configuration values to contour serve
from a configuration file. Some gymnastics are required to make this
work with kingpin and keep the precedence of config file -> env vars ->
cli flags.

At the moment only a few values are plumbed into the config file, mainly
to keep the linters happy. More will be added in 0.15.

Signed-off-by: Dave Cheney <dave@cheney.net>
davecheney added a commit that referenced this issue Jul 19, 2019