# Testing HAProxy & DataPlane API with a Container

The easiest way to see HAProxy working as a load balancer is to run it in a container:

1. Build the image:

   ```shell
   make build-image
   ```
2. Start the haproxy image in detached mode, mapping the secure DataPlane API port (5556) and the load balancer's port (8085) to the local host:

   ```shell
   docker run -it --name haproxy -d --rm -p 5556:5556 -p 8085:8085 haproxy
   ```
3. Create a frontend configuration:

   ```shell
   $ curl -X POST \
     --cacert example/ca.crt \
     --cert example/client.crt --key example/client.key \
     --user client:cert \
     -H "Content-Type: application/json" \
     -d '{"name": "lb-frontend", "mode": "tcp"}' \
     "https://localhost:5556/v2/services/haproxy/configuration/frontends?version=1"
   {"mode":"tcp","name":"lb-frontend"}
   ```
4. Bind the frontend configuration to `*:8085`:

   ```shell
   $ curl -X POST \
     --cacert example/ca.crt \
     --cert example/client.crt --key example/client.key \
     --user client:cert \
     -H "Content-Type: application/json" \
     -d '{"name": "lb-frontend", "address": "*", "port": 8085}' \
     "https://localhost:5556/v2/services/haproxy/configuration/binds?frontend=lb-frontend&version=2"
   {"address":"*","name":"lb-frontend","port":8085}
   ```
5. At this point it is already possible to curl the load balancer, even though no backend server is available yet to answer the query:

   ```shell
   $ curl http://localhost:8085
   <html><body><h1>503 Service Unavailable</h1>
   No server is available to handle this request.
   </body></html>
   ```
6. Create a backend configuration (it is bound to the frontend in the next step):

   ```shell
   $ curl -X POST \
     --cacert example/ca.crt \
     --cert example/client.crt --key example/client.key \
     --user client:cert \
     -H "Content-Type: application/json" \
     -d '{"name": "lb-backend", "mode":"tcp", "balance": {"algorithm":"roundrobin"}, "adv_check": "tcp-check"}' \
     "https://localhost:5556/v2/services/haproxy/configuration/backends?version=3"
   {"adv_check":"tcp-check","balance":{"algorithm":"roundrobin","arguments":null},"mode":"tcp","name":"lb-backend"}
   ```
7. Update the frontend to use the backend:

   ```shell
   $ curl -X PUT \
     --cacert example/ca.crt \
     --cert example/client.crt --key example/client.key \
     --user client:cert \
     -H "Content-Type: application/json" \
     -d '{"name": "lb-frontend", "mode": "tcp", "default_backend": "lb-backend"}' \
     "https://localhost:5556/v2/services/haproxy/configuration/frontends/lb-frontend?version=4"
   {"default_backend":"lb-backend","mode":"tcp","name":"lb-frontend"}
   ```
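   After the steps above, the relevant portion of the generated `haproxy.cfg` should look roughly like the following. This is a sketch: the exact formatting and option names depend on the DataPlane API version in use.

   ```
   frontend lb-frontend
       mode tcp
       bind *:8085
       default_backend lb-backend

   backend lb-backend
       mode tcp
       balance roundrobin
       option tcp-check
   ```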
8. Run two simple web servers in detached mode, named http1 and http2:

   ```shell
   docker run --rm -d -p 8086:80 --name "http1" nginxdemos/hello:plain-text && \
   docker run --rm -d -p 8087:80 --name "http2" nginxdemos/hello:plain-text
   ```
9. Add the first web server to the backend configuration:

   ```shell
   $ curl -X POST \
     --cacert example/ca.crt \
     --cert example/client.crt --key example/client.key \
     --user client:cert \
     -H "Content-Type: application/json" \
     -d '{"name": "lb-backend-server-1", "address": "'"$(docker inspect http1 -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')"'", "port": 80, "check": "enabled", "maxconn": 30, "verify": "none", "weight": 100}' \
     "https://localhost:5556/v2/services/haproxy/configuration/servers?backend=lb-backend&version=5"
   {"address":"172.17.0.2","check":"enabled","maxconn":30,"name":"lb-backend-server-1","port":80,"weight":100}
   ```
10. With the first web server attached to the load balancer's backend configuration, it should now be possible to query the load balancer and get more than an empty reply:

    ```shell
    $ curl http://localhost:8085
    Server address: 172.17.0.2:80
    Server name: 456dbfd57701
    Date: 21/Dec/2019:22:22:22 +0000
    URI: /
    Request ID: 7bcabcecb553bcee5ed7efb4b8725f96
    ```

    Sure enough, the server address 172.17.0.2 is the same as the IP address reported for lb-backend-server-1 above!

11. Add the second web server to the backend configuration:

    ```shell
    $ curl -X POST \
      --cacert example/ca.crt \
      --cert example/client.crt --key example/client.key \
      --user client:cert \
      -H "Content-Type: application/json" \
      -d '{"name": "lb-backend-server-2", "address": "'"$(docker inspect http2 -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')"'", "port": 80, "check": "enabled", "maxconn": 30, "verify": "none", "weight": 100}' \
      "https://localhost:5556/v2/services/haproxy/configuration/servers?backend=lb-backend&version=6"
    {"address":"172.17.0.3","check":"enabled","maxconn":30,"name":"lb-backend-server-2","port":80,"weight":100}
    ```
12. Now that both web servers are connected to the load balancer, use curl to query the load-balanced endpoint a few times to validate that both backend servers are being used:

    ```shell
    $ for i in {1..4}; do curl http://localhost:8085 && echo; done
    Server address: 172.17.0.2:80
    Server name: 456dbfd57701
    Date: 21/Dec/2019:22:26:51 +0000
    URI: /
    Request ID: 77918aee58dd1eb7ba068b081d843a7c

    Server address: 172.17.0.3:80
    Server name: 877362812ed9
    Date: 21/Dec/2019:22:26:51 +0000
    URI: /
    Request ID: 097ccb892b565193f334fb544239fca6

    Server address: 172.17.0.2:80
    Server name: 456dbfd57701
    Date: 21/Dec/2019:22:26:51 +0000
    URI: /
    Request ID: 61022aa3a8a37cdf37541ec1c24b383e

    Server address: 172.17.0.3:80
    Server name: 877362812ed9
    Date: 21/Dec/2019:22:26:51 +0000
    URI: /
    Request ID: 2b2b9a0ef2e4eba53f6c5c118c10e1d8
    ```

    It's alive!
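    Note how the `version` query parameter climbs from 1 to 6 across the steps: each successful write to the DataPlane API bumps the configuration version, and the next request must supply the new value. A minimal sketch of tracking it locally (the `next_version` helper name is illustrative, not part of the API):

    ```shell
    # Track the DataPlane API configuration version locally so each
    # write request can pass the expected ?version= value.
    version=1
    next_version() {
      # a successful POST/PUT bumps the server-side version by one
      version=$((version + 1))
      echo "$version"
    }
    next_version   # after the first write, requests use version=2
    next_version   # then version=3, and so on
    ```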

13. Stop haproxy and kill the web servers:

    ```shell
    docker kill haproxy http1 http2
    ```
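
The repeated curl boilerplate above can be folded into a small helper. The following is a sketch, assuming the `example/` certificate paths from this guide; the `dpapi_cmd` name and its echo-instead-of-execute behavior are illustrative, not part of the project:

```shell
# Hypothetical wrapper that assembles the common DataPlane API curl
# invocation; it echoes the command rather than executing it, so the
# shape can be inspected without a running container.
DPAPI="https://localhost:5556/v2/services/haproxy/configuration"

dpapi_cmd() {
  method=$1; path=$2; body=$3; version=$4
  echo curl -X "$method" \
    --cacert example/ca.crt \
    --cert example/client.crt --key example/client.key \
    --user client:cert \
    -H "Content-Type: application/json" \
    -d "$body" \
    "$DPAPI/$path?version=$version"
}

dpapi_cmd POST frontends '{"name": "lb-frontend", "mode": "tcp"}' 1
```

Once the echoed command looks right, dropping the leading `echo` (or piping the output to `sh`) runs it for real.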