There is a framework for adding end-to-end tests to the OSS suite release repo #39

izgeri opened this issue Feb 14, 2020 · 9 comments

@izgeri (Contributor) commented Feb 14, 2020

The suite release repo has a pipeline set up so that end-to-end tests can be added to the project easily.

@doodlesbykumbi (Contributor)

Some questions and considerations for putting together this framework:

  1. What is the goal of these tests?

    Each component will likely already have its own test suite. The focus here is likely to be smoke tests that ensure the happy paths between components within the release work as expected.

  2. What environment(s) are needed to exercise these tests and situate the involved components?

    Kubernetes seems like the lowest common denominator among components like Conjur OSS, the Conjur Helm chart, the Kubernetes authenticator, and Secretless. The E2E tests would likely benefit from running within Kubernetes.

    Perhaps there's no hard requirement for that to be a production-grade cluster; instead we might be able to leverage Kubernetes in Docker (KinD).

  3. Will the E2E (end-to-end) tests be one-off for any given release, or do we anticipate that there'd be value in repeated runs?

    This is important because there'll be X number of releases existing in parallel. I'm not sure it'd be feasible to keep testing each of them over and over again, but I didn't want to dismiss the idea without putting it out to the floor.

  4. Could the form of the E2E tests change between releases as components change?

    This will likely happen as the interfaces of components change, and the testing framework would need to be able to accommodate it.

    One idea to deal with this is copy-on-change: when a component changes, we copy the previous test and modify it as necessary to accommodate the new scenario, while maintaining everything before that point.

  5. Are imperative bash scripts good enough for defining the test cases, or do we want to create some layer of abstraction to push for something more declarative?

    It would be nice to have a clean abstraction that allows for declarative test definitions.
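
For illustration only, a declarative definition might look something like the sketch below, in Go. All of these type names are hypothetical; nothing like this exists yet.

```go
// Hypothetical sketch of declarative test scenario definitions in Go.
// All names here are illustrative assumptions, not existing code.
package e2e

// Component pins a suite component to the version under test.
type Component struct {
	Name    string // e.g. "conjur-oss", "secretless"
	Version string // e.g. "v1.4.2"
}

// Step is a named action plus an assertion on its output.
type Step struct {
	Name   string
	Run    func() (output string, err error)
	Expect func(output string) error
}

// Scenario declares what to deploy and which steps to run,
// leaving the "how" to the framework.
type Scenario struct {
	Components []Component
	Steps      []Step
}
```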

@doodlesbykumbi (Contributor)

I'll try to put together an E2E test case with these considerations in mind and share my findings here.

@izgeri (Contributor, Author) commented Feb 20, 2020

Just to respond to a few of your notes:

  • The main goal of this test framework is to enable us to test key use cases / end-to-end flows that involve more than just a single component + Conjur. Most individual components already have integration tests with Conjur - we don't want to duplicate that work, but it would be good to have the ability to test user workflows involving multiple components + Conjur here.

  • The most important thing is that we can run this test using the (current) pinned versions of the components, so that we can validate the suite release before we tag it (i.e. by creating a branch that bumps the pins and running the test suite in that branch).

  • If there is a way to run the tests where the versions can be inputs, then we can optionally run the suite tests against other configurations as needed - maybe this could be done just by creating a branch with different pins and letting the suite run there.

  • I find the KinD idea interesting - my only question is whether there are some "not quite like in production" issues we'll run into and have to deal with later.

I encourage you to think of these end-to-end test cases as corresponding to end user use cases or workflows - it may help in how you think about writing them.

@doodlesbykumbi (Contributor)

So if we're looking for high-fidelity tests, then KinD isn't so great. It might still be useful for local iteration, in the way that minikube might be.

@doodlesbykumbi (Contributor)

My latest thoughts regarding this testing framework, taking into account your comment @izgeri. The content below is still a bit abstract and talks about the sort of tests we'd like rather than how we'll implement them. I'm still thinking about that and trying things out.

Testing OSS release suites

The release suites are a combination of Conjur (the root dependency), deployment tools for Conjur, and consumption tools for Conjur. Each tool will likely already have independent tests to ensure that it works in isolation with Conjur. The purpose of these tests is to ensure that Conjur deployed by a particular tool can be consumed by another tool. Our suite of tests is then made up of combinations of a deployment tool and a consumption tool.

Conjur OSS

This component is standalone and no tests are exercised on it. For deployment to any given environment we need only input the version.

Deployment tools

Conjur AWS

AWS CloudFormation templates for deploying Conjur OSS.

Conjur Helm Chart

This component deploys Conjur to Kubernetes (not OpenShift). For deploying Conjur, the helm chart can take a version.

Clients

These components provide language-specific capabilities for interacting with Conjur. They depend on Conjur OSS.

Kubernetes authenticator

This component authenticates workloads running in Kubernetes.

Secretless

This component facilitates secured connections to target services. Ultimately it's just another client.

E2E test cases

Below are some example test cases:

Client library usage of Conjur deployed via CloudFormation. In this case the clients connect to a publicly available Conjur instance using API keys.

Given "vXXX" Conjur is deployed on AWS using conjur-aws
For-each client in release-clients {
  Then client is able fetch secrets from Conjur using an API key.
}

Client library usage of the Kubernetes authenticator on Conjur deployed via Helm. In this case the clients make use of the Kubernetes authenticator client sidecar. This excludes Secretless, since it has this behaviour built in.

Given "vXXX" Conjur via Helm is deployed with the authenticators "kubernetes" enabled
For-each client in release-clients {
  A client application is deployed on Kubernetes with the Kubernetes authenticator sidecar "vYYY"
  Then client is able fetch secrets from Conjur using credentials from the Kubernetes authenticator.
}

Secretless usage of Kubernetes authenticator on Conjur deployed via Helm.

Given "vXXX" Conjur is deployed via Helm with the authenticators "kubernetes" enabled
A client application is deployed on Kubernetes with the Secretless sidecar "vYYY"
Then the application is able to connect via a connection from Secretless that uses the Kubernetes authenticator

@izgeri (Contributor, Author) commented Feb 25, 2020

I'm not sure it's accurate that every test case will be "deployment + Conjur OSS + consumer" - at a minimum, when you use the authn-k8s client you are typically using something like "deployment + Conjur OSS + authn-k8s client (consumer 1) + client library (consumer 2)"

I agree that the consumer end is where the most customization may happen, though. What if we want to run some test cases where Ansible + Jenkins are somehow used in concert? This seems reasonable to me - have a Conjur OSS instance up, your app runs a Jenkins pipeline which uses creds to push to a registry, and then Ansible deploys your app to prod.

To be clear, the only test case we need to implement at first is helm chart + conjur oss + secretless - but in considering the design of the test framework, it's good to have a few other possible future examples in mind:

  • helm chart + conjur oss + authn-k8s client + client library
  • helm chart + conjur oss + authn-k8s client + summon
  • helm chart + conjur oss + secrets provider
  • conjur-aws + conjur oss + jenkins plugin + ansible (?)

@doodlesbykumbi (Contributor)

Agreed. A single consumer would be oversimplifying. We can call it Deployment + Conjur OSS + Consumer (can be an arbitrary combination of consumers that each have independent process boundaries).

I'm thinking that if I spend too much time thinking about the framework in a vacuum then I might end up wasting cycles just spinning wheels in one place.

For now we can focus on an implementation of the helm chart + conjur oss + secretless scenario. This likely won't embody the best framework but would allow us to circle back to this card with some concrete inspiration.

@sgnn7 (Contributor) commented Feb 26, 2020

My thoughts on this:

  1. What is the goal of these tests?

Smoke tests of the full e2e flow of some basic functionality of each component.

  2. What environment/s are needed to exercise these tests and situate the involved components?

Yup. K8s and OC would probably be the most wanted candidates, though we should stick to K8s/minikube/KinD for now to make sure our users can run the same tests and that we don't need any privileged infra.

  3. Will the E2E (end-to-end) tests be one-off for any given release or do we anticipate that there'd be value in repeated runs?

I think we should only run one set of tests per OSS suite release; once we shrinkwrap an old OSS release, we can probably leave it as-is. I expect re-runnable tests will be needed but won't need to be frequent (maybe only on pre-releases/releases?).

  4. Could the form of the E2E tests change between releases as components change?

Yeah, this will happen, but we probably don't need to worry about it until it does. We should make our stuff modular though, just as a best practice.

  5. Are imperative bash scripts good enough for defining the test cases or do we want to create some layer of abstraction to push for something more declarative?

Bash is adequate but probably not optimal. We can try exploring golang or python testing frameworks here, maybe?

@doodlesbykumbi (Contributor) commented Mar 10, 2020

Testing

(A) At present an E2E test case exists that exercises the following components:

  1. Helm chart to deploy Conjur OSS
  2. Conjur OSS deployed by (1)
  3. Kubernetes authenticator consumes (2)

The components are deployed using bash scripts. These should be organised into meaningful abstractions that reduce boilerplate and allow for more declarative test scenario definitions.

(B) The types of activity surrounding a test scenario can be broken down into the following:

  1. Deployment of components, carried out on the local terminal and requiring authenticated access to the intended infrastructure
  2. Execution of steps on the infrastructure where any given component is running. This requires some shared state so that components that need to interact know about each other.
  3. Assertions against behavior, by comparing the results of (2) to expectations

Test cases

Test cases are defined in Go. Shell scripts can be executed via Go to exercise the test cases, and any assertions can be carried out in Go.
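
As a minimal sketch of that shape (the script path and expected output below are made-up assumptions, not part of the actual repo):

```go
// Minimal sketch: a Go test that shells out to a bash step and asserts
// on the result. Script path and expected value are assumptions.
package e2e

import (
	"os/exec"
	"strings"
	"testing"
)

func TestFetchSecretViaAPIKey(t *testing.T) {
	// Exercise an imperative bash step from Go so the assertion lives here.
	out, err := exec.Command("bash", "./scripts/fetch_secret.sh").CombinedOutput()
	if err != nil {
		t.Fatalf("fetch_secret.sh failed: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "expected-secret-value") {
		t.Errorf("unexpected script output: %s", out)
	}
}
```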

Shared state

A key-value store exists as the source of truth for values that allow components to interact with one another, e.g. the CONJUR_ACCOUNT value set in (A1) is the same value that's needed in (A3).
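
One possible shape for that store, sketched in Go; it's in-memory here for simplicity, though a file-backed store could expose the same interface:

```go
// Sketch of the shared key-value store. The interface is what matters,
// not the backing.
package e2e

import "sync"

// State is the source of truth for values components share, e.g. the
// CONJUR_ACCOUNT set at deploy time and read by later test steps.
type State struct {
	mu     sync.RWMutex
	values map[string]string
}

func NewState() *State {
	return &State{values: make(map[string]string)}
}

func (s *State) Set(key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.values[key] = value
}

func (s *State) Get(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.values[key]
	return v, ok
}
```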

Logging

Multiple levels of logging are needed to cater to the different use cases for logs.

  1. Information logging tells us at a high level what a particular test run is doing and how it's going. There's no need at this level to show logs from individual commands.
  2. Debug logging provides detailed logs at several layers of granularity that allow a developer to determine the circumstances of a failure.
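
A minimal sketch of those two levels using only Go's standard library (a structured logging package could replace this later):

```go
// Two-level logging sketch. Names and wiring are assumptions.
package e2e

import (
	"io"
	"log"
	"os"
)

// NewLoggers returns an info logger that always prints, and a debug
// logger that only prints when verbose is set (e.g. via a test flag).
// io.Discard requires Go 1.16+.
func NewLoggers(verbose bool) (info, debug *log.Logger) {
	debugOut := io.Discard
	if verbose {
		debugOut = os.Stderr
	}
	info = log.New(os.Stdout, "INFO  ", log.LstdFlags)
	debug = log.New(debugOut, "DEBUG ", log.LstdFlags|log.Lshortfile)
	return info, debug
}
```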

There are a few things missing right now from the E2E test. Right now B1 (deployment of components) is carried out in a pretty imperative way; if I wanted to add some Secretless test cases, it wouldn't be simple.

Ideally I should be able to:

  1. Specify a list of components and their versions
  2. Declare the values each component outputs as a result of being deployed, e.g. CONJUR_APPLIANCE_URL. As part of (1) we should be able to specify any dependencies between component inputs and component outputs.
  3. Write executors for each component such that, given some input, a deployment is carried out and the output is stored in the shared state

With (1), (2), and (3) in place, we can create automation that resolves the dependency graph and runs the executors in an order that makes sense (see the sketch below).
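
To make that concrete, here's a rough sketch of what the executor/resolver automation could look like, reusing the State type sketched above; all names are hypothetical:

```go
// Sketch of the component/executor idea. Each component declares the
// state keys it needs and a Deploy func that returns the keys it
// produces; a simple resolver runs deploys in dependency order.
package e2e

import "fmt"

type ComponentSpec struct {
	Name    string
	Version string
	Needs   []string                                   // state keys required before deploy
	Deploy  func(s *State) (map[string]string, error)  // returns produced outputs
}

// RunAll repeatedly deploys any component whose inputs are satisfied,
// storing outputs in shared state, until all are done or progress stalls.
func RunAll(specs []ComponentSpec, s *State) error {
	pending := append([]ComponentSpec(nil), specs...)
	for len(pending) > 0 {
		progressed := false
		var next []ComponentSpec
		for _, c := range pending {
			if !satisfied(c.Needs, s) {
				next = append(next, c)
				continue
			}
			out, err := c.Deploy(s)
			if err != nil {
				return fmt.Errorf("deploying %s: %w", c.Name, err)
			}
			for k, v := range out {
				s.Set(k, v)
			}
			progressed = true
		}
		if !progressed {
			return fmt.Errorf("unresolvable dependencies among remaining components")
		}
		pending = next
	}
	return nil
}

func satisfied(keys []string, s *State) bool {
	for _, k := range keys {
		if _, ok := s.Get(k); !ok {
			return false
		}
	}
	return true
}
```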

Test cases can use the shared state to execute steps across the components. We'd also need to write utility functions for executing code on any given component.
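
For example, when a component runs as a Kubernetes pod, one such utility could be a thin wrapper over kubectl exec (namespace and pod names are supplied by the caller; this helper is an assumption, not existing code):

```go
// Sketch of a utility for executing a command inside a component's pod.
package e2e

import "os/exec"

// ExecInPod runs a command inside a pod and returns the combined output.
func ExecInPod(namespace, pod string, command ...string) (string, error) {
	args := append([]string{"exec", "-n", namespace, pod, "--"}, command...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}
```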

@izgeri removed this from the "Conjur has an OSS Suite Release" milestone on Mar 12, 2020