
Test DSL #3

Open
victorb opened this issue Dec 21, 2016 · 2 comments

Comments

@victorb
Collaborator

victorb commented Dec 21, 2016

One of my main areas of concern is having a nice testing DSL that enables reuse and readability, so it'll be easy to add new tests and refactor existing ones.

Currently, it looks something like this:

name: Simple Add and Cat
config:
  nodes: 2
  times: 10
steps:
  - name: Add file
    on_node: 1
    cmd: head -c 10 /dev/urandom | base64 > /tmp/file.txt && cat /tmp/file.txt && ipfs add -q /tmp/file.txt
    outputs: 
    - line: 0
      save_to: FILE
    - line: 1
      save_to: HASH
  - name: Cat file
    on_node: 2
    inputs:
      - FILE
      - HASH
    cmd: ipfs cat $HASH
    assertions:
    - line: 0
      should_be_equal_to: FILE
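
For reference, here's a rough sketch of the Go types this format could map onto (a sketch only, assuming gopkg.in/yaml.v2; the field names are illustrative, not necessarily what ends up in the repo):

// Sketch of Go types the YAML above could unmarshal into, e.g. with
// gopkg.in/yaml.v2. Names are illustrative only.
package dsl

type Test struct {
    Name   string `yaml:"name"`
    Config Config `yaml:"config"`
    Steps  []Step `yaml:"steps"`
}

type Config struct {
    Nodes int `yaml:"nodes"`
    Times int `yaml:"times"`
}

type Step struct {
    Name       string      `yaml:"name"`
    OnNode     int         `yaml:"on_node"`
    Cmd        string      `yaml:"cmd"`
    Inputs     []string    `yaml:"inputs"`
    Outputs    []Output    `yaml:"outputs"`
    Assertions []Assertion `yaml:"assertions"`
}

type Output struct {
    Line   int    `yaml:"line"`
    SaveTo string `yaml:"save_to"`
}

type Assertion struct {
    Line            int    `yaml:"line"`
    ShouldBeEqualTo string `yaml:"should_be_equal_to"`
}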

I'm in a rush right now, so I'll leave this here and write my thoughts when I can.

@FrankPetrilli
Collaborator

So I've dug into this project a bit and introduced a few things in the process of implementing tests.

First off, the new tests:

  • Failing cat because the previous node removed its pin
  • Transitive property (Add on 1, pin on 2, pin rm on 1 and GC, cat on 3)
  • Failing cat because the previous node has been destroyed (I'm working on moving away from exec.Command, so this will be committed with some other changes, as I'd like to stick to the Kubernetes Go APIs for future development)

I've also made a few tweaks to the existing tests to make them fit the changes discussed below (primarily, adding timeouts and expected failures).

So, the DSL-related changes:

  • I found it helpful to be able to time out a command. For example, when the original node with the data has disappeared and nobody else has pinned the hash, the command will just sit there for a long time. So I introduced a YAML option that times out a task after X seconds.
  • Using those timeouts, I then created an "expected" section. For example, in a test case where the node containing the data disappears, we expect the cat command to time out as many times as we ran the test. With the current DSL, we can't tell whether a failure or timeout is good or bad.
  • Using that expected section, at the end of test execution I display the outcome, compare the actual outcome against the expected one, and call os.Exit() from main with success or failure.
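
To make that concrete, here's a rough sketch of how the timeout and expected-outcome check could fit together; this is not the actual implementation, and the command execution and names here are placeholders:

// Rough sketch of the timeout / expected-outcome flow; not the actual
// implementation, and the names here are placeholders.
package main

import (
    "context"
    "fmt"
    "os"
    "os/exec"
    "time"
)

type Summary struct{ Successes, Failures, Timeouts int }

// runWithTimeout runs a shell command, killing it after `timeout`
// (a timeout of 0 means no limit), and classifies the result.
func runWithTimeout(cmd string, timeout time.Duration) string {
    ctx := context.Background()
    if timeout > 0 {
        var cancel context.CancelFunc
        ctx, cancel = context.WithTimeout(ctx, timeout)
        defer cancel()
    }
    err := exec.CommandContext(ctx, "sh", "-c", cmd).Run()
    switch {
    case ctx.Err() == context.DeadlineExceeded:
        return "timeout"
    case err != nil:
        return "failure"
    default:
        return "success"
    }
}

func main() {
    expected := Summary{Successes: 30, Failures: 0, Timeouts: 0}
    actual := Summary{}

    // ... run every step `times` times, bumping actual.Successes,
    // actual.Failures or actual.Timeouts based on runWithTimeout ...

    fmt.Printf("expected %+v, got %+v\n", expected, actual)
    if actual != expected {
        os.Exit(1) // outcome didn't match the expected section
    }
}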

I'll be adding an issue or two for non-DSL-related changes as well.

@victorbjelkholm, I wanted to get your thoughts on these changes before submitting a PR. Do you have any concerns about the changes I've proposed so far?

@FrankPetrilli
Collaborator

FrankPetrilli commented Jan 15, 2017

I've also found it necessary to implement a parallel test runner and matching DSL support to meet the requirements in @whyrusleeping's issue 211, "Swarm downloading a small file". At this point, I've kept your convention of on_node and added a YAML attribute, end_node, which invokes the parallel test runner.

New DSL example:

name: Swarm downloading a small file
config:
  nodes: 11
  times: 3
  expected:
      successes: 30
      failures: 0
      timeouts: 0
steps:
  - name: Create 5MB File
    on_node: 1
    cmd: head -c 5000000 /dev/urandom | base64 > /tmp/file.txt && md5sum /tmp/file.txt | cut -d ' ' -f 1 && ipfs add -q /tmp/file.txt
    timeout: 0
    outputs:
    - line: 0
      save_to: FILE
    - line: 1
      save_to: HASH
  - name: Cat file on node 2-11
    on_node: 2
    end_node: 11
    inputs:
      - FILE
      - HASH
    cmd: ipfs cat $HASH | md5sum | cut -d ' ' -f 1
    timeout: 10
    assertions:
    - line: 0
      should_be_equal_to: FILE
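
For the runner side, here's a rough sketch (hypothetical function names, not the real code) of how end_node could fan a step out over a range of nodes in parallel with goroutines:

// Sketch only (names are hypothetical): fanning a step out across
// nodes on_node..end_node in parallel, as end_node would do.
package main

import (
    "fmt"
    "sync"
)

// runStepOnNode stands in for whatever executes the step's cmd on one node.
func runStepOnNode(node int) string {
    return fmt.Sprintf("node %d: ok", node)
}

// runStep runs a step on every node from onNode to endNode concurrently
// and collects the results once all goroutines have finished.
func runStep(onNode, endNode int) {
    if endNode < onNode {
        endNode = onNode // no end_node: behave like a single-node step
    }
    var wg sync.WaitGroup
    results := make(chan string, endNode-onNode+1)
    for n := onNode; n <= endNode; n++ {
        wg.Add(1)
        go func(node int) {
            defer wg.Done()
            results <- runStepOnNode(node)
        }(n)
    }
    wg.Wait()
    close(results)
    for r := range results {
        fmt.Println(r)
    }
}

func main() {
    runStep(2, 11) // the "Cat file on node 2-11" step above
}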
