
chore: add skeleton for the stack-monitoring tests #113

Closed

Conversation

mdelapenya
Contributor

@mdelapenya mdelapenya commented Apr 24, 2020

What is this PR doing?

It ports the existing Python parity tests for stack monitoring to Go

Why is it important?

We want to add support for Stack Monitoring in this test framework.

How to test these changes?

You could run:

$ cd e2e
$ OP_LOG_LEVEL=INFO go test -v --godog.format junit parity-tests

or use VS Code, which supports debugging:

  • Add breakpoints in the functions you're interested in
  • Open the runner_test.go file so it is the currently selected file in the editor
  • Open Debug panel
  • Select "Parity Tests" in the Run selector

Related issues

Logs



OP_LOG_LEVEL=DEBUG go test -v --godog.format pretty parity-tests
DEBU[0000] Validating required tools: [docker docker-compose] 
DEBU[0000] Binary is present                             binary=docker path=/usr/local/bin/docker
DEBU[0000] Binary is present                             binary=docker-compose path=/usr/local/bin/docker-compose
DEBU[0000] 'op' workdirs created.                        servicesPath=/Users/mdelapenya/.op/compose/services stacksPath=/Users/mdelapenya/.op/compose/stacks
DEBU[0000] Boxed file                                    path=services/apm-server/docker-compose.yml service=apm-server
DEBU[0000] Boxed file                                    path=services/elasticsearch/docker-compose.yml service=elasticsearch
DEBU[0000] Boxed file                                    path=services/kibana/docker-compose.yml service=kibana
DEBU[0000] Boxed file                                    path=services/metricbeat/docker-compose.yml service=metricbeat
DEBU[0000] Boxed file                                    path=services/opbeans-go/docker-compose.yml service=opbeans-go
DEBU[0000] Boxed file                                    path=services/opbeans-java/docker-compose.yml service=opbeans-java
DEBU[0000] Boxed file                                    path=services/vsphere/docker-compose.yml service=vsphere
DEBU[0000] Boxed file                                    path=stacks/metricbeat/docker-compose.yml service=metricbeat
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/apache/docker-compose.yml service=apache
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/ceph/docker-compose.yml service=ceph
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/dropwizard/docker-compose.yml service=dropwizard
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/envoyproxy/docker-compose.yml service=envoyproxy
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/etcd/docker-compose.yml service=etcd
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/haproxy/docker-compose.yml service=haproxy
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/kafka/docker-compose.yml service=kafka
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/metricbeat/docker-compose.yml service=metricbeat
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/mysql/docker-compose.yml service=mysql
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/nats/docker-compose.yml service=nats
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/redis/docker-compose.yml service=redis
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/uwsgi/docker-compose.yml service=uwsgi
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/services/vsphere/docker-compose.yml service=vsphere
DEBU[0000] Workspace file                                path=/Users/mdelapenya/.op/compose/stacks/metricbeat/docker-compose.yml service=metricbeat
INFO[0000] Feature Context found                         modules="[stack-monitoring]" paths="[features/stack-monitoring/parity-tests.feature]"
DEBU[0000] Before StackMonitoring Suite...              
DEBU[0000] Installing elasticsearch monitoring instance 
Feature: Parity Tests
DEBU[0000] Before StackMonitoring Scenario...           
DEBU[0000] Installing elasticsearch                     
DEBU[0000] Enabling legacy collection, sending metrics to the monitoring instance 
DEBU[0000] Running elasticsearch for X seconds (default: 30) to collect monitoring data internally and index it into the Monitoring index for elasticsearch 
DEBU[0000] Stopping elasticsearch                       
DEBU[0000] Downloading sample documents from elasticsearch's monitoring index to a test directory 
DEBU[0000] Disable legacy                               

Scenario Outline: The documents indexed by the legacy collection method are identical in structure to those indexed by Metricbeat collection # features/stack-monitoring/parity-tests.feature:3
Given "" sends metrics to Elasticsearch using the "legacy" collection monitoring method # stack_monitoring_test.go:15 -> *StackMonitoringTestSuite
When "" sends metrics to Elasticsearch using the "metricbeat" collection monitoring method # stack_monitoring_test.go:15 -> *StackMonitoringTestSuite
Then the structure of the documents for the "legacy" and "metricbeat" collection are identical # stack_monitoring_test.go:50 -> *StackMonitoringTestSuite

Examples:
  | product       |
  | elasticsearch |

DEBU[0000] After StackMonitoring Scenario...
DEBU[0000] Before StackMonitoring Scenario...
DEBU[0000] Installing kibana
DEBU[0000] Enabling legacy collection, sending metrics to the monitoring instance
DEBU[0000] Running kibana for X seconds (default: 30) to collect monitoring data internally and index it into the Monitoring index for kibana
DEBU[0000] Stopping kibana
DEBU[0000] Downloading sample documents from kibana's monitoring index to a test directory
DEBU[0000] Disable legacy
| kibana |
DEBU[0000] After StackMonitoring Scenario...
DEBU[0000] Before StackMonitoring Scenario...
DEBU[0000] Installing beats
DEBU[0000] Enabling legacy collection, sending metrics to the monitoring instance
DEBU[0000] Running beats for X seconds (default: 30) to collect monitoring data internally and index it into the Monitoring index for beats
DEBU[0000] Stopping beats
DEBU[0000] Downloading sample documents from beats's monitoring index to a test directory
DEBU[0000] Disable legacy
| beats |
DEBU[0000] After StackMonitoring Scenario...
DEBU[0000] Before StackMonitoring Scenario...
DEBU[0000] Installing logstash
DEBU[0000] Enabling legacy collection, sending metrics to the monitoring instance
DEBU[0000] Running logstash for X seconds (default: 30) to collect monitoring data internally and index it into the Monitoring index for logstash
DEBU[0000] Stopping logstash
DEBU[0000] Downloading sample documents from logstash's monitoring index to a test directory
DEBU[0000] Disable legacy
| logstash |
DEBU[0000] After StackMonitoring Scenario...
DEBU[0000] After StackMonitoring Suite...
DEBU[0000] Destroying elasticsearch monitoring instance, including attached services

4 scenarios (4 pending)
12 steps (4 pending, 8 skipped)
1.28132ms
testing: warning: no tests to run
PASS
ok github.com/elastic/metricbeat-tests-poc/e2e 0.790s
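
The Gherkin steps in the log above are bound to Go functions in stack_monitoring_test.go. As a hedged illustration only (the suite type, regular expressions, and method names below are assumptions derived from the log, not the PR's exact code, and the pre-v0.10 godog Suite API is assumed), such bindings usually look like this:

package e2e

import "github.com/cucumber/godog"

// StackMonitoringTestSuite holds per-scenario state for the parity tests.
type StackMonitoringTestSuite struct {
    collectionMethod string // "legacy" or "metricbeat"
}

func (sm *StackMonitoringTestSuite) sendsMetricsToElasticsearch(product, method string) error {
    // Install the product, enable the given collection method, wait for
    // monitoring data, then download sample documents for later comparison.
    sm.collectionMethod = method
    return godog.ErrPending // step bodies are still pending, as the log shows
}

func (sm *StackMonitoringTestSuite) checkDocumentsStructure(legacy, metricbeat string) error {
    // Compare the structure of the documents captured for both methods.
    return godog.ErrPending
}

func StackMonitoringFeatureContext(s *godog.Suite) {
    testSuite := &StackMonitoringTestSuite{}

    s.Step(`^"([^"]*)" sends metrics to Elasticsearch using the "([^"]*)" collection monitoring method$`,
        testSuite.sendsMetricsToElasticsearch)
    s.Step(`^the structure of the documents for the "([^"]*)" and "([^"]*)" collection are identical$`,
        testSuite.checkDocumentsStructure)
}

That also explains the summary above: while the step bodies return godog.ErrPending, godog marks the first step of each scenario as pending and skips the rest, giving 4 pending and 8 skipped steps.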

@mdelapenya mdelapenya self-assigned this Apr 24, 2020
@mdelapenya mdelapenya requested review from a team, aag6z, ycombinator and liza-mae and removed request for aag6z April 24, 2020 20:50
@@ -66,6 +66,12 @@ var supportedProducts = map[string]*contextMetadata{
},
modules: []string{"metricbeat"},
},
"parity-tests": &contextMetadata{
Contributor Author


For newcomers, long story about this map here: #104

It needs an elasticsearch monitoring instance, and we will create it using the life cycle hooks (Befores and Afters).
This will allow us to run multiple elasticsearch instances in the same test, as we do for monitoring.
We don't need them for the purpose of the stack monitoring parity tests.
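
As a hedged sketch of what that life cycle wiring can look like with the pre-v0.10 godog Suite API (the function name and hook bodies are illustrative placeholders, not the framework's actual helpers):

package e2e

import "github.com/cucumber/godog"

// registerStackMonitoringHooks is an illustrative name showing where the
// monitoring instance and the product under test would be created and destroyed.
func registerStackMonitoringHooks(s *godog.Suite) {
    s.BeforeSuite(func() {
        // install the elasticsearch monitoring instance once for the whole suite
    })
    s.AfterSuite(func() {
        // destroy the monitoring instance, including attached services
    })
    s.BeforeScenario(func(interface{}) {
        // install the product under test (elasticsearch, kibana, beats, logstash)
    })
    s.AfterScenario(func(interface{}, error) {
        // stop the product so each scenario starts from a clean state
    })
}

Creating the monitoring instance once per suite and the product once per scenario matches the Before/After lines in the log output above.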
@ycombinator
Contributor

ycombinator commented May 26, 2020

Thanks for scoping down this PR to just Elasticsearch parity tests, @mdelapenya.

As stated in #94, the goal of this PR was:

mainly to see if there are any performance or maintainability gains in the process [of porting over the current parity tests to the E2E framework].

Now that this PR is only dealing with Elasticsearch parity tests, I was able to do an apples-to-apples comparison by running the parity tests as implemented by this PR and comparing the results with the existing parity tests. In both cases I started out with completely clean environments (no pre-built Docker images to aid this PR and similarly no VM images or running VMs to aid the existing parity tests).

This PR

$ time OP_LOG_LEVEL=INFO go test -v --godog.format junit parity-tests
...
PASS
ok      github.com/elastic/e2e-testing/e2e      274.351s

Existing parity tests

$ time AIT_STACK_PRODUCT=elasticsearch ./playbooks/monitoring/buildenv.sh
...
PLAY RECAP *********************************************************************
aithost                    : ok=157  changed=61   unreachable=0    failed=0    skipped=37   rescued=0    ignored=0

[2020-05-26 16:59:04] [INFO] Run script: /Users/shaunak/development/github/elastic-stack-testing/scripts/shell/vagrant_vm.sh
mkdir: /Users/shaunak/development/github/elastic-stack-testing/ait_workspace/7-7-1-6d314aa6_os: File exists
[2020-05-26 16:59:04] [INFO] Using Vagrantfile: /Users/shaunak/development/github/elastic-stack-testing/ait_workspace/7-7-1-6d314aa6_os/Vagrantfile
[2020-05-26 16:59:04] [WARNING] No vagrant vm action specified
[2020-05-26 16:59:04] [INFO] Deactivate python venv
AIT_STACK_PRODUCT=elasticsearch ./playbooks/monitoring/buildenv.sh  52.54s user 20.88s system 26% cpu 4:32.60 total

As you can see, the existing parity tests took 4 minutes and 32 seconds while the ones in this PR took 274s == 4 minutes and 34 seconds — almost exactly the same amount of time!

Given this, we can conclude that there doesn't seem to be any major performance benefit in porting over the existing parity tests to the E2E framework.

However, performance was only one of the goals stated in #94. The other one was maintainability. Given that the @elastic/stack-monitoring-ui team is currently maintaining these parity tests, I will defer to them on whether they think porting them over to the E2E framework (as illustrated in this PR) would be more maintainable for them in the long run over the current implementation.

@mdelapenya
Contributor Author

However, performance was only one of the goals stated in #94. The other one was maintainability. Given that the @elastic/stack-monitoring-ui team is currently maintaining these parity tests, I will defer to them on whether they think porting them over to the E2E framework (as illustrated in this PR) would be more maintainable for them in the long run over the current implementation.

About performance, I'd run benchmarks for both implementations to verify it in a controlled environment to avoid getting simplistic results.

As an example, on my local machine (MacBook 13", 16 GB RAM, 2.7 GHz Intel Core i7, SSD) it takes:

ok      github.com/elastic/e2e-testing/e2e      195.666s
OP_LOG_LEVEL=INFO go test -v --godog.format junit parity-tests  6.06s user 2.75s system 4% cpu 3:18.55 total

which is faster than the above numbers (curious about the CPU usage: 4% vs 26%).

As a side note, the tests for the 4 products (elasticsearch, kibana, logstash and filebeat) complete fairly fast:

FAIL    github.com/elastic/e2e-testing/e2e      576.602s
OP_LOG_LEVEL=INFO go test -v --godog.format junit parity-tests  14.35s user 4.28s system 3% cpu 9:38.84 total

Talking about maintainability, I'm going to list the things that I (and the software industry, via the ISO standards) consider important, based on working on both projects: creating this framework and reading the current parity tests. Hope I don't bore anybody here 😄

Testability

  • This framework:
    • Unit testing possible using programming language primitives (go test)
    • Language compiler supporting type checking and code style
    • JSON processing relying on a third-party library, plus a few helper methods
    • Test life cycle hooks (befores, afters) provided by the test libraries, making it possible to create and destroy test resources with them.
  • Current parity tests:
    • Linting YAML is difficult (indentation can be hell)
    • Unit tests for Ansible are possible (e.g. Molecule), although not present
    • Native JSON support in Python is great

Analyzability

  • This framework:
    • Golang as the sole language for control flow
    • Golang code organised in a single package, with files per functionality
    • Configuration variables defined in code, with the option to override them from the shell (e.g. OP_ variables); see the sketch after this list.
  • Current parity tests:
    • Ansible for control flow
    • YAML for Ansible role and playbook definition as main language
    • Ansible layout includes roles, playbooks, and dependent roles
    • Configuration variables defined at the Ansible level in multiple places: group_vars, host_vars, each role, playbook, shell (e.g. AIT_ variables).
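
To illustrate the code-level defaults with shell overrides mentioned above for this framework, here is a hedged sketch; the helper name is invented, and only OP_LOG_LEVEL is taken from this PR's logs:

package config

import "os"

// envOrDefault returns the value of an environment variable, falling back to a
// code-level default, so settings like OP_LOG_LEVEL can be overridden from the
// shell (e.g. OP_LOG_LEVEL=DEBUG go test ...).
func envOrDefault(key, fallback string) string {
    if value, ok := os.LookupEnv(key); ok && value != "" {
        return value
    }
    return fallback
}

var logLevel = envOrDefault("OP_LOG_LEVEL", "INFO")

The same pattern would apply to any other OP_-prefixed setting.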

Functionality

  • This framework:
    • Suitability:
      • It uses a high-level, general-purpose programming language (Golang), which seems more appropriate for creating assertions
  • Current parity tests:
    • Suitability:
      • It uses Ansible, which relies on declarative YAML to define the state of the software to be installed on a target machine. It's possible to create assertions in the YAML files.

Usability

  • This framework:
    • Learnability:
      • Golang is relatively simple to learn and understand, so any developer coming from a backend language will find most familiar capabilities present in the language.
  • Current parity tests:
    • Learnability:
      • YAML as declarative syntax is easy to learn, although difficult to maintain

Installability

  • This framework needs:
    • Golang 1.13
    • Docker
    • Docker Compose
  • Current parity tests need:
    • Python 3.6 (it does not run locally on my Mac with Python 3.7)
    • Virtualenv
    • Ansible
    • Vagrant
    • VirtualBox
    • Bash (to run shell scripts)

@ycombinator
Contributor

About performance, I'd run benchmarks for both implementations to verify it in a controlled environment to avoid getting simplistic results.

Isn't that what I did? Can you explain why your environment should be considered more controlled than mine?

Also, your results prove that the new parity tests run faster in your environment than mine. But the question is about the relative performance between the existing and new parity tests, right?

In my comment I ran both implementations (existing and new) and posted both their results. However, so far you have only posted the results of the new parity tests in your comment. For an apples to apples comparison, shouldn't you also run the existing parity tests in your environment and post those results as well?

As for maintainability, thanks for listing out the various factors that go into it. We should indeed consider these in deciding which implementation is more maintainable long term. I have no horse in this race as I'm no longer the primary maintainer of these tests. So I'll reiterate what I said in my previous comment: it's up to the @elastic/stack-monitoring-ui team to weigh in on this aspect as they are primarily maintaining the parity tests.

I think it's worth remembering that we already have stack monitoring parity tests implemented and running in CI for well over a year at this point. They're stable and have been catching bugs for us already. We made an investment in them to get them to be stable and useful, which took a few months, and since then that investment has been paying off with very little upkeep. The framework they are built on is being actively maintained and is well supported. So there is no inherent reason to abandon this investment now.

Essentially, we've paid off the costs and are reaping the benefits. Whereas with a new implementation we're going to have to put in some work first before we start seeing the same level of stability and usefulness. For example, right now my involvement in maintaining the existing parity tests is pretty much zero. By comparison, I've had to be quite involved in helping debug the new implementation and I suspect I would be for at least some more time.

Basically, in my mind, there has to be enough of a benefit to move away from the existing implementation that's stable, useful, and well-supported; otherwise I'm not sure it's worth it.

@chrisronline

I wasn't really aware of this effort, but I'm having a hard time understanding why this is happening. I don't think we (the stack monitoring team) have any issues with these tests as is.

Is there any background I can read about this effort?

@mdelapenya
Contributor Author

mdelapenya commented May 27, 2020

Sorry if I gave the impression that I ran the tests in a controlled environment; my intention was to clarify that neither mine nor yours is controlled 😊 I only posted the results as a demonstration of that. I can picture a bare-metal, dedicated machine on CI running the different runs to accomplish this performance task.

Besides that, I tried to run the existing ones without success; that's why I added the "it does not run locally on my Mac with Python 3.7" comment. I'd like to run them today, after installing all the dependent tools and versions.

Please don't get me wrong, I'm not saying the parity tests have no value and must be replaced. I totally value what they provide in detecting changes between both JSON schemas. The month of work in this PR tries to address the initial request (#94) to check whether this new dockerised, BDD-based framework is able to mimic what the parity tests do, and that's what we did: bring them to the framework, where we can run them with the frequency we need (with every PR or merge, daily, etc.).

TBH I was simply surprised that measuring performance in a few runs would determine the performance mark for both use cases; that's why I brought in the maintainability aspects to consider when making a final decision.

@mdelapenya
Contributor Author

I haven't mentioned it, but any initiative from our team (Eng. Productivity) aims to assist where needed: if we see no improvements in this new approach, that's totally OK 😊 so closing this PR would be fine too.

@ycombinator
Contributor

Sorry if I gave the impression that I ran the tests in a controlled environment; my intention was to clarify that neither mine nor yours is controlled 😊 I only posted the results as a demonstration of that. I can picture a bare-metal, dedicated machine on CI running the different runs to accomplish this performance task.

Ah, thanks for clarifying. I agree with you that a controlled environment is ideal for benchmarking. Given that we're more interested in the relative comparison between two implementations I think, as long as they run in the same environment, it's good enough — not perfect, but good enough IMO.

BTW, it is for this exact reason that I avoided comparing the performance results of the existing and new implementations in a CI environment — I would consider a CI environment to be even less controlled than our local machines. So I'm right there with you on how the environment being controlled vs. not can influence the numbers.

I tried to run the existing ones without success; that's why I added the "it does not run locally on my Mac with Python 3.7" comment. I'd like to run them today, after installing all the dependent tools and versions.

👍 Please do. We have one comparative performance analysis done on my system. It would be great to have another one done on yours as well.

The month of work in this PR tries to address the initial request (#94) to check whether this new dockerised, BDD-based framework is able to mimic what the parity tests do, and that's what we did.

Indeed, and I'm grateful for the work you've done in bringing this PR to where it is. Without it we couldn't be having the discussion we're having right now about performance and maintainability, which is what #94 was about. Quoting from it:

It would be great to try and port over one of these product's parity tests over to the e2e-testing framework, mainly to see if there are any performance or maintainability gains in the process.

TBH I was simply surprised that measuring performance in a few runs would determine the performance mark for both use cases; that's why I brought in the maintainability aspects to consider when making a final decision.

This is a fair criticism. I will re-run both implementations locally multiple times (resetting state by removing all docker images and VMs+images and making sure the CPU usage drops to a low stable rate each time). I'll post the results of each run here. If you do the same in your environment and post the results, hopefully we can get a fairer picture of relative performance.

@ycombinator
Contributor

ycombinator commented May 27, 2020

I wasn't really aware of this effort, but I'm having a hard time understanding why this is happening. I don't think we (the stack monitoring team) have any issues with these tests as is.

Is there any background I can read about this effort?

@chrisronline This effort is an exploration of whether there might be benefits (performance and maintainability, specifically) to migrating the existing parity tests over to the E2E testing framework being developed by the Observability Engineering Productivity team. I requested this exploration in #94. My thinking was/is that if there are significant benefits in either or both areas (performance and/or maintainability), it would be worth moving the tests over. Now, thanks to @mdelapenya's efforts, we have this PR so we can do a concrete comparison between the existing and new implementations, rather than talking in abstract terms.

As the primary maintainer of the existing tests along with @igoristic, WDYT?

@mdelapenya
Contributor Author

mdelapenya commented May 27, 2020

@ycombinator I have found some errors while preparing the local environment on Mac, so I created this issue in the repo: https://github.com/elastic/elastic-stack-testing/issues/585

@chrisronline

It sounds like the ask here is for @igoristic and me to pull this down, run the tests, and get a sense of how they work and how we can debug them. We can do that and report back.

@ycombinator
Contributor

@chrisronline Yes, that would be an ideal way to get a sense of maintainability. Thanks!

@mdelapenya
Contributor Author

Hey @ycombinator, I managed to run the parity tests locally. These are the time results:

1st run:

AIT_STACK_PRODUCT=elasticsearch ./playbooks/monitoring/buildenv.sh  58.07s user 87.49s system 48% cpu 5:00.84 total

2nd run:

AIT_STACK_PRODUCT=elasticsearch ./playbooks/monitoring/buildenv.sh  55.85s user 84.51s system 48% cpu 4:49.31 total

3rd run:

AIT_STACK_PRODUCT=elasticsearch ./playbooks/monitoring/buildenv.sh  55.13s user 82.31s system 48% cpu 4:45.46 total

@ycombinator
Contributor

ycombinator commented May 27, 2020

Thanks @mdelapenya. Those are about the same numbers I saw for my single run for the existing parity tests. I still need to run them again multiple times to get more results; just haven't gotten to it.

I'm curious why the new tests (this PR) ran so much faster in your environment than mine when the existing tests took about the same time. Did you clear out all your docker images before you ran the new tests? I did that with docker system prune -af before I ran the new tests on my system. Just want to make sure we're both using the same steps for testing performance.

@mdelapenya
Contributor Author

I'm just running these tests after removing the elasticsearch and metricbeat images from the Docker host.

To speed things up on CI, those images can be cached in the CI workers, as we already do for other projects

@mdelapenya
Contributor Author

mdelapenya commented May 27, 2020

New results! 🍞

Removing docker images, so both elasticsearch and metricbeat are pulled during the process:

1st run

OP_LOG_LEVEL=INFO go test -v --godog.format pretty parity-tests  6.30s user 3.19s system 3% cpu 4:25.09 total

2nd run

OP_LOG_LEVEL=INFO go test -v --godog.format pretty parity-tests  6.60s user 3.22s system 3% cpu 4:36.23 total

One of the benefits of Docker is this reusability of the Docker cache, so I would like to keep these results as valid, because it's how Docker works. If the existing framework is not caching resources between runs (it downloads a TAR file from the network) then maybe that could be a good candidate for improvement.

@ycombinator
Contributor

ycombinator commented May 27, 2020

One of the benefits of Docker is this reusability of the Docker cache, so I would like to keep these results as valid, because it's how Docker works. If the existing framework is not caching resources between runs (it downloads a TAR file from the network) then maybe that could be a good candidate for improvement.

++ agreed!

I don't know too much about our CI environments so I might need some education here. Given that we use ephemeral workers for CI, does that impact cacheability? Specifically, is it possible to cache Docker images (or other artifacts) on each worker and have them preserved between two uses of the same worker?

@mdelapenya
Contributor Author

Yes! We can bake Packer images with some Docker images already present on the host, under /var/lib/docker. As an example, see https://github.com/elastic/beats/blob/master/.ci/packer_cache.sh

We also generate most of the Beats integrations Docker images daily, and push them to our own registry, to avoid building them on each CI build (See https://apm-ci.elastic.co/blue/organizations/jenkins/beats%2Fbeats-docker-images-pipeline/detail/beats-docker-images-pipeline/200/pipeline/57).

But this is not our case, as we consume already existing Docker images. Nevertheless, I could imagine creating images for a specific PR on metricbeat, pushing them to our registry, and running these tests with them. Everything automated, of course.

@ycombinator
Contributor

Gotcha, thanks.

In that case I agree that there will be a performance gain from using the E2E testing framework for the parity tests (this PR), which use Docker images over the existing implementation. Even if we were to try and cache tar files for the existing implementation, that counts towards extra effort in that framework because it's not something we get "for free" already. Whereas it sounds like the analogous effort for caching Docker images on CI workers has already been made so we will get it "for free".

Concretely, we're looking at 195.666s with the E2E testing framework (taken from this comment) vs. 291.333s (taken as the average of 3 values from this comment). That's a performance gain of 95s or 32%, which is not insignificant IMO.

So purely from a performance perspective, I'm now 👍 to migrate the tests over to the E2E framework. I do think we should wait to hear @chrisronline and @igoristic's thoughts on the maintainability perspective since they are the ones maintaining the existing tests and would have to maintain the new ones if we migrate.

@mdelapenya
Contributor Author

I'm still curious about the CPU usage 🤔 but I guess it's a Docker vs. Vagrant thing. I did not expect such a difference!

Docker: 3%
Vagrant: 48%

@liza-mae

Nice performance analysis. I am planning to remove the use of Vagrant in CI. Ansible can be used directly, with docker or a cloud provider.

@mdelapenya
Contributor Author

Nice performance analysis. I am planning to remove the use of Vagrant in CI. Ansible can be used directly, with docker or a cloud provider.

We use Ansible as a cloud provisioner, replacing Terraform, and it works pretty well!

@mdelapenya
Contributor Author

@chrisronline @igoristic @ycombinator Please let us know if you have any questions about this PR 🙏 we are glad to help!

@mdelapenya
Contributor Author

Hi 👋

Would you mind if I close this one until a final decision is taken? You could reopen it at any time if/when needed.

Thanks in advance!

@chrisronline

Sounds good to me. We are a bit swamped and won't be able to look at this in the near future

@mdelapenya
Contributor Author

Sounds good to me. We are a bit swamped and won't be able to look at this in the near future

No worries! We have the commits here, ready to come back to whenever there is more bandwidth.

Thank you!

@mdelapenya mdelapenya closed this Jun 17, 2020
Development

Successfully merging this pull request may close these issues.

Port over one of the stack monitoring parity tests