Copy the conscious-risks ecosystem into 007; run it as part of the test suite #498

Open
masak opened this issue Apr 18, 2019 · 3 comments
masak (Owner) commented Apr 18, 2019

Objective: to make sure the conscious-risks code (AKA "007's entire ecosystem") keeps running even in the face of changes to the language.

The two smaller modules should be no problem to run straight off. Maybe we can even contribute some test cases back. 😉

It strikes me now that they would also be a good playground for trying out both #324 and #417 and getting early feedback.

The bigger game might take some tweaking to run in a test harness. We'll want to remove the intentional delays-by-looping, for example. Most of the other output we should be able to test, though. Again, let's err on the side of too many tests.

I'm thinking we leave the code as-is, but run it through a filter of some kind before running tests on it. A regex/subst-based filter (a "source filter") is fine in the short run, but of course doing this with some hypothetical Q transform (maybe based on XSLT) would be cooler and better for everyone. In either case, every individual transform we apply to the source should fail the entire test file if it doesn't match and substitute the way it expects; most likely, a non-match is a sign the "original" has changed and the transforms need to be updated.
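
Sketching what I mean (purely a sketch; the sub name, the transform list shape, and the example delay-loop pattern are all made up to show the fail-on-mismatch behavior):

```raku
# Hypothetical "source filter": every transform must match,
# or the whole test file fails loudly.
sub apply-transforms(Str $source is copy, @transforms --> Str) {
    for @transforms -> %t {
        # A transform that no longer matches probably means the original
        # ecosystem code changed; die instead of silently skipping it.
        die "Transform '%t<name>' no longer matches; update it?"
            unless $source ~~ %t<pattern>;
        $source .= subst(%t<pattern>, %t<replacement>, :g);
    }
    return $source;
}

my @transforms =
    %( name        => 'strip-delay-loop',
       # made-up stand-in for the intentional delay-by-looping idiom
       pattern     => rx/ 'for' \s+ \d+ \s* '{' \s* '}' /,
       replacement => '' ),
    ;

# my $filtered = apply-transforms($original-source, @transforms);
```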

masak (Owner) commented Jul 21, 2019

I've started in on this. Early progress can be tracked in https://github.com/masak/007/tree/masak/add-community-code.

I haven't really gotten stuck as such, but I have realized along the way that, since the tests run inside the nested 007 runloop, the only way successes and failures can register in the outer (Test.pm6) environment is for something to parse the captured 007 output and report the inner results as outer ones. The two test environments can then run in tandem.
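
Roughly, I imagine a bridge like this (a sketch only; it assumes the inner tests emit TAP-style ok/not ok lines, and the sub name is made up):

```raku
use Test;

# Hypothetical sketch: mirror TAP-style lines captured from the inner
# 007 runloop as results in the outer Test.pm6 environment.
sub report-inner-results(Str $captured-output) {
    for $captured-output.lines -> $line {
        # Match lines like "ok 3 - guards react" or "not ok 4 - door opens".
        if $line ~~ / ^ $<status>=['not ok' | 'ok'] \s+ \d+
                        [ \s* '-' \s* $<desc>=(.+) ]? $ / {
            ok $<status> eq 'ok', ~($<desc> // 'inner 007 test');
        }
    }
}
```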

Just reporting on this. It's a "fun" hurdle I didn't expect to have at all.

masak (Owner) commented Sep 21, 2019

I need to look at this again. It would be really helpful for those modules to be tracked in the Alma repo. I still need to do what the above comment outlines.

@claes-magnus reports that the modules currently don't work; likely the Alma rename busted something up. 😞 Need to look at that too, ASAP.

masak (Owner) commented Sep 25, 2019

Created #548 to track the breakage, since it's outside the scope of this issue.
