
Proposal for Elm Analyzer #12

Merged — 26 commits, Feb 28, 2022

Conversation
Conversation

jiegillet
Contributor

@jiegillet jiegillet commented Jan 16, 2022

Dear @mpizenberg,

Please find here a draft proposal for what an Elm Analyzer tool could look like.
Sorry for dumping it out of the blue without any prior discussion, I always feel uneasy discussing projects I'm not sure I can complete, and by the time I have a working proof of concept, the discussion might as well happen around some code.

Quick backstory: I have a lot of experience working on the Elixir Analyzer, building analyses but also developing analyzer tools. Elixir has a lot of different checks: 36 exercise-specific checks (almost every concept exercise plus a bunch of practice exercises) and 18 common checks (checking things like variable casing, indentation, anti-patterns...). I have been thinking about transposing that experience to Elm for a while.

Basically, this proposal is built around jfmengels/elm-review, which I learned for this project and which seems very well suited for the job. elm-review relies on stil4m/elm-syntax to parse Elm code into an AST and provides a set of tools for analyzing the code. I found it very nice to work with, and it has a lot of potential: I implemented a couple of proof-of-concept rules for two-fer (checking for the presence of a type signature and checking for the use of withDefault). There are also many pre-made rules publicly available.
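
As a rough illustration, a type-signature check along those lines could look something like this (a minimal sketch; the rule name and messages are made up, the API is from jfmengels/elm-review 2.x):

```elm
module TwoFer exposing (hasSignature)

-- Sketch only: flag top-level functions that lack a type signature.

import Elm.Syntax.Declaration exposing (Declaration(..))
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule exposing (Error, Rule)


hasSignature : Rule
hasSignature =
    Rule.newModuleRuleSchema "two-fer" ()
        |> Rule.withSimpleDeclarationVisitor declarationVisitor
        |> Rule.fromModuleRuleSchema


declarationVisitor : Node Declaration -> List (Error {})
declarationVisitor node =
    case Node.value node of
        FunctionDeclaration { signature } ->
            if signature == Nothing then
                [ Rule.error
                    { message = "Top-level functions should have a type signature"
                    , details = [ "Signatures document intent and help the compiler." ]
                    }
                    (Node.range node)
                ]

            else
                []

        _ ->
            []
```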

You should be able to run the analysis with bin/build.sh then bin/run.sh two-fer test_data/two-fer test_data/two-fer, assuming you have elm-review installed.

One difficulty is running a specific set of rules for a specific exercise but no more. One hack I found: you can name each rule after a string (I used the slug), and in the CLI you can use --rules to list the rules you want. Running common rules this way might get clunky, and I have not yet worked out how to deal with exercises that don't have rules yet.

Another difficulty is that elm-review only allows exporting errors as strings (which makes sense for a CLI), so in order to gather the errors and do some analysis after the review, I've had to encode a record as a JSON string and decode it later. It's a bit clunky but it works. elm-review errors show the part of the source code that triggered the check, but I haven't used that feature yet (Elixir doesn't do that, but there is no reason we can't later on).
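
The encoding workaround is roughly this (a sketch with illustrative field values; elm-review's Rule.error only carries strings, so the structured comment is smuggled through the message):

```elm
-- Sketch of the workaround: serialize a structured comment into the
-- error message string, to be decoded again after elm-review runs.
import Json.Encode as Encode
import Review.Rule as Rule


commentError range =
    Rule.error
        { message =
            Encode.encode 0
                (Encode.object
                    [ ( "comment", Encode.string "elm.two-fer.use_withDefault" )
                    , ( "type", Encode.string "actionable" )
                    ]
                )
        , details = [ "internal: structured comment encoded as JSON" ]
        }
        range
```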

Everything is still very malleable, there are no tests nor error-handling code, and my bash scripts are terrible, but I look forward to discussing ideas. I'm also happy to discuss on Slack.

@mpizenberg
Member

That's awesome @jiegillet ! Indeed, elm-review is very much the tool for the job, or at least the most suited one.

I'm currently quite busy finishing another side project (improving the dependency solver of elm-test to make it compatible with lamdera) + I have to focus a bit on other personal stuff + I'm soon finishing my current work contract so I have to look for something else too ^^. All that just to say, I may not have much time to review this shortly, and I haven't used elm-review a lot. But I wanted to answer quickly anyway to say that I'm very much in favor of the approach you took.

Side remarks so you can get help even if I'm not very reactive: @jfmengels (the elm-review author) is very active in the elm community and often excited to see how other people are using elm-review. He should have good ideas on how to improve the current blockers (JSON output, applying shared rules + specific rules depending on the module).

There are also many other very active elm-review users and rule authors in the elm-review channel of the elm slack. Otherwise don't hesitate to reach out to me on the exercism slack or the elm slack for casual discussions.

@ErikSchierboom
Member

@jiegillet Great work! My Elm is less than rusty, so I can't really help much with the review, but what I can do is:

  1. Help with the Docker integration (unless you have the skills yourself, of course)
  2. Give suggestions on what to consider when building an analyzer

As per the second point, my main suggestion would be to have an extensive set of test data with their expected output. I've noticed you have a test_data directory already, so that's fantastic. My suggestion would be to add an expected_analysis.json file to the test data directories, which leads to having some sort of "integration tests" where the analyzer is run against known input and compared against an expected output. I've found this worked great for the C# analyzer. See https://github.com/exercism/csharp-analyzer/tree/main/test/Exercism.Analyzers.CSharp.IntegrationTests/Solutions/TwoFer/Concatenation/IsNullOrWhiteSpace/Expression for what that might look like.

Feel free to ping me if you need any reviews.

@jiegillet
Contributor Author

Thanks a lot for the help!
I will most likely need help with the docker integration. I plan on copying the configuration in elm-test-runner but I will certainly ping you for a review. Should that be a separate PR, or included in this one?
For the testing, I will certainly have some, but I haven't decided exactly how it will be. I think I will have some integration tests like you are proposing, and also some unit tests for the individual checks and tools.

@ErikSchierboom
Member

I will most likely need help with the docker integration. I plan on copying the configuration in elm-test-runner but I will certainly ping you for a review. Should that be a separate PR, or included in this one?

Whatever you prefer!

, findFunctions : List CalledFunction
, find : Find
, comment : Comment
}
Contributor Author

@jiegillet jiegillet Jan 24, 2022

Dear @angelikatyborska and @neenjaw, I know you can't review Elm code, but I would like to have your opinion on this function API.

This is basically assert_call copied from the Elixir Analyzer. I made some changes:

  • Renamed called_fn and calling_fn to findFunctions and calledFrom because I was always confusing the two. Maybe there are better names?
  • Made calling_fn (calledFrom) into a Maybe type: Nothing means anywhere in the module, and Just function means only in function.
  • Made called_fn (findFunctions) into a list of functions, and we can either find All, None or Some. I did this because it's simpler to write that a function should call several other ones, and it's also easier to say a function should call one of several choices; this is done in Elixir with suppress_if, but it only works for two (it also covers about 80% of uses of suppress_if, so maybe it doesn't need to be implemented). It also handles assert_call and assert_no_call in one function.
  • The tool I'm using, elm-review, makes it trivial to return a Range, basically the location in the source code of a mistake (in this case, mostly calling_fn, or sometimes the position of a function that shouldn't be called). However, I'm not sure how to use this power effectively. What would be awesome would be to add squiggly lines to the source code. I'm not sure if that's possible.

I'd love to have your opinion on these changes, and to ask if you've had other ideas of things that could be smoother with assert_call in general. Thank you 💛
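
To make the API concrete, a hypothetical rule built with it could read like this (the constructor names FromExternalModule and All are from the draft and may change; useWithDefaultComment is an illustrative Comment value):

```elm
-- Hypothetical use of the proposed functionCalls helper:
-- "the twoFer function must call both Maybe.withDefault and String.fromInt"
twoFerRule : Rule
twoFerRule =
    Analyzer.functionCalls
        { calledFrom = Just "twoFer" -- Nothing would mean "anywhere in the module"
        , findFunctions =
            [ FromExternalModule [ "Maybe" ] "withDefault"
            , FromExternalModule [ "String" ] "fromInt"
            ]
        , find = All -- or None / Some
        , comment = useWithDefaultComment -- a Comment value defined elsewhere
        }
```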

@ceddlyburge
Contributor

Hi @jiegillet / Jie, thanks for the proposal! This all looks great, and I agree that elm-review is a good idea. We already use the same elm-syntax that it uses in the representer.

It sounds like you would like some help with testing it and smoothing out the rough edges, is that right?

@jiegillet
Contributor Author

Hi @ceddlyburge, thank you for your feedback, I'm glad you like the idea.

I'm still working on it with a lot of motivation at the moment (I just added a super cool feature; I'll commit after I add unit tests later), and I'm reworking the structure a lot, so getting someone else involved right now might be counter-productive. But the second I run out of steam, I'll reach out for help.

I have some ideas on how to do the tests (I already started adding unit tests for the rules), but things are still changing a lot. I submitted a PR to elm-review to add a feature that would help greatly, we might have to wait for that to be merged and published.

I do welcome high-level feedback, however. I've opened an issue to discuss common checks, and I might open more, or discuss things on Slack or here.

@ceddlyburge
Contributor

Cool cool, it sounds like you are happy in your work! Please get in touch if you want to talk through anything ...

- update elm-review to 2.7.0 to use Rule.filterErrorsForFiles
- introduce RuleConfig
- stop using slug, instead determine files to run on in RuleConfig
- common checks are run as is, custom decoders translate errors
@jiegillet jiegillet marked this pull request as ready for review February 9, 2022 14:01
@jiegillet jiegillet requested a review from a team as a code owner February 9, 2022 14:01
@jiegillet
Contributor Author

jiegillet commented Feb 9, 2022

Hi @mpizenberg and @ceddlyburge.
I think this is ready for a review. It's by no means perfect, and there are probably lots of details to fix, but I shouldn't spend too much time on those while the structure is still under discussion. I suspect I already spent too much time on stuff that will change a lot :)

Here is a quick rundown on how it works. It's very condensed, so please ask questions if it's not clear.

To give it a run, try bin/build.sh && bin/run.sh two-fer test_data/two-fer/imperfect_solution test_data/two-fer/imperfect_solution and check test_data/two-fer/imperfect_solution/analysis.json.

It relies on elm-review to run rules specific to exercises as well as common rules. After elm-review is done exporting the comments as JSON, the comments are sent back into the app, decoded, transformed and summarized into the analysis.json that the system expects. That file contains references to files stored on website-copy that contain the messages the users will see (full docs here).
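
The resulting analysis.json would be shaped roughly like this (illustrative values; the comment string points at a message file on website-copy):

```json
{
  "comments": [
    {
      "comment": "elm.two-fer.use_withDefault",
      "params": { "function": "twoFer" },
      "type": "actionable"
    }
  ]
}
```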

The exercise rules are built as normal elm-review rules, except that the output is encoded in a hacky way (elm-review only exports messages as strings, but for Exercism we need structured JSON). I also created a powerful helper tool in Analyzer.elm (heavily inspired by the Elixir Analyzer) that can check if a function calls another (and more, see docs).

Exercise rules should only run for one exercise, but elm-review is not very flexible in that regard, so the best solution I found was to hardcode the paths of the only files the rule should run on (I had to add that feature to elm-review first). This is not super clean and would definitely need to be checked in the CI somehow. I would welcome a better approach.

The common rules can also be built as normal rules, but there are many useful (and complex) established rules in the community, so I decided to take advantage of them. The output of those rules is formatted strings that are complicated to parse (the Simplify rule looks for more than 100 special cases, with many different possible messages), so for now I have opted to grab them almost as they are via custom decoders that need to be added, and send them as they are to the students. This goes against the spirit of the analyzer (see this discussion on Slack), but is acceptable for now.

I have added some shell scripts to run smoke tests, but I haven't yet integrated anything with docker; I think this PR is big enough (CC @ErikSchierboom).

@mpizenberg
Member

@jiegillet that's awesome you spent all this time figuring out how to integrate elm-review in the analyzer! I hope I can have a detailed look at it this weekend, but no promises, sorry.

@jiegillet
Contributor Author

Oh, I forgot to mention that I do not expect a fast review on a huge PR like this one.
Please do take your time, and do not apologize for taking as long as you need. You already mentioned how busy you are.

@ErikSchierboom
Member

I have added some shell scripts to run smoke tests, but I haven't yet integrated anything to docker, I think this PR is big enough (CC @ErikSchierboom).

Sure! That makes sense. Let me know if you need any help with that.

Contributor

@ceddlyburge ceddlyburge left a comment

Hi Jie, I haven't looked at all the code yet, but I've got as far as Comment.elm, and I thought I would submit the feedback so far in case it is useful to you. It all looks good, and there aren't any dealbreakers, but I have left quite a lot of comments. Generally I think it might be possible to use a few fewer Maybes and a few more types, but that could happen in a follow-up PR, seeing as this one works fine as it is.

Cheers, Cedd

bin/check_files.sh
# Temporarily disable -e mode
set +e
# running the analysis
./bin/run.sh two-fer ./test_data/two-fer/perfect_solution ./test_data/two-fer/perfect_solution > /dev/null
Contributor

As discussed, can we initially release a single rule for a difficult / obscure exercise, just so that if something bad happens it will only affect that one exercise?
And so probably update the smoke test to use that exercise.

Contributor Author

Sure, sounds good. How about using strain?
It's not the most difficult/obscure one, but it's really suited for an analysis, since the instructions say people are not supposed to use List.filter (I'm thinking of blocking any List function), but a large proportion of solutions use it anyway.
There have been about 6 solutions published in the last 6 months.

Contributor

Yep, sounds like a good plan.

src/Analyzer.elm Outdated
src/Analyzer.elm Outdated

-}
functionCalls :
{ calledFrom : Maybe String
Contributor

calledFrom = Nothing is slightly strange syntax here; maybe it is worth creating a new type for this (calledFrom = Anywhere is more intent-revealing)?
Also it will always be a String, so if we define a new type it won't need to be generic.
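
Something like this, perhaps (the names are only suggestions):

```elm
-- Suggested replacement for `calledFrom : Maybe String`
type CalledFrom
    = Anywhere -- was Nothing
    | TopFunction String -- was Just functionName
```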

src/Analyzer.elm
expressions ++ List.concatMap (\branch -> traverseTreeFrom branch trimmedTree) modulefunctions


matchFunction :
Contributor

I found this function slightly hard to work out, and I wonder whether using Maybe Range is the best way to represent a match.

Anyway, I think it would get clearer if the case statement moved to the let section and just returned a boolean, and the in part could do the if statement (and I think this means getRange is no longer required).

Contributor Author

I agree that the meaning is hard to work out. I do need to keep the range when I find the function, that's why I used a Maybe Range, but I'll try to clean it up.

( calledFunction, ( Just [], name ) == ( functionModule, functionName ) |> getRange )


checkForError : Find -> Comment -> Context -> List (Error {})
Contributor

This function is nicely laid out, and I kind of get what it is doing, but I'm unable to tell if it's working properly (although I'm sure it is), especially with the global error / error. Maybe some more comments would help, or some functions with intent-revealing names, or something.

Contributor Author

You are right, this code is fairly subtle, I'll try to make it more clear, at the very least with comments.

src/Comment.elm Outdated
List.sortBy (.commentType >> commentTypeShowOrder) comments

message =
case List.map .commentType comments |> minimumBy commentTypeSummaryOrder of
Contributor

Could maybe use sortBy again here and take the head, to remove the need for minimumBy? Or the List.Extra package has a minimumBy function.

Contributor Author

I just realized List.Extra is an indirect dependency already, might as well use that!

src/Comment.elm Outdated


makeSummary : List (Decoder Comment) -> String -> Result Decode.Error String
makeSummary decoders =
Contributor

Would this be better as highjackingDecoders, based on the name of the param that decodeElmReviewComments takes?

makeSummary : List (Decoder Comment) -> String -> Result Decode.Error String
makeSummary decoders =
decodeElmReviewComments decoders
|> Decode.map (aggregateComments >> encodeSummary >> Encode.encode 0)
Contributor

Does this encode as well as decode? Is this part of the dark magic that takes the JSON being sneakily stored in the elm-review output and then extracts it into what exercism wants?

Contributor Author

Yes, this is the dark magic part.
It decodes the comments from the elm-review JSON, reorders them, makes a summary and encodes the JSON exercism wants.

Contributor

Cool, I think it is probably worth adding a comment to explain it a bit.

Contributor Author

@jiegillet jiegillet left a comment

Thank you for the review!
Partial reviews are appreciated; this way I can start changing things little by little.
I prefer committing changes in this PR, in particular to make my intentions clearer with better custom types and comments.

@jiegillet
Contributor Author

I think I've addressed all the comments so far, and I must say all the changes are solid improvements!

In particular, I changed the type (Maybe ModuleName, String, Range) to Node Expression, and this will let us make much more general checks in the future (checking if a function uses a let block, or record update syntax...). We can add those as they become relevant for future exercise rules.

I have identified some over-engineering in functionCalls and I would like to ask for your advice. elm-review has two types of errors:

  1. normal errors, that are localized (think of the squiggly lines in compiler errors)
  2. global errors, that are not localized (for things like "something is missing")

Normal errors are highly recommended for rules, which makes sense for most uses of elm-review, so I have been using them whenever possible. However, that makes the code a bit more complex (see checkForError), especially since so far I'm not actually including that information in the final comments shown to the students. It's also tedious for unit tests, because elm-review tests require giving a location when testing normal errors.

In the Elixir analyzer, the comments for the functionCalls type of rules say something generic like "Function X should call function Y and Z." or "Function X should not call Y or Z", without specifying the location, and students are expected to figure things out. This works well enough, I think, but I would like to know what you think. I have 2 options in mind:

  1. Keep comments general => don't bother with locations, always return global errors. Pros: simpler code, simpler comments. Cons: students have a slightly harder time figuring out their mistakes.
  2. Localize the comments => export ranges in the comments. Pros: nice for students. Cons: higher complexity for code and unit tests, especially when several errors are found (comment on which one? all? in which order?), need to rely more on passing parameters in the comments.

If we go with option 2, it's best to do it now, I can't imagine that it would be easy to change later on.
Looking forward to your thoughts.
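
The two options, side by side (a sketch; the message text is illustrative, the API is elm-review's):

```elm
-- Option 1: a global error, not attached to any location
globalComment : Error {}
globalComment =
    Rule.globalError
        { message = "twoFer should call Maybe.withDefault"
        , details = [ "Use Maybe.withDefault to handle the missing name." ]
        }


-- Option 2: a normal error, carrying the Range of the offending code
locatedComment : Range -> Error {}
locatedComment range =
    Rule.error
        { message = "twoFer should call Maybe.withDefault"
        , details = [ "Use Maybe.withDefault to handle the missing name." ]
        }
        range
```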

Contributor

@ceddlyburge ceddlyburge left a comment

Hi Jeremie

I have now looked at everything apart from AnalyzerTest. Maybe we could have another call and you can talk me through it.

It would also be good to set up CI, which I think you have mostly done, but it still needs integrating with GitHub.

I think Matthieu is very busy at the moment, but I would like to allow him to review this if he wants to (it may be that he is just too busy and can't do it now).

The code looks good as it is I think, so when you have finished reading the comments and have decided what to do about them (if anything) please let me know and I'll take a final look.

Then we can ask one of the exercism team to merge / deploy.

Cheers

Cedd

@@ -0,0 +1,13 @@
module Strain exposing (discard, keep)
Contributor

Could we add comments to the code to state what imperfections we are expecting the analyser to find?

@@ -0,0 +1,62 @@
module Common.Simplify exposing (ruleConfig)
Contributor

NoUnused and Simplify have some identical code. I'm not sure whether this is a good or a bad thing, but I thought it worth mentioning. The good bit is that it keeps things simple and understandable, the bad thing is that there is less consistency and more places to make mistakes.

A list of rules, along with their associated decoders and message formatters would also work I think

Contributor Author

Similar code, but not identical: they don't read the same fields and don't produce messages in the same way. I would leave them separate, at least for now. I also need to add unit tests for these; I guess I can do that once we commit to using them.

A list of rules, along with their associated decoders and message formatters would also work I think

That's not a bad idea, but the exercise rules don't need decoders since they all use the same decodeComment.
Maybe type AnalyzerRule = CustomRule Rule | ImportedRule Rule (Decoder Comment)?

{ name : String
, comment : String
, commentType : CommentType
, params : Dict String String
Contributor

Is params always a singleton String? If so, making it a String would simplify things.

Contributor Author

No, not necessarily, there can be a number of parameters.
For example, we could have a rule on casing and send {"params":{"actual":"some_var","expected":"someVar"}} to inject those in the general message over on website-copy.

ruleConfigs : List RuleConfig
ruleConfigs =
[ -- Common Rules
Common.NoUnused.ruleConfig
Contributor

Can we comment out the common rules for the first release?

tests/Exercise/StrainTest.elm
tests/Exercise/TwoFerTest.elm
describe "encoders should encode and decoders should decode"
[ test "encoding" <|
\() ->
Expect.equal
Contributor

I think it is more idiomatic to put the Expect.equal at the end of the test.

@mpizenberg
Member

Yes, sorry @ceddlyburge and @jiegillet. I'm browsing through some of the comments and taking a little bit of time when I think I can quickly identify something or some issue, but I won't have time for a full review of the code here. Super thankful you can do this, Cedd!

@jiegillet
Contributor Author

Thank you for the review! I addressed your comments.

I have now looked at everything apart from AnalyzerTest. Maybe we could have another call and you can talk me through it.

Sure, AnalyzerTest is a bit messy :p
We could do today (Monday) or tomorrow (Tuesday), at the same time as last time?

It would also be good to set up CI, which I think you have mostly done, but it still needs integrating with GitHub.

Could we do it in a different PR? Maybe along with the docker integration?
This one is getting long, lots of comments to scroll through :)

@ceddlyburge
Contributor

Sounds good, I could do 12:30 today if that still works for you? 9:30 your time I think ...

@jiegillet
Contributor Author

Alright! I sent you and Matthieu the invitation.

Contributor

@ceddlyburge ceddlyburge left a comment

Hi Jeremie, thanks for the video call just now. And thanks again for all your time on this; it's a very exciting step forward, and must have been a lot of work! I'm approving this now, and am looking forward to the docker integration and CI. It would also be good to add a readme at some point, and I think we could simplify some of AnalyserTest, but I don't think it's a priority.

@ceddlyburge
Contributor

Keep comments general => don't bother with locations, always return global errors. Pros: simpler code, simpler comments. Cons: students have a slightly harder time figuring out their mistakes.
Localize the comments => export ranges in the comments. Pros: nice for students. Cons: higher complexity for code and unit tests, especially when several errors are found (comment on which one? all? in which order?), need to rely more on passing parameters in the comments.
If we go with option 2, it's best to do it now, I can't imagine that it would be easy to change later on.
Looking forward to your thoughts.

Let's stick with option 2 for now; it didn't look like that much extra complexity, and I think it will be useful, especially for the longer exercises.

@ceddlyburge
Contributor

ceddlyburge commented Feb 21, 2022

@exercism/maintainers-admin could you review / approve this please?

@jiegillet
Contributor Author

Any reason why we are holding off on merging?
I have docker and the CI ready in another branch, I'll also add a README there.

@ceddlyburge ceddlyburge merged commit f4ce91d into exercism:main Feb 28, 2022
@ceddlyburge
Contributor

Sorry, I hadn't realised that Erik had approved it ...

Labels
x:size/massive Massive amount of work