Would it not be possible to configure a different external command for each rule? You'd likely want to ensure that they're only defined locally, to avoid executing arbitrary shell code from the internet, but one could imagine a per-rule configuration along those lines.
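For instance, such a configuration might look like this (purely illustrative: the `withExternalCommand` builder does not exist, and `ExtractCssClasses` is an invented rule name):

```elm
module ReviewConfig exposing (config)

import ExtractCssClasses
import Review.Rule as Rule exposing (Rule)

config : List Rule
config =
    [ ExtractCssClasses.rule
        -- Hypothetical builder: pipe this rule's extract into a command
        -- defined locally in the configuration, never one fetched
        -- from the internet.
        |> Rule.withExternalCommand "css-linter --remove-unused"
    ]
```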
Idea
I find the `elm-review` API quite fun to use, and much more powerful than using `grep` or regexes. When a rule analyzes a project, it may collect contextual data that it uses to decide whether or not to report an error.

We could use the same API to collect data for other purposes. For instance, if you use CSS files and classes, you could extract the list of CSS classes currently used in your application and pass it to your CSS linter, which would then tell you which unused classes to remove. We could also have a rule that extracts the Markdown documentation of the whole project so that a script can create those files. You could then publish the documentation in a custom way on your product website, or run a Markdown linter over it to report grammatical issues.
For the Markdown linting, we could also build those rules ourselves, and they would be more helpful, but I'm sure this approach would be far faster to build. And then it might inspire some people to write the rules inside `elm-review` 🙂

You could also collect data to compute statistics about your project and display them in a dashboard, such as module coupling. Or to get a feel for where to start cleaning up an Elm application by detecting code smells: how many times is a primitive type being aliased, how often is `Maybe.withDefault` used, etc.

The proposal
`elm-review` will provide a new builder for `Rule` named `withDataExtractor` (open to naming suggestions) for project rules. You give it a function that takes your project context and returns an extract. I imagine that it will often be a `Json.Value`, but we'll go over that later.

We will not call this function in a normal `elm-review` run. The CLI will get an `--extract` boolean flag (open to naming suggestions) which will make the tool run as follows: when reporting JSON, we can probably output both the errors and the extraction.
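As a rough sketch, a project rule using the proposed builder might look like the following. The naming and exact signature are still open; `initialProjectContext` and the `cssClasses` field are assumptions, and the module visitors that populate the context are elided:

```elm
import Json.Encode as Encode
import Review.Rule as Rule exposing (Rule)
import Set

rule : Rule
rule =
    Rule.newProjectRuleSchema "ExtractCssClasses" initialProjectContext
        -- (module visitors that collect the CSS classes go here)
        |> Rule.withDataExtractor
            (\projectContext ->
                -- Turn the final project context into an extract,
                -- here the set of CSS classes found in the code.
                Encode.list Encode.string (Set.toList projectContext.cssClasses)
            )
        |> Rule.fromProjectRuleSchema
```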
If you build a review rule that only aims to extract data, it may not make sense for it to report errors, and therefore it may not necessarily make sense for `elm-review` to do this. But there will be times when the rule runs into code that it cannot accept or make sense of. If we take the example of extracting CSS classes from the code, we can imagine a call to `Html.Attributes.class` with a dynamic value that is too complex for the rule to compute. In that case, the rule could report an error if you so wished.

It would be compatible with `--fix` and `--fix-all`: the tool would fix the errors, and when no more fixes are found, do what I mentioned before.

For module rules
For project rules, once we have finished reviewing the project, we have a `projectContext` that we can pass to the data extractor. But for module rules we don't have a single result, only one module context per module, each of which is internally discarded at the moment. The easiest and most performant solution, I think, is to only support a data extractor for project rules.
Extracted data output
I think that this might be where I have the biggest design issue.
My original thought was that we'd output a JSON object that would look like the following:
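As an illustration, the shape I have in mind could be something like this (the rule names and payloads are invented):

```json
{
  "extracts": {
    "ExtractCssClasses": [ "btn", "btn-primary", "card" ],
    "ExtractDocs": { "Some.Module.A": "..." }
  }
}
```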
This seems like the easiest way to deal with multiple rules. One issue that could come up is that the name of a rule is not necessarily unique: you can enable a rule twice, and one valid use case for that is when the two instances have different configurations. So if we key the output by rule name, we potentially override and lose data. I see two ways around this.
Extracted data output, as plain text?
I think `--watch` mode could be quite useful if, whenever there are no errors, we displayed the extracted data instead of reporting that there are no errors. That way, you could have a kind of "monitor" of the codebase that updates every time you save. To make this work, we need to be able to display plain text, since JSON would make such a monitor hard to read.

An example of a monitor: we are on a quest to add type annotations everywhere and would like to do the work in batches. We create a rule that counts the number of missing type annotations and returns the top 5 modules.
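Such a monitor's plain-text output might render something like this (the counts and most module names are invented):

```
Missing type annotations: 42 total
Top 5 modules:
  1. Some.Module.A (12)
  2. Some.Module.B (9)
  3. Some.Module.C (7)
  4. Some.Module.D (5)
  5. Some.Module.E (4)
```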
Now I know it's best to target `Some.Module.A` first, then `Some.Module.B`, etc.

Or imagine you are working on a complex `update` function and wish to visualize the flow of messages.
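Such a flow monitor could render something like this (the message and state names are invented):

```
UserClickedSave     -> SavingForm
GotSaveResponse Ok  -> FormSaved
GotSaveResponse Err -> SavingFailed
```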
As you worked on that file, the flow would update to reflect how the page works.

This would, I think, open up quite a lot of interesting applications, but it would not work very well if we only reported JSON, as that would make the monitors unreadable.
Integration with external tools
It can be useful to pipe the extracted data to another tool, in the spirit of the Unix philosophy. While this would work when calling `elm-review` normally, it would not in watch mode, as the program never terminates. I don't think we need to support that for at least the first iteration, but we could have a `--command` argument to which we would pipe the extracted content: `elm-review --watch --extract --command "sed s/Thing/Other/ > main.css"` (fake bash, but you get the gist).

I do feel like the tool would be limited, since we could only reasonably/usefully pass the output of a single rule to something else, or use only a single thing as a monitor, leading to multiple `elm-review` calls with different configurations/arguments. Or we wrap it all in one giant JSON and let the user handle it with a tool like `jq` or a custom Node.js script 🤷♂️

Request for comments
Should `elm-review` do this, or should that be a different tool?