
Pylint alerts corrections as part of an intervention experiment #2294

Closed

evidencebp opened this issue Nov 13, 2024 · 5 comments

@evidencebp commented Nov 13, 2024

Is your feature request related to a problem? Please describe.

Pylint alerts correlate with a higher tendency toward bugs and with harder maintenance.

Describe the solution you'd like
Fix some of the pylint alerts.

Describe alternatives you've considered

  1. You can leave the code as is
  2. You will probably fix some of the alerts during your ongoing development

Additional context

I'd like to conduct a software engineering experiment regarding the benefit of Pylint alerts removal.
The experiment is described here.
In the experiment, Pylint is run with a specific set of alerts enabled, and files are selected into intervention and control groups.
After the interventions are done, one can wait and examine the results.
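To make the setup concrete, a run restricted to a handful of alert types could be scripted through Pylint's Python API. This is only a minimal sketch: the message symbols and the target path below are illustrative placeholders, not the experiment's actual configuration.

```python
# Minimal sketch of a restricted Pylint run; the enabled message symbols
# and the target path are placeholders, not the experiment's real setup.
from pylint.lint import Run

Run(
    [
        "--disable=all",  # start from a clean slate
        "--enable=line-too-long,too-many-branches,too-many-statements",
        "pvlib/",         # placeholder target
    ],
    exit=False,  # report results without exiting the interpreter
)
```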

I'm asking for your approval to conduct an intervention in your repository.

See examples of interventions in stanford-oval/storm, gabfl/vault, and coreruleset/coreruleset.

You can see the planned interventions.

May I do the interventions?

@kandersolar (Member)

The linked file is for a different package. I think this is what is relevant for us: https://github.com/evidencebp/pylint-intervention/blob/main/interventions/candidates/pvlib_pvlib-python_interventions_October_06_2024.csv

Looking through the list of flags, I am inclined to decline the offer. Many of the "issues" are intentional, and compatible with the linter configuration we use. Others would be only debatable improvements, with the effort required to edit and review likely not worth the improvement IMHO.

> The goal of the experiment is to evaluate the benefit of fixing alerts of various types. Benefit will be measured both by developer opinion and by metric improvement.

@evidencebp I am curious, what metrics do you use? How can the benefit of fixing pylint issues be quantified?

@evidencebp (Author)

Thank you for the feedback, @kandersolar !

Some of the alerts are known to have limited or even no effect (e.g., line-too-long).
They function as a control, to check that any observed influence is not a side effect (e.g., merely drawing attention to the file and thereby prompting a different significant fix).
These alerts are easy to fix and review.
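As a concrete illustration of how mechanical such a fix usually is (the function and variable names here are invented, not taken from this repository):

```python
# Before: one call that trips line-too-long (C0301)
result = compute_irradiance(surface_tilt, surface_azimuth, solar_zenith, solar_azimuth, dni, ghi, dhi)

# After: a purely mechanical rewrap, trivial to write and to review
result = compute_irradiance(
    surface_tilt, surface_azimuth, solar_zenith, solar_azimuth,
    dni, ghi, dhi,
)
```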

Usually there are two types of relevant metrics.
The first is code metrics, which are computed from the source code itself.
Examples are lines of code and McCabe cyclomatic complexity.
These are usually direct and local.
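For instance, both of these can be computed per file with off-the-shelf tooling. A sketch assuming the third-party radon package, which is not necessarily what the experiment uses:

```python
# Sketch: lines of code and McCabe complexity for one file, using the
# third-party `radon` package (an assumption, not the experiment's tooling).
from radon.complexity import cc_visit
from radon.raw import analyze

with open("example.py", encoding="utf-8") as f:
    source = f.read()

print("LOC:", analyze(source).loc)   # raw line counts for the module

for block in cc_visit(source):       # per-function McCabe complexity
    print(block.name, block.complexity)
```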

The other type is process metrics, which are based on the development process.
These provide a view independent of the source code (making comparisons more reliable) and can represent benefits.
Examples are corrective commit probability (which measures tendency to bugs) and commit duration (which measures modification effort).
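As a rough sketch of the idea behind corrective commit probability: the share of commits whose message suggests a bug fix. The keyword heuristic below is a naive stand-in for the published classifier, purely for illustration.

```python
# Naive sketch: share of commits whose subject line suggests a bug fix.
# The keyword pattern is a simplistic stand-in for the real classifier.
import re
import subprocess

subjects = subprocess.run(
    ["git", "log", "--pretty=%s"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

corrective = re.compile(r"\b(fix(e[sd])?|bug|fault|error)\b", re.IGNORECASE)
hits = sum(1 for s in subjects if corrective.search(s))
print(f"corrective commit probability ~= {hits / (len(subjects) or 1):.2f}")
```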

@cwhanse (Member) commented Nov 13, 2024

I concur with @kandersolar - most of the pylint items are "line too long", which is intentional for readability, or in the "too many lines/branches/statements" set. Squelching the "too many" set amounts to refactoring modules and functions, which would be a lot of work for (in my perception) little return.

@evidencebp (Author)

@kandersolar , @cwhanse , I respect your decision and enjoy the discussion.

I agree regarding the line-too-long.

Note that "too many lines/branches/statements)" is a very different case.
Code length (in many variations) is very influential.
It leads to code that is harder to understand, test, and modify.

Refactoring to split a long method into smaller ones is common enough to be built into IDEs.
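Schematically, the refactoring looks like this (all names invented, just to show the shape of an extract-method split):

```python
# Extract-method refactoring, schematically: one long function split into
# smaller pieces that are easier to understand, test, and modify.

def process_readings(readings):
    cleaned = _drop_invalid(readings)
    return _summarize(cleaned)

def _drop_invalid(readings):
    return [r for r in readings if r is not None and r >= 0]

def _summarize(readings):
    return {"count": len(readings), "mean": sum(readings) / len(readings)}
```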

Note that the goal of the experiment is to reach a point where we have a dataset of many clean interventions, which will let us learn how beneficial each alert type is.
From that point of view, even verifying that fixing line-too-long is not beneficial will advance the knowledge.

@kandersolar (Member) commented Nov 25, 2024

Seems like we're not interested in pursuing this for pvlib-python, but thanks @evidencebp for the offer.

@kandersolar closed this as not planned on Nov 25, 2024