Linting with Ruff #2033

Merged: 23 commits merged into MaartenGr:master on Jun 19, 2024
Conversation

@freddyheppell (Contributor) commented Jun 3, 2024

WIP linting with ruff (fixes #2022).

Still to-do:

  • fix current check failures (see below)
  • github actions workflow
  • When #1978 (build: switch from setup.py to pyproject.toml) is merged, the config can be moved from ruff.toml into pyproject.toml
  • edit contributing file
  • make decision about dependency caching
  • decide if we want to enable any additional rules
Old discussion about enabling/disabling lint rules:

Current Lint State

Current outstanding issues reported by ruff check, which are not auto-fixable.

44      D205    [ ] 1 blank line required between summary line and description
36      E402    [ ] Module level import not at top of file
11      E722    [ ] Do not use bare `except`
 9      D417    [ ] Missing argument description in the docstring for...
 7      E731    [*] Do not assign a `lambda` expression, use a `def`
 5      F821    [ ] Undefined name ...
 4      F841    [*] Local variable ... is assigned to but never used
 2      D301    [*] Use `r"""` if any backslashes in a docstring
 2      F401    [ ] ... imported but unused; consider using `importlib.util.find_spec` to test for availability
 1      E741    [ ] Ambiguous variable name: `l`

Those marked with [*] can be fixed automatically by a rule marked as unsafe. These fixes are not enabled by default since they are currently not well implemented and/or tested enough to be sure they won't modify the behaviour of code. However, they're fine to use one-off so long as their results are checked.
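
For reference, a rough sketch of applying these one-off locally (flags as in current Ruff releases; the paths are illustrative):

# Show remaining violations grouped per rule (roughly how the table above was produced)
ruff check . --statistics

# Apply the safe fixes only
ruff check . --fix

# Additionally apply the fixes marked as unsafe, then review the diff before committing
ruff check . --fix --unsafe-fixes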

Most of these are relatively small issues which I can work on fixing, the bigger ones are:

D205 - 1 blank line required between summary line and description

The majority appear to be caused by functions not having a short single-line summary, instead opening with a longer explanation, e.g.

bertopic/backend/_sentencetransformers.py:54:9: D205 1 blank line required between summary line and description
   |
53 |       def embed(self, documents: List[str], verbose: bool = False) -> np.ndarray:
54 |           """Embed a list of n documents/words into an n-dimensional
   |  _________^
55 | |         matrix of embeddings.
56 | | 
57 | |         Arguments:
58 | |             documents: A list of documents or words to be embedded
59 | |             verbose: Controls the verbosity of the process
60 | | 
61 | |         Returns:
62 | |             Document/words embeddings with shape (n, m) with `n` documents/words
63 | |             that each have an embeddings size of `m`
64 | |         """
   | |___________^ D205
65 |           embeddings = self.embedding_model.encode(documents, show_progress_bar=verbose)
66 |           return embeddings
   |
   = help: Insert single blank line

Really, the issue here is that the first line (logical line, not literal line in the code file) is overly long, but there's not actually a rule that captures that. Perhaps this rule could be disabled.
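
For illustration only, a sketch of how the docstring above could be rewritten to satisfy D205 (excerpted as in the lint output; the shortened summary wording is mine, not from the codebase):

def embed(self, documents: List[str], verbose: bool = False) -> np.ndarray:
    """Embed a list of documents/words into an embedding matrix.

    Arguments:
        documents: A list of documents or words to be embedded
        verbose: Controls the verbosity of the process

    Returns:
        Document/words embeddings with shape (n, m) with `n` documents/words
        that each have an embeddings size of `m`
    """
    embeddings = self.embedding_model.encode(documents, show_progress_bar=verbose)
    return embeddings

The only change is collapsing the opening sentence onto a single line so that a blank line can follow it.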

E402 - Module level import not at top of file

There's a lot of these but they're all caused by the main bertopic file mixing top-level imports with conditional imports. The conditional imports can be moved to below all the module-level imports if possible. Some (like the py3.8 conditional) could potentially be removed.
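
As a rough sketch of the reordering (module names are illustrative, not the actual bertopic imports), the pattern would be to keep all unconditional imports first and either move the conditional ones directly after them or waive individual lines:

# Unconditional module-level imports first
import sys
import warnings

# Conditional imports can follow the unconditional block...
if sys.version_info >= (3, 8):
    from importlib.metadata import version
else:
    from importlib_metadata import version

# ...and any import that genuinely has to stay late can be waived per line
import re  # noqa: E402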

E722 - Do not use bare except

These may be hard to fix if the specific exception they're intended to catch isn't obvious. But arguably that's a problem in general, not just something that needs to be fixed to appease the linter.
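
A small sketch of the usual fix (the helper and the exception types below are placeholders, not actual BERTopic code):

# Hypothetical helper to illustrate the change
def safe_lookup(mapping, topic_id):
    try:
        return mapping[topic_id]
    # A bare `except:` would also swallow KeyboardInterrupt and SystemExit;
    # naming the expected exceptions (or at least `Exception`) keeps real errors visible.
    except (KeyError, IndexError):
        return None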

F821 - Undefined name ...

There are some variables in a loop in BERTopic.topics_over_time which it thinks are unassigned:

bertopic/_bertopic.py:880:30: F821 Undefined name `previous_topics`
    |
878 |                 current_topics = sorted(list(documents_per_topic.Topic.values))
879 |                 overlapping_topics = sorted(
880 |                     list(set(previous_topics).intersection(set(current_topics)))
    |                              ^^^^^^^^^^^^^^^ F821
881 |                 )
    |

bertopic/_bertopic.py:887:21: F821 Undefined name `previous_topics`
    |
885 |                 ]
886 |                 previous_overlap_idx = [
887 |                     previous_topics.index(topic) for topic in overlapping_topics
    |                     ^^^^^^^^^^^^^^^ F821
888 |                 ]
    |

bertopic/_bertopic.py:893:27: F821 Undefined name `previous_c_tf_idf`
    |
891 |                     (
892 |                         c_tf_idf[current_overlap_idx]
893 |                         + previous_c_tf_idf[previous_overlap_idx]
    |                           ^^^^^^^^^^^^^^^^^ F821
894 |                     )
895 |                     / 2.0
    |

I think this is because they're assigned later in the loop and these parts of the code won't run before then, but of course a static analyser can't determine this. These may need to be specifically ignored with # noqa comments.
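
For example, the suppression would look roughly like this on the first flagged line:

overlapping_topics = sorted(
    list(set(previous_topics).intersection(set(current_topics)))  # noqa: F821
)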

It's also flagged up a mistake from my PR #1984 where self is used in a @classmethod! I'll PR a fix for that too...

@MaartenGr (Owner)

I was just reading through #1978 when I saw this PR, thanks for the work! As soon as that PR is merged, I'll take the time to look through this one.

@afuetterer (Contributor)

#1978 is merged. :)

@MaartenGr (Owner)

@afuetterer Thanks for the reminder! Note that I typically work on my open-source packages in the evening/weekend, so sometimes it might take a while for me to answer. Having said that, pinging generally helps 😉

Those marked with [*] can be fixed automatically by a rule marked as unsafe. These fixes are not enabled by default since they are currently not well implemented and/or tested enough to be sure they won't modify the behaviour of code. However, they're fine to use one-off so long as their results are checked.

Most of these are relatively small issues which I can work on fixing, the bigger ones are:

Agreed, for these some manual checking is important since this is the first pass of Ruff. After that, we can optimize/automate stuff but for now it is indeed important to understand which changes are pushed through automatically and how they might affect the codebase.

D205 - 1 blank line required between summary line and description
Really the issue here is the first line (logical line, not literal line in the code file) is overly long, but there's not actually a rule that captures that. Perhaps this rule could be disabled.

I think disabling would be preferred here, as fixing these would require rewriting part of the docstrings, which I feel are quite clear at the moment. Also, even if rewriting those would be helpful, I think the impact would still be small considering the amount of work that is required (which could be spent better on other features/issues/tasks instead).

E402 - Module level import not at top of file
There's a lot of these but they're all caused by the main bertopic file mixing top-level imports with conditional imports. The conditional imports can be moved to below all the module-level imports if possible. Some (like the py3.8 conditional) could potentially be removed.

Although most could be moved below the module-level imports, I believe (but I'm not sure) that the warnings filters needed to be set before some of the imports. I added this a while back, so I'm not sure whether it is still needed, but Huggingface's Transformers had many irrelevant warnings back then.

E722 - Do not use bare except
These may be hard to fix if the specific exception they're intended to catch isn't obvious. But arguably that's a problem in general, not just something that needs to be fixed to appease the linter.

Agreed. Some of these were intended to be broad to catch many different issues, but others were just the result of a lack of time. For now, ignoring these makes sense until a separate PR goes into the specifics. I'm worried that if we go over all of them now, it might be a bit out of scope. But it is indeed important to go over them at some point.

@MaartenGr (Owner)

F821 - Undefined name ...
There are some variables in a loop in BERTopic.topics_over_time which it thinks are unassigned:
I think this is because they're assigned later in the loop and these parts of the code won't run before then, but of course a static analyser can't determine this. These may need to be specifically ignored with # noqa comments.

Yep, these need to stay as they are at the moment and noqa comments would be sufficient here.

It's also flagged up a mistake from my PR #1984 where self is used in a @classmethod! I'll PR a fix for that too...

Cool, definitely helps!

How do you want to continue? Because of the merge of the PR I mentioned previously there are some conflicts and the ruff.toml needs to be moved to the pyproject.toml. Other than that, if I start merging other PRs that are currently open, that will continue to conflict with this one I think.

Having said that, there are a couple of others that I actually want to merge first since they were already open for a while and nearing completion (specifically #1929 and #1894).

@freddyheppell (Contributor, Author)

Those all sound like good suggestions. I should be able to look at writing an actions workflow and reducing/ignoring errors this evening and weekend.

I'll update this branch just to get the pyproject.toml changes, then leave it with any future conflicts until it's ready to merge. At this point the conflicts on this branch can just be resolved by accepting the master version of the diff (i.e. discarding changes on this branch) and re-running the linter.

Once merged this will cause two problems, but there are ways around them:

  1. Any open PRs will get merge conflicts - there are ways of minimising the effect of this by manually installing the linter on the PR branch and selectively running the linter on files which will have conflicts (see scikit-learn/scikit-learn#20301 - it's about Black but the same principle applies for Ruff).
  2. Git blames will become useless and attribute virtually every line to the lint commit. This can be resolved by squashing this whole PR to a single commit (which I'll do when ready) and putting the commit hash in a .git-blame-ignore-revs file (see this page in the Black docs). GitHub automatically uses this when showing blames on the web. With the Git CLI you either have to pass this file in each time, or (much easier) run a one-off config option to use it.
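
For reference, a sketch of that one-off setup from point 2 (the hash is a placeholder until the squashed commit exists):

# .git-blame-ignore-revs
# Lint/format the entire codebase with Ruff (#2033)
<full-sha-of-the-squashed-lint-commit>

# one-off local config so the git CLI picks the file up automatically
git config blame.ignoreRevsFile .git-blame-ignore-revs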

@MaartenGr (Owner)

I should be able to look at writing an actions workflow and reducing/ignoring errors this evening and weekend.

Awesome, thanks! Take your time, there's no rush.

Any open PRs will get merge conflicts - there are ways of minimising the effect of this by manually installing the linter on the PR branch and selectively running the linter on files which will have conflicts (scikit-learn/scikit-learn#20301 (comment) - it's about Black but the same principle applies for Ruff).

Hmmm, although it's a very nice solution, we'll have to consider the experience of the users that opened some of the PRs. Looking at the link you provided, it seems to involve a degree of complexity not everyone is familiar with, especially those who opened minor changes/typos.

Although there are currently 12 PRs open (ignoring this PR), there are 6 PRs that have recent changes/updates (#2039, #2013, #2002, #1985, #1929, and #1894). Of those, I believe the latter 4 PRs would be the main difficulty here if this PR is merged before those. How about I merge those latter 4 PRs (when they are done) and then come back to this? It would minimize the number of issues and I think they can be merged this week.

Git blames will become useless and attribute virtually every line to the lint commit. This can be resolved by squashing this whole PR to a single commit (which I'll do when ready) and putting the commit hash in a .git-blame-ignore-revs file (see this page in the Black docs). GitHub automatically uses this when showing blames on the web. With the Git CLI you either have to pass this file in each time, or (much easier) run a one-off config option to use it.

That's nice. I typically already squash all commits when merging with the main branch, so fortunately that is already similar to the current process.

@afuetterer (Contributor)

@afuetterer Thanks for the reminder! Note that I typically work on my open-source packages in the evening/weekend, so sometimes it might take a while for me to answer. Having said that, pinging generally helps 😉


Hi @MaartenGr, sorry. I did not want to rush you in any way, I just wanted to ping @freddyheppell that the other PR was merged and that, if desired, the config could be moved to pyproject.toml, as suggested.

@MaartenGr (Owner)

Hi @MaartenGr, sorry. I did not want to rush you in any way, I just wanted to ping @freddyheppell that the other PR was merged and that, if desired, the config could be moved to pyproject.toml, as suggested.

Don't be sorry! Just wanted to be transparent as to when you can (sort of) expect a response from me and why things might take a bit longer. Also, in all transparency, I didn't even notice that it wasn't you who created the PR. I just kinda assumed, so definitely miscommunication from my side 😉

@freddyheppell (Contributor, Author)

Hmmm, although it's a very nice solution, we'll have to consider the experience of the users that opened some of the PRs. Looking at the link you provided, it seems to involve a degree of complexity not everyone is familiar with, especially those who opened minor changes/typos.

I think in practice it'll be relatively straightforward - in fact what I linked to is probably unnecessary in practice. A very simple PR may be mergeable anyway, or produce an easily resolvable merge conflict. In more complex PRs, the inverse of what I'll have to do in this PR can be done, i.e. accept changes from the PR branch in favour of master, then re-run the linter. In the worst case, merge conflicts can be minimised by installing ruff manually and running ruff format x.py y.py z.py... with just the files modified in the PR. This should make those files identical to the new linted version on master, other than the parts where actual changes have been made. And this is a one-off process caused by linting the entire codebase, and won't affect branches once this PR is merged.

That's nice. I typically already squash all commits when merging with the main branch, so fortunately that is already similar to the current process.

Just to clarify with this - it's only this initial linting PR which causes issues with git blame (as it changes the vast majority of lines as far as git's concerned) so will need to go in the .git-blame-ignore-revs file. In subsequent PRs, the only linting will be on code that's been created or modified as part of the PR anyway, so Blame will be unaffected.

@freddyheppell (Contributor, Author)

I've now rebased to get the pyproject.toml changes. Resolving the conflicts was actually super straightforward! I just accepted the incoming master change each time, which simply replaced the old, linted code with the new, unformatted code, then re-ran the linter. So it's no problem to merge additional PRs before this one.

@MaartenGr (Owner) commented Jun 13, 2024

I've now rebased to get the pyproject.toml changes. Resolving the conflicts was actually super straightforward! I just accepted the incoming master change each time, which simply replaced the old, linted code with the new, unformatted code, then re-ran the linter. So it's no problem to merge additional PRs before this one.

@freddyheppell
Alright, I finally had the time to process and merge all PRs that were blocking this one. Regardless of what else comes in, this PR has priority.

P.S. If you have extra time, a quick note on Ruff in the CONTRIBUTING.md file here would be great.

@freddyheppell (Contributor, Author)

FYI enabling dependency caching on setup-python cuts about a minute off workflow run time. Probably should enable this on the build workflow too?

@MaartenGr (Owner)

FYI enabling dependency caching on setup-python cuts about a minute off workflow run time. Probably should enable this on the build workflow too?

Is that within the same run or also between runs? Aside from here I cannot find a quick answer to this. If it is between runs, it might not use newer versions of dependencies. Using the latest version of dependencies generally helps in quickly finding compatibility issues.

@@ -75,6 +75,7 @@ spacy = [
test = [
"pytest>=5.4.3",
"pytest-cov>=2.6.1",
"ruff~=0.4.7",
@MaartenGr (Owner)

Is there a specific reason you used the ~= operator here? I can imagine using it likely makes dependency management more stable but I like using >= to quickly find out when new major/minor versions are released and things break 😅 That way, I do not have to make changes to support new releases and only change when something breaks (which people are more likely to communicate rather than when things work).

@freddyheppell (Contributor, Author)

I ended up going with this version constraint because if, in a PR, someone were to install a newly released version, it may introduce new rules which reformat/start erroring on code unrelated to the PR.

Arguably the linter version can be kept constant forever, but it can be safely incremented in a separate PR every so often to avoid this.

@afuetterer (Contributor)

In the keybert repository there is a pre-commit config defined. Should this be done here as well?
Then you could run pre-commit instead of ruff directly in CI? Just a suggestion.

@freddyheppell (Contributor, Author)

Is that within the same run or also between runs? Aside from here I cannot find a quick answer to this. If it is between runs, it might not use newer versions of dependencies. Using the latest version of dependencies generally helps in quickly finding compatibility issues.

The cache key used by the setup-python action is a combination of the hash of the dependency file, OS and Python version. Assuming CI is used frequently enough to keep the cache, it will keep using this forever (until the dependency file is changed).

I suppose what's important is what gets cached. If it just caches the downloaded packages/built wheels then packages will still get upgraded. If it caches the actual installed packages directory then they won't.

I think it's the former, based on the fact that when it installs from cache it states it's using a cached download, rather than that there's an installed package meeting the dependency constraints already. This would suggest that dependency resolution is still happening. I can look into this more after this PR though - I can leave it in just for the lint or take it out for now.

Alternatively, the cache could be modified to include the PR branch in the key. That way it would be cached for repeated runs in the same PR, but each PR would be freshly installed.
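
For context, a minimal sketch of what the built-in caching looks like in the workflow (the action version is illustrative; a per-branch key would need actions/cache with a custom key rather than this built-in option):

- uses: actions/setup-python@v5
  with:
    python-version: "3.9"
    cache: "pip"  # keyed on OS, Python version and the hash of the dependency file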

@freddyheppell (Contributor, Author)

In the keybert repository there is a pre-commit config defined. Should this be done here as well?

Then you could run pre-commit instead of ruff directly in CI? Just a suggestion.

I did think about this but I'm not too familiar with precommit, and I'm not quite sure how to handle the fact we have auto-formatting, autofixable checks and non-autofixable checks. And I've never been a huge fan of doing it automatically, as sometimes the changes are suboptimal (e.g. no-op code is replaced with more explicitly no-op code, rather than removed), and doing it in a precommit hook hides that somewhat.

Also it's probably more necessary when you're using all the separate packages instead of Ruff as an all-in-one. I will add a Make task to use locally (and put this in the contributing doc).

In CI, we need to run a different command anyway, as we want to fail instead of doing any auto-fixes.
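
As a sketch, the local task and the CI step could run something like the following (the target names are placeholders; the flags are standard Ruff options):

# Locally, e.g. behind a `make format`/`make lint` target: apply fixes
ruff format .
ruff check . --fix

# In CI: only verify, never modify, and fail on any violation
ruff format --check .
ruff check .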

@freddyheppell (Contributor, Author)

Now the basics are working, do we want to enable additional rulesets? Ruff has some related to additional packages, and some that enforce extra standards.

Some I think are worth considering - these are all relatively small and focused (and have good auto fix coverage) so even enabling all of them won't be overly strict:

  • isort - auto sorts imports alphabetically and into blocks (by external/internal imports, whole package vs modules and so on) - I mentioned this in the original issue and I thought this was enabled by default!
  • Pyupgrade - encourages outdated code practices to be replaced with modern alternatives. Also flags up cases where version checks are no longer needed based on the package min version.
  • Numpy rules - a few Numpy specific rules added by Ruff
  • pandas-vet - encourages good practice with Pandas
  • flake8-errmsg - requires exception message strings to be assigned to a variable first, this prevents the string cluttering the traceback in addition to the exception message.
  • eradicate - requires commented code to be removed

This one is useful but is a bit stricter, so likely to flag up a fair bit:

  • flake8-comprehensions - finds cases where comprehensions should be used, or are used unnecessarily
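
If any of these were adopted, the configuration would roughly look like this (rule-family prefixes as documented by Ruff; which families to actually enable is exactly the open question here):

[tool.ruff.lint]
extend-select = [
    "I",    # isort
    "UP",   # pyupgrade
    "NPY",  # NumPy-specific rules
    "PD",   # pandas-vet
    "EM",   # flake8-errmsg
    "ERA",  # eradicate (commented-out code)
    "C4",   # flake8-comprehensions
]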

@MaartenGr (Owner)

@freddyheppell Thanks for all your work on this! Went through your questions/comments:

I think it's the former, based on the fact that when it installs from cache it states it's using a cached download, rather than that there's an installed package meeting the dependency constraints already. This would suggest that dependency resolution is still happening. I can look into this more after this PR though - I can leave it in just for the lint or take it out for now.

I think leaving it out for now would be best as we can look into this in a separate PR more in-depth. I want to prevent increasing the scope of this PR as much as possible.

I did think about this but I'm not too familiar with precommit, and I'm not quite sure how to handle the fact we have auto-formatting, autofixable checks and non-autofixable checks. And I've never been a huge fan of doing it automatically, as sometimes the changes are suboptimal (e.g. no-op code is replaced with more explicitly no-op code, rather than removed), and doing it in a precommit hook hides that somewhat.

I typically like to have all linting done within the PR itself and use the workflows to check whether the formatting was done properly. As such, using pre-commit does allow new users to quickly and easily format their code without too much hassle. That said, running make style or something similar is also not that difficult 😅

In CI, we need to run a different command anyway, as we want to fail instead of doing any auto-fixes.

Yep, definitely don't want auto-fixes in the workflow.

isort - auto sorts imports alphabetically and into blocks (by external/internal imports, whole package vs modules and so on) - I mentioned this in the original issue and I thought this was enabled by default!

I believe this one might be a bit tricky since I'm suppressing some warnings before and after imports to disable warnings specific to certain imports.

import yaml
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore", category=UserWarning)
try:
    yaml._warnings_enabled["YAMLLoadWarning"] = False
except (KeyError, AttributeError, TypeError) as e:
    pass
import re

Not sure how isort handles these cases as the order cannot be changed here. Do note though that this was implemented a couple of years ago, so it might not be necessary anymore.
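
(If Ruff's isort rules honour the standard isort action comments, the block could be fenced off rather than reordered; a rough sketch, with whether the filters are still needed being the open question above:

# isort: off
import yaml
import warnings

warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore", category=UserWarning)

import re
# isort: on
)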

Pyupgrade - encourages outdated code practices to be replaced with modern alternatives. Also flags up cases where version checks are no longer needed based on the package min version.

Not familiar with this one but it does seem like a nice addition. I am, however, quite hesitant to add packages, as dependency conflicts are something that starts creeping up.

Numpy rules - a few Numpy specific rules added by Ruff

This would be nice. Numpy changes are important and detecting them early on would be helpful.

pandas-vet - encourages good practice with Pandas

Pandas might actually be replaced with something else in the future, so let's skip this for now.

flake8-errmsg - requires exception message strings to be assigned to a variable first, this prevents the string cluttering the traceback in addition to the exception message.
eradicate - requires commented code to be removed

Although these are nice features, the size/popularity/downloads of these packages make it difficult for me to rely on them at the moment, since I cannot be confident they will be maintained for years on end.

This one is useful but is a bit stricter so likely to flag up a fair bit:

flake8-comprehensions - finds cases where comprehensions should be used, or are used unnecessarily

Ah, that's definitely something I want to look at myself, as the discussion of for loops vs. comprehensions is related to readability and that's a decision that I want to make myself.

@freddyheppell (Contributor, Author)

Okay thanks for that, will address later. FYI - all rulesets are included with Ruff already. They come from various packages originally, but Ruff reimplements them in Rust; the names of the original packages are just for convenience.

@afuetterer (Contributor)

In the keybert repository there is a pre-commit config defined. Should this be done here as well?
Then you could run pre-commit instead of ruff directly in CI? Just a suggestion.

I did think about this but I'm not too familiar with precommit, and I'm not quite sure how to handle the fact we have auto-formatting, autofixable checks and non-autofixable checks. And I've never been a huge fan of doing it automatically, as sometimes the changes are suboptimal (e.g. no-op code is replaced with more explicitly no-op code, rather than removed), and doing it in a precommit hook hides that somewhat.

Also it's probably more necessary when you're using all the separate packages instead of Ruff as an all-in-one. I will add a Make task to use locally (and put this in the contributing doc).

In CI, we need to run a different command anyway, as we want to fail instead of doing any auto-fixes.

You can use ruff as a pre-commit hook. Instead of fixing anything, the hook can fail in CI (--exit-non-zero-on-fix), which is the behavior you want, I guess.

I use it like this:

- repo: https://github.com/astral-sh/ruff-pre-commit
  rev: f42857794802b6a77b0e66f08803575aa80d3c8f  # frozen: v0.4.7
  hooks:
  - id: ruff
    args: [--fix, --show-fixes, --exit-non-zero-on-fix]
  - id: ruff-format

Ref: https://github.com/astral-sh/ruff-pre-commit

@MaartenGr (Owner)

Okay thanks for that, will address later. FYI - all rulesets are included with Ruff already. They come from various packages originally, but Ruff reimplements them in Rust; the names of the original packages are just for convenience.

Oh, and here I thought there were a bunch of additional dependencies... 😅

@freddyheppell (Contributor, Author)

In the interests of getting this merged, I think we should leave precommit and extra rules for now. I did test some of the extra rulesets and they generally had a very small number of issues (e.g. the NPY ruleset only asked for np.random.* method calls to be replaced with the now recommended np.random.Generator approach) - so these can be dealt with in future in relatively small and simple PRs.

I've added the Make rules and documented in the contributing file. So I think this is now ready to merge.

@freddyheppell marked this pull request as ready for review June 18, 2024 12:49
@freddyheppell (Contributor, Author) commented Jun 18, 2024

Remember that this needs to be squashed down to a single commit and the revision added to .git-blame-ignore-revs, as explained here.

I could squash all the commits in the PR and add this file myself, but then we'd lose the commit history in this PR. Perhaps it would be better for you to squash merge it, then add the ignore revs file yourself?

@MaartenGr merged commit 70d7725 into MaartenGr:master on Jun 19, 2024
6 checks passed
@MaartenGr (Owner)

@freddyheppell Thank you for all the work on this! Just did a last check and everything looks great. I squashed the commits so that should be good to go. Whilst I answer some other issues, would you mind adding the ignore revs? If not, I'll likely do it somewhere next week.

@afuetterer (Contributor)

I think the top level .flake8 config file can be deleted now?

@MaartenGr (Owner)

@afuetterer Yes, it can be deleted!
