
Check the roles/plays in this repository with ansible-lint #3961

Closed
MarkusTeufelberger opened this issue Dec 29, 2018 · 29 comments
Labels
  • good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
Milestone

Comments

@MarkusTeufelberger
Contributor

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

FEATURE REQUEST

Environment:

Not applicable

Description:

I'd like to propose that besides yamllint, all roles and plays/playbooks in this repository should also be checked with ansible-lint (https://github.com/ansible/ansible-lint).
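For illustration, a minimal sketch of how such a check could be wired into CI alongside yamllint (the job name, image and playbook path below are placeholders picked for the example, not the repository's actual CI layout):

  # Hypothetical GitLab CI job; names, image and paths are assumptions.
  ansible-lint:
    stage: unit-tests
    image: python:3.6
    script:
      - pip install ansible ansible-lint
      # Lint the main playbook; roles included by it are checked as well.
      - ansible-lint -v cluster.yml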

@Atoms
Member

Atoms commented Jan 3, 2019

We have thought about implementing this, but we have too much to fix for that to work correctly :) maybe in the future, or if you want, we can look at a PR implementing this :)

@Atoms Atoms pinned this issue Jan 3, 2019
@Atoms Atoms unpinned this issue Jan 3, 2019
@MarkusTeufelberger
Contributor Author

I could try to do it check by check (a single PR that fixes everything at once would be horrible to review). It would be nice if someone could help me add the initial CI job with all checks disabled to the repository.
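As a sketch of what "all checks disabled" could look like, assuming an ansible-lint 4.x style .ansible-lint config at the repository root (the rule IDs below are only examples; the real list would come from a first run of the linter):

  # Hypothetical .ansible-lint starting point: skip every rule that currently
  # fails, then remove IDs one at a time as follow-up PRs fix them.
  parseable: true
  skip_list:
    - '204'  # Lines should be no longer than 160 chars
    - '301'  # Commands should not change things if nothing needs doing
    - '306'  # Shells that use pipes should set the pipefail option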

@Atoms
Member

Atoms commented Jan 3, 2019

we are in the process of moving our CI, so it would be no effort for now to do that as there might be some changes regarding that.

@MarkusTeufelberger
Contributor Author

Do you want to move away from GitLab CI altogether, or just run your integration tests elsewhere?

@Atoms
Member

Atoms commented Jan 3, 2019

first we will move the tests, and we are already investigating moving away from GitLab

@MarkusTeufelberger
Contributor Author

Hm, do you mean by "it would be no effort for now to do that" that it would be a trivial (effortless) change, or that I should wait a bit until the migration has been done?

@Atoms
Member

Atoms commented Jan 3, 2019

I meant that for now this should not be done :) let's wait for the migration...

@Atoms Atoms added the feature label Jan 3, 2019
@MarkusTeufelberger
Contributor Author

Great, feel free to ping me once this has been done, then I'll prepare a PR.

I don't think that it would be beneficial to fix linting errors before the linter is available in CI, since it could easily happen that over time people commit code that would fail the linter again.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 11, 2019
@Miouge1
Contributor

Miouge1 commented Apr 11, 2019

This has been implemented in #4411.

@MarkusTeufelberger
Contributor Author

Well, sort of. I'll start with step 2, fixing the actual errors.

@Miouge1
Contributor

Miouge1 commented Apr 11, 2019

@MarkusTeufelberger I've started with #4508, #4500 and #4496

Edit: I'm thinking that going one error at a time can make reviews faster and help avoid conflicts

@MarkusTeufelberger
Contributor Author

Great! 👍

@MarkusTeufelberger
Contributor Author

For the record, here's the rough number of errors per rule when #4411 was merged:

  - '102'  # 1 error
  - '103'  # 2 errors
  - '104'  # 3 errors
  - '201'  # 3 errors
  - '204'  # approx 50 errors
  - '206'  # tons of errors
  - '301'  # approx 50 errors
  - '302'  # 1 error
  - '303'  # 15 errors
  - '305'  # ~30+ errors
  - '306'  # ~40 errors
  - '403'  # 5 errors
  - '404'  # 6 errors
  - '502'  # ~50 errors
  - '503'  # 12 errors
  - '504'  # 3 errors
  - '601'  # ~40 errors
  - '602'  # tons of errors
  - '701'  # 20 errors

@MarkusTeufelberger
Contributor Author

On the rules still open:

  • 204 - good candidate for ignoring (long lines are very prevalent in this repository...)
  • 301 - needs expertise about what each command does and, in many cases, how to detect whether it has already run, to ensure idempotency
  • 305 - good candidate to still be fixed, quite straightforward
  • 306 - apparently hard to comply with on distros that are not Red Hat derived? [306] Shells that use pipes should set the pipefail option - hard to comply on non-RHEL ansible/ansible-lint#497 (see the sketch after this list for what a typical fix looks like)
  • 503 - needs expertise to determine whether a task really should be a handler that might run much later, or whether it needs to run right at that moment
  • 701 - good candidate for ignoring (the roles in here are not really designed for Galaxy)
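To make the 306 (and 301) items concrete, here is a rough sketch of a typical fix; the task itself is an invented example, not one taken from this repository:

  # Before: rule 306 fires because the pipe runs without pipefail.
  - name: Get kubelet version
    shell: kubelet --version | cut -d' ' -f2
    register: kubelet_version

  # After: pipefail is set and bash is requested explicitly, since plain
  # sh (e.g. dash on Debian) does not support `set -o pipefail`, which is
  # also why rule 306 is awkward on some distros.
  - name: Get kubelet version
    shell: set -o pipefail && kubelet --version | cut -d' ' -f2
    args:
      executable: /bin/bash
    register: kubelet_version
    changed_when: false  # read-only command, also addresses rule 301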

@DanyC97
Contributor

DanyC97 commented Jun 4, 2019

Nice work @MarkusTeufelberger @Miouge1.

@DanyC97
Contributor

DanyC97 commented Jun 4, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 4, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 3, 2019
@MarkusTeufelberger
Contributor Author

/remove-lifecycle rotten

There is still some work to be done here...

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 3, 2019
@Miouge1 Miouge1 added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Dec 12, 2019
@Miouge1 Miouge1 added this to the 2.13 milestone Dec 12, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 11, 2020
@Miouge1
Contributor

Miouge1 commented Mar 11, 2020

/remove-lifecycle rotten

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 10, 2020
@Miouge1 Miouge1 added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Apr 11, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 10, 2020
@MarkusTeufelberger
Contributor Author

Since some global ignores were even added after I already fixed them(!), I'm no longer going to work on the remaining ones.

Still some work to be done by "someone" though.

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 9, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
