
Fix #1508 - the needs() function should propagate calling task's params/args down to needed tasks #1509

Open · wants to merge 5 commits into base: master

Conversation


@tabulon tabulon commented Sep 27, 2021

PR for fixing #1508 - the needs() function should propagate the calling task's params/args down to needed tasks

Hi,

This PR is an attempt to fix #1508 by correcting what appears to be a typo (or copy/paste error) in the body of the needs() function in Rex::Commands.

The bug appears to have been introduced initially by PR #1157 (the fix for #1066) with 48c737b.

Notice how params and args were mixed up (interchanged) when being passed down -- within the needs() function.
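To illustrate the kind of swap described here, a minimal hypothetical sketch (the sub name and calling convention are illustrative stand-ins, not the actual Rex internals):

```perl
use strict;
use warnings;

# Illustrative stand-in for invoking a needed task. This is NOT the
# actual Rex code -- it only demonstrates the kind of mix-up the PR
# describes, where each value ends up under the wrong key.
sub run_needed_task {
    my %call = @_;
    return ( $call{params}, $call{args} );
}

my %params = ( user => 'rex' );
my @args   = ('deploy');

# Buggy (as in the typo): params and args interchanged on the way down
my ( $p_buggy, $a_buggy ) =
  run_needed_task( params => \@args, args => \%params );

# Fixed: each value passed through under its own key
my ( $p_fixed, $a_fixed ) =
  run_needed_task( params => \%params, args => \@args );
```

In the buggy call, the needed task receives an array reference where it expects its params hash, which is exactly the sort of breakage issue #1508 reports.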

Two alternate fixes are proposed (in commits 2 & 3), which may be cherry-picked, or just applied in series, which amounts to picking the latter.

Please review and merge, or let me know how to improve it further.

Open questions

While this is clearly a bug fix, it can't be ruled out that some code out in the wild may have come to rely on the incorrect behavior during its existence (since around 2017).

Therefore, it might be safer to guard this fix behind a "feature switch", if the maintainers deem that necessary.

This PR does NOT include such a feature switch, as I guess (perhaps wrongly) that most people might have already turned to using run_task in the meantime, because of glitches like this with the needs() function, some of which appear to have already been handled by #1188, but not this one.

That's only a guess, though...

Bundled commits

  1. Add tests (marked as TODO) related to the bug (1508). See below
  2. Fix 1508, by correcting a typo in a couple places (quick fix, with minimal changes in code)
  3. Fix 1508 by correcting a typo + minimal code cleanup (alternate fix)
  4. Update ChangeLog, remove "TODO" mark from tests related to 1508
  5. Tidy related tests (to satisfy xt/author/perltidy.t) (sorry...)

Note: Sorry about the last commit... which is only there to satisfy xt/author/perltidy.t... Please don't make me redo the entire PR just because the tidying comes in a separate commit 😅. I will try to avoid this sort of thing next time.

How to test

$ prove -v t/issue/1508.t   # for this issue
$ prove t/**/*.t            # for non-regression

As long as the related tests remain marked as "TODO",
they will not report failures during normal test runs.

To see their true pass/fail status, you have to pass
the '-v' option to prove.

The last commit removes the "TODO" mark from tests.
Once that commit is merged, the '-v' switch is no longer needed.

Checklist

  • based on top of latest source code
  • changelog entry included
  • tests pass in CI: maintainer approval is needed to launch the CI workflow (for first-time contributors to the project, which is the case here)
  • git history is clean
  • git commit messages are well-written

The added tests check that the needs() function
propagates run-time task arguments (%params and @args)
from the calling task down to the "needed" tasks.

CHANGES:
=============
new file: t/issue/1508.t

HOW TO TEST :
=============

$ prove -v t/issue/1508.t   # for this issue
$ prove t/**/*.t            # for non-regression

While the related tests remain marked as "TODO",
they will not report failures during normal test runs.

To see their true pass/fail status, you have to pass
the '-v' option to `prove`.
Notice how params and args were mixed up (interchanged)
when being passed down -- within the needs() function.

This appears to be a typo, introduced initially
by PR RexOps#1157 (the fix for RexOps#1066) with 48c737b.

CHANGES:
=============
modified: lib/Rex/Command.pm

HOW TO TEST :
=============

$ prove -v t/issue/1508.t   # for this issue
$ prove t/**/*.t            # for non-regression

While the related tests remain marked as "TODO",
they will not report failures during normal test runs.

To see their true pass/fail status, you have to pass
the '-v' option to `prove`.
This is an alternative fix which also does some minimal
code clean-up, as well as fixing the culprit (typo).

It is proposed in a separate commit to ease cherry-picking.

In any case, all tests pass either way.

JUSTIFICATION :
===============
There does not appear to be any particular reason for having
two identical invocations of the ->run method in two separate
arms of an `if... elsif...` statement.

So this commit replaces them with a -logically equivalent- snippet.

FURTHER DETAILS:
=================

BTW, digging into the repo history, it seems that the two-arm
`if ... elsif ...` form has existed since the very first introduction
of the needs() function by commit 95d3e91.

But even at that time (and probably ever since), those two arms of the
if statement always did exactly the same thing...

So I can't think of any valid reason to keep them around.
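A minimal sketch of the clean-up described in this justification (all names here are illustrative stand-ins, not the actual Rex source):

```perl
use strict;
use warnings;

# Hypothetical stand-in for the real task invocation; it merely
# records each call so the equivalence can be checked.
my @calls;
sub run_task { push @calls, [@_] }

my $task   = 'Module:setup';
my %params = ( env => 'test' );
my @args   = ('now');

# Before (sketch): two arms of an if/elsif performing the exact same
# invocation; the two conditions are exhaustive, so one arm always runs.
if ( $task =~ /:/ ) {
    run_task( $task, \%params, \@args );
}
elsif ( $task !~ /:/ ) {
    run_task( $task, \%params, \@args );    # identical to the arm above
}

# After: one logically equivalent call, no duplication.
run_task( $task, \%params, \@args );
```

Since the two branches are exhaustive and identical, the single call behaves the same as the original if/elsif in every case.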

CHANGES:
=============
modified: lib/Rex/Command.pm

HOW TO TEST :
=============

$ prove -v t/issue/1508.t   # for this issue
$ prove t/**/*.t            # for non-regression
This is made in a separate commit so as to ease cherry-picking
between the two alternative fixes proposed in distinct commits.

In any case, all tests pass either way.

CHANGES:
=============
modified: t/issue/1508.t
modified: ChangeLog

HOW TO TEST :
=============
$ prove -v t/issue/1508.t   # for this issue
$ prove t/**/*.t            # for non-regression
`dzil test -all` was failing on `xt/author/perltidy.t`, apparently
not happy with the indentation/tab-stops.

CHANGES:
=============
modified: t/issue/1508.t

HOW TO TEST :
=============
```shell
$ prove --lib -v t/issue/1508.t   # for this issue
$ prove --lib --recursive  t/     # for non-regression
```
@ferki
Member

ferki commented Oct 30, 2021

Thanks for this PR, @tabulon!

I'm going to comment on your points first below before doing a full review.


While this is clearly a bug fix, it can't be ruled out that some code out in the wild may have come to rely on the incorrect behavior during its existence (since around 2017).

Also, code in the wild might have relied on the original behavior for years before 2017 :) The original author seems to have intended options and arguments to be passed around by needs. So I agree this is a bugfix to restore an unintentionally changed behavior for backwards compatibility reasons.

This PR does NOT include such a feature switch, as I guess (perhaps wrongly) that most people might have already turned to using run_task in the meantime, because of glitches like this with the needs() function, some of which appear to have already been handled by #1188, but not this one.

I don't think a new feature flag is necessary right now. If we have to make the "broken behavior" available again, that could be an opt-in, though.

I also believe that a lot of users of needs do not rely on passing around options and arguments between tasks (their tasks get their input from elsewhere, e.g. a CMDB). I have also always solely relied on the "call this other task on the current connection" aspect.

Note: Sorry about the last commit... which is only there to satisfy xt/author/perltidy.t... Please don't make me redo the entire PR just because the tidying comes in a separate commit

I don't think the PR needs to be re-done :) I might look into an opportunity to squash away the tidy-only commit at merge-time.

The last commit removes the "TODO" mark from tests.

If I understand you correctly, the tests were treated as TODO first because you weren't sure in advance which of the fixes would be cherry-picked. In the spirit of TDD's "red-green-refactor" it's fine to do something like this next time:

  • write a failing test first and push it (as draft PR)
  • push the minimum amount of change needed to fix (so it passes now)
  • refactor the code to simplify it (=remove the duplication)

Member

@ferki ferki left a comment


I believe the code part is correct, but we probably need to find a better way to test the behavior for passing options and arguments around. I think we could keep working collaboratively on this PR and squash away the new changes into the original ones and/or add follow-up commits.

I also think I've found a bug in the author tests which caused the test file to not be checked by the Perl::Critic tests (e.g. Modules::ProhibitMultiplePackages should be in effect, but this file passed with multiple packages in it). I'm going to fix that separately.

Comment on lines +1 to +18

=head1 NAME

issue/1508.t - Check that the `needs()` function correctly propagates run-time task arguments

=head1 DESCRIPTION

Check that the `needs()` function does indeed correctly propagate
run-time task arguments (%params and @args) from the calling task down to the "needed" tasks.

=head1 DETAILS

* AUTHOR / DATE : [tabulon]@[2021-09-26]
* RELATES-TO : [github issue #1508](https://github.com/RexOps/Rex/issues/1508#issue-1007457392)
* INSPIRED from : t/needs.t

=cut

Member

Rex tests currently do not include POD sections (maybe they should, though?! 🤔), but they aimed to be treated as standalone perl scripts with shebang, $VERSION, etc. (see e.g. t/cmdb_path.t).

You can squash this change away into the original commit if you want, or I can change it upon merge as a follow-up commit.

Author

Hi @ferki,

Thank you for your thorough review -- of this PR as well as the related issue.

I will try to briefly address your remarks in several chunks.

POD in test files

Since the .t files are valid perl scripts, putting POD in them is technically allowed.

For some kinds of content, I find it nicer than block/line comments. It renders better and folds neatly in the text editor. Whether or not it is socially allowed is up to the maintainer (you), so go ahead and yank it if you wish.

Author

Why were the tests marked as TODO?

The cherry-picking thing might have played a role in there, yes.
But that's also what I usually try to do, because I find it a much cleaner way of doing TDD.

If Rex policy is as you describe, I will try to remember next time. It's true that this has become quite popular among many languages lately.

Meanwhile, here are my reasons, in a nutshell, in an attempt to convince you of the added value:

  • Marking tests that are expected to fail (as such) is an extra piece of valuable information.

  • Perl (with its excellent Test::* modules) had the ability to do TDD way before other languages. TODO tests were invented as part of this scheme, and they have quite interesting characteristics.

  • When you do a normal prove, TODO tests still get executed (and their results are briefly reported), but their failure won't cause the whole test suite to fail!

  • When you do a prove -v (verbose mode), TODO tests behave as normal tests: they will get detailed reporting

  • If TODO tests actually pass (instead of failing), that's also reported as a surprise (so that you can suspect something fishy)

  • Also, the cherry-picking of solutions can then happen across PRs.
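As a concrete illustration of the mechanics described in these points, a minimal, self-contained sketch (the test descriptions are made up for this example, not taken from the PR):

```perl
use strict;
use warnings;
use Test::More tests => 2;

ok( 1, 'a regular test' );

# A TODO block: the test below runs and fails, but the failure is
# reported as "not ok ... # TODO ..." and does not fail the suite.
# Under plain `prove` it stays quiet; under `prove -v` the detailed
# result shows up. If it ever starts passing, the harness flags it
# as an unexpected success.
TODO: {
    local $TODO = 'needs() does not yet propagate params (issue 1508)';
    ok( 0, 'params are propagated to needed tasks' );
}
```

Running this file through `prove` reports the suite as passing despite the failing assertion, which is exactly the property that lets TODO tests ship before the fix lands.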

Basically, it tends to be a superior form of TDD, in my humble opinion:

  • You can start with writing TODO tests for a given issue (without necessarily committing any code that satisfies them) ; keep them around, even ship them as such.
  • Months later, you (or someone else) can come up with one or more solutions which can easily be checked with prove -v.
  • Once satisfied with a solution, you unmark the TODO tests within the PR that commits the solution.

And, of course, all of that can happen within the same PR (as was the case here).

But you probably already know all this... And you probably have good reasons to prefer otherwise.

In the end, whatever the maintainer of the project decides, of course.

Member

POD in tests

In general I'm not opposed to add POD to test files, provided we can realistically commit to establish it as the new norm, and also to start adding them retroactively for consistency. I feel that would involve a considerable overhead, so I'd wish to discuss that topic outside of the context of this PR (and focus on passing arguments to needs here).


TODO tests

Having new tests that first pass, which then start to fail when I successfully implement their logic feels backward for my brain 🙃 I can accept it works better for you, and I appreciate your detailed reasoning ❤️

So far I have used TODO for tests that are expected to fail, and that I don't intend to fix in the same PR that introduces them. So I'm fine shipping TODO tests (e.g. there's one in t/file.t), and I'd even prefer it over things like "skip this on Windows" whenever possible (the current test suite mostly SKIPs though :/ ).

In the usual PR context, I like to have it demonstrated by the CI run that the new tests are actually failing before the related change gets implemented. So I push the "Add tests for X" commits early, and watch them fail ("red"). Then I also like to demonstrate that the new commit (or sometimes series of commits) I push to implement/fix the failing behavior actually makes the test suite pass ("green"). Then if I need to simplify the code, I can do that at the end, and demonstrate the tests still pass ("refactor").

If my PR were about adding new tests that I don't intend to fix in the same PR, I think I'd still push a commit with the failing tests first, then make the CI pass with a second commit that marks them as TODO.

I'd be happy to further discuss these methodology details via GitHub discussions or chat.

@@ -0,0 +1,165 @@

Member

I have to admit I have a hard time to follow what exactly is happening in this test file. Could you walk me through it, please?

I see it was inspired by t/needs.t. Is there a simpler way to test the behavior perhaps? Or to fold the new checks into t/needs.t directly?

Author

As mentioned, and despite how it may look at first, this test script was heavily inspired by t/needs.t.

It's basically a refactored version of it, with one notable functional difference: unlike t/needs.t, this one also checks the run-time task options and params.

So, my initial -joking- response would be :

sure, please walk us through t/needs.t and I will do the same for this one :-)

Author

Joking aside, here's a brief run-through :

Just like in t/needs.t, the main package does the following :

  • first, define a bunch of tasks :
task test => sub {...}
task test2 => sub {...}
...
task test6 => sub {...}
  • Then, run each task and check that it has returned successfully (which constitutes the actual "test")

Just like in t/needs.t, some of those tasks invoke the needs() function, either with other tasks in the main package, or tasks defined in other packages (Rex::Module or Nested::Module) which also reside in the same test script.

1) Why do we have the two other packages (Rex::Module and Nested::Module)?

Same reason as we have them in t/needs.t, i.e. we want to check that the needs() function works across package namespaces.

Author

2) Why do we have an extra helper package (T), unlike in t/needs.t?

Well, that's where the DRY refactoring happens...

Notice how almost the same code is repeated again and again in the task subs in t/needs.t.

Here, that boilerplate is replaced with calls to track_taskrun and check_needed, which sensibly do the same thing, i.e.:

  • track_taskrun : When a task gets run, mark it as such by writing to a file with that name (and also save its arguments there).

  • check_needed :

    • Make sure a "needed" task has already been run, i.e. :
      • die unless a file with that name exists
      • die unless the arguments that were saved in that file correspond to the expected arguments
    • Also do some clean-up, by deleting the said file.
      Doing this here might not be considered very orthodox, but that's also what's happening in t/needs.t.

Notice how failure is signaled with die, just like in t/needs.t.
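Based on this description, the two helpers could be sketched roughly as follows (a hypothetical reconstruction, not the PR's actual code; the temp-directory handling and the _same comparison helper are my additions for a self-contained example):

```perl
package T;    # hypothetical reconstruction of the helper package
use strict;
use warnings;
use File::Spec;
use File::Temp qw(tempdir);
use Storable qw(store retrieve freeze);

# Tracking files live in a throwaway directory for this sketch.
my $dir = tempdir( CLEANUP => 1 );
sub _file { File::Spec->catfile( $dir, $_[0] ) }

# Structural comparison via canonical Storable serialization.
sub _same {
    local $Storable::canonical = 1;
    return freeze( $_[0] ) eq freeze( $_[1] );
}

# Mark a task as run by creating a file named after it, saving its
# run-time arguments there.
sub track_taskrun {
    my ( $task, $saved_args ) = @_;
    store( $saved_args, _file($task) );
}

# Die unless the needed task has already run with the expected
# arguments; clean up its tracking file afterwards.
sub check_needed {
    my ( $task, $expected ) = @_;
    my $file = _file($task);
    die "task '$task' was never run\n" unless -e $file;
    my $saved = retrieve($file);
    die "unexpected arguments for '$task'\n"
      unless _same( $saved, $expected );
    unlink $file;
}

package main;
T::track_taskrun( 'test2', { name => 'jan' } );
T::check_needed( 'test2', { name => 'jan' } );    # lives: ran as expected
```

Failure is signaled with die, matching the convention of t/needs.t described above.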

This also partly explains why we employ the private _deep_check() routine from Test::More here.

Just like you, my first reflex was employing is_deeply, but that doesn't work, because Test::More then chokes, unable to keep track of the actual number of tests.

The cleaner alternative would have been to rely on is_deeply in the main test loop for my $task ( $run_list->tasks ) in the test script's package main, but that won't work either, because the summary info (from $task_list->get_summary) does not contain task params/arguments... Well, it probably should... :-)

Author

3) Why do we have to do file IO in the first place?

I was also surprised by this at first. I guess the author of t/needs.t could have the best answer here.

The explanation I came up with has to do with the task execution model in Rex (each task possibly having separate SSH connection, possibly running on a separate thread/process, ...).

A no-brainer way to eliminate the hairiness that would otherwise get involved is to go through file IO.

t/needs.t does this quite simply by writing/truncating a file with the same name as the needed task when it's being run, and then merely checking the existence of that file from the calling task.

Here, we also keep track of the params/arguments, so we use Storable to do that, which also obviates the need for manually opening and reading/writing files.

I chose Storable because of its simplicity and popularity. Besides, Rex already has a runtime dependency on Storable, so using it in a test did not entail adding a dependency.

Author

4) Why the unpacking logic?

At one point you ask if we could get rid of the unpacking logic in the helper routines, which was actually neatly placed on a single line (before perltidy messed it up :-)):

    my %opts = ref $_[-1] eq 'HASH' ? %{; pop } : ();

Yes, we could, by directly assuming a HASH reference as the last argument, like so:

    my %opts = %{; pop };

But, if you know of a way of preventing perltidy from doing its thing, I think it would be preferable to keep the unpacking logic in there.

It gives the caller more freedom. That way, if desired, the current tests in t/needs.t can easily be implemented with those two refactored routines (track_taskrun and check_needed).

The above also gives away my thinking about your suggestion of merging these tests into t/needs.t.

Imho, the easiest and most maintainable way of doing that would be to retrofit the t/needs.t tests into this -refactored- model.

Otherwise, if we try to do it the non-DRY way t/needs.t does (by copying and pasting the file IO code in all tasks), we could easily end up with a mess of several hundred lines (if not more) of hard-to-maintain code...

Member

sure, please walk us through t/needs.t and I will do the same for this one :-)

Yeah, later I realized it is based on t/needs.t. My take is that t/needs.t is bad too, but that's part of the historical ballast from 10 years ago. That's a considerable amount of legacy to clean up, which is why we use Test::Perl::Critic::Progressive to make sure we at least don't add more violations on top of the current pile :) These tests were unknowingly broken for PRs coming from forks, which was my mistake, and I have fixed that now (so it should properly complain about t/issue/1508.t after a rebase).

In other words, the policy is "we accept our legacy, but we don't want to add more ballast; therefore new code should be clean, or clean up old mess".

Member

Thanks for the detailed explanation!


  1) Why do we have the two other packages (Rex::Module and Nested::Module)?

Rex chops off the leading Rex:: part of task names for brevity. We have these two modules defined in t/needs.t to make sure needs works with namespaces both with and without the Rex:: prefix.


  2) Why do we have an extra helper package (T), unlike in t/needs.t?

This also partly explains why we employ the private _deep_check() routine from Test::More:: here.

Just like you, my first reflex was employing is_deeply, but that doesn't work because "Test::More" chokes as it is then unable to keep track of the actual number of tests.

That sounds like a good indicator to invent a cleaner approach than the 10-year-old legacy t/needs.t - or even better, fix t/needs.t first :)

I wonder if we could split t/needs.t up into real modules under e.g t/lib first as a pure refactor, then modify it to support running the same test tasks both with and without arguments 🤔


  3) Why do we have to do file IO in the first place?

Yup, I see we use files to keep track of some execution data, and I don't have a much better way for that currently. Storable also sounds good for dumping/restoring the extra data needed by the argument tests. It is also a Perl core module, so it wouldn't be a new dependency anyway 👍


  4) Why the unpacking logic?

was actually neatly placed on a single line (before perltidy messed it up :-))

That might also be an indicator of weirdness :) I'm happy to tune perltidy rules and reformat the codebase with that.

In this case though perlcritic complains about the null statement (and in the exploded formatting, about the missing final semicolon in the block). Perhaps it would be more readable as fully unpacking @_ first, and then do this transformation logic on the last element of the passed array?

Comment on lines +28 to +33
my %opts = ref $_[-1] eq 'HASH'
? %{
;
pop
}
: ();
Member

Based on the calls in this file, it seems that the first argument is always a scalar value, and the last argument is always a hash reference. Does it make sense to still keep this unpack logic around?

Maybe this works too:

my ($called_task, %opts) = @_;

That way we don't need the for loop either. What do you think?

(Same goes for check_needed below)
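The two calling conventions under discussion could be contrasted like this (the parse_* helper names are hypothetical, invented for illustration; neither is the PR's actual code):

```perl
use strict;
use warnings;

# Flexible shape (as in the PR's one-liner): any number of task
# names, plus an optional trailing hash reference of options.
sub parse_flexible {
    my %opts;
    if ( ref $_[-1] eq 'HASH' ) {
        %opts = %{ pop @_ };
    }
    my @tasks = @_;
    return ( \@tasks, \%opts );
}

# Simpler shape, as suggested above: exactly one task name,
# followed by flattened option pairs.
sub parse_simple {
    my ( $called_task, %opts ) = @_;
    return ( $called_task, \%opts );
}

my ( $tasks, $opts )  = parse_flexible( 'task1', 'task2', { name => 'jan' } );
my ( $task,  $opts2 ) = parse_simple( 'task1', name => 'jan' );
```

The flexible form supports the multiple-tasks case (e.g. `check_needed("task1", "task2", \%opts)`), at the cost of the ref-check; the simple form is easier to read while that case remains untested.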

Author

Perhaps...

The intention was to enable the following, when you have more than one needed task:

needs ("task1", "task2", "task3")
check_needed("task1", "task2", "task3", \%opts) 

Currently, it looks like we don't have that case (with several needed tasks) in the tests. Maybe we should?

Author

For track_taskrun, it's an easier call; yes, we could get rid of some of the logic, as you suggest.

Member

Currently, it looks like we don't have that case (with several needed tasks) in the tests. Maybe we should?

I think that's what confused me initially. We don't have the multiple-tasks case tested now, so it might be premature optimization.

But you're right, in the long run we probably should cover that use case too 👍

package T; # Helper package (for cutting down boilerplate in tests)
use Storable;

sub track_taskrun {
Member

If I understand correctly this subroutine saves the arguments passed to it in a file via Storable. If that's true, would it be better to call it something like save_arguments?

Author

If you wish...

It also saves the arguments, yes.
But its principal function is to register the fact that a given task has just been executed (also saving the run-time arguments, in our case).

Member

@ferki ferki Nov 2, 2021

I think by now I better understand the original name, thanks to your explanations. So this is a subroutine that tracks task execution (~run).

@ferki
Member

ferki commented Oct 30, 2021

Alright, the perlcritic tests should be solved by now via #1516.

Please rebase these changes on top of the default branch in order to test them against the current state.

After the rebase, it should show some failures about the test file, which should also give us some ideas about how to go about testing.

@tabulon
Author

tabulon commented Nov 2, 2021

Please rebase these changes on top of the default branch in order to test them against the current state.

After the rebase, it should show some failures about the test file, which should also give us some ideas about how to go about testing.

I will try to rebase as you suggest as soon as I get a chance.
Meanwhile, please note that t/needs.t also has multiple packages in it (all except the helper package T).

@ferki
Member

ferki commented Nov 2, 2021

Meanwhile, please note that t/needs.t also has multiple packages in it (all except the helper package T).

Yup, as explained above that's considered our "accepted legacy". It either should be cleaned up, or replaced by new code that is better/cleaner. I believe it's a good example of why I aim to discuss and design new stuff in an issue first, with the goal of avoiding investing effort in repeating the past's bad patterns.

Sorry about still having it around, and for sending you down a seemingly less-than-ideal path for adding new needs tests :(

On the bright side, we at least gained more understanding of the problem domain. Let's find the best new way to test needs together!

@tabulon
Author

tabulon commented Nov 3, 2021

On the bright side, we at least gained more understanding of the problem domain. Let's find the best new way to test needs together!

OK, so how do you want to go about this?

Based on our discussion, here's a suggested outline for a new PR (after rebasing and seeing what Perl::Critic has to say):

  • Rewrite the test script, name it t/needs2.t (functionally covering both the current t/needs.t and t/issue/1508.t, and then some)
  • Introduce the same code changes as in this PR (the second solution only)
  • Changes, ...

For the rewrite of test script, I guess the below is what we've got:

  • Get rid of POD & do away with TODO marks
  • Factor out package T into a separate module, e.g. t/lib/TaskTrack.pm or something (any suggestions for the name?)
  • See what can be done about the unpacking logic .. %{; pop } ...
  • Improve readability depending on what Perl::Critic has to say...

On the other hand, I would be quite reluctant to move the Rex::Module and Nested::Module packages into separate modules, because they are part of what is actually being tested -- even if that means silencing Perl::Critic.

Any other suggestions?
