Next Steps for Fully Functioning Composite Actions #646

Closed
ethanchewy opened this issue Aug 5, 2020 · 102 comments
Labels
enhancement (New feature or request), Runner Feature (Feature scope to the runner)

Comments

@ethanchewy
Contributor

ethanchewy commented Aug 5, 2020

NOTE:

For those who are not aware, I'm a college student who interned at GitHub for Summer 2020. Thank you all for the thoughtful comments and feedback on this thread, but since I'm no longer working at GitHub, I will not be responding to new comments below in this thread.

================================================

Hi there 👋

We just released a new feature called composite run steps actions (https://docs.github.com/en/actions/creating-actions/about-actions#composite-run-steps-actions).

This issue thread will serve as a point of reference for people who are curious about what comes after this feature and what we plan to work on next. There are probably quite a few questions about future support, like "when are you going to support 'uses' steps in a composite action?" In this issue thread, I'll talk about the technical problems we plan to solve to get there, and I'll update it with anything new over the next week.

tl;dr: in this issue, I'll go over what we currently support for composite run steps and how we plan to develop fully functioning composite actions.

Composite run steps background

What is a composite run steps action?

Many of you have been asking us to build a feature that lets you nest actions within actions (e.g. #438).

An important first step toward supporting nesting within actions is supporting the execution of multiple run steps in an action.

Some developers want to execute a variety of scripts and commands in different languages and shells within a single action and map those outputs to the workflow. We’ve also heard from our community that many developers want to reuse parts of their workflows in other workflows. This new action type, composite run steps, helps with both: developers can abstract parts of their workflows into composite run steps actions and reuse them across workflows.

Thus, we created composite run steps, which allows users to run multiple run steps in a single action!

What does composite run steps currently support?

For each run step in a composite action, we support:

  • name
  • id
  • run
  • env
  • shell
  • working-directory

In addition, we support mapping inputs and outputs throughout the action.

See docs for more info.
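For reference, here is a minimal sketch of an action.yml that exercises the supported keys above; the action name, input, output names, and step contents are made up for illustration:

    name: "Example composite"
    description: "Illustrative composite run steps action"
    inputs:
      who-to-greet:
        description: "Who to greet"
        default: "World"
    outputs:
      random-id:
        description: "An output mapped from a step"
        value: ${{ steps.random.outputs.id }}
    runs:
      using: "composite"
      steps:
        - id: random
          name: Generate an id
          run: echo "::set-output name=id::$RANDOM"   # sets a step output
          shell: bash
        - name: Greet
          run: echo "Hello ${{ inputs.who-to-greet }}"
          shell: bash
          env:
            GREETING: hello
          working-directory: .

From the calling workflow, the mapped output is then available as steps.<id of the workflow step that uses this action>.outputs.random-id.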

What does composite run steps not support?

We don't support setting conditionals, continue-on-error, timeout-minutes, "uses", and secrets on individual steps within a composite action right now.

(Note: we do support these attributes being set in workflows for a step that uses a composite run steps action)

Examples of how you can use composite run steps actions
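As an illustration (the repository reference and input name below are placeholders), here is a workflow step that consumes a composite run steps action, with if, timeout-minutes, and continue-on-error set at the workflow level, which is supported today:

    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - name: Run a composite run steps action
            uses: octocat/example-composite-action@v1   # placeholder reference
            if: github.event_name == 'push'
            timeout-minutes: 10
            continue-on-error: true
            with:
              who-to-greet: "Mona"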

Next Steps for Developing Fully Functioning Composite Run Steps

In this section, I'll describe the features we are currently working on or planning to work on. You can see more details about each of these in https://github.com/actions/runner/blob/users/ethanchewy/compositeADR/docs/adrs/0549-composite-run-steps.md.

If conditionals

see: https://github.com/actions/runner/blob/main/docs/adrs/0549-composite-run-steps.md#if-condition

For if conditionals, we treat the composite action as a clean slate. Each step depends on the preceding step's state (failure, cancelled, or success).

To keep track of these states, we can create a new ExecutionContext for composite actions that tracks the attributes we need, such as the current state of a composite action step before evaluating. When we support nesting, we'll need to follow some sort of inheritance model (maybe something like state = (parent state && child state))?
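As a sketch of the syntax this would enable (not supported yet at the time of writing; the scripts and step names here are hypothetical), an if condition on an individual composite step would presumably mirror workflow step syntax and evaluate against the state of the steps before it:

    runs:
      using: "composite"
      steps:
        - id: build
          run: ./build.sh            # hypothetical script
          shell: bash
        - name: Report failure
          if: failure()              # would depend on the preceding step's state
          run: echo "build failed"
          shell: bash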

Uses

Related questions we need to answer before we can support uses:

  • How are we going to handle post and pre steps? Should we aggregate them into one individual step for each type?
  • When should we download the repos for nested actions?

Current experimentation for this: #612
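For context, this is the kind of nesting that uses support would enable (hypothetical example, since uses is not allowed inside composite actions yet); the nested action's repository download and any pre/post steps it defines are exactly the open questions above:

    runs:
      using: "composite"
      steps:
        - uses: actions/setup-node@v2   # nested action: its repo must be downloaded and its pre/post steps handled
          with:
            node-version: "14"
        - run: npm ci && npm test
          shell: bash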

timeout-minutes

It makes a ton of sense to create a separate execution context for composite actions, especially for keeping track of the conditional status of each step. This would make it easier to add some sort of "time element" to get the "correct" condition of the step in log time (something like this algorithm: https://maksimdan.gitbook.io/interview-practice-problems/leetcode_sessions/design/permission-management-for-a-general-tree).

For nested timeout-minutes support, couldn't we just link all child tokens to their parents? The time left would trickle down to the children. For example:

|A (30 min.)|
  |B (15 min.)|
    |B2|
  |C (None)|
    |C2|

Let's say the composite action A as a whole has a timeout-minutes of 30 minutes, and step B (a step inside the composite action) has a timeout-minutes of 15 minutes. If step B exceeds that time, we can easily fail that thread and differentiate that from a cancellation. We would then move on to the next step C, which doesn't have a timeout-minutes set; since there are ~15 minutes left, it would still start running. If step C takes more than those 15 minutes, we fail the whole action, since it has now exceeded its 30 minutes (step C fails automatically along with the whole composite action step, because they are linked).
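Expressed as configuration, that scenario would look roughly like this (hypothetical, since timeout-minutes is not supported on individual composite steps yet; the scripts are placeholders):

    # Workflow side: the step that uses composite action A sets timeout-minutes: 30
    # Inside composite action A:
    runs:
      using: "composite"
      steps:
        - id: b
          run: ./long-task-b.sh     # fails (rather than cancels) this step if it exceeds 15 minutes
          shell: bash
          timeout-minutes: 15
        - id: c
          run: ./long-task-c.sh     # no timeout of its own, but capped by A's remaining ~15 minutes
          shell: bash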

Areas that we are also thinking about

There are many more layers of complexity for fully functioning composite actions.

For example at a high level,

  • How would we use multiple Docker actions in a composite action effectively?
  • How do we propagate and resolve errors effectively with nested actions?
  • Should we support the shell attribute for a uses step? (probably not)

Runner specific questions:

  • Should we create a separate, slimmed-down version of ExecutionContext for composite actions?
    • Yes, because this would reduce confusion, make it easier to extend, and let us hook it up easily with the .Global ExecutionContext.
  • How do we map our Scopes in nested actions?
    • We've already come up with a pretty clever solution for generating a context name (i.e. {context.ScopeName}.{context.ContextName})
    • Our dictionary for resolving outputs is the nested dictionary StepsContext[Step_Name][Scope_Name]["outputs"]. Could we flatten the dictionary to reduce lookup time?

TODOs

  • Handle continue-on-error (we need to change how we handle failures in the composite action handler, i.e. don't call Complete(TaskResult.Failed))
  • Handle private actions
  • Outcomes?
  • Revisit defaults? Does it make sense to add defaults once we have fully functional composite actions?

Extra Information

How we built it

@ethanchewy ethanchewy added the enhancement New feature or request label Aug 5, 2020
@ethanchewy
Contributor Author

will reopen when feature launches.

@ethanchewy ethanchewy reopened this Aug 7, 2020
@latacora-tomekr

latacora-tomekr commented Aug 7, 2020

Thanks for the release! Can composite actions be referenced from the local repository (in the way standard actions can: https://docs.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow#referencing-an-action-in-the-same-repository-where-a-workflow-file-uses-the-action), or can they only be referenced from separate public repositories?

@ethanchewy
Contributor Author

Great question, @latacora-tomekr. Yes, composite actions can be referenced from the local repository as well.
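For example, assuming the composite action lives at .github/actions/my-action in the same repository (the path is made up for illustration) and the repository has been checked out first:

    steps:
      - uses: actions/checkout@v2
      - uses: ./.github/actions/my-action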

@jeffeb3

jeffeb3 commented Aug 8, 2020

Thank you @latacora-tomekr for that pointer. I was trying to follow this example:

https://docs.github.com/en/actions/creating-actions/creating-a-composite-run-steps-action

And it wasn't seeing the action. I tried it with the relative path, and it worked in my dummy repo. I will leave this up, despite my embarrassment, in the hopes it will help others:

https://github.com/jeffeb3/hello-world-composite-run-steps-action

@DaanDeMeyer

Thank you for this feature! I was wondering today whether it would be possible to reuse GitHub Actions YAML steps and found out this was just released. Talk about great timing!

Aside from what's already on the roadmap (uses, visualization), it would be great to have action-scoped env and shell keywords, so you could specify a shell and environment variables once for all steps in the action.

Thanks again for working on this!

@bewuethr

This is very useful and fits my use case well: I mostly run shell scripts in my workflows, and making them reusable is now a lot simpler.

I've noticed that the "Scripts" example uses an environment variable $GITHUB_ACTION_PATH to access files living in the same directory tree as the action itself. This isn't documented under Creating a composite run steps action or under Default environment variables; should the docs be amended?

@ethanchewy
Contributor Author

ethanchewy commented Aug 10, 2020

This is very useful and fits my use case well: I mostly run shell scripts in my workflows, and making them reusable is now a lot simpler.

I've noticed that the "Scripts" example uses an environment variable $GITHUB_ACTION_PATH to access files living in the same directory tree as the action itself. This isn't documented under Creating a composite run steps action or under Default environment variables; should the docs be amended?

@bewuethr Thanks for the kind words!

github.action_path is documented here: https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context (see "github.action_path"). github.action_path is equivalent to GITHUB_ACTION_PATH.

We follow the same pattern of also exposing github context attributes as environment variables elsewhere, e.g. github.token corresponds to GITHUB_TOKEN.
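A small sketch of how this is typically used inside a composite run step to call a script that ships with the action (the script path is made up):

    runs:
      using: "composite"
      steps:
        - run: $GITHUB_ACTION_PATH/scripts/setup.sh
          shell: bash
        # equivalently, via the github context:
        - run: ${{ github.action_path }}/scripts/setup.sh
          shell: bash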

@mikeparker

mikeparker commented Aug 13, 2020

Hi, this is really great. Maybe I missed it, but am I right in saying the composite action must be in a public repo right now? If so, is there a plan for referencing private actions? We don't really want to make all our CI/CD scripts public just to reuse them, and we have many repos that all need to reference central pipelines / scripts / actions.

@ethanchewy
Contributor Author

@mikeparker good point - I'll add it to our tracking issue so that we can think it through. Thanks for bringing this up!

@bewuethr

bewuethr commented Aug 13, 2020

@mikeparker @ethanchewy I can see that issue on the roadmap for Q1 2021.

@hross hross added the Runner Feature Feature scope to the runner label Aug 17, 2020
@sidvishnoi

sidvishnoi commented Aug 17, 2020

Steps in a workflow are shown as separate collapsible sections in the UI. Are there any plans to support a similar UI (nested within the workflow's uses step) for composite actions? Something like core.group() for each composite step would be nice for starters 🙏
Right now, I'm manually setting the group workflow command in each step.
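For reference, that manual workaround looks roughly like this (the step content is just an example), using the ::group:: / ::endgroup:: workflow commands inside each composite run step:

    runs:
      using: "composite"
      steps:
        - run: |
            echo "::group::Install dependencies"
            npm ci
            echo "::endgroup::"
          shell: bash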

@austinsasko

austinsasko commented Aug 18, 2020

It seems that we cannot use the outcome of a composite step in an action; is that expected?

    outputs:
      output_var:
        description: 'Outcome of running pretty format on the code'
        value: ${{ steps.step_id.outcome }}

and

    - run: echo "this is ${{ steps.step_id.outcome }} ${{ steps.step_id_2.outcome }} ${{ steps.step_id_3.outcome }}"
      shell: bash

Both of the above snippets produce empty values for the ${{ }} expressions.

The output of toJson(steps) was an empty JSON object ({}).

@austinsasko

Also, it seems that composite actions do not support defaults. https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrun

Is this something planned? I notice it's not in the initial description of this issue.

rajivmb added a commit to rajivmb/p13i-eg-jpll that referenced this issue Aug 19, 2020
@ethanchewy
Contributor Author

ethanchewy commented Aug 19, 2020

It seems that we cannot use the outcome of a composite step in an action; is that expected?

    outputs:
      output_var:
        description: 'Outcome of running pretty format on the code'
        value: ${{ steps.step_id.outcome }}

and

    - run: echo "this is ${{ steps.step_id.outcome }} ${{ steps.step_id_2.outcome }} ${{ steps.step_id_3.outcome }}"
      shell: bash

Both of the above snippets produce empty values for the ${{ }} expressions.

The output of toJson(steps) was an empty JSON object ({}).

Outcomes are not supported in composite steps at the moment. I'll update my description above - we should support them once we add conditionals.
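In the meantime, step outputs (unlike outcome) do map through composite actions, so one workaround is to have the step record its own result as an output. A hedged sketch, with the script name made up:

    outputs:
      pretty-result:
        description: "Result recorded by the step itself"
        value: ${{ steps.step_id.outputs.result }}
    runs:
      using: "composite"
      steps:
        - id: step_id
          run: |
            ./run-pretty-format.sh && result=success || result=failure   # hypothetical script
            echo "::set-output name=result::$result"
          shell: bash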

@ethanchewy
Contributor Author

ethanchewy commented Aug 19, 2020

Also, it seems that composite actions do not support defaults. https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrun

Is this something planned? I notice it's not in the initial description of this issue.

Defaults are not supported in composite run steps actions.

The reason we avoided defaults in the design is that the workflow author shouldn't need to know any of the internal workings of the composite action YAML file. So, we require the composite action author to always explicitly define shell for each run step in a composite action.

@scobrad

scobrad commented Oct 27, 2021

@thboop Any idea when the uses keyword will be available for use in composite actions for GHES? I was looking at the roadmap, but did not see it.

@Alexander-Betz

Alexander-Betz commented Nov 1, 2021

@thboop Will it be possible to use secrets directly in the composite action instead of passing them as input parameters? Or is it not planned to be an option?

jcmorrow pushed a commit to jcmorrow/devtools that referenced this issue Nov 9, 2021
- In order to run either the mock tests or the end-to-end tests, we need a dev server running in CI. Previously, we would spawn that manually from inside of a script. Now, we start that from inside of the action that does the testing. This means that if you want to run the tests locally, you'll also need your local dev server running (which was already true).
- Add necessary parts of the `test` folder to public. This means that we will be able to run against preview branches as well.
- Add a github action for setting up Next.js with a cache, so we only have to copy one line everywhere, not an entire step. It would be nice if we could further refactor out the test stripes, because a lot of things are repeated 9 times.
- Remove the `dev-server.js` file. If we're going to use next.js, then we should try and keep our config as out-of-the-box as possible, in order to allow us to stay up-to-date. The fact that the `test` folder has to be available on the development server for the tests to run at all seems like the real problem here. A good follow-on task would be to sort out the best way to handle those files in the test/dev environment.
- Go back to node 14 for lock file. I didn't realize when I changed this that we were stuck on such an outdated node version. We should fix this ASAP, but until then, let's stick with what was working, just to minimize the factors that have gone into resolving all of the failing tests.
- Add a `waitingFor` field to the `waitUntil` method used in tests. It's really frustrating when a test fails and all it says is `timed out`. Like... timed out while waiting for *what*? Test failures should guide you to the place of failure as quickly as possible.
- By default, wait for 10 seconds. If something doesn't appear in the DOM in 10 seconds, that should be considered a failure.
- Log more from playwright tests. This could be annoying, maybe we should tune it, but it's nice to know what's going on in these tests, and we're only going to look at the output when something is failing anyways, so it may as well be verbose.
- Log the time that tests take. We started with a lot of timeouts. I fixed them, but sometimes it was hard to see what tests were timing out. Now, when a test completes, we emit some test stats. Eventually, it might be nice to just like... use a real test runner... but this is fine for now.
- Mock test URLs like http://localhost:8080/recording/24c0bc00-fdad-4e5e-b2ac-23193ba3ac6b?mock=1 show a gray screen after a few seconds. If you remove the `mock` query param then the screen shows up totally normally. I'm not sure of the exact interactions here. Let's figure this out in a follow-on task.
- Add `test/run` and `test/mock/run` scripts for running the test runners locally, and taking care of making sure that the files get shuffled around correctly before and after, and that the dev server is running, etc.
- Add README instructions about `.env` files and new test run scripts.

That's true! We still have flaky, intermittent tests. The goal of this PR was to get things to *run*, getting them to pass is a future us problem!

I'm going to put these into a notion doc as well, but documenting here couldn't hurt. Three things that tripped me up on this PR:

- *any* merge conflicts mean actions *will not run at all*. If you have a branch and it's unclear why actions aren't kicking off, make sure you check for and resolve merge conflicts.
- When using actions that are type `javascript` your `inputs` get automatically turned into environment variables, which is convenient. However **when using `composite` actions this won't happen**. You have to manually pass the environment variables in with an [expression](https://docs.github.com/en/actions/learn-github-actions/expressions).
- The `checkout` action that we use all the time performs a strict `clean` of the directory that it's about to clone into! So, if you are trying to do some sort of caching or otherwise unloading assets into a dir that you also need to do `git checkout XXXX` in (with the checkout action), you either need to checkout *first*, or you can use `[clean: false](https://github.com/actions/checkout#usage)` I think, though I haven't actually tried this.
- Actions are somewhat limited. More details here: actions/runner#646

Fixes replayio#4313
@aramfe

aramfe commented Nov 9, 2021

I really like the concept of composite actions, but there is a fundamental problem with composite actions and templates that are meant to be used in the same repository. Let's look at how reusing a composite action located in the very same repository is currently done:

    # References a local action
    - uses: ./.github/actions/my-action

So far so good, BUT:

In our CI/CD workflow, we check out the repository only once, in the very first job, because the checkout takes some time (our monorepo is quite large). From this initial checkout, we then pass the relevant parts of the repository to the following jobs as artifacts. This works fine and makes our CI/CD architecture much faster.

The issue with that is that I basically can't use template actions from the repository's .github directory in the following jobs anymore. The workaround would be to also upload the .github directory as an artifact and pass it to all following jobs, but I don't really like this approach, because it adds so many more boilerplate definitions/steps to the YAML file.

An additional option like

    # References a local action
    - uses: local-github-workflows@./.github/path/to/my-action

would really help to solve this problem. It would basically be a sparse checkout, without all the overhead needed to make the actions reusable in every job. In fact, my expectation was that the .github directory would always be accessible from all jobs, even if there is no checkout at all, since it's more of a meta definition for the pipeline than part of the actual source code.

Either this is already possible and I am doing something wrong, or this is really something that needs to be supported as well. I would love to find a better approach for handling templates, as I'm quite annoyed with this right now.

@umarcor

umarcor commented Nov 12, 2021

@aramfe, what about "other jobs" consuming the action as if it were external?

- uses: your/repository/.github/path/to/your-action@main

That is uncomfortable in the PRs where the action itself is modified. However, it should work for any other modification.

Overall, I agree with your feature request, as I'm running into a similar use case. I would like to include the actions/checkout step inside the reusable workflow (composite action). For completeness, see https://github.com/hdl/containers/blob/main/.github/workflows/boolector.yml#L57-L68 and https://github.com/hdl/containers/blob/main/utils/build-test-release/action.yml#L55-L62. Using the action as external does not work in this case, because it is expected to be modified frequently.

@pllim

pllim commented Nov 19, 2021

Is the composite action mentioned in #646 (comment) the same thing as re-usable workflows mentioned in https://docs.github.com/en/actions/learn-github-actions/reusing-workflows ?

@airtower-luna

@pllim No, those are separate things with different purposes (with some overlap). The documentation for composite actions is here: https://docs.github.com/en/actions/creating-actions/creating-a-composite-action

andry81 added a commit to andry81-devops/gh-action--accum-gh-stats that referenced this issue Nov 21, 2021
…expected value 'continue-on-error'`.

Implementation details: actions/runner#646
wesleyboar pushed a commit to TACC/Core-Portal that referenced this issue Jan 7, 2022
* Change only loading part of state when fetching workbench

* Pass context data which has setup_complete

* Handle anonymous users

* Add unit tests

* Build/install client during CI to generate templates

* Add build client step for template generation to CI

* Refactor github actions to client install is defined in standalone action

* Run actions/checkout in local action

* Fix action name/path

* Revert attempt at composite action

Not supported yet: actions/runner#646

* Copy template for unit testing instead of client building it

* Fix copy command
Robpol86 added a commit to Robpol86/robpol86.com that referenced this issue Jan 15, 2022
Now that GitHub supports nested composite actions:
actions/runner#646 (comment)
@Kurt-von-Laven

@aramfe, you could maintain a workflow tag of your monorepo that contains only your workflows. That would essentially allow you to do a sparse checkout of only .github/workflows.
