Better feedback during applies #185

Closed
lkysow opened this issue Jul 17, 2018 · 36 comments
Labels
feature (New functionality/enhancement), help wanted (Good feature for contributors)

Comments

@lkysow
Member

lkysow commented Jul 17, 2018

Some applies can take a long time. It would be nice to have either a link to view the streaming log, or to have Atlantis edit the pull request with the latest log.

Or even just a comment that says the apply is still ongoing.

@lkysow changed the title from "Stream apply output" to "Better feedback during applies" on Aug 24, 2018
@mechastorm

mechastorm commented Sep 7, 2018

I think we also need better feedback on plans.

Recently I made changes to 20+ projects in a monolithic Terraform repo in one PR, primarily upgrading the minimum version of Terraform. Atlantis then autoplanned that PR, but it had to go through all 20+ projects, which took a while.

The only ways I could verify that Atlantis was still running the plan were:

  • When I run atlantis plan, Atlantis errors saying another process is running (that is good).
  • I go directly onto the Atlantis server and run top; I can then see the individual terraform commands that Atlantis is running.

For plans, it would be good to at least show the summarized commands it is running, just so we know where it is in the planning stage.

@jakauppila

A link to the streaming log sounds useful; we use Bitbucket Server, which doesn't automatically refresh on PR changes anyway.

@majormoses
Contributor

While it does sound useful to link to a log stream, Atlantis does not really have any concept of authentication/authorization, so I would be hesitant to expose something like that. At my work we use https://github.com/bitly/oauth2_proxy, but I know some people are not even protecting the UI at all.
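
For anyone who does want to expose the UI (or a future log stream), a minimal sketch of fronting Atlantis with oauth2_proxy, assuming GitHub OAuth and Atlantis listening on its default port 4141; the flags are from memory of the oauth2_proxy README, so double-check them against the version you run:

oauth2_proxy \
  -provider=github \
  -github-org="your-org" \
  -upstream=http://127.0.0.1:4141/ \
  -http-address=0.0.0.0:4180 \
  -email-domain="*" \
  -client-id="$OAUTH2_PROXY_CLIENT_ID" \
  -client-secret="$OAUTH2_PROXY_CLIENT_SECRET" \
  -cookie-secret="$OAUTH2_PROXY_COOKIE_SECRET"

Port 4141 then stays bound to localhost (or a private network) and only the proxy on 4180 is exposed.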

One idea we could take from Terraform: when creating resources it keeps track of and prints how long each one takes, which is useful for things like RDS creation/deletion. We could have Atlantis comment, say, every 60 seconds with either this information or at least some kind of summary. If we wanted a quick win, it could be something as simple as "plan is still running on state x, y, z"; GitHub will group/hide the comments together, for example:
[screenshot: GitHub collapsing a run of similar comments]

@lkysow added the feature (New functionality/enhancement) label on Apr 4, 2019
@ehaselwanter

Atlantis already has a secret token. Would it be possible to expose an API endpoint so that some external dashboard (maybe the UI later on) can request portions of the log?

@psalaberria002
Contributor

A link to a streaming log would be amazing.

@liquid-cloud

Is there an actual log file that we can use until this feature is implemented?

@lkysow
Member Author

lkysow commented May 7, 2019

Is there an actual log file that we can use until this feature is implemented?

No, Atlantis collects the output from the process in memory and then comments it back to the pull request.

@liquid-cloud

Okay, thank you :)

@smiller171

@lkysow An option to direct the logs to a file would be helpful. We could roll our own solutions, and it should be less work since it doesn't involve building a secure web UI around it.

@lkysow
Member Author

lkysow commented May 7, 2019

True. You could do that now with a custom workflow and tee

@smiller171

True. You could do that now with a custom workflow and tee

Oh that's a great solution I hadn't thought of. Thanks!

@yatanasov

True. You could do that now with a custom workflow and tee

Oh that's a great solution I hadn't thought of. Thanks!

Hey smiller171, could you explain how this can be implemented? How would I use tee if the logs are stored in memory?

@smiller171

True. You could do that now with a custom workflow and tee

Oh that's a great solution I hadn't thought of. Thanks!

Hey smiller171, could you explain how this can be implemented? How would I use tee if the logs are stored in memory?

With a custom workflow you can redirect the output however you like. Here's an example:

workflows:
  default:
    plan:
      steps:
        - init
        - run: >
            terraform plan -input=false -no-color
            -out ${PLANFILE} | tee -a /var/log/tfplan.log

Unfortunately you can't use this to direct the logs to stdout, which makes retrieving them slightly more annoying if you're running in a container, especially with the Fargate deployment method.

@gordonbondon

We are running https://github.com/LeKovr/webtail in a sidecar, watching the log folder. Works great for our needs.

@smiller171

We are running https://github.com/LeKovr/webtail in a sidecar, watching the log folder. Works great for our needs.

I don't think Fargate supports sidecar containers. If it does I'd still need to fork the official Terraform module.

@cyrus-mc

@gordonbondon Where is the Atlantis log folder? I'm looking to use webtail as you have, but I can't find where Atlantis streams the apply logs.

@gordonbondon

@cyrus-mc There is none; we use a custom workflow and replace the plan and apply steps with our own scripts that tee output to the location we want.
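
A minimal sketch of that kind of workflow, assuming a shared log directory such as /var/log/atlantis that the sidecar watches; the directory layout and file naming are illustrative, and the environment variables are the ones from the custom workflow reference linked later in this thread (PROJECT_NAME may be empty if the project has no name):

workflows:
  default:
    plan:
      steps:
        - init
        # tee sends output back to Atlantis (stdout) and appends it to a per-PR, per-project log file
        - run: >
            terraform plan -input=false -no-color -out ${PLANFILE}
            | tee -a /var/log/atlantis/${BASE_REPO_OWNER}_${BASE_REPO_NAME}_pr${PULL_NUM}_${PROJECT_NAME}_plan.log
    apply:
      steps:
        - run: >
            terraform apply -no-color ${PLANFILE}
            | tee -a /var/log/atlantis/${BASE_REPO_OWNER}_${BASE_REPO_NAME}_pr${PULL_NUM}_${PROJECT_NAME}_apply.log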

@rverma-jm

@gordonbondon Can you share the scripts? Would you be generating different log files for different plan iterations, even on the same pull request?

@gordonbondon

@yuankunzhang

Is there an actual log file that we can use until this feature is implemented?

No, Atlantis collects the output from the process in memory and then comments it back to the pull request.

Hi, can we somehow also print the logs to stdout?

@lkysow
Member Author

lkysow commented Apr 3, 2020

Hi, can we somehow also print the logs to stdout?

Maybe you could do something funky with tee and finding the device that is Atlantis' stdout, but in general, no. Atlantis captures the output of the command; it doesn't write it to stdout.

@bjaworski3

I came up with a hacky shell pipeline that sends the output to the Atlantis Docker container's stdout (assumed to be process 1) while still keeping the GitHub output the same:

workflows:
  custom1:
    plan:
      steps:
      - run: >
          terraform$ATLANTIS_TERRAFORM_VERSION plan -input=false -no-color -out ${PLANFILE}
          | awk -v owner=${BASE_REPO_OWNER} -v repo=${BASE_REPO_NAME} -v pr=${PULL_NUM}
          '{ print strftime("%Y/%m/%d %X+0000 [INFO] ") owner "/" repo "#" pr ":", $0; fflush(); }'
          | tee -a /proc/1/fd/1 | cut -d" " -f5-
    apply:
      steps:
      - run: >
          terraform$ATLANTIS_TERRAFORM_VERSION apply -no-color $PLANFILE
          | awk -v owner=${BASE_REPO_OWNER} -v repo=${BASE_REPO_NAME} -v pr=${PULL_NUM}
          '{ print strftime("%Y/%m/%d %X+0000 [INFO] ") owner "/" repo "#" pr ":", $0; fflush(); }'
          | tee -a /proc/1/fd/1 | cut -d" " -f5-

The above uses the specified Terraform version, appends the output to the container logs in the same format as the rest of the Atlantis logs, and then cuts the timestamp prefix back off so the GitHub output doesn't contain it. I hardcoded the timezone to +0000, but that could probably be fixed for other timezones.

@dimisjim
Contributor

@bjaworski3 Is there a way to run the above, but taking into account the target args given in the PR comment?

@bjaworski3

bjaworski3 commented Aug 25, 2020

@dimisjim I think if you added $COMMENT_ARGS after the plan/apply that would work, but I don't use -target in my comments, so I haven't tried it.

https://www.runatlantis.io/docs/custom-workflows.html#reference

@dimisjim
Contributor

@bjaworski3
Thanks for the quick response.
However, I have tried it and couldn't make it work: #1167

@ghostsquad

ghostsquad commented Sep 10, 2020

I was thinking that a way to do streaming output would be to expose an API endpoint in Atlantis (disabled by default for backwards-compatibility and security reasons) with pageable output, then query it from a GitHub Action (explained below).

Atlantis would "flush" the output to a "page" at a given interval (say every 5 seconds). These would be in-memory or in-DB page results that expire after a few minutes.

An example of such an API result would be:

{
   "id": "1234",
   "links": {
     "prev": "/example-data?page[before]=yyy&page[size]=2",
     "next": "/example-data?page[after]=zzz&page[size]=2"
   },
   "data": {
     "lines": [
        "doing something...",
        "more stuff..."
     ]
   }
}

Some situations and results:

situation                   | result                | code
valid page after flush      | example result above  | 200
valid page before flush     | no content            | 204
invalid page                | not found             | 404
valid page after expiration | gone                  | 410

https://www.restapitutorial.com/httpstatuscodes.html

You could then write a GitHub Action that queries the API at a given interval and prints the lines, using the rules below to decide what to do next:

  • On a 200, print the output.
  • On a 204, wait and retry (with a max retry limit).
  • On a 404, indicate an error with the server, especially if you believe the URL is valid.
  • On a 410, indicate that the results have expired and are no longer accessible.

If links.next exists, enqueue that URL as the next query; otherwise stop.
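
A rough client sketch of that loop as a bash script, against the hypothetical endpoint and paging parameters proposed above (none of this API exists in Atlantis today; the /jobs path, page size, and 5-second interval are illustrative):

#!/usr/bin/env bash
# Hypothetical client for the proposed streaming API; the endpoint, paging
# parameters, and status codes mirror the proposal above and do not exist
# in Atlantis today.
set -euo pipefail

base_url="${ATLANTIS_URL:-https://atlantis.example.com}"
next="/jobs/1234/output?page[size]=50"   # illustrative starting URL

while [ -n "$next" ]; do
  # -g stops curl from glob-expanding the [] in the page parameters
  status=$(curl -sg -o /tmp/page.json -w '%{http_code}' "${base_url}${next}")
  case "$status" in
    200)
      jq -r '.data.lines[]' /tmp/page.json                 # print this page's log lines
      next=$(jq -r '.links.next // empty' /tmp/page.json)  # follow links.next; stop when absent
      ;;
    204)
      sleep 5                                              # page not flushed yet; a real client would cap retries
      ;;
    410)
      echo "output has expired" >&2; exit 1
      ;;
    *)
      echo "unexpected status ${status}" >&2; exit 1       # includes 404: likely a bad URL or server error
      ;;
  esac
done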

@ghostsquad

ghostsquad commented Sep 10, 2020

Although the GitHub Action part of this solution is specific to GitHub, the actual client code needed to page through the results and print them is simple enough to write, even in a bash script, so it could be implemented in a variety of other ways and for other setups.

I can't speak for what's possible with BitBucket or GitLab, but I'm sure someone else with more experience there could probably think of something.

I think the following server options could be added, with defaults as listed:

--streaming-api-enable=false
--streaming-api-flush-interval=5s
--streaming-api-expiration=15m

@ghostsquad

Some other alternatives or additional implementations:

A long-lived chunked response: a long-lived HTTP connection, potentially up to 20 minutes or more depending on the duration of the plan/apply, which the client can then act upon. As an example, I believe curl natively supports both buffered and unbuffered behavior.

An Atlantis CLI could be written which does the above, or uses something like gRPC to stream the output.

@ghostsquad

How can I help make this a reality? The user experience of Atlantis right now is a bit of a "black box": you have to wait (and hope) that Atlantis will post back a comment with the results, and that can sometimes take 20 minutes.

Would a PR for this be a welcome change? Are there other initiatives currently in the works?

@obscurerichard

Having this feature would make life much more pleasant for the teams I'm working with that are relying on Atlantis. Especially when the plans are large, it can help speed troubleshooting of problem areas.

@mochi99999

mochi99999 commented May 8, 2021

Yes, I agree. This would provide much better insight into what is actually going on while the apply is in progress. 🙏 Especially with applies that take longer, such as an EKS control plane upgrade (~45-60 minutes).

@tomharrisonjr
Contributor

Not to pile on here, but the absence of feedback for a process that can take a very long time leaves us hoping things work. Or, more likely, running terraform apply locally just to get the feedback output that Terraform itself provides. As with @ghostsquad, I would be happy to do a PR and test on Fargate / GitHub if this would be helpful -- just need some guidance.

@majormoses
Contributor

majormoses commented Sep 22, 2021

I think Atlantis has matured a bit and it's time to tackle features such as this. I do still think we need to create some sort of authentication and authorization mechanism as a prerequisite: it's very possible for secrets to show up in Terraform log output, even with newer features. In the context of GitHub the risk is lower, because you are already relying on GitHub's auth mechanism to limit who can see the output.

@yatanasov

@majormoses - completely agree.

@chenrui333 added the help wanted (Good feature for contributors) label on Dec 30, 2021
@obscurerichard

Does #1937 resolve this completely? That got merged in for the v0.18.x series of releases.

@nishkrishnan
Contributor

Yeah this should be done now.

jamengual pushed a commit that referenced this issue Nov 23, 2022
* timestamps

* tests

* test nit

* respond to comments

* gh times

* follow up

* undo nil check removal