Streams deployment task logs if there is only one task #436
Conversation
Thanks for working on this! This is a feature I'm excited about. So looking at some outputs:
(without this change)
> PRINT_LOGS=1 rt test/integration/kubernetes_deploy_test.rb -n /test_service_account_predeployed_before_unmanaged_pod/
...
[INFO][2019-03-01 09:50:47 -0800] ------------------------------Phase 3: Predeploying priority resources------------------------------
[INFO][2019-03-01 09:50:47 -0800] Deploying ConfigMap/hello-cloud-configmap-data (timeout: 30s)
[INFO][2019-03-01 09:50:48 -0800] Successfully deployed in 0.3s: ConfigMap/hello-cloud-configmap-data
[INFO][2019-03-01 09:50:48 -0800]
[INFO][2019-03-01 09:50:48 -0800] Deploying ServiceAccount/build-robot (timeout: 30s)
[INFO][2019-03-01 09:50:48 -0800] Successfully deployed in 0.3s: ServiceAccount/build-robot
[INFO][2019-03-01 09:50:48 -0800]
[INFO][2019-03-01 09:50:48 -0800] Deploying Pod/unmanaged-pod-k52d1dce-58bb206e (timeout: 60s)
[INFO][2019-03-01 09:50:51 -0800] Logs from Pod/unmanaged-pod-k52d1dce-58bb206e container 'hello-cloud':
[INFO][2019-03-01 09:50:51 -0800] Hello from the command runner!
[INFO][2019-03-01 09:50:51 -0800]
[INFO][2019-03-01 09:50:51 -0800] Successfully deployed in 3.4s: Pod/unmanaged-pod-k52d1dce-58bb206e
...
vs
(with this change)
> PRINT_LOGS=1 rt test/integration/kubernetes_deploy_test.rb -n /test_service_account_predeployed_before_unmanaged_pod/
...
[INFO][2019-03-01 09:57:44 -0800] ------------------------------Phase 3: Predeploying priority resources------------------------------
[INFO][2019-03-01 09:57:44 -0800] Deploying ConfigMap/hello-cloud-configmap-data (timeout: 30s)
[INFO][2019-03-01 09:57:44 -0800] Successfully deployed in 0.3s: ConfigMap/hello-cloud-configmap-data
[INFO][2019-03-01 09:57:44 -0800]
[INFO][2019-03-01 09:57:44 -0800] Deploying ServiceAccount/build-robot (timeout: 30s)
[INFO][2019-03-01 09:57:45 -0800] Successfully deployed in 0.3s: ServiceAccount/build-robot
[INFO][2019-03-01 09:57:45 -0800]
[INFO][2019-03-01 09:57:45 -0800] Deploying Pod/unmanaged-pod-k51f10ac-43e7e9e6 (timeout: 60s)
[INFO][2019-03-01 09:57:48 -0800] Hello from the command runner!
[INFO][2019-03-01 09:57:48 -0800] Successfully deployed in 3.4s: Pod/unmanaged-pod-k51f10ac-43e7e9e6
[INFO][2019-03-01 09:57:48 -0800]
...
There is a line Logs from Pod/unmanaged-pod-k52d1dce-58bb206e container 'hello-cloud': that gets lost in the streaming version. It's from RemoteLogs#print_all.
One idea to look at is whether we could have the first call to RemoteLogs#print_latest emit something similar, but like: Streaming logs from Pod/unmanaged-pod-k52d1dce-58bb206e container 'hello-cloud':
Then we could test for that line. It's not quite that simple, though, since we'd need to make sure the output from the task runner (see PRINT_LOGS=1 rt test/integration/runner_task_test.rb -n /test_run_with_verify_result_success/) is still reasonable.
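Something along these lines could work; here's a minimal sketch, written as a standalone class rather than against the real RemoteLogs (the class name, constructor arguments, and instance variables below are illustrative assumptions, not the repo's actual API):

```ruby
require 'logger'

# Minimal sketch (NOT the actual RemoteLogs implementation): a streamer that
# prints a "Streaming logs from ..." header exactly once, before the first
# batch of lines, so tests could assert on that header line.
class StreamingLogPrinter
  def initialize(logger:, parent_id:, container_name:)
    @logger = logger
    @parent_id = parent_id
    @container_name = container_name
    @header_printed = false
  end

  # Called repeatedly while the task runs; new_lines is the latest batch of lines.
  def print_latest(new_lines)
    return if new_lines.empty?
    unless @header_printed
      @logger.info("Streaming logs from #{@parent_id} container '#{@container_name}':")
      @header_printed = true
    end
    new_lines.each { |line| @logger.info("\t#{line}") }
  end
end

# Illustrative usage:
printer = StreamingLogPrinter.new(
  logger: Logger.new($stdout),
  parent_id: "Pod/unmanaged-pod-k52d1dce-58bb206e",
  container_name: "hello-cloud"
)
printer.print_latest(["Hello from the command runner!"])
```

The only behavioral change vs. printing logs at the end would be that the header is emitted lazily, on the first non-empty batch, which keeps the task runner output quiet when there are no logs.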
I think this looks solid
Just about done. Thanks for sticking with this.
It's a bit hard to read due to the rebase, but it looks good.
Rebased for the last time since #441 was merged. Should be good to go.
Testing this locally works as expected, and nothing jumps out at me to comment on.
Closes #346
I have not attempted any sort of implementation for the case where multiple tasks exist in a deployment, but even with the simple case of a single task, I'm unsure how this would be tested. I'd appreciate it if someone would point me in the right direction.
The best idea I've had so far is to use a fixture where the pod prints some output over several seconds, and assert that the logs reflect this in the timestamps at the beginning of each line. Indeed, if I do this and run the test with PRINT_LOGS=1, I can see that it works. Anyone have a better idea?
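For what it's worth, a minimal self-contained sketch of that timestamp assertion (the lines array is hard-coded here; in a real test it would come from the captured deploy logger output, and the line format is taken from the outputs above):

```ruby
require 'minitest/autorun'
require 'time'

# Rough sketch of the timestamp-based assertion described above. The lines
# array stands in for captured logger output; capturing it is not shown here.
class StreamedLogTimingTest < Minitest::Test
  TIMESTAMP = /\[INFO\]\[([^\]]+)\]/

  def test_streamed_lines_are_spread_over_time
    lines = [
      "[INFO][2019-03-01 09:57:45 -0800] line one",
      "[INFO][2019-03-01 09:57:48 -0800] line two",
    ]
    times = lines.map { |l| Time.parse(l[TIMESTAMP, 1]) }
    # If logs were streamed as they were produced, the timestamps should be
    # spread out rather than all stamped at the end of the deploy.
    assert_operator times.last - times.first, :>=, 2
  end
end
```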