
Guest shutdown support in endpointVM #6943

Merged: 6 commits merged on Aug 8, 2018

Conversation

@hickeng (Member) commented Dec 12, 2017

This adds neat shutdown support in the endpointVM:

  • cleans up Views, Managers and sessions in portlayer
  • cleans up session in personality
  • cleans up vicadmin sessions on shutdown

Unifies diskmanager usage on a single instance for the storage
component. This is unrelated to the rest of the PR but is the correct usage
given locking assumptions and will reduce retries of vmomi operations.

Updates vic-machine to use guest shutdown for polite shutdown first.

Fixes tether unit tests to run as non-root (again).

Notes:
Need to find an effective way to test this. I was planning on adding the name of the VCH to the client but that ran into the immovable object that is session.Session usage. I still think this would be a useful UX enhancement for both us and customers, and session.Session definitely needs attention.

Currently there are still occasional dangling sessions from the personality admiral client. In these cases client.Logout gets called but the logout does not take effect. In the last full-ci run this happened twice:

Deleting the VCH appliance VCH-17937-4337
[ WARN ] Dangling sessions found: 52558178-8cbb-2dd6-fe91-75f1f64a7128  VSPHERE.LOCAL\Administrator                                        2018-04-02 23:01  3m34s  10.158.214.80   vic-dynamic-config/1.4.0-dev  

This is not ready to merge until the hardcoded filtering for 1.4.0-dev sessions is improved.

  • updated to filter for -dev, skipping the upgrade leaked sessions. This can be removed once old versions are not expected to leak sessions.
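
For illustration, a hedged govmomi sketch of that kind of dangling-session check; the helper name and wiring are assumptions, not the actual CI code, but the -dev suffix filter matches the behaviour described above:

import (
    "context"
    "fmt"
    "strings"

    "github.com/vmware/govmomi/property"
    "github.com/vmware/govmomi/vim25"
    "github.com/vmware/govmomi/vim25/mo"
)

// danglingDevSessions lists vSphere sessions whose user-agent ends in "-dev",
// i.e. the sessions this warning is about; GA/RC agents are deliberately skipped.
func danglingDevSessions(ctx context.Context, c *vim25.Client) ([]string, error) {
    var sm mo.SessionManager
    pc := property.DefaultCollector(c)
    if err := pc.RetrieveOne(ctx, *c.ServiceContent.SessionManager, []string{"sessionList"}, &sm); err != nil {
        return nil, err
    }

    var dangling []string
    for _, s := range sm.SessionList {
        if strings.HasSuffix(s.UserAgent, "-dev") {
            dangling = append(dangling, fmt.Sprintf("%s  %s  %s", s.Key, s.UserName, s.UserAgent))
        }
    }
    return dangling, nil
}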

Todo:

  • remove hardcoded version filtering
    • now filters for -dev to omit sessions from GA code. This means it will not currently match RC builds but given we're only using it for warning currently I'm ok with that.
  • perform a neat exit on "could not initialize port layer" errors rather than calling log.Fatal

@hickeng hickeng force-pushed the vchGuestOperations branch 4 times, most recently from 7b90176 to f5f6e1a on December 13, 2017 22:22
@hickeng hickeng force-pushed the vchGuestOperations branch from 7c9fb5e to 903e5df on February 5, 2018 23:07
@hickeng hickeng changed the title from "WIP: Guest shutdown support in endpointVM [full ci]" to "Guest shutdown support in endpointVM [full ci]" on Feb 5, 2018
@cgtexmex (Contributor) commented Feb 6, 2018

General question: should all vic sessions be gone after uninstall? I'm assuming yes, and that this change should address those orphaned sessions...

@hickeng (Member, Author) commented Feb 7, 2018

Need to test against VC:

ERRO[0053] vic/pkg/trace.(*Operation).Err: Shut down endpointVM error: context deadline exceeded
vic/lib/install/management.(*Dispatcher).powerOffVM:173 Shut down endpointVM
vic/cmd/vic-machine/delete.(*Uninstall).Run:92 vic-machine-linux
vic/cmd/vic-machine/common.NewOperation:27 vic-machine-linux
WARN[0053] Guest shutdown failed, resorting to power off - sessions will be left open: Post https://sc-rdops-vm09-dhcp-9-200.eng.vmware.com/sdk: context deadline exceeded
ERRO[0054] unexpected fault on task retry: &types.InvalidPowerState{InvalidState:types.InvalidState{VimFault:types.VimFault{MethodFault:types.MethodFault{FaultCause:(*types.LocalizedMethodFault)(nil), FaultMessage:[]types.LocalizableMessage(nil)}}}, RequestedState:"poweredOn", ExistingState:"poweredOff"}
ERRO[0054] Failed to power off existing appliance for The attempted operation cannot be performed in the current state (Powered off).
DEBU[0054] Remove network is not supported for vCenter
INFO[0054] Collecting 36f6f93d-34ad-4abe-94e7-fca97dbe8f56 vpxd.log
ERRO[0055] The attempted operation cannot be performed in the current state (Powered off).

@cgtexmex has also noted a vic-dynamic-config/1.3.0-rc1 session left behind, which is the connection used to discover and link with an OVA.
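
The InvalidPowerState fault above is the case that a later commit in this PR squashes with a convenience check; a hedged govmomi sketch of such a check (the helper name is an assumption, not the PR's actual code):

import (
    "github.com/vmware/govmomi/task"
    "github.com/vmware/govmomi/vim25/types"
)

// alreadyPoweredOff reports whether a power-off failure only indicates that the
// VM is already powered off, in which case the race with in-guest shutdown is
// benign and the error can be swallowed rather than propagated.
func alreadyPoweredOff(err error) bool {
    terr, ok := err.(task.Error)
    if !ok {
        return false
    }
    if f, ok := terr.Fault().(*types.InvalidPowerState); ok {
        return f.ExistingState == types.VirtualMachinePowerStatePoweredOff
    }
    return false
}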

@hickeng hickeng force-pushed the vchGuestOperations branch 3 times, most recently from 75c40fe to dac4259 on February 8, 2018 01:14
@hickeng hickeng changed the title from "Guest shutdown support in endpointVM [full ci]" to "Guest shutdown support in endpointVM" on Feb 8, 2018
@hickeng hickeng force-pushed the vchGuestOperations branch from dac4259 to b992908 on February 8, 2018 01:37
@hickeng (Member, Author) commented Feb 8, 2018

Debugging the leftover dynamic config client:

tether.debug full shutdown trace

Feb  8 2018 02:02:37.091Z DEBUG Found 2 subscribers to 1347: vsphere.VMEvent - Guest OS shut down for localtls on localhost.localdomain in ha-datacenter
Feb  8 2018 02:02:37.101Z DEBUG [0xc4203ddef0] write "Feb  8 2018 02:02:37.091Z DEBUG Found 2 subscribers to 1347: vsphere.VMEvent - Guest OS shut down for localtls on localhost.localdomain in ha-datacenter\n" to 2 writers (err: <nil>)
Feb  8 2018 02:02:37.101Z DEBUG Event manager calling back to exec for Event(1347): vsphere.VMEvent
Feb  8 2018 02:02:37.101Z DEBUG Event manager calling back to logging for Event(1347): vsphere.VMEvent
Feb  8 2018 02:02:37.101Z DEBUG [BEGIN]  [vic/lib/portlayer/logging.eventCallback:52]
Feb  8 2018 02:02:37.101Z DEBUG [ END ]  [vic/lib/portlayer/logging.eventCallback:52] [8.554µs]
Feb  8 2018 02:02:37.125Z DEBUG [0xc4203ddef0] write "Feb  8 2018 02:02:37.101Z DEBUG Event manager calling back to exec for Event(1347): vsphere.VMEvent\nFeb  8 2018 02:02:37.101Z DEBUG Event manager calling back to logging for Event(1347): vsphere.VMEvent\nFeb  8 2018 02:02:37.101Z DEBUG [BEGIN]  [vic/lib/portlayer/logging.eventCallback:52]\nFeb  8 2018 02:02:37.101Z DEBUG [ END ]  [vic/lib/portlayer/logging.eventCallback:52] [8.554µs] \n" to 2 writers (err: <nil>)
2018/02/08 02:02:37 dispatching power op "OS_Halt"
Feb  8 2018 02:02:37.146Z INFO  Powering off the system
Feb  8 2018 02:02:37.150Z DEBUG [BEGIN]  [main.exitTether:150]
Feb  8 2018 02:02:37.154Z DEBUG [ END ]  [main.exitTether:150] [3.970414ms]
Feb  8 2018 02:02:37.158Z INFO  Waiting for 3 processes to exit
Feb  8 2018 02:02:37.165Z INFO  Stopping tether via signal user defined signal 2
Feb  8 2018 02:02:37.169Z DEBUG [BEGIN]  [vic/lib/tether.(*tether).Stop:627]
Feb  8 2018 02:02:37.173Z INFO  Waiting for 3 processes to exit
Feb  8 2018 02:02:37.175Z WARN  Someone called shutdown, exiting reload loop
Feb  8 2018 02:02:37.181Z DEBUG [BEGIN]  [vic/lib/tether.(*tether).cleanup:199] main tether cleanup
Feb  8 2018 02:02:37.185Z DEBUG Processing config for session: docker-personality
Feb  8 2018 02:02:37.188Z DEBUG Process for session docker-personality is running (pid: 350)
Feb  8 2018 02:02:37.191Z INFO  Running session docker-personality has been deactivated (pid: 350, system status: context canceled)
Feb  8 2018 02:02:37.198Z INFO  sending signal TERM (15) to docker-personality
Feb  8 2018 02:02:37.201Z INFO  Processing signal 'terminated'
Feb  8 2018 02:02:37.201Z INFO  Closing down docker personality
Feb  8 2018 02:02:37.201Z DEBUG [BEGIN]  [vic/lib/apiservers/engine/backends.(*PortlayerEventMonitor).Stop:130]
Feb  8 2018 02:02:37.201Z DEBUG [ END ]  [vic/lib/apiservers/engine/backends.(*PortlayerEventMonitor).Stop:130] [8.823µs]
Feb  8 2018 02:02:37.201Z INFO  Shutting down docker API server backend
Feb  8 2018 02:02:37.201Z INFO  Shutting down dynamic configuration
Feb  8 2018 02:02:37.201Z INFO  Logging out dynamic config
Feb  8 2018 02:02:37.229Z DEBUG [0xc4203ddd70] write "Feb  8 2018 02:02:37.201Z INFO  Processing signal 'terminated'\nFeb  8 2018 02:02:37.201Z INFO  Closing down docker personality\nFeb  8 2018 02:02:37.201Z DEBUG [BEGIN]  [vic/lib/apiservers/engine/backends.(*PortlayerEventMonitor).Stop:130]\nFeb  8 2018 02:02:37.201Z DEBUG [ END ]  [vic/lib/apiservers/engine/backends.(*PortlayerEventMonitor).Stop:130] [8.823µs] \nFeb  8 2018 02:02:37.201Z INFO  Shutting down docker API server backend\nFeb  8 2018 02:02:37.201Z INFO  Shutting down dynamic configuration\nFeb  8 2018 02:02:37.201Z INFO  Logging out dynamic config\n" to 2 writers (err: <nil>)
Feb  8 2018 02:02:37.250Z DEBUG Writer goroutine for stderr returned: %!s(<nil>)
Feb  8 2018 02:02:37.253Z DEBUG Writer goroutine for stderr exiting
Feb  8 2018 02:02:37.256Z DEBUG Writer goroutine for stdout returned: %!s(<nil>)
Feb  8 2018 02:02:37.259Z DEBUG Writer goroutine for stdout exiting
Feb  8 2018 02:02:37.204Z DEBUG [BEGIN]  [vic/lib/portlayer/event.(*Manager).Unsubscribe:124] events.ContainerEvent:PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a
Feb  8 2018 02:02:37.204Z DEBUG [ END ]  [vic/lib/portlayer/event.(*Manager).Unsubscribe:124] [43.366µs] events.ContainerEvent:PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a
Feb  8 2018 02:02:37.204Z DEBUG Completed stream cleanup for events:PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a
Feb  8 2018 02:02:37.204Z DEBUG Finished streaming events for PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a (unwrapped bytes: 0)
Feb  8 2018 02:02:37.204Z DEBUG [ END ]  [vic/lib/apiservers/portlayer/restapi/handlers.(*StreamOutputHandler).WriteResponse:53] [23.326785213s] Stream of events:PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a
Feb  8 2018 02:02:37.296Z DEBUG [0xc4203ddef0] write "Feb  8 2018 02:02:37.204Z DEBUG [BEGIN]  [vic/lib/portlayer/event.(*Manager).Unsubscribe:124] events.ContainerEvent:PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a\nFeb  8 2018 02:02:37.204Z DEBUG [ END ]  [vic/lib/portlayer/event.(*Manager).Unsubscribe:124] [43.366µs] events.ContainerEvent:PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a\nFeb  8 2018 02:02:37.204Z DEBUG Completed stream cleanup for events:PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a\nFeb  8 2018 02:02:37.204Z DEBUG Finished streaming events for PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a (unwrapped bytes: 0)\nFeb  8 2018 02:02:37.204Z DEBUG [ END ]  [vic/lib/apiservers/portlayer/restapi/handlers.(*StreamOutputHandler).WriteResponse:53] [23.326785213s] Stream of events:PLE-2581042e0d0e6441dc4b412dcfb2a573fea7b1c8360de14220caf92d9c6b765a\n" to 2 writers (err: <nil>)
Feb  8 2018 02:02:37.334Z DEBUG Inspecting children with status change
Feb  8 2018 02:02:37.337Z DEBUG Reaped process 350, return code: 0
Feb  8 2018 02:02:37.340Z DEBUG Removed child pid: 350
Feb  8 2018 02:02:37.345Z DEBUG Processing config for session: port-layer
Feb  8 2018 02:02:37.349Z DEBUG Process for session port-layer is running (pid: 359)
Feb  8 2018 02:02:37.352Z INFO  Running session port-layer has been deactivated (pid: 359, system status: context canceled)
Feb  8 2018 02:02:37.357Z INFO  sending signal TERM (15) to port-layer
Feb  8 2018 02:02:37.363Z DEBUG Processing config for session: vicadmin
Feb  8 2018 02:02:37.367Z DEBUG Process for session vicadmin is running (pid: 365)
Feb  8 2018 02:02:37.374Z DEBUG [BEGIN]  [vic/lib/tether.(*tether).handleSessionExit:743] handling exit of session docker-personality
Feb  8 2018 02:02:37.362Z DEBUG [BEGIN]  [vic/lib/vspc.(*Vspc).Stop:188] stop vspc
Feb  8 2018 02:02:37.362Z DEBUG [ END ]  [vic/lib/vspc.(*Vspc).Stop:188] [41.013µs] stop vspc
Feb  8 2018 02:02:37.362Z DEBUG Shutting down udpserver
Feb  8 2018 02:02:37.362Z DEBUG Shutting down tcpserver
Feb  8 2018 02:02:37.362Z INFO  shutdown initiated
Feb  8 2018 02:02:37.363Z INFO  Stopped serving port layer at http://127.0.0.1:2377
Feb  8 2018 02:02:37.363Z INFO  Shutting down port-layer-server
Feb  8 2018 02:02:37.363Z DEBUG op=359.12: [NewOperation] op=359.12 [vic/lib/portlayer.Finalize:107]
Feb  8 2018 02:02:37.363Z DEBUG [BEGIN] op=359.12 [vic/lib/portlayer.Finalize:108]
Feb  8 2018 02:02:37.363Z ERROR vSPC cannot accept connections: accept tcp 192.168.78.127:2377: use of closed network connection
Feb  8 2018 02:02:37.363Z INFO  vSPC exiting...
Feb  8 2018 02:02:37.363Z DEBUG UDP server exited
Feb  8 2018 02:02:37.428Z INFO  Running session vicadmin has been deactivated (pid: 365, system status: context canceled)
Feb  8 2018 02:02:37.434Z INFO  sending signal TERM (15) to vicadmin
Feb  8 2018 02:02:37.440Z INFO  Waiting for 2 processes to exit
Feb  8 2018 02:02:37.438Z DEBUG Waiting on session.wait
Feb  8 2018 02:02:37.448Z DEBUG Wait on session.wait completed
Feb  8 2018 02:02:37.450Z DEBUG Calling wait on cmd
Feb  8 2018 02:02:37.452Z DEBUG Wait returned waitid: no child processes
Feb  8 2018 02:02:37.456Z DEBUG Calling close on writers
Feb  8 2018 02:02:37.464Z DEBUG [0xc4203ddce0] Close on writers
Feb  8 2018 02:02:37.428Z DEBUG Writer goroutine for stdout returned: %!s(<nil>)
Feb  8 2018 02:02:37.438Z INFO  received terminated
Feb  8 2018 02:02:37.439Z DEBUG [BEGIN]  [main.(*server).stop:430]
Feb  8 2018 02:02:37.439Z DEBUG [ END ]  [main.(*server).stop:430] [105.369µs]
Feb  8 2018 02:02:37.439Z DEBUG [ END ]  [main.(*server).serve:384] [35.10341198s]
Feb  8 2018 02:02:37.486Z DEBUG Writer goroutine for stdout exiting
Feb  8 2018 02:02:37.472Z DEBUG [0xc4203ddef0] write "Feb  8 2018 02:02:37.362Z DEBUG [BEGIN]  [vic/lib/vspc.(*Vspc).Stop:188] stop vspc\nFeb  8 2018 02:02:37.362Z DEBUG [ END ]  [vic/lib/vspc.(*Vspc).Stop:188] [41.013µs] stop vspc\nFeb  8 2018 02:02:37.362Z DEBUG Shutting down udpserver\nFeb  8 2018 02:02:37.362Z DEBUG Shutting down tcpserver\nFeb  8 2018 02:02:37.362Z INFO  shutdown initiated\nFeb  8 2018 02:02:37.363Z INFO  Stopped serving port layer at http://127.0.0.1:2377\nFeb  8 2018 02:02:37.363Z INFO  Shutting down port-layer-server\nFeb  8 2018 02:02:37.363Z DEBUG op=359.12: [NewOperation] op=359.12 [vic/lib/portlayer.Finalize:107]\nFeb  8 2018 02:02:37.363Z DEBUG [BEGIN] op=359.12 [vic/lib/portlayer.Finalize:108]\nFeb  8 2018 02:02:37.363Z ERROR vSPC cannot accept connections: accept tcp 192.168.78.127:2377: use of closed network connection\nFeb  8 2018 02:02:37.363Z INFO  vSPC exiting...\nFeb  8 2018 02:02:37.363Z DEBUG UDP server exited\n" to 2 writers (err: <nil>)
Feb  8 2018 02:02:37.372Z INFO  Shutting down event collector vSphere Event Collector
Feb  8 2018 02:02:37.372Z DEBUG [ END ] op=359.12 [vic/lib/portlayer.Finalize:108] [9.117971ms]
Feb  8 2018 02:02:37.532Z DEBUG [0xc4203ddef0] write "Feb  8 2018 02:02:37.372Z INFO  Shutting down event collector vSphere Event Collector\nFeb  8 2018 02:02:37.372Z DEBUG [ END ] op=359.12 [vic/lib/portlayer.Finalize:108] [9.117971ms] \n" to 2 writers (err: <nil>)
Feb  8 2018 02:02:37.542Z DEBUG Writer goroutine for stderr returned: %!s(<nil>)
Feb  8 2018 02:02:37.548Z DEBUG Writer goroutine for stderr exiting
Feb  8 2018 02:02:37.523Z DEBUG [0xc420400090] write "Feb  8 2018 02:02:37.438Z INFO  received terminated\nFeb  8 2018 02:02:37.439Z DEBUG [BEGIN]  [main.(*server).stop:430]\nFeb  8 2018 02:02:37.439Z DEBUG [ END ]  [main.(*server).stop:430] [105.369µs] \nFeb  8 2018 02:02:37.439Z DEBUG [ END ]  [main.(*server).serve:384] [35.10341198s] \n" to 2 writers (err: <nil>)
Feb  8 2018 02:02:37.565Z DEBUG Writer goroutine for stderr returned: %!s(<nil>)
Feb  8 2018 02:02:37.568Z DEBUG Writer goroutine for stderr exiting
Feb  8 2018 02:02:37.523Z DEBUG Writer goroutine for stdout returned: %!s(<nil>)
Feb  8 2018 02:02:37.574Z DEBUG Writer goroutine for stdout exiting
Feb  8 2018 02:02:37.468Z DEBUG [0xc4203ddce0] Closing writer &{file:0xc4203fc020}
Feb  8 2018 02:02:37.582Z DEBUG [0xc4203ddce0] is a Closer%!(EXTRA *os.File=&{0xc4203fc020})
Feb  8 2018 02:02:37.586Z DEBUG [0xc4203ddce0] Closing writer &{file:0xc42000c0a0}
Feb  8 2018 02:02:37.589Z DEBUG [0xc4203ddd70] Close on writers
Feb  8 2018 02:02:37.592Z DEBUG [0xc4203ddd70] Closing writer &{file:0xc4203fc020}
Feb  8 2018 02:02:37.596Z DEBUG [0xc4203ddd70] is a Closer%!(EXTRA *os.File=&{0xc4203fc020})
Feb  8 2018 02:02:37.599Z DEBUG [0xc4203ddd70] Closing writer &{file:0xc42000c0c0}
Feb  8 2018 02:02:37.602Z DEBUG Calling close on reader
Feb  8 2018 02:02:37.605Z DEBUG [0xc4203e0a00] Close on readers
Feb  8 2018 02:02:37.607Z DEBUG [BEGIN]  [main.(*operations).HandleSessionExit:62]
Feb  8 2018 02:02:37.611Z DEBUG [ END ]  [main.(*operations).HandleSessionExit:62] [4.141839ms]
Feb  8 2018 02:02:37.616Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|docker-personality.detail.createtime, value: "0"
Feb  8 2018 02:02:37.621Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|docker-personality.detail.starttime, value: "1518055319"
Feb  8 2018 02:02:37.628Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|docker-personality.detail.stoptime, value: "1518055357"
Feb  8 2018 02:02:37.645Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|docker-personality.diagnostics.resurrections, value: "0"
Feb  8 2018 02:02:37.654Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|docker-personality.status, value: "0"
Feb  8 2018 02:02:37.661Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|docker-personality.started, value: "true"
Feb  8 2018 02:02:37.668Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|docker-personality.runblock, value: "false"
Feb  8 2018 02:02:37.675Z DEBUG Calling t.ops.HandleSessionExit
Feb  8 2018 02:02:37.683Z INFO  Waiting for 3 processes to exit
Feb  8 2018 02:02:37.686Z INFO  Waiting for 3 processes to exit
Feb  8 2018 02:02:37.940Z INFO  Waiting for 2 processes to exit
<snip>
Feb  8 2018 02:02:40.673Z INFO  Waiting for 2 processes to exit
Feb  8 2018 02:02:40.689Z DEBUG [BEGIN]  [vic/lib/tether.(*tether).Reload:672]
Feb  8 2018 02:02:40.699Z WARN  Someone called shutdown, dropping the reload request
Feb  8 2018 02:02:40.705Z DEBUG [ END ]  [vic/lib/tether.(*tether).Reload:672] [16.747597ms]
Feb  8 2018 02:02:40.717Z INFO  Triggered reload
Feb  8 2018 02:02:40.720Z DEBUG [ END ]  [vic/lib/tether.(*tether).handleSessionExit:743] [3.346281332s] handling exit of session docker-personality
Feb  8 2018 02:02:40.737Z DEBUG Inspecting children with status change
Feb  8 2018 02:02:40.750Z DEBUG Reaped process 359, return code: 0
Feb  8 2018 02:02:40.775Z DEBUG Removed child pid: 359
Feb  8 2018 02:02:40.792Z DEBUG [BEGIN]  [vic/lib/tether.(*tether).handleSessionExit:743] handling exit of session port-layer
Feb  8 2018 02:02:40.839Z DEBUG Waiting on session.wait
Feb  8 2018 02:02:40.859Z DEBUG Wait on session.wait completed
Feb  8 2018 02:02:40.889Z DEBUG Calling wait on cmd
Feb  8 2018 02:02:40.901Z DEBUG Wait returned waitid: no child processes
Feb  8 2018 02:02:40.958Z DEBUG Calling close on writers
Feb  8 2018 02:02:40.990Z DEBUG [0xc4203dde60] Close on writers
Feb  8 2018 02:02:40.997Z DEBUG [0xc4203dde60] Closing writer &{file:0xc4203fc1e0}
Feb  8 2018 02:02:41.020Z DEBUG [0xc4203dde60] is a Closer%!(EXTRA *os.File=&{0xc4203fc1e0})
Feb  8 2018 02:02:41.034Z DEBUG [0xc4203dde60] Closing writer &{file:0xc42000c0a0}
Feb  8 2018 02:02:41.041Z DEBUG [0xc4203ddef0] Close on writers
Feb  8 2018 02:02:41.050Z DEBUG [0xc4203ddef0] Closing writer &{file:0xc4203fc1e0}
Feb  8 2018 02:02:41.058Z DEBUG [0xc4203ddef0] is a Closer%!(EXTRA *os.File=&{0xc4203fc1e0})
Feb  8 2018 02:02:41.067Z DEBUG [0xc4203ddef0] Closing writer &{file:0xc42000c0c0}
Feb  8 2018 02:02:41.071Z DEBUG Calling close on reader
Feb  8 2018 02:02:41.075Z DEBUG [0xc4203e0b80] Close on readers
Feb  8 2018 02:02:41.088Z DEBUG [BEGIN]  [main.(*operations).HandleSessionExit:62]
Feb  8 2018 02:02:41.102Z DEBUG [ END ]  [main.(*operations).HandleSessionExit:62] [14.208181ms]
Feb  8 2018 02:02:41.107Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|port-layer.detail.createtime, value: "0"
Feb  8 2018 02:02:40.997Z INFO  Waiting for 2 processes to exit
Feb  8 2018 02:02:41.123Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|port-layer.detail.starttime, value: "1518055320"
Feb  8 2018 02:02:41.138Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|port-layer.detail.stoptime, value: "1518055361"
Feb  8 2018 02:02:41.167Z INFO  Waiting for 2 processes to exit
Feb  8 2018 02:02:41.200Z INFO  Waiting for 2 processes to exit
Feb  8 2018 02:02:41.205Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|port-layer.diagnostics.resurrections, value: "0"
Feb  8 2018 02:02:41.234Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|port-layer.status, value: "0"
Feb  8 2018 02:02:41.242Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|port-layer.started, value: "true"
Feb  8 2018 02:02:41.254Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|port-layer.runblock, value: "false"
Feb  8 2018 02:02:41.265Z DEBUG Calling t.ops.HandleSessionExit
Feb  8 2018 02:02:41.450Z INFO  Waiting for 1 processes to exit
<snip>
Feb  8 2018 02:02:44.174Z INFO  Waiting for 1 processes to exit
Feb  8 2018 02:02:44.270Z DEBUG [BEGIN]  [vic/lib/tether.(*tether).Reload:672]
Feb  8 2018 02:02:44.288Z WARN  Someone called shutdown, dropping the reload request
Feb  8 2018 02:02:44.337Z DEBUG [ END ]  [vic/lib/tether.(*tether).Reload:672] [66.678883ms]
Feb  8 2018 02:02:44.366Z INFO  Triggered reload
Feb  8 2018 02:02:44.369Z DEBUG [ END ]  [vic/lib/tether.(*tether).handleSessionExit:743] [3.576485558s] handling exit of session port-layer
Feb  8 2018 02:02:44.382Z DEBUG Inspecting children with status change
Feb  8 2018 02:02:44.391Z DEBUG Reaped process 365, return code: 0
Feb  8 2018 02:02:44.400Z DEBUG Removed child pid: 365
Feb  8 2018 02:02:44.406Z DEBUG [BEGIN]  [vic/lib/tether.(*tether).handleSessionExit:743] handling exit of session vicadmin
Feb  8 2018 02:02:44.417Z DEBUG Waiting on session.wait
Feb  8 2018 02:02:44.421Z DEBUG Wait on session.wait completed
Feb  8 2018 02:02:44.425Z DEBUG Calling wait on cmd
Feb  8 2018 02:02:44.432Z DEBUG Wait returned waitid: no child processes
Feb  8 2018 02:02:44.436Z DEBUG Calling close on writers
Feb  8 2018 02:02:44.440Z DEBUG [0xc420400000] Close on writers
Feb  8 2018 02:02:44.443Z DEBUG [0xc420400000] Closing writer &{file:0xc4203fc3a0}
Feb  8 2018 02:02:44.450Z DEBUG [0xc420400000] is a Closer%!(EXTRA *os.File=&{0xc4203fc3a0})
Feb  8 2018 02:02:44.455Z DEBUG [0xc420400000] Closing writer &{file:0xc42000c0a0}
Feb  8 2018 02:02:44.460Z DEBUG [0xc420400090] Close on writers
Feb  8 2018 02:02:44.466Z DEBUG [0xc420400090] Closing writer &{file:0xc4203fc3a0}
Feb  8 2018 02:02:44.471Z DEBUG [0xc420400090] is a Closer%!(EXTRA *os.File=&{0xc4203fc3a0})
Feb  8 2018 02:02:44.476Z DEBUG [0xc420400090] Closing writer &{file:0xc42000c0c0}
Feb  8 2018 02:02:44.483Z DEBUG Calling close on reader
Feb  8 2018 02:02:44.487Z DEBUG [0xc4203e0d00] Close on readers
Feb  8 2018 02:02:44.491Z DEBUG [BEGIN]  [main.(*operations).HandleSessionExit:62]
Feb  8 2018 02:02:44.498Z DEBUG [ END ]  [main.(*operations).HandleSessionExit:62] [6.928508ms]
Feb  8 2018 02:02:44.503Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|vicadmin.detail.createtime, value: "0"
Feb  8 2018 02:02:44.460Z INFO  Waiting for 1 processes to exit
Feb  8 2018 02:02:44.520Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|vicadmin.detail.starttime, value: "1518055320"
Feb  8 2018 02:02:44.526Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|vicadmin.detail.stoptime, value: "1518055364"
Feb  8 2018 02:02:44.554Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|vicadmin.diagnostics.resurrections, value: "0"
Feb  8 2018 02:02:44.567Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|vicadmin.status, value: "0"
Feb  8 2018 02:02:44.573Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|vicadmin.started, value: "true"
Feb  8 2018 02:02:44.584Z DEBUG GuestInfoSink: setting key: guestinfo.vice..init.sessions|vicadmin.runblock, value: "false"
Feb  8 2018 02:02:44.592Z DEBUG Calling t.ops.HandleSessionExit
Feb  8 2018 02:02:44.659Z INFO  Waiting for 1 processes to exit
Feb  8 2018 02:02:44.673Z INFO  Waiting for 1 processes to exit
Feb  8 2018 02:02:44.940Z DEBUG [BEGIN]  [vic/lib/tether.(*tether).stopReaper:163] Shutting down child reaping
Feb  8 2018 02:02:44.946Z DEBUG Removing the signal notifier
Feb  8 2018 02:02:44.951Z DEBUG Closing the reapers signal channel
Feb  8 2018 02:02:44.954Z DEBUG [ END ]  [vic/lib/tether.(*tether).stopReaper:163] [14.209066ms] Shutting down child reaping
Feb  8 2018 02:02:44.966Z INFO  Stopping extension Toolbox

before delete:

$ govc session.ls
Key                                   Name   Time              Idle   Host            Agent
5220ce87-fa11-7853-80ce-db6991d8d31e  dcui   2018-02-05 20:15  2m21s  127.0.0.1       VMware-client/6.5.0
5236c37b-3f8d-8eb9-7cab-1c8374339596  drone  2018-02-08 02:02  4s     192.168.78.127  vic-engine/1.4.0-dev
528e007b-c57e-1c5d-91f6-c240548ddc8f  drone  2018-02-08 02:02  20s    192.168.78.127  vic-dynamic-config/1.4.0-dev
52e439b2-0205-3072-4d01-0767e644284c  drone  2018-02-08 01:10    .    192.168.78.215  govc/0.14.0

after delete:

$ govc session.ls
Key                                   Name   Time              Idle   Host            Agent
5220ce87-fa11-7853-80ce-db6991d8d31e  dcui   2018-02-05 20:15  2m49s  127.0.0.1       VMware-client/6.5.0
528e007b-c57e-1c5d-91f6-c240548ddc8f  drone  2018-02-08 02:02  13s    192.168.78.127  vic-dynamic-config/1.4.0-dev
52e439b2-0205-3072-4d01-0767e644284c  drone  2018-02-08 01:10    .    192.168.78.215  govc/0.14.0

We can see in tether.debug that the client logout is called, that only one session is left behind in the govc session.ls output, and that the logout occurs with plenty of time to take effect before the actual shutdown:

Feb  8 2018 02:02:37.201Z INFO  Processing signal 'terminated'\n
Feb  8 2018 02:02:37.201Z INFO  Closing down docker personality\n
Feb  8 2018 02:02:37.201Z DEBUG [BEGIN]  [vic/lib/apiservers/engine/backends.(*PortlayerEventMonitor).Stop:130]\n
Feb  8 2018 02:02:37.201Z DEBUG [ END ]  [vic/lib/apiservers/engine/backends.(*PortlayerEventMonitor).Stop:130] [8.823µs] \n
Feb  8 2018 02:02:37.201Z INFO  Shutting down docker API server backend\n
Feb  8 2018 02:02:37.201Z INFO  Shutting down dynamic configuration\n
Feb  8 2018 02:02:37.201Z INFO  Logging out dynamic config\n

@dougm any ideas? We're calling session.Logout but it's not reliably taking effect.

@dougm (Member) commented Feb 8, 2018

@hickeng I don't know what vic-dynamic-config is, but maybe the vapi client used for the tags API needs to terminate its session?
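
If that is the cause, the fix would be an explicit logout of the vAPI (REST) session alongside the SOAP one. A hedged govmomi sketch, with the client wiring assumed rather than taken from the vic code:

import (
    "context"
    "log"

    "github.com/vmware/govmomi"
    "github.com/vmware/govmomi/vapi/rest"
)

// logoutBoth terminates the vAPI (REST) session used for the tags API as well
// as the vim25 (SOAP) session; neither is cleaned up implicitly on process exit.
func logoutBoth(ctx context.Context, vc *govmomi.Client, rc *rest.Client) {
    if rc != nil {
        if err := rc.Logout(ctx); err != nil {
            log.Printf("vAPI logout failed: %s", err)
        }
    }
    if vc != nil {
        if err := vc.Logout(ctx); err != nil {
            log.Printf("vSphere logout failed: %s", err)
        }
    }
}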

@zjs (Member) left a comment

This is mostly code I have limited exposure to, so the review focuses on more superficial things. None of these comments are blockers.

Fixes tether unit tests to run as non-root (again).

Should this be a separate commit?


go func() {
    for s := range sigs {
        switch s {
        case syscall.SIGHUP:
            log.Infof("Reloading tether configuration")
            tthr.Reload()
        case syscall.SIGUSR1, syscall.SIGUSR2, syscall.SIGPWR:
Member:

Why include both SIGUSR1 and SIGUSR2 here? Is there a chance we'd ever want to use one for something other than "stop" in the future?

@hickeng (Member, Author) Feb 16, 2018:

Added comments in the code for posterity.

@@ -123,14 +123,22 @@ func halt() {

func startSignalHandler() {
    sigs := make(chan os.Signal, 1)
-   signal.Notify(sigs, syscall.SIGHUP)
+   signal.Notify(sigs, syscall.SIGHUP, syscall.SIGUSR1, syscall.SIGUSR2, syscall.SIGPWR, syscall.SIGTERM, syscall.SIGINT)

    go func() {
        for s := range sigs {
            switch s {
            case syscall.SIGHUP:
                log.Infof("Reloading tether configuration")
Member:

nit: could we add via signal %s to this log message for consistency? (Someone looking at the code can see that there's only one signal that generates this log line, but I think it's better not to assume everyone looking at the log messages is also reading the code.)

@@ -201,3 +254,28 @@ func defaultIP() string {

return toolbox.DefaultIP()
}

func startSignalHandler() {
Member:

Do we have a pattern for sharing base functionality between vic-init and tether? Consistency in the way we handle signals seems like something we'd want to preserve in the future (if for no other reason than developer sanity), and duplicating the code allows for unintentional drift.

@hickeng (Member, Author):

There is a good chunk of shared functionality in lib/tether; however, the code in cmd/tether and cmd/vic-init is expressly what is needed to support their different behaviours.

What you're proposing is definitely desirable, but I think it necessitates the toolbox interface for system interaction that I noted above. This would be needed in order to avoid the package-local tthr skyhook that is used in this function.

I'll revisit this; the most that I'm likely to do prior to designing the system interaction portion of vcsim is to make this a lib/tether function that takes tthr as an argument.
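
Roughly what that shared helper could look like, sketched with a stand-in interface rather than the real lib/tether types:

import (
    "log"
    "os"
    "os/signal"
    "syscall"
)

// controller is a stand-in for whatever lib/tether would expose; only the two
// operations the signal handler needs are included here.
type controller interface {
    Reload()
    Stop()
}

// startSignalHandler maps SIGHUP to a configuration reload and the remaining
// signals to shutdown, taking the tether as an argument instead of using a
// package-local variable so vic-init and tether could share it.
func startSignalHandler(t controller) {
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGHUP, syscall.SIGUSR1, syscall.SIGUSR2, syscall.SIGPWR, syscall.SIGTERM, syscall.SIGINT)

    go func() {
        for s := range sigs {
            switch s {
            case syscall.SIGHUP:
                log.Printf("Reloading tether configuration via signal %s", s)
                t.Reload()
            default:
                log.Printf("Stopping tether via signal %s", s)
                t.Stop()
            }
        }
    }()
}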

Member:

For now, can we at least add a comment in each place referring to the other? That might increase the chances of remembering to update one when we update the other.


// The EventManager cannot be destroyed like this as it isn't a ManagedEntity.
// TODO: we do need to ensure the EventHistoryCollector is destroyed, but that's a specific call
// and requires a govmomi change. At least it is lifecycle coupled with the session.
Member:

Can we file an issue for this TODO and cross-link it with this code change?

@hickeng (Member, Author):

Fixed in PR vmware/govmomi#962 - comment just needs removing.

// store.Finalize(ctx)
exec.Finalize(op)
// network.Finalize(ctx)
// logging.Finalize(ctx)
Member:

Why are some of these Finalize calls commented out? I don't see an explanation here, in the commit message, or in the PR summary.

@hickeng (Member, Author):

Because at the moment those methods don't exist, as there's no finalization required for session handling in those packages.
I'll add the stub functions in.

err := ts.ln.Close()

// so far I have not identified any routines that need to be
// explicitly stopped after the underlying connection is closed.
Member:

Is there a symptom someone might observe that would indicate that a routine would need to be explicitly stopped here? If so, can we document that?

@hickeng (Member, Author):

We can add a basic doc note:

// if vspc server fails to shut down in a timely manner then the clean endpointVM shutdown will fail
// this failure to shut down cleanly can be observed in the "/var/log/vic/init.log" or
// "[datastore] endpointVM/tether.debug" log files as vic-init will report that it's waiting for the portlayer to exit.

@hickeng (Member, Author) commented Feb 9, 2018

@dougm vic-dynamic-config is the personality client used to find Admiral instances in the VC and bind to one if the VCH is configured there. This isn't a consistent thing - is there possibly a race condition with Logout and the KeepAlive login? (more a note to self to investigate that).

@hickeng hickeng changed the title from "Guest shutdown support in endpointVM" to "Guest shutdown support in endpointVM [full ci]" on Feb 17, 2018
@hickeng hickeng force-pushed the vchGuestOperations branch from 72f1ab2 to ef8da5e on March 29, 2018 19:39
@hickeng (Member, Author) commented Apr 5, 2018

Tracking down how vicadmin gets the VCH name for presentation on the dashboard page: it uses validator.GetVCHName to populate the name in vicadmin, which in turn uses guest.GetSelf and vm.ObjectName.

@hickeng hickeng force-pushed the vchGuestOperations branch 2 times, most recently from 85186e0 to b9c0a41 on April 19, 2018 17:26
@hickeng hickeng force-pushed the vchGuestOperations branch from e429f9d to 73b4a4e on May 10, 2018 16:51
@hickeng hickeng force-pushed the vchGuestOperations branch from 44290d4 to 06ba697 on June 14, 2018 21:16
@hickeng hickeng requested a review from a team as a code owner June 14, 2018 21:16
@@ -532,7 +544,7 @@ func newSession(ctx context.Context, config *config.VirtualContainerHostConfigSpec
    User:        url.UserPassword(config.Username, config.Token),
    Thumbprint:  config.TargetThumbprint,
    Keepalive:   defaultSessionKeepAlive,
-   UserAgent:   version.UserAgent("vic-engine"),
+   UserAgent:   version.UserAgent("vic-dynamic-config"),
@zjs (Member) Jun 21, 2018:

At least this portion of this PR should be included in any cherry-pick of #7887. Therefore, this may be worth submitting separately (in a way that more clearly references the previous commit).

@hickeng hickeng force-pushed the vchGuestOperations branch 3 times, most recently from 3f6f4c7 to d3338cb on July 25, 2018 21:43
hickeng added 5 commits August 2, 2018 11:29:

This ensures that each vmomi client has a unique user-agent string so it
is possible to link vmomi sessions back to the originating component
within a VCH. The only user-agent that needed updating was the one used to
pull project specific configuration from admiral/harbor.

This follows on from PR vmware#7887

This restores use of the mechanisms allowing us to run tether tests as
non-root. In this case it's simply ensuring we are checking whether there
is a directory prefix that should be prepended.

The disk manager is intended to be a singleton that controls locking and
use of the storage subsystems. It may function with soft failures if there
are multiple instances of it, but it's not tested or intended to be used
in that manner.

This updates the container, image and volume logic to use a singleton disk
manager that's held at the portlayer level instead of within the
individual subsystems.

There are multiple places in the code where we have to call power off as a
precaution if a clean shutdown path does not complete in the expected
timeframe. This is inherently a race with the in-guest shutdown so we need
to be able to detect if an error from the power off call is because the VM
is _already_ powered off and can be squashed, or is an actual failure to
power off and needs to be propagated.
This commit adds a convenience method for performing that check and
switches existing checks to use it.

The interfaces within the endpointVM get renamed to their network roles
and an entry is added to /etc/hosts so that it's easy for services to bind
to specific network roles irrespective of actual network configuration.
As components do not launch until the network is fully configured and
initialized this works as expected, but fails for the pprof server
embedded in vic-init that is performing the network initialization.

This adds logic to block the pprof initialization until the expected
name (client.localhost) resolves. This only impacts execution with debug
level of >=2.
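
A minimal sketch of the resolution gate described in the last commit above; the retry interval and helper name are assumptions, not the vic-init implementation:

import (
    "context"
    "net"
    "time"
)

// waitForResolution blocks until name resolves (or ctx is cancelled), so the
// debug/pprof listener only starts once the role-based name is usable.
func waitForResolution(ctx context.Context, name string) error {
    for {
        if _, err := net.LookupHost(name); err == nil {
            return nil
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(time.Second):
            // name not resolvable yet - network configuration still in progress
        }
    }
}

// e.g. waitForResolution(ctx, "client.localhost") before serving pprof
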
@hickeng hickeng force-pushed the vchGuestOperations branch from d3338cb to 5e23b1c on August 2, 2018 20:01
@zjs (Member) left a comment

A few minor comments, but no serious concerns.

@@ -122,12 +123,16 @@ func main() {
serveAPIWait := make(chan error)
go api.Wait(serveAPIWait)

// signal.Trap explicitly calls os.Exit so an exit logic has to be rolled in here
Member:

... so an exit logic ...

Should this be "so any exit logic"?

tthr.Stop()
case syscall.SIGTERM, syscall.SIGINT:
log.Infof("Stopping system in lieu of restart handling via signal %s", s.String())
// TODO: update this to adjust power off handling for reboot
Member:

Should this reference a GitHub issue?

@hickeng (Member, Author):

I'm not sure it's worth it. Tether needs heavy redesign - this comment was purely as a note for if/when copying code into a new implementation.
I also do not have an issue for reworking tether - it's part of the larger design exercise to add support for extensions/etc that I'd consider part of a 2.0

log.Warn(err)
}

if debugLevel > 2 {
Member:

Do we need to document this change? (And the corresponding change to reboot, below.)

@hickeng (Member, Author) Aug 7, 2018:

This is actually bringing it in line with other behaviours, where >1 introduces small behaviour changes and >2 introduces the more significant ones.

As for where we document this - yes it does need documenting... as do all of the others.
vmware/vic-tasks#84

@@ -201,3 +254,28 @@ func defaultIP() string {

return toolbox.DefaultIP()
}

func startSignalHandler() {
Member:

For now, can we at least add a comment in each place referring to the other? That might increase the chances of remembering to update one when we update the other.


// this is a VERY basic throttle so that we don't DoS the remote when the client is NotAuthenticated.
// this should be removed/replaced once there is structured connection management in place to trigger re-authentication as required.
time.Sleep(2 * time.Second)
Member:

Do I recall you mentioning that we had a standardized way to do back-off?

Contributor:

The retry package; there are some good examples around in the code. See https://github.com/vmware/vic/blob/master/pkg/retry/retry.go

@hickeng (Member, Author):

We do have the retry package, but we have knowledge about the keepalive triggering reauth (20s at this time) and don't expect to need or want exponential backoff. That package also assumes time bounds (e.g. max retry time) whereas this is expected to be indefinite.
Basically, while I could probably force the backoff package to do basic throttling, I don't think it's a good fit here. We may want an actual throttle package; however, what we're really after here is structured connection management so this doesn't have to occur inline in product code at all.
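
For comparison, a minimal sketch of the indefinite throttle being described, as opposed to the time-bounded backoff in pkg/retry; the interval and helper name are assumptions:

import (
    "context"
    "time"
)

// throttled retries attempt indefinitely but never more often than interval,
// so a NotAuthenticated client cannot hammer the remote while it waits for the
// keepalive to re-authenticate.
func throttled(ctx context.Context, interval time.Duration, attempt func() error) error {
    for {
        if err := attempt(); err == nil {
            return nil
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(interval):
            // wait a full interval before the next attempt
        }
    }
}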

network.Finalize(ctx)
logging.Finalize(ctx)
metrics.Finalize(op)

Member:

It seems likely that we'll forget to update this if we have a new component. (However, I don't have a specific suggestion for improving this.)

@hickeng (Member, Author):

I have suggestions ;) An extension interface that has structured registration, initialization, finalization, etc. Too much for now I think.

// Create view of VirtualMachine objects under the VCH's resource pool
-Config.ContainerView, err = mngr.CreateContainerView(ctx, pool.Reference(), []string{"VirtualMachine"}, true)
+Config.ContainerView, err = mngr.CreateContainerView(op, pool.Reference(), []string{"VirtualMachine"}, true)
Member:

Do we have a constant we could reference instead of VirtualMachine?

@hickeng (Member, Author):

Not that I can find. @dougm?

}
}

if err != nil || !tools {
Member:

The way this is currently written, I find myself wondering things like "In what cases will we hit this block?" (Where was err set? In what cases will/won't it be nil? Have we already logged the err?)

I don't have a specific suggestion for how to improve this method, but I don't think the current structure is the clearest way to communicate the intended flow.

Contributor:

I think it can be structured like this:

if tools {
    Log message about trying to use GuestShutdown
    If success return, otherwise log error message
}
Log message about doing VM Power off
do VM power off
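
A hedged Go sketch of that restructured flow, written against plain govmomi rather than the vic-machine dispatcher types; illustrative only:

import (
    "context"
    "log"

    "github.com/vmware/govmomi/object"
)

// shutdownThenPowerOff attempts an in-guest shutdown when tools are running and
// falls back to a hard power off otherwise.
func shutdownThenPowerOff(ctx context.Context, vm *object.VirtualMachine) error {
    if running, terr := vm.IsToolsRunning(ctx); terr == nil && running {
        log.Println("attempting guest shutdown of endpointVM")
        err := vm.ShutdownGuest(ctx)
        if err == nil {
            // the caller should still wait for the power state to reach poweredOff
            return nil
        }
        log.Printf("guest shutdown failed: %s", err)
    }

    log.Println("powering off endpointVM - sessions will be left open")
    task, err := vm.PowerOff(ctx)
    if err != nil {
        return err
    }
    return task.Wait(ctx)
}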

@hickeng hickeng force-pushed the vchGuestOperations branch from 5e23b1c to aa54573 on August 7, 2018 23:12
@hickeng (Member, Author) commented Aug 8, 2018

The https://ci-vic.vmware.com/vmware/vic/19731/7 failure is due to a parallel CI issue generating CA certs.

@hickeng hickeng force-pushed the vchGuestOperations branch from aa54573 to 2e0abb2 on August 8, 2018 17:50
d.op.Debugf("Guest shutdown failed: %s", err)
}

d.op.Warnf("Guest tools unavailable, resorting to power off - sessions will be left open")
@zjs (Member) Aug 8, 2018:

I'm not sure "Guest tools unavailable" is accurate after the restructuring.

(Aside: I find the new structure much easier to read.)

@zjs (Member) left a comment

I did not do a complete re-review, but a quick spot-check of areas where changes were requested only yielded the above issue about the log message.

This adds neat shutdown support in the endpointVM:
* cleans up Views, Managers and sessions in portlayer
* cleans up session in personality
* cleans up and purges all active sessions in vicadmin

Updates the way tether waits for sessions to exit.
Updates vic-machine to use guest shutdown for endpointVM