Nightly 5-6-2: After docker stop, ps reports stopped and not exit (Error: the virtual machine is not suspended) #5981

Closed
sflxn opened this issue Aug 9, 2017 · 7 comments
Labels: component/portlayer/execution, kind/defect, priority/p0, status/needs-attention

sflxn commented Aug 9, 2017

8/7, build 12978, vSphere 6.0
Manual-Test-Cases.Group5-Functional-Tests.5-6-2-VSAN-Complex-VCH-0-8020-container-logs.zip

During the regression test, docker stop succeeds, but there is an actual issue in the portlayer: docker ps then shows the container as "Stopped" when "Exited" was expected. This is VCH-0-8020; look in the container logs.

The test log shows:

'CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
 7dbd8f51b320        busybox             "/bin/top"          11 seconds ago      Stopped                                 lucid_varahamihira' does not contain 'Exited'

The portlayer log shows:

Aug  8 2017 11:56:46.380Z WARN  stopping 7dbd8f51b320f24869819cdb3e0eb85db3a729301a22ad6c0ed7955644b75f2e via hard power off due to: sending kill -TERM 7dbd8f51b320f24869819cdb3e0eb85db3a729301a22ad6c0ed7955644b75f2e: ServerFaultCode: A general system error occurred: vix error codes = (3016, 0).

Aug  8 2017 11:56:46.380Z DEBUG op=286.20 (delta:4.954µs): [NewOperation] op=286.20 (delta:2.408µs) [github.com/vmware/vic/pkg/vsphere/tasks.WaitForResult:65]
Aug  8 2017 11:56:46.829Z DEBUG vSphere Event Task: Power Off virtual machine for eventID(1155) ignored by the event collector
Aug  8 2017 11:56:46.833Z DEBUG vSphere Event lucid_varahamihira-7dbd8f51b320 on  10.160.143.215 in vcqaDC is stopping for eventID(1156) ignored by the event collector
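
For reference, the "sending kill -TERM" step runs kill inside the container VM through the vSphere guest operations API (the startGuestProgram frames in the log). A minimal govmomi sketch of that call, with placeholder auth and paths (not the actual VIC code):

```go
package sketch

import (
	"context"
	"fmt"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/find"
	"github.com/vmware/govmomi/guest"
	"github.com/vmware/govmomi/vim25/types"
)

// sendGuestSignal runs `kill -<sig> 1` inside the guest via guest operations.
// The auth and VM inventory path here are illustrative placeholders.
func sendGuestSignal(ctx context.Context, c *govmomi.Client, vmPath, sig string) error {
	vm, err := find.NewFinder(c.Client).VirtualMachine(ctx, vmPath)
	if err != nil {
		return err
	}
	pm, err := guest.NewOperationsManager(c.Client, vm.Reference()).ProcessManager(ctx)
	if err != nil {
		return err
	}
	auth := &types.NamePasswordAuthentication{Username: "root"} // placeholder
	// If tools aren't running -- or VC's view of the tools state is stale --
	// this fails with "vix error codes = (3016, 0)", as in the WARN above.
	_, err = pm.StartProgram(ctx, auth, &types.GuestProgramSpec{
		ProgramPath: "/bin/kill",
		Arguments:   fmt.Sprintf("-%s 1", sig),
	})
	return err
}
```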

From the vpxd logs:

--> (vmodl.fault.SystemError) {
-->    faultCause = (vmodl.MethodFault) null, 
-->    reason = "vix error codes = (3016, 0).
--> ", 
-->    msg = "Received SOAP response fault from [<cs p:00007ff66025f3b0, TCP:10.160.143.215:443>]: startProgram
--> Received SOAP response fault from [<cs p:1f39f590, TCP:localhost:8307>]: startProgram
--> A general system error occurred: vix error codes = (3016, 0).
--> "
--> }
@sflxn sflxn added component/portlayer/execution priority/p0 status/needs-attention The issue needs to be discussed by the team labels Aug 9, 2017

sflxn commented Aug 9, 2017

It's highly possible this is related to #5803.

@sflxn sflxn changed the title Nightly 5-6-2: After docker stopped, ps reports stopped and not exit Nightly 5-6-2: After docker stop, ps reports stopped and not exit Aug 9, 2017

sflxn commented Aug 9, 2017

Same error on 8/8, build 13042, vSphere 6.5.

This time, the error is in 5-6-1-VSAN-Simple.
Manual-Test-Cases.Group5-Functional-Tests.5-6-1-VSAN-Simple-VCH-0-4460-container-logs.zip

From the portlayer logs:

Aug  9 2017 02:38:41.863Z DEBUG [BEGIN] [github.com/vmware/vic/lib/portlayer/exec.(*Container).stop:407] eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a
Aug  9 2017 02:38:41.863Z DEBUG Setting container eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a state: Stopping
Aug  9 2017 02:38:41.863Z INFO  sending kill -TERM eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a
Aug  9 2017 02:38:41.863Z DEBUG [BEGIN] [github.com/vmware/vic/lib/portlayer/exec.(*containerBase).startGuestProgram:180] eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a:kill
Aug  9 2017 02:38:42.148Z DEBUG [ END ] [github.com/vmware/vic/lib/portlayer/exec.(*containerBase).startGuestProgram:180] [284.829373ms] eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a:kill
Aug  9 2017 02:38:42.148Z INFO  waiting 10s for eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a to power off
Aug  9 2017 02:38:42.717Z DEBUG vSphere Event Guest operation Start Program performed on Virtual machine blissful_meitner-eb5e7808762c. for eventID(1006) ignored by the event collector
Aug  9 2017 02:38:51.866Z WARN  timeout (10s) waiting for eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a to power off via SIGTERM
Aug  9 2017 02:38:51.866Z INFO  sending kill -KILL eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a
Aug  9 2017 02:38:51.866Z DEBUG [BEGIN] [github.com/vmware/vic/lib/portlayer/exec.(*containerBase).startGuestProgram:180] eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a:kill
Aug  9 2017 02:38:54.172Z DEBUG [ END ] [github.com/vmware/vic/lib/portlayer/exec.(*containerBase).startGuestProgram:180] [2.30579066s] eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a:kill
Aug  9 2017 02:38:54.172Z WARN  stopping eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a via hard power off due to: sending kill -KILL eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a: ServerFaultCode: The guest operations agent could not be contacted.
Aug  9 2017 02:38:54.172Z DEBUG op=298.20 (delta:3.724µs): [NewOperation] op=298.20 (delta:1.989µs) [github.com/vmware/vic/pkg/vsphere/tasks.WaitForResult:65]
Aug  9 2017 02:38:54.304Z ERROR op=298.20 (delta:131.846258ms): unexpected fault on task retry : &types.GenericVmConfigFault{VmConfigFault:types.VmConfigFault{VimFault:types.VimFault{MethodFault:types.MethodFault{FaultCause:(*types.LocalizedMethodFault)(nil), FaultMessage:[]types.LocalizableMessage{types.LocalizableMessage{DynamicData:types.DynamicData{}, Key:"msg.suspend.powerOff.notsuspended", Arg:[]types.KeyAnyValue(nil), Message:"The virtual machine is not suspended."}}}}}, Reason:"The virtual machine is not suspended."}
Aug  9 2017 02:38:54.325Z INFO  power off eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a task skipped due to guest shutdown
Aug  9 2017 02:38:54.325Z DEBUG Set container eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a state: Stopped
Aug  9 2017 02:38:54.325Z DEBUG Container(eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a) closing 0 log followers
Aug  9 2017 02:38:54.325Z DEBUG [ END ] [github.com/vmware/vic/lib/portlayer/exec.(*Container).stop:407] [12.461711185s] eb5e7808762ca23d64c1fa6f02c91742bbf0f9a90a026e315503173aa19f0c3a
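
The log traces the portlayer's full escalation on docker stop: SIGTERM via guest operations, a bounded wait for power off, SIGKILL, then a hard power off whose task retry faults with "The virtual machine is not suspended." A simplified sketch of that control flow (the helpers are stubs standing in for the real portlayer operations, not VIC's actual functions):

```go
package main

import (
	"context"
	"errors"
	"log"
	"time"
)

// Illustrative stubs for the real portlayer operations.
func sendGuestSignal(ctx context.Context, id, sig string) error { return nil }
func waitForPowerOff(ctx context.Context, id string, d time.Duration) error {
	return errors.New("timeout")
}
func hardPowerOff(ctx context.Context, id string) error { return nil }

// stopContainer sketches the escalation visible above: TERM, wait, KILL,
// wait, then hard power off once guest operations fail or time out.
func stopContainer(ctx context.Context, id string, timeout time.Duration) error {
	for _, sig := range []string{"TERM", "KILL"} {
		if err := sendGuestSignal(ctx, id, sig); err != nil {
			// e.g. "vix error codes = (3016, 0)" or "the guest operations
			// agent could not be contacted" -- escalate to a hard power off.
			log.Printf("stopping %s via hard power off due to: %s", id, err)
			return hardPowerOff(ctx, id)
		}
		if err := waitForPowerOff(ctx, id, timeout); err == nil {
			return nil // guest shut down cleanly
		}
		log.Printf("timeout (%s) waiting for %s to power off via SIG%s", timeout, id, sig)
	}
	return hardPowerOff(ctx, id)
}

func main() {
	_ = stopContainer(context.Background(), "7dbd8f51b320", 10*time.Second)
}
```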


sflxn commented Aug 9, 2017

Same error on 8/8, build 13042, vSphere 6.5, in the vMotion VCH Appliance test.

Manual-Test-Cases.Group13-vMotion.13-1-vMotion-VCH-Appliance-VCH-0-2025-container-logs.zip

Aug  9 2017 08:35:19.522Z DEBUG [BEGIN] [github.com/vmware/vic/lib/portlayer/exec.(*Container).stop:407] fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e
Aug  9 2017 08:35:19.522Z DEBUG Setting container fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e state: Stopping
Aug  9 2017 08:35:19.522Z INFO  sending kill -TERM fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e
Aug  9 2017 08:35:19.522Z DEBUG [BEGIN] [github.com/vmware/vic/lib/portlayer/exec.(*containerBase).startGuestProgram:180] fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e:kill
Aug  9 2017 08:35:20.686Z DEBUG [ END ] [github.com/vmware/vic/lib/portlayer/exec.(*containerBase).startGuestProgram:180] [1.164348757s] fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e:kill
Aug  9 2017 08:35:20.686Z WARN  stopping fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e via hard power off due to: sending kill -TERM fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e: ServerFaultCode: A general system error occurred: vix error codes = (3016, 0).

Aug  9 2017 08:35:20.686Z DEBUG op=301.20 (delta:3.838µs): [NewOperation] op=301.20 (delta:2.147µs) [github.com/vmware/vic/pkg/vsphere/tasks.WaitForResult:65]
Aug  9 2017 08:35:21.003Z ERROR op=301.20 (delta:316.970102ms): unexpected fault on task retry : &types.GenericVmConfigFault{VmConfigFault:types.VmConfigFault{VimFault:types.VimFault{MethodFault:types.MethodFault{FaultCause:(*types.LocalizedMethodFault)(nil), FaultMessage:[]types.LocalizableMessage{types.LocalizableMessage{DynamicData:types.DynamicData{}, Key:"msg.suspend.powerOff.notsuspended", Arg:[]types.KeyAnyValue(nil), Message:"The virtual machine is not suspended."}}}}}, Reason:"The virtual machine is not suspended."}
Aug  9 2017 08:35:21.025Z INFO  power off fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e task skipped due to guest shutdown
Aug  9 2017 08:35:21.025Z DEBUG Set container fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e state: Stopped
Aug  9 2017 08:35:21.025Z DEBUG Container(fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e) closing 0 log followers
Aug  9 2017 08:35:21.025Z DEBUG [ END ] [github.com/vmware/vic/lib/portlayer/exec.(*Container).stop:407] [1.503513117s] fcb3cedc1222d46b44e61fef8b5c167331035388c4b64c4ab7d3e9076245b06e

@mdubya66 mdubya66 added the kind/defect Behavior that is inconsistent with what's intended label Aug 10, 2017
@chengwang86 chengwang86 changed the title to Nightly 5-6-2: After docker stop, ps reports stopped and not exit (Error: the virtual machine is not suspended) Aug 14, 2017

sflxn commented Aug 15, 2017

This could also be related to #5629, since the stop could cause the VM to shut down and we may detect the power state before we read the exit code (sketched below).
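
A sketch of that hypothesized ordering, purely illustrative (the helper names and the "missing exit code means Stopped" mapping are assumptions, and the next comment walks this theory back):

```go
package main

import (
	"context"
	"fmt"
)

// Illustrative stubs: readPowerState samples the VM power state, and
// readExitCode reports whether the tether has persisted an exit code yet.
func readPowerState(ctx context.Context, id string) (string, error) { return "poweredOff", nil }
func readExitCode(ctx context.Context, id string) (int, bool)       { return 0, false }

// reportStatus shows the suspected race: the VM is already powered off at
// step 1, but at step 2 the exit code has not been recorded yet, so the
// container surfaces as "Stopped" rather than "Exited (n)".
func reportStatus(ctx context.Context, id string) (string, error) {
	state, err := readPowerState(ctx, id) // 1. power state sampled first
	if err != nil {
		return "", err
	}
	if state == "poweredOff" {
		if code, ok := readExitCode(ctx, id); ok { // 2. exit code read second
			return fmt.Sprintf("Exited (%d)", code), nil
		}
		return "Stopped", nil // no exit code yet -> generic "Stopped"
	}
	return "Running", nil
}

func main() {
	status, _ := reportStatus(context.Background(), "7dbd8f51b320")
	fmt.Println(status) // prints "Stopped" with these stubs
}
```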


sflxn commented Aug 16, 2017

I don't think this is related to #5629 anymore. I've dug into the code some more, and I think this issue is a dupe of #5803. I now agree with @cgtexmex that #5629 was an unrelated issue, as we never hit that code path here.


sflxn commented Aug 17, 2017

From the hostd file, I see the following:

2017-08-08T11:56:46.674Z info hostd[3E6C2B70] [Originator@6876 sub=Solo.Vmomi] Result:
--> (vmodl.fault.SystemError) {
-->    faultCause = (vmodl.MethodFault) null,
-->    reason = "vix error codes = (3016, 0).
--> ",
-->    msg = ""
--> }
2017-08-08T11:56:46.689Z info hostd[3E640B70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 177 : The dvPort 34 was not in passthrough mode in the vSphere Distributed Switch  in ha-datacenter.
2017-08-08T11:56:46.690Z info hostd[3E640B70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 178 : The dvPort 34 link was down in the vSphere Distributed Switch  in ha-datacenter
2017-08-08T11:56:46.724Z info hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Tools manifest version status changed from guestToolsUnmanaged to guestToolsUnmanaged, on install is TRUE
2017-08-08T11:56:46.807Z info hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Send config update invoked
2017-08-08T11:56:46.813Z info hostd[3E1CDB70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 179 : The dvPort 34 was not in passthrough mode in the vSphere Distributed Switch  in ha-datacenter.
2017-08-08T11:56:47.622Z info hostd[FFF02B70] [Originator@6876 sub=Libs] SOCKET 2 (37)
2017-08-08T11:56:47.622Z info hostd[FFF02B70] [Originator@6876 sub=Libs] recv detected client closed connection
2017-08-08T11:56:47.622Z info hostd[FFF02B70] [Originator@6876 sub=Libs] VigorTransportClientProcessError: Remote connection failure
2017-08-08T11:56:47.622Z info hostd[FFF02B70] [Originator@6876 sub=Libs] VigorTransportClientDrainRecv: draining read.
2017-08-08T11:56:47.622Z info hostd[FFF02B70] [Originator@6876 sub=Libs] SOCKET 2 (37)
2017-08-08T11:56:47.622Z info hostd[FFF02B70] [Originator@6876 sub=Libs] recv detected client closed connection
2017-08-08T11:56:47.622Z info hostd[FFF02B70] [Originator@6876 sub=Libs] VigorTransportClientProcessError: closing connection.
2017-08-08T11:56:47.623Z info hostd[FFF02B70] [Originator@6876 sub=Libs] VigorTransportClientManageConnection: connection closed.
2017-08-08T11:56:47.657Z info hostd[FFF02B70] [Originator@6876 sub=Libs] CnxAuthdProtoSecureConnect: Unencrypted connection, skipping thumbprint exchange.
2017-08-08T11:56:47.660Z info hostd[FFF02B70] [Originator@6876 sub=Libs] CnxConnectAuthd: Returning false because CnxAuthdProtoConnect failed
2017-08-08T11:56:47.660Z info hostd[FFF02B70] [Originator@6876 sub=Libs] Cnx_Connect: Returning false because CnxConnectAuthd failed
2017-08-08T11:56:47.660Z info hostd[FFF02B70] [Originator@6876 sub=vm:Cnx_Connect: Error message: There is no VMware process running for config file /vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx]
2017-08-08T11:56:47.660Z warning hostd[FFF02B70] [Originator@6876 sub=vm:VigorTransportClientManageConnection: Failed to re-connect to VM /vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx]
2017-08-08T11:56:47.675Z info hostd[FFF02B70] [Originator@6876 sub=Libs] VigorOnlineDisconnectCb: connection closed (is final).
2017-08-08T11:56:47.709Z info hostd[3E6C2B70] [Originator@6876 sub=Hostsvc] Lookupvm: Cartel ID not set for VM 2
2017-08-08T11:56:47.709Z warning hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Unable to get resource settings for a powered on VM
2017-08-08T11:56:47.710Z info hostd[3E6C2B70] [Originator@6876 sub=Hostsvc] Lookupvm: Cartel ID not set for VM 2
2017-08-08T11:56:47.720Z info hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Checking for all objects accessibility (VM's current state: VM_STATE_ON, stable? true)
2017-08-08T11:56:47.720Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID 94a68959-cb6e-c2e9-cf13-02000a936b47 APD state: Healthy
2017-08-08T11:56:47.720Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID aea68959-cdf3-0d57-1279-02000a936b47 APD state: Healthy
2017-08-08T11:56:47.720Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID b5a68959-7fd6-3929-a584-02000a936b47 APD state: Healthy
2017-08-08T11:56:47.720Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID cda68959-03d3-9a7b-63b8-02000ac2c76e APD state: Healthy
2017-08-08T11:56:47.720Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID d3a68959-473a-24ec-6182-02000ac2c76e APD state: Healthy
2017-08-08T11:56:47.721Z info hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Updated active set of monitored objects for VM '2': /vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx (5 objects)
2017-08-08T11:56:47.722Z info hostd[3DC91B70] [Originator@6876 sub=Vimsvc.TaskManager opID=212f83c1-d4-0483 user=vpxuser:VSPHERE.LOCAL\Administrator] Task Created : haTask-2-vim.VirtualMachine.powerOff-559
2017-08-08T11:56:47.771Z info hostd[3E681B70] [Originator@6876 sub=Vimsvc.ha-eventmgr opID=212f83c1-d4-0483 user=vpxuser:VSPHERE.LOCAL\Administrator] Event 180 : lucid_varahamihira-7dbd8f51b320 on  sc-rdops-vm05-dhcp-143-215.eng.vmware.com in ha-datacenter is stopping
2017-08-08T11:56:47.771Z info hostd[3E681B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx opID=212f83c1-d4-0483 user=vpxuser:VSPHERE.LOCAL\Administrator] State Transition (VM_STATE_ON -> VM_STATE_POWERING_OFF)
2017-08-08T11:56:47.781Z info hostd[3E6C2B70] [Originator@6876 sub=Hostsvc] Decremented SIOC Injector Flag2
2017-08-08T11:56:47.781Z warning hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Failed operation
2017-08-08T11:56:47.781Z info hostd[3E6C2B70] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 181 : lucid_varahamihira-7dbd8f51b320 on  sc-rdops-vm05-dhcp-143-215.eng.vmware.com in ha-datacenter is powered off
2017-08-08T11:56:47.781Z info hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] State Transition (VM_STATE_POWERING_OFF -> VM_STATE_OFF)
2017-08-08T11:56:47.781Z info hostd[3E1CDB70] [Originator@6876 sub=Guestsvc.GuestFileTransferImpl] Entered VmPowerStateListener
2017-08-08T11:56:47.781Z info hostd[3E1CDB70] [Originator@6876 sub=Guestsvc.GuestFileTransferImpl] VmPowerStateListener succeeded
2017-08-08T11:56:47.781Z info hostd[3E1CDB70] [Originator@6876 sub=Hbrsvc] Replicator: powerstate change VM: 2 Old: 1 New: 0
2017-08-08T11:56:47.781Z info hostd[3E1CDB70] [Originator@6876 sub=Hbrsvc] Replicator: Poweroff for VM: (id=2)
2017-08-08T11:56:47.781Z info hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Send config update invoked
2017-08-08T11:56:47.807Z info hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Checking for all objects accessibility (VM's current state: VM_STATE_OFF, stable? true)
2017-08-08T11:56:47.807Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID 94a68959-cb6e-c2e9-cf13-02000a936b47 APD state: Healthy
2017-08-08T11:56:47.807Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID aea68959-cdf3-0d57-1279-02000a936b47 APD state: Healthy
2017-08-08T11:56:47.807Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID b5a68959-7fd6-3929-a584-02000a936b47 APD state: Healthy
2017-08-08T11:56:47.807Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID cda68959-03d3-9a7b-63b8-02000ac2c76e APD state: Healthy
2017-08-08T11:56:47.807Z info hostd[3E6C2B70] [Originator@6876 sub=VmObjectStorageMonitor] Object UUID d3a68959-473a-24ec-6182-02000ac2c76e APD state: Healthy
2017-08-08T11:56:47.808Z info hostd[3E6C2B70] [Originator@6876 sub=Vimsvc.TaskManager] Task Completed : haTask-2-vim.VirtualMachine.powerOff-559 Status error
2017-08-08T11:56:47.809Z info hostd[3E6C2B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/vsan:5296b4c5894838e4-0fb99c6ef25d4f89/cda68959-03d3-9a7b-63b8-02000ac2c76e/lucid_varahamihira-7dbd8f51b320.vmx] Skip a duplicate transition to: VM_STATE_OFF

Here are some observations:

  • VIX error 3016 is "Guest tools is not running." I talked to @dougm about this, and he believes that since this is going through VC, VC hasn't updated its state yet and reports 3016.
  • Eventually, we see the power off request, and the VM successfully transitions to power state VM_STATE_OFF (a tolerant handling of this case is sketched below).
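
Since the guest can finish shutting down while the hard power off task is still in flight (hence the GenericVmConfigFault "The virtual machine is not suspended." on the task retry), a caller has to treat that fault as benign when the VM turns out to be off. A hedged govmomi sketch of that handling (not VIC's actual code):

```go
package sketch

import (
	"context"

	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/task"
	"github.com/vmware/govmomi/vim25/types"
)

// powerOffTolerant issues PowerOffVM_Task and, if the task faults, re-checks
// the power state: when the guest already shut itself down, the off state is
// treated as success -- mirroring "power off task skipped due to guest
// shutdown" in the portlayer log.
func powerOffTolerant(ctx context.Context, vm *object.VirtualMachine) error {
	t, err := vm.PowerOff(ctx)
	if err != nil {
		return err
	}
	if _, err = t.WaitForResult(ctx, nil); err != nil {
		if te, ok := err.(task.Error); ok {
			switch te.Fault().(type) {
			case *types.GenericVmConfigFault, *types.InvalidPowerState:
				// The task raced a guest-initiated shutdown; confirm the
				// VM really is off before swallowing the fault.
				if st, serr := vm.PowerState(ctx); serr == nil &&
					st == types.VirtualMachinePowerStatePoweredOff {
					return nil
				}
			}
		}
		return err
	}
	return nil
}
```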


sflxn commented Aug 17, 2017

Closing this as a dupe of #5803; we'll continue the analysis there, using that issue's logs.

@sflxn sflxn closed this as completed Aug 17, 2017