This repository was archived by the owner on Feb 8, 2021. It is now read-only.

fix the race in case sandbox start failed during pod creating #600

Merged
Crazykev merged 4 commits into hyperhq:master from gnawux:rollback on Apr 12, 2017

Conversation

@gnawux (Member) commented Apr 8, 2017

For #598

Signed-off-by: Wang Xu <gnawux@gmail.com>
@gnawux changed the title from "[DO NOT MERGE] fix rollback if vm failed during pod creating" to "fix the race in case sandbox start failed during pod creating" on Apr 9, 2017
@gnawux (Member Author) commented Apr 9, 2017

And this should be part of #462

@gnawux (Member Author) commented Apr 9, 2017

retest this please, hykins

@gnawux (Member Author) commented Apr 9, 2017

The CI result shows that even when the run fails because of #557, this patch helps prevent hyperd from the panic (#598).

@gnawux (Member Author) commented Apr 9, 2017

A safe quit after the #557 init error:

E0409 12:04:48.721138   23772 json.go:363] read tty data failed
E0409 12:04:48.721174   23772 json.go:420] SB[vm-VwoaqdiWbA] tty socket closed, quit the reading goroutine: EOF
I0409 12:04:48.721182   23772 json.go:88] SB[vm-VwoaqdiWbA] close jsonBasedHyperstart
E0409 12:04:48.721211   23772 json.go:545] SB[vm-VwoaqdiWbA] get hyperstart API version error: hyperstart closed
W0409 12:04:48.721220   23772 hypervisor.go:59] SB[vm-VwoaqdiWbA] keep-alive test end with error: hyperstart closed
I0409 12:04:48.721265   23772 hypervisor.go:23] SB[vm-VwoaqdiWbA] main event loop got message 14(ERROR_INIT_FAIL)
E0409 12:04:48.721277   23772 vm_states.go:237] SB[vm-VwoaqdiWbA] hyperstart failed: hyperstart closed
E0409 12:04:48.721286   23772 vm_states.go:195] SB[vm-VwoaqdiWbA] Shutting down because of an exception: %!(EXTRA string=connection to vm broken)
I0409 12:04:48.721293   23772 vm_states.go:198] SB[vm-VwoaqdiWbA] poweroff vm based on command: connection to vm broken
E0409 12:04:48.721424   23772 hypervisor.go:42] SB[vm-VwoaqdiWbA] hyperstart stopped
I0409 12:04:48.721444   23772 json.go:388] SB[vm-VwoaqdiWbA] tty chan closed, quit sent goroutine
E0409 12:04:48.721478   23772 vm_states.go:176] SB[vm-VwoaqdiWbA] Start POD failed
I0409 12:04:48.721641   23772 provision.go:307] Pod[pod-xMGXDCItWf] sandbox init result: &api.ResultBase{Id:"vm-VwoaqdiWbA", Success:false, ResultMessage:"got failed event when wait init message"}
E0409 12:04:48.724827   23772 json.go:139] read init data failed
I0409 12:04:48.724934   23772 vm_console.go:63] Input byte chan closed, close the output string chan
I0409 12:04:48.724962   23772 vm_console.go:48] SB[vm-VwoaqdiWbA] console output end
E0409 12:04:48.724998   23772 json.go:173] SB[vm-VwoaqdiWbA] error when readVmMessage() for ready message: EOF
DEBU[0258] container mounted via layerStore: /var/lib/hyper/rawblock/mnt/c4530c04010a5cf91dcb2f3cd9d1172f748edc423fb4991560f59912820cc882/rootfs
I0409 12:04:48.922956   23772 context.go:257] SB[vm-VwoaqdiWbA] VmContext Close()
I0409 12:04:48.923026   23772 hypervisor.go:31] SB[vm-VwoaqdiWbA] main event loop exiting
I0409 12:04:48.923091   23772 decommission.go:526] Pod[pod-xMGXDCItWf] got vm exit event
I0409 12:04:48.923129   23772 decommission.go:574] Pod[pod-xMGXDCItWf] umount all containers and volumes, release IP addresses
I0409 12:04:48.923146   23772 etchosts.go:97] cleanupHosts /var/lib/hyper/hosts/pod-xMGXDCItWf, /var/lib/hyper/hosts/pod-xMGXDCItWf/hosts
I0409 12:04:48.923186   23772 etchosts.go:101] cannot find /var/lib/hyper/hosts/pod-xMGXDCItWf/hosts
I0409 12:04:48.923340   23772 decommission.go:554] Pod[pod-xMGXDCItWf] sandbox info removed from db
I0409 12:04:48.923363   23772 decommission.go:559] Pod[pod-xMGXDCItWf] tag pod as stopped
I0409 12:04:48.923374   23772 decommission.go:566] Pod[pod-xMGXDCItWf] pod stopped
I0409 12:04:48.923955   23772 container.go:498] Pod[pod-xMGXDCItWf] Con[(pod-xMGXDCItWf-irssi-0)] create container c16a3c1481821215c77eb0a84e114112bbb904d07600997345c5c6af03dc777b (w/: [])
I0409 12:04:48.924041   23772 container.go:515] Pod[pod-xMGXDCItWf] Con[c16a3c148182(pod-xMGXDCItWf-irssi-0)] container info config &container.Config{Hostname:"c16a3c148182", Domainname:"", User:"user", AttachStdin:false, AttachStdout:false, AttachStderr:false, ExposedPorts:map[nat.Port]struct {}(nil), PublishService:"", Tty:false, OpenStdin:false, StdinOnce:false, Env:[]string{"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOME=/home/user", "LANG=C.UTF-8", "IRSSI_VERSION=1.0.2"}, Cmd:(*strslice.StrSlice)(0xc420de0fc0), ArgsEscaped:false, Image:"irssi:1", Volumes:map[string]struct {}(nil), WorkingDir:"/home/user", Entrypoint:(*strslice.StrSlice)(nil), NetworkDisabled:true, MacAddress:"", OnBuild:[]string(nil), Labels:map[string]string{}, StopSignal:""}, Cmd [sh], Args []
I0409 12:04:48.924055   23772 container.go:520] Pod[pod-xMGXDCItWf] Con[c16a3c148182(pod-xMGXDCItWf-irssi-0)] describe container
I0409 12:04:48.924267   23772 container.go:528] Pod[pod-xMGXDCItWf] Con[c16a3c148182(pod-xMGXDCItWf-irssi-0)] mount id: c4530c04010a5cf91dcb2f3cd9d1172f748edc423fb4991560f59912820cc882
I0409 12:04:48.924334   23772 container.go:608] Pod[pod-xMGXDCItWf] Con[c16a3c148182(pod-xMGXDCItWf-irssi-0)] Container Info is 
&api.ContainerDescription{Id:"c16a3c1481821215c77eb0a84e114112bbb904d07600997345c5c6af03dc777b", Name:"/pod-xMGXDCItWf-irssi-0", Image:"sha256:c9b9931a3c0a33319eb291eb546a063b10ea1de6a1e9104ac8cbaf566a659c61", Labels:map[string]string(nil), Tty:false, StopSignal:"TERM", RootVolume:(*api.VolumeDescription)(0xc42107c640), MountId:"c4530c04010a5cf91dcb2f3cd9d1172f748edc423fb4991560f59912820cc882", RootPath:"rootfs", UGI:(*api.UserGroupInfo)(0xc4214f7680), Envs:map[string]string{"IRSSI_VERSION":"1.0.2", "PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOME":"/home/user", "LANG":"C.UTF-8"}, Workdir:"/home/user", Path:"sh", Args:[]string{}, Rlimits:[]*api.Rlimit{}, Sysctl:map[string]string(nil), Volumes:map[string]*api.VolumeReference(nil), Initialize:false}
I0409 12:04:48.924367   23772 container.go:749] Pod[pod-xMGXDCItWf] Con[c16a3c148182(pod-xMGXDCItWf-irssi-0)] configure dns
I0409 12:04:48.924391   23772 container.go:807] Pod[pod-xMGXDCItWf] Con[c16a3c148182(pod-xMGXDCItWf-irssi-0)] inject file /etc/resolv.conf
E0409 12:04:49.006063   23772 provision.go:406] Pod[pod-xMGXDCItWf] pod is not alive, can not prepare resources
E0409 12:04:49.006101   23772 run.go:38] pod-xMGXDCItWf: failed to add pod: hyper pod not alive: cannot complete the operation, because the pod pod-xMGXDCItWf is not alive
E0409 12:04:49.006236   23772 server.go:170] Handler for POST /v0.8.0/pod/create returned error: cannot complete the operation, because the pod pod-xMGXDCItWf is not alive

@gnawux self-assigned this Apr 10, 2017
@gnawux added this to the v0.8.1 milestone Apr 10, 2017
@laijs requested a review from Crazykev April 11, 2017 07:00
@laijs (Contributor) commented Apr 11, 2017

@Crazykev could you also review it, please?

@Crazykev (Contributor):
@laijs Cool, glad to help.

    p.Log(ERROR, "init chan broken")
    return false, nil
}
p.initChan <- res
Contributor:

I'm not sure I understand why we need to send this result back here?

Member Author:

In case there are several operations waiting for the pod to run, only one of them can receive the notification from the chan; it sends the result back so that the other waiters also get the event.
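
For illustration, a minimal self-contained sketch of this pass-it-along pattern (hypothetical names and types; the real code sends the sandbox init result on p.initChan):

package main

import (
	"fmt"
	"sync"
)

func main() {
	// A buffered chan of capacity 1 holds the single init result.
	initChan := make(chan string, 1)

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			res := <-initChan // only one waiter receives the value...
			initChan <- res   // ...so it re-sends it for the other waiters
			fmt.Printf("waiter %d got result: %s\n", id, res)
		}(i)
	}

	initChan <- "success" // the sandbox start result arrives exactly once
	wg.Wait()
}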

Contributor:

OK, that should be fine. I was worried about whether we need to send this in restore as well.

Member Author:

I don't think so. But I am considering whether I should change this to Cond/Wait here....

@Crazykev (Contributor):

LGTM. @laijs Do you want to have another look?

@gnawux (Member Author) commented Apr 11, 2017

Wait, let me check whether I can use Cond/Wait instead of a chan, which would make this easier to understand.
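
For comparison, a sketch of what the Cond/Wait alternative could look like (hypothetical code, not from this PR; S_POD_RUNNING stands in for the pod state constant). sync.Cond's Broadcast wakes every waiter at once, which removes the need to re-send on the chan:

package main

import "sync"

const (
	S_POD_STARTING = iota
	S_POD_RUNNING
)

type pod struct {
	mu     sync.Mutex
	cond   *sync.Cond
	status int
}

func newPod() *pod {
	p := &pod{status: S_POD_STARTING}
	p.cond = sync.NewCond(&p.mu)
	return p
}

// setStatus updates the state and wakes all goroutines blocked in waitRunning.
func (p *pod) setStatus(s int) {
	p.mu.Lock()
	p.status = s
	p.mu.Unlock()
	p.cond.Broadcast()
}

// waitRunning blocks until the pod reaches S_POD_RUNNING.
func (p *pod) waitRunning() {
	p.mu.Lock()
	defer p.mu.Unlock()
	for p.status != S_POD_RUNNING {
		p.cond.Wait() // releases mu while blocked, re-acquires before returning
	}
}

func main() {
	p := newPod()
	done := make(chan struct{})
	go func() { p.waitRunning(); close(done) }()
	p.setStatus(S_POD_RUNNING)
	<-done
}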

@Crazykev (Contributor):

OK, I rechecked the current implementation: if we start an already-running pod, there is an ugly error

hyperctl ERROR: Error from daemon's response: finished with errors: map[39c3371cd64c6a5c47e983cac3ce2529d9641b79f2eb0d0d201bea64d219b307:only CREATING container could be set to creatd, current: 3]

Could you also fix this within this patch?

@gnawux (Member Author) commented Apr 11, 2017

@Crazykev do you mean just returning success if the pod is running?

And maybe we need to get the current IPs to save?

Signed-off-by: Wang Xu <gnawux@gmail.com>
@Crazykev (Contributor):

Returning success or reporting that the pod is already running is fine; just don't try to add the containers to that pod again.
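
A minimal sketch of the guard being requested (illustrative names only, not the PR's actual code; the real fix landed later as commit 9b4e438):

package main

import (
	"errors"
	"fmt"
)

// errPodAlreadyRunning is illustrative; returning nil (treating the start
// of a running pod as a success) would also satisfy the review comment.
var errPodAlreadyRunning = errors.New("pod is already running")

type pod struct{ running bool }

// Start refuses to start a pod twice, so containers are never added to an
// already-running pod again.
func (p *pod) Start() error {
	if p.running {
		return errPodAlreadyRunning
	}
	p.running = true // stand-in for the real STOPPED -> RUNNING transition
	return nil
}

func main() {
	p := &pod{}
	fmt.Println(p.Start()) // <nil>
	fmt.Println(p.Start()) // pod is already running
}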

@gnawux (Member Author) commented Apr 11, 2017

updated

-	if cs.State != S_CONTAINER_CREATED {
+	if cs.State == S_CONTAINER_RUNNING {
+		return errors.ErrContainerAlreadyRunning
+	} else if cs.State != S_CONTAINER_CREATED {
 		return fmt.Errorf("only CREATING container could be set to creatd, current: %d", cs.State)
Contributor:

This seems to be a wrong error message; could you also fix it here?
I think it should be: only CREATED container could be set to RUNNING

Member Author:

hmm.... I didn't change this line, but it looks like you are right....

func (p *XPod) waitPodRun(activity string) error {
	p.statusLock.RLock()
	for {
		if p.status == S_POD_RUNNING || p.statusLock == S_POD_PAUSED {
Contributor:

p.statusLock == S_POD_PAUSED -> p.status == S_POD_PAUSED

			p.Log(DEBUG, "pod is running, proceed %s", activity)
			return nil
		}
		if p.statusLock != S_POD_STARTING {
Contributor:

p.statusLock != S_POD_STARTING -> p.status != S_POD_STARTING
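
With both review fixes applied, the loop head would presumably read (a reconstruction of the excerpt above with the two p.statusLock typos replaced by p.status, as suggested; the rest of the function is elided):

func (p *XPod) waitPodRun(activity string) error {
	p.statusLock.RLock()
	for {
		if p.status == S_POD_RUNNING || p.status == S_POD_PAUSED {
			p.Log(DEBUG, "pod is running, proceed %s", activity)
			return nil
		}
		if p.status != S_POD_STARTING {
			// (remainder of the function as in the diff)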

@@ -211,7 +214,7 @@ func (p *XPod) ContainerStart(cid string) error {
 // Start() means start a STOPPED pod.
 func (p *XPod) Start() error {

-	if p.status == S_POD_STOPPED {
+	if p.IsStopped() {
Contributor:

I remember there is an S_POD_ERROR state; should we filter it out here?

Member Author:

Let's leave this status to a later issue/PR.

In principle, if a pod fails but can be restarted, it should be set back to the STOPPED status. Only pods that hit a fatal exception and can not start any more should stay in the ERROR status.

However, sometimes the daemon cannot judge the situation accurately, and I think it might be better if we allowed a user to manually set the status back from ERROR to STOPPED.

Contributor:

Yep, more complicated than I thought.

@gnawux force-pushed the rollback branch 2 times, most recently from f48b73c to 4c4585d on April 11, 2017 16:19
@gnawux (Member Author) commented Apr 11, 2017

@laijs would you like to look at this test result: http://ci.hypercontainer.io:8080/job/hyperd-auto/338/console ? It looks like runv can't get the hyperstart version.

@laijs (Contributor) commented Apr 12, 2017

16:55:25 I0411 16:55:25.569547    6283 vm_console.go:46] SB[vm-aRbFtiKhxu] [CNL] uptime 3.64 0.35
16:55:25 I0411 16:55:25.574477    6283 vm_console.go:46] SB[vm-aRbFtiKhxu] [CNL] 
16:55:25 I0411 16:55:25.578552    6283 vm_console.go:46] SB[vm-aRbFtiKhxu] [CNL] hyper ctl append type 9, len 0
16:55:25 I0411 16:55:25.582866    6283 vm_console.go:46] SB[vm-aRbFtiKhxu] [CNL] hyper_handle_event event EPOLLOUT, he 0x61d648, fd 3, 0x61d4c0
16:55:25 I0411 16:55:25.586796    6283 json.go:330] SB[vm-aRbFtiKhxu] readVmMessage code: 14, len: 4
16:55:25 I0411 16:55:25.586816    6283 json.go:330] SB[vm-aRbFtiKhxu] readVmMessage code: 14, len: 4
16:55:25 I0411 16:55:25.586843    6283 json.go:330] SB[vm-aRbFtiKhxu] readVmMessage code: 9, len: 0
16:55:25 I0411 16:55:25.586852    6283 json.go:231] SB[vm-aRbFtiKhxu] got cmd:14
16:55:25 I0411 16:55:25.586861    6283 json.go:252] SB[vm-aRbFtiKhxu] get command NEXT: send 98, receive 8
16:55:25 I0411 16:55:25.586867    6283 json.go:231] SB[vm-aRbFtiKhxu] got cmd:14
16:55:25 I0411 16:55:25.586873    6283 json.go:252] SB[vm-aRbFtiKhxu] get command NEXT: send 98, receive 98
16:55:25 I0411 16:55:25.586886    6283 json.go:314] SB[vm-aRbFtiKhxu] write 24 to hyperstart.
16:55:25 I0411 16:55:25.586903    6283 json.go:231] SB[vm-aRbFtiKhxu] got cmd:9
16:55:25 I0411 16:55:25.586929    6283 vm_states.go:171] SB[vm-aRbFtiKhxu] pod start successfully
16:55:25 I0411 16:55:25.587105    6283 provision.go:303] Pod[service] sandbox init result: &api.ResultBase{Id:"vm-aRbFtiKhxu", Success:true, ResultMessage:"wait init message successfully"}
16:55:25 I0411 16:55:25.587233    6283 vm_console.go:46] SB[vm-aRbFtiKhxu] [CNL] hyper_modify_event modify event fd 3, 0x61d648, event 8193
16:55:25 I0411 16:55:25.592838    6283 vm_console.go:46] SB[vm-aRbFtiKhxu] [CNL] pid 328 exit normally, status 0
16:55:25 DEBU[0165] container mounted via layerStore: /var/lib/hyper/overlay/271d3b86367459d4ccefc2efe9e003af870f232f4fe24820700c179df0e6742b/merged
16:55:25 I0411 16:55:25.603064    6283 vm_console.go:46] SB[vm-aRbFtiKhxu] [CNL] hyper_handle_event event EPOLLIN, he 0x61d648, fd 3, 0x61d4c0
16:55:25 I0411 16:55:25.607829    6283 vm_console.go:46] SB[vm-aRbFtiKhxu] [CNL] hyper ctl append type 14, len 4
16:55:25 I0411 16:55:25.608739    6283 container.go:504] Pod[service] Con[(service)] create container c855786521db2911cc5f2cd64065042083192271530af9404005d3f75d287049 (w/: [])
16:55:25 I0411 16:55:25.608849    6283 container.go:521] Pod[service] Con[c855786521db(service)] container info config &container.Config{Hostname:"c855786521db", Domainname:"", User:"", AttachStdin:false, AttachStdout:false, AttachStderr:false, ExposedPorts:map[nat.Port]struct {}(nil), PublishService:"", Tty:false, OpenStdin:false, StdinOnce:false, Env:[]string{"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"}, Cmd:(*strslice.StrSlice)(0xc420c50c40), ArgsEscaped:false, Image:"busybox:latest", Volumes:map[string]struct {}(nil), WorkingDir:"", Entrypoint:(*strslice.StrSlice)(nil), NetworkDisabled:true, MacAddress:"", OnBuild:[]string(nil), Labels:map[string]string{}, StopSignal:""}, Cmd [/bin/sh -c ps aux], Args [-c ps aux]
16:55:25 I0411 16:55:25.608875    6283 container.go:526] Pod[service] Con[c855786521db(service)] describe container
16:55:25 I0411 16:55:25.608911    6283 container.go:534] Pod[service] Con[c855786521db(service)] mount id: 271d3b86367459d4ccefc2efe9e003af870f232f4fe24820700c179df0e6742b
16:55:25 I0411 16:55:25.608965    6283 container.go:614] Pod[service] Con[c855786521db(service)] Container Info is 
16:55:25 &api.ContainerDescription{Id:"c855786521db2911cc5f2cd64065042083192271530af9404005d3f75d287049", Name:"/service", Image:"sha256:00f017a8c2a6e1fe2ffd05c281f27d069d2a99323a8cd514dd35f228ba26d2ff", Labels:map[string]string(nil), Tty:true, StopSignal:"TERM", RootVolume:(*api.VolumeDescription)(0xc420db1040), MountId:"271d3b86367459d4ccefc2efe9e003af870f232f4fe24820700c179df0e6742b", RootPath:"rootfs", UGI:(*api.UserGroupInfo)(nil), Envs:map[string]string{"PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"}, Workdir:"/", Path:"/bin/sh", Args:[]string{"-c", "ps aux"}, Rlimits:[]*api.Rlimit{}, Sysctl:map[string]string(nil), Volumes:map[string]*api.VolumeReference(nil), Initialize:false}
16:55:25 I0411 16:55:25.608995    6283 container.go:755] Pod[service] Con[c855786521db(service)] configure dns
16:55:25 I0411 16:55:25.609014    6283 container.go:762] Pod[service] Con[c855786521db(service)] Already has DNS config, bypass DNS insert
16:55:25 I0411 16:55:25.609263    6283 networks.go:38] Pod[service] Nic[eth-default] prepare inf info: &api.InterfaceDescription{Id:"eth-default", Lo:false, Bridge:"hyper0", Ip:"192.168.123.20", Mac:"52:54:60:70:29:b7", Gw:"192.168.123.1", TapName:""}
16:55:25 I0411 16:55:25.609304    6283 provision.go:436] Pod[service] adding resource to sandbox
16:55:25 I0411 16:55:25.609415    6283 volume.go:241] Pod[service] Vol[etchosts-volume] transit volume from state 0 to 1, ok
16:55:25 I0411 16:55:25.609439    6283 volume.go:161] Pod[service] Vol[etchosts-volume] mount volume
16:55:25 I0411 16:55:25.609459    6283 volumes.go:29] trying to bind dir /var/lib/hyper/hosts/service/hosts to /var/run/hyper/vm-aRbFtiKhxu/share_dir/JbfSLyboAc
16:55:25 I0411 16:55:25.609597    6283 mount.go:58] dir /var/lib/hyper/hosts/service/hosts is bound to JbfSLyboAc
16:55:25 I0411 16:55:25.609616    6283 volume.go:107] Pod[service] Vol[etchosts-volume] insert volume to sandbox
16:55:25 I0411 16:55:25.609637    6283 context.go:430] SB[vm-aRbFtiKhxu] return volume add success for dir/nas etchosts-volume
16:55:25 I0411 16:55:25.609653    6283 volume.go:115] Pod[service] Vol[etchosts-volume] volume inserted
16:55:25 I0411 16:55:25.609670    6283 volume.go:241] Pod[service] Vol[etchosts-volume] transit volume from state 1 to 2, ok
16:55:25 I0411 16:55:25.609732    6283 network_linux.go:1197] parse IP addr 192.168.123.20
16:55:25 I0411 16:55:25.612135    6283 qmp_wrapper_amd64.go:17] send net to qemu at 24
16:55:25 I0411 16:55:25.612175    6283 qmp_handler.go:298] got new session
16:55:25 I0411 16:55:25.612202    6283 qmp_handler.go:227] Begin process command session
16:55:25 I0411 16:55:25.612226    6283 qmp_handler.go:240] send cmd with scm (24 bytes) (1) {"execute":"getfd","arguments":{"fdname":"fdeth0"}}
16:55:25 I0411 16:55:25.612264    6283 container.go:872] Pod[service] Con[c855786521db(service)] begin add to sandbox
16:55:25 I0411 16:55:25.612283    6283 volume.go:187] Pod[service] Vol[etchosts-volume] subcribe volume insert
16:55:25 I0411 16:55:25.612298    6283 volume.go:191] Pod[service] Vol[etchosts-volume] the subscribed volume has been inserted, need nothing.
16:55:25 I0411 16:55:25.612545    6283 container.go:894] Pod[service] Con[c855786521db(service)] finished container prepare, wait for volumes
16:55:25 I0411 16:55:25.613149    6283 container.go:903] Pod[service] Con[c855786521db(service)] resources ready, insert container to sandbox
16:55:25 I0411 16:55:25.613197    6283 container.go:74] SB[vm-aRbFtiKhxu] Con[c855786521db2911cc5f2cd64065042083192271530af9404005d3f75d287049] volume (fs mapping) etchosts-volume is ready
16:55:25 I0411 16:55:25.613215    6283 container.go:105] SB[vm-aRbFtiKhxu] Con[c855786521db2911cc5f2cd64065042083192271530af9404005d3f75d287049] all images and volume resources have been added to sandbox
16:55:25 I0411 16:55:25.609381    6283 servicediscovery.go:201] Pod[service] [Serv] commit IPVS service patch: 
16:55:25 -A -t 10.254.0.24:2834 -s rr
16:55:25 -a -t 10.254.0.24:2834 -r 192.168.23.2:2345 -m -w 1
16:55:25 I0411 16:55:25.613297    6283 json.go:231] SB[vm-aRbFtiKhxu] got cmd:6
16:55:25 I0411 16:55:25.613316    6283 json.go:263] SB[vm-aRbFtiKhxu] delay version-awared command :6
16:55:25 I0411 16:55:25.615447    6283 json.go:231] SB[vm-aRbFtiKhxu] got cmd:6
16:55:25 I0411 16:55:25.615475    6283 json.go:263] SB[vm-aRbFtiKhxu] delay version-awared command :6
16:55:25 I0411 16:55:25.617522    6283 json.go:231] SB[vm-aRbFtiKhxu] got cmd:6
16:55:25 I0411 16:55:25.617547    6283 json.go:263] SB[vm-aRbFtiKhxu] delay version-awared command :6
16:55:25 I0411 16:55:25.619636    6283 json.go:231] SB[vm-aRbFtiKhxu] got cmd:6
16:55:25 I0411 16:55:25.619660    6283 json.go:263] SB[vm-aRbFtiKhxu] delay version-awared command :6
16:55:25 I0411 16:55:25.621747    6283 json.go:231] SB[vm-aRbFtiKhxu] got cmd:6
16:55:25 I0411 16:55:25.621771    6283 json.go:263] SB[vm-aRbFtiKhxu] delay version-awared command :6

@gnawux (Member Author) commented Apr 12, 2017

I moved the CI failure to #606; could this PR be merged now? @laijs @Crazykev

@Crazykev (Contributor):

@gnawux Changes LGTM. There is still a little flaw in that error message, but we can fix it later; this patch is more important.

@gnawux (Member Author) commented Apr 12, 2017

@Crazykev which error message do you mean? The "already running" one? I have added a commit for it: 9b4e438. Does this work?

@Crazykev (Contributor):

@gnawux I think "could be set to creatd" here should be "could be set to RUNNING".

@gnawux (Member Author) commented Apr 12, 2017

.... one line needs two fixes...

gnawux added 2 commits April 12, 2017 17:06
Signed-off-by: Wang Xu <gnawux@gmail.com>
if sandbox failed, the cleanup and pod provision may race

Signed-off-by: Wang Xu <gnawux@gmail.com>
@gnawux (Member Author) commented Apr 12, 2017

updated again

@Crazykev (Contributor):

CI passed, can I merge this?

@gnawux (Member Author) commented Apr 12, 2017

Just do it

@Crazykev merged commit cee2396 into hyperhq:master on Apr 12, 2017
@gnawux deleted the rollback branch on April 12, 2017 10:28