
bugfix: should initialize ContainerIO for all container #2668

Merged: 1 commit merged into AliyunContainerService:master on Jan 17, 2019
Conversation

fuweid (Contributor) commented Jan 15, 2019

Ⅰ. Describe what this PR did

Currently, when pouchd restarts, the restore function only re-connects to
existing running/paused containers and initializes container IO for them.
However, a stopped container also needs its container IO if the user wants
to restart it. For that case, we need to initialize container IO for all
existing containers, not just the running/paused ones.
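As a rough sketch of the intended flow (initContainerIO and connectToRuntime below are illustrative placeholder names, not the actual helpers in daemon/mgr/container.go):

// Sketch only: container IO is initialized for every restored container,
// and only running/paused containers are then re-connected to the runtime.
func restoreOne(ctx context.Context, mgr *ContainerManager, c *Container) error {
	// Every container gets its IO back, so a stopped container can still
	// be restarted after pouchd comes back up.
	if err := mgr.initContainerIO(c); err != nil {
		return err
	}

	// Created/stopped containers stop here; only running or paused
	// containers need to be re-attached to their running task.
	if !(c.IsRunning() || c.IsPaused()) {
		return nil
	}
	return mgr.connectToRuntime(ctx, c)
}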

Signed-off-by: Wei Fu <fuweid89@gmail.com>

Ⅱ. Does this pull request fix one issue?

none

Ⅲ. Why don't you add test cases (unit test/integration test)? (Do you really think no tests need to be added?)

Added one.

Ⅳ. Describe how to verify it

CI

Ⅴ. Special notes for reviews

codecov bot commented Jan 15, 2019

Codecov Report

Merging #2668 into master will increase coverage by 12.8%.
The diff coverage is 33.33%.


@@            Coverage Diff             @@
##           master    #2668      +/-   ##
==========================================
+ Coverage   56.82%   69.63%   +12.8%     
==========================================
  Files         281      281              
  Lines       18794    18795       +1     
==========================================
+ Hits        10679    13087    +2408     
+ Misses       6879     4250    -2629     
- Partials     1236     1458     +222
Flag             | Coverage        | Δ
#criv1alpha1test | 31.59% <33.33%> | -0.02% ⬇️
#criv1alpha2test | 35.77% <33.33%> | ø ⬆️
#integrationtest | 42.26% <33.33%> | ?
#nodee2etest     | 32.45% <33.33%> | +0.04% ⬆️
#unittest        | 27.07% <0%>     | -0.01% ⬇️

Impacted Files                  | Coverage        | Δ
daemon/mgr/container.go         | 58.94% <33.33%> | +23.22% ⬆️
cri/v1alpha2/cri.go             | 69.77% <0%>     | +0.24% ⬆️
daemon/logger/jsonfile/utils.go | 71.54% <0%>     | +0.81% ⬆️
daemon/mgr/container_utils.go   | 83.92% <0%>     | +1.19% ⬆️
pkg/user/user.go                | 40% <0%>        | +1.66% ⬆️
daemon/containerio/io.go        | 74.75% <0%>     | +1.94% ⬆️
pkg/utils/utils.go              | 83.26% <0%>     | +2.04% ⬆️
ctrd/client.go                  | 64.04% <0%>     | +2.24% ⬆️
daemon/daemon.go                | 69.23% <0%>     | +2.56% ⬆️
pkg/utils/timeutils.go          | 100% <0%>       | +2.63% ⬆️
... and 77 more

@pouchrobot pouchrobot added the kind/bug and size/M labels Jan 15, 2019
@fuweid fuweid changed the title bugfix: should initialize ContainerIO for all container [WIP] bugfix: should initialize ContainerIO for all container Jan 15, 2019
@fuweid fuweid changed the title [WIP] bugfix: should initialize ContainerIO for all container bugfix: should initialize ContainerIO for all container Jan 15, 2019
@fuweid fuweid requested review from HusterWan and rudyfly January 15, 2019 15:26
// When a container is stopped and pouchd then restarts, the restore logic should
// initialize the existing container's IO settings even though it is not alive.
func (suite *PouchDaemonSuite) TestRestartStoppedContainerAfterDaemonRestart(c *check.C) {
	cfg := daemon.NewConfig()
Collaborator
Is the daemon started with a config file, or with the command line as before?
We also need to think about the alios test.

Contributor Author
Synced with @rudyfly offline. @rudyfly is right: we should use a specific config file when writing daemon-related cases. Otherwise, /etc/pouch/config.json will impact the running case (see the sketch below).

Updated it.
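For illustration only, a rough sketch of a daemon test using its own config file; NewDaemon, StartDaemon, Clean, the --config-file flag usage, and the config keys here are assumed names, not the exact pouch test API:

// Hypothetical sketch (needs encoding/json, io/ioutil, path/filepath imports):
// give the test daemon a dedicated config file so the global
// /etc/pouch/config.json cannot influence this case.
cfgFile := filepath.Join(c.MkDir(), "config.json")
data, err := json.Marshal(map[string]interface{}{
	"debug": true, // illustrative option only
})
c.Assert(err, check.IsNil)
c.Assert(ioutil.WriteFile(cfgFile, data, 0644), check.IsNil)

// Start pouchd for this test with the per-test config file.
d := daemon.NewDaemon(c, "--config-file", cfgFile)
defer d.Clean()
c.Assert(d.StartDaemon(), check.IsNil)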

id := c.Key()
// recover the running or paused container.

// NOTE: when pouch restarts, we need to initialize container IO
Collaborator
s/restarts/is restarting

@@ -280,6 +281,13 @@ func (mgr *ContainerManager) Restore(ctx context.Context) error {
return err
}

// recover the running or paused container.
if !(c.IsRunning() || c.IsPaused()) {
Collaborator
Running or paused containers don't need to be recovered?

Contributor Author
There is no need to recover the created/stopped containers here; only running/paused ones are re-connected.

daemon/mgr/container.go (outdated review thread, resolved)
@fuweid fuweid changed the title bugfix: should initialize ContainerIO for all container [WIP] bugfix: should initialize ContainerIO for all container Jan 16, 2019
@fuweid fuweid changed the title [WIP] bugfix: should initialize ContainerIO for all container bugfix: should initialize ContainerIO for all container Jan 16, 2019
rudyfly (Collaborator) commented Jan 17, 2019

LGTM

@pouchrobot pouchrobot added the LGTM label (one maintainer or community participant agrees to merge the pull request) Jan 17, 2019
@rudyfly rudyfly merged commit 60c8bc7 into AliyunContainerService:master Jan 17, 2019
@fuweid fuweid deleted the bugfix_restore_container_io branch February 21, 2019 03:48
Labels: kind/bug, LGTM, size/M

4 participants