refactor: re-design upgrade feature #2211
Conversation
Codecov Report
@@ Coverage Diff @@
## master #2211 +/- ##
==========================================
+ Coverage 67.33% 67.37% +0.03%
==========================================
Files 213 215 +2
Lines 17517 17593 +76
==========================================
+ Hits 11795 11853 +58
- Misses 4327 4340 +13
- Partials 1395 1400 +5
ctrd/container_types.go
Outdated
Labels map[string]string
IO *containerio.IO
Spec *specs.Spec
SnapshotKey string
What is this used for?
`SnapshotKey` is the snapshot id used by the container. We used to use the container id as the snapshot id, but once we support upgrading a container, the snapshot id may differ from the container id. So we must record the snapshot id that the container uses.
daemon/mgr/container.go
Outdated
// Upgrade upgrades a container with new image and args.
func (mgr *ContainerManager) Upgrade(ctx context.Context, name string, config *types.ContainerUpgradeConfig) error {
c, err := mgr.container(name)
func (mgr *ContainerManager) getImageRef(ctx context.Context, image string) (digest.Digest, string, error) {
I do not think this is a function of ContainerManager. It is better to locate it in ImageManager, so we would use `containerMgr.ImageMgr.GetImageRef`. WDYT? @HusterWan
Good advice, I will move those functions to ImageMgr.
daemon/mgr/container.go
Outdated
return imageID, primaryRef.String(), nil
}

func (mgr *ContainerManager) getOCIImageConfig(ctx context.Context, image string) (ocispec.ImageConfig, error) {
Same issue, improper place to locate the function here.
daemon/mgr/container.go
Outdated
return ociImage.Config, nil
}

func (mgr *ContainerManager) prepareContainerEntrypointForUpgrade(ctx context.Context, c *Container, config *types.ContainerUpgradeConfig) error {
For all the related Upgrade functions: could we move all of them to a separate file, container_upgrade.go?
Just a piece of advice.
daemon/mgr/container.go
Outdated
// Upgrade success, remove snapshot of old container
if err := mgr.Client.RemoveSnapshot(ctx, oldSnapID); err != nil {
// TODO(ziren): remove old snapshot failed, may cause dirty data
logrus.Errorf("failed to remove snapshot %s: %v", oldSnapID, err)
Do we need the log level `Error` here? Is `Warn` enough?
Just double-checking the right way to use error and warn.
daemon/mgr/container_types.go
Outdated
@@ -258,6 +258,9 @@ type Container struct {

// MountFS is used to mark the directory of mount overlayfs for pouch daemon to operate the image.
MountFS string `json:"-"`

// SnapshotKey specify id of the snapshot that container using.
SnapshotKey string
How about directly using `SnapshotID`?
apis/swagger.yml
Outdated
properties:
  HostConfig:
    $ref: "#/definitions/HostConfig"
ContainerUpgradeConfig is used for API "POST /containers/upgrade". when upgrade a container,
"POST /containers/{id}/upgrade"? Are you missing the `{id}/`? @HusterWan
s/when upgrade/When upgrading/? Uppercase and add the "ing"?
@allencloud fixed
Could we move this forward ASAP? @HusterWan
Of course, I have updated this PR.
@@ -322,9 +321,9 @@ func (mgr *ContainerManager) Create(ctx context.Context, name string, config *ty
return nil, errors.Wrapf(errtypes.ErrInvalidParam, "unknown runtime %s", config.HostConfig.Runtime)
}

config.Image = primaryRef.String()
snapID := id
Could you explain why the generated container ID is used as the snapshot ID?
SnapshotID used to be equal to ContainerID, but after an upgrade they are different. So I use the param `snapID` here just as a reminder that the snapshot id and the container id can differ.
Signed-off-by: Michael Wan <zirenwan@gmail.com>
Since we need more discussion to determine how to integrate
LGTM
Signed-off-by: Michael Wan <zirenwan@gmail.com>
Ⅰ. Describe what this PR did
Before I re-designed the `upgrade` feature, I thought about one question for a long time: what would a user use `upgrade` for? If you want to change memory, cpu, env, etc., you should use the `update` feature. But if you want to update the container's image while inheriting the volumes and network of the old container, then the `upgrade` feature is what you need.

After figuring out the purpose of the `upgrade` feature, I re-designed the API interface and the core logic of `upgrade`.

About API

Now the `upgrade` API request body is as below; you may only specify the `Image` used to create the new container and the new `Entrypoint` and `Cmd` that the new container will run. I think that is enough to upgrade a container from an old image to a new one.
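A hypothetical minimal example of such a request body, carrying only the three fields described here (field names taken from the description above; values are invented for illustration):

```json
{
  "Image": "busybox:latest",
  "Entrypoint": ["sh"],
  "Cmd": ["-c", "top"]
}
```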
About the core logic

I think we must solve the problems below when implementing the `upgrade` feature.

1. Which cmd should be used to create the new container when upgrading

There are four commands involved; when creating the new container during upgrade, the order of preference is:
- `CMD3`;
- `CMD1`, if `CMD4` is not specified;
- `CMD4`, if `CMD1` is also not specified;
- `CMD2` should not be used.

2. Inherit the volumes and network from the old container when creating the new container

The new container inherits the `mounts` information of the old one, so it can keep using the volumes of the old container; we should also parse the volume information contained in the new image.

3. How to prepare a new rootfs for the new container

In containerd, a `snapshot` represents the rootfs of a container, so we should use the new image to create a new `snapshot` for the new container during upgrade. To avoid a snapshot name conflict, we append a random suffix string to the container id as the name of the new snapshot.

4. Rollback

If any error occurs during upgrade, we must be able to roll back the upgrade and recover the old container exactly as it was before.
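The cmd selection in point 1 boils down to picking the first non-empty command in priority order. A generic sketch of that chain (hypothetical helper and variable names, not the actual PouchContainer code):

```go
package main

import "fmt"

// firstNonEmpty returns the first command slice in priority order that is set.
// The caller passes candidates from highest to lowest priority.
func firstNonEmpty(cmds ...[]string) []string {
	for _, c := range cmds {
		if len(c) > 0 {
			return c
		}
	}
	return nil
}

func main() {
	cmd3 := []string{}                // not specified in the upgrade request
	cmd1 := []string{"/entry.sh"}     // next candidate in the chain
	cmd4 := []string{"/bin/sh", "-c"} // lowest-priority fallback
	fmt.Println(firstNonEmpty(cmd3, cmd1, cmd4))
}
```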
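One way to read point 2: keep every mount of the old container, then add an entry for each volume path declared by the new image that is not already covered. A sketch with simplified, hypothetical types (not the actual implementation):

```go
package main

import "fmt"

// Mount is a simplified stand-in for a container mount point.
type Mount struct {
	Destination string
}

// inheritMounts keeps all mounts of the old container and appends an entry
// for each volume path declared by the new image that the old container did
// not already mount.
func inheritMounts(oldMounts []Mount, imageVolumes []string) []Mount {
	seen := make(map[string]bool, len(oldMounts))
	merged := append([]Mount{}, oldMounts...)
	for _, m := range oldMounts {
		seen[m.Destination] = true
	}
	for _, v := range imageVolumes {
		if !seen[v] {
			merged = append(merged, Mount{Destination: v})
		}
	}
	return merged
}

func main() {
	old := []Mount{{Destination: "/data"}}
	fmt.Println(inheritMounts(old, []string{"/data", "/logs"}))
}
```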
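The snapshot naming scheme in point 3 can be sketched as appending a random suffix to the container id (a hypothetical helper illustrating the idea, not the actual PouchContainer code):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newSnapshotKey appends a random hex suffix to the container ID so that the
// snapshot created for the upgraded container cannot collide with the old
// snapshot, which was named after the container ID alone.
func newSnapshotKey(containerID string) string {
	buf := make([]byte, 4)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	return containerID + "-" + hex.EncodeToString(buf)
}

func main() {
	fmt.Println(newSnapshotKey("0123456789ab"))
}
```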
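The rollback requirement in point 4 fits a common Go pattern: each completed step registers an undo function, and on error the undos run in reverse order. A self-contained sketch with invented step names (not the actual PouchContainer code):

```go
package main

import (
	"errors"
	"fmt"
)

// upgrade records each preparation step in log; every completed step also
// registers an undo function. If a later step fails, the deferred block runs
// the undo functions in reverse order, recovering the old container.
func upgrade(log *[]string, failStart bool) (err error) {
	var undo []func()
	defer func() {
		if err != nil {
			for i := len(undo) - 1; i >= 0; i-- {
				undo[i]()
			}
		}
	}()

	*log = append(*log, "stop old container")
	undo = append(undo, func() { *log = append(*log, "restart old container") })

	*log = append(*log, "create new snapshot")
	undo = append(undo, func() { *log = append(*log, "remove new snapshot") })

	if failStart {
		return errors.New("start new container failed")
	}
	*log = append(*log, "start new container")
	return nil
}

func main() {
	var log []string
	fmt.Println(upgrade(&log, true), log)
}
```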
Ⅱ. Does this pull request fix one issue?
none
Ⅲ. Why don't you add test cases (unit test/integration test)? (Do you really think no tests are needed?)
Yes, more test cases are added.
Ⅳ. Describe how to verify it
none
Ⅴ. Special notes for reviews
none