enhance: cache image size and oci image spec in pouchd #2301
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #2301      +/-   ##
==========================================
+ Coverage   66.61%   66.84%   +0.22%
==========================================
  Files         211      211
  Lines       17195    17205      +10
==========================================
+ Hits        11455    11500      +45
+ Misses       4337     4308      -29
+ Partials     1403     1397       -6
daemon/mgr/image_store.go (Outdated)

@@ -62,6 +65,16 @@ type imageStore struct {

	// primaryRefsIndexByID stores primary references, indexed by image ID
	primaryRefsIndexByID map[digest.Digest]referenceMap

	// cache ociImage to avoid opening a stream gRPC call to containerd, indexed by digest
Please document the consequence of not caching this data.

When containerd manages many big images, pouchd's ListImage consumes a lot of CPU through the containerd.Content.Read stream gRPC call. In Kubernetes, the kubelet's image GC manager sends a ListImage request every 30s; if containerd burns too much CPU serving these, running containers are impacted. Therefore, pouchd needs to cache the data.

Signed-off-by: Wei Fu <fuweid89@gmail.com>
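The caching idea described above can be sketched as a small in-memory store keyed by digest. This is a minimal illustration, not PouchContainer's actual implementation: `CtrdImageInfo`'s fields, `imageInfoCache`, and its methods are assumed names, and reads take only an `RLock` so concurrent ListImage calls never reach containerd.

```go
package main

import (
	"fmt"
	"sync"
)

// CtrdImageInfo is a hypothetical cached record: the image size plus a
// placeholder for the decoded OCI image spec. Field names are
// illustrative, not PouchContainer's real definitions.
type CtrdImageInfo struct {
	Size    int64
	OCISpec string // stands in for an ocispec.Image value
}

// imageInfoCache sketches the cache layer; lookups hold only a read
// lock, so frequent ListImage polling stays off the gRPC path.
type imageInfoCache struct {
	mu    sync.RWMutex
	infos map[string]CtrdImageInfo
}

func newImageInfoCache() *imageInfoCache {
	return &imageInfoCache{infos: make(map[string]CtrdImageInfo)}
}

// Put records (or refreshes) the cached info for a digest.
func (c *imageInfoCache) Put(dgst string, info CtrdImageInfo) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.infos[dgst] = info
}

// Get returns the cached info and whether it was present.
func (c *imageInfoCache) Get(dgst string) (CtrdImageInfo, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	info, ok := c.infos[dgst]
	return info, ok
}

func main() {
	cache := newImageInfoCache()
	cache.Put("sha256:abc", CtrdImageInfo{Size: 1024, OCISpec: "linux/amd64"})
	if info, ok := cache.Get("sha256:abc"); ok {
		fmt.Println(info.Size) // served from memory, no gRPC round trip
	}
}
```

With this shape, the 30s kubelet poll mentioned above hits only the map under a read lock instead of re-reading each image's content blob from containerd.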
	if err != nil {
		if err == errCtrdImageInfoNotExist {
If the image is not found in the local store, should we fall back to checking containerd?
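The fallback the reviewer asks about is a standard miss-then-fetch pattern. A minimal sketch, assuming made-up names (`store`, `getFromCache`, `fetchFromContainerd`, a size-only cache) rather than the PR's real signatures:

```go
package main

import (
	"errors"
	"fmt"
)

// errCtrdImageInfoNotExist mirrors the sentinel checked in the diff.
var errCtrdImageInfoNotExist = errors.New("ctrd image info does not exist")

type store struct {
	cache map[string]int64 // digest -> image size, simplified
}

func (s *store) getFromCache(dgst string) (int64, error) {
	size, ok := s.cache[dgst]
	if !ok {
		return 0, errCtrdImageInfoNotExist
	}
	return size, nil
}

// fetchFromContainerd stands in for the expensive Content.Read gRPC
// path; it backfills the cache so the next lookup stays local.
func (s *store) fetchFromContainerd(dgst string) (int64, error) {
	size := int64(2048) // pretend this was read over gRPC
	s.cache[dgst] = size
	return size, nil
}

// GetSize falls back to containerd on a cache miss instead of
// surfacing errCtrdImageInfoNotExist to the caller.
func (s *store) GetSize(dgst string) (int64, error) {
	size, err := s.getFromCache(dgst)
	if err != nil {
		if errors.Is(err, errCtrdImageInfoNotExist) {
			return s.fetchFromContainerd(dgst)
		}
		return 0, err
	}
	return size, nil
}

func main() {
	s := &store{cache: map[string]int64{}}
	size, _ := s.GetSize("sha256:abc") // miss -> fetched from containerd
	fmt.Println(size)
	size, _ = s.GetSize("sha256:abc") // now served from the cache
	fmt.Println(size)
}
```

The backfill step is what keeps the fallback rare: each digest pays the gRPC cost at most once until its cache entry is invalidated.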
// RemoveAllReferences removes all the references by the given imageID.
func (store *imageStore) RemoveAllReferences(id digest.Digest) error {

// ListCtrdImageInfo returns all the CtrdImageInfo.
func (store *imageStore) ListCtrdImageInfo() []CtrdImageInfo {
Support filtering later?
YES!
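The filtering idea agreed on above could look like the following sketch. The variadic filter parameter is the reviewers' "later" suggestion, not part of this PR, and the types are simplified stand-ins for those in the diff:

```go
package main

import (
	"fmt"
	"sync"
)

// CtrdImageInfo and imageInfoCache are illustrative stand-ins for the
// types introduced in the PR.
type CtrdImageInfo struct {
	Digest string
	Size   int64
}

type imageInfoCache struct {
	mu    sync.RWMutex
	infos map[string]CtrdImageInfo
}

// ListCtrdImageInfo copies the cached entries under a read lock and
// keeps only those accepted by every filter (no filters keeps all).
func (c *imageInfoCache) ListCtrdImageInfo(filters ...func(CtrdImageInfo) bool) []CtrdImageInfo {
	c.mu.RLock()
	defer c.mu.RUnlock()

	out := make([]CtrdImageInfo, 0, len(c.infos))
next:
	for _, info := range c.infos {
		for _, keep := range filters {
			if !keep(info) {
				continue next
			}
		}
		out = append(out, info)
	}
	return out
}

func main() {
	c := &imageInfoCache{infos: map[string]CtrdImageInfo{
		"sha256:a": {Digest: "sha256:a", Size: 100},
		"sha256:b": {Digest: "sha256:b", Size: 5000},
	}}
	big := func(i CtrdImageInfo) bool { return i.Size > 1000 }
	fmt.Println(len(c.ListCtrdImageInfo()), len(c.ListCtrdImageInfo(big)))
}
```

Returning a fresh slice (rather than the internal map's values by reference) keeps callers from mutating the cache outside the lock.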
// containerd.RemoveImage must succeed, so if the localStore has
// removed all the primary references, we should clear the CtrdImageInfo
// cache.
defer func() {
Should we also remove the cached image info once the image removal has succeeded?
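The deferred cleanup under discussion can be sketched as below. Names and signatures are simplified stand-ins, not PouchContainer's real code: the point is that the cache entry is dropped only once no primary reference remains, so the cache never outlives the image in containerd.

```go
package main

import "fmt"

// imageStore is a toy model of the PR's store: primary references and
// the cached per-image info, both keyed by image ID.
type imageStore struct {
	primaryRefs map[string][]string // imageID -> primary references
	ctrdCache   map[string]int64    // imageID -> cached image size
}

// RemoveAllReferences drops every reference to id; the deferred func
// then clears the cache entry if (and only if) nothing still refers
// to the image.
func (s *imageStore) RemoveAllReferences(id string) error {
	defer func() {
		if len(s.primaryRefs[id]) == 0 {
			delete(s.ctrdCache, id)
		}
	}()

	delete(s.primaryRefs, id)
	return nil
}

func main() {
	s := &imageStore{
		primaryRefs: map[string][]string{"sha256:a": {"busybox:latest"}},
		ctrdCache:   map[string]int64{"sha256:a": 1024},
	}
	_ = s.RemoveAllReferences("sha256:a")
	fmt.Println(len(s.ctrdCache)) // cache entry cleared with the image
}
```

Using `defer` ties the invalidation to every exit path of the removal function, which is why the diff above places the cache-clearing logic in a deferred closure.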
I have discussed this with @fuweid offline; this PR is LGTM.
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Ⅰ. Describe what this PR did
When containerd manages many big images, pouchd's ListImage consumes a lot
of CPU due to the containerd.Content.Read stream gRPC call.

In Kubernetes, the kubelet's image GC manager sends a ListImage request
every 30s. If containerd consumes too much CPU serving these, running
containers are impacted. Therefore, pouchd needs to cache the data.
Ⅱ. Does this pull request fix one issue?
NONE
Ⅲ. Why don't you add test cases (unit test/integration test)? (Do you really think no tests are needed?)
no need
Ⅳ. Describe how to verify it
CI
Ⅴ. Special notes for reviews