
upgrade: upgrade the vendor of cri-o/ocicni #2282

Merged
1 commit merged into AliyunContainerService:master on Oct 15, 2018

Conversation

@starnop (Contributor) commented Sep 28, 2018

Signed-off-by: Starnop starnop@163.com

Ⅰ. Describe what this PR did

At present, cri-o/ocicni watches the CNI configuration files through inotify and quits without reporting any error when inotify resources run out, which does not make sense. Fortunately, the issue has been fixed in cri-o/ocicni@0831829, so we update the vendor of cri-o/ocicni.
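For context, the gist of the upstream fix is that the error from creating the inotify watcher is surfaced to the caller instead of being silently swallowed. Below is a minimal sketch of that pattern, assuming fsnotify is the underlying watcher library; the function name and details here are illustrative, not the actual upstream code:

```go
package cni

import (
	"fmt"

	"github.com/fsnotify/fsnotify"
)

// newConfWatcher creates an inotify-backed watcher for the CNI conf directory.
// When the kernel limit fs.inotify.max_user_instances is exhausted,
// fsnotify.NewWatcher fails with "too many open files"; returning that error
// lets the daemon report it at startup instead of quitting silently.
func newConfWatcher(confDir string) (*fsnotify.Watcher, error) {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return nil, fmt.Errorf("could not create new watcher: %v", err)
	}
	if err := watcher.Add(confDir); err != nil {
		watcher.Close()
		return nil, fmt.Errorf("failed to watch %s: %v", confDir, err)
	}
	return watcher, nil
}
```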

Ⅱ. Does this pull request fix one issue?

None.

Ⅲ. Why don't you add test cases (unit test/integration test)? (Do you really think no tests are needed?)

Ⅳ. Describe how to verify it

Use the command `sudo sysctl -w fs.inotify.max_user_instances=10` to limit the number of inotify instances, and you will get an error like this:

# pouchd --enable-cri
INFO[2018-09-28T13:00:41.374491472Z] ctrd: new containerd process, pid: 14674     
INFO[0000] starting containerd                           module=containerd revision=773c489c9c1b21a6d78b5c538cd395416ec50f88 version=v1.0.3
INFO[0000] loading plugin "io.containerd.content.v1.content"...  module=containerd type=io.containerd.content.v1
INFO[0000] loading plugin "io.containerd.snapshotter.v1.btrfs"...  module=containerd type=io.containerd.snapshotter.v1
WARN[0000] failed to load plugin io.containerd.snapshotter.v1.btrfs  error="path /var/lib/pouch/containerd/root/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module=containerd
INFO[0000] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  module=containerd type=io.containerd.snapshotter.v1
INFO[0000] loading plugin "io.containerd.metadata.v1.bolt"...  module=containerd type=io.containerd.metadata.v1
WARN[0000] could not use snapshotter btrfs in metadata plugin  error="path /var/lib/pouch/containerd/root/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module="containerd/io.containerd.metadata.v1.bolt"
INFO[0000] loading plugin "io.containerd.differ.v1.walking"...  module=containerd type=io.containerd.differ.v1
INFO[0000] loading plugin "io.containerd.gc.v1.scheduler"...  module=containerd type=io.containerd.gc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.containers"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.content"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.diff"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.events"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.healthcheck"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.images"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.leases"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.namespaces"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.snapshots"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.monitor.v1.cgroups"...  module=containerd type=io.containerd.monitor.v1
INFO[0000] loading plugin "io.containerd.runtime.v1.linux"...  module=containerd type=io.containerd.runtime.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.tasks"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.version"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.introspection"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] serving...                                    address="/run/containerd/debug.sock" module="containerd/debug"
INFO[0000] serving...                                    address="/var/run/containerd.sock" module="containerd/grpc"
INFO[0000] containerd successfully booted in 0.003135s   module=containerd
INFO[2018-09-28T13:00:41.399856096Z] success to create 5 containerd clients, connect to: /var/run/containerd.sock 
INFO[2018-09-28T13:00:41.537924242Z] Start CRI service with CRI version: v1alpha2 
INFO[2018-09-28T13:00:41.538337026Z] Stream Server will bind to address 10.140.0.7:10010 
INFO[2018-09-28T13:00:41.538553827Z] start new CNI                                
INFO[2018-09-28T13:00:41.538605687Z] start init CNI                               
INFO[2018-09-28T13:00:41.53884408Z] init CNI success                             
WARN[2018-09-28T13:00:41.541473527Z] failed to find group pouch, cannot change unix socket /var/run/pouchd.sock to pouch group 
INFO[2018-09-28T13:00:41.541559456Z] start to listen to: unix:///var/run/pouchd.sock 
Error: failed to get CriManager with error: failed to create cni manager: could not create new watcher too many open files
ERRO[2018-09-28T13:00:41.541667407Z] failed to get CriManager with error: failed to create cni manager: could not create new watcher too many open files 

Ⅴ. Special notes for reviews

BTW, this also updates the vendor of github.com/containernetworking/cni.

@codecov bot commented Sep 28, 2018

Codecov Report

Merging #2282 into master will increase coverage by 0.08%.
The diff coverage is 45.45%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #2282      +/-   ##
==========================================
+ Coverage   66.97%   67.05%   +0.08%     
==========================================
  Files         211      211              
  Lines       17306    17315       +9     
==========================================
+ Hits        11591    11611      +20     
+ Misses       4312     4304       -8     
+ Partials     1403     1400       -3
| Flag | Coverage Δ |
|---|---|
| #criv1alpha1test | 32.34% <45.45%> (+0.27%) ⬆️ |
| #criv1alpha2test | 35.72% <45.45%> (+0.12%) ⬆️ |
| #integrationtest | 40.18% <0%> (ø) ⬆️ |
| #nodee2etest | 33.13% <45.45%> (+0.01%) ⬆️ |
| #unittest | 23.4% <ø> (ø) ⬆️ |

| Impacted Files | Coverage Δ |
|---|---|
| cri/ocicni/cni_manager.go | 71.42% <45.45%> (-8.58%) ⬇️ |
| daemon/mgr/snapshot.go | 89.85% <0%> (-4.35%) ⬇️ |
| cri/v1alpha2/cri_wrapper.go | 61.2% <0%> (-1.2%) ⬇️ |
| cri/v1alpha2/cri.go | 66.32% <0%> (ø) ⬆️ |
| cri/v1alpha1/cri.go | 60.7% <0%> (+0.32%) ⬆️ |
| daemon/mgr/container.go | 57.63% <0%> (+0.4%) ⬆️ |
| ctrd/container.go | 58.8% <0%> (+0.95%) ⬆️ |
| daemon/mgr/container_utils.go | 84.33% <0%> (+1.2%) ⬆️ |
| pkg/meta/store.go | 64.06% <0%> (+1.56%) ⬆️ |
| daemon/logger/jsonfile/utils.go | 73.17% <0%> (+1.62%) ⬆️ |
| ... and 3 more | |

return ip, nil
if len(results) > 0 {
// NOTE: only get the first IP of the first network at present
result := results[0].(*cnicurrent.Result)
Contributor

Could we use the (val, ok) form to do the assertion? It's safer. 😅
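For reference, a small self-contained example (hypothetical values, not the project's code) of why the comma-ok form is safer: a bare type assertion panics at runtime when the dynamic type does not match, while (val, ok) turns the mismatch into a normal error path.

```go
package main

import "fmt"

func main() {
	var v interface{} = "not an int"

	// Bare assertion: n := v.(int) would panic here at runtime.

	// Comma-ok assertion: ok reports whether the assertion held,
	// so a mismatch can be handled without panicking.
	if n, ok := v.(int); ok {
		fmt.Println("got int:", n)
	} else {
		fmt.Println("unexpected type, return an error instead of panicking")
	}
}
```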

@starnop force-pushed the cni-upgrade branch 2 times, most recently from f57f394 to 9af1907, on September 29, 2018 01:51
@starnop changed the title from "upgrade: upgrade the vendor of cri-o/ocicni" to "[WIP] upgrade: upgrade the vendor of cri-o/ocicni" on Sep 29, 2018
@allencloud (Collaborator)

Any update on this? Still WIP? @starnop

@starnop force-pushed the cni-upgrade branch 3 times, most recently from d361f82 to 191dfca, on October 10, 2018 03:03
@starnop changed the title from "[WIP] upgrade: upgrade the vendor of cri-o/ocicni" to "upgrade: upgrade the vendor of cri-o/ocicni" on Oct 10, 2018
@starnop (Contributor, Author) commented Oct 10, 2018

@allencloud It has been updated. :)

@starnop (Contributor, Author) commented Oct 10, 2018

@fuweid @YaoZengzeng PTAL. ^_^

if len(results) > 0 {
// NOTE: only get the first IP of the first network at present
// TODO: support multiple network
if result, ok := results[0].(*cnicurrent.Result); ok {
Contributor

So, is the first one always a *cnicurrent.Result?

Contributor Author

Yeah. As the comment says, we only get the first IP of the first network at present. :)

@@ -32,7 +33,7 @@ func NewCniManager(cfg *config.Config) (CniMgr, error) {
}
}

plugin, err := ocicni.InitCNI(networkPluginConfDir, networkPluginBinDir)
plugin, err := ocicni.InitCNI("", networkPluginConfDir, networkPluginBinDir)
Contributor

@starnop could you add a comment here explaining why we set it to be empty?
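For context, and as an assumption based on the diff above rather than a confirmed description of the upstream API: the new first argument of ocicni.InitCNI appears to be a default network name, and passing an empty string lets ocicni pick the default network from the configuration directory instead of pinning one. A hedged sketch of how the call site could document that:

```go
package cni

import (
	"fmt"

	"github.com/cri-o/ocicni/pkg/ocicni"
)

// initPlugin wires up ocicni for pouchd. Assumption: the empty first
// argument means "no fixed default network name", so ocicni falls back
// to the first network it discovers under confDir.
func initPlugin(confDir, binDir string) (ocicni.CNIPlugin, error) {
	plugin, err := ocicni.InitCNI("", confDir, binDir)
	if err != nil {
		return nil, fmt.Errorf("failed to initialize cni: %v", err)
	}
	return plugin, nil
}
```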

if err != nil {
return "", fmt.Errorf("failed to get pod network status: %v", err)
}

return ip, nil
if len(results) > 0 {
Collaborator

I suggest that we use early returns to reduce indentation.

if len(results) == 0 {
    return "", fmt.Errorf("failed to get pod network status for nil result")
}

result, ok := results[0].(*cnicurrent.Result)
if !ok {
    return "", fmt.Errorf("failed to get pod network status for wrong result: %+v", results[0])
}

if len(result.IPs) == 0 {
    return "", fmt.Errorf("failed to get pod network status for nil IP")
}

ip := result.IPs[0].Address.IP.String()
return ip, nil

@allencloud (Collaborator)

I think we can merge this once we finish some minor changes. @starnop

Signed-off-by: Starnop <starnop@163.com>
@starnop (Contributor, Author) commented Oct 12, 2018

All of the above has been updated. :)

@allencloud (Collaborator)

LGTM by @fuweid

@pouchrobot added the LGTM label (one maintainer or community participant agrees to merge the pull request) on Oct 15, 2018
@allencloud merged commit 6cbbd01 into AliyunContainerService:master on Oct 15, 2018
Labels
areas/network, areas/test, LGTM (one maintainer or community participant agrees to merge the pull request), size/XXL
5 participants