note: the notes are checked in after every meeting to https://github.com/containernetworking/meeting-notes
An editable copy is hosted at https://hackmd.io/jU7dQ49dQ86ugrXBx1De9w. Feel free to add agenda items there
- [Tomo] Regrets due to headache
- [Dan] bugfixes to nftables stuff: containernetworking/plugins#1116, containernetworking/plugins#1117, containernetworking/plugins#1120
- Kubecon recap?
- DRA excitement
- [Tamilmani] would like to discuss this PR - containernetworking/cni#1121
- Question: what does it mean when IPAM returns an interface?
- Could you return the name of the bridge?
- Could you return the macvlan "master"? definitely
- ENI: it's not a master, but the uplink
- PR containernetworking/cni#1137 filed
- Q: Skip for KubeCon week?
- A: Casey/Tomo regrets, unrelated reasons
- [Zappa] cni.dev not updated with latest spec. I can do this if this is an oversight. This is just a reminder for me
- [Lionel] Validation
- containernetworking/cni#1132
- [tomo] need to discuss
- where to implement (plugin or other)
- how we define (in spec?)
- example: define a JSON schema in a separate file (in a different repo or directory)
- Likely conclusion: we could keep a schema (e.g. JSON Schema) in the repository and recommend using it for validation. No need to update the SPEC. It would be good to generate such a schema file from the current Go types.
- TODO: come up with a way to generate the schema file (rough sketch below)
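One possible way to generate such a schema file (a sketch only; the package choice, invopop/jsonschema, and the exact type to reflect are assumptions, not something decided in the meeting):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/containernetworking/cni/pkg/types"
	"github.com/invopop/jsonschema"
)

func main() {
	// Reflect a JSON Schema from the generic network-config Go struct;
	// individual plugins could do the same against their own config types.
	schema := jsonschema.Reflect(&types.NetConf{})
	out, err := json.MarshalIndent(schema, "", "  ")
	if err != nil {
		panic(err)
	}
	// Write this to e.g. a schema/ directory in the repo so users can
	// validate their configs against it.
	fmt.Println(string(out))
}
```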
- [Zappa] do we have any call outs for kubecon that I should include?
- Casey: remind everyone that CNI composes in two dimensions: multiple interfaces, and multiple plugins for the same interface. Thus, a single gRPC call cannot represent the current API surface
- [Swagat] bridge CNI plugin
- containernetworking/plugins#1107
- related to isolating containers connected to the same bridge; there is similar functionality in Docker
Regrets: Casey, Tomo
- [Lionel/Antonio] CNI DRA Driver
- How to validate values for CNI plugins.
- e.g. a VALIDATE call to make sure a config is "valid"
- Where "valid" means not just structure (which we already check/validate)
- but also that the values of the config are within the bounds the plugin will accept
- to catch/minimize schedule-time add failures
- CHECK is for containers that already are scheduled, and can't work for this
- STATUS doesn't currently take a config at all.
- We do not currently require plugins to define value ranges they accept, so ADD would just fail today.
- There would likely still be some sort of practical gap between VALID = TRUE and ADD = FAIL, due to node/plugin runtime state?
- Could also be used for scheduling feedback in K8S?
- [Dylan/Isovalent] Delegated IPAM issue
- [Lionel/Antonio] CNI DRA Driver
- Q: does it make sense for CNI via DRA to have the end-goal of mediating the primary network?
- Q: How might the CNI API better match (DRA / kubelet)'s lifecycle?
- Observation: biggest mismatch is chaining
- [doug] Fear is having 2 overlapping things people have to deal with, rather than 1.
- [ben] 2 things CNI does do well:
- doesn't allow platform to disallow networking extensions.
- supports N number of extensions. Those are the properties I think any CNI replacement solution has to have a path toward.
- Casey: CNI and DRA look, from a higher level, identical. Both are privileged hooks in to Pod and PodSandbox lifecycle. The only big difference is composition (i.e. chaining).
- The nice thing that DRA brings to the table is declarative in-cluster configuration (A Real K8S API)
- The nice thing that CNI brings to the table is vendor-and-user-extensible networking hooks
- [cdc] plugins release
- Reverted exclude-cidr for bandwidth plugin containernetworking/plugins#1105
- containernetworking/plugins#1092
- hopefully cut release live :-)
- [Lionel/Antonio] CNI DRA Driver
- kubernetes/enhancements#4861
- POC: https://github.com/LionelJouin/network-dra?tab=readme-ov-file#result
- We chat about "dependent" relations, i.e. a DRA driver that consumes hardware from another driver
- Vision: something multus-like that is DRA aware
- [zappa/dan] CNI/NPWG as a sig-network subproject
- [danwinship] docs update for nftables patches, containernetworking/cni.dev#143 (merged!)
- plugins release:
- waiting for one PR containernetworking/plugins#1097
- vrf flake: containernetworking/plugins#1103
- as soon as that merges, casey will cut release
- [zappa] quick chat about conf
- plugins release: which PRs should get in?
- bandwidth fixes (containernetworking/plugins#1100, containernetworking/plugins#1097)
- host-device temp netns: containernetworking/plugins#1073
- STATUS passthrough to IPAM: containernetworking/plugins#1082
- sbr flake containernetworking/plugins#1096
- [Zappa] propose Lionel/Ben as CNI maintainers
- heck yeah!
- [tomo] CNI2.0 requirement brainstorming
- Revisit the CNI 1.x requirements and check whether each still applies: do people want to accomplish these tasks without explicit support in 2.0, and if so, how would they do so? Perhaps frame this as a backwards-compatibility problem.
- Chaining
- Cached state (for GC)
- Return type w/ interfaces & addresses
- init
- de-init (e.g. events for bridge deletion)
- capture interface events? (i.e. v6 SLAAC events)
- dynamically changing attributes without ADD/DEL
- Route
- IP address
- MTU
- and so on
- How
- Adding API verb (CHANGE)?
- ADD/DEL for attributes?
- [lionel] Please continue to review/comment on containernetworking/plugins#1096
- Regrets: Tomo
- [Lionel] PRs
- containernetworking/plugins#1088
- merged!
- containernetworking/plugins#1087
- merged!
- containernetworking/plugins#1088
- containernetworking/plugins#935 (nftables) also ready for re-review
- Status of GC and STATUS:
- CRI-O calls status, will call GC "soon"
- Containerd? No support for status, PR is containerd/go-cni#114
- Multus? STATUS/GC in review.
- STATUS/GC support only in clusterNetwork (i.e. eth0)
- FYI: CNI "support" for DRA being discussed: kubernetes/enhancements#4817
- [zappa] moved agenda to next week
- [zappa] US Holiday
- [tomo] CNI2.0 requirement brainstorming (skip to next)
- Revisit CNI1.x requirements and check whether it is applicable or not
- [Lionel] PRs (skip to next)
- [cdc] can't make it, on train
- [zappa/lionel] CNI 2.0/DRA brainstorming
- How is the DRA driver integrated in the container runtime?
- Many ideas from Mike Brown -> Zappa
- https://github.com/MikeZappa87/cniv2/blob/main/pkg/types/cni.proto
- Add proto for Service mesh or other servers post AddAttachment
- need finalizer event as well?
- Remove exec model, remove on-disk configs
- Need plugin registration method?
- [tomo] Reviews...
- Bridge CNI PR request.
- [danwinship] nftables containernetworking/plugins#935
- cdc to review
- minor questions about /sbin/nft vs. direct netlink APIs
- [zappa/antonio] 2.0 proto design discussion
- [zappa] CNI 2.0 paradigm shift to RPC
- [cdc] Anything else stalled on review? assign it to @squeed as reviewer
- [cdc] poor bandwidh, back next week!
- [zappa] CNI 2.0 paradigm shift to RPC (moved to next week)
- [zappa] fyi: containerd/containerd#10434
- [tomo] (skip this week) Multi-gc containernetworking/cni#1091
- [doug] Question about "Support safe subdirectory-based plugin conf loading"
- containernetworking/cni#1052
- First off, I love this and I'm PoC'ing up using this functionality.
- Question about bytes in the conf, especially in the NetworkConfFromFile method
- Do we only represent the bytes as they were in the file in conf.Bytes?
- Was wondering because the config is manipulated, but, not the returned bytes of the config.
- skipped
- Tomo cannot join due to power outage
- [cdc] containernetworking/cni#1103, review wanted
- We merged 1052
- Mike chats about gRPC
- [Tomo] not join, due to national holiday
- [tomo] Multi-gc containernetworking/cni#1091
- [cdc]
- what's difference between current vs. new?
- what's about delegation?
- [cdc]
- [cdc] containernetworking/cni#1103
- "cni.dev/attachments" vs. "cni.dev/valid-attachments"
- Oops, we used different keys in SPEC vs libcni
- Should we change SPEC or libcni?
- Decision: valid-attachments is a clearer name in this case.
- [Tomo] cannot make today's call
- [Ben] merge: containernetworking/cni#1052 for 1.2
- [cdc] oops, wrong array key in libcni vs. spec: containernetworking/cni#1101
- solution: fix spec, set both versions in libcni
- containernetworking/cni#1103
- [Tomo] PR: containernetworking/plugins#1058
- and we need to release new plugins again for Cilium users ;) containernetworking/plugins#1053
- [cdc] 1.2 review
- 1.1 implementation status
- containerd stuck on Masterminds
- we fix that!
- [Tomo] PR: containernetworking/plugins#1054
- [Tomo] containernetworking/cni#1097
- for issue at prev call containernetworking/cni#1096
- PR pass
- We merge some PRs for v1.1.1
- We will cut v1.1.1 shortly, once PRs merge and deps are bumped
- Also need wording for the AI: "current GC is mainly for a single CNI config; a GC that supports multiple CNI configs still needs to be designed"
- 1.1 runtime checkin
- cri-o
- STATUS done, GC in progress
- multus
- k8snetworkplumbingwg/multus-cni#1273
- still in review...
- containerd
- hung up on semver replacement
- 1.2 wishes milestone
- drop-ins
- [ben] just docs but would like to get containernetworking/cni#1081 in too for 1.2 after drop-in merges.
- multi-network GC?
- INIT
- DEINIT
- metadata for interface (and more, e.g. address?)
- capability/runtimeConfig for deviceID?
- [cdc / lionel] oops, overriding MarshalJSON() breaks types that embed it -- containernetworking/plugins#1050 (review)
- oops, gotta fix this :-) tomo to take a look
- heck you, struct embedding
- containernetworking/cni#1096
- Updated: fix PR ready: containernetworking/cni#1097
- [jaime] cri-o GC call
- cri-o/cri-o#8245
- Casey PTAL
- I won't be able to attend the meeting this week
- [Tomo] GC Improvement discussion
- Gist: should we add GC for 'CNI Configs', not 'a CNI config', for runtimes with multiple CNI configs (e.g. multus or containerd)?
- Note: Containerd supports multiple CNI configs with the following config:
[plugins."io.containerd.grpc.v1.cri".cni] max_conf_num = 2
With the above config, containerd picks two CNI configs from the CNI directory.
- Current risk:
- Per the CNI spec, if a plugin identifies attachment uniqueness by CONTAINER_ID and IFNAME, then the current GC (where validAttachments are identified by CONTAINER_ID, IFNAME AND the CNI network name) may remove valid attachments unexpectedly...
- NetA: VA1, VA2
- VB1, VB2 appear to be invalid from NetA's point of view
- NetB: VB1, VB2
- VA1, VA2 appear to be invalid from NetB's point of view
- Thoughts:
- keep the current API for a single CNI config
- Just add new API
GCNetworkLists(ctx context.Context, net []*NetworkConfigList, args *GCArgs) error
- get CNI Configs
- gather each network's validAttachments
- for each CNI plugin, call it
- This API also optimizes GC calls (i.e. less 'stop the world'); rough sketch below
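- A very rough sketch of that flow (libcni method and argument names here are assumptions based on the current GC support and may not match the final design in containernetworking/cni#1091): build the union of attachments that any network still considers valid, then GC every config with that combined set, so a plugin shared by several networks does not delete another network's attachments.

```go
package main

import (
	"context"
	"fmt"

	"github.com/containernetworking/cni/libcni"
	"github.com/containernetworking/cni/pkg/types"
)

// GCNetworkLists is a sketch of the proposed multi-config GC: every network
// is GC'd against the union of valid attachments across *all* networks, so
// attachments belonging to NetB are not treated as stale while GC'ing NetA.
func GCNetworkLists(ctx context.Context, cni *libcni.CNIConfig, lists []*libcni.NetworkConfigList, validPerNet map[string][]types.GCAttachment) error {
	var all []types.GCAttachment // union of still-valid (containerID, ifname) pairs
	for _, atts := range validPerNet {
		all = append(all, atts...)
	}
	for _, list := range lists {
		if err := cni.GCNetworkList(ctx, list, &libcni.GCArgs{ValidAttachments: all}); err != nil {
			return fmt.Errorf("GC of network %q: %w", list.Name, err)
		}
	}
	return nil
}
```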
- Required for SPEC:
- need to change https://github.com/containernetworking/cni/blob/main/SPEC.md#gc-clean-up-any-stale-resources
- from: cni.dev/attachments (array of objects): The list of still valid attachments to this network:
- to: cni.dev/attachments (array of objects): The list of still valid attachments in a runtime:
- Alternate solution: Just recommend not to use GC in multiple CNI config environment
- AI:
- Add wording to the SPEC to mention that GC cares about not only CONTAINER_ID and IFNAME but also the "CNI network name" (i.e. 'name' in the CNI config)
- Also mention that the current GC is mainly for a single CNI config, and that a GC supporting multiple CNI configs still needs to be designed
- [mz] kubecon roll call!?
- containernetworking/cni#1052
- ready to merge
- Will hold off until disableGC is merged and v1.1.1 is cut
- [tomo] GC Improvement
- containernetworking/cni#1091
- plugin.GC(config, [validID1])
- plugin.GC(configA, [validID1])
- plugin.GC(configB, [validID2])
- Plugin GC
- resource [validID1 + validID2]
- plugin.GC(configs, [(validID1, netname1)])
- how to identify?
- CNI name + container id (as described https://github.com/containernetworking/cni/blob/main/SPEC.md#section-3-execution-of-network-configurations)
- AI: need to continue discussing this
- [miguel] CNI 1.2 STATUS verb
- missing updating the CRI-O dependency: cri-o/cri-o#8207
- We need to remove an extra conditional, Miguel to file PR
- containernetworking/cni#1095
- [miguel] containernetworking/plugins#1021 status ? (need help w/ this ??)
- Extremely!
- miguel to split the status part from this PR
- [jaime] Discussion on GC implementations
- Concerns around GC taking a long time, blocking other operations
- For now, "safest" time to call is on start-up
- Once timing considerations are known, we can consider adding additional calls
- [zappa] containerd will no longer depend on Loopback plugin :-)
- go-cni PR for STATUS and GC is stuck
- Casey on holiday, but we will have the call without him
- Tagged plugins v1.5.0 (contains "CNI version output" fix)
- Extend tuning plugin to also support ethtool configuration
- [tomo] how about moving this to another CNI plugin (an 'ethtool' plugin?), since the tuning plugin now has a lot of features...
- Could you please file an issue in GitHub!
- Extend tuning plugin to make configuration also on the host side for veth
- Could you please file an issue in GitHub!
- containerd: PR is pending, stuck on failed CI
- [Ben] CNI 1.2 - dropin (Reviewed+Approved - probably waiting on Casey to merge) containernetworking/cni#1052
- Doc introduced to explain relation of spec to libcni - pls review+comment: containernetworking/cni#1081
- SBR Table ID: https://www.cni.dev/plugins/current/meta/sbr/#future-enhancements-and-known-limitations
- [miguel] how is GC supposed to be used ?? Any docs w/ examples anywhere ?
- [Jaime] Q: multus GC status
- Check in on CNI v1.1 runtime implementations
- Multus: "primary" network GC, STATUS in progress, no big hurdles. secondary networks trickier (need discussion)
- cri-o: oci-cni support has merged, Jaime working on cri-o GC.
- Question: when to issue a GC? Answer: On startup at least, on a timer if you like. It would also be fun to trigger one on CNI DEL failure. May need to disable GC by explicit config
- [Tomo] FYI: CNI 1.1 on multus
- [cdc] CNI v1.1 for ocicni is merged cri-o/ocicni#197
- TODO: call GC from cri-o. Jaime to take a look
- [Ben] CNI 1.2 - dropin (updated, LFR PR review) containernetworking/cni#1052
- [cdc] Discussion: disable GC? containernetworking/cni#1086
- regrets: Tomo (national holiday). Pls ping me @K8s slack
- We chat about k8s and DRA
- Antonio is working on adding netdevs to the OCI spec
- [ben] thinking aloud, if netdev management was done here, would that mean that CNI plugins might become like FINALIZE variants (more or less), basically?
- GC doubts: should we add a DisableGC option?
- use-case: one network, multiple runtimes
- [Tomo]+1
- dropin (updated, PR review) containernetworking/cni#1052
- ocicni STATUS and GC PRs: cri-o/ocicni#196, cri-o/ocicni#197
- Tagging v1.1: minor cleanup needed
- v1.2 milestone:
- dropins [containernetworking/cni#1052]
- finalize
- init?
- gRPC
- FINALIZE discussion
- use-cases:
- ensuring routes
- inserting iptables rules (e.g. service-mesh proxy)
- ECMP (eaugh)
- lifecycle:
- ADD (network 1)
- ADD (network 2)
- FINALIZE (network 1)
- FINALIZE (network 2)
- later... CHECK
- Configuration source:
- in-line from config
- specific FINALIZE configuration?
- maybe not needed.
- cri-o / containerd could have a magic dropin directory?
- What if a configuration only has FINALIZE plugins?
- then we don't ADD, just FINALIZE
- What is passed to the plugin(s)?
- We could pass all results of all networks
- Tomo: this is complicated (and plugin could get that by netlink), let's not
- fair enough
- CNI_IFNAME?
- Standard prevResult?
- What is returned?
- Not allowed to produce result?
- Philosophical question: is FINALIZE "network-level" or "container-level"
- does it get IFNAME? PrevResult?
- Homework: come up with more use-cases.
- [minor] readme PR: containernetworking/cni#1081
- [minor] licensing question containernetworking/plugins#1021
PR
- containernetworking/cni#1054 [approved]
- Merge today! (or reject today), it is so big and hard to keep in PR list...: containernetworking/plugins#921 [merged]
- containernetworking/cni#1052 [for 1.2]
- Q: should this be a config or not? containernetworking/cni#1052 (comment)
- A: Consensus seems to be leave in the flag, change the name, maybe add docs around how libcni implements the spec with file loading.
- [dougbtv] working on NetworkPlumbing WG proposal to attach secondary networks more granularly
- [ben] that sounds good to me, I want to keep this very tightly scoped to avoid main config file contention, more granular behaviors should be handled elsewhere, 100%
Discussion
- Should 'ready' be both CNI network configuration and binaries present? Right now it's just the network config. [Zappa]
- [tomo] agreed, but we need to consider UX (how to tell this to the user)
- the k8s node object should have error messages
- [ben] since we can't check much about the binary, this is necessarily a simplistic check that a file exists at the binary path - it could still fail to execute, or be partially copied, etc
- [cdc] I wrote the .Validate() libcni function some time ago for this, use it :-)
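- A rough sketch of that readiness check using libcni's existing validation (method names from memory; verify against the current libcni API). As noted above, this only proves a file exists at the binary path; the plugin could still fail to execute:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containernetworking/cni/libcni"
)

// networkReady returns nil if the conflist parses and every plugin binary
// in it can be located on the given bin dirs; otherwise the node should be
// reported as network-not-ready with the returned error as the message.
func networkReady(ctx context.Context, confFile string, binDirs []string) error {
	list, err := libcni.ConfListFromFile(confFile)
	if err != nil {
		return fmt.Errorf("no usable CNI config: %w", err)
	}
	cniConfig := libcni.NewCNIConfig(binDirs, nil)
	if _, err := cniConfig.ValidateNetworkList(ctx, list); err != nil {
		return fmt.Errorf("CNI config %q is not serviceable: %w", list.Name, err)
	}
	return nil
}
```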
- CNI X.Y [Zappa/Casey]
- Finalize/Init Verb [Zappa]
- Loopback fun [Zappa]
No CNI calls today due to Easter holiday, dismiss!
- Headsup:
- [squeed] Last call for tagging libcni v1.1.0; let's conclude at next week's meeting, 4/8!!
KubeCon update:
- It happened
- Something something DRA?
- Antonio wants to move network establishment into device plugins
- Casey & Tomo's presentation: https://docs.google.com/presentation/d/1eCOFcro7dc9iq3VS-31045EsVUstfqmF/edit?usp=sharing&ouid=110611166904085433395&rtpof=true&sd=true CNI v1.1
- containernetworking/plugins#1021
- please review
- ContainerD STATUS support is in progress
- Need to do cri-o, multus
- v1.1 is feature frozen? sure!
- Milestone is completed anyway as of today, call it.
CNI v1.2 ideas:
- drop-in directories
- Interface metadata
- TODO: file issue
- FINALIZE verb
- Problem: we have no inter-container lifecycle guarantees
- use-cases:
- Ensuring route table is in a defined state
- inserting iptables rules for proxy sidecar (e.g. Envoy) chaining
- Biggest problem: how to configure?
- /etc/cni/net.d/_finalize.d/00_istio.{fconf,conf}
- Do we use a standardized directory that applies to all plugins?
- Do we have finalizers per-network, or finalizers after all networks?
- Ben: Do we need both, or could we get away with just global finalizers?
- Casey, others: We might (for some use cases) actually need per-network.
- Which one is less footgunny? Would running finalizers per-network "hide" global state that might make finalizers more likely to break things?
- Multus is also trying to add a finalizer pattern, for multiple CNIs - consider how this works as well.
/etc/cni/net.d/00-foobar.conf
{
// usual CNI
plugins: [],
finalize_plugins: [{"type":"force_route"}] // type a: in CNI config
}
// type b: drop-in directory
// should we change '_finalize.d' as configurable?
/etc/cni/net.d/_finalize.d/00-foobar.conf
{
// one finalizer
}
/etc/cni/net.d/_finalize.d/99-istio.conf
{
// put istio, that wants to be final!
}
/etc/cni/net.d/_finalize.d/999-barfoo.conf
{
// oh, sorry, it is the actual final guy!
}
Work on outline for Kubecon project update.
CNI: update, what's next
- CNI basic overview
- CNI is an execution protocol, run by the runtime
- CNI 1.0
- '.conf' file (i.e. single plugin conf) is removed!
- addresses in the result no longer have a 'version' field
- new version CNI 1.1 Update!
- new verbs
- STATUS: replaces "write or delete the CNI config file" as the way to signify plugin readiness
- GC: clean up leftover resources
- new fields
- route MTU, table, scope
- interface MTU, PCIID, socket path
- cniVersions array
- Configuration files can now support multiple supported versions
- Will be released shortly
- implementing v1.1 in plugins now
- cri-o, containerd also in-progress
- Community question: Should the CRI more closely reflect the CNI?
- a.k.a. how can I use these shiny new fields?
- our opinion: heck yeah! Let's make the CRI more fun
- Should we expand v1/PodStatus? MultiNetwork WG is proposing this, 👍
- device capability (not 1.2, but whatever)
- Standardized way for devices to be passed down from kubelet -> runtime -> plugin (e.g. SR-IOV)
- Still no way for plugins to say they need a certain device
- Looking for ways to tie config back in to the scheduler
- complicated! help wanted!
what's next (for v1.2)
- drop-in directories (definite)
- no more manipulating CNI configuration files
- istio community contribution
- Interface metadata, (likely)
- We prefer to have these as fields
- We generally have a low threshold for adding a field
- But some things are just too weird even for us :p
- FINALIZE (maybe??)
- some kind of post-ADD "finalize" plugin?
- called after every ADD of every interface
- possible use case: resolve route conflicts
- INIT (stretch)
- Called before first ADD
- Not container-specific
- Really sorry about this one, Multus
- Nasty lifecycle leaks
- Dynamic reconfiguration (vague idea)
- Spec says ADD is non-idempotent
- But there's no reason this has to be the case
- Do you want this? Get involved!!
So, what about KNI?
- KNI is, and is not a replacement for CNI
- e.g. KNI is proposed to be responsible for isolation domain management
- KNI extensibility is still a work in progress
- If KNI merges, the default impl will be containerd / cri-o wrapping CNI
Calls to action:
- This is a dynamic area of k8s right now, lots of things are being proposed
- CNI fits in a complicated ecosystem (k8s, CNI, CRI, runtimes)
- There is a lot of room for improvement, but it reaches across a lot of concerns
- We are all busy, we can't be in all projects at all times
- Reach out! Let's make features people use!
- PR Review:
- containernetworking/cni#1069
- containernetworking/cni#1052
- getting close to the end :-)
- Discussions:
- CNI cached directory questions
- Why is the cached directory not on volatile storage?
- because we try and pass the same values to DEL as to ADD, even after a reboot
- But we sometimes fail to delete because of invalid cache :-)
- We should handle this case gracefully, same as a missing cache file
- Casey wonders: How do we handle this for GC?????
- Why is the cached directory not on volatile storage?
- [Zappa] go-cni PR for Status [draft]
- [Zappa] go-cni PR for additional fields [draft]
- need same work for CRI-O, where are you, Jaime?
- [cdc] Device conventions: containernetworking/cni#1070
- [cdc] working on CNI v1.1 for plugins, slowly
- Kubecon Talk: What did we do????????
- CNI v1.0
- no .conf files
- CHECK
- CNI v1.1 -- lots of new features
- GC, STATUS
- more types
- KNI?
- What do people actually want? What verbs should come next?
- FINALIZE?
- Hey multi-networking, please figure out how to configure CNI via k8s API plz thanks
- PR Review:
- [Draft] containernetworking/cni#1069
- Need to provide definitions in the spec for both fields
- [LFR] containernetworking/cni#1052
- Updated to invert flag logic (enabled by default)
- Discussions:
- [Zappa] Noticed uptick of issues around /var/lib/cni/results
- Unable to delete pods (+ip address)
- [cdc] Help wanted: plugins v1.1 impl.
- [cdc] Any ideas on device convention? containernetworking/cni#1070
- [aojea] STATUS implementation
- I have some volunteers but I need to provide some guidance
- Casey/Zappa have branches for this
- US Holiday (ish)
- PR review:
- containernetworking/cni#1062
- containernetworking/cni#1068
- both look good, need integer pointers and the implementation in plugins
- containernetworking/cni#1052
- containernetworking/cni#1060 (mtu on interface).
- containernetworking/plugins#1003
- PR's
- Support loading plugins from subdirectories: containernetworking/cni#1052
- Comments addressed, this now adds a new opt-in config flag rather than forcing more drastic changes to the config spec. PTAL, need Casey/Dan to do a final pass
- New issues
- Metadata proposal (fields vs map)
- PCIID (to Interface structure)
- SocketPath (to Interface structure)
- MTU (to Interface structure)
- Route table ID (to Route structure)
- Route Scope/Flag (to Route structure)
- Or map[string] string or go with both approaches?
- PR review:
- containernetworking/plugins#1002 (tomo will review)
- containernetworking/plugins#1003 (tomo will review)
- PR's
- Support loading plugins from subdirectories: containernetworking/cni#1052
- Comments addressed, this now adds a new opt-in config flag rather than forcing more drastic changes to the config spec.
- PTAL, need Casey/Dan to do a final pass
- discuss:
- SocketPath/DeviceID aka metadata (need Casey/Dan)
- KNI? (need Casey/Dan)
- CNI 2.0 / Multi-Network / KNI / DRA / NRI / ? meetup at KubeCon?
- Plan/strategy? KNI will probably need a plugin strategy, will probably support CNI plugins, but going forward could support a better/smarter plugin interface.
- containernetworking/cni#1061
- containernetworking/cni#598
- PR's
- Add Github action to build binary at tag release containernetworking/plugins#1000
- bridge: Enable disabling bridge interface: containernetworking/plugins#997
- Initial attempt to support loading plugins from subdirectories: containernetworking/cni#1052
- Hypothetical impl there now, PTAL & offer opinions
- (if time, otherwise defer) Conf vs Conflist in libcni [bl]
- only conflist is supported by current spec, and that has been true for some time.
- looking for historical context on current state
- should we mark conf as deprecated, and remove on major bump? Given the above, that seems reasonable.
- Decision: Even though the pre-1.0.0 format is deprecated, we cannot remove it yet.
- Metadata
- Socket Path
- pciid
- Convention for results from device plugins [cdc, others]
- Should we make it easy for containerd / crio to pass devices to plugins?
- Prefer a phased approach
- phase1: Just formalize in cni repo and change multus/sr-iov
- phase2: Integrate cni runtime as well as container runtime
- CDI
- Add MTU to the interface results in the CNI
- annoying that spec v1.0 is library v1.1
- Do we split spec repo and library?
- What if we skip v1.1 and move to v1.2?
- PR's
- Add Github action to build binary at tag release containernetworking/plugins#1000
- Initial attempt to support loading plugins from subdirectories: containernetworking/cni#1052
- Hypothetical impl there now, PTAL & offer opinions
- (if time, otherwise defer) Conf vs Conflist in libcni [bl]
- only conflist is supported by current spec, and that has been true for some time.
- looking for historical context on current state
- should we mark conf as deprecated, and remove on major bump? Given the above, that seems reasonable.
- Metadata
- Socket Path
- pciid
- US Holiday, cdc out too
- Welcome to the New Year!
- PR:
- containernetworking/plugins#844
- from the last comment: It's unfortunate that this as been pending so long, it seems like the maintainers are ignoring this or don't find value in this PR 😭
- May need a decision (to include or not include?)
- containernetworking/plugins#921 (local bandwidth)
- Tomo and Mike approved
- We will ship this in the next release.
- CNI 1.1
- tag -rc1?
- implement in plugins
- implement in runtimes (go-cni cri-o)
- What belongs in CNI v1.2:
- Metadata proposal
- conclusion: this seems worthy, let's explore it
- come up with some use cases, draft a SPEC change
- multi-ADD / idempotent ADD / reconfigure
- wellllll, k8s doesn't have network configuration, so how can we reconfigure what we don't have?
- Pete to write up proposal?
- [bl] QQ: Config versioning
- Do we distinguish between config file and config (in-mem) schema versions, or are they always 1:1 in the spec?
- PR:
- containernetworking/cni#1039 (remind)
- containernetworking/plugins#921 (follow-up to MikeZ)
- Metadata Proposal Discussion [Zappa]
- New definitions specification update [Zappa]
- ready to cut 1.1?
- Subdir-based plugin conf [Ben L.] [Post 1.1/Jan]
- Issue: containernetworking/cni#928
- Spec rewrite PR to start convo around details (mostly spec rev. versus none):
- PR:
- containernetworking/cni#1039 (remind)
- containernetworking/plugins#921 (follow-up to MikeZ)
- v1.1 blockers https://github.com/containernetworking/cni/milestone/9
- only big question is cni versions (containernetworking/cni#1010) MERGED
- and route MTU: containernetworking/cni#1041
- Metadata Proposal Discussion [Zappa]
- Review https://github.com/jasonliny/tag-security/blob/main/assessments/projects/cni/self-assessment.md
- Idea: we should not allow delegated plugins outside of CNI_BINDIR
- We talk about how delegated plugins are totally dangerous and we should be more careful
- [cdc] need to cut a plugins release
- (grump about CVE scanners)
- last PR sweep
- cdc to cut release shortly
- CNCF TAG Security: working on a self-assessment for CNI, which will be submitted to TAG Security (details here: cncf/tag-security#1102)
- We would appreciate some feedback, as we suspect there may be misunderstandings or inaccurate information in our current draft (https://github.com/jasonliny/tag-security/blob/main/assessments/projects/cni/self-assessment.md)
- CNI 2.0 Note Comparison [Zappa]
- KNI Design proposal
- Mike Z is going to do further work on gRPC in the container runtime to see if it fits conceptually once it's in there.
- Key points:
- Something like this exists in some private forks
- Extra metadata on CNI result [Zappa]
- KubeCon EU Presentation? (skipped / next time) (From Tomo)
- Tomo is out but Doug said he'd bring this up.
- usual (not community one) CFP deadline is Nov 26.
- CNI1.1 talk?
- No one has the dates for the maintainer track CFP due date, but Casey's going to ask around about it.
- CNI 2.0 requirements discussion (We should still discuss 1.1)
- [mz] Extra metadata on CNI result
- KubeCon EU Presentation? (skipped / next time)
- usual (not community one) CFP deadline is Nov 26.
- CNI1.1 talk?
- CNI 2.0 requirements discussion (We should still discuss 1.1) [Zappa]
- Extra metadata on CNI result
- [Tomo] PR / Issue (follow-up)
- containernetworking/cni#1038 (simple fix)
- containernetworking/cni#1039 (supersedes PR#1035)
- containernetworking/plugins#921 (support exclude subnets in bandwidth plugin)
- containernetworking/cni.dev#130 (bandwidth plugin doc change)
- [MikeZ] CNI 2.0 discussion
- [Tomo] PR / Issue
- containernetworking/plugins#973 (to close)
- containernetworking/cni#1038 (simple fix)
- containernetworking/cni#1039 (supersedes PR#1035)
- containernetworking/plugins#974 (Add v6 disc_notify in macvlan)
- containernetworking/plugins#979 (Add v6 disc_notify in ipvlan)
- containernetworking/plugins#969 (required for future CNI vendor update)
- containernetworking/plugins#962 (remove unused code)
- containernetworking/plugins#921 (support exclude subnets in bandwidth plugin)
- containernetworking/cni.dev#130 (bandwidth plugin doc change)
- [cdc] STATUS implementation: containernetworking/cni#1030
- and SPEC: containernetworking/cni#1003
- Do some PR reviews (it's kubecon week)
- [Tomo] PR review: CNI repo change to omit DNS in CNI Conf and Result (1.0.0 only)
- Currently the DNS field is 'omitempty' but not a pointer, hence an empty structure is returned
- containernetworking/cni#1035
- Supersedes containernetworking/cni#1007 (that changes to DNS Conf side)
- [PeterW] Reference multi-network design doc: https://docs.google.com/document/d/1oVOzlX4nDMyQM6VWJzqMO02FJDYaPb-FgD_88LkXjkU/edit#heading=h.m758iblg0in4
- [Tomo] (TODO)Writing DNS Doc...
- Discovery: DNS type is not currently called out as optional, it should be
- [Ed] Request from kubevirt community
- containernetworking/plugins#951 (activateInterface option for bridge CNI plugin)
- support non-interface-specific sysctl params in `tuning` (as a stand-alone plugin, not a meta plugin)
- note: tuning currently allows anything in /proc/sys/net
- [Casey] Status PRs are ready for review
NOTE: jitsi died, https://meet.google.com/hpm-ifun-ujm
- [PeterW] Multi-network update
- KEP-EP heading towards Alpha
- GC is merged!
- We get distracted talking about multi-network and DNS responses
- AI: Tomo will create doc (problem statement)
- [Tomo] containernetworking/plugins#951
- should we support that?
- the config name / semantics are both weird...
- Todo: Ask them about that at the next meeting
- [Tomo] Request from kubevirt community
- support non-interface-specific sysctl params in `tuning` (as a stand-alone plugin, not a meta plugin)
- Todo: Ask them about that at the next meeting
- note: tuning currently allows anything in /proc/sys/net
- [cdc] re-review GC (containernetworking/cni#1022)
- Aojea has questions about ContainerD sandbox API
- Shouldn't affect CNI; just uses Sandbox api instead of runc / OCI
- containernetworking/cni#1022 : GC PR
- Review outstanding TODOs for v1.1
- https://github.com/containernetworking/cni/milestone/9
- Looking good. We push INIT to v1.2
- PR: containernetworking/cni#1024
- Attendance: Doug, Michael Cambria, Antonio, Dan Williams , Dan Winship
- Tomo: maintainer
- If not discussed this meeting, Tomo will open a PR for GitHub discussion
- resolved: file a PR to add Tomo to the maintainer list
- containernetworking/cni#1024
- Doug: High level question, what do you all think about K8s native multi-networking?
- Giving a talk with Maciej Skrocki (Google) at Kubecon NA on K8s native multi-networking
- I want to address "what's the position from a CNI viewpoint?"
- My point is: we kind of "ignore CNI as an implementation detail"
- But! ...It's an important ecosystem.
- Multus is kind of a "kubernetes enabled CNI runtime" -- or at least, users treat it that way
- Should it continue to function in that role?
- Should CNI evolve to have the "kubernetes enabled" functionality?
- What do you all think?
- CNI has always supported multinetworking (especially: rkt)
- And K8s has taken almost 10 years!
- Mike brings up that it's really the runtime that insisted on doing only one interface.
- Doug asks whether the runtime should be enabled with that functionality
- What about dynamic reconfiguration
- Mike Z mentions he's working on the CRI side, executing multiple cni
- Pod sandbox status, to relay multiple IPs back, and the network names
- Node Resource Interface (NRI) doesn't have a network domain
- Network domain hooks pre-and-post
- This is happening outside of the k8s space.
- Use cases outside of Kubernetes, as well.
- Custom schedulers for BGP, OSPF, etc.
- Mike C brings up consideration of scheduling a pod with knowledge of which networks will be available.
- Re: STATUS
- Programmatically distributing network configuration, and
- that problem appears in single network as well, and has relation to STATUS
- Antonio brings up, what percent of community benefits from multiple interfaces?
- CNI has been surprisingly static in the face of other changes (e.g. kubenet -> [...])
- Doug: Also any updates in Kubecon NA maintainer's summit?
- no maintainers going :-(
- Back to STATUS?
- Sticking point: how do you know whether or not to rely on STATUS -- as a plugin that automatically writes a configuration file
- Idea: version negotiation when cniVersion is empty
- This works if CRI-O / containerd ignore conflist files w/o a cniVersion
- Casey to experiment
- Still doesn't solve the problem of plugins knowing which value to use
- How do we know that a node supports v1.1 (and thus uses STATUS)?
- sweet, the ContainerD / CRI-O version is exported in the Node object
- Ugly, but heuristics will work
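- A rough sketch of that heuristic: kubelet already publishes the runtime name and version on the Node object, so a component could parse it and only rely on STATUS when the runtime is one known to support it (the field name is real; the comparison against known-good releases is left to the caller):

```go
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// runtimeNameVersion splits Node.Status.NodeInfo.ContainerRuntimeVersion,
// e.g. "containerd://1.7.2" or "cri-o://1.29.1", into name and version.
// A caller would compare the result against runtime releases known to
// implement CNI v1.1 STATUS, and otherwise fall back to the old
// "config file exists" readiness convention.
func runtimeNameVersion(node *corev1.Node) (string, string, error) {
	rv := node.Status.NodeInfo.ContainerRuntimeVersion
	name, version, ok := strings.Cut(rv, "://")
	if !ok {
		return "", "", fmt.Errorf("unexpected ContainerRuntimeVersion %q", rv)
	}
	return name, version, nil
}
```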
- draft GC PR: containernetworking/cni#1022
- of interest: deprecate PluginMain(add, del, check) b/c signature changes stink
- Attendance: Tomo, Antonio, Henry Wang, Dan Williams, Dan Winship
- Network Ready
- Now: have a CNI config file on-disk
- Container Runtimes: containerd and crio report NetworkReady through CRI based on the existence of that file
- When the CNI plugin can't add interfaces to new Pods, we want the node to be no-schedule. TODO(aojea): if condition Network notReady = tainted
- Only way to currently indicate this is the CNI config file on-disk
- Can't really remove the config file to indicate readiness (though libcni does cache config for DEL)
- One option: enforce STATUS in plugin by always writing out CNI config with CNIVersion that includes status
- Runtimes that don't know STATUS won't parse your config and will ignore your plugin
- Downside: you have to know the runtime supports STATUS
- Downside: in OpenShift upgrades, old CRIO runs with new plugin until node reboot, this would break that. You'd have to have a window where the runtime supported STATUS but your plugin didn't use it yet. Then 2 OpenShift releases later you can flip to requiring STATUS.
- KubeCon NA: maintainer's summit?
- Usually we can book these (might be a bit too late though)
- PR Update:
- containernetworking/plugins#921
- Tomo approved
- Attendance: Casey, Peter, Tomo
- Question for the US people: KubeCon maintainer's summit?
- Definite topic for next week
- Tomo: maintainer
- will discuss next week
- Attendance: Antonio, MikeZ, Tomo
- Discussion about NRI/multi-network design
- Attendance: Antonio, Peter, Dan Winship, Tomo
- Mike Zappa is likely to write down KEP for kubernetes CRI/CNI/NRI to handle cases like multi network
- Current multinetwork approach for the KEP is focusing on API phases
- Follow STATUS PR: containernetworking/cni#1003
- CDC on vacation next two weeks
- Milestone review -- https://github.com/containernetworking/cni/milestone/9
- we close out a few proposals that have been rejected
- Further discussion of CNI over CRI
- NRI: node resource interface: a series of hooks
- It could make sense for networking to be integrated in NRI
- NRI has no networking support / domain right now; could be conceivably expanded
- Network Service for Containerd
- Reviving version negotiation
- Casey's proposal: a configlist without a version uses VERSION to pick the highest one
- This means that administrators don't have to pick a version, which requires understanding too many disparate components
- Can we rely on IPs never being reused?
- Nope, you have to use the ContainerID
- No good way around it
- CNCF graduation?
- MikeZ reached out about security audit, need to add the "best practices" badge
- We merge the GC spec
- woohoo!
- Casey looks at PR https://github.com/containernetworking/plugins/pull/936/files and is a bit surprised at how many bridge VLAN settings there are
- Could we get some holistic documentation of these options?
- Discussion w.r.t.: containernetworking/cni#927
- Background: how do we tell what version of config to install?
- We talk about adding more information to the VERSION command; could do things like discovering capabilities
- Dream about executing containers instead of binaries on disk
- (wow, it's like a shitty PodSpec! But still very interesting)
- Remove as much as possible from CNI configuration, make it easy for administrators
- Hope that multi-networking will make it easier for admins to push out network changes
- How does one feel about version autonegotiation
- let's do it
- CNCF graduated project?
- requirements: https://github.com/cncf/toc/blob/main/process/graduation_criteria.md#graduation-stage
- no real opposition, just not high on the list
- nftables! (FYI) containernetworking/plugins#935
- We talk about whether it is safe to rely on IP addresses being cleaned up between DEL and ADD
- libcni always deletes chained plugins last-to-first to avoid this very issue... except not quite
- Thus, it is potentially safe to delete map entries solely by IP address
- Is it safe to rely on IP addresses always being cleaned up?
- Multinetwork report from Pete White
- MikeZ is meditating on how it fits in with the CRI
- STATUS verb (PR 1003) (Issue 859)
- The problem: plugins don't know whether they should use legacy (write file when ready) behavior versus rely on STATUS
- Potential solutions:
- CRI signals whether or not it supports STATUS via config file or something (discussed in issue 927)
- Biggest blocker for a feature file is downgrades
- Add an additional directory, "cniv1.1", that is only read by cni clients
- Plugins write a file that is invalid for v1.0, but valid for v1.1, when status is failing
- Switch to a new directory entirely
- New filename suffix (.conflistv1.1)
- Not a terrible idea
- reviewers wanted: containernetworking/plugins#921
- Review of PRs, looking in pretty good shape
- Regrets: Tomo
- CNI Route type and MTU containernetworking/cni#1004
- previous effort stalled out at containernetworking/cni#831
- no opposition, let's try and get this in for v1.1
- Easy-to-review PR list (by Tomo)
- AI: Tomo review: containernetworking/plugins#903
- Finalize CNI STATUS verb
- Let's review some PRs
- multi-network chit chat
- Continuing STATUS editing
- Tomo asks about version divergence between a plugin and its delegate. We talk about version negotiation.
- Circle back for CNI+CRI
- Update: we file containernetworking/cni#1003
- Brief discussion about CNI and CRI for old time's sake
- We will work on initial wording of the STATUS verb
- presented in sig-network meeting last Thursday
- What do we return? Just non-zero exit code? OR JSON type?
- We should return a list of conditions
- Conditions: (please better names please)
- AddReady
- RoutingReady (do we need this)?
- ContainerReady
- NetworkReady
- Should we return 0 or non-zero?
- after a lot of discussion, we come back to returning nothing on success and just error
`STATUS` is a way for a runtime to determine the readiness of a network plugin.
A plugin must exit with a zero (success) return code if the plugin is ready to service ADD requests. If the plugin is not able to service ADD requests, it must exit with a non-zero return code and output an error on standard out (see below).
The following error codes are defined in the context of `STATUS`:
- 50: The plugin is not available (i.e. cannot service `ADD` requests)
- 51: The plugin is not available, and existing containers in the network may have limited connectivity.
Plugin considerations:
- Status is purely informational. A plugin MUST NOT rely on `STATUS` being called.
- Plugins should always expect other CNI operations (like `ADD`, `DEL`, etc.) even if `STATUS` returns an error. `STATUS` does not prevent other runtime requests.
- If a plugin relies on a delegated plugin (e.g. IPAM) to service `ADD` requests, it must also execute a `STATUS` request to that plugin. If the delegated plugin returns an error result, the executing plugin should return an error result.
Input:
The runtime will provide a json-serialized plugin configuration object (defined below) on standard in.
Optional environment parameters: `CNI_PATH`
message RuntimeCondition {
// Type of runtime condition.
string type = 1;
// Status of the condition, one of true/false. Default: false.
bool status = 2;
// Brief CamelCase string containing reason for the condition's last transition.
string reason = 3;
// Human-readable message indicating details about last transition.
string message = 4;
}
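A plugin-side sketch of the STATUS draft above (it assumes the newer skel.PluginMainFuncs entry point; the readiness probe and the use of draft code 50 via types.NewError are purely illustrative, not settled wording):

```go
package main

import (
	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	"github.com/containernetworking/cni/pkg/version"
)

// probeBackend is a placeholder for whatever the plugin actually needs in
// order to service ADD (IPAM daemon reachable, address pool not empty, ...).
func probeBackend() error { return nil }

func cmdStatus(args *skel.CmdArgs) error {
	if err := probeBackend(); err != nil {
		// Draft error code 50: plugin cannot service ADD requests right now.
		return types.NewError(50, "plugin is not ready", err.Error())
	}
	return nil // zero exit code: ready for ADD
}

func cmdAdd(args *skel.CmdArgs) error   { return nil } // elided
func cmdDel(args *skel.CmdArgs) error   { return nil } // elided
func cmdCheck(args *skel.CmdArgs) error { return nil } // elided

func main() {
	skel.PluginMainFuncs(skel.CNIFuncs{
		Add:    cmdAdd,
		Del:    cmdDel,
		Check:  cmdCheck,
		Status: cmdStatus,
	}, version.All, "STATUS example sketch")
}
```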
- PTAL: containernetworking/cni.dev#119
- cdc observes we're due for some website maintenance
- [aojea] - more on CNI status checks - dryRun option?
- We would really like the STATUS verb
- It would solve an annoying user situation
- Let's do it.
- Next week we'll sit down and hammer out the spec.
- containernetworking/cni#859
- strawman approach: kubelet (networkReady) -- CRI --> container_runtime -- (exec) --> CNI STATUS
- runtimes should use the config version to decide whether to use the new verb
- See if GC spec needs any changes: containernetworking/cni#981
- We need wording for parallelization (rough sketch of the intended locking below):
- The container runtime must not invoke parallel operations for the same container, but is allowed to invoke parallel operations for different containers. This includes across multiple attachments.
- Exception: The runtime must exclusively execute either gc or add and delete. The runtime must ensure that no add or delete operations are in progress before executing gc, and must wait for gc to complete before issuing new add or delete commands.
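- A minimal illustration of that exclusivity (not spec text): per-container ADD/DEL take a shared lock so they can run in parallel for different containers, while GC takes the exclusive lock, waiting for in-flight ADD/DEL and blocking new ones until it finishes. Per-container serialization would need an additional per-container lock, not shown.

```go
package main

import "sync"

type networkOps struct {
	mu sync.RWMutex // guards ADD/DEL (shared) against GC (exclusive)
}

func (n *networkOps) Add(do func() error) error {
	n.mu.RLock() // ADD/DEL for different containers may run concurrently
	defer n.mu.RUnlock()
	return do()
}

func (n *networkOps) Del(do func() error) error {
	n.mu.RLock()
	defer n.mu.RUnlock()
	return do()
}

func (n *networkOps) GC(do func() error) error {
	n.mu.Lock() // "stop the world": no ADD or DEL while GC runs
	defer n.mu.Unlock()
	return do()
}
```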
- [aojea] follow CNI conversation from 2023-05-08
- hard to land important changes on CRI API
- For configuration, understand existing kubelet subsystems like "kubelet device plugins" and "DynamicResourceAllocation" to align and be able to plug in CNI
- New KEP to use QoS kubernetes/enhancements#3004 (comment)
- We look at the DRA design to see how it fits CNI: https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/
- KEP: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md
- (No mention of sandbox creation)
- Could we make a CNI DRA driver?
- Only if the lifecycle matches exactly what we need
- We can't create network interfaces until the network namespace exists
- the network namespace is created with the PodSandbox (a.k.a pause container)
- Current order:
- Pod created.
- kubelet calls CreatePodSandbox CRI method...
- containerd creates netns
- containerd calls CNI ADD
- CreatePodSandbox done, Containers now created and started
- The DRA object model looks really good
- it has the ability to have arbitrary parameters a.k.a. ResourceClaimTemplate, which would be nice
- what if we have a CNI plugin that needs devices created from additional DRA providers?
- For improving supportability, improve the kubelet check containernetworking/cni#859
- MZappa great diagram https://drive.google.com/file/d/1TTTM2YP67J4mjG4BchNyEmEi1nXKHtfN/view
- [cdc] GC is stalled, but should have time in 2-3 weeks to work on it
- [cdc] anyone have time to work on status? containernetworking/cni#859
- let's turn off the github auto-staler, it's being mean
- [cdc] we cut a release! yay!
- [cdc] Chatting with Multus implementers about version negotiation
- Should we just do this automatically? We already have the VERSION command...
- Everyone is uncomfortable with "magic" happening without someone asking for it
- Original proposal was for cniVersions array.
- What if we added "auto" as a possible cni version?
- Concern: how do we expose what version we decided to use?
- Or what if we just autonegotiate all the time
- YOU GET A NEW VERSION! YOU GET A NEW VERSION!
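- A sketch of the autonegotiation idea above, in plain Go: parse the plugin's VERSION output (per the spec, a cniVersion plus a supportedVersions array) and pick the highest version the runtime also implements. libcni already ships version helpers that a real implementation would likely use instead.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the VERSION command's output defined in the spec.
type versionInfo struct {
	CNIVersion        string   `json:"cniVersion"`
	SupportedVersions []string `json:"supportedVersions"`
}

// negotiate returns the highest version present in both lists;
// runtimeVersions must be ordered lowest to highest.
func negotiate(runtimeVersions []string, versionStdout []byte) (string, error) {
	var info versionInfo
	if err := json.Unmarshal(versionStdout, &info); err != nil {
		return "", err
	}
	supported := map[string]bool{}
	for _, v := range info.SupportedVersions {
		supported[v] = true
	}
	for i := len(runtimeVersions) - 1; i >= 0; i-- {
		if supported[runtimeVersions[i]] {
			return runtimeVersions[i], nil
		}
	}
	return "", fmt.Errorf("no common CNI version (plugin supports %v)", info.SupportedVersions)
}
```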
Agenda:
- [tomo] What should GC(de-INIT) do if INIT fails
- What parallel operations should be allowed?
- Can you GC and ADD / DEL at the same time?
- No way; the runtime has to "stop-the-world"
- This makes sense, "network-wide" operations can be thought of as touching "every" attachment, and we don't allow parallel operations on an attachment.
- [cdc] Sorry, I owe a release
- [aojea] Evolve CNI to support new demands containernetworking/cni#785 (comment)
- We talk about the difference between the "configuration" API vs. the plugin API
- Everyone seems to settle on CNI via the CRI
- Casey drafted a version of this: https://hackmd.io/@squeed/cri-cni
- Labor day
- plugins will be released next week
- We review some PRs:
- KubeCon week, kind of quiet.
- PR 873 is merged.
- Let's try and do a release in the next few weeks.
- containernetworking/cni#981 has feedback, let's address it
- We button up some of the wording for GC
- Review some PRs. Merge some of them.
Agenda:
Agenda:
Agenda:
- INIT/DE-INIT discussion
- Should INIT/DEINIT be per-network, or per-plugin?
- Probably per-network... but resources shared across networks? (see DEINIT discussion)
- Serialization
- Should the runtime be required to serialize calls to a plugin?
- E.g., can the runtime call INIT for two networks of the same plugin simultaneously, or not?
- Tomo asked about ordering guarantees; we shouldn't have double-INIT or double-DEINIT
- If the config changes, do we DEINIT and then INIT with the new config? That could be very problematic.
- Do we need an UPDATE for config change case?
- What if the chain itself changes, plugins added/removed?
- What if a plugin in the chain fails INIT?
- What is the failure behavior if INIT fails?
- When does the runtime retry INIT?
- What if DEINIT fails?
- Should GC be called?
- Timing is pretty vague; when should DEINIT be called?
- when the network config disappears (deleted from disk, removed from Kube API, etc)
- when config disappears and all containers using the network are gone?
- How should plugins handle deleting resources common to all networks? (eg plugin iptable chain)
- Should we require that networks use unique resources to prevent this issue?
- And/or punt to plugins that they just have to track/handle this kind of thing
- "How do I uninstall a CNI plugin?"
- CNI spec doesn't talk about any of this
- (partly because we let the runtime decide where config is actually stored, even though libcni implements one method for doing this -- on-disk)
- When config gets deleted, how do we invoke DEINIT with the now-deleted config?
- Use cached config?
- libcni would need to keep cached config after DEL; currently it doesn't
- Keep a new kind of "network"-level config for this?
- PR review
- containernetworking/plugins#867 (MERGED)
- containernetworking/plugins#855 (MERGED)
Agenda:
- Brief discussion about some sort of SYNC
- usual chained plugin issue
- Let's try and write the spec for GC.
- We do! containernetworking/cni#981
Agenda:
round of introductions
- STATUS for v1.1?
- Multi-network? Doug not present
- Network Plugin for containerd(Henry)
- initial problem: trying to solve leaking resources (sometimes cleanup fails)
- led to GC proposal, as well as GC() method on libcni
- containerd/containerd#7947
- Does it make sense for some kind of idempotent Sync()
- Challenges:
- hard to make fast / high overhead
- chained plugins make this difficult, might have flapping interfaces
- pushes a lot of overhead on the plugins
- Does INIT solve this? Not really; runtime might not call INIT when it's needed
- Challenges:
- What do we do on failed CHECK?
- Should we allow for ADD after failed CHECK
- Chained plugins make this difficult, but we could change the spec
- (tomo) Consider a bridge - when should we delete it?
- even though there is no container interface on the bridge, the user may add some physical interface to the pod
- the bridge plugin does not have a locking mechanism for multiple containers
- we considered a DEINIT verb, but it didn't seem useful
- Let's do some reviews. Oops, we run out of time
- Should we formalize "how to interact with libcni"?
- What are the expectations for how configuration files are dropped in? (e.g. permission error)
CNI v1.1 roadmap: https://github.com/containernetworking/cni/milestone/9
Agenda:
- CDC: bumping spec version to 1.1 -- containernetworking/cni#967
- GC and INIT for CNI 1.1
- From last meeting, these are the current priorities
- TODO: write spec, then implement in libcni
- mz has opened issues #974 and #975
- Network Plugin for containerd
- Update meeting invite for new
- Review containernetworking/cni#963
- Review containernetworking/plugins#844
Tomo asks for clarification about GC and INIT
INIT: runtime calls INIT on a configuration but without a container. It means "please prepare to receive CNI ADDs". For example, a plugin could create a bridge interface.
GC: two aspects to this discussion. Most of the GC logic would actually be in libcni, which already maintains a cache of configuration and attachments. The runtime would pass a list of still-valid attachments. Libcni could synthesize DEL for any "stale" attachments.
Separately, there could be a spec verb, GC, that would tell plugins to delete any stale resources
We do some reviews.
Next week: Woah, there are a lot of PRs to review. Oof.