Updating documentation, and schema descriptors to be inline with the extended functionality.

We actually have more generally supported attributes now, than dynamic-only :)
Levovar committed Jul 22, 2019
1 parent 2daf900 commit b571ad3
Showing 5 changed files with 61 additions and 55 deletions.
33 changes: 22 additions & 11 deletions README.md
If you want you can even add all available APIs at the same time to see which management paradigm suits your needs best!
We advise new users, or users operating a single tenant Kubernetes cluster to start out with a streamlined, lightweight network management experience.
In this "mode" DANM only recognizes one network management API, called **DanmNet**.
Both administrators, and tenant users manage their networks through the same API. Everyone has the same level of access, and can configure all the parameters supported by DANM at their leisure.
At the same time it is impossible to create networks, which can be used across tenants (disclaimer: we use the word "tenant" as a synonym to "Kubernetes namespace" throughout the document).
##### Production-grade network management experience
In a real, production-grade cluster the lightweight management paradigm does not suffice, because usually there are different users, with different roles interacting with each other.
There are possibly multiple users using their own segment of the cloud -or should we say tenant?- at the same time; while there can be administrator(s) overseeing that everything is configured, and works as it should be.
Wonder how? Refer to chapter [Connecting TenantNetworks to TenantConfigs](#connecting-tenantnetworks-to-tenantconfigs).

Interested users can find reference manifests showcasing the features of the new APIs under [DANM V4 example manifests](https://github.com/nokia/danm/tree/master/example/4_0_examples).
##### Network management in the practical sense
Regardless of which paradigm thrives in your cluster, network objects are managed the exact same way - you just might not be allowed to execute a specific provisioning operation in case you are trying to overstep your boundaries! Don't worry, as DANM will always explicitly and instantly tell you if this is the case.
Unless explicitly stated in the description of a specific feature, all API features are generally supported, and supported the same way, regardless of which network management API type you use.

Network management always starts with the creation of Kubernetes API objects, logically representing the characteristics of a network Pods can connect to.
Sorry, but they made us do it :)
**Note**: some CNI plugins try to be smart about this limitation on their own, and decide not to adhere to the CNI standard! An example of this behaviour can be found in Flannel.
It is the user's responsibility to put the network connection of such boneheaded backends first in the Pod's annotation!

Besides making sure the first interface is always named correctly, DANM also supports both explicit, and implicit interface naming schemes for all NetworkTypes to help you flexibly name the other -and CNI standard- interfaces!
An interface connected to a network containing the container_prefix attribute is always named accordingly. You can use this API to explicitly set descriptive, unique names to NICs connecting to this network.
In case container_prefix is not set in an interface's network descriptor, DANM automatically uses "eth" as the prefix when naming the interface.
Regardless which prefix is used, the interface name is also suffixed with an integer number corresponding to the sequence number of the network connection (e.g. the first interface defined in the annotation is called "eth0", second interface "eth1" etc.)
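
As a sketch of the naming rules above, a network manifest can pin the prefix via the container_prefix attribute (a hypothetical example: the apiVersion, device name, and all values are illustrative; the authoritative field list is in the schema descriptors shown later in this commit):

```yaml
# Hypothetical DanmNet manifest -- "ens4" and all names are illustrative.
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: external
spec:
  NetworkID: ext
  NetworkType: ipvlan
  Options:
    host_device: ens4      # parent host NIC the IPVLAN sub-interface attaches to
    container_prefix: ext  # interfaces connecting to this network are named ext<N>
```

A Pod whose first connection points to a prefix-less network and whose second connection points to this one would thus get interfaces named "eth0" and "ext1" - the suffix always reflects the connection's sequence number, per the rule above.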
This way users can dynamically configure various networking solutions via the same management APIs.
A generic framework supporting this method is built into DANM's code, but still this level of integration requires case-by-case implementation.
As a result, DANM currently supports two integration levels:

- **Dynamic integration level:** CNI-specific network attributes (such as IP ranges, parent host devices etc.) can be controlled on a per network level, taken exclusively from the CRD object
- **Static integration level:** CNI-specific network attributes are by default configured via static CNI configuration files (the default CNI configuration method), but certain parameters can still be influenced by the DANM API configuration values

Always refer to the schema descriptors for more details on which parameters are universally supported!

Our aim is to integrate all the popular CNIs into the DANM eco-system over time, but currently the following CNIs have achieved dynamic integration level:

No separate configuration needs to be provided to DANM when it connects Pods to networks, if the network is backed by a CNI plugin with dynamic integration level.
Everything happens automatically purely based on the network manifest!

When network management is delegated to CNI plugins with static integration level, DANM first reads their configuration from the configured CNI config directory.
The directory can be configured via setting the "CNI_CONF_DIR" environment variable in DANM CNI's context (be it in the host namespace, or inside a Kubelet container). Default value is "/etc/cni/net.d".
In case there are multiple configuration files present for the same backend, users can control which one is used in a specific network provisioning operation via the NetworkID parameter.

So, all in all: a Pod connecting to a network with "NetworkType" set to "bridge", and "NetworkID" set to "example_network" gets an interface provisioned by the <CONFIGURED_CNI_PATH_IN_KUBELET>/bridge binary based on the <CNI_CONF_DIR>/example_network.conf file!
In addition to simply delegating the interface creation operation, the universally supported features of the DANM management APIs -such as static and dynamic IP route provisioning, flexible interface naming, or centralized IPAM- are also configured either before, or after the delegation takes place.
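
For example, the <CNI_CONF_DIR>/example_network.conf file referenced above could be a standard CNI configuration file like this sketch (the format is the CNI project's, not DANM-specific; all values are illustrative):

```json
{
  "cniVersion": "0.3.1",
  "name": "example_network",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/24"
  }
}
```

A Pod referencing NetworkType "bridge" and NetworkID "example_network" would then get its interface from the bridge binary, parameterized by this file.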
##### Connecting Pods to specific networks
Pods can request network connections to networks by defining one or more network connections in the annotation of their (template) spec field, according to the schema described in the **schema/network_attach.yaml** file.

For each connection defined in such a manner DANM provisions exactly one interface into the Pod's network namespace, according to the way described in previous chapters (configuration taken from the referenced API object).
In case you have added more than one network management API to your cluster, it is possible to connect the same Pod to different networks of different APIs. But please note that physical network interfaces are 1:1 mapped to logical networks.

In addition to simply invoking other CNI libraries to set up network connections, Pods can even influence the way their interfaces are created to a certain extent.
For example Pods can ask DANM to provision L3 IP addresses to their network interfaces dynamically, statically, or not at all!
Or, as described earlier, the creation of policy-based L3 IP routes inside their network namespace is also universally supported by the solution.
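
The three allocation schemes could be requested from a Pod annotation roughly like this (a sketch assuming the lightweight DanmNet paradigm; the network names and static IP syntax are illustrative, and the authoritative field names live in **schema/network_attach.yaml**):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  annotations:
    # One entry per requested interface; "ip" selects the allocation scheme.
    danm.k8s.io/interfaces: |
      [
        {"network": "management", "ip": "dynamic"},
        {"network": "external",   "ip": "10.0.0.42/24"},
        {"network": "l2-only",    "ip": "none"}
      ]
spec:
  containers:
    - name: app
      image: busybox
```

DANM provisions exactly one interface per listed entry: the first gets a dynamically allocated address, the second the requested static address, and the third stays a pure L2 interface.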
##### Defining default networks
If the Pod annotation is empty (no explicit connections are defined), DANM tries to fall back to a configured default network.
In the lightweight network management paradigm default networks can be only configured on a per namespace level, by creating one DanmNet object with ObjectMeta.Name field set to "default" in the Pod's namespace.
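
In the lightweight paradigm the fallback could thus be configured per namespace with a minimal object like this sketch (only the ObjectMeta.Name value "default" is significant here; the apiVersion and spec values are illustrative):

```yaml
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: default      # the reserved name that makes this the namespace's fallback network
  namespace: demo-ns # default networks apply per namespace in this paradigm
spec:
  NetworkID: default
  NetworkType: ipvlan
```

Pods created in "demo-ns" with an empty network annotation would then be connected to this network.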
DANM waits for the CNI result of all executors before converting, and merging them into a single result.
If any executor reported an error, or hasn't finished its job even after 10 seconds; the result of the whole operation will be an error.
DANM reports all errors to kubelet in case multiple CNI plugins failed to do their job.
#### DANM IPAM
DANM includes a fully generic and very flexible IPAM module built into the solution. The usage of this module is seamlessly integrated with all the natively supported CNI plugins (DANM's IPVLAN, Intel's SR-IOV, and the CNI project's reference MACVLAN plugins); as well as with any other CNI backend fully adhering to the v0.3.1 CNI standard!

The main feature of DANM's IPAM is that it's fully integrated into DANM's network management APIs through the attributes called "cidr", "allocation_pool", and "net6". Therefore users of the module can easily configure all aspects of network management by manipulating solely dynamic Kubernetes API objects!

This native integration also enables a very tempting possibility. **As IP allocations belonging to a network are dynamically tracked *within the same API object***, it becomes possible to define:
* discontinuous subnets 1:1 mapped to a logical network
Network administrators can simply put the CIDR, and the allocation pool into the network manifest.

The flexible IPAM module also allows Pods to define the IP allocation scheme best suited for them. Pods can ask dynamically allocated IPs from the defined allocation pool, or can ask for one, specific, static address.
The application can even ask DANM to forego the allocation of any IPs to its interface in case an L2 network interface is required.
##### Using IPAM with static backends
While using the DANM IPAM with dynamic backends is mandatory, netadmins can freely choose whether they want their static CNI backends to be integrated into DANM's IPAM as well, or whether they prefer these interfaces to be statically configured by another IPAM module.
By default the "ipam" section of a static delegate is always configured from the CNI configuration file identified by the network's NetworkID parameter.
However, users can overwrite this inflexible -and most of the time host-local- option by defining "cidr", and/or "net6" in their network manifest just as they would with a dynamic backend.
When a Pod connects to a network with static NetworkType but containing allocation subnets, and explicitly asks for an "ip", and/or "ip6" address from DANM in its annotation; DANM overwrites the "ipam" section coming from the static config with its own, dynamically allocated address.
If a Pod does not ask DANM to allocate an IP, or the network does not define the necessary parameters; the delegation automatically falls back to the "ipam" defined in the static config file.
**Note**: DANM can only integrate static backends to its flexible IPAM if the CNI itself is fully compliant to the standard, i.e. uses the plugin defined in the "ipam" section of its configuration. It is the administrator's responsibility to configure the DANM management APIs according to the capabilities of every CNI!
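
Putting the rules above together, a static backend could be wired into DANM's IPAM roughly like this (a sketch: "flannel" stands in for any static-level, standard-compliant CNI, and all values are illustrative):

```yaml
apiVersion: danm.k8s.io/v1
kind: ClusterNetwork
metadata:
  name: static-net
spec:
  NetworkID: example_network # selects <CNI_CONF_DIR>/example_network.conf
  NetworkType: flannel       # a static-level CNI backend
  Options:
    cidr: 10.20.0.0/24       # defining a subnet enables the "ipam" overwrite
```

A Pod connecting to this network and explicitly asking for "ip":"dynamic" in its annotation would get a DANM-managed address; without the "ip" request (or without the cidr/net6 attributes) the delegation falls back to the "ipam" plugin defined in the static config file.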
##### IPv6 and dual-stack support
DANM's IPAM module, and its integration to dynamic backends -IPVLAN, MACVLAN, and SR-IOV CNIs- support both IPv6, and dual-stack (one IPv4, and one IPv6 address provisioned to the same interface) addresses!
To configure an IPv6 CIDR for a network, network administrators shall configure the "net6" attribute. Additionally, IP routes for IPv6 subnets can be configured via "routes6".
That being said, network administrators using IPv6, or dual-stack features need to keep the following restrictions in mind:
* the smallest supported IPv6 subnet is /64
* allocation pools cannot be defined for IPv6 subnets

This feature is generally supported the same way even for static CNI backends! However, DANM cannot guarantee that every specific backend is compatible and comfortable with both IPv6, and dual IPs allocated by an IPAM.
Therefore, it is the administrator's responsibility to configure the DANM management APIs according to the capabilities of every CNI!
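
A dual-stack network could be declared roughly as follows (a sketch; the apiVersion, device name, addresses, and the routes6 map syntax are illustrative assumptions - note that net6 obeys the /64 restriction above, and defines no allocation pool):

```yaml
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: dual-stack
spec:
  NetworkID: ds
  NetworkType: ipvlan
  Options:
    host_device: ens4
    cidr: 10.30.0.0/24    # IPv4 side of the dual-stack allocation
    net6: 2001:db8:1::/64 # IPv6 side; the smallest supported subnet is /64
    routes6:              # hypothetical destination -> gateway map for IPv6 routes
      2001:db8:2::/64: 2001:db8:1::1
```

Pods connecting to such a network get one IPv4 and one IPv6 address provisioned to the same interface.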
#### DANM IPVLAN CNI
DANM's IPVLAN CNI uses the Linux kernel's IPVLAN module to provision high-speed, low-latency network interfaces for applications which need better performance than a bridge (or any other overlay technology) can provide.

21 changes: 8 additions & 13 deletions schema/ClusterNetwork.yaml
metadata:
spec:
# This parameter provides a second identifier for ClusterNetworks, and can be used to control a number of API features.
# For static delegates, the parameter configures which CNI configuration file is to be used if NetworkType points to a static-level CNI backend.
# VxLAN host interfaces are suffixed, while VLAN host interfaces are prefixed with the NetworkID.
# This allows deployment administrators to separate their own interfaces from others' in a multi-tenant environment, i.e. by setting NetworkID to "name_namespace" value.
# OPTIONAL - STRING, MAXIMUM 11 CHARACTERS
NetworkID: ## NETWORK_ID ##
# This parameter denotes which backend is used to provision the container interface connected to this network.
# Currently supported values with dynamic integration level are IPVLAN (default), SRIOV, or MACVLAN.
# - IPVLAN option results in an IPVLAN sub-interface provisioned in L2 mode, and connected to the designated host device
# - SRIOV option pushes a pre-allocated Virtual Function of the configured host device to the container's netns
# - MACVLAN option results in a MACVLAN sub-interface provisioned in bridge mode, and connected to the designated host device
# Setting this option to another value results in delegating the network provisioning operation to the named backend with static configuration (i.e. coming from a standard CNI config file).
# The default IPVLAN backend is used when this parameter is not specified.
# OPTIONAL - ONE OF {ipvlan,sriov,macvlan,<NAME_OF_ANY_STATIC_LEVEL_CNI_COMPLIANT_BINARY>}
# DEFAULT VALUE: ipvlan
# - K8S_NAMESPACE1
# - K8S_NAMESPACE2
# Specific extra configuration options can be passed to the network provisioning backends.
# Most of the parameters are generally supported for all network types.
# Options only supported for dynamic level backends, such as IPVLAN, MACVLAN, and SRIOV are explicitly noted.
Options:
# Name of the parent host device (i.e. physical host NIC).
# Sub-interfaces are connected to this NIC in case NetworkType is set to IPVLAN, or MACVLAN.
# OPTIONAL - STRING
device_pool: ## DEVICE_PLUGIN_RESOURCE_POOL_NAME ##
# The IPv4 CIDR notation of the subnet associated with the network.
# Pods connecting to this network get an IPv4 address for their interface from this subnet.
# OPTIONAL - IPv4 CIDR FORMAT (e.g. "10.0.0.0/24")
cidr: ## SUBNET_CIDR ##
# IPv4 allocation will be done according to the narrowed down allocation pool parameter, if defined.
# Allocation pool must be provided together with "cidr", and shall be included in the subnet range.
# If CIDR is provided without defining an allocation pool, it is automatically calculated for the whole netmask (minus the first, and the last IP).
# The gateway IPs of all the configured IP routes are also automatically reserved from the allocation pool when it is generated.
# When the network administrator sets the allocation pool, DANM assumes the non-usable IPs (e.g. broadcast IP, gateway IPs etc.) were already discounted.
allocation_pool:
start: ## FIRST_ASSIGNABLE_IP ##
end: ## LAST_ASSIGNABLE_IP ##
# The IPv6 CIDR notation of the subnet associated with the network.
# Pods connecting to this network will get their IPv6s from this subnet, if defined.
# OPTIONAL - IPv6 CIDR FORMAT (e.g. "2001:db8::/45"). Netmask of the subnet cannot be higher than /64 (i.e. /65 and upwards).
net6: ## SUBNET_CIDR ##
# Interfaces connected to this network are renamed inside the Pod's network namespace to a string starting with "container_prefix".
# If not provided, DANM uses "eth" as the prefix.
