
Commit

Website update for main
antrea-bot committed Jul 18, 2024
1 parent 1e5a86a commit b7e9a21
Showing 13 changed files with 82 additions and 84 deletions.
4 changes: 2 additions & 2 deletions content/docs/main/docs/antrea-ipam.md
@@ -278,8 +278,8 @@ where the underlay router will route the traffic to the destination VLAN.
### Requirements for this Feature

As of now, this feature is supported on Linux Nodes, with IPv4, `system` OVS datapath
type, `noEncap`, `noSNAT` traffic mode, and Antrea Proxy enabled. Configuration
with `proxyAll` enabled is not verified.

The IPs in the `IPPools` without VLAN must be in the same underlay subnet as the Node
IP, because inter-Node traffic of AntreaIPAM Pods is forwarded by the Node network.
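
For context, a sketch of an `antrea-agent.conf` fragment consistent with these requirements (the ConfigMap layout and option placement are assumptions, not taken from this commit):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: antrea-config
  namespace: kube-system
data:
  antrea-agent.conf: |
    featureGates:
      # AntreaIPAM enables the feature itself; Antrea Proxy is enabled by default.
      AntreaIPAM: true
    # Only verified with `noEncap` and `noSNAT` traffic modes on Linux/IPv4.
    trafficEncapMode: noEncap
    noSNAT: true
```
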
10 changes: 5 additions & 5 deletions content/docs/main/docs/antrea-network-policy.md
@@ -1490,7 +1490,7 @@ Kubernetes](https://kubernetes.io/docs/concepts/services-networking/dns-pod-serv
Services. The reason is that Antrea will use the information included in A or
AAAA DNS records to implement FQDN-based policies. In the case of "normal" (not
headless) Services, the DNS name resolves to the ClusterIP for the Service, but
policy rules are enforced after Antrea Proxy Service Load-Balancing and at that
stage the destination IP address has already been rewritten to the address of an
endpoint backing the Service. For headless Services, a ClusterIP is not
allocated and, assuming the Service has a selector, the DNS server returns A /
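
To make the FQDN mechanism concrete, a minimal sketch of an Antrea ClusterNetworkPolicy egress rule matching a DNS name (the policy name, selector, and wildcard are illustrative assumptions):

```yaml
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-fqdn-example       # hypothetical name
spec:
  priority: 5
  appliedTo:
    - podSelector:
        matchLabels:
          app: client           # hypothetical selector
  egress:
    - action: Allow
      to:
        # Matched using the IPs from A/AAAA DNS records, as described above.
        - fqdn: "*.example.com"
```
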
@@ -1571,8 +1571,8 @@ A combination of Service name and Service Namespace can be used in `toServices`
by this field. A sample policy can be found [here](#acnp-for-toservices-rule).

Since `toServices` represents a combination of IP+port, it cannot be used with `to` or `ports` within the same egress rule.
Also, since the matching process relies on the groupID assigned to Service by Antrea Proxy, this field can only be used when
Antrea Proxy is enabled.
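
A minimal sketch of the shape such a rule takes (the names here are illustrative; the full sample policy is linked above):

```yaml
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-to-services-sketch   # hypothetical name
spec:
  priority: 5
  appliedTo:
    - podSelector:
        matchLabels:
          app: client             # hypothetical selector
  egress:
    - action: Drop
      # No `to` or `ports` peers here: `toServices` already encodes IP+port.
      toServices:
        - name: web-service       # hypothetical Service
          namespace: default
```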

This clusterIP-based match has one caveat: direct access to the Endpoints of this Service is not affected by
`toServices` rules. To restrict access towards backend Endpoints of a Service, define a `ClusterGroup` with `ServiceReference`
@@ -1952,11 +1952,11 @@ Similar RBAC is applied to the ClusterGroup resource.
won't be blocked by new rules.
- For hairpin Service traffic, when a Pod initiates traffic towards the Service it
provides, and the same Pod is selected as the Endpoint, NetworkPolicies will
consistently permit this traffic during ingress enforcement if Antrea Proxy is enabled,
irrespective of the ingress rules defined by the user. In the presence of ingress rules
preventing access to the Service from Pods providing the Service, accessing the Service
from one of these Pods will succeed if traffic is hairpinned back to the source Pod, and
will fail if a different Endpoint is selected by Antrea Proxy. However, when Antrea Proxy
is disabled, NetworkPolicies may not function as expected for hairpin Service traffic.
This is due to kube-proxy performing SNAT, which conceals the original source IP from
Antrea. Consequently, NetworkPolicies are unable to differentiate between hairpin
60 changes: 30 additions & 30 deletions content/docs/main/docs/antrea-proxy.md
@@ -1,10 +1,10 @@
# Antrea Proxy

## Table of Contents

<!-- toc -->
- [Introduction](#introduction)
- [Antrea Proxy with proxyAll](#antrea-proxy-with-proxyall)
- [Removing kube-proxy](#removing-kube-proxy)
- [Windows Nodes](#windows-nodes)
- [Configuring load balancer mode for external traffic](#configuring-load-balancer-mode-for-external-traffic)
@@ -16,46 +16,46 @@

## Introduction

Antrea Proxy was first introduced in Antrea v0.8 and has been enabled by default
on all platforms since v0.11. Antrea Proxy enables some or all of the cluster's
Service traffic to be load-balanced as part of the OVS pipeline, instead of
depending on kube-proxy. We typically observe latency improvements for Service
traffic when Antrea Proxy is used.

While Antrea Proxy can be disabled on Linux Nodes by setting the `AntreaProxy`
Feature Gate to `false`, it should remain enabled on all Windows Nodes, as it is
needed for correct NetworkPolicy implementation for Pod-to-Service traffic.
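
If you do want to disable it on Linux Nodes, a sketch of the corresponding `antrea-config` edit follows (the ConfigMap layout is an assumption, mirroring the examples later in this document):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: antrea-config
  namespace: kube-system
data:
  antrea-agent.conf: |
    featureGates:
      # Linux only: keep the gate enabled on Windows Nodes.
      AntreaProxy: false
```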

By default, Antrea Proxy will only handle Service traffic originating from Pods
in the cluster, with no support for NodePort. However, starting with Antrea
v1.4, a new operating mode was introduced in which Antrea Proxy can handle all
Service traffic, including NodePort. See the following
[section](#antrea-proxy-with-proxyall) for more information.

## Antrea Proxy with proxyAll

The `proxyAll` configuration parameter can be enabled in the Antrea
configuration if you want Antrea Proxy to handle all Service traffic, with the
possibility to remove kube-proxy altogether and have one less DaemonSet running
in the cluster. This is particularly interesting on Windows Nodes, since until
the introduction of `proxyAll`, Antrea relied on userspace kube-proxy, which is
no longer actively maintained by the K8s community and is slower than other
kube-proxy backends.
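
A sketch of what enabling it looks like in the `antrea-config` ConfigMap (layout assumed, mirroring the `skipServices` and `proxyLoadBalancerIPs` examples later in this document):

```yaml
data:
  antrea-agent.conf: |
    antreaProxy:
      # Handle all Service traffic in OVS, including NodePort.
      proxyAll: true
```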

Note that on Linux, before Antrea v2.1, when `proxyAll` is enabled, kube-proxy
will usually take priority over Antrea Proxy and will keep handling all kinds of
Service traffic (unless the source is a Pod, which is pretty unusual as Pods
typically access Services by ClusterIP). This is because kube-proxy rules typically
come before the rules installed by Antrea Proxy to redirect traffic to OVS. When
kube-proxy is not deployed or is removed from the cluster, Antrea Proxy will then
handle all Service traffic.

Starting with Antrea v2.1, when `proxyAll` is enabled, Antrea Proxy will handle
Service traffic destined to NodePort, LoadBalancerIP and ExternalIP, even if
kube-proxy is present. This benefits users who want to take advantage of
Antrea Proxy's advanced features, such as Direct Server Return (DSR) mode, but
lack control over kube-proxy's installation. This is accomplished by
prioritizing the rules installed by Antrea Proxy over those installed by
kube-proxy, thus it works only with kube-proxy iptables mode. Support for other
kube-proxy modes may be added in the future.

@@ -141,7 +141,7 @@ kube-proxy:

Starting with Antrea v1.13, the `defaultLoadBalancerMode` configuration
parameter and the `service.antrea.io/load-balancer-mode` Service annotation
can be used to specify how you want Antrea Proxy to handle external traffic
destined to LoadBalancerIPs and ExternalIPs of Services. Specifically, the mode
determines how external traffic is processed when it's load balanced across
Nodes. Currently, it has two options: `nat` (default) and `dsr`.
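
Both knobs are sketched below; the Service name and ports are illustrative, and the `antrea-agent.conf` placement is an assumption:

```yaml
# Cluster-wide default, in antrea-agent.conf:
antreaProxy:
  defaultLoadBalancerMode: dsr
---
# Per-Service override via the annotation:
apiVersion: v1
kind: Service
metadata:
  name: web                       # hypothetical Service
  annotations:
    service.antrea.io/load-balancer-mode: nat
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```
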
@@ -209,8 +209,8 @@ configured to bind to that address and can therefore intercept queries. In case
of a cache miss, queries can be sent to the cluster CoreDNS Pods thanks to a
"shadow" Service which will expose CoreDNS Pods via a new ClusterIP.

When Antrea Proxy is enabled (default), Pod DNS queries to the kube-dns ClusterIP
will be load-balanced directly by Antrea Proxy to a CoreDNS Pod endpoint. This
means that NodeLocal DNSCache is completely bypassed, which is probably not
acceptable for users who want to leverage this feature to improve DNS
performance in their clusters. While these users can update the Pod
@@ -219,8 +219,8 @@ interface, this is not always ideal in the context of CaaS, as it can require
everyone running Pods in the cluster to be aware of the situation.

This is the reason why we initially introduced the `skipServices` configuration
option for Antrea Proxy in Antrea v1.4. By adding the kube-dns Service (which
exposes CoreDNS) to the list, you can ensure that Antrea Proxy will "ignore" Pod
DNS queries, and that they will be forwarded to NodeLocal DNSCache. You can edit
the `antrea-config` ConfigMap as follows:
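
(A sketch of this edit, assuming the kube-dns Service lives in the `kube-system` Namespace:)

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: antrea-config
  namespace: kube-system
data:
  antrea-agent.conf: |
    antreaProxy:
      # Antrea Proxy will bypass these Services, letting Pod DNS queries
      # reach NodeLocal DNSCache instead.
      skipServices:
        - kube-system/kube-dns
```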

@@ -241,11 +241,11 @@ data:
In some cases, the external LoadBalancer for a cluster provides additional
capabilities (e.g., TLS termination) and it is desirable for Pods to access
in-cluster Services through the external LoadBalancer. By default, this is not
the case as both kube-proxy and Antrea Proxy will install rules to load-balance
this traffic directly at the source Node (even when the destination IP is set to
the external `loadBalancerIP`). To circumvent this behavior, we introduced the
`proxyLoadBalancerIPs` configuration option for Antrea Proxy in Antrea v1.5. This
option defaults to `true`, but when it is set to `false`, Antrea Proxy will no
longer load-balance traffic destined to external `loadBalancerIP`s, hence
ensuring that this traffic can go to the external LoadBalancer. You can set it
to `false` by editing the `antrea-config` ConfigMap as follows:
@@ -262,7 +262,7 @@ data:
proxyLoadBalancerIPs: false
```

With the above configuration, Antrea Proxy will ignore all external `loadBalancerIP`s.
Starting with K8s v1.29, feature [LoadBalancerIPMode](https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-ip-mode)
was introduced, providing users with a more fine-grained mechanism to control how
every external `loadBalancerIP` behaves in a LoadBalancer Service.
@@ -274,12 +274,12 @@ every external `loadBalancerIP` behaves in a LoadBalancer Service.
destined for the corresponding external `loadBalancerIP` should be sent to the
external LoadBalancer.

Starting with Antrea v2.0, Antrea Proxy will respect `LoadBalancerIPMode` in LoadBalancer
Services when the configuration option `proxyLoadBalancerIPs` is set to `true`
(default). In this case, Antrea Proxy will serve only the external `loadBalancerIP`s
configured with `LoadBalancerIPModeVIP`, and those configured with
`LoadBalancerIPModeProxy` will bypass Antrea Proxy. If the configuration option
`proxyLoadBalancerIPs` is set to `false`, Antrea Proxy will ignore the external
`loadBalancerIP`s even if configured with `LoadBalancerIPModeVIP`.
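
For illustration, the mode appears in the Service status, which is normally written by the cloud provider's load balancer controller (the IP below is a documentation address):

```yaml
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10    # illustrative external loadBalancerIP
        # `VIP`-mode IPs are load-balanced by Antrea Proxy;
        # `Proxy`-mode IPs bypass it.
        ipMode: Proxy
```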

There are two important prerequisites for this feature:
2 changes: 1 addition & 1 deletion content/docs/main/docs/api-reference.html
@@ -11761,5 +11761,5 @@ <h3 id="system.antrea.io/v1beta1.BundleStatus">BundleStatus
<hr/>
<p><em>
Generated with <code>gen-crd-api-reference-docs</code>
on git commit <code>63b8117</code>.
</em></p>
6 changes: 3 additions & 3 deletions content/docs/main/docs/design/architecture.md
@@ -205,7 +205,7 @@ so their source IP will be rewritten to the Node's IP before going out.
### ClusterIP Service

Antrea supports two ways to implement Services of type ClusterIP - leveraging
`kube-proxy`, or Antrea Proxy that implements load balancing for ClusterIP
Service traffic with OVS.

When leveraging `kube-proxy`, Antrea Agent adds OVS flows to forward the packets
@@ -222,12 +222,12 @@ the tunnel.
See the [Kubernetes Service Proxies documentation](https://kubernetes.io/docs/reference/networking/virtual-ips)
for more details.

When Antrea Proxy is enabled, Antrea Agent will add OVS flows that implement
load balancing and DNAT for the ClusterIP Service traffic. In this way, Service
traffic load balancing is done inside OVS together with the rest of the
forwarding, and it can achieve better performance than using `kube-proxy`, as
there is no extra overhead of forwarding Service traffic to the host's network
stack and iptables processing. The Antrea Proxy implementation in Antrea Agent
leverages some `kube-proxy` packages to watch and process Service Endpoints.

### NetworkPolicy
17 changes: 9 additions & 8 deletions content/docs/main/docs/design/ovs-pipeline.md
@@ -257,7 +257,7 @@ Like K8s NetworkPolicy, several tables of the pipeline are dedicated to [Kuberne
Service](https://kubernetes.io/docs/concepts/services-networking/service/) implementation (tables [NodePortMark],
[SessionAffinity], [ServiceLB], and [EndpointDNAT]).
By enabling `proxyAll`, ClusterIP, NodePort, LoadBalancer, and ExternalIP are all handled by Antrea Proxy. Otherwise,
only in-cluster ClusterIP is handled. In this document, we use the sample K8s Services below. These Services select Pods
with the label `app: web` as Endpoints.
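
A minimal sketch of one such Service (the name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web           # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: web          # selects the `app: web` Pods mentioned above
  ports:
    - port: 80
      targetPort: 8080
```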

@@ -788,12 +788,13 @@ Flow 1 is for case 1, matching packets received on the local Antrea gateway port
addresses. There are some cases where the source IP of the packets through the local Antrea gateway port is not the local
Antrea gateway IP address:

- When Antrea is deployed with kube-proxy, and the feature `AntreaProxy` is not enabled, packets from local Pods destined
for Services will first go through the gateway port, get load-balanced by the kube-proxy data path (undergoes DNAT)
then re-enter the OVS pipeline through the gateway port (through an "onlink" route, installed by Antrea, directing the
DNAT'd packets to the gateway port), resulting in the source IP being that of a local Pod.
- When Antrea is deployed without kube-proxy, and both the feature `AntreaProxy` and option `proxyAll` are enabled,
packets from the external network destined for Services will be routed to OVS through the gateway port without
masquerading source IP.
- When Antrea is deployed with kube-proxy, packets from the external network destined for Services whose
`externalTrafficPolicy` is set to `Local` will get load-balanced by the kube-proxy data path (undergoes DNAT with a
local Endpoint selected by the kube-proxy) and then enter the OVS pipeline through the gateway (through an "onlink"
@@ -1303,7 +1304,7 @@ through the local Antrea gateway. In other words, these are connections for whic
(SYN packet for TCP) was received through the local Antrea gateway. It rewrites the destination MAC address to
that of the local Antrea gateway, loads `ToGatewayRegMark`, and forwards them to table [L3DecTTL]. This ensures that
reply packets can be forwarded back to the local Antrea gateway in subsequent tables. This flow is required to handle
the following cases when Antrea Proxy is not enabled:

- Reply traffic for connections from a local Pod to a ClusterIP Service, which are handled by kube-proxy and go through
DNAT. In this case, the destination IP address of the reply traffic is the Pod which initiated the connection to the
2 changes: 1 addition & 1 deletion content/docs/main/docs/design/windows-design.md
@@ -210,7 +210,7 @@ Kube-proxy userspace mode is configured to provide NodePort Service function. A
"HNS Internal NIC" is provided to kube-proxy to configure Service addresses. The OpenFlow entries for the
NodePort Service traffic on Windows are the same as those on Linux.

Antrea Proxy implements the ClusterIP Service function. Antrea Agent installs routes to send ClusterIP Service
traffic from host network to the OVS bridge. For each Service, it adds a route that routes the traffic via a
virtual IP (169.254.0.253), and it also adds a route to indicate that the virtual IP is reachable via
antrea-gw0. The reason to add a virtual IP, rather than routing the traffic directly to antrea-gw0, is that