- The physical layer
- The data-link layer
- The network layer
- The transport layer (Load Balancer)
- The session layer
- The presentation layer
- The application layer (Application Gateway)
- Used for geo-routing.
- It provides DNS-based routing to redirect end-user traffic to globally distributed endpoints.
- OSI Layer 4 (TCP & UDP) load balancer that distributes inbound traffic to backend pools (resources) according to rules and health probes.
- For application-layer processing, see Application Gateway (e.g. SSL off-load)
- For DNS load balancer (geo-based) see Traffic Manager
- In the underlying Azure infrastructure there's always a load balancer.
- No dedicated instance is actually created for you; capacity is always available.
- Front-end <=> Load balancer => (through load balancing rules and health probes) Back-end pools (includes VMs)
- Features
- Instant scaling for applications
- Reliability via health checks performed by health probes.
- Secure: You can add NAT
- Network Address Translation (NAT): remaps one IP address (and port) to another before traffic reaches the VM.
- Lowers the attack surface of the VMs, which otherwise could be attacked directly.
- Port forwarding => maps a front-end IP and port to a port of a back-end VM inside the VNet (see the sketch after this list).
- Agnostic & transparent => the endpoint is answered only by the VM (the VM performs the TCP handshake).
- There are quick-start ARM templates, e.g. an internal load balancer with 2 VMs.
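- 💡 As a conceptual sketch only (hypothetical IPs and ports, no Azure API involved), port forwarding / inbound NAT is just a one-to-one mapping from a front-end IP and port to a back-end VM's private IP and port:

```python
# Conceptual sketch of port forwarding (inbound NAT) on a load balancer.
# All IPs and ports below are hypothetical; no Azure API is used.

# Each NAT rule maps one front-end (public IP, port) to one back-end VM's
# (private IP, port), e.g. to reach a specific VM over RDP/SSH.
nat_rules = {
    ("40.1.2.3", 50001): ("10.0.0.4", 3389),  # -> VM1
    ("40.1.2.3", 50002): ("10.0.0.5", 3389),  # -> VM2
}

def forward(frontend_ip: str, frontend_port: int) -> tuple[str, int]:
    """Return the private (ip, port) the incoming packet is rewritten to."""
    return nat_rules[(frontend_ip, frontend_port)]

print(forward("40.1.2.3", 50001))  # ('10.0.0.4', 3389)
```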
- Internet-facing load balancer
- Maps public IP + port of incoming traffic <=> private IP + port.
- E.g. TCP port 80 <=> VMs in the web tier subnet of a multi-tiered architecture.
- Internal load balancer
- Can be used together with a public load balancer to create a multi-tier application.
- Routes traffic between VMs inside private VNets, or VNets that use VPN access to Azure.
- Good for:
- Handling communication within same VNet.
- Hybrid scenarios: on-premises to VMs in the same VNet
- Multi-tiered applications
- E.g. an internal load balancer can receive DB requests that need to be distributed to back-end SQL servers.
- Critical (line-of-business) applications.
- Front-end IPs and VNets are never directly exposed to the internet.
- Accessed from within Azure or from on-premises resources.
- Can be used to create multi-tiered hybrid applications.
- Load balancing rules determine how traffic is distributed to the back-end.
- E.g. you can spread the load of incoming web request traffic across multiple web servers.
- Settings (illustrated in the sketch below): port, protocol, IP version, back-end port, back-end pool, health probe, session persistence, floating IP (enable/disable)
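- 💡 Purely as an illustration of the settings above (a plain dataclass, not the Azure SDK; names and values are my own), a load balancing rule could be modeled like this:

```python
# Illustrative model of a load balancing rule and its settings (not the Azure SDK).
from dataclasses import dataclass

@dataclass
class LoadBalancingRule:
    name: str
    protocol: str              # e.g. "Tcp" or "Udp"
    ip_version: str            # "IPv4" or "IPv6"
    frontend_port: int         # port clients connect to
    backend_port: int          # port traffic is sent to on the VMs
    backend_pool: str          # name of the back-end address pool
    health_probe: str          # probe that gates this rule
    session_persistence: str   # e.g. none, client IP, or client IP + protocol
    floating_ip: bool = False  # enable for back-end port reuse / DSR

web_rule = LoadBalancingRule(
    name="http-rule", protocol="Tcp", ip_version="IPv4",
    frontend_port=80, backend_port=80, backend_pool="web-pool",
    health_probe="http-probe", session_persistence="None")
print(web_rule.backend_pool)  # web-pool
```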
- Back-end server pool
- IP addresses of back-end servers.
- Back-ends cannot span two VNets; they must all be in the same VNet.
- The front-end port is called a listener and can have an SSL certificate.
- The front-end and back-end pool are connected with rules.
- The front-end is associated with back-end VMs through the rule definition.
- Rules reference a health probe for the target VMs.
- You can set multiple front-ends
- Load balance on multiple ports, multiple IP addresses, or both.
- Default rule with no back-end port reuse
- Each rule must have a flow with a unique IP + port.
- Multiple rules can distribute flows to the same DIP on different ports.
- DIP = destination IP of the VM
- Example:

| Rule | Map frontend | To backend pool |
| ---- | ------------ | --------------- |
| 1 | 💚 Frontend1:80 | 💚 DIP1:80, 💚 DIP2:80 |
| 2 | 💜 Frontend2:80 | 💜 DIP1:81, 💜 DIP2:81 |
- Back-end port reuse by using Floating IP
- Use it if you want to reuse a back-end port across multiple rules.
- Good for: e.g. clustering for high availability, network virtual appliances, and exposing multiple TLS endpoints without re-encryption.
- Example:

| Rule | Map frontend | To backend pool |
| ---- | ------------ | --------------- |
| 1 | 💚 Frontend1:80 | 💚 DIP1:80, 💚 DIP2:80 |
| 2 | 💜 Frontend2:80 | 💜 DIP1:80, 💜 DIP2:80 |

- A Floating IP rule allows back-end ports to be reused.
- It must be enabled explicitly.
- It's part of DSR (Direct Server Return).
- Flow topology
- The outbound part of a flow is always correctly rewritten to flow directly back to the origin.
- The load balancer operates this way at the platform level.
- IP address mapping scheme
- By changing how the destination IP is mapped, port reuse on the same VM becomes possible (see the sketch below).
- Flow topology
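- 💡 A conceptual sketch of that mapping difference (hypothetical IPs, my own simplification, not Azure's implementation): without Floating IP the destination is rewritten to the VM's DIP and back-end port, so two rules cannot both use the same port on the same DIP; with Floating IP the front-end IP stays as the destination, so the back-end port can be reused:

```python
# Sketch of how the destination is mapped with and without Floating IP.
# Hypothetical addresses; simplification of the behavior described above.

def destination_seen_by_vm(frontend_ip, frontend_port, dip, backend_port,
                           floating_ip=False):
    if floating_ip:
        # DSR-style: the VM answers on the front-end IP (configured on a
        # loopback interface inside the guest), so port 80 can be reused
        # per distinct front-end IP.
        return (frontend_ip, frontend_port)
    # Default: destination rewritten to the VM's DIP and back-end port.
    return (dip, backend_port)

print(destination_seen_by_vm("52.0.0.1", 80, "10.0.0.4", 80))                    # ('10.0.0.4', 80)
print(destination_seen_by_vm("52.0.0.2", 80, "10.0.0.4", 80, floating_ip=True))  # ('52.0.0.2', 80)
```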
- NAT rule: must be explicitly attached to a VM (or its network interface) to complete the path to the target.
- Load balancing rule
- The VM is selected from the back-end address pool of VMs.
- 📝 Use a NAT rule when you have a single back-end server (or know exactly which back-end server to reach), and a load balancing rule when you want to load-balance across multiple back-end servers.
- 📝 Session persistence provides stickiness to the same VM.
- 📝 A session is a 5-tuple hash of: source IP, source port, destination IP, destination port (public port), and protocol (optional).
- Settings
- When adding a rule, you can select either client IP only, or client IP and protocol (both sketched below).
- Idle timeout: 4 minutes by default; within it, idle HTTP(S)/TCP sessions keep getting responses from the same server, but there is no guarantee that the connection is maintained.
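- 💡 A toy sketch of hash-based distribution and persistence (my own hash choice, not Azure's algorithm; back-end IPs are hypothetical): the default 5-tuple key changes with every new source port, while a client-IP key keeps one client on one VM:

```python
# Conceptual sketch of hash-based back-end selection (not Azure's actual hash).
import hashlib

backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol,
                 persistence="None"):
    if persistence == "ClientIP":             # stickiness per client IP
        key = (src_ip, dst_ip)
    elif persistence == "ClientIPProtocol":   # client IP + protocol
        key = (src_ip, dst_ip, protocol)
    else:                                     # default 5-tuple hash
        key = (src_ip, src_port, dst_ip, dst_port, protocol)
    digest = hashlib.sha256(repr(key).encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

# Same client, new source port: may land on a different VM with the 5-tuple...
print(pick_backend("203.0.113.7", 50123, "52.0.0.1", 80, "TCP"))
print(pick_backend("203.0.113.7", 50999, "52.0.0.1", 80, "TCP"))
# ...but always on the same VM with client-IP persistence.
print(pick_backend("203.0.113.7", 50123, "52.0.0.1", 80, "TCP", "ClientIP"))
print(pick_backend("203.0.113.7", 50999, "52.0.0.1", 80, "TCP", "ClientIP"))
```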
- Health probes allow resilience and scalability.
- The health probe dynamically adds or removes VMs from the load balancer rotation based on their response to health checks (sketched after this list).
- Types:
- HTTP: HTTP 200 => healthy (timeout: 31 seconds)
- TCP: healthy if the connection is accepted, unhealthy if it's refused
- Guest probe: runs inside the VM; not recommended
- Standard SKU sends health probe status as metrics through Azure Monitor.
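- 💡 A minimal sketch (no real probing, hypothetical IPs) of how probe results drive the rotation: only a healthy response keeps a VM in the set that rules can send traffic to:

```python
# Conceptual sketch of probe-driven rotation (no real Azure probing here).
from dataclasses import dataclass

@dataclass
class Backend:
    ip: str
    healthy: bool = True

def apply_probe_results(backends, probe_results):
    """probe_results maps ip -> HTTP status (or None for a refused/timed-out TCP probe).
    Only a healthy result (HTTP 200 here) keeps a VM in rotation."""
    for b in backends:
        b.healthy = probe_results.get(b.ip) == 200
    return [b for b in backends if b.healthy]   # current rotation

pool = [Backend("10.0.0.4"), Backend("10.0.0.5"), Backend("10.0.0.6")]
rotation = apply_probe_results(pool, {"10.0.0.4": 200, "10.0.0.5": 503, "10.0.0.6": None})
print([b.ip for b in rotation])  # ['10.0.0.4']
```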
- HA ports: a variant of a load balancing rule for internal load balancers (sketched after this list).
- A single rule load-balances all TCP and UDP flows that arrive on all ports of an internal load balancer.
- Non-floating IP: you cannot add more rules.
- Floating IP: can have the same back-end instance or multiple back-ends.
- Not mutable; changing it requires a redeploy.
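- 💡 A hedged sketch of what an HA ports rule means in practice (plain dictionaries, not ARM or the SDK; my assumption is the common port-0 plus protocol-"All" representation): one rule matches every port and both TCP and UDP:

```python
# Sketch: a normal rule vs. an HA ports rule (plain dicts, not ARM/SDK).
# Assumption: HA ports is represented as front-end/back-end port 0 + protocol "All".
normal_rule = {"protocol": "Tcp", "frontend_port": 80, "backend_port": 80}
ha_ports_rule = {"protocol": "All", "frontend_port": 0, "backend_port": 0}

def rule_covers(rule, protocol, port):
    """True if the rule load-balances a flow arriving with this protocol/port."""
    protocol_ok = rule["protocol"] in ("All", protocol)
    port_ok = rule["frontend_port"] in (0, port)  # 0 = all ports (HA ports)
    return protocol_ok and port_ok

print(rule_covers(normal_rule, "Udp", 514))    # False
print(rule_covers(ha_ports_rule, "Udp", 514))  # True
```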
- Basic SKU
- Free
- Subset of Standard
- HA ports are not available
- Standard SKU
- 💡 New applications should use Standard
- More flexible & larger backend pools
- Available only in Standard: Outbound rules (public IP, NAT, HTTPS), SLA
- When the back-end pool is probed down:
- Standard allows established TCP connections to continue.
- Basic terminates all connections.
- Application Delivery Controller (ADC) as a service
- Web traffic load balancer at "Layer 7 – Application"
- As opposed to Load Balancer that operates at "Layer 4 – TCP and UDP"
- Application gateway can be configured as
- Internet-facing gateway
- Internal-only gateway
- Combination of both
- You can configure more than one web site on same instance
- ❗ Up to 20 web sites.
- Each website is directed to its own backend pool of servers.
- Implementation: 2 back-end server pools, 2 listeners (of type multi-site), and 2 routing rules.
- Other features
- Redirection
- From one port to another port => enables HTTP => HTTPS redirection and more (e.g. redirecting to an external site)
- Session affinity
- Keep a user session on the same server
- WebSockets & HTTP/2 traffic, rewrite HTTP headers
- Frontend IP Configuration <=> Application Gateway (has WAF and is L7 LB) <=> Listener (through rules) => Backend Server Pool
- Frontend IP configuration
- Traffic-facing port
- Backend server pool
- The list of IP addresses of the backend servers.
- The IP addresses listed should either belong to the virtual network subnet or be a public IP/VIP.
- Every pool has settings like port, protocol, and cookie-based affinity.
- Listener
- Front-end configuration of the gateway.
- Has a front-end port, a protocol (HTTP or HTTPS), and an SSL certificate name (optional).
- Can terminate SSL (SSL offloading), so that traffic flows unencrypted to the back-end servers.
- Unburdens the back-end servers from encryption & decryption overhead.
- To implement this, upload a certificate and set it on the listener.
- Rule
- Binds the listener and the backend server pool
- Defines which backend server pool the traffic should be directed to when it hits a listener.
- Health probes can be HTTP or HTTPS; by default the default web page is probed, but you can configure a custom URL.
- Path-based rules allow routing based on URL paths, e.g. `/video/*` => video back-end pool, `/picture/*` => picture back-end pool (see the sketch below).
- Rules are processed in the order they are listed
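- 💡 A rough illustration (hypothetical pool names, paths, and IPs; not the Azure SDK) of path-based rules being evaluated in the order they are listed, with the first match choosing the back-end pool:

```python
# Illustrative Application Gateway path-based routing (not the Azure SDK).
import fnmatch

backend_pools = {
    "video-pool":   ["10.0.1.4", "10.0.1.5"],
    "picture-pool": ["10.0.1.6", "10.0.1.7"],
    "default-pool": ["10.0.1.8"],
}

# Path patterns are evaluated in the order they are listed; first match wins.
path_rules = [
    ("/video/*",   "video-pool"),
    ("/picture/*", "picture-pool"),
]

def route(path: str) -> str:
    for pattern, pool in path_rules:
        if fnmatch.fnmatch(path, pattern):
            return pool
    return "default-pool"   # no pattern matched

print(route("/video/intro.mp4"))   # video-pool
print(route("/picture/cat.png"))   # picture-pool
print(route("/index.html"))        # default-pool
```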
- Web application firewall (WAF).
- Centralizes security management of web applications => simpler management.
- Uses rules from OWASP Core Rule Set:
- Protection against: SQL Injection (SQLi), Cross Site Scripting (XSS), Local File Inclusion (LFI), Remote File Inclusion (RFI), PHP Code Injection, Java Code Injection, Shellshock, Unix/Windows Shell Injection, Session Fixation, Scripting/Scanner/Bot Detection, Metadata/Error Leakages
- You can deselect rules or disable the firewall.
- 3 SKUs (Small, Medium, Large) with the following approximate throughput:

| Average back-end page response size | Small | Medium | Large |
| ----------------------------------- | ----- | ------ | ----- |
| 6 KB | 7.5 Mbps | 13 Mbps | 50 Mbps |
| 100 KB | 35 Mbps | 100 Mbps | 200 Mbps |
- Instance count: ensures availability; 💡 minimum of 2 instances recommended for production (a rough throughput-to-requests conversion is sketched below).
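- 💡 As a rough back-of-envelope only (my own arithmetic derived from the table above, not an Azure figure), the throughput numbers can be translated into approximate pages per second for capacity planning:

```python
# Rough back-of-envelope (my own arithmetic, not an Azure figure):
# throughput [Mbps] -> approximate pages/second for a given page size.
def pages_per_second(throughput_mbps: float, page_size_kb: float) -> float:
    bits_per_page = page_size_kb * 1000 * 8     # KB -> bits (decimal units)
    return throughput_mbps * 1_000_000 / bits_per_page

print(round(pages_per_second(13, 6)))     # Medium SKU, 6 KB pages   -> ~271/s
print(round(pages_per_second(100, 100)))  # Medium SKU, 100 KB pages -> ~125/s
```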