Learn Azure Load Balancer step by step with a production-friendly explanation of public and internal load balancer design, Standard vs Basic SKU, frontend IPs, backend pools, health probes, load balancing rules, inbound NAT, outbound SNAT, and real-world traffic flows.
This page is designed for cloud engineers, DevOps teams, infrastructure engineers, interview preparation, and anyone building Azure networking architectures that need resilient Layer 4 traffic distribution.
Azure Load Balancer is a managed Layer 4 service that distributes inbound or outbound traffic across healthy backend resources using TCP and UDP rules.
Azure Load Balancer sits in front of one or more backend targets, such as virtual machines, virtual machine scale sets, or IP-based instances, and spreads network traffic across them. It does not make decisions using URL paths, HTTP headers, cookies, or application-layer behavior. Instead, it works at the transport layer and focuses on reliable traffic distribution, scale, and availability.
In practical terms, it gives you one stable frontend entry point and maps incoming or outgoing traffic to backend resources based on configured rules and probe health. This is a foundational building block for Azure networking, especially when you want to avoid a single backend instance becoming a single point of failure.
The main goal is to improve availability, resilience, and traffic distribution across multiple backend systems.
- If one backend fails, traffic continues to healthy instances instead of the service going down entirely.
- You can add more backend servers and spread traffic across them instead of overloading one instance.
- An Internal Load Balancer distributes east-west traffic between internal application tiers.
- A Public Load Balancer exposes one public IP while keeping the backend architecture flexible and scalable.
- A Standard Load Balancer can also define outbound egress behavior for private backend instances.
- Teams manage one frontend endpoint instead of exposing every backend system directly.
- **What it is:** A managed Azure Layer 4 load balancing service for TCP and UDP traffic distribution.
- **Why use it:** To improve service uptime, spread traffic, reduce single points of failure, and support scalable backend design.
- **When to use it:** When you need network-level balancing, private internal balancing, or public traffic distribution without Layer 7 routing logic.
- **Where it sits:** Between clients and backend resources, or between internal application tiers inside Azure networks.
- **Who uses it:** Cloud engineers, platform teams, DevOps engineers, infrastructure engineers, and solution architects.
- **How it works:** It accepts traffic on a frontend IP, checks configured rules and health probes, and sends traffic to healthy backend resources.
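The accept-check-forward sequence above can be sketched as a toy distribution function. Azure's default distribution mode hashes the flow's five-tuple (source IP, source port, destination IP, destination port, protocol); the specific hash and all names below are illustrative assumptions, not Azure's actual algorithm.

```python
import hashlib

# Illustrative sketch only. Azure hashes the five-tuple internally;
# sha256 here is a stand-in, not the real implementation.
def pick_backend(five_tuple, backends, healthy):
    """Map a flow onto the currently healthy members of the backend pool."""
    candidates = [b for b in backends if healthy.get(b, False)]
    if not candidates:
        raise RuntimeError("no healthy backends in the pool")
    digest = hashlib.sha256("|".join(map(str, five_tuple)).encode()).hexdigest()
    return candidates[int(digest, 16) % len(candidates)]

backends = ["vm1", "vm2", "vm3"]
healthy = {"vm1": True, "vm2": False, "vm3": True}  # vm2 failed its probe

flow = ("203.0.113.7", 54321, "20.0.0.10", 80, "TCP")
target = pick_backend(flow, backends, healthy)
print(target)  # the same flow always maps to the same healthy backend
assert target != "vm2"  # unhealthy instances never receive new flows
```

The key property to notice: health filtering happens before distribution, so an instance pulled by a failed probe simply stops appearing in the candidate set for new flows.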
Azure Load Balancer can be deployed with either a public frontend IP or a private frontend IP depending on the use case.
| Type | Frontend IP | Main purpose | Typical use case |
|---|---|---|---|
| Public Load Balancer | Public IP | Distribute internet-facing traffic | Public apps, TCP services, internet-facing workloads |
| Internal Load Balancer | Private IP | Distribute private internal traffic | Internal APIs, app tiers, private service endpoints |
For modern deployments, Standard SKU is the preferred design choice.
| Feature | Standard | Basic |
|---|---|---|
| Recommended for new deployments | Yes | No |
| Security model | Closed by default | Open by default |
| Production readiness | Strong choice | Legacy scenarios only |
| Availability zone support | Supported (zonal and zone-redundant) | Not supported |
| Outbound rules | Supported | Limited |
| Overall recommendation | Use for almost all new environments | Avoid for new builds |
These are the main building blocks you must understand when designing or troubleshooting a load balancer.
- **Frontend IP configuration:** The address clients connect to. It can be public for internet-facing traffic or private for internal traffic.
- **Backend pool:** The set of backend resources that receive traffic, such as VMs, scale sets, or IP-based instances.
- **Health probe:** Checks whether a backend target is healthy enough to receive traffic; unhealthy targets are removed from rotation.
- **Load balancing rule:** Maps a frontend IP and port to a backend pool and backend port, and ties the flow to a health probe.
- **Inbound NAT rule:** Maps a specific frontend port to a single backend instance and port, often used for controlled admin access.
- **Outbound rule (SNAT):** Defines outbound connectivity behavior and how backend systems use a frontend IP for internet egress.
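The probe behavior described above can be sketched as minimal bookkeeping: a target leaves rotation after a number of consecutive probe failures and returns on the next successful probe. The class and method names are illustrative only; real Azure probes are configured through interval and threshold settings on the probe resource.

```python
# Minimal sketch of health-probe bookkeeping (assumed model, not Azure code).
class ProbeTracker:
    def __init__(self, unhealthy_threshold=2):
        self.unhealthy_threshold = unhealthy_threshold
        self.failures = {}  # backend -> consecutive probe failures
        self.healthy = {}   # backend -> currently in rotation

    def record(self, backend, probe_succeeded):
        if probe_succeeded:
            self.failures[backend] = 0
            self.healthy[backend] = True       # a success restores rotation
        else:
            self.failures[backend] = self.failures.get(backend, 0) + 1
            if self.failures[backend] >= self.unhealthy_threshold:
                self.healthy[backend] = False  # out of rotation

    def is_healthy(self, backend):
        # Targets are assumed healthy until probes prove otherwise.
        return self.healthy.get(backend, True)

tracker = ProbeTracker(unhealthy_threshold=2)
tracker.record("vm2", False)
print(tracker.is_healthy("vm2"))  # True: one failure is below the threshold
tracker.record("vm2", False)
print(tracker.is_healthy("vm2"))  # False: two consecutive failures
tracker.record("vm2", True)
print(tracker.is_healthy("vm2"))  # True: a successful probe restores it
```

This also shows why a misconfigured probe is so damaging: if the probe itself is wrong (bad port, blocked path), every target accumulates failures and the whole pool silently empties.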
The traffic flow becomes simple once you understand the sequence from frontend IP to healthy backend resource.
```
        Internet / Private Client
                   |
                   v
   +----------------------------------+
   |        Azure Load Balancer       |
   |  Frontend IP: Public or Private  |
   +----------------------------------+
                   |
                   |  Load Balancing Rule
                   v
   +----------------------------------+
   |           Backend Pool           |
   |       VM1     VM2     VM3        |
   +----------------------------------+
                   ^
                   |
         Health Probes decide
      which instances are healthy
```
These two concepts are commonly confused, but they solve different traffic problems.
| Feature | Purpose | Example |
|---|---|---|
| Load balancing rule | Distribute traffic across many healthy backend targets | Port 80 on frontend to port 80 on backend pool |
| Inbound NAT rule | Send traffic to one specific backend instance | Frontend 50001 to VM1 port 3389 |
| Outbound rule / SNAT | Allow backend instances to access outbound destinations | Private VMs using shared public frontend IP for egress |
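The contrast in the table can be made concrete with a small routing sketch. The rule structures and the `route` function are hypothetical, invented for this illustration (nothing here is an Azure API): an inbound NAT rule pins one frontend port to one specific VM, while a load balancing rule spreads flows across healthy pool members.

```python
# Hypothetical rule tables mirroring the examples in the table above.
load_balancing_rules = {80: {"pool": ["vm1", "vm2", "vm3"], "backend_port": 80}}
inbound_nat_rules = {50001: ("vm1", 3389), 50002: ("vm2", 3389)}

def route(frontend_port, flow_hash, healthy):
    # An inbound NAT rule maps its frontend port to exactly one instance.
    if frontend_port in inbound_nat_rules:
        return inbound_nat_rules[frontend_port]
    # A load balancing rule distributes across healthy pool members.
    rule = load_balancing_rules[frontend_port]
    pool = [vm for vm in rule["pool"] if healthy.get(vm, False)]
    return (pool[flow_hash % len(pool)], rule["backend_port"])

healthy = {"vm1": True, "vm2": True, "vm3": True}
print(route(50001, 0, healthy))  # ('vm1', 3389): always the same VM
print(route(80, 7, healthy))     # one healthy pool member on port 80
```

Note how health state changes the answer for the load balancing rule but not for the NAT rule: if vm1 goes unhealthy, port 80 traffic shifts elsewhere, while port 50001 traffic has nowhere else to go.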
These examples show where Azure Load Balancer fits well in practical architectures.
- **Public TCP service:** A company runs three backend VMs for a TCP-based service and places a Public Standard Load Balancer in front, so clients use one public IP while traffic is spread across healthy nodes.
- **Internal application tier:** An Internal Load Balancer exposes a private frontend IP used by app servers to reach a cluster of internal API servers inside the same VNet.
- **Controlled outbound access:** VMs have no public IP addresses, but teams still need outbound patching, package downloads, or approved internet access using controlled outbound rules.
- **Targeted admin access:** Inbound NAT rules map unique frontend ports to specific backend VMs for limited RDP or SSH access without exposing every VM directly.
These services are both important, but they solve different problems in Azure networking.
| Feature | Azure Load Balancer | Azure Application Gateway |
|---|---|---|
| OSI layer | Layer 4 | Layer 7 |
| Traffic type | TCP and UDP | HTTP and HTTPS |
| Path-based routing | No | Yes |
| Host-based routing | No | Yes |
| TLS termination | No application-layer termination | Yes |
| WAF support | No | Yes |
| Best use | Fast Layer 4 distribution | Web traffic routing and protection |
These habits make Azure Load Balancer deployments cleaner and more production-ready; most issues trace back to a few repeated design misunderstandings.
- **Pick the right layer:** If you need path-based routing, host-based routing, TLS offload, or WAF, Application Gateway is usually the better fit.
- **Get probes right:** A bad probe can make healthy servers look unhealthy and silently block traffic from reaching them.
- **Check NSGs:** Even when the load balancer configuration is correct, missing NSG rules can stop traffic completely.
- **Know your rule types:** Inbound NAT targets one specific instance, while a load balancing rule distributes across multiple healthy instances.
- **Plan egress deliberately:** Private workloads need careful egress design; do not leave SNAT behavior as an afterthought.
- **Prefer Standard SKU:** For most modern deployments it is the better operational and architectural choice.
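To make the SNAT planning point concrete, here is a rough sizing sketch. The tier boundaries below follow Azure's documented *default* SNAT port allocation for a Standard Load Balancer when no explicit outbound rule is configured; treat the exact values as assumptions to verify against current Azure documentation before relying on them.

```python
# Default SNAT ports per instance by backend pool size (assumed from
# Azure's documented defaults; verify against current documentation).
DEFAULT_SNAT_TIERS = [
    (50, 1024), (100, 512), (200, 256), (400, 128), (800, 64), (1000, 32),
]
PORTS_PER_FRONTEND_IP = 64_000  # usable SNAT ports contributed by one public IP

def default_snat_ports(pool_size):
    """Ports each instance gets under the default (implicit) allocation."""
    for max_pool, ports in DEFAULT_SNAT_TIERS:
        if pool_size <= max_pool:
            return ports
    raise ValueError("pool larger than 1000 instances")

def explicit_snat_ports(pool_size, frontend_ips=1):
    """With an outbound rule you can divide the port budget yourself."""
    return (PORTS_PER_FRONTEND_IP * frontend_ips) // pool_size

print(default_snat_ports(30))   # 1024
print(default_snat_ports(150))  # 256
print(explicit_snat_ports(30))  # 2133: an explicit outbound rule can do better
```

The takeaway matches the advice above: a pool that grows past a tier boundary silently loses SNAT ports per instance, so explicit outbound rules (or more frontend IPs) are the deliberate way to size egress.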
Quick answers to the questions most people ask when learning Azure Load Balancer.
- **Is Azure Load Balancer Layer 4 or Layer 7?** It is a Layer 4 service that handles TCP and UDP traffic.
- **What is the difference between a Public and an Internal Load Balancer?** A Public Load Balancer uses a public frontend IP for internet-facing traffic, while an Internal Load Balancer uses a private frontend IP for internal network traffic.
- **What does a health probe do?** It checks backend status and ensures traffic is only sent to healthy instances.
- **How is an inbound NAT rule different from a load balancing rule?** An inbound NAT rule maps traffic to one specific backend instance, while a load balancing rule distributes traffic across a backend pool.
- **Which SKU should I choose?** For new deployments and production environments, Standard Load Balancer is the recommended choice.
- **Does Azure Load Balancer replace Application Gateway?** No. Azure Load Balancer handles Layer 4 traffic balancing, while Application Gateway is designed for Layer 7 web traffic routing and WAF scenarios.