Whilst setting up a number of VMs in Azure, I logged in one afternoon to find I had no outbound internet access. DNS lookups still worked, but something was blocking all other outbound traffic.
The reason for no internet access? I was using an Azure Standard Load Balancer! (And hadn't fully configured it…)
A quick overview for those who want to know: I was using an Azure Standard Load Balancer (the Standard SKU being the important part here) so I could use inbound NAT rules on a single public IP address. Each rule forwarded traffic to a different internal VM depending on the inbound port.
- Azure Standard Load Balancer
- Backend Pool: 2 Windows VMs
- Health Probes: TCP port 3389 (I didn’t actually need a health probe, since inbound NAT rules forward traffic directly rather than balancing it)
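The setup above can be sketched with the Azure CLI. The resource group, load balancer, and rule names here are placeholders of my own, not the actual names from my environment:

```shell
# Sketch: an inbound NAT rule forwarding a high frontend port to RDP on one VM.
# "myRG", "myStdLB", "myFrontendIP" are hypothetical names for illustration.
az network lb inbound-nat-rule create \
  --resource-group myRG \
  --lb-name myStdLB \
  --name rdp-vm1 \
  --frontend-ip-name myFrontendIP \
  --protocol Tcp \
  --frontend-port 50001 \
  --backend-port 3389
```

A second rule (e.g. frontend port 50002 to the second VM's port 3389) completes the picture: one public IP, two VMs, distinguished by inbound port.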
It turned out that I had to add load balancing rules (I used ports 80 and 443) before outbound traffic was allowed. The Standard SKU is "secure by default": VMs in its backend pool get no outbound SNAT until a load balancing rule or outbound rule is configured, whereas inbound NAT rules alone provide nothing outbound. Since I wasn’t actually balancing traffic on these ports, it never even occurred to me that I needed to set this up! This only applies to the Standard Load Balancer, not the Basic Load Balancer offering, which provides default outbound access.
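For anyone hitting the same wall, the cleaner fix on a Standard Load Balancer is an explicit outbound rule rather than a load balancing rule you don't really need. A hedged sketch, reusing the same placeholder names as above:

```shell
# Sketch: grant the backend pool outbound SNAT via the frontend IP.
# "myRG", "myStdLB", "myFrontendIP", "myBackendPool" are hypothetical names.
az network lb outbound-rule create \
  --resource-group myRG \
  --lb-name myStdLB \
  --name outbound-all \
  --protocol All \
  --frontend-ip-configs myFrontendIP \
  --address-pool myBackendPool
```

This makes the outbound intent explicit in the configuration instead of being a side effect of a load balancing rule, which is exactly the trap I fell into.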
To ensure the VMs remained secure, the NSG on the subnet was set to deny all inbound traffic (including on ports 80 and 443) except over my inbound NAT rule ports, and then only from a single source IP address. Don’t expose your servers to the internet unnecessarily!
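That NSG arrangement can be sketched as two rules: a high-priority allow from one trusted address, and an explicit catch-all deny beneath it. The NSG name and the source address (203.0.113.10 is a documentation placeholder) are assumptions, not my real values:

```shell
# Sketch: allow RDP NAT traffic from a single trusted IP only.
# "myRG", "mySubnetNSG", and 203.0.113.10 are hypothetical placeholders.
az network nsg rule create \
  --resource-group myRG \
  --nsg-name mySubnetNSG \
  --name allow-rdp-from-trusted-ip \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.10 \
  --destination-port-ranges 3389

# Explicit deny for everything else inbound, below the allow rule.
az network nsg rule create \
  --resource-group myRG \
  --nsg-name mySubnetNSG \
  --name deny-all-inbound \
  --priority 4000 \
  --direction Inbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes '*' \
  --destination-port-ranges '*'
```

NSGs already end with a default DenyAllInBound rule, so the explicit deny is belt-and-braces, but it makes the intent visible to anyone reading the rule list.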