Load Balancing and Traffic Management in Azure

Have you ever wondered how organizations with customers across the globe keep their websites and applications responding quickly to user requests, with minimal latency, no matter which geographical region the requests come from? This article shows how you can use Azure to provide high availability for your application and route incoming requests from users in different geographical regions to their nearby datacenters. To achieve this, we will use the set of Azure services shown below.

  1. Virtual Machines: We will use these to host the application or website.
  2. Load Balancer: It keeps your application available even during a software-level or hardware-level failure in the datacenter.
  3. Traffic Manager: It can route traffic in different ways using routing methods (algorithms) such as,
    • Priority-based Routing
    • Weighted Routing
    • Performance-based Routing
    • Geographic Routing
    • Multivalue Routing
    • Subnet-based Routing

Each of the above routing methods works differently, which lets us use Traffic Manager in different ways. In the following demo, we will make use of Performance-based Routing.
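To get an intuition for what Performance-based Routing does before we build it, here is a minimal Python sketch of the idea: given latency measurements from the user's location to each endpoint, pick the endpoint with the lowest latency. This is only an illustration of the concept, not how Traffic Manager is implemented (it relies on its own Internet latency tables), and the endpoint names and numbers below are made up.

  # Conceptual sketch of performance-based routing: send the user to the
  # endpoint with the lowest measured latency. The hostnames and latency
  # values below are hypothetical.
  measured_latency_ms = {
      "southindia-lb-demo.southindia.cloudapp.azure.com": 32,
      "eastus-lb-demo.eastus.cloudapp.azure.com": 210,
  }

  def pick_performance_endpoint(latencies):
      """Return the endpoint with the lowest latency for this user."""
      return min(latencies, key=latencies.get)

  print(pick_performance_endpoint(measured_latency_ms))
  # A user in India would be routed to the South India endpoint.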

 

Demo Scenario

Imagine that you are building a setup in Azure that hosts a website on virtual machines in two different datacenters so that your customers get low-latency access to it. Each datacenter has two or more virtual machines hosting redundant copies of the same website to survive software-level or hardware-level downtime, and a Load Balancer is configured in front of these machines to share the incoming user load. You want the website running in the East US and South India datacenters of Azure. To reduce latency for your customers, you use Azure Traffic Manager with the Performance-based Routing method, which routes each user request to the datacenter nearest to the user's geographical location. Take a look at the architecture shown below to understand the solution we are going to build; we will deploy the solution based on this architecture.

 

Demo Scenario Architecture

 

Creating Virtual Machines in the South India Datacenter

We have to create virtual machines to host the website, and they must be deployed in a highly available configuration. To create a virtual machine with high availability,

  1. Choose Create a resource in the upper left-hand corner of the Azure portal.
  2. In the list of Azure Marketplace resources, search for and select Windows Server 2016 Datacenter, then choose Create.
  3. In the Basics tab, under Project details, make sure the correct subscription is selected and then choose the resource group that you created earlier.
  4. Under Instance details, type VM1 for the Virtual machine name and choose South India for your Location. For Availability options select Availability Set and click on Create new.

Give a valid name for the availability set and leave the number of fault and update domains as default and click OK. You can change them if you have different requirements.

 

  • Under the Administrator account, provide a username, such as kishore and a password.
  • Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP from the drop-down.
  • Leave the remaining defaults and then select the Review + create button at the bottom of the page.

 

After validation succeeds, click on Create.

 

Click on Go to resource to view the virtual machine.

 

The VM can now be accessed using its public IP address, and you can install the IIS web server on it to host your application. Refer to the following guide if you would like to set up the web server in your virtual machine: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal#install-web-server
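Once IIS is serving a page, a quick way to confirm it from your own machine is to request the VM's public IP over HTTP. The sketch below uses only the Python standard library; the IP address is a placeholder that you should replace with your VM's public IP.

  # Quick check that the web server on the new VM answers on port 80.
  import urllib.request

  vm_public_ip = "20.40.0.4"  # placeholder: use your VM's public IP

  with urllib.request.urlopen(f"http://{vm_public_ip}/", timeout=10) as resp:
      print(resp.status)      # 200 if IIS is serving the page
      print(resp.read(200))   # first bytes of the returned HTML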

 

Repeat the same process to create another VM (VM2) in South India in the same availability set, with the slight changes described in the steps below.

For availability options, select Availability set and choose the availability set that you created earlier.

 

  • Under the Administrator account, provide a username, such as kishore and a password.
  • Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP from the drop-down.
  • Leave the remaining defaults and then select the Review + create button at the bottom of the page.

 

Click the Create button.

 

You can connect to this VM in the same way to host the site.

 

Now we have two VMs created in an availability set. Let's configure a load balancer for these VMs to manage the incoming load.

 

Creating Public Load Balancer

  1. On the upper-left side of the portal, select Create a resource > Networking > Load Balancer.
  2. In the Create load balancer pane, type or select these values:
    • Name: Type SouthIndianLB.
    • Type: Select Public.
    • SKU: Select Basic.
    • Public IP address: Select Create new.
      • Public IP Address field: Type LBIPSouthIndia.
      • Configure Public IP address > Assignment: Select Static.
    • Resource group: Select the previously created resource group and select OK.
  3. Select Create.

 

Creating a Backend Address Pool

To distribute traffic to the VMs, the load balancer uses a back-end address pool. The back-end address pool contains the IP addresses of the virtual network interfaces (NICs) that are connected to the load balancer.

To create a back-end address pool that includes VM1 and VM2:

  1. Open the load balancer that you have created.
  2. Under Settings, select Backend pools, and then select Add.

 

  • On the Add a backend pool page, type or select the following values:
    • Name: Type BackEndPool.
    • Associated to: Drop down and select Availability set.
    • Availability set: Select AVSetSouthIndia.
  • Select Add a target network IP configuration.
    • Add each virtual machine (VM1 and VM2) that you created to the back-end pool.
    • After you add each machine, drop down and select its Network IP configuration.
  • Select OK.

 

On the Backend pools page, expand BackEndPool and make sure both VM1 and VM2 are listed.

Creating a Health Probe

To create a health probe to monitor the health of the VMs:

  • Under Settings of load balancer, select Health probes, and then select Add.

  • On the Add a health probe page, type or select the following values:
    • Name: Type HealthProbe.
    • Protocol: Drop down and select HTTP.
    • Port: Type 80.
    • Path: Accept / for the default URI. You can replace this value with any other URI.
    • Interval: Type 5. Interval is the number of seconds between probe attempts.
    • Unhealthy threshold: Type 2. This value is the number of consecutive probe failures that occur before a VM is considered unhealthy (see the sketch after this list).
  • Select OK.
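To make the Interval and Unhealthy threshold settings concrete, here is a rough Python simulation of what an HTTP health probe does: request the probe path every few seconds and mark the instance unhealthy after two consecutive failures. This is only an approximation of the probe's behaviour, and the backend address is a placeholder.

  # Rough simulation of an HTTP health probe (not the real implementation).
  import time
  import urllib.error
  import urllib.request

  backend = "http://10.0.0.4/"   # placeholder backend instance address
  interval_seconds = 5           # the probe's Interval setting
  unhealthy_threshold = 2        # consecutive failures before "unhealthy"

  failures = 0
  while True:
      try:
          with urllib.request.urlopen(backend, timeout=4) as resp:
              healthy = resp.status == 200
      except (urllib.error.URLError, OSError):
          healthy = False

      failures = 0 if healthy else failures + 1
      if failures >= unhealthy_threshold:
          print("Marked unhealthy: the load balancer stops sending traffic here.")
          break
      time.sleep(interval_seconds)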

Creating a NAT Rule

To create NAT rule:

  • Under Settings of Load Balancer, select Inbound NAT Rules, and then select Add.

Give a name for the rule and select the load balancer's front-end IP. Select TCP as the protocol. For Port, enter 3441, select VM1 as the Target virtual machine, and then select its Network IP configuration. Choose Custom for port mapping and enter 3389 as the target port.

 

You can now see the rule in the list. Click the Add button again to add a rule for VM2.

 

Give a name for the rule and select the load balancer's front-end IP. Select TCP as the protocol. For Port, enter 3442, select VM2 as the Target virtual machine, and then select its Network IP configuration. Choose Custom for port mapping and enter 3389 as the target port.

 

These two rules let you reach VM1 and VM2 individually over RDP through the load balancer's public IP: incoming requests on ports 3441 and 3442 are network address translated to port 3389 on VM1 and VM2 respectively. A quick way to check them is shown below.
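As a quick sanity check of the two NAT rules, you can test from your own machine whether the load balancer's public IP accepts connections on ports 3441 and 3442 (which are forwarded to RDP on VM1 and VM2). The sketch below uses the Python standard library; the IP address is a placeholder for the LBIPSouthIndia address.

  # Check that the inbound NAT rules forward ports 3441/3442 to RDP (3389).
  import socket

  lb_public_ip = "20.40.0.10"    # placeholder: use the LBIPSouthIndia address

  for port in (3441, 3442):
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.settimeout(5)
          result = s.connect_ex((lb_public_ip, port))
          print(f"port {port}: {'reachable' if result == 0 else 'not reachable'}")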

Creating a load balancer rule

A load balancer rule defines how traffic is distributed to the VMs. The rule defines the front-end IP configuration for incoming traffic, the back-end IP pool to receive the traffic, and the required source and destination ports.

The load balancer rule named LBRule listens on port 80 on the front end LoadBalancerFrontEnd. The rule sends network traffic to the back-end address pool BackEndPool, also on port 80.

To create the load balancer rule:

  • Select All resources on the left menu, and then select SouthIndianLB from the resource list.
  • Under Settings, select Load balancing rules, and then select Add.

  • On the Add load balancing rule page, type or select the following values:
    • Name: Type LBRule.
    • Frontend IP address: Select LBIPSouthIndia.
    • Protocol: Select TCP.
    • Port: Type 80.
    • Backend port: Type 80.
    • Backend pool: Select BackEndPool.
    • Health probe: Select HealthProbe.
  • Select OK.

 

You can now see the rule that was added.

 

You are now all set with a load balancer configured for VM1 and VM2 in the South India datacenter, and incoming user requests will be shared between them. An optional way to observe this is shown below.
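If you label the page on each VM with the machine name, you can roughly observe the load sharing by sending a handful of requests to the load balancer's front end and counting which VM answered. The sketch below assumes such labels exist and uses a placeholder IP; because the load balancer hashes connections, a small sample may not split exactly 50/50.

  # Send a few requests to the load balancer and tally which VM responded.
  import urllib.request
  from collections import Counter

  lb_public_ip = "20.40.0.10"    # placeholder: use the LBIPSouthIndia address
  seen = Counter()

  for _ in range(10):
      with urllib.request.urlopen(f"http://{lb_public_ip}/", timeout=10) as resp:
          body = resp.read().decode(errors="ignore")
      seen["VM1" if "VM1" in body else "VM2" if "VM2" in body else "unknown"] += 1

  print(seen)   # e.g. Counter({'VM1': 6, 'VM2': 4})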

 

Creating Virtual Machines in East US Datacenter

We have to create virtual machines to host the website, and they must be deployed in a highly available configuration. To create a virtual machine with high availability,

  • Choose Create a resource in the upper left-hand corner of the Azure portal.
  • In the list of Azure Marketplace resources, search for and select Windows Server 2016 Datacenter, then choose Create.
  • In the Basics tab, under Project details, make sure the correct subscription is selected and then choose the resource group that you created earlier.
  • Under Instance details, type VM3 for the Virtual machine name and choose East US for your Location. For Availability options select Availability Set and click on Create new.

 

Give a valid name for the availability set and leave the number of fault and update domains as default and click OK. You can change them if you have different requirements.

 

  • Under Administrator account, provide a username, such as kishore and a password.
  • Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP from the drop-down.
  • Leave the remaining defaults and then select the Review + create button at the bottom of the page.

 

  • After validation succeeds, click on Create.

 

The VM can be accessed by using its public IP address. You can repeat the same steps that you followed earlier if you wish to host a web server.

 

Repeat the same process to create another VM (VM4) in East US in the same availability set, with the slight changes described in the steps below.

For availability options, select Availability set and choose the availability set that you created earlier.

  • Under the Administrator account, provide a username, such as kishore and a password.
  • Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP from the drop-down.
  • Leave the remaining defaults and then select the Review + create button at the bottom of the page.

 

Click the Create button.

 

You can connect to this VM in the same way to host the site.

 

Now we have two VMs created in an availability set. Let's configure a load balancer for these VMs to manage the incoming load.

Creating Public Load Balancer

  • On the upper-left side of the portal, select Create a resource > Networking > Load Balancer.
  • In the Create load balancer pane, type or select these values:
    • Name: Type LoadBalancerEastUS.
    • Type: Select Public.
    • SKU: Select Basic.
    • Public IP address: Select Create new.
      • Public IP Address field: Type LBIPEastUS.
      • Configure Public IP address > Assignment: Select Dynamic.
    • Resource group: Select the previously created resource group and select OK.
  • Select Create.

Creating a Backend Address Pool

To create a back-end address pool that includes VM3 and VM4:

  • Open the load balancer that you have created.
  • Under Settings, select Backend pools, and then select Add.

 

  • On the Add a backend pool page, type or select the following values:
    • Name: Type bckpool.
    • Associated to: Drop down and select Availability set.
    • Availability set: Select AVSet_EastUS.
  • Select Add a target network IP configuration.
    • Add each virtual machine (VM3 and VM4) that you created to the back-end pool.
    • After you add each machine, drop down and select its Network IP configuration.
  • Select OK.

 

On the Backend pools page, expand bckpool and make sure both VM3 and VM4 are listed.

Creating a Health Probe

To create a health probe to monitor the health of the VMs:

  • Under Settings of load balancer, select Health probes, and then select Add.

  • On the Add a health probe page, type or select the following values:
    • Name: Type Probe.
    • Protocol: Drop down and select HTTP.
    • Port: Type 80.
    • Path: Accept / for the default URI. You can replace this value with any other URI.
    • Interval: Type 5. Interval is the number of seconds between probe attempts.
    • Unhealthy threshold: Type 2. This value is the number of consecutive probe failures that occur before a VM is considered unhealthy.
  • Select OK.

Creating a NAT Rule

To create NAT rule:

  1. Under Settings of Load Balancer, select Inbound NAT Rules, and then select Add.

Give a name for the rule and select the load balancer's front-end IP. Select TCP as the protocol. For Port, enter 3441, select VM3 as the Target virtual machine, and then select its Network IP configuration. Choose Custom for port mapping and enter 3389 as the target port.

 

You can now see the rule in the list. Click the Add button again to add a rule for VM4.

 

Give a name for the rule and select the load balancer's front-end IP. Select TCP as the protocol. For Port, enter 3442, select VM4 as the Target virtual machine, and then select its Network IP configuration. Choose Custom for port mapping and enter 3389 as the target port.

 

These two rules let you reach VM3 and VM4 individually over RDP through the load balancer's public IP: incoming requests on ports 3441 and 3442 are network address translated to port 3389 on VM3 and VM4 respectively.

Creating a load balancer rule

To create the load balancer rule:

  • Select All resources on the left menu, and then select LoadBalancerEastUS from the resource list.
  • Under Settings, select Load balancing rules, and then select Add.

  • On the Add load balancing rule page, type or select the following values:
    • Name: Type LBRule.
    • Frontend IP address: Select LBIPEastUS.
    • Protocol: Select TCP.
    • Port: Type 80.
    • Backend port: Type 80.
    • Backend pool: Select bckpool.
    • Health probe: Select Probe.
  • Select OK.

 

You can now see the rule that was added.

You are now all set with a load balancer configured for VM3 and VM4 in the East US datacenter, and incoming user requests will be shared between VM3 and VM4.

Creating a Traffic Manager Profile for Routing User Requests

Let us now create a Traffic Manager profile that directs user traffic based on endpoint performance.

  • On the upper-left side of the screen, select Create a resource > Networking > Traffic Manager profile.
  • In the Create Traffic Manager profile pane, enter or select these settings:
    • Name: Enter a unique name for your Traffic Manager profile.
    • Routing method: Select Performance.
    • Subscription: Select the subscription you want the Traffic Manager profile applied to.
    • Resource group: Select the resource group that you have been using from the beginning.
    • Location: This setting refers to the location of the resource group. It has no effect on the Traffic Manager profile, which is deployed globally.
  • Select Create.

 

Now, let us configure Traffic Manager for routing user traffic based on performance.

Adding Traffic Manager endpoints

Before adding your load balancers as endpoints so that Traffic Manager can route requests using the Performance method, you must assign a DNS name to the public IP address of each load balancer. This is because Traffic Manager works on DNS-based routing.

  • To create a DNS name for the public IP of the load balancer in South India, use the filters to find the IP address and click on it.

 

  • On this public IP address, go to the Configuration option to add a DNS name.

 

  • Under the DNS name label, give a unique DNS name and click on Save.

 

  • You can now access the load balancer using its DNS name rather than its public IP.

 

  • Now repeat the same task for the IP address of the load balancer in East US.

 

  • Under the DNS name label, give a unique DNS name and click on Save.

 

  • You can now access the load balancer using its DNS name rather than its public IP.

 

  • Now we have met the requirements for creating endpoints in Traffic Manager. In the Traffic Manager profile that you created, go to Endpoints and click on Add.

Enter, or select, these settings:

  • Type: Select Azure endpoint.
  • Name: Enter endpoint1.
  • Target resource type: Select Public IP address.
  • Target resource: Select Choose an IP address > LBIPEastUS.

 

 

  • Next, add the load balancer in South India as endpoint2. To do so, click on the Add button again.

 

Enter, or select, these settings:

  • Type: Select Azure endpoint.
  • Name: Enter endpoint2.
  • Target resource type: Select Public IP address.
  • Target resource: Select Choose an IP address > LBIPSouthIndia.

 

You have now configured the endpoints, and Traffic Manager will route each user request to the datacenter nearest to the location it comes from.

 

You can now use the DNS name (URL) of the Traffic Manager profile, shown on its Overview page, to access the website.

 

Now, we can use the Traffic Manager URL to reach the website. When a user opens the site, the request is routed to the datacenter nearest to their geographical location, and once the request enters that datacenter it is load balanced across the VMs, since the Load Balancer is configured at the datacenter level. You can check the DNS-based routing with the small sketch below.
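To see the DNS-based routing for yourself, resolve the Traffic Manager name from your machine: the answer should be the public IP of the load balancer closest to you, and it changes when resolved from another region. The names below are placeholders for the DNS labels and the profile name you created.

  # Resolve the Traffic Manager name and the two load balancer DNS labels.
  import socket

  names = [
      "mytrafficdemo.trafficmanager.net",                   # hypothetical profile name
      "southindia-lb-demo.southindia.cloudapp.azure.com",   # hypothetical DNS label
      "eastus-lb-demo.eastus.cloudapp.azure.com",           # hypothetical DNS label
  ]

  for name in names:
      try:
          print(name, "->", socket.gethostbyname(name))
      except socket.gaierror:
          print(name, "-> not resolvable (check the DNS name label)")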

Hope you enjoyed implementing a hands-on demo on Azure for load balancing and traffic management. Feel free to write your comments and reach out with queries via my social media handles. Thank you!