
Wednesday, July 7, 2021

What are Proximity Placement Groups?

Proximity Placement Groups were welcomed by many organizations when Microsoft announced the preview (https://azure.microsoft.com/en-us/blog/introducing-proximity-placement-groups/) in July 2019 and the general availability (https://azure.microsoft.com/en-ca/blog/announcing-the-general-availability-of-proximity-placement-groups/) in December 2019. The concept isn’t complex in any way, but I wanted to write this post to demonstrate its use case for an OpenText Archive Center solution hosted on Azure that I was recently involved in. Before I begin, the following is the official documentation provided by Microsoft:

Proximity placement groups
https://docs.microsoft.com/en-us/azure/virtual-machines/co-location

The Scenario

One of the decisions we had to make at the beginning was how to deliver HA across Availability Zones in a region, but OpenText was not clear as to whether they supported clustering Archive Center across Availability Zones due to potential latency concerns. I do not believe Microsoft publishes specific latency metrics for each region’s zones, but the general guideline I use is that latency across zones can be 2 ms or less, as per the marketing material here:

https://azure.microsoft.com/en-ca/global-infrastructure/availability-zones/#faq

What is the latency perimeter for an Availability Zone?

We ensure that customer impact is minimal to none with a latency perimeter of less than two milliseconds between Availability Zones.

image

To make a long story short, what OpenText eventually provided as a requirement was a cluster of 4 nodes, where 2 servers need to be in one zone and the other 2 can be in another. The servers located in the same zone must have the lowest latency possible, preferably by being hosted in the same datacenter. The following is a diagram depicting the requirement:

image

Limitations of Availability Zones

With the above requirements in mind, simply deploying 2 nodes with the availability zone set to 1 and another 2 nodes with the availability zone set to 2 or 3 would not suffice, because of the following facts about Azure regions, zones and datacenters as Azure's footprint grows:

  1. Availability Zones can span multiple datacenters because each zone can contain more than one datacenter
  2. Scale sets can span multiple datacenters
  3. Availability Sets may, in the future, span multiple datacenters

Microsoft understands that organizations need a way to guarantee the lowest latency between VMs and therefore provides Proximity Placement Groups: a logical grouping used to guarantee that Azure compute resources are physically located close to each other. Proximity Placement Groups are also useful for achieving low latency between stand-alone VMs, availability sets, virtual machine scale sets, and multiple application tier virtual machines.

Availability Zones with Proximity Placement Groups

Leveraging Proximity Placement Groups (PPG) guarantees that the 2 OpenText Archive Center cluster nodes in the same Availability Zone are also placed physically close together to provide the lowest latency. The following diagram depicts this.

image

Note that the above diagram also includes two additional Document Pipeline servers, which will be grouped with each of the OpenText Archive Center servers.
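As an illustration of this layout (not taken from the actual project), the following is a minimal Azure CLI sketch using hypothetical resource group, VM and group names. Note that a single Proximity Placement Group cannot span Availability Zones, so one group is created per zone:

# Example names only; a PPG is pinned to a single datacenter, so use one group per zone
az group create --name rg-archive --location canadacentral

az ppg create --resource-group rg-archive --name ppg-zone1 --location canadacentral --type Standard
az ppg create --resource-group rg-archive --name ppg-zone2 --location canadacentral --type Standard

# Two Archive Center nodes in zone 1, kept together by ppg-zone1
az vm create --resource-group rg-archive --name ac-node1 --image Win2019Datacenter --size Standard_D4s_v3 --zone 1 --ppg ppg-zone1 --admin-username azureuser --admin-password <password>
az vm create --resource-group rg-archive --name ac-node2 --image Win2019Datacenter --size Standard_D4s_v3 --zone 1 --ppg ppg-zone1 --admin-username azureuser --admin-password <password>

# Repeat for the two nodes in zone 2 with --zone 2 --ppg ppg-zone2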

Limitations of Proximity Placement Groups

Proximity Placement Groups do have limitations, and these appear when VM SKUs that are considered exotic may not be offered in every datacenter. Examples of these VMs are the N-series with NVIDIA cards or the large-sized VMs for SAP. When mixing exotic VM SKUs, it is best to power on the most exotic VM first so the more common VMs will likely be available in the same datacenter. In the event that Azure is unable to power on a VM in the same datacenter as the previously powered-on VMs, it will fail with the error message:

Oversubscribed allocation request
If this happens, stop (deallocate) the VMs and try powering them on again in reverse order.

Another method of ensuring all VMs can be powered on is to use an ARM template to deploy all of the VMs in the Proximity Placement Group together, as Azure will then locate a datacenter that has all of the required VM SKUs available.
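As a rough Azure CLI sketch of both approaches (resource, VM and template file names are only examples):

# Power on the most exotic SKU first so the common SKUs follow it into the same datacenter
az vm start --resource-group rg-archive --name gpu-node1
az vm start --resource-group rg-archive --name ac-node1
az vm start --resource-group rg-archive --name ac-node2

# Or deploy every VM in the Proximity Placement Group as a single ARM template deployment
az deployment group create --resource-group rg-archive --template-file archive-center-ppg.json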

Measuring Latency Across Availability Zones

When we think about testing latency between servers, the quickest method is to use PING, and while we’ll get a millisecond latency metric in the results, it actually isn’t an accurate way to measure latency. The Microsoft-recommended tools for testing and measuring latency are described in the following article; unlike PING, which uses ICMP, they measure actual TCP and UDP delivery time:

Test VM network latency
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-test-latency

The following is a demonstration of using latte.exe to obtain statistics for two Azure VMs hosted in different Azure Availability Zones in the Canada Central region.

image

I will begin by using two low-end Standard B1s (1 vCPU, 1 GiB memory) virtual machines.

Set Up Receiver VM

Begin by logging onto the receiving VM and open the firewall for the latte.exe tool:

netsh advfirewall firewall add rule program=c:\temp\latte.exe name="Latte" protocol=any dir=in action=allow enable=yes profile=ANY

image

The Latte application should be shown in the Allowed apps and features list upon successfully executing the netsh command:

image

On the receiver, start latte.exe (run it from the CMD window, not from PowerShell):

latte -a <Receiver IP address>:<port> -i <iterations>
latte -a 10.0.0.5:5005 -i 65100
Protocol TCP
SendMethod Blocking
ReceiveMethod Blocking
SO_SNDBUF Default
SO_RCVBUF Default
MsgSize(byte) 4
Iterations 65100

The parameters are as follows:

  • Use -a to specify the IP address and port
  • Use the IP address of the receiving VM
  • Any available port number can be used (this example uses 5005)
  • Use -i to specify the iterations
  • Microsoft documentation indicates that around 65,000 iterations is long enough to return representative results
image

Set Up Sender VM

On the sender, start latte.exe (run it from the CMD window, not from PowerShell):

latte -c -a <Receiver IP address>:<port> -i <iterations>

The resulting command is the same as on the receiver, except with the addition of -c to indicate that this is the client, or sender:

latte -c -a 10.0.0.5:5005 -i 65100
Protocol TCP
SendMethod Blocking
ReceiveMethod Blocking
SO_SNDBUF Default
SO_RCVBUF Default
MsgSize(byte) 4
Iterations 65100

image

Results

Wait for a minute or so for the results to be displayed:

Sender:

C:\Temp>latte -c -a 10.0.0.5:5005 -i 65100
Protocol TCP
SendMethod Blocking
ReceiveMethod Blocking
SO_SNDBUF Default
SO_RCVBUF Default
MsgSize(byte) 4
Iterations 65100
Latency(usec) 2026.40
CPU(%) 5.2
CtxSwitch/sec 1382 (2.80/iteration)
SysCall/sec 3678 (7.45/iteration)
Interrupt/sec 1100 (2.23/iteration)
C:\Temp>

image

Receiver:

C:\Temp>latte -a 10.0.0.5:5005 -i 65100
Protocol TCP
SendMethod Blocking
ReceiveMethod Blocking
SO_SNDBUF Default
SO_RCVBUF Default
MsgSize(byte) 4
Iterations 65100
Latency(usec) 2026.51
CPU(%) 2.2
CtxSwitch/sec 1140 (2.31/iteration)
SysCall/sec 1524 (3.09/iteration)
Interrupt/sec 1068 (2.17/iteration)
C:\Temp>

image

Sender and receiver metrics side by side:

Note that the latency is labeled as Latency(usec), which is in microseconds; the result of 2130.93 usec works out to about 2 ms.

image

Next, I will change the VM size from the low-end Standard B1s (1 vCPU, 1 GiB memory) to the D-series Standard D2s v3 (2 vCPUs, 8 GiB memory).

Notice that the latency, now 1546 usec, is better with the D series:

image

However, changing the VM size to the larger Standard D4s v3 (4 vCPUs, 16 GiB memory) actually yields slower results at 1840.29 usec. This is likely due to fluctuations in connectivity speed between the datacenters.

image

Accelerated Networking

Accelerated Networking is one of the recommendations provided in the Proximity Placement Group documentation. Not all VM sizes are capable of accelerated networking, but the Standard D4s v3 (4 vCPUs, 16 GiB memory) supports it, so the following is a test with it enabled.

image

I have validated that my operating system is on the list of supported operating systems (the portal displays the warning: “If connectivity to your VM is disrupted due to incompatible OS, please disable accelerated networking here and connection will resume.”).

image

image
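For reference, accelerated networking can also be toggled with the Azure CLI instead of the portal; a minimal sketch, assuming hypothetical resource group, VM and NIC names:

# The VM must be deallocated before the NIC setting can be changed
az vm deallocate --resource-group rg-latency-test --name d4s-vm1

# Enable accelerated networking on the VM's NIC (example NIC name)
az network nic update --resource-group rg-latency-test --name d4s-vm1-nic --accelerated-networking true

az vm start --resource-group rg-latency-test --name d4s-vm1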

Note that the latency has decreased to 1364.43 usec after enabling accelerated networking:

image

Latency within the same Availability Zone

The following are tests with two Standard D4s v3 (4 vCPUs, 16 GiB memory) VMs without accelerated networking in the same availability zone.

Latency is 308.55 usec.

image

The following are tests with two Standard D4s v3 (4 vCPUs, 16 GiB memory) VMs with accelerated networking.

The latency significantly improves to between 55 and 61.09 usec.

image

Same Availability Zone with Proximity Placement Group

The following are tests with two Standard D4s v3 (4 vCPUs, 16 GiB memory) VMs with accelerated networking in the same availability zone and with a Proximity Placement Group configured.

Create the Proximity Placement Group:

image

image

image
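The same Proximity Placement Group can also be created with the Azure CLI; a minimal sketch with example names:

az ppg create --resource-group rg-latency-test --name ppg-latency-test --location canadacentral --type Standard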

Add the VMs to the Proximity Placement Group:

image

image
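Existing VMs can only be moved into a Proximity Placement Group while they are deallocated; a rough CLI equivalent of the portal steps above, again with example names:

# Repeat for each VM that should join the group
az vm deallocate --resource-group rg-latency-test --name d4s-vm1
az vm update --resource-group rg-latency-test --name d4s-vm1 --ppg ppg-latency-test
az vm start --resource-group rg-latency-test --name d4s-vm1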

The latency results are more or less the same, although the 3 tests are slightly lower at 54 to 57 usec:

image

Summary

The fact that the last two tests, two VMs in the same availability zone without PPG and two VMs in the same availability zone with PPG, produced more or less the same results should not discourage you from using PPG, because in the first case the VMs simply happened to be powered on in the same datacenter. Using a Proximity Placement Group guarantees that this is the case every time the VMs are powered off and back on.

The sample size of the tests I performed is too small to claim that the results are conclusive, but I hope they give a general idea of the latency improvements available with Accelerated Networking and Proximity Placement Groups.

If you would like to learn more about real world applications and Proximity Placement Groups, the following SAP on Azure article is a good read:

Azure proximity placement groups for optimal network latency with SAP applications

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md
