Friday, June 23, 2023

Useful Kusto Query / KQL queries for Azure Firewall Troubleshooting

As an architect, I do not often get to do hands-on deployments of Azure services on projects, so when I do, I tend to spend a lot of time working with the service to understand the ins and outs of the product. One of my recent projects gave me the opportunity to deploy the Azure Firewall that I designed, and I noticed that there weren't many Kusto query examples available for troubleshooting inbound and outbound traffic. I wanted to post a link to my GitHub repo, where I have built (and continue to build upon) KQL queries for querying Azure Firewall logs to monitor traffic:

I tried to demonstrate customizations such as time zones, days ago, start and end times, and variables. These basic KQL queries helped me troubleshoot all of the outbound Teams traffic that was being blocked, as well as produce the weekly reporting I needed to deliver to the client. I hope this helps anyone who might be looking for example queries and can use these as a start.
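For a sense of what these look like, the following query pulls denied application rule traffic over a configurable window. It assumes the firewall's diagnostic logs are sent to a Log Analytics workspace using the legacy AzureDiagnostics table, and that msg_s follows the usual application rule format; adjust the table, category, time zone, and parsing for your environment:

```kusto
// Adjustable window: a relative look-back, or swap in explicit datetimes
let daysAgo = 7d;
let startTime = ago(daysAgo);
let endTime = now();
AzureDiagnostics
| where Category == "AzureFirewallApplicationRule"
| where TimeGenerated between (startTime .. endTime)
| where msg_s has "Deny"
// Pull the source and target out of the free-text message field
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort:int " to " TargetFqdn ":" TargetPort:int *
| project LocalTime = datetime_utc_to_local(TimeGenerated, "America/Toronto"), SourceIP, TargetFqdn, TargetPort, msg_s
| order by LocalTime desc
```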

Microsoft Teams audio calling fails with the error: “We ran into a problem – Try again in a few minutes” on Azure Virtual Desktop with Teams Media Optimization

One of the issues I recently encountered during an Azure Virtual Desktop deployment with Teams Media Optimization was that outbound calls from the virtual desktop would display a spinning wheel while continuously playing the dialing audio, until the call failed with:

We ran into a problem
Try again in a few minutes



I wasn't sure whether the cause was the order in which the software was installed or the Remote Desktop client app itself. After reinstalling all the components and still receiving this error, I remembered that the Teams profile cache can be the cause, so I navigated to %appdata%\Microsoft\Teams and deleted the files:


Dialing again confirmed this corrected the issue. It took a bit of time to resolve, so I hope this short blog post helps anyone who encounters this problem.
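For anyone who would rather script the cleanup, here is a minimal Python sketch of the same manual step (the path matches the folder above; the appdata parameter is just there so the path can be overridden, and Teams should be fully closed before running it):

```python
import os
import shutil

def clear_teams_cache(appdata=None):
    """Delete the Teams profile cache folder (%appdata%\\Microsoft\\Teams).

    Returns True if the folder existed and was removed, False otherwise.
    """
    appdata = appdata or os.environ.get("APPDATA", "")
    cache = os.path.join(appdata, "Microsoft", "Teams")
    if os.path.isdir(cache):
        shutil.rmtree(cache)
        return True
    return False
```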

Thursday, June 22, 2023

Azure Virtual Desktop Teams Media Optimization fails to display local client devices

I've configured Teams Media Optimization with Azure Virtual Desktop quite a few times in the past, as per the following Microsoft documentation:

Use Microsoft Teams on Azure Virtual Desktop

The configuration isn't difficult and I never had any issues until recently, when I repeated the same steps for an environment I was working on. After performing all the steps, I noticed that the settings in Teams would display either:

Audio Devices: Custom Setup
Speaker: None
Microphone: None

This means no devices are redirected or optimized:



Audio Devices: Custom Setup
Speaker: Remote Audio
Microphone: Remote Audio
Camera: Integrated Camera (Redirected)

This means audio and video device redirection was taking place, but not optimization.

Note that redirection works if these RDP settings are configured:

  • audiocapturemode:i:1 - enables audio capture from the local device and redirection to an audio application in the remote session

  • audiomode:i:0 - plays sound on the local computer

  • camerastoredirect:s:* - redirects cameras
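When configuring an Azure Virtual Desktop host pool, these settings are combined into a single semicolon-separated Custom RDP properties string, for example:

```
audiocapturemode:i:1;audiomode:i:0;camerastoredirect:s:*
```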


After going through all the steps multiple times without any luck, I recalled an issue I experienced long ago: if I had signed in to Teams on an Azure Virtual Desktop BEFORE configuring Microsoft Teams Media Optimization, the optimization would fail. This generally wasn't a problem for me because I always configure the optimization before rolling out the desktops, but in this instance I had not. I went into the folder %appdata%\Microsoft\Teams, deleted all the items, and lo and behold, that corrected the issue.


I haven't encountered this issue often, but it took up quite a bit of my time to troubleshoot, so I hope others with this issue will find this post and be able to resolve it more quickly.

The versions of the applications I used for this deployment are:

Microsoft Teams:

Remote Desktop WebRTC Redirector Service: 1.33.2302.07001

Microsoft Remote Desktop: 1.2.4337.0 (x64)

Microsoft Visual C++ 2015-2022 Redistributable (x64): 14.36.32532.0

Tuesday, June 13, 2023

Designing Azure Storage Account Regional Failover with Private Endpoints

I've had the opportunity to work on several projects over the past year designing disaster recovery from one Azure region to another. One of the most common topics that comes up is how to handle storage accounts that are accessed through private endpoints and have their public endpoints disabled:



The purpose of this blog post is to provide a walkthrough of possible methods to design regional failover with private endpoints.

Sample Environment

Take the following topology as an example:


In this topology, we have a storage account in the East US region that is configured with Read-access geo-redundant storage (RA-GRS) so all data written to it will automatically get written to the paired region in West US:


Since Read Access is configured, a secondary endpoint is available for read access on the replicated copy in the secondary region:


A private endpoint is provisioned in the East US region so the vm-east-us-prod virtual machine can privately access the storage account from its subnet through the private endpoint within the vnet-east-us VNet:


Although a secondary endpoint is available, it should not be mistaken for an endpoint that can be used for DR purposes: it provides read access to the replicated copy in West US via the public endpoint during normal operation.

Notice that a virtual machine is pre-deployed in the West US region to provide continued access to the storage account in the event that the East US region is unavailable. This is very common for most environments: a DR failover region is typically pre-staged with networks that will host resources to continue operations if the primary region goes down.

Scenario #1 – Shared Private DNS Zone for Primary and Secondary Regions

One common design is to share the Private DNS Zone between the VNets in the two regions. This configuration allows both VNets to use the same DNS zone for name resolution, so both will resolve the same private IP address configured for the private endpoint in the primary region, providing access to the storage account:


In the diagram above, the secondary region's virtual machine is placed in a VNet that is linked to the same Private DNS Zone:


It is important to note that the reason we are able to link the two VNets to the same Private DNS Zone is that Private DNS Zones are global resources, even though they are placed in a regional resource group:


This configuration means that name resolution in both regions will direct traffic to the private endpoint deployed in East US, and since the two regions have global VNet peering configured, West US traffic will traverse that peering to reach the East US region.


If the storage account becomes unavailable in East US, or it has been manually failed over to the West US region, traffic will continue to be directed to the private endpoint in East US and then sent over Private Link to the failed-over storage account in the West US region, which has now become LRS (locally redundant storage):



Such a design unfortunately would not provide the required access during a full East US regional failure, because the primary private endpoint becomes unavailable along with the region:


A common design is to have a DR runbook that performs the following in the event of a regional failure:

  1. Provision a new private endpoint in the West US region
  2. Update the Private DNS Zone’s record to direct traffic to the new private endpoint

This design requires manual steps to be executed but saves cost in the disaster recovery region. While a private endpoint costs $0.014 (CAD) per hour, which equates to around $10.22/month, larger environments can have many private endpoints, and charges for resources that are not actively used aren't well received by organizations. Environments leveraging Infrastructure as Code automation are great candidates for this design, as the resources and changes can be deployed with little manual labour. Furthermore, disaster recovery solutions are not always automatically invoked, so having to provision private endpoints during a catastrophic event is not uncommon. An example would be leveraging Azure Site Recovery to recover VMs, using its recovery plan capability to execute Azure Automation runbooks.
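The two runbook steps above could be templated as Azure CLI commands. This Python sketch only builds the command strings; all resource names here are hypothetical, and the exact CLI flags (e.g. --group-id) should be verified against current az documentation before use:

```python
def build_failover_commands(rg, storage_account, vnet, subnet, dns_zone, new_ip):
    """Render the two DR runbook steps as Azure CLI commands:
    1) provision a private endpoint in the DR region,
    2) point the Private DNS Zone A record at its new private IP."""
    pe_name = f"pe-{storage_account}-dr"
    provision = (
        f"az network private-endpoint create -g {rg} -n {pe_name} "
        f"--vnet-name {vnet} --subnet {subnet} "
        f"--private-connection-resource-id "
        f"$(az storage account show -g {rg} -n {storage_account} --query id -o tsv) "
        f"--group-id blob --connection-name {pe_name}-conn"
    )
    update_dns = (
        f"az network private-dns record-set a update -g {rg} -z {dns_zone} "
        f"-n {storage_account} --set aRecords[0].ipv4Address={new_ip}"
    )
    return [provision, update_dns]
```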

Scenario #2 – Separate Private DNS Zone for Primary and Secondary Regions

If there is a desire to pre-provision all resources, to either fully automate a DR event or reduce the amount of manual labour involved, it is possible to provision a private endpoint in the disaster recovery West US region that is linked to the storage account. The important design change here is that a second Private DNS Zone is created for the DR region and linked to its VNet, as shown in the diagram below:


Notice that the pre-provisioned private endpoint now allows the virtual machine in the West US DR region to access the storage account through Private Link rather than the global VNet peering. I won't go into the details, but I have had cross-region active/active deployments configured with such a design.

Here is how the configuration looks in the Azure portal:





With the above design, a regional loss will require no manual configuration to access the storage account failed over to West US:


In summary, this design removes the requirement to provision a private endpoint and update DNS during a disaster recovery. However, it incurs additional cost and requires maintaining multiple Private DNS Zones associated with the different VNets in each region. Additional consideration is also required when there is on-premises hybrid connectivity to the Azure regions and traffic originating outside of Azure needs to reach the private endpoint.

Hope this gives the reader a good idea of the designs available for providing private endpoint connectivity during a disaster recovery.

PowerShell script that uses the OneTimeSecret service to generate and return a URL for accessing a password

One of the frequent questions I have been asked after my post:

Using Microsoft Forms and Logic App to create an automated submissions and approval process for Azure AD User Creation

… was whether there is a more secure way to include the password of the newly created user in the email, rather than just pasting it into the confirmation message. The main reason I chose to include the password in plain text is that the password is temporary and the user is required to change it upon successfully logging on. Nevertheless, I've always preached that passwords should never be included in email, so I would like to provide an alternative that better protects the included password.

The method I would recommend is to use a service such as OneTimeSecret, which lets you generate a link to a page that reveals the password; the link can only be opened once and has an expiry. The following is a PowerShell script that can be used in an Automation Account with a webhook: it receives a password, uses OneTimeSecret to create a link, then returns that link.
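The script itself is PowerShell, but the flow is simple enough to sketch in Python against the OneTimeSecret v1 API. The endpoint, field names, and response shape below are taken from their public API documentation as I understand it; verify them before relying on this sketch:

```python
import base64
import json
import urllib.parse
import urllib.request

API_BASE = "https://onetimesecret.com"

def build_share_request(username, api_token, secret, ttl_seconds=86400):
    """Build (but do not send) the authenticated POST that creates a one-time secret."""
    body = urllib.parse.urlencode({"secret": secret, "ttl": ttl_seconds}).encode()
    req = urllib.request.Request(f"{API_BASE}/api/v1/share", data=body, method="POST")
    # OneTimeSecret uses HTTP Basic auth with the account email and API token
    token = base64.b64encode(f"{username}:{api_token}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

def share_url_from_response(response_json):
    """The API response contains a secret_key; the one-time link is derived from it."""
    secret_key = json.loads(response_json)["secret_key"]
    return f"{API_BASE}/secret/{secret_key}"
```

Sending the request with urllib.request.urlopen and returning the derived URL is all that remains for the webhook to do.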

The PowerShell script can be found at my following GitHub repo:


Saturday, June 10, 2023

Attempting to join a Windows desktop to an Active Directory Domain Services (AD DS) domain fails with: "The following error occurred attempting to join the domain contoso.local": The specified network name is no longer available.

One of the projects I've been working on was a small Azure Virtual Desktop deployment allowing resources outside of Canada to securely access a VDI in Azure's Canada Central region. To provide a "block all traffic and only allow whitelisted domains" solution, I opted to use the new Azure Firewall Basic SKU with application rules. Given that there wasn't any ingress traffic originating from the internet for published applications, and connectivity to the AVDs would be through Microsoft's managed gateway, I decided to place the Azure Firewall in the same VNet as the virtual desktops and servers. This doesn't conform to the usual hub-and-spoke topology; the main reason was to avoid VNet-to-VNet peering costs. For the security network design, I elected to send all traffic between the subnets within the same VNet through the firewall for visibility and logging, so the default free flow of traffic within a VNet is not allowed. The following is a diagram of the topology:


The traffic originating from the AVD subnet containing the virtual desktops to the server subnet containing the AD DS servers is inspected by the firewall. After placing the required route in the UDR associated with the AVD subnet and configuring the required ports from client to server in the network rules of the firewall policy:

  • UDP Port 88 for Kerberos authentication.
  • UDP and TCP Port 135 for client to domain controller operations and domain controller to domain controller operations.
  • TCP Port 139 and UDP 138 are used for File Replication Service between domain controllers.
  • UDP Port 389 for LDAP to handle regular queries from client computers to domain controllers.
  • TCP and UDP Port 445 for File Replication Service.
  • TCP and UDP Port 464 for Kerberos Password Change.
  • TCP Port 3268 and 3269 for Global Catalog from client to domain controller.
  • TCP and UDP Port 53 for DNS from domain controller to domain controller and client to the domain controller.

… then proceeding to deploy the desktops with AVD, it would fail to join the desktop to the domain with the error message:

VM has reported a failure when processing extension 'joindomain'. Error message: "Exception(s) occurred while joining Domain contoso.local

Trying to manually join the desktops to the domain will display the following message:

"The following error occurred attempting to join the domain contoso.local": The specified network name is no longer available.


Parsing through the Azure Firewall logs did not reveal any Deny activity, but I did notice that no return traffic was captured. It was then that I realized I had forgotten to associate the UDR that would force traffic from the server subnet to the VDI subnet through the firewall.


This meant that any traffic originating from the VDI subnet would be sent through the firewall:


… while any traffic originating from the server subnet to the VDI subnet would simply be sent subnet to subnet within the same VNet. I'm not completely sure why this was a problem, given that return traffic should have flowed back through the firewall and only new traffic from the domain controllers would not.

In any case, I went ahead and updated the server subnet to use the UDR that routes traffic through the firewall, and the domain join operation succeeded. The firewall logs also began displaying the domain communication traffic to the AVD subnet.

This probably would have resolved itself when I completed the configuration, but I hope this blog post helps anyone who encounters a similar issue.

PowerShell script for updating the domain of Azure AD accounts

One of the projects I've been involved in took over a year to reach a decision on the custom domain that would be used for user accounts and the services to be offered. This meant that all the accounts used the original domain for a year during development, and by the time the new domain was registered and put to use, there were already hundreds of accounts. Using the GUI wasn't practical given the number of accounts, so I wrote a PowerShell script to update them. The script can be found at my GitHub repo here:


Attempting to add a private endpoint to API Management service displays the message: "No available items" and "No supported sub-resources"


You attempt to configure a private endpoint for an API Management service but are unable to select any Target sub-resource in the Resource configuration:



The value must not be empty.

No supported sub-resources


No available items.



For this environment, the issue was that the APIM instance was deployed on the stv1 compute platform:


One of the prerequisites listed in the Microsoft documentation is:

The API Management instance must be hosted on the stv2 compute platform.


To correct this and keep the APIM in the Developer tier, we needed to deploy the APIM into a virtual network (VNet) and select a public IP address during the deployment process:



Once the APIM was placed into a VNet and upgraded to stv2, we then needed to remove the APIM from the virtual network by setting the configuration to None, since the private endpoint option is not available while the APIM is in a virtual network.

Wednesday, June 7, 2023

Automating the creation of Azure Calculator estimates with Selenium and Python (more than just VM resources)

As a follow-up to my previous post in April:

Automating the creation of Azure Calculator estimates with Selenium and Python

I was recently pulled into an opportunity whose lead was no longer available, and the Azure bill of materials provided to the team contained component types that were not limited to virtual machines. My colleague indicated that he could not find the estimate saved in his profile's Azure Calculator, which meant we had to recreate it. Since I had just created the Python script that uses Selenium to build a virtual-machine-only Azure estimate from an Excel spreadsheet containing an inventory, I went ahead and wrote another, similar script that creates an estimate by reading an Excel file and adding each row's:

  1. Product
  2. Custom Label
  3. Region

The estimate would still require a bit of work to complete, but this at least saved me some time.
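The inventory parsing at the heart of the script can be sketched as follows. This is simplified to CSV for illustration (the actual script in the repo reads Excel), and the column names are just the three fields listed above:

```python
import csv
import io

def read_inventory(csv_text):
    """Yield one dict per estimate line item: Product, Custom Label, Region.

    Each yielded dict drives one 'add product' pass in the Selenium automation.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        yield {
            "product": row["Product"].strip(),
            "label": row["Custom Label"].strip(),
            "region": row["Region"].strip(),
        }
```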

The following is a screenshot of what the inventory Excel spreadsheet would look like:


Here is the link to the Python script at my GitHub repo:

For more information on how to set up Python and Selenium, please refer to my earlier post linked at the beginning of this write-up.