
Saturday, June 10, 2023

Attempting to join a Windows desktop to an Active Directory Domain Services (AD DS) domain fails with: "The following error occurred attempting to join the domain contoso.local": The specified network name is no longer available.

One of the projects I’ve been working on was a small Azure Virtual Desktop deployment that lets resources outside of Canada securely access a VDI in Azure’s Canada Central region. To provide a “block all traffic and only allow whitelisted domains” solution, I opted to use the new Azure Firewall Basic SKU with Application Rules. Given that there wasn’t any ingress traffic originating from the internet for published applications, and connectivity to the AVDs was going to be through Microsoft’s managed gateway, I decided to place the Azure Firewall in the same VNet as the virtual desktops and servers. This doesn’t conform to the usual hub and spoke topology, and the main reason for this was to avoid VNet-to-VNet peering costs between the subnets. For the security network design, I elected to send all traffic between the subnets within the same VNet through the firewall for visibility and logging, so the default behavior of traffic flowing freely within the same VNet is not allowed. The following is a diagram of the topology:

image

The traffic originating from the AVD subnet containing the virtual desktops to the server subnet containing the AD DS servers is protected by the firewall. After placing the required route in the UDR associated with the AVD subnet and configuring the required client-to-server firewall ports in the Network rules of the firewall policy:

  • UDP Port 88 for Kerberos authentication.
  • UDP and TCP Port 135 for client to domain controller and domain controller to domain controller operations.
  • TCP Port 139 and UDP 138 for File Replication Service between domain controllers.
  • UDP Port 389 for LDAP to handle regular queries from client computers to the domain controllers.
  • TCP and UDP Port 445 for File Replication Service.
  • TCP and UDP Port 464 for Kerberos password change.
  • TCP Ports 3268 and 3269 for Global Catalog from client to domain controller.
  • TCP and UDP Port 53 for DNS from domain controller to domain controller and from client to domain controller.
image
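For reference, the Network rules above could also be scripted with the Az.Network PowerShell module. The following is a minimal sketch, not the exact configuration used; the policy name, resource group, subnet ranges, and rule names are assumptions for illustration, and only a subset of the ports is shown:

# Assumed subnet ranges for the AVD and server subnets
$avdSubnet = "10.0.1.0/24"
$serverSubnet = "10.0.2.0/24"

# Build one network rule per service (trimmed to a few examples)
$rules = @(
New-AzFirewallPolicyNetworkRule -Name "Kerberos" -Protocol "TCP","UDP" -SourceAddress $avdSubnet -DestinationAddress $serverSubnet -DestinationPort "88"
New-AzFirewallPolicyNetworkRule -Name "RPC" -Protocol "TCP","UDP" -SourceAddress $avdSubnet -DestinationAddress $serverSubnet -DestinationPort "135"
New-AzFirewallPolicyNetworkRule -Name "LDAP" -Protocol "TCP","UDP" -SourceAddress $avdSubnet -DestinationAddress $serverSubnet -DestinationPort "389"
New-AzFirewallPolicyNetworkRule -Name "SMB" -Protocol "TCP","UDP" -SourceAddress $avdSubnet -DestinationAddress $serverSubnet -DestinationPort "445"
New-AzFirewallPolicyNetworkRule -Name "DNS" -Protocol "TCP","UDP" -SourceAddress $avdSubnet -DestinationAddress $serverSubnet -DestinationPort "53"
)

# Wrap the rules in an Allow collection and attach it to the firewall policy
$collection = New-AzFirewallPolicyFilterRuleCollection -Name "Allow-AVD-to-ADDS" -Priority 200 -Rule $rules -ActionType Allow
New-AzFirewallPolicyRuleCollectionGroup -Name "rcg-adds" -Priority 200 -RuleCollection $collection -FirewallPolicyName "afwp-avd" -ResourceGroupName "rg-avd"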

After proceeding to deploy the desktops with AVD, the domain join would fail with the error message:

VM has reported a failure when processing extension 'joindomain'. Error message: "Exception(s) occurred while joining Domain contoso.local

Trying to manually join the desktops to the domain will display the following message:

"The following error occurred attempting to join the domain contoso.local": The specified network name is no longer available.

image

Parsing through the logs of the Azure Firewall did not reveal any Deny activity but I did notice that there wasn’t any return traffic captured. It was then that I found I had forgotten to associate the UDR that would force traffic from the server subnet to the VDI subnet through the firewall.

image

This meant that any traffic originating from the VDI subnet would be sent through the firewall:

image

… while any traffic originating from the server subnet to the VDI subnet would simply be sent directly subnet-to-subnet within the same VNet. I’m not completely sure why this would be a problem given that return traffic should have returned through the firewall and only new traffic from the domain controllers would not, but the most likely explanation is asymmetric routing: the firewall saw the outbound packets yet never saw the replies, so its stateful inspection could not match the return traffic to an established session.

In any case, I went ahead and updated the server subnet to use a UDR that routes the traffic through the firewall, and the domain join operation succeeded. The firewall logs also began displaying the domain communication traffic to the AVD subnet.
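For anyone looking to script this fix, the following is a minimal sketch with the Az.Network module; the names, address ranges, and the firewall’s private IP are assumptions for illustration:

# Create a route table for the server subnet
$rt = New-AzRouteTable -Name "udr-servers" -ResourceGroupName "rg-avd" -Location "canadacentral"

# Route traffic destined for the VDI subnet to the firewall's private IP
Add-AzRouteConfig -RouteTable $rt -Name "to-avd-via-firewall" -AddressPrefix "10.0.1.0/24" -NextHopType VirtualAppliance -NextHopIpAddress "10.0.0.4" | Set-AzRouteTable

# Associate the route table with the server subnet
$vnet = Get-AzVirtualNetwork -Name "vnet-avd" -ResourceGroupName "rg-avd"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-servers" -AddressPrefix "10.0.2.0/24" -RouteTable $rt | Set-AzVirtualNetwork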

This would probably have been resolved once I completed the full configuration, but I hope this blog post helps anyone who may encounter a similar issue.

Monday, June 6, 2022

Behavior of Teams for users who are either disabled or deleted in an on-premise Active Directory synced to Azure AD

I recently had a customer ask me what would happen to a Microsoft Teams team if the owner, an on-premise AD account synced into Azure AD, was disabled or deleted. As I did not know off the top of my head, I went ahead and tested the scenarios. The following are the results in case anyone is looking for this information.

On-Premise Active Directory Disabled User

  • Teams channels where the disabled user is the only owner and/or member will not be deleted
  • The user will still be listed as an owner of Public and Private Teams
  • The disabled status will cause the account to not be displayed when browsing in Manage users
  • The user is unable to log into Teams, with the message:

Your account has been locked. Contact your support person to unlock it, then try again.

  • Re-enabling the account returns everything back to normal

On-Premise Active Directory Deleted User

  • Teams channels where the deleted user is the only owner will be listed but displayed with an error:

We can’t retrieve information on this team. Refresh the page.

If you continue to have problems, contact Microsoft customer support.

image

  • Teams channels that have other owners or members will continue to be accessible, and if only members remain, a member can be promoted to owner
  • The channel is not deleted
  • The user is removed and no longer appears in the Public and Private Teams
  • If the account is restored from the Recycle Bin:
    • The user will be placed back into the Teams with other members
    • The team with only this account will be accessible with the restored user as the owner
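Before disabling or deleting an account, it can be useful to know which teams the user owns so ownership can be transferred first. The following is a minimal sketch using the MicrosoftTeams PowerShell module; the UPN is an example:

# Connect and enumerate teams the user belongs to
Connect-MicrosoftTeams
$upn = "jsmith@contoso.com"
Get-Team -User $upn | ForEach-Object {
    $owners = Get-TeamUser -GroupId $_.GroupId -Role Owner
    # Flag teams where this user is an owner, and note the total owner count
    if ($owners.User -contains $upn) {
        [PSCustomObject]@{ Team = $_.DisplayName; OwnerCount = @($owners).Count }
    }
}

Teams returned with an OwnerCount of 1 are the ones that would end up ownerless.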

Monday, April 5, 2021

Deploying Carbon Black Cloud via GPO with a transform (MST) file fails with: “CAInstallPreCheck: Expect a cfg.ini in the same directory as the MSI, but could not find it.”

Problem

You’ve completed setting up Carbon Black Cloud to be deployed via GPO as described in one of my previous posts:

Deploying Carbon Black Cloud via GPO with a transform (MST) file specifying the Company Code and Group Name
http://terenceluk.blogspot.com/2021/04/deploying-carbon-black-cloud-via-gpo.html

But notice that it fails with the following event log errors:

Log Name: Application
Source: CbDefense
Event ID: 49
Level: Error

The description for Event ID 49 from source CbDefense cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

CbDefense

CAInstallPreCheck: Expect a cfg.ini in the same directory as the MSI, but could not find it.

image

Log Name: Application
Source: Application Management Group Policy
Event ID: 102
Level: Error

The install of application Carbon Black Cloud Sensor 64-bit from policy Test Carbon Black Cloud Install failed. The error was : %%1603

image

Solution

One of the reasons why this error would be thrown is if the COMPANY_CODE was missed when creating the transform file. Verify that both the COMPANY_CODE and GROUP_NAME exist in the transform file.

image
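Once the transform file has been corrected, a quick way to test it outside of GPO is to install the MSI manually with the transform applied; the MST file name here is an example:

msiexec /i installer_vista_win7_win8-64-3.6.0.1979.msi TRANSFORMS=CbDefense.mst /L*v install.log /qn

If the transform contains both properties, the install should complete without the CAInstallPreCheck error appearing in the Application event log.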

Deploying Carbon Black Cloud via GPO with a transform (MST) file specifying the Company Code and Group Name

I was recently asked about deploying the Carbon Black Cloud Sensor via Group Policy as a published MSI file and recalled how much difficulty I had incorporating the settings for the Company Code and Group Name, so I decided to dig up my old notes and write this blog post for anyone else who may be trying to find this information.

Before I begin, those who might be looking for the installation command for the deployment with, say, Workspace ONE can use the following:

installer_vista_win7_win8-64-3.6.0.1979.msi /L*vx log.txt COMPANY_CODE=XXXXXXXXXXXXXX GROUP_NAME=Monitored /qn

**Substitute the COMPANY_CODE value with your organization code and the GROUP_NAME with the name of your group.

Before publishing the Carbon Black Cloud Sensor MSI in Active Directory as a GPO, you’ll need to customize the MSI file with the orca.exe tool. Obtaining it isn’t straightforward so I’ll outline the process here.

Obtaining orca.exe for creating a Transform file (.MST)

Navigate to the following site where Windows 10 SDK can be downloaded:

Windows 10 SDK
https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk/

Download the ISO file:

image

Mount the ISO, navigate to the following directory:

E:\Installers

… and obtain the following files:

  • a35cd6c9233b6ba3da66eecaa9190436.cab
  • 838060235bcd28bf40ef7532c50ee032.cab
  • fe38b2fd0d440e3c6740b626f51a22fc.cab
  • Orca-x86_en-us.msi

image

Proceed to install Orca by running the MSI file and you should see the application in your start menu.

Creating a Microsoft Installer Transform (.MST) File

With Orca installed, we can proceed to modify the MSI file as demonstrated in the following KB:

To Create a Microsoft Installer Transform (.MST) File

https://docs.vmware.com/en/VMware-Carbon-Black-Cloud/services/cbc-sensor-installation-guide/GUID-F28C735B-EC91-4A56-A041-3C07F9D36DE6.html

Open the MSI file with Orca and click Transform > New Transform:

image

Select the Property table, then click on Tables > New Row:

image

Click Property and enter "COMPANY_CODE" then click Value and enter the company registration code for your organization:

image

Repeat the same process for the GROUP_NAME:

image

You should now see the two parameters added:

image

Proceed to generate the transform file by clicking on Transform > Generate Transform:

image

image

Deploying Carbon Black Cloud via Group Policy

With both the MSI and transform file (MST) created, we can now publish it in a Group Policy:

image

Select Advanced as the deployment method:

image

Navigate to the Modifications tab and select the transform file:

image

Click OK and assign the GPO to the appropriate OUs containing the computer objects.

image

Thursday, March 11, 2021

Using Azure Files SMB access with Windows on-premise Active Directory NTFS permissions

Years ago when I first started working with Azure, I was very excited about the release of Azure Files because it would allow migrating traditional Windows file servers to the cloud without having an IaaS Windows server providing the service. What I quickly realized was that it did not support the use of NTFS permissions and therefore was not a possible replacement. Fast forward to a few years later, the support for traditional NTFS permissions with on-premise Active Directory is finally available. I’ve been meaning to write a blog post to demonstrate the configuration so this post serves as a continuation to my previous post on how we can leverage Azure Files to replace traditional Windows Server file services.

Configuring and accessing Microsoft Azure Files
http://terenceluk.blogspot.com/2021/03/configuring-and-accessing-azure-files.html

I won’t go into too much detail about how the integration works as the information can be found in the following Microsoft documentation:

Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-auth-active-directory-enable

What I will do is highlight the items we need to configure it.

Topology

The environment I will be working with is the following simple topology where an on-premise network is connected to Azure East US through a site-to-site VPN and an Azure Files share is configured:

clip_image002

Prerequisites

The following are the prerequisites for enabling AD DS authentication for Azure file shares:

  1. The on-premise Active Directory Domain Services will need to be synced into Azure AD with Azure AD Connect
  2. The identities that will be used for accessing the Azure Files need to be synced to Azure AD if filtering is applied
  3. The endpoint accessing the file share in Azure Files needs to be able to reach it over TCP port 445 (a quick connectivity check is shown below)
  4. A storage account name of no more than 15 characters, as that is the limit for the on-premise Active Directory SamAccountName
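As referenced in item 3, port 445 reachability can be verified from the endpoint with Test-NetConnection; the storage account name is a placeholder:

Test-NetConnection -ComputerName <storageAccountName>.file.core.windows.net -Port 445

A TcpTestSucceeded value of True indicates SMB traffic can reach the share.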

Step #1 – Create the Azure Storage Account and Azure File share

Begin by creating a new storage account with a name that has no more than 15 characters:

image

With the storage account successfully created, open the new storage account and navigate to the File shares menu option:

image

Click on the + File share button to create a new file share:

image

Configure the new file share with the settings required.

I won’t go into the details of the Tiers but will provide this reference link for more information: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-planning?WT.mc_id=Portal-Microsoft_Azure_FileStorage#storage-tiers

image

Complete creating the file share by clicking on the Create button.
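As an alternative to the portal, the storage account and file share could also be created with the Az PowerShell module; this is a sketch reusing the names that appear later in this walkthrough:

# Create the storage account (name must be 15 characters or less for AD DS integration)
New-AzStorageAccount -ResourceGroupName "rg-prod-infraServers" -Name "stfsreplacement" -Location "eastus" -SkuName Standard_LRS -Kind StorageV2

# Create the file share in the new storage account
New-AzRmStorageShare -ResourceGroupName "rg-prod-infraServers" -StorageAccountName "stfsreplacement" -Name "test" -QuotaGiB 100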

With the test File share created, click to open it:

image

You can directly upload files into the file share, modify the tier, configure various operations and retrieve information pertaining to the file share.

image

Azure Storage Explorer can also be used to manage the file share.

image

You may notice that clicking into the Access Control (IAM) menu option will display the following:

Identity-based authentication (Active Directory) for Azure file shares

To give individual accounts access to the file share (Kerberos), enable identity-based authentication for the storage account. Learn more

image

This is where you would configure the Share permissions for Active Directory account access and will be configured in the following steps.

Step #2 – Enable AD DS authentication on the storage account

How Azure Files and on-premise authorization works

Unlike traditional Windows Servers, you can’t join an Azure storage account to an on-premise Active Directory, so this is achieved by registering the storage account with AD DS: an account representing it is created in AD DS. The account created in the on-premise AD can be a user account or a computer account, and if you are familiar with on-premise AD, you’ll immediately recognize that both of these account types have passwords. Failure to update the password will cause authentication to Azure Files to fail.

  • Computer accounts – these accounts have a default password expiration age of 30 days
  • User accounts – these accounts have a password expiration age set based on the password policy applied

The easy way to get around password expiration is to use a user account and set the password to never expire, but doing so will likely get any administrator in trouble. The better method is to use the Update-AzStorageAccountADObjectPassword cmdlet (https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-update-password) to manually update the account’s password before it expires. There are several ways to automate this, whether something as simple as a Windows Task Scheduler task (a sketch follows) or an enterprise management application that runs it on a schedule.
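For the Task Scheduler route, a minimal sketch might look like the following; the wrapper script path is hypothetical and would contain the Update-AzStorageAccountADObjectPassword call shown later in this post:

# Run a password-rotation wrapper script every Sunday at 2 AM
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\Scripts\Update-AzFilesADPassword.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 2am
Register-ScheduledTask -TaskName "Rotate Azure Files AD Password" -Action $action -Trigger $trigger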

Using AzFilesHybrid to create the on-premise account representing Azure Files

Proceed to download the latest AzFilesHybrid PowerShell module at the following URL: https://github.com/Azure-Samples/azure-files-samples/releases

image

Unpacking the ZIP file will provide the following 3 files:

  • AzFilesHybrid.psd1
  • AzFilesHybrid.psm1
  • CopyToPSPath.ps1

image

Before executing the script, you’ll need to use an account with the following properties and permissions:

  1. Replicated to Azure AD
  2. Permissions to create or modify a user or computer object in the on-premise Active Directory
  3. Storage Account Owner or Contributor permissions

The account I’ll be using has Domain Admin and Global Admin rights.

From a domain joined computer where you are logged in with the required on-premise Active Directory account, launch PowerShell or PowerShell ISE and begin by setting the execution policy to unrestricted so we can run the AzFilesHybrid PowerShell scripts:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

Navigate to the directory containing the unzipped scripts and execute:

.\CopyToPSPath.ps1

Import the AzFilesHybrid module by executing:

Import-Module -Name AzFilesHybrid

Connect to the Azure tenant:

Connect-AzAccount

image

Set up the variables for the subscription ID, the resource group name and storage account name:

$SubscriptionId = "<SubscriptionID>"
$ResourceGroupName = "<resourceGroupName>"
$StorageAccountName = "<storageAccountName>"

As you can have more than one subscription in a tenant, select the subscription containing the resources by executing:

Select-AzSubscription -SubscriptionId $SubscriptionId

image

With the prerequisites in place, we can now use the Join-AzStorageAccountForAuth cmdlet to create the account in the on-premise AD that represents the storage account in Azure. You can identify the target OU with either -OrganizationalUnitName "<Name of OU>" or -OrganizationalUnitDistinguishedName "<DN of OU>":

Join-AzStorageAccountForAuth `
-ResourceGroupName $ResourceGroupName `
-Name $StorageAccountName `
-DomainAccountType "<ComputerAccount or ServiceLogonAccount>" `
-OrganizationalUnitDistinguishedName "<DN of OU>"

The following is an example:

Join-AzStorageAccountForAuth `
-ResourceGroupName $ResourceGroupName `
-Name $StorageAccountName `
-DomainAccountType "ServiceLogonAccount" `
-OrganizationalUnitDistinguishedName "OU=AzureFiles,DC=contoso,DC=com"

**Note the backticks (the character sharing the tilde key on the keyboard), which serve as PowerShell’s line-continuation character and allow the command to be written across multiple lines.

-----------------------------------------------------------------------------------------------------------------------

If your storage account name is longer than 15 characters, you’ll get an error:

WARNING: Parameter -DomainAccountType is 'ServiceLogonAccount', which will not be supported AES256 encryption for Kerberos tickets.
Join-AzStorageAccountForAuth : Parameter -StorageAccountName 'steastusserviceendpoint' has more than 15 characters, which is not supported to be used as the SamAccountName to create an Active Directory object for the storage account. Azure Files will be supporting AES256 encryption for Kerberos tickets, which requires that the SamAccountName match the storage account name. Please consider using a storage account with a shorter name.
At line:1 char:1
+ Join-AzStorageAccountForAuth `
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Join-AzStorageAccount

-----------------------------------------------------------------------------------------------------------------------

Successful execution of the Join-AzStorageAccountForAuth will display the following:

PS C:\AzFilesHybrid> Join-AzStorageAccountForAuth `
-ResourceGroupName $ResourceGroupName `
-Name $StorageAccountName `
-DomainAccountType "ServiceLogonAccount" `
-OrganizationalUnitDistinguishedName "OU=AzureFiles,DC=contoso,DC=com"

WARNING: Parameter -DomainAccountType is 'ServiceLogonAccount', which will not be supported AES256 encryption for Kerberos tickets.

StorageAccountName ResourceGroupName    PrimaryLocation SkuName      Kind      AccessTier CreationTime         ProvisioningState EnableHttpsTrafficOnly
------------------ -----------------    --------------- -------      ----      ---------- ------------         ----------------- ----------------------
stfsreplacement    rg-prod-infraServers eastus          Standard_LRS StorageV2 Hot        3/8/2021 11:30:02 AM Succeeded         True

PS C:\AzFilesHybrid>

image

The corresponding object (in this case a user object) should also be created in the specified OU:

image

Notice how the password is automatically set to not expire:

image

We can also verify the configuration with the following PowerShell cmdlets:

Obtain the storage account and store it as a variable:

$storageAccount = Get-AzStorageAccount `
-ResourceGroupName $ResourceGroupName `
-Name $StorageAccountName

Confirm that AD DS authentication for file shares has been enabled by listing the directory domain information of the storage account:

$storageAccount.AzureFilesIdentityBasedAuth.ActiveDirectoryProperties

https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.management.storage.models.azurefilesidentitybasedauthentication.activedirectoryproperties?view=azure-dotnet

View the directory service of the storage:

$storageAccount.AzureFilesIdentityBasedAuth.DirectoryServiceOptions

https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.management.storage.azurefilesidentitybasedauthentication.directoryserviceoptions?view=azure-java-stable

image

Step #3 – Configure On-Premise AD Groups for Azure Files Access (Share Permissions)

With the AD DS authentication integration setup for the storage account, the next step is to configure the on-premise Active Directory groups that will be granted access to the Azure Files file share. Think of this step as how we would configure Share permissions on a folder so we can then proceed to configure the NTFS permissions.

There are 3 predefined RBAC roles provided by Azure that will map to the on-premise AD groups and they are as follows:

Storage File Data SMB Share Contributor – Allows for read, write, and delete access in Azure Storage file shares over SMB.

Storage File Data SMB Share Elevated Contributor – Allows for read, write, delete and modify NTFS permissions access in Azure Storage file shares over SMB.

Storage File Data SMB Share Reader – Allows for read access to Azure File Share over SMB.

image

The following are the mappings that I have planned:

Azure Role: Storage File Data SMB Share Contributor
On-premise AD group: AzFileShareContributor

Azure Role: Storage File Data SMB Share Elevated Contributor
On-premise AD group: AzFileShareElevContributor

Azure Role: Storage File Data SMB Share Reader
On-premise AD group: AzFileShareReader

Proceed to create the groups in the on-premise Active Directory:

image
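If you prefer PowerShell over Active Directory Users and Computers, the groups could be created with the ActiveDirectory module; a sketch where the OU path is an example:

New-ADGroup -Name "AzFileShareContributor" -GroupScope Global -GroupCategory Security -Path "OU=Groups,DC=contoso,DC=com"
New-ADGroup -Name "AzFileShareElevContributor" -GroupScope Global -GroupCategory Security -Path "OU=Groups,DC=contoso,DC=com"
New-ADGroup -Name "AzFileShareReader" -GroupScope Global -GroupCategory Security -Path "OU=Groups,DC=contoso,DC=com"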

Then log into the Azure portal and navigate to the storage account > File Shares then click on the file share that has been created:

image

From within the file share, click on Access Control (IAM) and then Add role assignments:

image

Configure the appropriate mapping for the 3 on-premise AD groups and the Azure roles:

image

image

image
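The same role assignments could be made with PowerShell once the groups have synced to Azure AD; this is a sketch scoped to the test file share created earlier:

# Scope the assignment to the file share itself
$scope = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Storage/storageAccounts/$StorageAccountName/fileServices/default/fileshares/test"

# Look up the synced group and assign the role
$group = Get-AzADGroup -DisplayName "AzFileShareContributor"
New-AzRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Storage File Data SMB Share Contributor" -Scope $scope

Repeat for the Elevated Contributor and Reader groups with their matching role names.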

Step #4 – Mount the Azure Files file share with full permissions and configure NTFS permissions

With the share permissions set, we can now configure the NTFS permissions on the file share. There isn’t a way to perform this from within the Azure portal, so we will need to mount the Azure file share on a VM joined to the on-premise Active Directory.

The UNC path for accessing the Azure Files share is as follows:

\\<storageAccountName>.file.core.windows.net\<shareName>

You can use the net use <driveLetter>: command together with the storage account key to mount the drive as such:

net use z: \\<storageAccountName>.file.core.windows.net\<shareName> <storageAccountKey> /user:Azure\<storageAccountName>

net use z: \\stfsreplacement.file.core.windows.net\test N2PrIm73/xHNPxe7BoVyNHBdjU3HBPpQg33Z+PeKmjy8nxUMSeOG4Azfnknyn+up2pQpOinUJ/FWl9ceeGz/bQ== /user:Azure\stfsreplacement

image

Note that the storage account key can be obtained here:

image

Or as an alternative, you can also retrieve a full PowerShell cmdlet to map the drive by using the Connect button for the file share:

image

With the file share mapped as a drive, we can now assign the appropriate NTFS permissions for the groups we created earlier:

Azure Role: Storage File Data SMB Share Contributor
On-premise AD group: AzFileShareContributor
Permissions:

  • Modify
  • Read & execute
  • List folder contents
  • Read

Azure Role: Storage File Data SMB Share Elevated Contributor
On-premise AD group: AzFileShareElevContributor
Permissions:

  • Full control
  • Modify
  • Read & execute
  • List folder contents
  • Read

Azure Role: Storage File Data SMB Share Reader
On-premise AD group: AzFileShareReader
Permissions:

  • Read & execute
  • List folder contents
  • Read
image
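These NTFS permissions could also be applied from the mounted drive with icacls; a sketch assuming the Z: drive mapping above and an example domain name of CONTOSO:

icacls Z:\ /grant "CONTOSO\AzFileShareContributor:(OI)(CI)M"
icacls Z:\ /grant "CONTOSO\AzFileShareElevContributor:(OI)(CI)F"
icacls Z:\ /grant "CONTOSO\AzFileShareReader:(OI)(CI)RX"

The (OI)(CI) flags inherit the permission to child files and folders, and M, F, and RX map to Modify, Full control, and Read & execute respectively.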

Step #5 – Mount the Azure Files file share as an on-premise Active Directory User

Now that the share and NTFS permissions have been set, we can proceed to mount the share as users who are placed into one of the 3 groups to test, as shown below.
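Unlike the earlier mapping that used the storage account key, the test mount is done without a key so the logged-on user’s Kerberos ticket is used for authentication; access should then reflect the group membership:

net use Z: \\<storageAccountName>.file.core.windows.net\<shareName>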

Step #6 – Update the password of the storage account identity in the on-premise Active Directory DS

The last action covers how we would change/update the password on the account object representing the storage account so that Kerberos authentication continues to work. The following is a snippet from the Microsoft documentation: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-update-password

If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage account in an organizational unit or domain that enforces password expiration time, you must change the password before the maximum password age. Your organization may run automated cleanup scripts that delete accounts once their password expires. Because of this, if you do not change your password before it expires, your account could be deleted, which will cause you to lose access to your Azure file shares.

To trigger password rotation, you can run the Update-AzStorageAccountADObjectPassword command from the AzFilesHybrid module. This command must be run in an on-premises AD DS-joined environment using a hybrid user with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account, and uses it to update the password of the registered account in AD DS. Then, it regenerates the target Kerberos key of the storage account, and updates the password of the registered account in AD DS. You must run this command in an on-premises AD DS-joined environment.

The syntax for the Update-AzStorageAccountADObjectPassword cmdlet to perform this will look as follows:

Update-AzStorageAccountADObjectPassword `
-RotateToKerbKey kerb2 `
-ResourceGroupName "<resourceGroupName>" `
-StorageAccountName "<storageAccountName>"

If you are continuing the configuration from the beginning of this blog post, the resource group and storage account names are already stored in variables, so you can just reference them as such:

Update-AzStorageAccountADObjectPassword `
-RotateToKerbKey kerb2 `
-ResourceGroupName $ResourceGroupName `
-StorageAccountName $StorageAccountName

image

Hope this helps anyone looking for a step-by-step demonstration of how to set up Azure Files SMB access using on-premise AD NTFS permissions.