
Thursday, March 11, 2021

Using Azure Files SMB access with Windows on-premise Active Directory NTFS permissions

Years ago when I first started working with Azure, I was very excited about the release of Azure Files because it promised to let us migrate traditional Windows file servers to the cloud without an IaaS Windows server providing the service. What I quickly realized was that it did not support NTFS permissions and therefore was not a viable replacement. Fast forward a few years and support for traditional NTFS permissions with an on-premise Active Directory is finally available. I’ve been meaning to write a blog post demonstrating the configuration, so this post serves as a continuation of my previous post on how we can leverage Azure Files to replace traditional Windows Server file services.

Configuring and accessing Microsoft Azure Files
http://terenceluk.blogspot.com/2021/03/configuring-and-accessing-azure-files.html

I won’t go into too much detail about how the integration works as the information can be found in the following Microsoft documentation:

Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-auth-active-directory-enable

What I will do is highlight the items we need in order to configure it.

Topology

The environment I will be working with is the following simple topology, where an on-premise network is connected to Azure East US through a site-to-site VPN and an Azure Files share is configured:

clip_image002

Prerequisites

The following are the prerequisites for enabling AD DS authentication for Azure file shares:

  1. The on-premise Active Directory Domain Services will need to be synced to Azure AD with Azure AD Connect
  2. The identities that will be used to access Azure Files need to be included in the sync to Azure AD if filtering is applied
  3. The endpoint accessing the file share in Azure Files needs to be able to reach it over port 445
  4. A storage account name with fewer than 15 characters, as that is the limit for the on-premise Active Directory SamAccountName
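For prerequisite #3, connectivity over port 445 can be verified from the endpoint before going any further. The following is a quick sketch; the storage account FQDN is a placeholder that should be replaced with your own storage account's file endpoint:

```
# Sketch only – checks whether SMB traffic can reach the storage account;
# replace the FQDN with your storage account's file endpoint
$result = Test-NetConnection -ComputerName "stfsreplacement.file.core.windows.net" -Port 445
if ($result.TcpTestSucceeded) {
    "Port 445 reachable – Azure Files SMB access should work"
} else {
    "Port 445 blocked – check firewalls and ISP restrictions"
}
```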

Step #1 – Create the Azure Storage Account and Azure File share

Begin by creating a new storage account with a name that has fewer than 15 characters:

image

With the storage account successfully created, open the new storage account and navigate to the File shares menu option:

image

Click on the + File share button to create a new file share:

image

Configure the new file share with the settings required.

I won’t go into the details of the Tiers but will provide this reference link for more information: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-planning?WT.mc_id=Portal-Microsoft_Azure_FileStorage#storage-tiers

image

Complete creating the file share by clicking on the Create button.

With the test File share created, click to open it:

image

You can directly upload files into the file share, modify the tier, configure various operations and retrieve information pertaining to the file share.

image

Azure Storage Explorer can also be used to manage the file share.

image

You may notice that clicking into the Access Control (IAM) menu option will display the following:

Identity-based authentication (Active Directory) for Azure file shares

To give individual accounts access to the file share (Kerberos), enable identity-based authentication for the storage account. Learn more

image

This is where you would configure the Share permissions for Active Directory account access and will be configured in the following steps.

Step #2 – Enable AD DS authentication on the storage account

How Azure Files and on-premise authorization works

Unlike a traditional Windows Server, you can’t join an Azure storage account to an on-premise Active Directory, so this is achieved by registering the storage account with AD DS, which creates an account representing it in AD DS. The account created in the on-premise AD can be either a user account or a computer account, and if you are familiar with on-premise AD, you’ll immediately recognize that both of these account types have passwords that can expire. Failure to update the password before it expires will cause authentication to Azure Files to fail.

  • Computer accounts – these accounts have a default password expiration age of 30 days
  • User accounts – these accounts have a password expiration age set by the password policy applied

The easy way to get around password expiration is to use a user account and set the password to never expire, but doing so will likely get any administrator in trouble. The better method is to use the Update-AzStorageAccountADObjectPassword cmdlet (https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-update-password) to update the account’s password before it expires. There are several ways to automate this, from something as simple as a Windows Task Scheduler task to an enterprise management application running it on a schedule.
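As one possible sketch of the Task Scheduler approach, a wrapper script containing the password update cmdlet can be registered to run on a recurring schedule. The script path, schedule, and service account below are hypothetical placeholders, and the wrapper script is assumed to handle Azure authentication (for example, with a stored service principal) and to have the AzFilesHybrid module installed:

```
# Sketch only – registers a recurring task that runs a wrapper script which
# calls Update-AzStorageAccountADObjectPassword; the script path, schedule
# and service account below are placeholders for your environment
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-NoProfile -File C:\Scripts\Rotate-AzFilesPassword.ps1"
# Every 3 weeks keeps the rotation well inside a 30-day expiration window
$trigger = New-ScheduledTaskTrigger -Weekly -WeeksInterval 3 -DaysOfWeek Monday -At 3am
Register-ScheduledTask -TaskName "Rotate Azure Files AD password" `
    -Action $action -Trigger $trigger -User "CONTOSO\svc-azfiles" -Password "<password>"
```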

Using AzFilesHybrid to create the on-premise account representing Azure Files

Proceed to download the latest AzFilesHybrid PowerShell module at the following URL: https://github.com/Azure-Samples/azure-files-samples/releases

image

Unpacking the ZIP file will provide the following 3 files:

  • AzFilesHybrid.psd1
  • AzFilesHybrid.psm1
  • CopyToPSPath.ps1

image

Before executing the script, you’ll need to use an account with the following properties and permissions:

  1. Replicated to Azure AD
  2. Permissions to create a user or computer object in the on-premise Active Directory
  3. Storage Account Owner or Contributor permissions

The account I’ll be using has Domain Admin and Global Admin rights.

From a domain joined computer where you are logged in with the required on-premise Active Directory account, launch PowerShell or PowerShell ISE and begin by setting the execution policy to unrestricted so we can run the AzFilesHybrid PowerShell scripts:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

Navigate to the directory containing the unzipped scripts and execute:

.\CopyToPSPath.ps1

Import the AzFilesHybrid module by executing:

Import-Module -Name AzFilesHybrid

Connect to the Azure tenant:

Connect-AzAccount

image

Set up the variables for the subscription ID, the resource group name and storage account name:

$SubscriptionId = "<SubscriptionID>"
$ResourceGroupName = "<resourceGroupName>"
$StorageAccountName = "<storageAccountName>"

As you can have more than one subscription in a tenant, select the subscription containing the resources by executing:

Select-AzSubscription -SubscriptionId $SubscriptionId

image

With the prerequisites executed, we can now use the Join-AzStorageAccountForAuth cmdlet to create the account in the on-premise AD that represents the storage account in Azure:

## You can specify either the OU name or the DN of the OU
Join-AzStorageAccountForAuth `
    -ResourceGroupName $ResourceGroupName `
    -Name $StorageAccountName `
    -DomainAccountType "<ComputerAccount or ServiceLogonAccount>" `
    -OrganizationalUnitName "<Name of OU>" `
    -OrganizationalUnitDistinguishedName "<DN of OU>"

The following is an example:

Join-AzStorageAccountForAuth `
    -ResourceGroupName $ResourceGroupName `
    -Name $StorageAccountName `
    -DomainAccountType "ServiceLogonAccount" `
    -OrganizationalUnitDistinguishedName "OU=AzureFiles,DC=contoso,DC=com"

**Note the backticks (the character sharing a key with the tilde), which PowerShell uses as a line-continuation character. It allows the command to be written across multiple lines.

-----------------------------------------------------------------------------------------------------------------------

If your storage account name is longer than 15 characters then you’ll get an error:

WARNING: Parameter -DomainAccountType is 'ServiceLogonAccount', which will not be supported AES256 encryption for Kerberos tickets.
Join-AzStorageAccountForAuth : Parameter -StorageAccountName 'steastusserviceendpoint' has more than 15 characters, which is not supported to be used
as the SamAccountName to create an Active Directory object for the storage account. Azure Files will be supporting AES256 encryption for Kerberos
tickets, which requires that the SamAccountName match the storage account name. Please consider using a storage account with a shorter name.
At line:1 char:1
+ Join-AzStorageAccountForAuth `
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Join-AzStorageAccount

-----------------------------------------------------------------------------------------------------------------------

Successful execution of the Join-AzStorageAccountForAuth cmdlet will display the following:

PS C:\AzFilesHybrid> Join-AzStorageAccountForAuth `
    -ResourceGroupName $ResourceGroupName `
    -Name $StorageAccountName `
    -DomainAccountType "ServiceLogonAccount" `
    -OrganizationalUnitDistinguishedName "OU=AzureFiles,DC=contoso,DC=com"

WARNING: Parameter -DomainAccountType is 'ServiceLogonAccount', which will not be supported AES256 encryption for Kerberos tickets.

StorageAccountName ResourceGroupName    PrimaryLocation SkuName      Kind      AccessTier CreationTime         ProvisioningState EnableHttpsTrafficOnly
------------------ -----------------    --------------- -------      ----      ---------- ------------         ----------------- ----------------------
stfsreplacement    rg-prod-infraServers eastus          Standard_LRS StorageV2 Hot        3/8/2021 11:30:02 AM Succeeded         True

PS C:\AzFilesHybrid>

image

The corresponding object (in this case a user object) should also be created in the specified OU:

image

Notice how the password is automatically set to not expire:

image

We can also verify the configuration with the following PowerShell cmdlets:

Obtain the storage account and store it as a variable:

$storageAccount = Get-AzStorageAccount `
    -ResourceGroupName $ResourceGroupName `
    -Name $StorageAccountName

List the directory domain information if the storage account has AD DS authentication enabled for file shares:

$storageAccount.AzureFilesIdentityBasedAuth.ActiveDirectoryProperties

https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.management.storage.models.azurefilesidentitybasedauthentication.activedirectoryproperties?view=azure-dotnet

View the directory service of the storage account:

$storageAccount.AzureFilesIdentityBasedAuth.DirectoryServiceOptions

https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.management.storage.azurefilesidentitybasedauthentication.directoryserviceoptions?view=azure-java-stable

image

Step #3 – Configure On-Premise AD Groups for Azure Files Access (Share Permissions)

With the AD DS authentication integration setup for the storage account, the next step is to configure the on-premise Active Directory groups that will be granted access to the Azure Files file share. Think of this step as how we would configure Share permissions on a folder so we can then proceed to configure the NTFS permissions.

There are 3 predefined RBAC roles provided by Azure that will map to the on-premise AD groups and they are as follows:

Storage File Data SMB Share Contributor – Allows for read, write, and delete access in Azure Storage file shares over SMB.

Storage File Data SMB Share Elevated Contributor – Allows for read, write, delete and modify NTFS permissions access in Azure Storage file shares over SMB.

Storage File Data SMB Share Reader – Allows for read access to Azure File Share over SMB.

image

The following are the mappings that I have planned:

Azure Role: Storage File Data SMB Share Contributor
On-premise AD group: AzFileShareContributor

Azure Role: Storage File Data SMB Share Elevated Contributor
On-premise AD group: AzFileShareElevContributor

Azure Role: Storage File Data SMB Share Reader
On-premise AD group: AzFileShareReader

Proceed to create the groups in the on-premise Active Directory:

image

Then log into the Azure portal and navigate to the storage account > File Shares then click on the file share that has been created:

image

From within the file share, click on Access Control (IAM) and then Add role assignments:

image

Configure the appropriate mapping for the 3 on-premise AD groups and the Azure roles:

image

image

image
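As an alternative to the portal, the same role assignments can be scripted with the Az PowerShell module. The following is a sketch that assumes the three groups have already synced to Azure AD, that the $SubscriptionId, $ResourceGroupName and $StorageAccountName variables from earlier are still set, and that the share is named "test":

```
# Sketch only – scope the 3 built-in SMB roles to the "test" file share;
# group names and share name are assumptions from this post's environment
$scope = "/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName" +
         "/providers/Microsoft.Storage/storageAccounts/$StorageAccountName" +
         "/fileServices/default/fileshares/test"

New-AzRoleAssignment -ObjectId (Get-AzADGroup -DisplayName "AzFileShareContributor").Id `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" -Scope $scope
New-AzRoleAssignment -ObjectId (Get-AzADGroup -DisplayName "AzFileShareElevContributor").Id `
    -RoleDefinitionName "Storage File Data SMB Share Elevated Contributor" -Scope $scope
New-AzRoleAssignment -ObjectId (Get-AzADGroup -DisplayName "AzFileShareReader").Id `
    -RoleDefinitionName "Storage File Data SMB Share Reader" -Scope $scope
```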

Step #4 – Mount the Azure Files file share with full permissions and configure NTFS permissions

With the share permissions set, we can now configure the NTFS permissions on the file share. There isn’t a way to perform this from within the Azure portal so we will need to mount an Azure file share to a VM joined to the on-premise Active Directory.

The UNC path for accessing the Azure Files share would be as follows:

\\<storageAccountName>.file.core.windows.net\<shareName>

You can use the net use <driveLetter>: command to mount the drive, supplying the storage account key as the password:

net use z: \\<storageAccountName>.file.core.windows.net\<shareName> <storageAccountKey> /user:Azure\<storageAccountName>

net use z: \\stfsreplacement.file.core.windows.net\test N2PrIm73/xHNPxe7BoVyNHBdjU3HBPpQg33Z+PeKmjy8nxUMSeOG4Azfnknyn+up2pQpOinUJ/FWl9ceeGz/bQ== /user:Azure\stfsreplacement

image

Note that the storage account key can be obtained here:

image

Or as an alternative, you can also retrieve a full PowerShell cmdlet to map the drive by using the Connect button for the file share:

image

With the file share mapped as a drive, we can now assign the appropriate NTFS permissions for the groups we created earlier:

Azure Role: Storage File Data SMB Share Contributor
On-premise AD group: AzFileShareContributor
Permissions:

  • Modify
  • Read & execute
  • List folder contents
  • Read

Azure Role: Storage File Data SMB Share Elevated Contributor
On-premise AD group: AzFileShareElevContributor
Permissions:

  • Full control
  • Modify
  • Read & execute
  • List folder contents
  • Read

Azure Role: Storage File Data SMB Share Reader
On-premise AD group: AzFileShareReader
Permissions:

  • Read & execute
  • List folder contents
  • Read
image
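Instead of the Security tab in Windows Explorer, the same NTFS permissions can also be applied from the command line with icacls. This is a sketch only, assuming the share is mounted as Z: with the storage account key (which grants full permissions) and that the on-premise domain is CONTOSO:

```
REM Sketch only – (OI)(CI) inherits to files and subfolders;
REM M = Modify, F = Full control, RX = Read & execute
icacls Z:\ /grant "CONTOSO\AzFileShareContributor:(OI)(CI)M"
icacls Z:\ /grant "CONTOSO\AzFileShareElevContributor:(OI)(CI)F"
icacls Z:\ /grant "CONTOSO\AzFileShareReader:(OI)(CI)RX"
```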

Step #5 – Mount the Azure Files file share as an on-premise Active Directory User

Now that the share and NTFS permissions have been set, we can proceed to mount the share as users who are placed into one of the 3 groups to test.

Step #6 – Update the password of the storage account identity in the on-premise Active Directory DS

The last action covers how we would change/update the password on the account object representing the storage account to keep Kerberos authentication working. The following is a snippet from the Microsoft documentation: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-update-password

If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage account in an organizational unit or domain that enforces password expiration time, you must change the password before the maximum password age. Your organization may run automated cleanup scripts that delete accounts once their password expires. Because of this, if you do not change your password before it expires, your account could be deleted, which will cause you to lose access to your Azure file shares.

To trigger password rotation, you can run the Update-AzStorageAccountADObjectPassword command from the AzFilesHybrid module. This command must be run in an on-premises AD DS-joined environment using a hybrid user with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account, and uses it to update the password of the registered account in AD DS. Then, it regenerates the target Kerberos key of the storage account, and updates the password of the registered account in AD DS. You must run this command in an on-premises AD DS-joined environment.

The syntax for the Update-AzStorageAccountADObjectPassword cmdlet to perform this will look as follows:

Update-AzStorageAccountADObjectPassword `
    -RotateToKerbKey kerb2 `
    -ResourceGroupName "<resourceGroupName>" `
    -StorageAccountName "<storageAccountName>"

If you are continuing the configuration from the beginning of this blog post then the resource group and storage account names are already stored in variables, so you can simply reference them:

Update-AzStorageAccountADObjectPassword `
    -RotateToKerbKey kerb2 `
    -ResourceGroupName $ResourceGroupName `
    -StorageAccountName $StorageAccountName

image

Hope this helps anyone looking for a step-by-step demonstration of how to set up Azure Files for SMB access using on-premise AD NTFS permissions.

Accessing Exchange Server 2019 /OWA and /ECP throws the errors: "Status code: 500" and "Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies."

Problem

You’ve noticed that after patching Exchange Server 2019 servers (for this example, the patch for HAFNIUM was applied), access to /OWA and /ECP now displays the following errors:

OWA

:-(

Something went wrong

Your request couldn't be completed. HTTP Status code: 500.

X-ClientId: F8E662D41996402E8660EEEB0976EA50

request-id a5077939-13bd-4032-9471-d6c8dc221d5a

X-OWA-Error System.Web.HttpUnhandledException

X-OWA-Version 15.2.721.13

X-FEServer Exch01

X-BEServer Exch02

Date:3/8/2021 12:31:53 PM

InnerException: System.IO.DirectoryNotFoundException

Fewer details...

Refresh the page

image

ECP

Server Error in '/ecp' Application.

_______________________________________

Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Assembly Load Trace: The following information can be helpful to determine why the assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' could not be loaded.

WRN: Assembly binding logging is turned OFF.

To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.

Note: There is some performance penalty associated with assembly bind failure logging.

To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

Stack Trace:

[FileNotFoundException: Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.]

System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0

System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName) +96

System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65

System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +62

System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase) +50

[ConfigurationErrorsException: Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.]

System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase) +572

System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, Boolean checkAptcaBit) +31

System.Web.Configuration.Common.ModulesEntry.SecureGetType(String typeName, String propertyName, ConfigurationElement configElement) +59

System.Web.Configuration.Common.ModulesEntry..ctor(String name, String typeName, String propertyName, ConfigurationElement configElement) +59

System.Web.HttpApplication.BuildIntegratedModuleCollection(List`1 moduleList) +221

System.Web.HttpApplication.GetModuleCollection(IntPtr appContext) +1103

System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) +122

System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) +173

System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) +255

System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext) +347

[HttpException (0x80004005): Could not load file or assembly 'Microsoft.Exchange.Common, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.]

System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +552

System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +122

System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +737

image

You’ve noticed that reviewing the BinSearchFolders application setting for the ecp folder in the Exchange Back End website shows that the Value is configured with %ExchangeInstallDir%:

image

Changing this to the path without using the variable appears to fix the ECP page but not OWA:

image

Solution

One of the possible solutions to correct the issue is to use the UpdateCas.ps1 script located in the \Microsoft\Exchange Server\V15\Bin folder to rebuild the /OWA and /ECP virtual directories:

image

image

Proceed to test the /owa and /ecp directories once the PowerShell script completes.
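For reference, the fix boils down to running the script from the Exchange Bin folder. This is a sketch only; the installation path below assumes a default installation, and the app pool recycle is an extra precaution rather than a documented requirement:

```
# Sketch only – run from an elevated Exchange Management Shell on the
# affected server; the default installation path below is an assumption
cd "C:\Program Files\Microsoft\Exchange Server\V15\Bin"
.\UpdateCas.ps1
# Recycle the app pools (or run iisreset) so /owa and /ecp pick up the rebuilt files
Restart-WebAppPool MSExchangeOWAAppPool
Restart-WebAppPool MSExchangeECPAppPool
```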

Sunday, March 7, 2021

Configuring and accessing Microsoft Azure Files

One of the common questions I am asked by colleagues and clients is how and why they would use Azure Files. The answer to the “how” and “why” are in abundance and I usually provide examples based on the environment I am working in. Some features may be more important for others but a few examples I have off the top of my head are:

  • Lift and shift initiatives for applications installed on Windows Servers can have the data transitioned into the cloud with Azure Files (you can easily move all of the data stored on, say, an E drive of a Windows server into Azure Files, then mount that share back on the server as an E drive without changing code)
  • Container instances, which require a drive for persistent storage, can leverage Azure Files to provide a drive that can be mounted in Linux
  • Traditional file servers hosted on Windows Servers can be migrated into Azure Files for serverless hosting
  • Azure File Sync can be used to synchronize files stored in Azure Files to an on-premise file server for fast local access

The following Microsoft document provides more information about Azure files:

What is Azure Files?
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

With a brief overview of Azure Files and its benefits out of the way, the following is a demonstration of how to set up Azure Files, access the share, snapshot, and lockdown access with a service endpoint. Azure Files can also be configured with Share and NTFS permissions similar to a traditional shared folder on a Windows Server but the process of the configuration is too long to include into this post so I will write a separate one in the future.

Setting up Azure Files

Begin by creating a new storage account that will contain the Azure Files:

image

Basic Tab

Fill in the required configuration parameters for the storage account based on the requirements. Note that the Storage account name will need to be unique across all of Azure’s storage accounts because the name will be used as part of the URL for access. The name needs to be:

  • Between 3 and 24 characters long
  • Contain only lowercase letters and numbers (no special characters such as “-“)
image
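The naming rules above can be checked locally before submitting the form. A minimal sketch, where $name is a hypothetical candidate name:

```
# Sketch only – validates the storage account naming rules locally;
# -cmatch is case-sensitive, so uppercase letters are rejected
$name = "steastusserviceendpoint"
if ($name -cmatch '^[a-z0-9]{3,24}$') {
    "Name format is valid"
} else {
    "Name must be 3-24 lowercase letters and numbers"
}
```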

Networking Tab

We will be locking down the connectivity method later so leave the Connectivity method as Public endpoint (all networks) for now and the Routing preference as Microsoft network routing (default):

image

Data protection tab

The data protection options are displayed and the one that is related to Azure Files is the Turn on soft delete for file shares:

image

The setting that pertains to Azure Files on the Advanced tab is Large file shares support, which provides file share support up to a maximum of 100 TiB but does not support geo-redundant storage:

image

image

Proceed to create the storage account by clicking Review + create button then Create.

With the storage account successfully created, open the new storage account and navigate to the File shares menu option:

image

Click on the + File share button to create a new file share:

image

Configure the new file share with the settings required.

I won’t go into the details of the Tiers but will provide this reference link for more information: https://docs.microsoft.com/en-us/azure/storage/files/storage-files-planning?WT.mc_id=Portal-Microsoft_Azure_FileStorage#storage-tiers

image

Complete creating the file share by clicking on the Create button.

With the test File share created, click to open it:

image

You can directly upload files into the file share, modify the tier, configure various operations and retrieve information pertaining to the file share.

image

You may notice that clicking into the Access Control (IAM) menu option will display the following:

Identity-based authentication (Active Directory) for Azure file shares

To give individual accounts access to the file share (Kerberos), enable identity-based authentication for the storage account. Learn more

image

This is where you would configure the Share permissions for Active Directory account access, which I will cover in a future blog post.

Clicking into the properties of the file share will display the https URL to access the share.

image

Note that you won’t be able to browse into the folder as you would be able to for blob storage with anonymous access. Attempting to do so will display the following:

<Error>
<Code>InvalidHeaderValue</Code>
<Message>The value for one of the HTTP headers is not in the correct format. RequestId:ee3cfc97-601a-0077-765e-1342f2000000 Time:2021-03-07T14:33:25.6179879Z</Message>
<HeaderName>x-ms-version</HeaderName>
<HeaderValue/>
</Error>

image

The rest of the configuration settings are fairly self-explanatory: Backup is for configuring backups of the Azure file share, and Snapshots is a feature I will demonstrate later in this post.

Administratively Accessing Azure Files for Upload and Download and other Folder Operations

The Azure portal allows you to upload and download files but is not very efficient. A better way of administratively accessing the share would be to use Azure Storage Explorer, which is an application that is installed onto a desktop or server. Proceed to download and install the application: https://azure.microsoft.com/en-ca/features/storage-explorer/

Launch the application and click on the power plug icon on the left to connect to a variety of Azure services:

image

Note the following selection of Azure resources we can connect to:

  • Subscription
  • Storage account
  • Blob container
  • ADLS Gen2 container or directory
  • File share
  • Queue
  • Table
  • Local storage emulator
image

As we are configuring Azure Files, the 3 options we are interested in connecting to are:

  • Subscription
  • Storage account
  • File share

I will demonstrate connecting with all 3 of them.

Subscription

Connecting with the Subscription option simply requires credentials to the Azure tenant and essentially provides access to all of the storage resources in the subscription:

image

image

Storage account

Connecting to the Storage Account provides two options:

  • Account name and key
  • Shared access signature (SAS)
image

To use the Account name and key, navigate to the storage account in the Azure portal and into Access keys. The information we need is the Storage account name and key1 or key2:

image

Paste the information in the Azure Storage Explorer:

image

Proceed to connect:

image

The connection should succeed and you will see the storage account listed in the Storage Accounts node:

image

To use the Shared access signature (SAS), navigate to the storage account in the Azure portal and into Access keys. The information we need is the Storage account name and Connection string:

image

Paste the information in the Azure Storage Explorer:

image

Proceed to connect:

image

The connection should succeed and you will see the storage account listed in the Storage Accounts node:

image

File share

Connecting with the File share option requires a SAS (Shared Access Signature) to be created. You unfortunately can’t create it directly from the storage account portal as it will not be Azure File specific:

image

An alternative way of creating it is to use Azure Storage Explorer with an already established connection to the storage account: right click on the File Share, then select Get Shared Access Signature…:

image

A Shared Access Signature window will be displayed with options to configure the permissions for this access:

image

Selecting Create after the parameters are set will generate the following three strings:

Share: Test

URI: https://steastusserviceendpoint.file.core.windows.net/test?st=2021-03-07T15%3A04%3A29Z&se=2021-03-08T15%3A04%3A29Z&sp=rl&sv=2018-03-28&sr=s&sig=X5lu4wbZGuOggVERMHuasvDVHPayoxFj9muJ9L%2FWsPM%3E

Query string: ?st=2021-03-07T15%3A02%3A29Z&se=2021-03-08E15%3A04%3A29Z&sp=rl&sv=2018-03-28&sr=s&sig=X5lu4mbEGuOugVJRMHutsvDVHPayoxFj9muJ9L%2FWsPM%3D

image
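A share-level SAS can also be generated from PowerShell rather than Storage Explorer. The following sketch assumes the storage account and share names from this post; the key is a placeholder:

```
# Sketch only – generates a read/list SAS token for the "test" share;
# the storage account key is a placeholder
$ctx = New-AzStorageContext -StorageAccountName "steastusserviceendpoint" `
    -StorageAccountKey "<storageAccountKey>"
$sas = New-AzStorageShareSASToken -ShareName "test" -Permission "rl" `
    -ExpiryTime (Get-Date).AddDays(1) -Context $ctx
# Append the token to the share URL to form the URI
"https://steastusserviceendpoint.file.core.windows.net/test$sas"
```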

Use the strings to connect to Azure Files in the Azure Storage Explorer:

imageimage

image

Once connected to the Azure File Shares with Azure Storage Explorer, you’ll be able to create new folders, upload/download files and perform other folder related operations.

image

Access policies on the share can also be configured:

image

image

Accessing Azure Files by mounting the folder as a drive in Windows, Linux or Mac OS

With the Azure Files file share setup, access to it can be provided to Windows, Linux or Mac OS by clicking on the Connect button to bring up the commands to mount the drive:

image

Linux and Macs:

imageimage

The following demonstrates what using the PowerShell to mount the drive in Windows looks like:

image

**Note that, just as with all Windows mapped drives, SMB over port 445 is used for communication, and this port is usually blocked by ISPs, so the mapping is not likely to work from a remote computer coming in over the internet with no VPN into Azure.

The PowerShell cmdlet used to map the drive performs the following:

  • Test the connection to the storage account via port 445
  • Assuming connection succeeds, it will save the password to the storage account
  • Map the drive as the letter defined in the Azure portal and set it to be persistent

If this drive ever needs to be removed, use Remove-PSDrive to remove it.
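The steps performed by the portal-generated script can be sketched as follows, assuming the storage account and share names used in this post and drive letter Z:

```
# Sketch of what the portal-generated mapping script does; the storage
# account key is a placeholder
$account = "steastusserviceendpoint"
$conn = Test-NetConnection -ComputerName "$account.file.core.windows.net" -Port 445
if ($conn.TcpTestSucceeded) {
    # Persist the credentials so the mapping survives reboots
    cmd.exe /C "cmdkey /add:`"$account.file.core.windows.net`" /user:`"Azure\$account`" /pass:`"<storageAccountKey>`""
    # Map the drive and make it persistent
    New-PSDrive -Name Z -PSProvider FileSystem `
        -Root "\\$account.file.core.windows.net\test" -Persist
} else {
    Write-Error "Unable to reach the storage account via port 445"
}
```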

image

The drive should now be mapped as Z:

image

Another way to map the drive without using a PowerShell script is to simply use the drive mapping feature directly from Windows Explorer. Before attempting to map the drive, you’ll need to retrieve the path and the credentials for connecting to the drive. Begin by using the Azure portal and navigate into Properties of the File share, then copy the URL without the https:// as shown in the screenshot below:

steastusserviceendpoint.file.core.windows.net/test

image

Navigate to the Access keys of the storage account and copy key1 or key2 as it will be used as the password:

image

Proceed to use the Windows desktop or server to map a network drive and use the following parameters:

Folder: \\steastusserviceendpoint.file.core.windows.net\test (note that I changed the “/” before test to “\” and added “\\” to the beginning)

Use the following for authentication:

Username: Azure\<storageAccountName>

Password: Key1 or Key2

image

image

image

Azure Files Snapshots

Snapshots for the file share are also available, but note that they behave more like Volume Shadow Copies (VSS) on a Windows Server, enabling the Previous Versions tab, than, say, a SAN or VM snapshot. You can create a snapshot by navigating into the File share, selecting the Snapshots operation, and then clicking on Add snapshot:

image

Provide a comment for the snapshot then click OK:

image

A snapshot will be created:

image

Now if you proceed to edit the Test.txt file in the Azure File share, a previous version will be made available:

image
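Snapshots can also be taken from PowerShell. The following is a sketch only; it assumes the storage account and share names from this post and an Az.Storage version where the returned share object exposes the underlying CloudFileShare:

```
# Sketch only – creates a share snapshot from PowerShell; the storage
# account key is a placeholder
$ctx = New-AzStorageContext -StorageAccountName "steastusserviceendpoint" `
    -StorageAccountKey "<storageAccountKey>"
$share = Get-AzStorageShare -Name "test" -Context $ctx
$snapshot = $share.CloudFileShare.Snapshot()
# The snapshot is identified by its timestamp
$snapshot.SnapshotTime
```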

Lock down Azure Files access with a service endpoint

You may want to tighten access depending on the sensitivity of the data stored in the Azure Files file share, and one of the ways to achieve this is with an Azure Service Endpoint or Private Endpoint. I won’t go into depth on either of them as I want to write a separate blog post on the topic, so what I’ll do is provide a brief overview of how we can secure access with a service endpoint.

Begin by navigating to the Networking configuration for the storage account and change Allow access from the configuration All networks to Selected networks:

image

There are multiple options for limiting access to this storage account but for the purpose of this example, I will be adding one subnet from a production VNet in the environment:

image

With the above configuration set, only the selected subnet will be able to access the storage account and the Azure Files file share.
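The portal steps above have a PowerShell equivalent. A sketch, where the VNet, subnet and resource group names are placeholders and the subnet is assumed to already have the Microsoft.Storage service endpoint enabled:

```
# Sketch only – deny all networks by default, then allow one subnet;
# VNet, subnet and resource group names are placeholders
$subnet = Get-AzVirtualNetwork -ResourceGroupName "rg-prod-network" -Name "vnet-prod" |
    Get-AzVirtualNetworkSubnetConfig -Name "snet-servers"

Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $ResourceGroupName `
    -Name $StorageAccountName -DefaultAction Deny
Add-AzStorageAccountNetworkRule -ResourceGroupName $ResourceGroupName `
    -Name $StorageAccountName -VirtualNetworkResourceId $subnet.Id
```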