tag:blogger.com,1999:blog-22289479456095744372024-03-17T22:59:27.531-04:00Terence LukTackling the daily challenges of technology... one project at a time.Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.comBlogger1431125tag:blogger.com,1999:blog-2228947945609574437.post-56130934221060787362024-01-29T05:20:00.001-05:002024-01-29T05:20:21.063-05:00Updating aztfexport generated "res-#" resource names with PowerShell scripts<p>Happy new year! It has been an extremely busy start to 2024 for me with the projects I’ve been involved in, so I’ve fallen behind on a few of the blog posts I have queued up since November of last year. While I still haven’t gotten to the backlog yet, I would like to quickly write this one as it was a challenge I came across while testing the aztfexport (Azure Export for Terraform) tool to export a set of Azure Firewall, VPN Gateway, and VNet resources in an environment. The following is the Microsoft documentation for this tool:</p> <p><b>Quickstart: Export your first resources using Azure Export for Terraform <br /></b><a href="https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-first-resources?tabs=azure-cli">https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-first-resources?tabs=azure-cli</a></p> <p>Those who have worked with this tool will know that the files it creates name the resources identified for import generically as:</p> <ul> <li>res-0</li> <li>res-1</li> <li>res-2</li> </ul> <a href="https://drive.google.com/uc?id=1JCvIhbtnb3OcbhLnZm0g609hmGgRb7rl"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1LYzVNAJ8GYKflpKfBLOO8-MKZ7KDLX_c" width="242" height="244" /></a> <p>… and so on. 
These references are used across multiple files: </p> <ul> <li>aztfexportResourceMapping.json </li> <li>import.tf </li> <li>main.tf </li> </ul> <a href="https://drive.google.com/uc?id=1b8xyH58i5E_8BSA22iwYYmNemyiwFvhC"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=19UrdUUP9mY0kJZxa2fk__-FojEoPGiyY" width="230" height="197" /></a> <p>While the generated files with these default names will work, they make it very difficult to identify what these resources are. One of the options available is to go and manually update these files with search and replace, but anything over 20 resources can quickly become tedious and error prone.</p> <p>With this challenge, I decided to create 2 PowerShell scripts to automate the process of searching and replacing the names of res-0, res-1, res-2 and so on. The first script will parse the import.tf file:</p> <a href="https://drive.google.com/uc?id=1o7ofHwSeVSZvud63YSGDII0fXM33xWHX"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1fRlkwL6Z0VZi-woZ53aVeBLWeCFGXNO9" width="244" height="75" /></a> <p>… and extract the fields “id” and “to” into 2 columns, then create an additional 2 columns, one containing the “res-#” value and the next containing the name of the resource in Azure, and write the result to a CSV:</p> <a href="https://drive.google.com/uc?id=14zzjYR-fpjtQwRnHA1cECEU9LpEka5Qm"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1QN5f_59iccAVaCt_CMiWN6q6yc9D-NPQ" width="244" height="170" /></a> <p>If the desire is to use the Azure names as the resource names, then no changes are required. 
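</p> <p>For anyone curious about what the first script does under the hood, the extraction step can be sketched as follows. This is a rough Python equivalent of the PowerShell logic rather than the actual script; the import.tf block layout and the CSV column names are assumptions based on the screenshots above:</p>

```python
import csv
import re

def extract_import_blocks(import_tf_text):
    """Pair each aztfexport "to" address with its Azure resource "id"."""
    rows = []
    for block in re.finditer(r"import\s*\{(.*?)\}", import_tf_text, re.DOTALL):
        body = block.group(1)
        id_match = re.search(r'id\s*=\s*"([^"]+)"', body)
        to_match = re.search(r"to\s*=\s*([\w.\-]+)", body)
        if not (id_match and to_match):
            continue
        azure_id = id_match.group(1)
        to_address = to_match.group(1)  # e.g. azurerm_virtual_network.res-0
        rows.append({
            "Id": azure_id,
            "To": to_address,
            # The generated "res-#" name is the last segment of the "to" address
            "Resource Logical Name": to_address.split(".")[-1],
            # The Azure name is the trailing segment of the resource ID
            "Azure Resource Logical Name": azure_id.rstrip("/").split("/")[-1],
        })
    return rows

def write_csv(rows, path):
    """Write the extracted rows out so the names can be reviewed and edited."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

<p>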
If alternate names are desired, then update the values for the <b>Azure Resource Logical Name</b> in the spreadsheet.</p> <p>The second script will then reference this spreadsheet to search through the directory with the Terraform files and update the <b>res-#</b> values to the desired names.</p> <p>The two scripts can be found here in my GitHub repo: </p> <p><b>Create the CSV file from Import.tf - Extract-import-tf-file.ps1 <br /></b><a href="https://github.com/terenceluk/Azure/blob/main/PowerShell/Extract-import-tf-file.ps1">https://github.com/terenceluk/Azure/blob/main/PowerShell/Extract-import-tf-file.ps1</a></p> <p><b>Replace all references to res-# with desired values - Replace-Text-with-CSV-Reference.ps1 <br /></b><a href="https://github.com/terenceluk/Azure/blob/main/PowerShell/Replace-Text-with-CSV-Reference.ps1">https://github.com/terenceluk/Azure/blob/main/PowerShell/Replace-Text-with-CSV-Reference.ps1</a></p> <p>I hope this helps anyone who may be looking for this automated way to update exported Terraform code.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-26749147359764441322023-11-29T14:27:00.001-05:002023-11-29T14:40:05.671-05:00Python script that will asynchronously receive events from an Azure Event Hub and send it to a Log Analytics Workspace custom table<p>One of the key items I’ve been working on over the past week as a follow-up to my previous post: </p> <p><b>How to log the identity of a user using an Azure OpenAI service with API Management logging (Part 1 of 2) <br /></b><a href="https://terenceluk.blogspot.com/2023/11/how-to-log-identity-of-user-using-azure.html">https://terenceluk.blogspot.com/2023/11/how-to-log-identity-of-user-using-azure.html</a></p> <p>… is to write a Python script that will read events as they arrive in an Event Hub, then send them over to a Log Analytics Workspace’s custom table for logging. 
The topology is as follows:</p> <a href="https://drive.google.com/uc?id=1ba9qraPmltyDe8nmV4v2-JAM0pleKOm-"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1pdtuLQ2q2v5UfA7AUjSo3uTxgpOeHlUA" width="644" height="229" /></a> <p>The main reason I decided to go with this method is that the following tutorial:</p> <p><b>Tutorial: Ingest events from Azure Event Hubs into Azure Monitor Logs (Public Preview) <br /></b><a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/ingest-logs-event-hub">https://learn.microsoft.com/en-us/azure/azure-monitor/logs/ingest-logs-event-hub</a></p> <p>… required the Log Analytics workspace to be <b>linked to a dedicated cluster</b> or to have a <b>commitment tier</b>. The lowest price for such a configuration would be cost-prohibitive for me to deploy in a lab environment, so I decided to build this simple ingestion method.</p> <p><strong>Log Analytics Pricing Tiers:</strong></p> <a href="https://drive.google.com/uc?id=14iQDVRj6Z6IJcxUgaHxk5a5D4kvM126C"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1zhcUUb9oH8bFzmpNzkIvYrRydLHUG9pT" width="244" height="233" /></a> <p>I used the various documentation available to create the script, create the App Registration, and configure the Data Collection Endpoint and Data Collection Rule for the Log Analytics ingestion. 
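</p> <p>Under the hood, the ingestion side of the script is just an authenticated POST of a <b>JSON array</b> of records to the Data Collection Endpoint. The following standard-library-only Python sketch shows the token request and the request that gets built; the endpoint, DCR ID, stream name, and parameter names here are placeholder assumptions for illustration, not values from my environment:</p>

```python
import json
import urllib.parse
import urllib.request

def get_token(tenant_id, client_id, client_secret):
    """Client credentials flow against Entra ID for the Azure Monitor scope."""
    form = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://monitor.azure.com/.default",
    }).encode()
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    with urllib.request.urlopen(urllib.request.Request(url, data=form)) as resp:
        return json.load(resp)["access_token"]

def build_ingestion_request(dce_uri, dcr_immutable_id, stream_name, records, token):
    """Build the Logs Ingestion API POST; the body must be a JSON array."""
    url = (f"{dce_uri}/dataCollectionRules/{dcr_immutable_id}"
           f"/streams/{stream_name}?api-version=2021-11-01-preview")
    body = json.dumps(list(records)).encode()  # always an array, even for one record
    return urllib.request.Request(url, data=body, headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    })

# To actually send: urllib.request.urlopen(build_ingestion_request(...))
```

<p>My actual script uses the Azure SDK’s asynchronous Event Hub consumer rather than raw HTTP, but the payload it ultimately submits has this same shape.</p> <p>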
Here are a few for reference:</p> <p><b>Send events to or receive events from event hubs by using Python <br /></b><a href="https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-python-get-started-send?tabs=passwordless%2Croles-azure-portal">https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-python-get-started-send?tabs=passwordless%2Croles-azure-portal</a></p> <p><b>Logs Ingestion API in Azure Monitor <br /></b><a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview">https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview</a><b></b></p> <p><b>Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal) <br /></b><a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/tutorial-logs-ingestion-portal">https://learn.microsoft.com/en-us/azure/azure-monitor/logs/tutorial-logs-ingestion-portal</a></p> <p>The script can be found at my <strong>GitHub</strong> repository here: <a href="https://github.com/terenceluk/Azure/blob/main/Event%20Hub/Python/Receive-from-Event-Hub-with-checkpoint-store-async.py">https://github.com/terenceluk/Azure/blob/main/Event%20Hub/Python/Receive-from-Event-Hub-with-checkpoint-store-async.py</a></p> <p>The following are some screenshots of the execution and output:</p> <p><b>OpenAI API Call from Postman to API Management:</b></p> <a href="https://drive.google.com/uc?id=1GH6lX2P8_FSzb-jLUv_6AWyUBH4txtmh"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=163Nu5wiOCJFuA49CBHM1vysgIyRgSAF-" width="244" height="149" /></a> <p><b>Script Execution and Output:</b></p> <a href="https://drive.google.com/uc?id=1EWAmtxyFzpKLzq4XQ08oWsrCn-BKGXdI"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ekMVKfHJAW0Yxf103ZES01-sess24MUW" width="244" height="163" /></a> <p><b>Log Analytics 
Ingestion Results:</b></p> <a href="https://drive.google.com/uc?id=13q9q-KpsmXaxvt21bsXhxqlUH6UMpRMc"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1TxWALQ6t3mR0EugIFQwzSfRE-OmR2j9n" width="244" height="119" /></a> <p>I hope this helps anyone who might be looking for a script for processing events and ingesting them into Log Analytics, as it took me quite a bit of time on and off to troubleshoot the various issues I encountered. With this script out of the way, I am now prepared to finish up the part 2 of 2 post for an end-to-end OpenAI logging solution, which I will be writing shortly.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-65989660538362268792023-11-26T10:32:00.001-05:002023-11-27T05:54:04.169-05:00"204 No Content" returned in Postman when attempting to write logs to a data collection endpoint with a data collection rule for Log Analytics custom log ingestion<p>I’ve been working on my Part 2 of 2 post to demonstrate how we can use <strong>Event Hubs</strong> to capture the identity of incoming API access for the Azure OpenAI service published by API Management and, while doing so, noticed an odd behavior when attempting to use the Log Ingestion API as outlined here:</p> <p><b>Logs Ingestion API in Azure Monitor <br /></b><a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview">https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview</a></p> <a href="https://drive.google.com/uc?id=1Vop5cBjEPLdJtXd2Y-VFdwY6JfeqzicZ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1j_nFFvu82zDSLo-Ic7rUQB2BAFh8lNHi" width="644" height="277" /></a>  <p>I configured all of the required components and wanted to test with Postman before updating the Python script I 
had for ingesting <strong>Event Hub</strong> logs, but noticed that I would constantly get a <b>204 No Content</b> status returned with no entries added to the <strong>Log Analytics</strong> table I had set up. To make a long story short, the issue was that the JSON body I was submitting was not enclosed in square brackets [], and further tests showed that the same <b>204 No Content</b> would be returned regardless of whether the accepted format (with square brackets) was submitted or not.</p> <p>The following is a demonstration of this in Postman.</p> <p>The variables I have defined in Postman are:</p> <ul> <li>Data_Collection_Endpoint_URI</li> <li>DCR_Immutable_ID</li> <li>client_id_Log_Analytics</li> <li>client_secret_Log_Analytics</li> </ul> <p><a href="https://drive.google.com/uc?id=1Z8jJ9bs-fTGl15In7OXtJCBrK6-pfw55"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Ptv1tvgj2kALLDNP63c6UWHfzgscWI6F" width="644" height="73" /></a></p> <p>The following is where each value can be retrieved:</p> <p>The <b>Data_Collection_Endpoint_URI</b> can be retrieved by navigating to the <b>Data collection endpoint</b> you had set up:</p> <a href="https://drive.google.com/uc?id=1S_bPkL3rYd_LrdxuaeoyolhwOa2cOEec"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1tdYD7HtHyhQhO67yQAdX4D8JEC5ZAjzl" width="244" height="109" /></a> <p>The <b>DCR_Immutable_ID</b> can be retrieved in the JSON view of the <b>Data collection rule</b> that was set up:</p> <p><a href="https://drive.google.com/uc?id=1NuAKdRNetLOve9CRQnyg6zPDliW_AFYZ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1-wZ8UvlH3USdwHs4WPaLsOdxuNXFAGKQ" width="244" height="70" /></a></p> <p>The <b>client_id_Log_Analytics</b> is located in the <b>App Registration</b> object:</p> <a 
href="https://drive.google.com/uc?id=152Uo5u8vdNt1ShPXIAzaqb0yw2KA4ubD"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1EKDYb0uALXWpkTu3rr4b0onmhp0wUDd9" width="244" height="49" /></a> <p>The <b>client_secret_Log_Analytics</b> is the secret set up for the <b>App Registration</b>: </p> <a href="https://drive.google.com/uc?id=1SQr0HkDxEvN2zYotV2LQZcVFjq10ZhCx"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1fQm-GPmhRkt_VtLjLTXNxHtX-GTNtlSY" width="244" height="97" /></a> <p>You’ll also need your tenant ID for the <b>tenant_id</b> variable.</p> <p>Set up the authorization tab in Postman with the following configuration:</p> <p><b>Type</b>: OAuth 2.0</p> <p><b>Add authorization data to</b>: Request Headers</p> <p><b>Token</b>: Available Tokens</p> <p><b>Header</b> <b>Prefix</b>: Bearer</p> <p><b>Token</b> <b>Name</b>: &lt;Name of preference&gt;</p> <p><b>Grant</b> <b>type</b>: Client Credentials</p> <p><b>Access</b> <b>Token</b> <b>URL</b>: <a href="https://login.microsoftonline.com/%7b%7btenant_id%7d%7d/oauth2/v2.0/token">https://login.microsoftonline.com/{{tenant_id}}/oauth2/v2.0/token</a></p> <p><b>Client</b> <b>ID</b>: {{client_id_Log_Analytics}}</p> <p><b>Client</b> <b>Secret</b>: {{client_secret_Log_Analytics}}</p> <p><b>Scope:</b> https://monitor.azure.com/.default</p> <p><b>Client</b> <b>Authentication</b>: Send as Basic Auth header</p> <p>Leave the rest as default and click on <b>Get New Access Token</b>:</p> <a href="https://drive.google.com/uc?id=1r8Jj9pLRVr4Yon49Z6vDXl6vTPGMdUIR"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1oKXhj2tX69saLitYOIcaI5rt-HpU9wDA" width="244" height="146" /></a><a href="https://drive.google.com/uc?id=10gI_jstNSTS6EgxvNUtHvVrb_FOAItfi"><img title="image" style="display: inline; background-image: none;" 
border="0" alt="image" src="https://drive.google.com/uc?id=1pQhihrNoMPL8lhPO6wCKMV4m9RFP1NWE" width="209" height="244" /></a> <p>The token should be successfully retrieved:</p> <a href="https://drive.google.com/uc?id=1lgZKwFKlX5WJBynUt7qfjmVo-o57ZAv1"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_VjLKZm3ETiJkAk314Rg9-Ra2A_ethr3" width="244" height="192" /></a> <p>Click on <b>Use Token</b>:</p> <a href="https://drive.google.com/uc?id=1iiVii2eBTlhjRjNMqPRpEnzvDNm-DjGZ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_tMG5f1_F23JOw0xe4jwXdTyToyNrgvu" width="244" height="209" /></a> <p>Configure a POST request with the following URL: </p> <p><b>https://{{Data_Collection_Endpoint_URI}}/dataCollectionRules/{{DCR_Immutable_ID}}/streams/<font style="background-color: rgb(255, 255, 0);">Custom-APIMOpenAILogs_CL</font>?api-version=2021-11-01-preview</b></p> <p>The <b>Custom-APIMOpenAILogs_CL</b> value can be retrieved in the <b>JSON View</b> of the <b>Data collection rule</b>:</p> <a href="https://drive.google.com/uc?id=19NYTEUik7b9_1mW8r3R3rYbfdOtMPgeI"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1lTtU-TjUeq0oX7G1F8ijYUUnm8Fwx1U6" width="244" height="74" /></a> <p>Proceed to configure the following for the <b>Params</b> tab:</p> <p><b>api-version</b>: 2021-11-01-preview</p> <a href="https://drive.google.com/uc?id=1vBrvVaRfkW4cIxroQf2ta9n3qvDw7C13"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1r0YV9qyqHff1tsFxgGynJwd91j8ZGvBT" width="244" height="50" /></a> <p>The Authorization key should be filled out with the token that was retrieved.</p> <p>Set the <b>Content-Type </b>to <b>application/json</b>.</p> <a 
href="https://drive.google.com/uc?id=1EAEIlZDjbFP_UqxlvcZuQUvYHFnEJQS6"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1zIwdYnylpMV44lGZI-S5GqhQoMvilExd" width="244" height="53" /></a> <p>For the body, let’s test with the <b>JSON</b> content <b>WITHOUT</b> the square brackets:</p> <p><strong>{</strong></p> <p><strong>"EventTime": "11/24/2023 8:19:57 PM",</strong></p> <p><strong>"ServiceName": "dev-openai-apim.azure-api.net",</strong></p> <p><strong>"RequestId": "91ff7b54-a0eb-4ada-8d27-6081f71e44a3",</strong></p> <p><strong>"RequestIp": "74.114.240.15",</strong></p> <p><strong>"OperationName": "Creates a completion for the chat message",</strong></p> <p><strong>"apikey": "6f82e8f56e604e6cae6e0999e6bdc013",</strong></p> <p><strong>"requestbody": {</strong></p> <p><strong>"messages": [</strong></p> <p><strong>            {</strong></p> <p><strong>"role": "user",</strong></p> <p><strong>"content": "Testing without brackets."</strong></p> <p><strong>            }</strong></p> <p><strong>        ],</strong></p> <p><strong>"temperature": 0.7,</strong></p> <p><strong>"top_p": 0.95,</strong></p> <p><strong>"frequency_penalty": 0,</strong></p> <p><strong>"presence_penalty": 0,</strong></p> <p><strong>"max_tokens": 800,</strong></p> <p><strong>"stop": null</strong></p> <p><strong>    },</strong></p> <p><strong>"JWTToken": "bearer 
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IlQxU3QtZExUdnlXUmd4Ql82NzZ1OGtyWFMtSSIsImtpZCI6IlQxU3QtZExUdnlXUmd4Ql82NzZ1OGtyWFMtSSJ9.eyJhdWQiOiJhcGk6Ly8xMmJjY2MyNi1iNzc4LTRhMmQtYWU3YS00ZjU3MzJlN2E3OWQiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC84NGY0NDcwYi0zZjFlLTQ4ODktOWY5NS1hYjBmNTE0MzAyNGYvIiwiaWF0IjoxNzAwODU2MzQ4LCJuYmYiOjE3MDA4NTYzNDgsImV4cCI6MTcwMDg2MTMyNSwiYWNyIjoiMSIsImFpbyI6IkFUUUF5LzhWQUFBQUN5NDZNdUg4VG0yWTF3VDkvazZWVjFzcU9oUWZaOFU5N0ExcWRyT0FMYThGcVVsTEhRclN2OVlwNU5hUE94QnMiLCJhbXIiOlsicHdkIl0sImFwcGlkIjoiMTJiY2NjMjYtYjc3OC00YTJkLWFlN2EtNGY1NzMyZTdhNzlkIiwiYXBwaWRhY3IiOiIxIiwiZmFtaWx5X25hbWUiOiJUdXpvIiwiZ2l2ZW5fbmFtZSI6Ilpha2lhIiwiaXBhZGRyIjoiNzQuMTE0LjI0MC4xNSIsIm5hbWUiOiJaYWtpYSBUdXpvIiwib2lkIjoiZWUxMTZkNTktZDQ5Yi00NTU3LWIyYWItYzkxMWY0NTFkNWM4Iiwib25wcmVtX3NpZCI6IlMtMS01LTIxLTIwNTcxOTExOTEtMTA1MDU2ODczNi01MjY2NjAyNjMtMTg0MDAiLCJyaCI6IjAuQVZFQUMwZjBoQjRfaVVpZmxhc1BVVU1DVHliTXZCSjR0eTFLcm5wUFZ6TG5wNTFSQUU0LiIsInJvbGVddeeQSU0uQWNjZXNzIl0sInNjcCI6IkFQSS5BY2Nlc3MiLCJzdWIiOiJKR3JLbXB4NjVDOGNqRGxUVXBDZFZKaHFoSmtkelJ6b3lJZURENWRMNUhRIiwidGlkIjoiODRmNDQ3MGItM2YxZS00ODg5LTlmOTUtYWIwZjUxNDMwMjRmIiwidW5pcXVlX25hbWUiOiJaVHV6b0BibWEuYm0iLCJ1cG4iOiJaVHV6b0BibWEuYm0iLCJ1dGkiOiJRRWx2U05CX29rUzFLZnV0NTVFNUFBIiwidmVyIjoiMS4wIn0.a__8D9kLedJi48Q9QuEPWUjhqVWJeTZVXkDIcV-gQ5DYCjU7SjwDQWGc1dsYZ_nD0SH4id-PGiTa3RaZo_y5jrtJs_UoW3L8KmViKF1llqaK5XRw7fbGtdPJsFcDXfcWd-hLlWIorjSZ6MdS4beRx4mPTOfeomFWL6e2ExMBzELe_1MzJaUtbYkfZlhoOQu1TUaIoOM5Qs5PpFO1oO-ihcKu3Vl-aY_rmItB1fzRXIip-LQqUVmOwBjOWrzSVkYWRFGnsO1jZNWp0GJKqzVJJFCqNBgZf4BfjN0vvIXRhsR5dGJqd1AAS8VsczZOSBV2uutixNnjJ3jVIZIOa31wzg",</strong></p> <p><strong>"AppId": "12bccc26-b778-4a2d-bb7a-4f5732e7a79d",</strong></p> <p><strong>"Oid": "ee116d59-d49b-4557-b2ab-c911f451d5c8",</strong></p> <p><strong>"Name": "Terence Luk"</strong></p> <p><strong>    }</strong></p> <a href="https://drive.google.com/uc?id=1_utaCdj5YWo6GI3YV0Rck10_daxGigd-"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" 
src="https://drive.google.com/uc?id=1ddZH_yjIaRNTODFybCl_3Ax2jw2O4HP8" width="244" height="135" /></a> <p>Notice the returned 204 status:</p> <p><b>204 No Content</b></p> <p><b>The server successfully processed the request, but is not returning any content.</b></p> <p>No matter how long you wait, the log entry is never written to Log Analytics.</p> <a href="https://drive.google.com/uc?id=1wow13yvUadUhRlDE96abA8uFYkKzWvEa"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Te8lmjck4F011Hm4nobiD8ZjH0GJHMIU" width="244" height="135" /></a> <p>Now <b>WITH </b>square brackets:</p> <a href="https://drive.google.com/uc?id=1weW0uDSNHX5E6Y7HW7NbRigbGR1k8iAd"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=11itHdnfEsVmDqNoSUamionbi_vAyzDgF" width="244" height="108" /></a> <p>Notice the same <b>204</b> status is returned:</p> <a href="https://drive.google.com/uc?id=1VLQOvoHWbAbbxp3EBN8aNBgY1u0nWuKN"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1jGnNUilQdDU_uH701ylRiR-oKWrQRH5g" width="244" height="153" /></a> <p>However, using the square brackets shows that the log entry is successfully written:</p> <a href="https://drive.google.com/uc?id=1Zg25TL2obmTyrPRu1AqhLgjhLJ0_MwRU"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1UJDT3tE7ixAvR_pz1mtw2VAcEAtWgziM" width="244" height="128" /></a> <p>The GitHub and forum posts I found indicate this appears to be the expected behavior: a <b>204</b> is returned either way, and the entry will only be written as long as the square brackets are included.</p> <p>I will be including the instructions on setting up the App Registration, Data Collection Endpoint, Data Collection Rule, and other components in my part 2 of 2 post for logging the identity of an 
OpenAI call through the API Management.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-72327732038531897882023-11-17T08:14:00.001-05:002023-11-17T08:21:44.356-05:00How to log the identity of a user using an Azure OpenAI service with API Management logging (Part 1 of 2)<p>The single question I’ve been asked the most over the past few months by colleagues, clients, and other IT professionals is how can we identify exactly who is using the <strong>Azure OpenAI</strong> service so we can generate accurate consumption reports and allow proper chargeback to a department? Those who have worked with the diagnostic settings for <strong>Azure OpenAI</strong> and <strong>API Management</strong> will know that logging is available, but there are gaps that desperately need to be addressed. A quick search over the internet will show that <strong>API Management</strong> can log the <strong>caller’s IP address</strong>, but that isn’t very useful for obvious reasons such as:</p> <ol> <li>If it’s public traffic with a public inbound IP address, how would we be able to tell who the user is? 
</li> <li>Even if we can tie a public IP address to an organization because that’s the outbound NAT, the identity of the user is not captured </li> <li>Even if we authenticate the user so a JWT token is provided to call the API, having the public IP address in the logs alone wouldn’t identify the user </li> <li>If these were private IP addresses, it would be a nightmare to try and match the inbound IP address with an internal workstation’s IP address that is likely DHCP </li> </ol> <p>I believe the first time I was asked this question was 3 months ago, and I had always thought that Microsoft would likely address this soon with a checkbox in the diagnostic settings or some other easy-to-configure offering, but fast forward to today (November 2023), I haven’t seen a solution, so I thought I’d do a bit of R&D over the weekend.</p> <p>The closest solution I was able to find is from this DevRadio presentation:</p> <p><b>Azure OpenAI scalability using API Management <br /></b><a href="https://www.youtube.com/watch?v=mdRu3GJm3zE&t=1s">https://www.youtube.com/watch?v=mdRu3GJm3zE&t=1s</a></p> <p>… where the presenter used multiple instances of <strong>Azure OpenAI</strong> to separate prompts to the <strong>OpenAI</strong> service belonging to different business units. While this solution allowed costs to be separated between predefined business units, the thought of telling a client that I need multiple instances to serve this purpose didn’t seem like something they would be receptive to. While the <strong>DevRadio</strong> solution did not meet the requirements I had, it did give me the idea that perhaps I could use the <b>logging of events to event hubs </b>feature of the <strong>Azure API Management</strong> to accomplish what I wanted in the solution. 
</p> <p>I have to say that this blog post is probably one of the most exciting ones I’ve written in a while because I was heads-down focused on learning and testing the <strong>Azure API Management</strong> inbound processing capabilities over 3 days of my vacation time and felt extremely fulfilled that I now have an answer to a question I could not solve for months. </p> <p>If you’re still reading this, you might be wondering why there is the label <b>Part 1 of 2</b>, and the reason is that I ran out of time and have gotten back to a busy work schedule, so I could not finish the last portion of this solution. Don’t worry, though, as what I will cover in Part 1 will at least capture the information needed to identify the calling user. Here is a summary of what I am able to cover in this blog post:</p> <ol> <li>How to set up API Management to log events to Event Hub </li> <li>What inbound processing code should be inserted to send the OAuth JWT token to event hub </li> <li>What inbound processing code can be used to extract any values in the JWT token to event hub </li> <li>How to view the logged entries in event hub </li> </ol> <p>The following is what I will cover in <b>Part 2</b> in a future post:</p> <ol> <li>How to ingest events from <strong>Azure Event Hubs</strong> into <strong>Azure Monitor Logs</strong> </li> <li>How to use KQL to join events logged by API Management’s diagnostic settings (containing token usage, prompt information) with <strong>Azure Event Hub</strong> ingested logs (containing user identification) </li> </ol> <p>The following is a high-level architecture diagram and the flow of the traffic:</p> <p><a href="https://drive.google.com/uc?id=1c1Z_i5U9cUT44OCVkK5vwuSz65jMkl9R"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1AXN8zf6e60BeYJC5L_DGHXaQ-4ZTfuLH" width="420" height="484" /></a></p> <p>I’m excited to get this post published, so let’s get started.</p> 
<p><b><u><font size="5">Prerequisites</font></u></b></p> <p>This solution will require us to place an <strong>Azure API Management</strong> service in front of the <strong>Azure OpenAI</strong> service so API calls are:</p> <ol> <li>Logged by the APIM </li> <li>Authorized with OAuth by the API Management </li> </ol> <p>Please refer to my previous post for how to set this up:</p> <p><strong>Securing Azure OpenAI with API Management to only allow access for specified Azure AD users <br /></strong><a href="https://terenceluk.blogspot.com/2023/11/securing-azure-openai-with-api.html">https://terenceluk.blogspot.com/2023/11/securing-azure-openai-with-api.html</a></p> <p><b><u><font size="5">What is available today out-of-the-box: API Management Diagnostic Settings Logging Capabilities</font></u></b></p> <p>Assuming you have configured the API Management service as I demonstrated in my prerequisite section and Diagnostics Logging is turned on:</p> <a href="https://drive.google.com/uc?id=1q7RUAmQlwvbd-amH58cPwHv-U8vIpCpx"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1DFJ1Nmx3Upq8_CN1yS0rfKtCEKF0ScEM" width="244" height="125" /></a><a href="https://drive.google.com/uc?id=1XsWf8zhZdPV11n1mEUJKbi7U7GaC1RVC"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ChClVPi4uQh5G8S4piCE0lLVyoNv2JMs" width="244" height="121" /></a> <p>… then a set of information for each API call would be logged in the configured <b>Log Analytics</b>. Let’s first review what is available out-of-the-box for the <strong>API Management</strong>. 
The complaint I hear repeatedly is that while the logs captured by the <strong>API Management</strong> provide all the following great information:</p> <ul> <li>TenantId </li> <li>TimeGenerated [UTC] </li> <li>OperationName </li> <li>CorrelationId </li> <li>Region </li> <li>isRequestSuccess </li> <li>Category </li> <li>TotalTime </li> <li>CallerIpAddress </li> <li>Method </li> <li>Url </li> <li>ClientProtocol </li> <li>ResponseCode </li> <li>BackendMethod </li> <li>BackendUrl </li> <li>BackendResponseCode </li> <li>BackendProtocol </li> <li>RequestSize </li> <li>ResponseSize </li> <li>Cache </li> <li>BackendTime </li> <li>ApiId </li> <li>OperationId </li> <li>ApimSubscriptionId </li> <li>ApiRevision </li> <li>ClientTlsVersion </li> <li>RequestBody </li> <li>ResponseBody </li> <li>BackendRequestBody </li> </ul> <a href="https://drive.google.com/uc?id=18e_r6THW8azN5jMIzwlboZUN00jZd9fe"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1NKRH-lXkNnD4tlPd4O0Javb_ceK4ERe_" width="244" height="119" /></a><a href="https://drive.google.com/uc?id=1ZF4Stkr3Kbw16KTlf8_WSZ293hE_YQK8"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1xMKqgM_FqaNudCinEWxIxjCieGn8mMFO" width="244" height="118" /></a><a href="https://drive.google.com/uc?id=1FpttG7Xo86ySuqrWd-Sbw9otRo3YNsfm"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1eVJF-9_gYhrDzaHHAU3r18MWKBwfe5ur" width="244" height="118" /></a> <p>None of these captured fields allow for identifying the caller. 
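</p> <p>The identity we’re after does arrive with every call, though: it is sitting in the claims of the bearer token the caller presents. As a quick illustration of what is recoverable, here is a hedged Python sketch that only decodes the token payload for inspection; it performs no signature validation, and the <b>appid</b>/<b>oid</b>/<b>name</b> claim names are the standard Microsoft Entra ID ones rather than anything specific to my environment:</p>

```python
import base64
import json

def jwt_claims(token):
    """Decode a JWT's payload segment (no signature check) to inspect its claims."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the base64url padding JWTs strip
    return json.loads(base64.urlsafe_b64decode(payload))

def caller_identity(token):
    """Pull the claims that identify the calling app and user."""
    claims = jwt_claims(token)
    return {
        "AppId": claims.get("appid"),  # the client application ID
        "Oid": claims.get("oid"),      # the user's object ID in Entra ID
        "Name": claims.get("name"),    # the user's display name
    }
```

<p>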
To address this gap, we can leverage the <b>log-to-eventhub</b> inbound processing feature of <strong>API Management</strong> and <strong>Event Hubs</strong> to send additional information about the inbound API call to an event hub, then process it according to our requirements.</p> <p><b><u><font size="5">Turning on the logging of events for the API Management to Event Hubs</font></u></b></p> <p>The first step for this solution is to turn on the feature that has <strong>API Management</strong> log to an <strong>Event Hub</strong>. I won’t go into the usual detail I provide for setting up the components due to my limited time, but begin by creating an <strong>Event Hub Instance</strong> and <strong>Event Hub</strong> as shown in the following screenshots to serve as a destination for the APIM to send its logs:</p> <a href="https://drive.google.com/uc?id=1mQ7l0PB2qUMcOg2S82F_l_m3cF0r_JfR"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1eLm8J-3z_3d2LGK6IGp_Z4akAQ96s47T" width="244" height="114" /></a><a href="https://drive.google.com/uc?id=1dTxjZFQXXHl80XQqGXy0Vk-AaOdWtoXr"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1mq8BVX03ST63c5Mt_YZJjBLsuiDLVcvR" width="244" height="114" /></a><a href="https://drive.google.com/uc?id=1DLEpuWsgyckKzvUdLbUqeeiJR0vTZYCD"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1oBcQTPOlKTAVre6LYED4OzAbNTeS_oId" width="244" height="85" /></a> <p>Once the <strong>Event Hub Instance</strong> and <strong>Event Hub</strong> are created, and the <strong>API Management’s System Managed Identity</strong> is granted access, we will use the following instructions to turn on the feature in <strong>API Management</strong> and use the <strong>Event Hub</strong>:</p> <p><strong>Logging with Event Hub <br /></strong><a 
href="https://azure.github.io/apim-lab/apim-lab/6-analytics-monitoring/analytics-monitoring-6-3-event-hub.html">https://azure.github.io/apim-lab/apim-lab/6-analytics-monitoring/analytics-monitoring-6-3-event-hub.html</a></p> <p>More detail about how the API Management is configured is described here: </p> <p><strong>How to log events to Azure Event Hubs in Azure API Management <br /></strong><a href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-log-event-hubs?tabs=PowerShell">https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-log-event-hubs?tabs=PowerShell</a></p> <p><b><u><font size="5">Configuring API Management’s Inbound Processing rule to log the JWT token and its values</font></u></b></p> <p>The API Management <b>log-to-eventhub</b> policy can send any type of information to the <strong>Event Hub</strong>. For this post, I am going to demonstrate how to send the following information:</p> <ul> <li>EventTime </li> <li>ServiceName </li> <li>RequestId </li> <li>RequestIp </li> <li>OperationName </li> <li>api-key </li> <li>request-body </li> <li>JWTToken </li> <li>AppId </li> <li>Oid </li> <li>Name </li> </ul> <p>Let’s go through these fields in a bit more detail. The following list of fields:</p> <ul> <li>EventTime </li> <li>ServiceName </li> <li>RequestId </li> <li>RequestIp </li> <li>OperationName </li> <li>request-body </li> </ul> <p>… are ones that can be retrieved from the out-of-the-box diagnostic settings logs. I haven’t looked into all the available fields, but I suspect that we can send all the out-of-the-box diagnostic settings fields to the event hub to recreate what we have and potentially allow us to turn off the built-in logging. The advantage of such an approach is that all logs will be stored in a single Log Analytics workspace table. 
The disadvantage of such an approach is that if new fields are introduced into the built-in logs, then we would need to update our <b>log-to-eventhub</b> code to capture those fields.</p> <p>The other fields:</p> <ul> <li>api-key </li> <li>JWTToken </li> <li>AppId </li> <li>Oid </li> <li>Name </li> </ul> <p>… are the ones that we’re looking for. The <b>api-key</b> probably isn’t as important, but I wanted to include it to show that it can be captured. The <strong>JWT Token</strong> that was passed to the <strong>API Management</strong> is captured, and while it can be copied out and decoded with <a href="https://jwt.io/">https://jwt.io/</a>, it isn’t very useful if we’re trying to use <strong>KQL</strong> to generate reports. The remaining fields, <b>AppId</b>, <b>Oid</b>, and <b>Name</b>, which are probably what everyone is looking for, are extracted from the claims in the <strong>JWT Token</strong>. These fields are just examples that I included in the demonstration, and it is possible to extract any other field you like by adding to the inbound processing XML code.</p> <p>Navigate to the <b>API Management service</b>, <b>APIs blade</b>, <b>Azure OpenAI Service API</b>, <b>All Operations</b>, then click on the <b></></b> policy code editor icon under <b>Inbound processing</b>:</p> <a href="https://drive.google.com/uc?id=1scNlJPQl1guSjj2q4V3hSOvK46JJZtkK"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1YIdLphEG4IwXuMx5OSpCixSfRAz1N-jU" width="244" height="123" /></a> <p>The following is the XML code insert that you’ll need so that the fields listed above will be captured and sent to the <b>Event Hub</b>:</p> <p><strong><!--</strong></p> <p><strong>    IMPORTANT:</strong></p> <p><strong>    - Policy elements can appear only within the <inbound>, <outbound>, <backend> section elements.</strong></p> <p><strong>    - To apply a policy to the incoming request (before it is forwarded to the backend 
service), place a corresponding policy element within the <inbound> section element.</strong></p> <p><strong>    - To apply a policy to the outgoing response (before it is sent back to the caller), place a corresponding policy element within the <outbound> section element.</strong></p> <p><strong>    - To add a policy, place the cursor at the desired insertion point and select a policy from the sidebar.</strong></p> <p><strong>    - To remove a policy, delete the corresponding policy statement from the policy document.</strong></p> <p><strong>    - Position the <base> element within a section element to inherit all policies from the corresponding section element in the enclosing scope.</strong></p> <p><strong>    - Remove the <base> element to prevent inheriting policies from the corresponding section element in the enclosing scope.</strong></p> <p><strong>    - Policies are applied in the order of their appearance, from the top down.</strong></p> <p><strong>    - Comments within policy elements are not supported and may disappear. 
Place your comments between policy elements or at a higher level scope.</strong></p> <p><strong>--></strong></p> <p><strong><policies></strong></p> <p><strong><inbound></strong></p> <p><strong><base /></strong></p> <p><strong><set-header name="api-key" exists-action="append"></strong></p> <p><strong><value>{{dev-openai}}</value></strong></p> <p><strong></set-header></strong></p> <p><strong><validate-jwt header-name="Authorization" failed-validation-httpcode="403" failed-validation-error-message="Forbidden" output-token-variable-name="jwt-token"></strong></p> <p><strong><openid-config url=</strong><a href="https://login.microsoftonline.com/%7b%7bTenant-ID%7d%7d/v2.0/.well-known/openid-configuration"><strong>https://login.microsoftonline.com/{{Tenant-ID}}/v2.0/.well-known/openid-configuration</strong></a><strong> /></strong></p> <p><strong><issuers></strong></p> <p><strong><issuer></strong><a href="https://sts.windows.net/%7b%7bTenant-ID%7d%7d/%3c/issuer"><strong>https://sts.windows.net/{{Tenant-ID}}/</issuer</strong></a><strong>></strong></p> <p><strong></issuers></strong></p> <p><strong><required-claims></strong></p> <p><strong><claim name="roles" match="any"></strong></p> <p><strong><value>APIM.Access</value></strong></p> <p><strong></claim></strong></p> <p><strong></required-claims></strong></p> <p><strong></validate-jwt></strong></p> <p><strong><set-variable name="request" value="@(context.Request.Body.As<JObject>(preserveContent: true))" /></strong></p> <p><strong><set-variable name="api-key" value="@(context.Request.Headers.GetValueOrDefault("api-key",""))" /></strong></p> <p><strong><set-variable name="jwttoken" value="@(context.Request.Headers.GetValueOrDefault("Authorization",""))" /></strong></p> <p><strong><log-to-eventhub logger-id="event-hub-logger">@{</strong></p> <p><strong>        var jwt = context.Request.Headers.GetValueOrDefault("Authorization","").AsJwt();</strong></p> <p><strong>        var appId = jwt.Claims.GetValueOrDefault("appid", 
string.Empty);</strong></p> <p><strong>        var oid = jwt.Claims.GetValueOrDefault("oid", string.Empty);</strong></p> <p><strong>        var name = jwt.Claims.GetValueOrDefault("name", string.Empty);</strong></p> <p><strong>         return new JObject(</strong></p> <p><strong>             new JProperty("EventTime", DateTime.UtcNow.ToString()),</strong></p> <p><strong>             new JProperty("ServiceName", context.Deployment.ServiceName),</strong></p> <p><strong>             new JProperty("RequestId", context.RequestId),</strong></p> <p><strong>             new JProperty("RequestIp", context.Request.IpAddress),</strong></p> <p><strong>             new JProperty("OperationName", context.Operation.Name),</strong></p> <p><strong>             new JProperty("api-key", context.Variables["api-key"]),</strong></p> <p><strong>             new JProperty("request-body", context.Variables["request"]),</strong></p> <p><strong>             new JProperty("JWTToken", context.Variables["jwttoken"]),</strong></p> <p><strong>             new JProperty("AppId", appId),</strong></p> <p><strong>             new JProperty("Oid", oid),</strong></p> <p><strong>             new JProperty("Name", name)</strong></p> <p><strong>         ).ToString();</strong></p> <p><strong>     }</log-to-eventhub></strong></p> <p><strong></inbound></strong></p> <p><strong><backend></strong></p> <p><strong><base /></strong></p> <p><strong></backend></strong></p> <p><strong><outbound></strong></p> <p><strong><base /></strong></p> <p><strong></outbound></strong></p> <p><strong><on-error></strong></p> <p><strong><base /></strong></p> <p><strong></on-error></strong></p> <p><strong></policies></strong></p> <a href="https://drive.google.com/uc?id=1itd00Xls-czG5OqUSV6XpfpyC7NHn9jK"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1CxEPAqt5KiaMHFLq6vCiXeqruYk0m1yv" width="244" height="122" /></a> <p>The XML code can be found at my 
GitHub Repository: <a href="https://github.com/terenceluk/Azure/blob/main/API%20Management/XML/Capture-APIM-Traffic-and-JWT-Token-Information.xml">https://github.com/terenceluk/Azure/blob/main/API%20Management/XML/Capture-APIM-Traffic-and-JWT-Token-Information.xml</a></p> <p>Proceed to click on the <b>Save </b>button, and the additional <b>set-variable</b> and <b>log-to-eventhub</b> policies should be displayed under <b>Inbound processing</b>:</p> <a href="https://drive.google.com/uc?id=1onF-Pka-us4SJMfyhPI_GsFaE1WAz6X5"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1TgYG_ZnptWQAtU3jPIVQjIXbv22Nyiaj" width="244" height="123" /></a> <p>With the API Management’s inbound processing rule updated, initiate API calls to the APIM to generate request traffic and let it capture the information. Once a few requests have been made, navigate to the <b>Event Hub</b>, then <b>Process data</b>:</p> <a href="https://drive.google.com/uc?id=1ykVy_IL6feU4-Tu57S5a10T0XfN-yP-P"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1946RS7dDeFKWBYTJLG3t9AKY9FDLqN2_" width="102" height="244" /></a> <p>Within the <b>Process data</b> blade, click on the <b>Start </b>button for <b>Enable real time insights from events</b>:</p> <a href="https://drive.google.com/uc?id=1qhecmBu9ClhyOhR9vZDFnIumyt5iEDSz"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1KXpwBKu-S1IZFzydEb1uQ4Iz8xLnnmNP" width="244" height="124" /></a> <p>Click on the <b>Test Query</b> button to load the captured logs:</p> <a href="https://drive.google.com/uc?id=1EDoHaXQ2m2GBTOQ7VvIngCAvWZmWWrDZ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1580zdVsQSvOSoBGEi-YPPA0nS41edIqS" width="244" height="149" /></a> <p>The logs 
typically take a minute or two to show up, so if no logs are displayed, try executing the <b>Test query</b> again after a few minutes:</p> <a href="https://drive.google.com/uc?id=1V_tAk96o2XR4xLALhwC55L7o4hawE5fw"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=19TPS7DCH2X7VBl_kW6HcZUOLYH9w62vO" width="244" height="103" /></a> <p>We can see that it is possible to edit the inbound processing policy to recreate the type of log entries the <strong>API Management</strong> out-of-the-box diagnostic settings provide, but if that is not desired, it is possible to map the logs in the <b>Event Hub</b> to the logs in the diagnostic settings with the use of the <b>RequestId</b> from the <b>Event Hub </b>logs and the <b>CorrelationId</b> of the <b>APIM diagnostic settings</b> <b>logs</b> as shown in the screenshots below:</p> <p><b><font size="4">RequestID from Event Hub</font></b></p> <a href="https://drive.google.com/uc?id=1Xx_tXisrgMJTdBjPX2TJxdAGuRSoKAvX"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1LbEf0bZGAvsrdUbTwxgzaR5UMYyoTTur" width="244" height="120" /></a> <p><b><font size="4">CorrelationId from API Management Diagnostic Settings Logs</font></b></p> <a href="https://drive.google.com/uc?id=1u-YsBDtt9--tTrY7lUDlhaeMcgXjsls2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=11OxdyOCQxotKmQ5jovOO5ympmQViZeeI" width="244" height="120" /></a> <p>Note that there are different views available in the <b>Event Hub</b> logs. 
Below is a Raw view displayed as <b>JSON</b>:</p> <a href="https://drive.google.com/uc?id=16MD7aMupcvGL7u5eRh-BvF3KDkbWD4V0"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1qM13PKvsT3d3MhkQAc8Gh1d12P4VTNGN" width="244" height="122" /></a> <p>As mentioned earlier, the <strong>JWT token</strong> passed for authorization is captured and it is possible to decode the value to view the full payload. If any additional fields are desired then the inbound processing policy can be modified to capture this information:</p> <a href="https://drive.google.com/uc?id=1GNTbAF9Z26QixV7-VCFwZc1Aw-6eASYs"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1UVlq4HtAePOGBOCeWHFUWUik7HdNX1kd" width="236" height="244" /></a> <p>Now that we have the <strong>JWT token</strong> information captured, we can send the <b>Azure Event Hubs</b> logs into a <b>Log Analytics Workspace</b> and join the 2 tables together with KQL. 
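The join itself is easy to reason about: each custom Event Hub record's <b>RequestId</b> equals the <b>CorrelationId</b> of the corresponding diagnostic record. A minimal pure-Python sketch over fabricated rows (field names follow the screenshots above; the sample values are made up, and real exports carry many more columns):

```python
# Fabricated sample rows: one from the Event Hub custom log, one from the
# APIM diagnostic settings log. RequestId on one side equals CorrelationId on the other.
eventhub_logs = [
    {"RequestId": "7f2c0d9e", "AppId": "1111-2222", "Oid": "9f4a", "Name": "Jane Doe"},
]
diagnostic_logs = [
    {"CorrelationId": "7f2c0d9e", "Url": "https://dev-openai-apim.azure-api.net/...", "ResponseCode": 200},
]

# Index the diagnostic rows by CorrelationId, then enrich each Event Hub row
# with the matching diagnostic columns (an inner-join-style lookup).
by_correlation = {row["CorrelationId"]: row for row in diagnostic_logs}
joined = [{**row, **by_correlation.get(row["RequestId"], {})} for row in eventhub_logs]

print(joined[0]["Name"], joined[0]["ResponseCode"])
```

In Log Analytics the equivalent is a KQL `join` of the two tables on those columns.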
I will provide a walkthrough of how to accomplish this, as outlined in this document:</p> <p><strong>Tutorial: Ingest events from Azure Event Hubs into Azure Monitor Logs (Public Preview) <br /></strong><a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/ingest-logs-event-hub">https://learn.microsoft.com/en-us/azure/azure-monitor/logs/ingest-logs-event-hub</a></p> <p>… in part 2 of this series.</p> <p>I hope this helps anyone out there looking for a way to capture the identity of the user using the <strong>Azure OpenAI</strong> Service.</p> <p><b><u><font size="5">Securing Azure OpenAI with API Management to only allow access for specified Azure AD users (November 15, 2023)</font></u></b></p> <p>I’ve been spending most of my weekends playing around with Azure’s OpenAI service, and two of the personal projects I’ve been working on are:</p> <ol> <li>How can I secure access to OpenAI’s APIs so control can be applied to what and who can make API calls to them</li> <li>How can I capture identity details for the application or user making the API call if we are to secure access with OAuth</li> </ol> <p>This post will focus on item #1 while I get the notes I’ve captured for #2 organized and written as a blog post.</p> <p>A common method I’ve found to provide the type of security for #1 is leveraging the API Management service, so I gave this pattern a shot over the weekend, using Azure API Management to only allow specified Azure AD users to call the Azure OpenAI API. 
The following is a high level architecture diagram and the flow of the traffic:</p>     <a href="https://drive.google.com/uc?id=157rlDQ1gq5w3ByYhJF5YaQ9SqDBXGNZQ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1WIR2czJYekSzhbRAyokfXoQtuOQYJGJl" width="644" height="457" /></a> <p><b><u><font size="5">Setup Azure API Management to publish Azure OpenAI</font></u></b></p> <p>Begin by downloading the latest Azure OpenAI <strong>inference.json</strong> from the following Microsoft documentation: <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#completions">https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#completions</a></p> <a href="https://drive.google.com/uc?id=1r2y92Wei5_JSnCRDcMoDPuO3_4nXlTz9"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=11-eaXhz1AVeab6AGR1EB2oqqvyghNI_y" width="244" height="151" /></a> <p>For the purpose of this example, I will use the latest <strong>2023-09-01-preview</strong>: <a href="https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json">https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json</a></p> <a href="https://drive.google.com/uc?id=139O5ltaR1kkccUJ6v6Sqeg93zvzkOrPp"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1nXXSC3Q69QzT5_blL-uZIe9zQZiDhvdn" width="244" height="143" /></a> <p>Once downloaded, open the JSON file and edit the following two lines:</p> <p>Use the name of the OpenAI instance to replace {endpoint}:</p> <p><b>"url": "dev-openai/openai",</b></p> <p>Use the full endpoint value:</p> <p><b>"default": <a 
href="https://dev-openai.openai.azure.com/">https://dev-openai.openai.azure.com/</a></b></p> <p><a href="https://drive.google.com/uc?id=175CzPPqVTLdTiGLaHVL0kF7rAsHvGKYp"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1eZrJRbnjxdoS76DKqo-Ez6zJdsTuWciZ" width="244" height="133" /></a></p> <p><a href="https://drive.google.com/uc?id=1OnaQvkFy34gI46m_PF0qIRZR0Z6xjcdj"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1zlkJJKqBbbG9c_6jyQpVk0TJz08iVXBG" width="244" height="161" /></a></p> <p>With the JSON file prepared, proceed to deploy an Azure API Management resource with the SKU of choice, select the <b>APIs</b> blade, <b>Add API</b> and select <b>OpenAPI</b>:</p> <a href="https://drive.google.com/uc?id=1RFH_8w7hDP_JnggDQMEJs7mOo2eEZzzW"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Xr5VYNVpJGGtQuZkt-05i_BnKBrbYNtH" width="244" height="128" /></a> <p>Select <b>Full</b> and import the <b>inference.json</b> file, which will automatically populate the fields, then proceed to create the API:</p> <a href="https://drive.google.com/uc?id=1tEwcAz7qZY08v96HFmu-PHRhV5FZKtR0"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1vAN5e2PqaaWh3e_FQvO8V8PHZ00wwG4x" width="244" height="138" /></a> <p>Turn on the System Assigned Managed Identity for the APIM:</p> <a href="https://drive.google.com/uc?id=1hOKB2J7KUJZQRj7O5U9AHlBc8k3p2jbo"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1df8JLI1qYlREV_sXK0rJ9JC4CqSOgVxf" width="226" height="244" /></a> <p>We’ll need to allow the APIM to call Azure OpenAI with the API key: </p> <a href="https://drive.google.com/uc?id=12fKkA6ngR3BlytzSn9k7x2_DQxZT1nlj"><img 
title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1liDDlZ61PIAvj9y7ltwS9pdRvQ6Wu5EE" width="244" height="131" /></a> <p>… and the best way to store the key is in a KeyVault, so I’ve created a secret containing the API key:</p> <a href="https://drive.google.com/uc?id=1qaLmR2irel5KoFXTgn1XjY-cZa10ylVi"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ukg8SS2cNwsyFL9uUHM3rrVXgOtag7rw" width="244" height="79" /></a> <p>I have also granted the APIM system managed identity <b>Key Vault Secrets User</b> permissions to access the key:</p> <a href="https://drive.google.com/uc?id=1qMWmcVnW-l3Gyp7NeFtU55Dmj8QC1dHJ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1X3R6LwUsDtszbQ73hee2x88AvVPIA_vr" width="244" height="19" /></a> <p>With the KeyVault and OpenAI secret configured, proceed to navigate to the APIM <b>Named values</b> blade and <b>Add</b> a new value:</p> <a href="https://drive.google.com/uc?id=1jBTfXLb7FGj_A50qcmC8bbjiv7tE7Yqw"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1EoTDCg47gbXEj_jynMKrTkjdho5YAUJ_" width="244" height="242" /></a> <p>Configure a named value to reference the secret in the KeyVault:</p> <a href="https://drive.google.com/uc?id=1T_W9iICTa4GAm0HWmp0w0wsI8s2s_eSI"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=16BX54ZfFhKna8r7waCRyh0cFRWjfMcf1" width="244" height="157" /></a> <p>Note the name that you’ve used for the named value, as you’ll be using it later on.</p> <p>We’ll also be using the tenant ID for another configuration, so repeat the same procedure and create a plain value with the tenant ID:</p> <a 
href="https://drive.google.com/uc?id=1gSu7IqZ9krcrAjRqgsyoJwy3Q1Jo9FCT"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1IbEv2AG0aowSkF2WLIjyXpqRzl0Mk2bf" width="244" height="161" /></a> <p>The following named values should be listed:</p> <a href="https://drive.google.com/uc?id=16_eIvoRofKZeZHq92pUZXb7ykoBW6P9t"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1cIkmzeufILL2apEOqmUDo897hgrpJbkd" width="244" height="100" /></a> <p>Proceed by navigating to the <b>APIs</b> blade, <b>Azure OpenAI Service</b> <b>API</b>, <b>All operations</b>, <b>Design</b> tab, and then click on the <b></></b> icon under the <b>Inbound processing</b> heading:</p> <a href="https://drive.google.com/uc?id=1Kh2ioJabLgh3eOPrVQuDJ2KS_0WupPw2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=12jLy5mTptp9_agiF0eru5_sZJMhsIqSP" width="244" height="99" /></a> <p>We’ll be configuring the following policy for the APIM to send a header with the name <b>api-key</b> and value of the secret we configured in the KeyVault:</p> <p>GitHub repository: <a title="https://github.com/terenceluk/Azure/blob/main/API%20Management/XML/Set-Header-API-Key.xml" href="https://github.com/terenceluk/Azure/blob/main/API%20Management/XML/Set-Header-API-Key.xml">https://github.com/terenceluk/Azure/blob/main/API%20Management/XML/Set-Header-API-Key.xml</a></p> <p><strong><!--</strong></p> <p><strong>    IMPORTANT:</strong></p> <p><strong>    - Policy elements can appear only within the <inbound>, <outbound>, <backend> section elements.</strong></p> <p><strong>    - To apply a policy to the incoming request (before it is forwarded to the backend service), place a corresponding policy element within the <inbound> section element.</strong></p> <p><strong>    - To apply a policy to the outgoing 
response (before it is sent back to the caller), place a corresponding policy element within the <outbound> section element.</strong></p> <p><strong>    - To add a policy, place the cursor at the desired insertion point and select a policy from the sidebar.</strong></p> <p><strong>    - To remove a policy, delete the corresponding policy statement from the policy document.</strong></p> <p><strong>    - Position the <base> element within a section element to inherit all policies from the corresponding section element in the enclosing scope.</strong></p> <p><strong>    - Remove the <base> element to prevent inheriting policies from the corresponding section element in the enclosing scope.</strong></p> <p><strong>    - Policies are applied in the order of their appearance, from the top down.</strong></p> <p><strong>    - Comments within policy elements are not supported and may disappear. Place your comments between policy elements or at a higher level scope.</strong></p> <p><strong>--></strong></p> <p><strong><policies></strong></p> <p><strong><inbound></strong></p> <p><strong><base /></strong></p> <p><strong><set-header name="api-key" exists-action="append"></strong></p> <p><strong><value>{{dev-openai}}</value></strong></p> <p><strong></set-header></strong></p> <p><strong></inbound></strong></p> <p><strong><backend></strong></p> <p><strong><base /></strong></p> <p><strong></backend></strong></p> <p><strong><outbound></strong></p> <p><strong><base /></strong></p> <p><strong></outbound></strong></p> <p><strong><on-error></strong></p> <p><strong><base /></strong></p> <p><strong></on-error></strong></p> <p><strong></policies></strong></p> <p><b>**Note</b> that we use the <b>{{ }}</b> brackets to reference the named value as a variable.</p> <a href="https://drive.google.com/uc?id=1YpZ3YIx_yQOl8qyToL7mMB6HkLya9dvB"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1rLspOY0ctOH4qCewYR4jWxcMII_zkqSt" 
width="244" height="108" /></a> <p>Proceed to save the settings.</p> <p>The APIM is now set up to receive OpenAI API calls, authenticated not with the Azure OpenAI api-key but with a subscription key for the APIM instance. To retrieve this key, navigate to the <b>APIs</b> blade, <b>Azure OpenAI Service API</b>, <b>Settings</b> tab, and then scroll down to the <b>Subscription </b>heading. Notice that <b>Subscription required</b> is enabled with the <b>Header name </b>and <b>Query parameter name</b> defined. The subscription key can be found in the <b>Subscriptions</b> blade:</p> <a href="https://drive.google.com/uc?id=1D30QjCPI95fWx9pJifSzRdvG4gcdNMe5"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1GDrbY0vXnSlqYdqAwqqXx_cqnwZflOaB" width="244" height="86" /></a> <p><b><u><font size="5">API Management Logging Configuration</font></u></b></p> <p>One last important configuration is <b>Application Insights</b>: </p> <a href="https://drive.google.com/uc?id=1Hyag19hZ1IywYUFYUXL1-Q78CcMt3cIY"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1LAuTOcyEY2UzAquCdUiE7Ji5OFuRq2aO" width="244" height="218" /></a> <p>… and <b>Azure Monitor</b> logging:</p> <a href="https://drive.google.com/uc?id=1uAtFUcu6I3A2KVLHmxhfeE4ldJUdisAK"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1SsO36cUeVc9u0egLeMPe9avKClgA_fR_" width="244" height="222" /></a> <p>Ensure that these are enabled so APIM data plane access logs and reports can be created. 
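The KQL reports referenced next parse the OpenAI usage block out of <b>BackendResponseBody</b> and summarize token counts by caller IP and model. The same aggregation can be sketched offline in plain Python; the log rows below are fabricated stand-ins for ApiManagementGatewayLogs records:

```python
import json
from collections import defaultdict

# Fabricated ApiManagementGatewayLogs-style rows; BackendResponseBody carries
# the OpenAI "usage" block that the KQL queries unpack with parse_json().
rows = [
    {"CallerIpAddress": "10.0.0.4", "BackendResponseBody": json.dumps(
        {"model": "gpt-4", "usage": {"prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42}})},
    {"CallerIpAddress": "10.0.0.4", "BackendResponseBody": json.dumps(
        {"model": "gpt-4", "usage": {"prompt_tokens": 8, "completion_tokens": 10, "total_tokens": 18}})},
]

# Mirror of the KQL "summarize sum(...) by ip, model" aggregation.
totals = defaultdict(lambda: {"prompt": 0, "completion": 0, "total": 0})
for row in rows:
    body = json.loads(row["BackendResponseBody"])
    key = (row["CallerIpAddress"], body["model"])
    for short, field in (("prompt", "prompt_tokens"),
                         ("completion", "completion_tokens"),
                         ("total", "total_tokens")):
        totals[key][short] += body["usage"][field]

print(dict(totals))
```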
A few sample reports generated with KQL can be found here: <a href="https://github.com/Azure-Samples/openai-python-enterprise-logging">https://github.com/Azure-Samples/openai-python-enterprise-logging</a></p> <p>Here are a few sample outputs from 2 KQL queries:</p> <p><b><font size="4">Query to identify token usage by ip and mode</font></b></p> <p>ApiManagementGatewayLogs</p> <p>| where tolower(OperationId) in ('completions_create','chatcompletions_create')</p> <p>| where ResponseCode == '200'</p> <p>| extend modelkey = substring(parse_json(BackendResponseBody)['model'], 0, indexof(parse_json(BackendResponseBody)['model'], '-', 0, -1, 2))</p> <p>| extend model = tostring(parse_json(BackendResponseBody)['model'])</p> <p>| extend prompttokens = parse_json(parse_json(BackendResponseBody)['usage'])['prompt_tokens']</p> <p>| extend completiontokens = parse_json(parse_json(BackendResponseBody)['usage'])['completion_tokens']</p> <p>| extend totaltokens = parse_json(parse_json(BackendResponseBody)['usage'])['total_tokens']</p> <p>| extend ip = CallerIpAddress</p> <p>| where model != ''</p> <p>| summarize</p> <p>sum(todecimal(prompttokens)),</p> <p>sum(todecimal(completiontokens)),</p> <p>sum(todecimal(totaltokens)),</p> <p>avg(todecimal(totaltokens))</p> <p>by ip, model</p> <p><a href="https://drive.google.com/uc?id=12zal4BrqyXGsLUkHmzO1L460sEX87_iL"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=14XzQSf26iQj5kfRpWQ3YR3wTZ5HECuzJ" width="244" height="90" /></a> </p> <p>GitHub repository: <a title="https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Identify-token-usage-by-ip-and-mode.kusto" href="https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Identify-token-usage-by-ip-and-mode.kusto">https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Identify-token-usage-by-ip-and-mode.kusto</a></p> <p><b><font size="4">Query to monitor prompt completions</font></b><b></b></p> 
<p>ApiManagementGatewayLogs</p> <p>| where tolower(OperationId) in ('completions_create','chatcompletions_create')</p> <p>| where ResponseCode == '200'</p> <p>| extend model = tostring(parse_json(BackendResponseBody)['model'])</p> <p>| extend prompttokens = parse_json(parse_json(BackendResponseBody)['usage'])['prompt_tokens']</p> <p>| extend prompttext = substring(parse_json(parse_json(BackendResponseBody)['choices'])[0], 0, 100)</p> <p><a href="https://drive.google.com/uc?id=1E7XunZOBzaZxFe2_5LRwIRoH7OGMqtt4"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1V24x1Lz2ETnQBbKg_ah_L7c_k-F1c5ED" width="244" height="88" /></a></p> <p>GitHub repository: <a title="https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Monitor-prompt-completions.kusto" href="https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Monitor-prompt-completions.kusto">https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Monitor-prompt-completions.kusto</a></p> <p>If you have experience setting up API Management to capture requests to Azure OpenAI, then you will already know that the only information the <strong>Log Analytics</strong> logs provide to represent the calling user is the IP address. This isn’t very useful, so I have written another post to demonstrate how to capture the OAuth token details used to make the call:</p> <p><strong>How to log the identity of a user using an Azure OpenAI service with API Management logging (Part 1 of 2)</strong> <br /><a href="https://terenceluk.blogspot.com/2023/11/how-to-log-identity-of-user-using-azure.html">https://terenceluk.blogspot.com/2023/11/how-to-log-identity-of-user-using-azure.html</a></p> <h5><b><u><font size="5">Testing OpenAI API calls through API Management with Postman</font></u></b></h5> <p>With the API Management configuration completed, we should now be able to use Postman to test querying the APIM. 
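For readers who prefer code to Postman, here is a hedged sketch that only assembles the request pieces (URL, headers, chat payload) without sending anything; the host, deployment name, and placeholder header values come from this demo environment and must be substituted with your own:

```python
import json

# Placeholder values from this demo environment -- substitute your own.
apim_host = "https://dev-openai-apim.azure-api.net"
deployment = "gpt-4"                 # your model deployment name
api_version = "2023-09-01-preview"

url = f"{apim_host}/deployments/{deployment}/chat/completions?api-version={api_version}"
headers = {
    "Ocp-Apim-Subscription-Key": "<APIM subscription key>",  # APIM's default header name
    "Authorization": "Bearer <access token>",                # e.g. from az account get-access-token
    "Content-Type": "application/json",
}
payload = {
    "messages": [{"role": "user", "content": "how many faces does a dice have?"}],
    "temperature": 0.7,
    "top_p": 0.95,
    "max_tokens": 800,
    "stop": None,
}
body = json.dumps(payload)

print(url)
# Sending is then one call away, e.g. requests.post(url, headers=headers, data=body)
```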
I won’t go into the details of the configuration but will provide the screenshots:</p> <p><b>https://dev-openai-apim.azure-api.net/deployments/{{gpt_mode_4}}/chat/completions?api-version={{api_env_latest}}</b></p> <a href="https://drive.google.com/uc?id=1b1UumGB-ugpQUoAtK6fdUYSe8ALheEfP"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1G-q-RmL13X90ZYLoC9_pkShYLPUcK04t" width="244" height="65" /></a> <p>{</p> <p>"messages": [</p> <p>   {</p> <p>"role": "user",</p> <p>"content": "how many faces does a dice have?"</p> <p>    }</p> <p>  ],</p> <p>"temperature": 0.7,</p> <p>"top_p": 0.95,</p> <p>"frequency_penalty": 0,</p> <p>"presence_penalty": 0,</p> <p>"max_tokens": 800,</p> <p>"stop": <b>null</b></p> <p>}</p> <a href="https://drive.google.com/uc?id=1LM7cbkkRZSi-3jdK6f6yXnaTiRwxHGS3"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=13qeHG6GBfyl4JR6jhBxKIpx_hxEiGk0t" width="244" height="101" /></a> <p>I’ll write another post in the future on properly securing Azure OpenAI now that we have APIM publishing the APIs.</p> <p><b><u><font size="5">Create an App Registration for securing APIM API access</font></u></b></p> <p>With the Azure API Management configured to publish the Azure OpenAI APIs, we will now proceed to create an <b>App Registration</b> that will allow us to lock down APIM access for select Entra ID / Azure AD users.</p> <a href="https://drive.google.com/uc?id=1Feoesr0s6ZavAyNV-N82NJCQIowjFeak"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1H3SK6Ri9mgHtT8gfauitiRUWdleCHAOG" width="244" height="98" /></a> <p>Provide a name for the <b>App Registration</b> and create the object:</p> <a href="https://drive.google.com/uc?id=1Vvi__mZ8RXkFRSM8Q5J3pvwwnivOEDCC"><img title="image" style="display: inline; background-image: none;" border="0" 
alt="image" src="https://drive.google.com/uc?id=1DUslzclujpmfvngQGnfv8M4F3a8HM_jF" width="244" height="189" /></a> <p>Select the<b> App roles </b>blade, click on<b> Create app role </b>and fill out the following:<b></b></p> <p><b>Display name</b>: <Provide a display name></p> <p><b>Allowed member types</b>: Select <b>Users/Groups</b> or <b>Both (Users/Groups + Applications)</b></p> <p><b>Value</b>: APIM.Access</p> <p><b>Description</b>: Allow Azure OpenAI API access.</p> <p>Create the app role.</p> <a href="https://drive.google.com/uc?id=1kA7xKrFoadZKPdXHcsO7mUaa8llSB29U"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1JHCPBQXhzJkhXpfNy4v4YQzjtsIfZNNx" width="244" height="119" /></a><a href="https://drive.google.com/uc?id=14sntZ6HpBvc6Z2AqVqSkbRhynzLNL6Z4"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1uH9iZDw0rugG1F0j3GmgcF0Rf6WLRKsR" width="244" height="135" /></a> <p>Select the <b>Expose an API</b> blade, and click on the <b>Add</b> link beside <b>Application ID URI</b>:</p> <a href="https://drive.google.com/uc?id=10-3KVGfT44vEmF7biq4Br2K3Ccep64BW"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1k3BGlmOHSN9C4RewQv6IWT8jDeeVc8Z6" width="244" height="188" /></a> <p>Leave the <b>Application ID URI</b> as the default and click on the <b>Save</b> button:</p> <a href="https://drive.google.com/uc?id=1zsHaU6XvF4dZ9utIteEtebQBVfPgkuIy"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1DMMWOC_V-VSoKFhT3_aS9lQzO1iKt3Wl" width="244" height="90" /></a> <p>We’ll be using <strong>Azure CLI</strong> to quickly test the retrieval of the token so we’ll need to create a scope and add Azure CLI as an authorized client application.</p> <p>Proceed to click on <b>Add a 
scope</b> and fill in the following properties:</p> <p><b>Scope name</b>: API.Access</p> <p><b>Who can consent</b>: Admins and users</p> <p><b>Admin consent display name</b>: Access to Azure OpenAI API</p> <p><b>Admin consent description</b>: Allows users to access the Azure OpenAI API</p> <p><b>State</b>: Enabled</p> <a href="https://drive.google.com/uc?id=1heb5y_Jz8hI9dOvlVbAhQtYzckXHfX1m"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=10AaZFoxLsIx9WhTwJGkYgHpHXExHSSN9" width="244" height="75" /></a> <p>Click on <b>Add a client application</b> to add the <b>Client ID</b> of Azure CLI <b>04b07795-8ddb-461a-bbee-02f9e1bf7b46</b> as an authorized application to retrieve a delegated access token:</p> <p><a href="https://drive.google.com/uc?id=1saLMM2jGmTORi3LOj4vv7DZkV4DPFsUi"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1cO3ubEl6mRcXGO720QW6hs7fRK_zhV0H" width="244" height="121" /></a> </p> <p>I will also be demonstrating how to set up <strong>Postman</strong> to test the retrieval of the token, so we’ll need to add the <strong>Redirect URI</strong> for the callback to Postman for the <strong>App Registration </strong>by navigating to the <strong>Authentication </strong>blade, clicking on <strong>Add a platform</strong>, and adding the following URI: <a title="https://oauth.pstmn.io/v1/callback" href="https://oauth.pstmn.io/v1/callback">https://oauth.pstmn.io/v1/callback</a></p> <p><a href="https://drive.google.com/uc?id=1bNhK9dnOVQ9U5cKYMF1KT-2io1QqQYzl"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Q9pBYsmCLgprwXhhby3PtH5l6bl016W-" width="244" height="99" /></a></p> <p>We will also need to create a secret for the <strong>App Registration </strong>so <strong>Postman </strong>is able to securely authenticate and retrieve a delegated 
token on behalf of the user. Navigate to the <strong>Certificates & secrets</strong> blade, create a <strong>Client secret</strong>, then save the secret:<strong> </strong></p> <a href="https://drive.google.com/uc?id=1dfswVXXR-XB-gOqIrjzxWlLjpntD_TcM"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1euuNDJOidMrGMCuszoiEywzIma1L7Jjz" width="244" height="92" /></a> <p>With the <b>App Registration</b> created, we’ll need to grant a user the role to test calling the OpenAI API published through APIM. Copy the <b>client ID</b> of the <b>App Registration</b>, navigate to the <b>Enterprise Application</b> blade and search for the <b>Application ID</b>:</p> <a href="https://drive.google.com/uc?id=17FMkKJM6TouCSLgOO6G3L0EMsEWqbNbo"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1zWALwZOec6cn8sxopI8sM3gGe29HFoQQ" width="244" height="42" /></a> <p>Open the <b>Enterprise Application</b> object, navigate to the <b>Users and groups</b> blade, and click on <b>Add user/group</b>:</p> <a href="https://drive.google.com/uc?id=1J43CzdNADe4Iz4FDxgvu7bx3xMHNTIL9"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1DqFAb4Mi6iqVxQxU_tGtZ-HeF7OJrN2_" width="244" height="80" /></a> <p>Select the user we’ll be testing with and assign the user:</p> <a href="https://drive.google.com/uc?id=1k2Y28e3H2Ke9REi5jDKf0fd9O-TNntFR"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1YnnEHG1ynnt2c-qq__LeMd8ZoIF9DSQm" width="244" height="61" /></a><a href="https://drive.google.com/uc?id=121mZW_-X3YOBRnqsKmpBQ_zJHlMcD8ER"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1oyy8Ck6oglAVwDw4mmeh4gBVO9fitZZ7" 
width="244" height="55" /></a> <p>With the <b>Enterprise Application</b> configured with the user assigned, we will now proceed to lock down the APIM inbound processing policy. Open the APIM resource in the portal, navigate to the <b>APIs</b> blade, <b>Azure OpenAI Service API</b>, <b>Design</b> tab, and click on the <b></> </b>button under <b>Inbound processing</b>:</p> <a href="https://drive.google.com/uc?id=15sup1v62SadRvkBKXoxE07j3zFVd_GXV"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1GbeaV8x0ypVY-SmQN_kEUo02vc2V1lC3" width="244" height="84" /></a> <p>Proceed to add the <b><validate-jwt></b> tag content and note that we use the <b>{{Tenant-ID}}</b> named value variable we created earlier:</p> <p>GitHub Repository: <a title="https://github.com/terenceluk/Azure/blob/main/API%20Management/XML/Validate-JWT-Access-Claim.xml" href="https://github.com/terenceluk/Azure/blob/main/API%20Management/XML/Validate-JWT-Access-Claim.xml">https://github.com/terenceluk/Azure/blob/main/API%20Management/XML/Validate-JWT-Access-Claim.xml</a></p> <p><!--</p> <p>    IMPORTANT:</p> <p>    - Policy elements can appear only within the <inbound>, <outbound>, <backend> section elements.</p> <p>    - To apply a policy to the incoming request (before it is forwarded to the backend service), place a corresponding policy element within the <inbound> section element.</p> <p>    - To apply a policy to the outgoing response (before it is sent back to the caller), place a corresponding policy element within the <outbound> section element.</p> <p>    - To add a policy, place the cursor at the desired insertion point and select a policy from the sidebar.</p> <p>    - To remove a policy, delete the corresponding policy statement from the policy document.</p> <p>    - Position the <base> element within a section element to inherit all policies from the corresponding section element in the enclosing scope.</p> <p>    - 
Remove the <base> element to prevent inheriting policies from the corresponding section element in the enclosing scope.</p> <p>    - Policies are applied in the order of their appearance, from the top down.</p> <p>    - Comments within policy elements are not supported and may disappear. Place your comments between policy elements or at a higher level scope.</p> <p>--></p> <p><policies></p> <p><inbound></p> <p><base /></p> <p><set-header name="api-key" exists-action="append"></p> <p><value>{{bma-dev-openai}}</value></p> <p></set-header></p> <p><validate-jwt header-name="Authorization" failed-validation-httpcode="403" failed-validation-error-message="Forbidden"></p> <p><openid-config url="https://login.microsoftonline.com/{{Tenant-ID}}/v2.0/.well-known/openid-configuration" /></p> <p><issuers></p> <p><issuer>https://sts.windows.net/{{Tenant-ID}}/</issuer></p> <p></issuers></p> <p><required-claims></p> <p><claim name="roles" match="any"></p> <p><value>APIM.Access</value></p> <p></claim></p> <p></required-claims></p> <p></validate-jwt></p> <p></inbound></p> <p><backend></p> <p><base /></p> <p></backend></p> <p><outbound></p> <p><base /></p> <p></outbound></p> <p><on-error></p> <p><base /></p> <p></on-error></p> <p></policies></p> <a href="https://drive.google.com/uc?id=1hCZq0BlG0AdYJRfIQyhJi9lrKxVQGd09"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1lcY8TP_fIJWYMH3eK1iKfJR1cS_F6puo" width="244" height="181" /></a> <p>Proceed to save and we are now ready to test with Azure CLI.</p> <p><b><u><font size="5">Testing Token Retrieval with Azure CLI and API Management API calls with Postman</font></u></b></p> <p>Launch a prompt with Azure CLI available and execute:</p> <p><b>az login</b></p> <p>Complete the login with the 
test account:</p> <a href="https://drive.google.com/uc?id=1P1wi1CBhCRlWHE12XCz7jXYfVJi0HMp3"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1D-R48CjMGytXHrqmpGVKsAoeEmszhKZS" width="244" height="70" /></a> <p>Next, we’ll need to copy the <b>Application ID URI</b>:</p> <a href="https://drive.google.com/uc?id=1Fo4mfn7RlZijwIgjS6g1xfWXFwSbRt4a"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1R40ORuVcDvd_Q9TLxuwSKeuySKXwkfNG" width="244" height="130" /></a> <p>… and execute:</p> <p><b>az account get-access-token --resource api://12bccc26-b778-4a2d-ae7a-4f5732e7a79d</b></p> <a href="https://drive.google.com/uc?id=1DyIStNRTRdv3AqYchX3_MHu48wykBw4A"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1iFx53z63UgQv0MeDheYx9ddqqSi-LvEu" width="244" height="125" /></a> <p>A token should be returned:</p> <a href="https://drive.google.com/uc?id=1nCXwGmvlm6xtc-ZrlciFG_1hoZiE0viE"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1MQ9AfeUwuMn17R2EnymxAvWu486goHcY" width="244" height="117" /></a> <p>Copying the token and pasting it into <a href="https://jwt.io/">https://jwt.io/</a> should confirm that the token has the role <b>APIM.Access</b>:</p> <a href="https://drive.google.com/uc?id=1YHbdsn2XOBO3AxD-LFFRgo1HoxlV983B"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1sQIxx_NxZQrTjkJK__DN8YPQloxkkZ7I" width="244" height="234" /></a> <p>You should now be able to use the token to call APIM with delegated access with a <b>200 OK</b> status:</p> <a href="https://drive.google.com/uc?id=15dI1_rs_9p_6XsVSwfKxSkNApgIOvlAO"><img title="image" style="display: inline; background-image: none;" 
border="0" alt="image" src="https://drive.google.com/uc?id=1wlMvSzMPpV3sX7D_yaUkgiPNlHcTE5oR" width="244" height="159" /></a> <p>Trying to call APIM without a token passed in the header as <b>Authorization</b> will fail with:</p> <p>{</p> <p>"statusCode": 403,</p> <p>"message": "Forbidden"</p> <p>}</p> <a href="https://drive.google.com/uc?id=15LGWAYDTv11J_3wC9xOlwmufRQeu81HJ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1zQyVhcErOzYcEjaTO-8s3twKoOAKLwvl" width="244" height="95" /></a> <p>Removing the user from the <b>Enterprise Application</b> and attempting to call APIM will also result in the same failure message:</p> <a href="https://drive.google.com/uc?id=1FQPLjQEPUW4eLesx4a6m2t6R9KNAtmGc"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_i_qr23Ov_XF_SKQvq393yz7lkwtMH8p" width="244" height="53" /></a> <p>{</p> <p>"statusCode": 403,</p> <p>"message": "Forbidden"</p> <p>}</p> <p><a href="https://drive.google.com/uc?id=1vSoD48hGHgSeeBtbE5unH4XXCteDEMv3"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1KTiixqnIKoGREJX80BaozgVHqQLrzOVK" width="244" height="65" /></a> </p> <p><b><u><font size="5">Testing Token Retrieval and API Management API calls with Postman</font></u></b>  </p> <p>Proceed to launch <strong>Postman</strong>, navigate to the <strong>Environments</strong> area and create the following variables.</p> <p><strong>tenant_id</strong>: <The App Registration’s Directory (tenant) ID></p> <p><strong>client_id_APIM</strong>: <The App Registration’s Application (client) ID></p> <p><strong>client_secret_APIM</strong>: <The secret we created earlier></p> <p>Next, create a new request, navigate to the <strong>Authorization </strong>tab and fill in the following:</p> <p><b>Type</b>: OAuth 2.0</p> <p><b>Add authorization data 
to</b>: Request Headers</p> <p><b>Token</b>: Available Tokens</p> <p><b>Header</b> <b>Prefix</b>: Bearer</p> <p><b>Token</b> <b>Name</b>: <Name of preference></p> <p><b>Grant</b> <b>type</b>: Authorization Code</p> <p><b>Callback URL</b>: <a href="https://oauth.pstmn.io/v1/callback">https://oauth.pstmn.io/v1/callback</a></p> <p><b>Authorize using browser</b>: Enabled</p> <p><b>Auth</b> <b>URL</b>: <a href="https://login.microsoftonline.com/%7b%7btenant_id%7d%7d/oauth2/v2.0/authorize">https://login.microsoftonline.com/{{tenant_id}}/oauth2/v2.0/authorize</a></p> <p><b>Access Token URL</b>: <a href="https://login.microsoftonline.com/%7b%7btenant_id%7d%7d/oauth2/v2.0/token">https://login.microsoftonline.com/{{tenant_id}}/oauth2/v2.0/token</a></p> <p><b>Client</b> <b>ID</b>: {{client_id_APIM}}</p> <p><b>Client</b> <b>Secret</b>: {{client_secret_APIM}}</p> <p><b>Scope:</b> api://12bccc26-b778-4a2d-ae7a-4f5732e7a79d/API.Access</p> <p><b>Client Authentication</b>: Send as Basic Auth header</p> <p><b>**Note</b> the default <b>Callback URL</b> is set as <a href="https://oauth.pstmn.io/v1/callback">https://oauth.pstmn.io/v1/callback</a>, which is the URL we configured earlier for the App Registration’s <b>Redirect</b> <b>URI</b>.</p> <p>Leave the rest as default and click on <b>Get New Access Token</b>:</p> <a href="https://drive.google.com/uc?id=1GOTuaJgnbQk6v6xveALVMLPfRxSDhy1e"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1QbBKIsnNnZuGrRR_P8m7Zg2BdLYNkU_q" width="244" height="145" /></a><a href="https://drive.google.com/uc?id=1YGT9TFQo1Xr9iD9lOVNFNjqU7oPhSIFm"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1PkGiOiU4gffXzoIcWFZx9e2J1xEHCEo-" width="244" height="172" /></a> <p>A window with <b>Get new access token</b> prompt will be displayed with a browser directing you to the 
<strong>login.microsoftonline.com</strong>. Proceed to log into Entra ID to retrieve the token.</p> <p>Repeat the steps for Postman as demonstrated in the Azure CLI instructions to call the OpenAI endpoints through APIM with the token.</p> <p>----------------------------------------------------------------------------------------------------------------------------</p> <p>I hope this helps anyone who may be looking for a way to lock down APIM access when publishing Azure OpenAI APIs. There are other infrastructure components that will need to be secured to ensure no calls can reach the Azure OpenAI API directly, and I will write another blog post covering that design and configuration in the future.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-19458761410759206192023-11-13T08:19:00.001-05:002023-11-14T09:42:48.121-05:00Configuring Postman to use OAuth 2.0 for application and delegated permissions in Entra ID / Azure AD<p>Entra ID / Azure AD authentication and authorization are important components for service access within any environment, and one of the common questions I’ve been asked is what the difference is between application and delegated permissions in the context of App Registration permissions. The short answer is that application permissions have the application authenticate as itself, while delegated permissions have the application authenticate on behalf of a user. The following Microsoft document provides a great walkthrough of the differences between the two: <a href="https://learn.microsoft.com/en-us/graph/auth/auth-concepts">https://learn.microsoft.com/en-us/graph/auth/auth-concepts</a>. These two types of authentication and authorization can have impactful security consequences when designing applications, so it is important to use the correct model. 
What I’ve found to be most effective in explaining this is to demonstrate both types of authentication with Postman, so this post will walk through setting up the authentication to retrieve the Bearer token, and then calling an API using the token with the appropriate authorization.</p> <p>One point I would like to highlight is that Postman provides a built-in OAuth 2.0 provider authentication tab that makes the process of retrieving the JWT much simpler than manually configuring an API call to the endpoint, but this in turn means you won’t know the details of what is sent to the API endpoint. With that, I will show both methods, as there are benefits to knowing exactly what header and body content are being sent if you’re writing an application or script to retrieve a token.</p> <p><b><u><font size="5">Setting up an App Registration in Entra ID</font></u></b></p> <p>Before we set up Postman, we will need to create an App Registration to represent Postman so it can be used to authenticate against Entra ID / Azure AD and use its authorized permissions to perform activities such as calling the Graph API. 
</p> <p>Begin by setting up an <b>App Registration </b>in Entra ID:</p> <a href="https://drive.google.com/uc?id=1BxBua2-E_Xicj3pJ8WGiljz4F0g6TfJ2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1EuF_NiQ1IbGEWa0NwltQPlQcartew8HL" width="244" height="108" /></a> <p>Provide a name for the <b>App Registration</b>, adjust the supported account types to either <b>single tenant</b> or <b>multitenant</b>, leave the <b>Redirect URI</b> blank for now, and create the instance:</p> <a href="https://drive.google.com/uc?id=1bRookL_9vfaQKG_LKlNyx2gzuzY9WUet"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1j01EE4zvrh6G-zflaAO5gliat0T_5Waf" width="244" height="204" /></a> <p>With the <b>App Registration</b> created, we can now copy the property values that we’ll use within Postman to authenticate against Entra ID / Azure AD. Navigate to the <b>Overview</b> blade and copy the following values:</p> <ul> <li>Application (client) ID </li> <li>Directory (tenant) ID </li> </ul> <a href="https://drive.google.com/uc?id=1hu2jAaJ_KML-YoRj5zcVkkMuuez-Xs2s"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1mMf3clvzVLe7epiJ_X6icrVU7hjgGJVU" width="244" height="90" /></a> <p>Proceed to create a secret for the <b>App Registration</b> and make sure you copy the <b>value</b> of the secret before navigating away as you cannot retrieve the secret after leaving the page:</p> <a href="https://drive.google.com/uc?id=1as2FdebNNuw-dFFbTJkzhuh2RZQ7oosD"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=17smV1W5JD4c-Z_WzNM5_2dy4KiLlZo2d" width="244" height="146" /></a> <p>Permissions need to be configured for this <b>App Registration</b> to allow Postman to call the Graph API either on 
behalf of the user using Postman, or as an application identity configured for Postman. Proceed to navigate to the <b>API Permissions</b> blade and note the default <b>User.Read</b> permission already granted to the <b>App Registration</b>.</p> <p>Let’s proceed to grant 2 additional permissions to the App Registration for testing both <b>Application</b> and <b>Delegated</b> permissions by clicking on <b>Add a permission</b>:</p> <a href="https://drive.google.com/uc?id=1U5o_yk57rChii3mXUCKfLKWLRLF0tWGC"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1iYzlZsW2dMk1sRqzsz0bZKm1IWAWhQ3q" width="244" height="124" /></a> <p>Click on <b>Microsoft Graph</b>:</p> <a href="https://drive.google.com/uc?id=1XPZ2VM7jWnEPJbqqFPzwgcgSGGHD98nn"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1hp0torlZEGt9qSWzBI0FhAev6DQQHIgV" width="244" height="192" /></a> <p>Note that this is where you configure <b>Delegated permissions </b>or <b>Application permissions</b> for the application using this <b>App Registration</b> to authenticate against Entra ID / Azure AD: </p> <a href="https://drive.google.com/uc?id=1o-CFEddp0bAq1jzqNy-IfdYdlJt2fXDC"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1RixVPbg4XQGsncCewkwyvT1fdwFbVQjK" width="244" height="83" /></a> <p>We’re going to configure both for the purpose of this walkthrough, so select Delegated permissions, search for <b>User.Read</b>, select <b>User.Read.All</b> and then add the permission:</p> <a href="https://drive.google.com/uc?id=1iMJ3h7G2ZG8CMQxc0nCILaU7gYe8bFJF"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1R3AZNBFCi3Y9xXthNGYL1CRELmh09euP" width="241" height="244" /></a> <p>Repeat the process for <b>Application 
permissions</b>:</p> <a href="https://drive.google.com/uc?id=1bcl9ELDxWcq0sJ60i-ylFZDm6-CNksH3"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=103tjJzVpJea0F9NV-zOHVTxDoAXgNVjw" width="244" height="167" /></a> <p>Some permissions need consent granted by an admin, and the 2 that were configured require this, so click on <b>Grant admin consent for <Company Name></b> to grant the consent:</p> <a href="https://drive.google.com/uc?id=1ULCeaMKzT5cLwlQ2BqNmxD4qw152-4-R"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1bN0KakfwoKGWYx85hdt6tSAOy-ZpW-Lc" width="244" height="98" /></a> <p>A green checkmark will be displayed once consent has been granted:</p> <a href="https://drive.google.com/uc?id=1KVX4tQgj4Gii_Cs99Rw5w74_L7IHGMMR"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1dmEt5DKrujn60afThZAPv8GAzKIdS-li" width="244" height="107" /></a> <p>The last step is to configure the Redirect URI we left blank during the creation of the <b>App Registration</b>. Navigate to the <b>Authentication </b>blade and look for the following field:</p> <p><b>Web</b></p> <p><b>Redirect URIs</b></p> <p><b>The URIs we will accept as destinations when returning authentication responses (tokens) after successfully authenticating or signing out users. The redirect URI you send in the request to the login server should match one listed here. Also referred to as reply URLs. Learn more about Redirect URIs and their restrictions </b></p> <p>The purpose of the redirect URI, or reply URL, is for Entra ID to know where to send the authentication response once the <strong>App Registration</strong> has been successfully authorized and granted an authorization code / access token. 
In the case of this example, we are sending the code back to Postman and the URL to use is: </p> <p><b>https://oauth.pstmn.io/v1/callback</b></p> <a href="https://drive.google.com/uc?id=1uDEBSQLEjl7OO8Ffd81aOwKRqCb8n-KV"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1YOX15RXmpQiXVp1A272C4MlzFEyLueS0" width="244" height="174" /></a> <p>For more information about the Redirect URI, see the following documentation: <a href="https://learn.microsoft.com/en-us/entra/identity-platform/reply-url#redirect-uris-in-application-vs-service-principal-objects">https://learn.microsoft.com/en-us/entra/identity-platform/reply-url#redirect-uris-in-application-vs-service-principal-objects</a></p> <p>The configuration for the <b>App Registration</b> representing Postman is now complete.</p> <p><b><u><font size="5">Setting up Postman</font></u></b></p> <p>Having a nicely organized setup of Postman collections and variables will save you time during testing and improve the handling of sensitive values. 
Start by creating a <b>Collection</b> for this exercise.</p> <a href="https://drive.google.com/uc?id=12Qgms0pOezCs9IUfVKHe7YlUgnWRhOES"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1pDL_KXs8rxb1_4drp_hiPTDi1w-NC2H5" width="244" height="49" /></a> <p>Next, navigate to the <b>Environments</b> menu to configure the following secret and variables that were saved earlier during the <b>App Registration</b> creation and that we’ll be using for our API calls.</p> <ul> <li>client_id </li> <li>client_secret </li> <li>tenant_id </li> </ul> <a href="https://drive.google.com/uc?id=11qtJDBUSnVBrpy8T76Jnpc-p8ETFKx24"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1-L7vDaC37_tlpHGKFNncJSFq7x6tZZYM" width="244" height="43" /></a> <p><b><u><font size="5">Application Permissions</font></u></b></p> <p>Let’s start with the simpler application permissions configuration.</p> <p><b><font size="4">Using Postman’s Authorization Feature to get Token</font></b></p> <p>Begin by clicking on the ellipsis icon and select <b>Add request</b>: </p> <a href="https://drive.google.com/uc?id=1Ma24oLMKGIJ2_8c_dXf35mbNJaSlBHxw"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1AKqTySM3-OQ6Fpb8aHszYSk9NufHjman" width="244" height="142" /></a> <p>Then navigate to the <b>Authorization</b> tab and fill in the following:<b></b></p> <p><b>Type</b>: OAuth 2.0</p> <p><b>Add authorization data to</b>: Request Headers</p> <p><b>Token</b>: Available Tokens</p> <p><b>Header</b> <b>Prefix</b>: Bearer</p> <p><b>Token</b> <b>Name</b>: <Name of preference></p> <p><b>Grant</b> <b>type</b>: Client Credentials</p> <p><b>Access</b> <b>Token</b> <b>URL</b>: https://login.microsoftonline.com/{{tenant_id}}/oauth2/v2.0/token</p> <p><b>Client</b> <b>ID</b>: {{client_id}}</p> <p><b>Client</b> 
<b>Secret</b>: {{client_secret}}</p> <p><b>Scope:</b> <a href="https://graph.microsoft.com/.default">https://graph.microsoft.com/.default</a></p> <p><b>Client</b> <b>Authentication</b>: Send as Basic Auth header</p> <p>Leave the rest as default and click on <b>Get New Access Token</b>:</p> <a href="https://drive.google.com/uc?id=15LvdIchcxvX_7_LgrOLcGISyapyVu5_D"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1TBGVcm5RgfEwZLdBRrM4GI8s4nO6Go9f" width="244" height="174" /></a><a href="https://drive.google.com/uc?id=18W9Ed0saPwonaWPwKPOClgx3qG-WJAK1"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1x4SLW8ToJ-PV6e02n9HiQs4dq84Gs91g" width="244" height="190" /></a> <p>The operation should return the information as shown in the screenshots below:</p> <a href="https://drive.google.com/uc?id=1zbLu4G5MYQjk8HkhDqN2-esI2j999mTE"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1q3oKqqsDZahDNAKzmIB8JbXGyF35AAfm" width="244" height="139" /></a><a href="https://drive.google.com/uc?id=1bRKa6SsKaQaDnbNbPtn5paycjxFb2fRQ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=18D8HnJs9mxhF2HE4613JExOk_9oPnE0f" width="244" height="138" /></a> <p>Proceed to copy the <b>Access Token</b> value and with this, we can navigate to <a href="https://jwt.io/">https://jwt.io/</a>, paste the value to inspect the details of the token and confirm that the App Registration ID and permissions are displayed for this token:</p> <a href="https://drive.google.com/uc?id=1m2N9pQX_LCVEqBPjygVFktHobu95gVRA"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Z24t_wDXybIynGIibENmKR_DCkoPckS9" width="242" height="244" /></a> 
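<p>For those who prefer not to paste live tokens into a third-party site such as jwt.io, the claims can also be inspected locally, since the payload segment of a JWT is just base64url-encoded JSON (note this only decodes the claims; it does not verify the signature). The following is a minimal Python sketch; the token built here is a hand-made unsigned sample, and the <b>aud</b> and <b>roles</b> values are illustrative assumptions of what Entra ID would return for the application permission we granted:</p>

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Return the claims segment of a JWT as a dict (no signature verification)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the base64url padding that JWTs strip
    return json.loads(base64.urlsafe_b64decode(payload))

def b64url(data: bytes) -> str:
    """base64url-encode without the trailing '=' padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hand-built unsigned sample token with the claims we expect for this walkthrough;
# a real token issued by Entra ID is signed and carries many more claims.
claims = {"aud": "https://graph.microsoft.com", "roles": ["User.Read.All"]}
token = b64url(b'{"alg":"none"}') + "." + b64url(json.dumps(claims).encode()) + "."

print(decode_jwt_payload(token)["roles"])  # → ['User.Read.All']
```

<p>Passing the real <b>Access Token</b> value retrieved by Postman to the same function would show the granted application permissions in the <b>roles</b> claim, matching what jwt.io displays.</p>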
<p>Next, we can use this token to perform a Graph API call and confirm that we are allowed to retrieve the information. Create a new request and configure the following parameters:</p> <p><b>GET https://graph.microsoft.com/v1.0/users</b></p> <p>Under <b>Headers</b>, fill in the following <b>Key</b> and <b>Value</b>:</p> <p><b>Authorization</b>: Bearer <Paste the token created in the previous step></p> <p><strong>Content-Type</strong>: application/json</p> <p>Proceed to click on the <b>Send</b> button:</p> <a href="https://drive.google.com/uc?id=1owc5rN7KhiVM76I8d3d1ojkugfMnrCzu"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1wn7Z1GuPUc-6hi5nZFbIyJlKCBVug1im" width="244" height="56" /></a> <p>The list of users in Entra ID should be displayed:</p> <a href="https://drive.google.com/uc?id=1atZXfKey8uxekNaoMp15ICCgm-ZLKzlm"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=15mLoRDS4rSj2ZyVeyeC-cK3Hbe9Xde_2" width="244" height="198" /></a> <p>As this is application authentication, there are certain Graph APIs that it is unable to call; an example is /me, where the following error is thrown:</p> <p><strong>{</strong></p> <p><strong>"error": {</strong></p> <p><strong>"code": "BadRequest",</strong></p> <p><strong>"message": "/me request is only valid with delegated authentication flow.",</strong></p> <p><strong>"innerError": {</strong></p> <p><strong>"date": "2023-11-11T16:03:48",</strong></p> <p><strong>"request-id": "8840028e-9a93-4e89-9818-fbcd77e19b60",</strong></p> <p><strong>"client-request-id": "8840028e-xxxx-xxxx-xxxx-fbcd77e19b60"</strong></p> <p><strong>        }</strong></p> <p><strong>    }</strong></p> <p><strong>}</strong></p> <a href="https://drive.google.com/uc?id=16Jc5aDDQ-zvpKLc5UoxLyxT-v9S6sree"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" 
src="https://drive.google.com/uc?id=1rYUOcSlJaGaIKNC9FYlBJKzTxl_ueteX" width="244" height="145" /></a> <p>I’ll show how we can make this call with delegated permissions later in this post.</p> <p><b><font size="4">Directly calling API Endpoint to get Token</font></b></p> <p>Retrieving an OAuth token via the Postman Authorization feature was easy, but there are scenarios where we want to call the API endpoint instead because we will be using the same header and body settings in an application. The following are the header and body configuration for Postman.</p> <p>Begin by clicking on the ellipsis icon and select <b>Add request</b>, then navigate to the <b>Headers</b> tab and fill in the following:<b></b></p> <p><b>POST https://login.microsoftonline.com/{{tenant_id}}/oauth2/v2.0/token</b></p> <p><b>Content-Type</b>: application/x-www-form-urlencoded</p> <a href="https://drive.google.com/uc?id=1ggIS4SEGh8lW20AIM6-ZjOEH6GGAHA9k"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1QWQPsR33ntxjVcpgRC5XVG7G8LgqjAoM" width="244" height="51" /></a> <p>Navigate to the <b>Body</b> tab, select <b>x-www-form-urlencoded</b> and fill in the following:</p> <p><b>grant_type</b>: client_credentials</p> <p><b>client_id</b>: {{client_id}}</p> <p><b>client_secret</b>: {{client_secret}}</p> <p><b>scope</b>: https://graph.microsoft.com/.default</p> <p>Click on the <b>Send</b> button to submit the request and confirm that the access token is returned:</p> <a href="https://drive.google.com/uc?id=1FAUaQ_rkg4j9MZSHi2o6FtBvIZgilZ81"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1O-dLE2qrgBxISBNs808cp6lSy0UgSgLs" width="244" height="186" /></a> <p>Pasting the token into <a href="https://jwt.io/">https://jwt.io/</a> will return the same results as the previous Postman Authorization request:</p> <a 
href="https://drive.google.com/uc?id=16Sxg75LrdjMr2R_pAAKlHO72r3poaV8t"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1TNbW8FZAstISOREbA-RHkFH7UuacuhvB" width="242" height="244" /></a> <p>Repeating the steps above to call the <b>GET <a href="https://graph.microsoft.com/v1.0/users">https://graph.microsoft.com/v1.0/users</a></b> API endpoint will return the same results.</p> <p><b><u><font size="5">Delegated Permissions</font></u></b></p> <p>Let’s continue by moving on to delegated permissions, where the App Registration will call the Graph API on behalf of the user.</p> <p><b><font size="4">Using Postman’s Authorization Feature</font></b></p> <p>Begin by clicking on the ellipsis icon and select <b>Add request</b>: </p> <a href="https://drive.google.com/uc?id=16E75KDTPCyhfxq9yyv9_LpgVMh7PsyJK"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1s5JA0QCdiT3AMbc_9CKyeo7vamczlOcw" width="244" height="142" /></a> <p>Then navigate to the <b>Authorization</b> tab and fill in the following:<b></b></p> <p><b>Type</b>: OAuth 2.0</p> <p><b>Add authorization data to</b>: Request Headers</p> <p><b>Token</b>: Available Tokens</p> <p><b>Header</b> <b>Prefix</b>: Bearer</p> <p><b>Token</b> <b>Name</b>: <Name of preference></p> <p><b>Grant</b> <b>type</b>: Authorization Code</p> <p><b>Callback URL</b>: <a href="https://oauth.pstmn.io/v1/callback">https://oauth.pstmn.io/v1/callback</a></p> <p><b>Authorize using browser</b>: Enabled</p> <p><b>Auth</b> <b>URL</b>: <a href="https://login.microsoftonline.com/%7b%7btenant_id%7d%7d/oauth2/v2.0/authorize">https://login.microsoftonline.com/{{tenant_id}}/oauth2/v2.0/authorize</a></p> <p><b>Access Token URL</b>: <a href="https://login.microsoftonline.com/%7b%7btenant_id%7d%7d/oauth2/v2.0/token">https://login.microsoftonline.com/{{tenant_id}}/oauth2/v2.0/token</a></p> <p><b>Client</b> <b>ID</b>:
{{client_id}}</p> <p><b>Client</b> <b>Secret</b>: {{client_secret}}</p> <p><b>Scope:</b> <a href="https://graph.microsoft.com/.default">https://graph.microsoft.com/.default</a></p> <p><b>Client Authentication</b>: Send as Basic Auth header</p> <p><b>**Note</b> the default <b>Callback URL</b> is set as <a href="https://oauth.pstmn.io/v1/callback">https://oauth.pstmn.io/v1/callback</a>, which is the URL we configured earlier for the App Registration’s <b>Redirect</b> <b>URI</b>.</p> <p>Leave the rest as default and click on <b>Get New Access Token</b>:</p> <a href="https://drive.google.com/uc?id=1ww81Bic7nLEbcaXP3VytaKWYEOHBGtdu"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Xvt9b4rQiSNnQIwBx1P6HkgSQVS5sBDS" width="244" height="200" /></a><a href="https://drive.google.com/uc?id=1Z1S807RarKEoUm4HfNevznjRjYAawRPh"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1K3ls5KyjEtoHSwfjG-C0SXzG_YlnoVCf" width="244" height="181" /></a> <p>The <b>Get new access token</b> prompt will be displayed along with a browser window directing you to <strong>login.microsoftonline.com</strong>. Proceed to log into Entra ID to retrieve the token:</p> <a href="https://drive.google.com/uc?id=130ygyfnCVUlN-SVZs3yYf-kKotLTDvKQ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ZNlLk1zDMXIA0A6SwtWJrUN_eVEFplC0" width="244" height="141" /></a> <p>The following will be displayed upon successful authentication.
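<p>Behind the scenes, Postman captures this redirect and parses the <b>code</b> query parameter out of the callback URL. Doing the same by hand is straightforward; below is a minimal Python sketch (the callback URL here is a shortened, made-up illustration; a real authorization code is a much longer opaque string):</p>

```python
from urllib.parse import urlparse, parse_qs

# Shortened, made-up callback URL for illustration only; a real redirect
# carries a much longer opaque authorization code value.
callback = (
    "https://oauth.pstmn.io/v1/callback"
    "?code=0.AVEA-example-authorization-code"
    "&session_state=378b4cdc-e06b-4624-9618-047c3f12d5fc"
)

# parse_qs returns each query parameter as a list of values
query = parse_qs(urlparse(callback).query)
auth_code = query["code"][0]
print(auth_code)
```

This is exactly what Postman does for you before exchanging the code for a token.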
Note the URL displayed in the browser:</p> <p><b>https://oauth.pstmn.io/v1/callback?code=0.AVEAC0f0hB4_iUiflasPUUMCT1d_QBUsqGVPhgiMkNV16uVRANQ.AgABAAIAAAAmoFfGtYxvRrNriQdPKIZ-AgDs_wUA9P-CoVXQzw9BTb3FWH1dw6QqHEhjwSMkqJy1t5HGBarSFJY2Ckt4VaBvEF8-wBc3iirPRCA-Eo44cVWUBspus3IsPj521nlt9KqQbNjs0Ub8QMZJbRuocDMoHEyG3v6VGxH-MM_ll5NX34ys5wwTt8v01wjx0ZhAv9I-hUovaikvHFAup5BA19MJNaNmrNZS7tD9PEfw_F-Vs84tGaVJ0NXTfSLZIZnSmlVg1nsnk9-94P3W2x_mzdIwX3eho4p2yFiHK803pZrJKN7BaQnrgK5RfYL1mdYKMUCE7dfH0w1rQsYgREVYK6_KMOEYCzX_Z8D8vMJQl2Nd3xG9zCQPdVY-tcVxmUXNf3R29OlRvAqq3w-ZlGuwTCxIFdvX0ljOt5xWpzVo5oAqR7UQKjwIo-F2aVkJ4KcTIdjbg8TWcX7lOsjBuR8ZkdGIYBVtRTzrWskElP-Dxx6NP_LpDI4VInu7j4Uxs4ebIxMAF9YsGoS0M6vvY6cSvoFZcG_PXJYYk2I4tgqDS7FrEQ9ihk26x86--YRiwEO26tExgsxee0IfJVhuIqFNMqc-3FmGalsXnZxeDzYpppJpZarmAmnr2GjXVELWJhjB5bjt-Wsg9aCHAx_eb8rAKrxveOWEqGMLp9cBp5M&session_state=378b4cdc-e06b-4624-9618-047c3f12d5fc#</b></p> <p>… and the value of <b>code=</b> in the URL. While this is not important for using Postman’s Authorization tab for retrieving the token, this is the code that is passed back to Postman to generate a token for the user who has just logged in:</p> <a href="https://drive.google.com/uc?id=1Nrhl1GQjywn35uBpoKDPpg0PsF77BAUr"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1BZxhA3BYNeA6nW_TPVWSUbmB6PSa0pD8" width="244" height="72" /></a> <p>The token will be displayed:</p> <a href="https://drive.google.com/uc?id=13xt5lw1U7nX-vVqFrY5YG0dC8ltQ2at5"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=16h7bCNkHISxDsueD6Py12bZh1Jpcrhv_" width="244" height="139" /></a><a href="https://drive.google.com/uc?id=1dTdZ2sNaZ7pdZMrgSDQFlnWbB5cKGyTw"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1NYOmogsY-zarUBqWMv4b-0UYXFXIpPJw" width="244" height="137" /></a> <p>Copying the 
token and pasting it into <a href="https://jwt.io/">https://jwt.io/</a> will return details and we can see that this token is issued in the context of the user who has just authenticated. The user details are provided and, because this account is synchronized from an on-premises Active Directory, the <b>onprem_sid</b> is also provided:</p> <a href="https://drive.google.com/uc?id=1ItK7ge2wxyJv7e2gtoooOR99CNQ4fYPZ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1mFHUMX0AzTPChMITemE7zIs8tBPjwnem" width="242" height="244" /></a> <p>This token can then be used to call the <b>Get Users</b> API endpoint as well as the <b>/ME</b> endpoint because the App Registration is now calling on behalf of a user and not itself:</p> <a href="https://drive.google.com/uc?id=1h42ejtApG6Hqcmr7_ppRFmfbj2Yv-Ccg"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1jzHVxaBWXwL4UAB6Tc9rzaRqfYuYe_oO" width="244" height="167" /></a> <p><b><font size="4">Directly calling API endpoint to get Token</font></b></p> <p>Using Postman’s Authorization feature is very easy but manually calling the endpoint requires a few extra steps. As shown in the previous steps, delegated permissions require the user to interactively log in to retrieve a code that will then be used by the application to retrieve a token for the user. There isn’t a way for us to automatically launch a browser when sending a request to the API endpoint, but we can manually retrieve the code by using a browser.</p> <p>To obtain the code for a user identity we’ll need to launch a browser and navigate to a URL with specific parameters.
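<p>Assembling that URL is plain string construction; here is a rough Python sketch using placeholder values (substitute your own tenant ID and client ID; the individual components are broken down next):</p>

```python
from urllib.parse import urlencode

# Placeholder values; substitute your own tenant ID and client ID.
tenant_id = "<tenantID>"
params = {
    "client_id": "<appRegistrationClientID>",
    "response_type": "code",
    "redirect_uri": "https://oauth.pstmn.io/v1/callback",
    "response_mode": "query",
    "scope": "https://graph.microsoft.com/.default",
}

# urlencode percent-encodes each value and joins the pairs with "&"
authorize_url = (
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(authorize_url)
```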
The components we’ll need are:</p> <p><b>Tenant ID</b>: <The tenant ID of Azure AD></p> <p><b>App Registration Client ID</b>: <The client ID of the App Registration></p> <p><b>Redirect URI</b>: <This is where you will use the code generated to retrieve a token for the user></p> <p><b>Scope</b>: <The scope we’ll be using this token for></p> <p>Here is how the browser URL string should be configured:</p> <p><strong>https://login.microsoftonline.com/<font style="background-color: rgb(255, 255, 0);"><tenantID></font>/oauth2/v2.0/authorize?client_id=<font style="background-color: rgb(255, 255, 0);"><appRegistrationClientID></font>&response_type=code&redirect_uri=<font style="background-color: rgb(255, 255, 0);"><URLwhereTheTokenIsReturned></font>&response_mode=query&scope=<font style="background-color: rgb(255, 255, 0);"><scope></font></strong></p> <p>Here is a sample of what the browser URL string would look like:</p> <p><b>https://login.microsoftonline.com/<font style="background-color: rgb(255, 255, 0);">84f4470b-3f1e-xxxx-xxxx-ab0fxxxx024f</font>/oauth2/v2.0/authorize?client_id=<font style="background-color: rgb(255, 255, 0);">154xxxx7-xxxx-4f65-xxxx-8c90d575eae5</font>&response_type=code&redirect_uri=<font style="background-color: rgb(255, 255, 0);">https://oauth.pstmn.io/v1/callback</font>&response_mode=query&scope=<font style="background-color: rgb(255, 255, 0);">https://graph.microsoft.com/.default</font></b></p> <p>Customize the tenant ID and client ID, and use the following for:</p> <p><b>Redirect URI</b>: https://oauth.pstmn.io/v1/callback</p> <p><b>Scope</b>: <a href="https://graph.microsoft.com/.default">https://graph.microsoft.com/.default</a></p> <p>Paste the URL into the browser, authenticate into Entra ID, and a string similar to the following will be displayed in the browser navigation bar upon successfully authenticating as the user:</p> <p><b>https://oauth.pstmn.io/v1/callback?code=<font style="background-color: rgb(255, 255,
0);">0.AVEAC0f0hB4_iUiflasPUUMCT1d_QBUsqGVPhgiMkNV16uVRAOA.AgABAAIAAAAmoFfGtYxvRrNriQdPKIZ-AgDs_wUA9P-gz3Qp7LOLK9NV2bmKKD5T1KzlW2EwwHmIyq-a0AXhwSpil3quUTpV_dW7ruHNnenPskMN4hOoTaMY0lkTzRoMM58Ta95ayQXvNUt-yPttTUkkpYTISCrcSo29LRMX2RcKdx7opKABxxSAOwQAp1D9eIkJlTltkDjS70yv86r1Agl7MDawQx8YYcXUj1tP8wevrvFMcKPJQNPhq83YepJOypSB0nl3EYq7mPZmyjV9BWh3IMnV4t4qbfa4y14iQyMpkSuGLSudZh2JNNfZxirT56Y8B9una2oBjFHMMBvDIdSY-caXUJe1qytIXUs1jFeljMhtv9gKuBiWMMdk8aPUt-7twIpgpBjktdIJ8ihsK1MdGy0jBz0zd96eG8bSJTRGgXkQv87dsX0O761XSP_tqd8EqzthkDGR5iixvuZcBuQeaIs4YJ_tb9pVtWLSufrbmka1Rqz0ghbZiFk1TA1bqKr-S3_LJsoiUaCCKdYH_lohvfRBlzdtHXLhxx8y8iGddw82Mlj-Bzk_gc_N-Vq7YLa71CL1r__aJ89AfM_klt_ouGy_vHx_9BxPc3VRXTrjsuB9tDoqIlb9UA91JMe9oRySMQASCErucmFVZ_G7JJGYUchpNsuu4HP0kzHakuGGLTBJGHRV8gRowj63X4B4QEARL-4Khqo</font>&session_state=c78c420a-d68a-4713-bcf5-e04910dc3902</b></p> <a href="https://drive.google.com/uc?id=1PrB1AzBeEbW8L0VB1qw6ZUXKYEWmSe7c"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=11yM2FOZ2OG59GfsjnjdxqJdHOybZ849h" width="244" height="91" /></a> <p>The code we need for our Postman call is in between <b>code=</b> and <b>&session_state</b> as highlighted above.</p> <p>Navigate to the <b>Body</b> tab, select <b>form-data</b> and fill in the following:</p> <p><b>grant_type</b>: authorization_code</p> <p><b>code</b>: <paste the code retrieved from the browser URL after authentication></p> <p><b>client_id</b>: {{client_id}}</p> <p><b>client_secret</b>: {{client_secret}}</p> <p><b>redirect_uri</b>: https://oauth.pstmn.io/v1/callback</p> <p>Click on the <b>Send</b> button to submit the request and confirm that the access token is returned:</p> <a href="https://drive.google.com/uc?id=1UFV7TpdrekuSCnV04HOfeo7Lmo_WMTby"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1jsmUtCr7Aok1zImZ6xI9vGC0o_gGK7iC" width="244" height="93" /></a> <p>The token
retrieval should complete:</p> <a href="https://drive.google.com/uc?id=1Nyj9LmtOSufC3Wss7Pej-xZSUfx0rqQH"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1I90RIR_yI2yh2ty6JLEyWOnhkLPvuzYb" width="244" height="189" /></a> <p>With the token successfully retrieved, we can navigate to the JWT Token portal: <a href="https://jwt.io/">https://jwt.io/</a> and paste the token into the text field to confirm that the expected User (not App Registration) and roles are configured:</p> <a href="https://drive.google.com/uc?id=1X5VQU4GDwIruKJfhHv2Vc_BHwnSNu-99"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_aAzjosiAbGT6yf5566uh6njn8C8ub4O" width="237" height="244" /></a><a href="https://drive.google.com/uc?id=1TS8vDgDC04DmmPQ5rQV0cRDLamUGNDtJ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1yAdNsV_Wuo1JMJOz1jFa9rXDpdJfbH9Y" width="237" height="244" /></a> <p>With this token successfully retrieved, we can repeat the steps from the previous examples to perform Graph API calls.</p> <p>I hope this provides a bit more clarity on the differences between application and delegated permissions as well as how to set up Postman to perform both types of authentication and authorization. </p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-29619755484828027852023-10-22T17:49:00.001-04:002023-10-25T04:42:39.564-04:00Deploy a ChatGPT service with Azure OpenAI Service in 6 minutes with PowerShell<p>OpenAI’s ChatGPT has been one of the most talked about services since its launch on November 30<sup>th</sup>, 2022 amongst my professional contacts as well as personal friends.
What this Chat Generative Pre-trained Transformer can perform is truly remarkable and opens up so many possibilities in the future. Many of my colleagues have asked me whether I’ve tested it and why I haven’t written any blog posts since Azure released the OpenAI service preview in March 2023. The short answer is that I have performed some testing with it over the last few months but haven’t been able to commit the amount of time I want due to my busy work schedule. I finally had a bit of a breather over the past few weeks so I’ve managed to really try out the following:</p> <ol> <li>Pairing with Cognitive Search in a RAG (Retrieval-Augmented Generation) architecture to augment the ChatGPT LLM (Large Language Model) with data in an Azure Storage Account </li> <li>Deploying front-end UI solutions for the OpenAI service </li> <li>Diving deep into how to secure Azure OpenAI, Cognitive Search, and data sources with private endpoints and shared private access </li> </ol> <p>It’s amazing how much material there is for #1 and #2 but not as much as I’d like for #3. There is so much Azure AI Services can do and I look forward to the projects to come in the following years.</p> <p>The purpose of this blog post is to show just how fast and easy it is to deploy an <strong>Azure OpenAI service</strong> with a front-end UI for a private <strong>ChatGPT</strong> service where internal employees of organizations can safely enter questions with sensitive data.
Microsoft is very clear on the usage of the inputs entered in the prompt (<a href="https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy">https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy</a>): </p> <p><strong>Your prompts (inputs) and completions (outputs), your embeddings, and your training data:</strong></p> <ul> <li><strong>are NOT available to other customers.</strong></li> </ul> <ul> <li><strong>are NOT available to OpenAI.</strong></li> <li><strong>are NOT used to improve OpenAI models.</strong></li> <li><strong>are NOT used to improve any Microsoft or 3rd party products or services.</strong></li> <li><strong>are NOT used for automatically improving Azure OpenAI models for your use in your resource (The models are stateless, unless you explicitly fine-tune models with your training data).</strong></li> <li><strong>Your fine-tuned Azure OpenAI models are available exclusively for your use.</strong></li> </ul> <p><strong>The Azure OpenAI Service is fully controlled by Microsoft; Microsoft hosts the OpenAI models in Microsoft’s Azure environment and the Service does NOT interact with any services operated by OpenAI (e.g. ChatGPT, or the OpenAI API).</strong></p> <p>This will put many organizations at ease as I’ve been to one too many dinner parties where I’ve heard people talk about entering data into OpenAI’s ChatGPT to write a letter to HR. I don’t even want to ask what they were entering in there and what else it has been used for.</p> <p>In any case, I took some time to put together a <b>PowerShell script</b> that prompts for a few questions about what to name the resource group containing all the resources to be created, the name of the Azure OpenAI instance, the LLM model to use, what Azure subscription to use, and it takes care of the rest (Container App, Log Analytics Workspace, etc). I timed the duration of the script and it took <b>5 minutes and 32 seconds</b> to run. 
Yes, I understand this is an imperative run rather than declarative. I’m a huge supporter of Infrastructure as Code but I needed something that would allow me to run it in any Azure environment to quickly build a demo with all components in a Resource Group so I can easily tear it down by simply deleting the RG. </p> <p>The deployment is very basic with no private endpoints as I will reserve that for a future post. Here is the simple topology:</p> <a href="https://drive.google.com/uc?id=1-m4RxVz1xIy14P2RMmRx3bcPaZVKqhxc"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1BvXwRwiCuY_HTe-_rOM9dJFSl-JEHGHR" width="644" height="120" /></a> <p>With that, let’s get into it now.</p> <p><b><u><font size="5">Prerequisites</font></u></b></p> <p>As of October 22, 2023, you may see the <b>Azure OpenAI</b> service as an option in the <b>Azure AI Services</b> blade but attempting to create the service will display the following message:</p> <a href="https://drive.google.com/uc?id=1wvUbL0ZWeyqNgB1_JoeiCJb2RBeOk56B"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=17wNweJYxoSCm0tOHC68rALYaDYz3sHKZ" width="244" height="175" /></a> <p><b>Azure OpenAI Service is currently available to customers via an application form. The selected subscription has not been enabled for use of the service and does not have quota for any pricing tiers.
Click here to request access to Azure OpenAI service.</b></p> <a href="https://drive.google.com/uc?id=1tRGx-GRudVceQxhUyv7k3Bu34X1GgvHG"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1dg0zZlkF-yJHn6f8wSkyMROJ1kb_ijun" width="244" height="201" /></a> <p>Clicking on the link will bring you to a Microsoft Form with questions about who you are, why you want to use the service, and what features you would want to turn on:</p> <a href="https://drive.google.com/uc?id=19DZKJ4NvMDMjHMI230hoPzadGsZSpfiD"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1a0n5vXNp91DxA06dqvLAqExFjNnmnQXo" width="227" height="244" /></a> <p><strong>**I’ve blocked out the content in the screenshot of the form as I am unsure if posting the verbiage is in violation of Microsoft’s policy.</strong></p> <p>You’ll need to fill out the form, submit it, and receive an approval that is indicated to take up to 10 business days. 
My form submission took only a day, but I assume this can vary, so if you intend on using the service, fill out the form ahead of time so you don’t have to wait when you actually want to deploy.</p> <p><b><u><font size="5">Using a PowerShell script to deploy all the services in 6 minutes (or less)</font></u></b></p> <p>The PowerShell script I put together can be retrieved from my GitHub repository here: <a href="https://github.com/terenceluk/Azure/blob/main/AI%20Services/Deploy-Azure-OpenAI-with-Chatbot-UI.ps1">https://github.com/terenceluk/Azure/blob/main/AI%20Services/Deploy-Azure-OpenAI-with-Chatbot-UI.ps1</a></p> <p>The script is meant to be executed from the console and it will ask the user to:</p> <ol> <li>Select a subscription found in the tenant </li> <li>Provide a name for a new Resource Group </li> <li>Provide a name for the OpenAI instance </li> <li>Select a model from the options </li> </ol> <a href="https://drive.google.com/uc?id=1HddlX_Ku8-7QM2v-ZwUdHBokAnyl1QwE"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1r-R6fNtTjwdao1i9z6nNUeC9SXtTpc8J" width="244" height="114" /></a> <p>The rest of the components such as the Container App and Log Analytics will be automatically named (derived from the instance name) and deployed through the remaining script.
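<p>The authoritative naming logic lives in the script itself; purely to illustrate the derive-from-instance-name pattern, here is a small Python sketch (the suffixes below are hypothetical, not the script’s actual naming convention):</p>

```python
# Hypothetical suffixes for illustration only; the actual naming scheme is
# defined in the Deploy-Azure-OpenAI-with-Chatbot-UI.ps1 script linked above.
def derive_names(openai_instance_name: str) -> dict:
    return {
        "container_app": f"{openai_instance_name}-app",
        "log_analytics": f"{openai_instance_name}-log",
    }

print(derive_names("my-openai-demo"))
```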
At the end of a successful run, the browser will automatically launch and the following screen will be displayed:</p> <a href="https://drive.google.com/uc?id=1O2zVKSN0Bnu3C7xvIcnBn6MrMf6Mkems"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1hW-2voDjfpiDHbt1Zy_2hldmTMiF9XM_" width="244" height="132" /></a><a href="https://drive.google.com/uc?id=174rgsxeD4Qjkchx17tZM-relvzCz44o3"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=14ZY0UFYi75MRfR3jJ1qy6rroIzHet3ew" width="244" height="133" /></a> <p><b><u><font size="5">Azure Resources Deployed</font></u></b></p> <p>All of the resources for the solution are meant to be deployed into a single resource group for ease of cleanup if it is used for a demo:</p> <a href="https://drive.google.com/uc?id=1LM9d7TGy8APdssfB6AHdhhwTlxOayuW6"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1-6LfN0OEfq3syF9en1kCyphyXmNsobfD" width="244" height="72" /></a> <p>The following are screenshots of the resources:</p> <p><a href="https://drive.google.com/uc?id=1LCZKn2M-J2_eds7WjWlG5h11mfKSOE8N"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=15eSxyg2y9NmiqRNXuNXULlmCQmTeYzKe" width="244" height="96" /></a></p> <p><a href="https://drive.google.com/uc?id=1ajNoD1sg3HDGjcUZ7cPAmXLZqPEc6Guh"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1iZaZOQ8_lMUnwL3OrdJLsl-ScChJp6_O" width="244" height="77" /></a></p> <p><a href="https://drive.google.com/uc?id=1UJ7zkS1cl-G3a_CSqZ914x73X0Nu-loz"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1MFEhLCHTQgJvE9NwsxoRZgDUKpCitdPZ" width="244" 
height="91" /></a></p> <p>Note that the script does not place the value of the Azure OpenAI key directly into the environment as a variable; rather, it stores it as a secret that the environment variable references:</p> <p><a href="https://drive.google.com/uc?id=1FkFTOKVJfmEtt6gallDwf7al8qtsGOyG"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1P9Y6BZsFFjUsrk5PrHeL5DDbqxx4nvw5" width="244" height="77" /></a></p> <p><a href="https://drive.google.com/uc?id=1F1o8wf-AJV76uBcZqXxGn_z3HLWW1DjI"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1qJjPdeYEfgsmS86Sc0GJ_k4k6BaHMSqp" width="244" height="119" /></a></p> <p>I did not create a custom health probe so the one created is the default:</p> <a href="https://drive.google.com/uc?id=1tyPN3rEaGCbaU8i992AnjYLpQSu5CuIL"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1C60hv1E0Y8b1dY7AymMgLnPcqd4LWk5Z" width="244" height="160" /></a> <p><b><u><font size="5">Securing the ChatGPT UI portal with authentication</font></u></b></p> <p>One of the components I’m still working on is using Bicep to configure the Container App with Microsoft as an identity provider so the portal prompts the user for credentials and requires them to log in with a valid account in the tenant’s Entra ID / Azure AD before getting into the portal.
If you’d like to turn this on after the script deploys the services, simply navigate to the Container App’s <b>Authentication</b> blade, click on <b>Add identity provider</b>:</p> <a href="https://drive.google.com/uc?id=1I-yue0pFm6Jv4j427l1E2hhO4NZbRkR5"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1LTBLwn7cebwetg95YF4MUWu1yJMdjycQ" width="244" height="109" /></a> <p>Select <b>Microsoft</b> as the <b>Identity provider</b>: </p> <a href="https://drive.google.com/uc?id=1-15xya86JWo4wRp8Ws-zEvGH9uOHarh7"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1hyGGWENCyqo83kp8Pz9Cbw6NHp0K1NBS" width="244" height="217" /></a> <p>You can leave the settings as default and proceed to create the identity provider:</p> <a href="https://drive.google.com/uc?id=1d6KQawk5cFy96ONx4mhcflQXj86MK2to"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1t9JaQbcB6NGKwd98Q17e_i4ADDhkS7ZA" width="187" height="244" /></a> <p>This will create an <b>App Registration</b> in the tenant’s <strong>Entra ID / Azure AD</strong> for the <b>Container App</b> to authenticate the user:</p> <a href="https://drive.google.com/uc?id=1aKXTp8qt6DyYvcAvP7dCtt-hfqa-PyfM"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1WXDLRkYtArk4SgGioPJCwGgBPk16F50D" width="244" height="95" /></a> <p>Note that you will need to grant consent to the Container App’s <b>App Registration</b> in portal.azure.com or perform the consent upon first logging in:</p> <a href="https://drive.google.com/uc?id=1c3HXdRWhMN2RqUrdrOV2BIDQZGddUZX5"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1BVgDsX4QQi889EIpDN10e3vc9QOuN3j-" width="175" height="244" /></a> 
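<p>If you prefer to grant tenant-wide admin consent up front rather than rely on the first-login prompt, the Microsoft identity platform exposes a documented <b>adminconsent</b> endpoint; here is a sketch of building that URL with placeholder IDs:</p>

```python
# Placeholder IDs; substitute the tenant ID and the client ID of the App
# Registration that was created for the Container App.
tenant_id = "<tenantID>"
client_id = "<containerAppClientID>"

admin_consent_url = (
    f"https://login.microsoftonline.com/{tenant_id}/adminconsent"
    f"?client_id={client_id}"
)
print(admin_consent_url)
```

Opening this URL as a Global Administrator grants consent for all users in the tenant at once.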
<p><b><u><font size="5">Credits</font></u></b></p> <p>I want to give a huge thanks to Mckay Wrigley (<a href="https://github.com/mckaywrigley">https://github.com/mckaywrigley</a>) for developing and sharing his <b>chatbot-ui</b> docker container (<a href="https://github.com/mckaywrigley/chatbot-ui">https://github.com/mckaywrigley/chatbot-ui</a>) for the world to use. If you search the internet for deployment demonstrations, you are bound to see 9 out of 10 demos using his Chatbot UI. I spent quite a bit of time using Postman to interact with the Azure OpenAI service APIs and, as I am not a developer, it would take me quite a bit of time to develop something half as great as Mckay’s.</p> <p><b><u><font size="5">Final Remarks</font></u></b></p> <p>One of the behaviors I noticed during the creation and deletion of the services is that when an Azure OpenAI instance is deleted, it is dropped into a recycle bin-like location, and if you decide to deploy another instance with the same name then it will fail. If you have deleted an instance and want to reuse the same name, use <b>Manage deleted resources</b> in the <b>Azure OpenAI </b>blade to locate and purge the instance. From what I can tell, the purge is instant and you can proceed to redeploy a new instance with the same name.</p> <a href="https://drive.google.com/uc?id=1k3GTeEJj1FvTZ9NG1ulrwKrlEltsfayO"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Bnicln68u3_2x3inOXcnMcUZjqjlovDP" width="244" height="101" /></a> <p>I hope this helps anyone out there who has been looking to test this great service offering but hasn’t had the time to get started. There are many other great posts I’d like to write about <b>Cognitive Search</b> and the “under the hood view” of the traffic flow but I will save that for another day.
Happy chatting!</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-13864089484435296812023-10-19T21:21:00.001-04:002023-10-19T21:23:33.078-04:00Generating unique IP visits rendered into a column chart with kusto query for Azure Storage Account hosted website published with an App Gateway<p>I recently worked with a client who needed to quickly host a static website requiring zero dynamic content and little to no updates for years. Given the short runway available and the team being cost conscious, we opted to use the <b>Static website</b> feature of an <b>Azure Storage Account</b> to publish the website. Other than having to deal with the [I think] widely known <b>WebContentNotFound </b>issue<b> </b>when reloading pages, the service provided an adequate way of hosting the website. There was already an App Gateway in the environment so it was used to provide custom domain and WAF protection capabilities. </p> <p>A few weeks into the launch of the website, I was asked to generate some statistics on the website’s visits. Given that I had the <b>Diagnostics settings</b> for the <b>App Gateway </b>set up to send <b>allLogs</b> to a <b>Log Analytics</b> <b>Workspace</b>, and that the logs captured on the <b>Storage Account</b> wouldn’t provide the real public IP addresses of the inbound traffic, I decided to use KQL to produce the requested report.</p> <a href="https://drive.google.com/uc?id=1C0MipFcvXD6yw24Rz2LLIK6aFrQT4jkn"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1wcERqRM9eeAssti-oSTbIb_61VrLiiFn" width="244" height="147" /></a> <p>The following are two reports I generated and thought I’d share in case anyone is looking for this.</p> <p><strong>Review visits over a range of days with hours as scale <br /></strong>This report groups unique IP addresses into one-hour bins over the start and end
date specified.</p> <a href="https://drive.google.com/uc?id=1WvZ7-4pnD7CQ8DMa-vZ7b1xeOev513pN"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=10LGryHuz5RMX3JjHUrnrLxLkj9C64WR3" width="244" height="127" /></a> <p><strong>Review visits over a range of days <br /></strong>This report groups unique IP addresses for each day over the start and end date specified.</p> <a href="https://drive.google.com/uc?id=1TZSLua_54WdepQwgzKpY9e1ebWjnetCJ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1hlS_UBomXBXYzifY_sJAwL_2p86qk0Ov" width="244" height="127" /></a> <p>The queries can be retrieved from my GitHub repo: <a href="https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Azure-App-Gateway-Website-Stats.kusto">https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Azure-App-Gateway-Website-Stats.kusto</a></p> <p>Hope this helps anyone who needs this data. The query can easily be changed for any backend service hosting the website and modified for different results.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-79558024628443076692023-10-06T06:28:00.001-04:002023-10-10T05:29:41.604-04:00Using PowerShell to create multiple Azure Storage Account Containers with Metadata using a list on an Excel spreadsheet<p>I recently worked on a project where we had to create hundreds of containers in multiple Azure Storage Accounts because we needed to use the storage account SFTP service, and in order to jail users into their own directories, each local SFTP user account needed to have its home folder set to its own container. This may change in the future but working with this requirement meant many containers had to be created.
In addition to creating containers, I also wanted each to have metadata added for the organization that the container belonged to, so to reduce the repetitive manual labour, I decided to write a script.</p> <p>The script I created can be found in my GitHub repo: <a href="https://github.com/terenceluk/Azure/blob/main/PowerShell/Create-Storage-Account-Container.ps1">https://github.com/terenceluk/Azure/blob/main/PowerShell/Create-Storage-Account-Container.ps1</a></p> <p>The format of the spreadsheet should look as such:</p> <p><a href="https://drive.google.com/uc?id=19q1AQUgeq48ieF6H_XEM3pig16P5JXGd"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1sQu1WDqSh7G9RMYk76zjoKEimiGbYVOR" width="244" height="121" /></a></p> <p>To handle scenarios where new storage account containers are added at a later time after the script has been executed once, the code will check and skip the creation of the container if it already exists.</p> <p>One scenario where I’ve noticed this script will fail is when there are non-ASCII characters in the metadata value. These include accented characters from languages such as French (é, É) or the Microsoft Word dash/hyphen character. I don’t think there is a way to have the PowerShell cmdlet accept these characters.</p> <p>Below is an example of what happens when these non-ASCII characters are encountered and the metadata value add fails. Note that the container does still get created.</p> <p><b>Container 17689 has been created.</b></p> <p><b>MethodInvocationException: </b></p> <p><b>Line |</b></p> <p><b> 40 | $container.BlobContainerClient.SetMetadata($metadata, $null) …</b></p> <p><b>| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~</b></p> <p><b> | Exception calling "SetMetadata" with "2" argument(s): "Retry failed after 6 tries. Retry settings can be adjusted in ClientOptions.Retry or by configuring a custom retry policy in ClientOptions.RetryPolicy. 
(Request headers must contain only ASCII characters.) (Request headers must contain only ASCII characters.) (Request headers must contain only ASCII characters.) (Request headers must contain only ASCII characters.) (Request headers must contain only ASCII characters.) (Request headers must contain only ASCII characters.)"</b></p> <p><b>Container 17689 has been created.</b></p> <a href="https://drive.google.com/uc?id=1_tsiNUt_6rICQrDAkn3guKfVUed8x6MF"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1dL1cLzNukhfw7MuXWtSLnL1v5TdW805B" width="244" height="45" /></a> <p>Hope this helps anyone who might be looking for a script like this.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-34066677543592476412023-09-28T04:42:00.001-04:002023-09-28T06:26:12.137-04:00Using an Azure Function App to automatically Start and Stop (Deallocate) Virtual Machines based on tags<p>One of the most common questions I’ve been asked in the past when it comes to cost management in Azure is what the options are for powering off and on virtual machines based on a schedule. A quick Google search for this would return a mix of Azure Automation Accounts, the Auto-Shutdown feature blade within the VM (which only powers off but does not power on), Logic Apps, the new Automation Tasks, and Azure Functions. Each of these options has its advantages and disadvantages, and an associated cost to execute. A few of them have limitations on the capabilities available for this type of automation. As much as I like Logic Apps because of their visual, little-to-no-code capabilities, I find it a bit cumbersome to configure each step via a GUI, and the flow of a Logic App can quickly become difficult to follow when there are multiple branches of conditions.
My preference for most automation is Function Apps because they allow me to write code to perform anything I need. With the above described, it’s probably not a surprise that this post is going to demonstrate this type of automation with an Azure Function App.</p> <p>The scenario I want to provide is an ask from a client who wanted the following:</p> <ol> <li>Auto Start virtual machines at a certain time </li> <li>Auto Deallocate virtual machines at a certain time </li> <li>Capability to set start and deallocate schedules for weekdays and weekends </li> <li>Capability to indicate virtual machines should either be powered on or deallocated over the weekend (they had some workloads that did not need to be on during the week but had to be on over the weekend) </li> <li>Lastly, and most important, they wanted to use Tags to define the schedule because they use Azure Policies to enforce tagging </li> </ol> <p>There are plenty of scripts available on the internet that provide most of the functionality, but I could not find one that allowed this type of control over the weekend, so I spent some time to write one. </p> <p>Before I begin, I would like to explain that the script I wrote uses the <b>Azure Resource Graph</b> to query the status of the virtual machines, their resource groups, and their tags, because having <b>ARM</b> interact with the <b>resource providers</b> can take a very long time compared to interacting with the <b>Resource Graph</b>, which is much faster. Those who have used the <b>Resource Graph Explorer</b> in the Azure portal will recognize the KQL query I used to retrieve information.
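A query along the following lines retrieves each VM's power state and tags through the Resource Graph (a hedged sketch of the approach; the projection in the actual script may differ):

```powershell
# Sketch: pull VM power state and tags from Azure Resource Graph.
# Requires the Az.ResourceGraph module; field names follow the standard
# virtualMachines schema but may differ from those used in the script.
$query = @"
resources
| where type =~ 'microsoft.compute/virtualmachines'
| extend powerState = tostring(properties.extended.instanceView.powerState.code)
| project name, resourceGroup, subscriptionId, powerState, tags
"@
$vms = Search-AzGraph -Query $query -First 1000
```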
What’s great about this approach is that we can test the query directly in the portal:</p> <a href="https://drive.google.com/uc?id=1fRdfKi3vMRmWUJJCIcZ0UqtKmBtaa-Ls"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1isxUi7YXeAL2bhk7Luc-84L02b3i1kZQ" width="106" height="90" /></a><a href="https://drive.google.com/uc?id=1L9a8wB7oKusdjR2HEhOxen0LXNJGATcG"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1rsPvT5klS_fDWcqFj--uEvKa4LLe53dQ" width="244" height="133" /></a> <p>The design of the tagging used to control the power on, deallocate, and scheduling of the virtual machines is as follows:</p> <table cellspacing="0" cellpadding="0" border="1"><tbody> <tr> <td valign="top" width="168"> <p><b>Tag</b></p> </td> <td valign="top" width="174"> <p><b>Value</b></p> </td> <td valign="top" width="116"> <p><b>Example</b></p> </td> <td valign="top" width="281"> <p><b>Purpose</b></p> </td> <td valign="top" width="274"> <p><b>Behavior</b></p> </td> </tr> <tr> <td valign="top" width="168"> <p><b>WD-AutoStart</b></p> </td> <td valign="top" width="174"> <p>Time in 24 hour format</p> </td> <td valign="top" width="116"> <p>08:00</p> </td> <td valign="top" width="281"> <p>Defines the start of the time when the VM should be powered on during the weekday</p> </td> <td valign="top" width="274"> <p>This condition is met if the time is equal to or past the value for Monday to Friday</p> </td> </tr> <tr> <td valign="top" width="168"> <p><b>WD-AutoDeallocate</b></p> </td> <td valign="top" width="174"> <p>Time in 24 hour format</p> </td> <td valign="top" width="116"> <p>17:00</p> </td> <td valign="top" width="281"> <p>Defines the start of the time when the VM should be powered off during the weekday</p> </td> <td valign="top" width="274"> <p>This condition is met if the time is equal to or past the value for Monday to Friday</p> </td> </tr>
<tr> <td valign="top" width="168"> <p><b>WE-AutoStart</b></p> </td> <td valign="top" width="174"> <p>Time in 24 hour format</p> </td> <td valign="top" width="116"> <p>09:00</p> </td> <td valign="top" width="281"> <p>Defines the start of the time when the VM should be powered on during the weekend</p> </td> <td valign="top" width="274"> <p>This condition is met if the time is equal to or past the value for Saturday and Sunday</p> </td> </tr> <tr> <td valign="top" width="168"> <p><b>WE-AutoDeallocate</b></p> </td> <td valign="top" width="174"> <p>Time in 24 hour format</p> </td> <td valign="top" width="116"> <p>15:00</p> </td> <td valign="top" width="281"> <p>Defines the start of the time when the VM should be powered off during the weekend</p> </td> <td valign="top" width="274"> <p>This condition is met if the time is equal to or past the value for Saturday and Sunday</p> </td> </tr> <tr> <td valign="top" width="168"> <p><b>Weekend</b></p> </td> <td valign="top" width="174"> <p>On or Off</p> </td> <td valign="top" width="116"> <p>On</p> </td> <td valign="top" width="281"> <p>Defines whether the VM should be on or off over the weekend</p> </td> <td valign="top" width="274"> <p>This condition should be set if a weekday schedule is configured and the VM needs to be on over the weekend, as it is the condition used to turn the VM back on after the power off on a Friday</p> </td> </tr> </tbody></table> <p>The following is an example of a virtual machine with tags applied:</p> <a href="https://drive.google.com/uc?id=1n2Q1HdyhMw4xBcFaYeHUeEx3RDT_W48t"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=17xkc8eYFjnD-M7yLg5SjkSE_nKPFTo_z" width="244" height="56" /></a> <p>With the explanation out of the way, let’s get started with the configuration.</p> <p><b><font size="5">Step #1 – Create Function App</font></b></p> <p>Begin by creating a <b>Function App</b> with the <b>Runtime</b> <b>stack</b> <b>PowerShell
Core</b> version <b>7.2</b>. The hosting option can either be consumption, premium, or App Service Plan, but for this example, we’ll use consumption:</p> <a href="https://drive.google.com/uc?id=1WY8xhIxWQHFGUvzpW2ope7XWuNNyN9jI"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1eYlmHt6-Lkw2H-EWsnnb_JiKZ2CODy74" width="149" height="244" /></a> <p>Proceed to configure the rest of the properties of the <b>Function</b> <b>App</b>:</p> <a href="https://drive.google.com/uc?id=1IO1Zbx4DD4VQgEDb_rpLmYWUcaAbqhJq"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Mz44gnNVWQkvv53KoOvH89eyOFaKdiLq" width="244" height="191" /></a><a href="https://drive.google.com/uc?id=10nJiqfiNT8bWhYKlTaOeMYuMRzxnc4EP"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=13dMIBgjPc3gCnIbjzGFMjRp0cdAsBJwd" width="236" height="244" /></a> <p>I always recommend turning on <b>Application Insights</b> whenever possible as it helps with debugging, but it is not necessary:</p> <a href="https://drive.google.com/uc?id=1TOFZPd5LBBpIqIWlZoEStmCCui2Mm54-"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ed3J15_u1xZYAdzXk5zbYEt4mjXm7YGi" width="241" height="244" /></a> <p>You can integrate the <b>Function</b> <b>App</b> with a GitHub account for CI/CD, but for this example we won’t be enabling it:</p> <a href="https://drive.google.com/uc?id=1GtMESMfPk3nQTOJNNJDdAaJU4UCY-Zjp"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1JTQ7gif2hNUCSGqZaf5-pebxXRKexEUc" width="188" height="244" /></a> <p>Proceed to create the <b>Function App</b>:</p> <a href="https://drive.google.com/uc?id=1weo3CvVqzqw9m7M0oR9h64OnGdQ_wJlE"><img
title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Gl24sWTfLTkL65Gk0TpWehNzrsBZ31J5" width="141" height="244" /></a> <p><b><font size="5">Step #2 – Turn on System Assigned Managed Identity and Assign Permissions</font></b></p> <p>To avoid managing certificates and secrets, and to enhance the security posture of your Azure environment, it is recommended to use managed identities wherever possible. Proceed to turn on the <b>System assigned managed identity</b> in the Identity blade of the <b>Function App</b> so that an <b>Enterprise Application</b> object is created in <b>Azure AD</b> / <b>Entra ID</b>, which we can then use to assign permissions to the resources in the subscription: </p> <a href="https://drive.google.com/uc?id=1rv7rpPMtsTtygIbTt4ai6O29hR27vpqI"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1DgmFw761E29ADMevqz9OE3zkElIMg_R-" width="244" height="183" /></a> <p>You’ll see an <b>Object (principal) ID</b> created for the <b>Function App</b> after successfully turning on the <b>System assigned</b> identity:</p> <a href="https://drive.google.com/uc?id=1M7y_Y_kZA8o2qK33DO8nJAiKbKasbrdo"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1SbltdVUVRZmuqjSTG-vfeDFGSJVffGbS" width="244" height="179" /></a> <p>Browsing to the Enterprise Applications in <b>Entra ID</b> will display the identity of the <b>Function App</b>:</p> <a href="https://drive.google.com/uc?id=1e7z0Nim4qiuLlLSYvRYtMu2CYFp20HsY"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1LEsXfAJe78gwopTu3vIHzPqjVVlHU_rU" width="244" height="49" /></a><a href="https://drive.google.com/uc?id=1ZG3Rpfs0-56QIhr545DHsp7o3JEmZqi1"><img title="image" style="display: inline; background-image: none;" border="0"
alt="image" src="https://drive.google.com/uc?id=1ir5cccBTYil8aey4lSoL33Zbu_RKSl9O" width="244" height="143" /></a> <p>With the system managed identity created for the Function App, we can now proceed to grant it permissions to the resources it needs access to. This example will assign the managed identity as a <b>Virtual Machine Contributor</b> to the subscription so it can perform start and deallocate operations on all the virtual machines. Navigate to the subscription’s <b>Access control (IAM)</b> blade and click on <b>Role assignments</b>:</p> <a href="https://drive.google.com/uc?id=1d2XULhxHSNLrrDY8MrC2xYodydTMlrxD"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1caq2QekYGH_zL5KAzgIf6sgkWh3TW7MJ" width="244" height="104" /></a> <p>Proceed to select the <b>Virtual Machine Contributor</b> role:</p> <a href="https://drive.google.com/uc?id=1REf7BXKb5-ZFn1AkzbNsPP0v7M5uOrir"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1qOY-_37vU9Ecceng9wBh0UCuLFmL3LYw" width="244" height="110" /></a> <p>Locate the <b>Function App</b> for the managed identity and save the permissions:</p> <a href="https://drive.google.com/uc?id=1bUpTEWc6ELffmZsGkHygzT0zyxFIpIQZ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=16h6XDRuTjZLEXG6lhuf3kM5lCMdom3Ri" width="244" height="128" /></a> <p><b><font size="5">Step #3 – Configure the Function App</font></b></p> <p>It’s currently September 27, 2023 as I write this post and I noticed that the <b>Function App</b> page layout and blades have changed. 
The <b>Functions</b> blade under the <b>Functions</b> option no longer appears to exist, so create the function by selecting <b>Overview</b> and, under the <b>Functions</b> tab, clicking <b>Create in Azure Portal</b>:</p> <a href="https://drive.google.com/uc?id=1Hpoe6WLIDJTF4ybGDjZoysEesprEq634"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1OuQaV3_7n7JV0souXhQHiqpddxBt2aqF" width="244" height="142" /></a> <p>The type of function we’ll be creating is the <b>Timer Trigger</b>, and the <b>Schedule</b> will be configured as the following <b>CRON</b> expression:</p> <p><b>0 0 * * * 0-6</b></p> <p>The above is a six-field <b>NCRONTAB</b> expression (the first field is seconds), which runs the function at the top of every hour, on every day of every month, Sunday to Saturday (every day, every hour).</p> <a href="https://drive.google.com/uc?id=1NxWyQm1OQQr50Z0R6QcNS3C3qCdsihuS"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1aeJgs9G0VvqowWchH3U3xIO9uUsiMHJC" width="244" height="158" /></a> <p>Once the Function is created, proceed to click on <b>Code + Test</b>:</p> <a href="https://drive.google.com/uc?id=1X_RTQaU7get6iPHGqb_EMAQB5yF4F-cY"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Z8BgsuLBh3FIn-uxSvk1sd3srG8t8EMP" width="244" height="166" /></a> <p>The code for this Function can be copied from my GitHub repo at the following URL: <a title="https://github.com/terenceluk/Azure/blob/main/Function%20App/Start-Stop-VM-Function-Based-On-Tags.ps1" href="https://github.com/terenceluk/Azure/blob/main/Function%20App/Start-Stop-VM-Function-Based-On-Tags.ps1">https://github.com/terenceluk/Azure/blob/main/Function%20App/Start-Stop-VM-Function-Based-On-Tags.ps1</a></p> <p>Make sure you update the <b>subscriptions</b> list and the <b>timezone</b> you want this script to use for the <b>Tags</b>:</p> <a
href="https://drive.google.com/uc?id=1AMdMEBGIejjzmP0-29KL8M6XavIx9A_6"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1F_ZUt0ZnuK3kO9egV5fPeROxdj3Q5AMt" width="244" height="128" /></a> <p>Save the code and navigate back out to the <b>Function App</b>, select <b>App Files</b>, then select the <b>requirements.psd1</b> in the heading to load the file. Note that the default template of this file has everything commented out. We can simply remove the hash character in front of <b>'Az' = '10.* '</b> to load all Az modules, but I’ve had terrible luck in doing so as the process of downloading the files would cause the <b>Function App</b> to time out. What I like to do is indicate exactly what modules I need and specify them.</p> <a href="https://drive.google.com/uc?id=1G5hrmHJDE2h8WfOP1kWEUAhwACiTZYLI"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=17g4_fxeq0n2O6zwBrmelxJj45zxOAkXM" width="244" height="110" /></a> <p>The following are the modules my PowerShell script uses, so proceed to copy and paste the module requirements into the <b>requirements.psd1</b> file:</p> <p><b> 'Az.Accounts' = '2.*'</b></p> <p><b> 'Az.Compute' = '2.*'</b></p> <p><b> 'Az.ResourceGraph' = '0.*'</b></p> <a href="https://drive.google.com/uc?id=1aTn2HI2G4EU4KPzE6M7yUKblxwht-Yee"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1vEGhtrKEYlMWMF8nleNY8pxIxe4HXH_i" width="244" height="104" /></a> <p>Save the file and then switch to the <b>host.json</b> file:</p> <a href="https://drive.google.com/uc?id=1TtizkoTi7eFd8VZCQouW8nXD7I6LTBmc"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1miRrtdCoz1z3lVyoGGU9TsKwpW1bSdnm" width="244" height="138" /></a> <p>As specified in the following Microsoft
documentation: <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-host-json#functiontimeout">https://learn.microsoft.com/en-us/azure/azure-functions/functions-host-json#functiontimeout</a>, we can increase the default timeout of a <b>consumption</b> based <b>Function App</b> by adding the following attribute and value into the file:</p> <p><b>{</b></p> <p><b>&nbsp;&nbsp;"functionTimeout": "00:10:00"</b></p> <p><b>}</b></p> <p>Proceed to add the value to handle large environments that may cause the Function App to exceed the default 5-minute limit:</p> <a href="https://drive.google.com/uc?id=1azFjlhOUL2qYcBSoCVOHzbiIdXYh4wTW"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1CGH1ZJkinV4djDnhOa7Kq7ZUQhnbd4PD" width="244" height="135" /></a> <p>Save the file and navigate back to the <b>Function</b>, <b>Code + Test</b> blade, and proceed to test the <b>Function</b>:</p> <a href="https://drive.google.com/uc?id=1oh0wiroP11XrvFo46C77yiyMBdfCMqkf"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1r4COEJk73SwBXFj1BxMit9tW1cfzwf5n" width="244" height="128" /></a> <p>The execution should return a <b>202 Accepted</b> <b>HTTP</b> <b>response</b> <b>code</b> and the virtual machines should now be powered on and off at the scheduled times:</p> <p><a href="https://drive.google.com/uc?id=1w7TG_wmN6SP6yh7K2l_q32M_WfNpuFwY"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=15L5OlaDhWe1IuO8aHCldeZdbxq3VCp5-" width="244" height="126" /></a></p> <p>I hope this blog post helps anyone who might be looking for a script that can handle weekend scheduling of VM start and stop operations.
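To illustrate how tags like these can drive the start/deallocate decision, here is a simplified sketch of the evaluation logic (a hypothetical helper, not the exact code from the repo; it assumes the weekend tags use a WE- prefix and that the current time has already been converted to the configured time zone):

```powershell
# Sketch: map a VM's schedule tags to a desired action for the current hour.
# Hypothetical helper - tag names and precedence are assumptions, not the
# exact logic of the published script.
function Get-DesiredVmAction {
    param (
        [hashtable]$Tags,
        [datetime]$Now
    )
    $isWeekend = $Now.DayOfWeek -in 'Saturday', 'Sunday'
    $prefix = if ($isWeekend) { 'WE' } else { 'WD' }

    $startTag = $Tags["$prefix-AutoStart"]
    $stopTag  = $Tags["$prefix-AutoDeallocate"]

    # Once the deallocate time has passed, it wins over the start time
    if ($stopTag -and $Now.TimeOfDay -ge [TimeSpan]::Parse($stopTag)) {
        return 'Deallocate'
    }
    if ($startTag -and $Now.TimeOfDay -ge [TimeSpan]::Parse($startTag)) {
        return 'Start'
    }
    return 'NoChange'
}
```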
</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-61504724703485806632023-08-23T17:02:00.001-04:002023-08-23T17:02:31.486-04:00Using Azure Resource Graph Explorer to determine what resources are sending diagnostic data to a Log Analytics Workspace<p>One of the questions I am frequently asked is how we can effectively determine what resources are sending data to a particular <b>Log Analytics Workspace</b>. Those who are administrators of Azure will know that most subscriptions will eventually contain <b>Log Analytics Workspaces</b> as shown in the list and screenshot below:</p> <ul> <li>DefaultWorkspace-d3f0e229-2fcd-45df-a791-614ba183e648-canadaea </li> <li>DefaultWorkspace-d3f0e229-2fcd-45df-a791-614ba183e648-CCAN </li> <li>DefaultWorkspace-d3f0e229-2fcd-45df-a791-614ba183e648-EUS </li> </ul> <a href="https://drive.google.com/uc?id=18-c4_xH13uxI2nW0qGnFrs_9o3wLAPP2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=18uXDghKk9T_GqtOrzTHB5KlnN2BjlCP_" width="644" height="179" /></a> <p>This isn’t the fault of poor management, as many resources such as <b>Insights</b> would automatically default to these types of workspaces when they are enabled.</p> <p>Attempting to browse the blades in these <b>Log Analytics Workspaces</b> will not allow us to easily determine what resources in Azure are sending data to the <b>Log Analytics Workspace</b>:</p> <a href="https://drive.google.com/uc?id=18TnUnrCxW7yREW_YpEQdeHDcFiiV52UG"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1eLvsmMgVIXBbaMBlytgIHaMGasptNd3s" width="106" height="484" /></a> <p>While it is possible to review the types of tables created and, if the schema and stored data are known, query the data for the resources, this approach can be prone to errors causing
resources to be missed:</p> <a href="https://drive.google.com/uc?id=10AJp671c5F4hB1xCdF86aiMc-dDZA0xO"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1RNqPA2eLzD4kbnX664UaNypk4CQ5E4WJ" width="244" height="136" /></a> <p>Trying to search for how to achieve this led me to the PowerShell cmdlet: <b>Get-AzOperationalInsightsDataSource</b> (<a href="https://learn.microsoft.com/en-us/powershell/module/az.operationalinsights/get-azoperationalinsightsdatasource?view=azps-10.2.0">https://learn.microsoft.com/en-us/powershell/module/az.operationalinsights/get-azoperationalinsightsdatasource?view=azps-10.2.0</a>) but this did not allow me to obtain the information I needed.</p> <p>What I ended up thinking of was whether it was possible to use <b>Resource Graph Explorer</b> to retrieve this information, and after viewing the properties of a resource that I knew was sending logs to a <b>Log Analytics Workspace</b>, I was able to confirm that it could be done.</p> <p>The following are the properties of a Function App:</p> <a href="https://drive.google.com/uc?id=19yZgpAXA7fcCxNXij1dZflc7nPgoxOQN"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1wtbjkZIdPjfSEfXW5TVSWPWpz_dh5TLw" width="244" height="92" /></a> <p>If we scroll down the properties of the resource, we will find the following <b>name/value pair</b>:</p> <p><b>Name</b>: "WorkspaceResourceId" <br /><b>Value</b>: "/subscriptions/dxxxxxx9-2fcd-xxxx-a791-xxxxxxxxe648/resourceGroups/DefaultResourceGroup-CCAN/providers/Microsoft.OperationalInsights/workspaces/DefaultWorkspace-d3f0e229-2fcd-45df-a791-614ba183e648-CCAN",</p> <a href="https://drive.google.com/uc?id=1R5Lmcj4dXKGaCZOLYxeyYehUBOmrQx6x"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1n022U8dN5wlIGkLIBnNkYBRWMIPQBz4j" width="244"
height="243" /></a> <p>Having validated that a resource has the <b>Log Analytics Workspace</b> defined in its properties, we can use the following query to list all resources that contain this property:</p> <p><strong>resources</strong> <br /><strong>| where properties.WorkspaceResourceId == "/subscriptions/d3xxxxx-2fcd-xxxx-xxxx-6xxxxxe648/resourceGroups/DefaultResourceGroup-CCAN/providers/Microsoft.OperationalInsights/workspaces/DefaultWorkspace-d3f0e229-2fcd-45df-a791-614ba183e648-CCAN" <br />| project name</strong></p> <a href="https://drive.google.com/uc?id=1z64ENUnmLRh5zfVlYvpJyjxTgwaoClZv"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_4hbw9ypFYlSQ8KPA6dxGbI12UOjrqzX" width="244" height="76" /></a> <p>Note that if you do not know of at least one resource that uses the <b>Log Analytics Workspace</b>, you can retrieve the <b>WorkspaceResourceId</b> of the workspace by navigating to the Log Analytics Workspace in portal.azure.com and copying the string from the URL:</p> <a href="https://drive.google.com/uc?id=11VPbvKM-CHwFJBTmaQ7BxN5M1alPV8Md"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ymJ7avyt6wtTjtKMa4Q31alQofPtWPs1" width="244" height="70" /></a> <p>I hope this helps anyone who may be looking for this information as I did but was unable to find an easy way to achieve this.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-80130550969669003812023-08-20T23:41:00.001-04:002023-08-20T23:41:52.989-04:00PowerShell script to bulk convert Azure Firewall logs in JSON Line format stored on a Storage Container to CSV format<p>This post serves as a follow up to my:</p> <p><b>Converting Azure Firewall logs in JSON format created from Archive to a storage account diagnostic setting to CSV format <br /></b><a
href="http://terenceluk.blogspot.com/2023/07/converting-azure-firewall-logs-in-json.html">http://terenceluk.blogspot.com/2023/07/converting-azure-firewall-logs-in-json.html</a></p> <p>… where I provided a script to convert a single JSON file that stored Azure Firewall Logs in a Storage Account container.</p> <p>As noted in the previous post, I wanted to follow up with a script that would traverse through a folder, reading the JSON files in the subdirectories and converting them to CSVs, to avoid manually generating each CSV for every hour of the day. </p> <p>Additional reasons for using this script are:</p> <ol> <li>Allow the retrieval of archived Azure Firewall Logs that are no longer stored in Log Analytics </li> <li>Bulk converting JSON Line files to CSVs with a specified start and end date </li> <li>A method for working around the 30,000 records return limit when using Log Analytics Workspaces to query data </li> </ol> <p>The entries for Azure Firewall Log activities can get fairly large, so this script will read through each JSON file, which is broken up by hour of the day, and convert it to a CSV file.
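The core of the conversion can be sketched as follows (a simplified version; the folder layout and the flattened property names are assumptions, and the full script in the repo linked below also handles start/end date filtering):

```powershell
# Sketch: recursively convert JSON Lines log files to CSVs.
# Assumes each exported .json file contains one JSON object per line;
# the source path and the projected properties are placeholders.
$sourceFolder = "C:\Logs\AzureFirewall"
Get-ChildItem -Path $sourceFolder -Filter "*.json" -Recurse | ForEach-Object {
    $records = Get-Content -Path $_.FullName | ForEach-Object {
        $entry = $_ | ConvertFrom-Json
        # Flatten the fields of interest into one row per log entry
        [PSCustomObject]@{
            Time          = $entry.time
            Category      = $entry.category
            OperationName = $entry.operationName
            Message       = $entry.properties.msg
        }
    }
    # Write the CSV next to the source file, one CSV per hourly JSON file
    $csvPath = [System.IO.Path]::ChangeExtension($_.FullName, ".csv")
    $records | Export-Csv -Path $csvPath -NoTypeInformation
}
```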
I originally thought about combining the files, but working out the math for days of logs meant file sizes could get into the GBs, and attempting to work with CSV files that large won’t be pleasant.</p> <p>The script can be found at my GitHub repository here: <a href="https://github.com/terenceluk/Azure/blob/main/Azure%20Firewall/Bulk-Convert-Az-Firewall-Logs-JSON-to-CSV.ps1">https://github.com/terenceluk/Azure/blob/main/Azure%20Firewall/Bulk-Convert-Az-Firewall-Logs-JSON-to-CSV.ps1</a></p> <p>The output file list would look as such: </p> <p><a href="https://drive.google.com/uc?id=1uUmOtBDmfqJuyWKV4cIxdS2h3xghE7si"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1UaRho6LCZYrWekuf_o1JfF11jBi-FIiP" width="233" height="244" /></a></p> <p>Hope this helps anyone who may be looking for such a script that will save them the time required to manually convert the logs.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-62547970319371014082023-08-12T14:46:00.001-04:002023-08-12T14:46:43.819-04:00Creating Azure Firewall Policy Rule Collections in Network Collection Group with PowerShell and Excel reference files<p>Those who have configured <strong>Rule Collections</strong> for an <strong>Azure Firewall Policy </strong>whether via GUI or scripting will know how tedious the task can be due to the amount of time for any type of change to be applied and the non-parallel stream of updates you can push to the firewall. I’ve also noticed that attempting to use multiple browser windows to copy and apply changes can potentially overwrite changes to the configuration. Case in point, I had a negative experience where I used window #1 to copy similar rule collections to window #2, and everything went as planned as long as I only saved to window #2.
However, if I were to make a change in window #1 when it had not been refreshed with the changes applied in window #2, the <b>save</b> operation would overwrite the changes I made in window #2. I lost quite a bit of configuration due to this scenario.</p> <p>To minimize the mistakes and the amount of time I spent staring at the <b>Azure Firewall Policy</b> window and slowly applying configuration updates one at a time, I decided to spend a bit of time creating <strong>PowerShell</strong> scripts that reference an Excel file with configuration parameters. The first script I created was one that reads an Excel spreadsheet to create the list of <b>Rule Collections</b> that are placed under a predefined <b>Rule Collection Group</b>.</p> <p>The PowerShell script can be found here in my GitHub repository: <a href="https://github.com/terenceluk/Azure/blob/main/Azure%20Firewall/Create-NetworkRuleCollection.ps1">https://github.com/terenceluk/Azure/blob/main/Azure%20Firewall/Create-NetworkRuleCollection.ps1</a></p> <p>The following is a sample spreadsheet for the PowerShell script to read from:</p> <a href="https://drive.google.com/uc?id=1d59eSi1JuhTRYQqu6PF89L9dIDej3_j_"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1M5p-ouRBLq1ojUT_xE0N8GK4y3GHnhL4" width="244" height="150" /></a> <p>Here is a sample screenshot of the Rule Collections in the <b>Azure management</b> portal:</p> <a href="https://drive.google.com/uc?id=1orZucMWFbIboCdWBfERbb-rkXlD3Y_CV"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1BCzukDWwEhuN0WhyWgqSfWYIt7qjNxFd" width="244" height="111" /></a> <p>Hope this helps anyone who may be looking for such a script, as <strong>Rule Collections </strong>can only be created one at a time.</p>Terence
Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-42615165196074852932023-08-10T05:43:00.001-04:002023-08-10T05:43:33.359-04:00Attempting to create a folder on an Azure Data Lake Storage Account with Private Endpoint fails with: "Failed to add directory 'Test'. Error: AuthorizationFailure: This request is not authorized to perform this operation."<p><b><u><font size="5">Problem</font></u></b></p> <p>A colleague of mine recently asked me to help troubleshoot an issue with an Azure <b>Storage</b> <b>Account</b> that has <b>Hierarchical</b> <b>Namespace</b> enabled, which is essentially an <b>Azure</b> <b>Data</b> <b>Lake</b>, where any attempts to create a folder would fail:</p> <a href="https://drive.google.com/uc?id=12y-Tw5rQU3aaFDpX-EBRSyXcC-ESn5QF"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=10FB_tKMhZztOr3VbwRUQHUv9S0_iFPUg" width="244" height="59" /></a> <p>The error message presented was generic and appears to suggest that it is caused by a permissions issue:</p> <p><b>Failed to add directory</b></p> <p>Failed to add directory 'Test'. Error: AuthorizationFailure: This request is not authorized to perform this operation. 
RequestId:da720a90-c01f-0053-5d3f-c61ef5000000 Time:2023-08-03T19:22:01.2257950Z</p> <a href="https://drive.google.com/uc?id=1KtCgWWWOxauzD9A-lirlUbG6EQoRHawi"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1yhPRo0tjPmgdGDqiSYT83viVdk3uUuJe" width="244" height="60" /></a> <p>Creating containers or uploading blobs (files) to the storage account did not have any issues, as those operations were successful as shown in the following screenshot:</p> <a href="https://drive.google.com/uc?id=1zT92c3W7B5jBh8yXuJkwamae8DaXNvtb"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=17UMjkmo-J52nPEq4tm5Kv78J_kbK84bA" width="244" height="91" /></a> <p>This is an error I’ve come across frequently in the past, and it is usually because the storage account is locked down with a private endpoint created only for the blob service and not for the data lake service. The following Microsoft documentation explains the reason:</p> <p><b>Use private endpoints for Azure Storage</b></p> <p><a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-private-endpoints#creating-a-private-endpoint">https://learn.microsoft.com/en-us/azure/storage/common/storage-private-endpoints#creating-a-private-endpoint</a></p> <p><b><i>If you create a private endpoint for the Data Lake Storage Gen2 storage resource, then you should also create one for the Blob Storage resource. That's because operations that target the Data Lake Storage Gen2 endpoint might be redirected to the Blob endpoint. Similarly, if you add a private endpoint for Blob Storage only, and not for Data Lake Storage Gen2, some operations (such as Manage ACL, Create Directory, Delete Directory, etc.) will fail since the Gen2 APIs require a DFS private endpoint. 
By creating a private endpoint for both resources, you ensure that all operations can complete successfully.</i></b></p> <a href="https://drive.google.com/uc?id=1BxAvkMZX_XkBVRIqC41cE5_LakKjntSh"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1EXPwu727FVU7FKpNU6v8veunUuqU2l_1" width="244" height="133" /></a> <p>The following are screenshots confirming the missing configuration.</p> <p>Note that <b>Hierarchical Namespace</b> is enabled:</p> <a href="https://drive.google.com/uc?id=1y8qsHtc68zF6pd6uON88Y-SA6ASrNH4K"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1JvZJNi-YB-ozBQakrQOb0r9cWA5b5OvW" width="244" height="103" /></a> <p>Note that <b>Public network access</b> is set to <b>Disabled</b>:</p> <a href="https://drive.google.com/uc?id=1BXKkkutJrJnUynI-Xhm9JW0TwkrOGCTp"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ELQJmePOVbRymD0daUKleySkX7WvVEcs" width="244" height="200" /></a> <p>Note that there is only 1 private endpoint configured for the storage account:</p> <a href="https://drive.google.com/uc?id=1e-trFYZPQgLGiDj8A04POnPxDGsPDg8Q"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Rj2gGvr0Ek-e2DMsjmO9tVNR2nSM5-vk" width="244" height="61" /></a> <p>… and the <b>Target sub-resource</b> of the private endpoint is <b>blob</b>:</p> <a href="https://drive.google.com/uc?id=1Zrf5uuFMtNq_-j-_FaVIW6JzYcXHbMmL"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1HF8IHSAdfHUjPVOZt3zPEN4okbnrfwHC" width="244" height="51" /></a> <p><b><u><font size="5">Solution</font></u></b></p> <p>To correct the issue, we’ll need to create an additional private endpoint that has the <b>Target 
sub-resource</b> configured as DFS (Data Lake Storage Gen2). Begin by navigating to the Networking blade for the storage account and create a new <b>Private Endpoint</b>:</p> <a href="https://drive.google.com/uc?id=1flopWfrmhprEoeowN_0pQbK6_D3z9hel"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1qc-5MavMBbekUTJKdvrEO8E8pcnozfFt" width="244" height="80" /></a> <p>Proceed to fill in the details for the private endpoint:</p> <a href="https://drive.google.com/uc?id=1a5-rsFZkBVBjSun9qzBl0x7_X3fEQCzc"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1dX5naPKb9KYOvGUDGTFeNDayJ6t9zohX" width="244" height="155" /></a> <p>Select <b>dfs </b>as the <b>Target sub-resource</b>:</p> <a href="https://drive.google.com/uc?id=1GNfnODlos77EZeHJJfxkF0ROIzft4n4S"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=13DNpYr22V83Y2B5mprxSh6kqGpoL26-z" width="244" height="186" /></a> <p>Complete the creation of the private endpoint:</p> <a href="https://drive.google.com/uc?id=1XQ5c4QrG3ruB2WO6Fh3gmT8TpDj2LLky"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1cObUOUNxw-gAE9G_i6vBtDx9DIYrMrUG" width="244" height="76" /></a> <p>Folder creation should now succeed:</p> <a href="https://drive.google.com/uc?id=1BG4QSGOU5CjLb_8_0AQ-K0_UStou7_1o"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_-aNHiGgv8b98iou7QmpYUdBHTA9f9Cv" width="244" height="67" /></a> <p>Hope this helps anyone who might have run into this issue and is looking for a solution. 
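For repeatable deployments, the same dfs private endpoint can also be created with Az PowerShell instead of the portal. The following is only a sketch with hypothetical resource names (rg-storage, stdatalake01, vnet-prod, snet-endpoints) and assumes an authenticated Az session:

```powershell
# Hypothetical names throughout; assumes the Az.Storage and Az.Network modules
$storage = Get-AzStorageAccount -ResourceGroupName "rg-storage" -Name "stdatalake01"

# The connection targets the "dfs" sub-resource, mirroring the portal steps above
$conn = New-AzPrivateLinkServiceConnection -Name "conn-dfs" `
    -PrivateLinkServiceId $storage.Id -GroupId "dfs"

$subnet = Get-AzVirtualNetwork -Name "vnet-prod" -ResourceGroupName "rg-network" |
    Get-AzVirtualNetworkSubnetConfig -Name "snet-endpoints"

New-AzPrivateEndpoint -Name "pe-stdatalake01-dfs" -ResourceGroupName "rg-storage" `
    -Location "canadacentral" -Subnet $subnet -PrivateLinkServiceConnection $conn

# Name resolution still needs a record in the privatelink.dfs.core.windows.net
# private DNS zone for the new endpoint's IP address.
```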
I’ve found that searching for the error message does not always surface this solution.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-4471937194189295412023-07-25T08:00:00.001-04:002023-07-25T08:03:12.871-04:00Creating Azure Route Tables, UDRs, and IPGroups with PowerShell and Excel reference files<p>I recently worked with a colleague to complete a deployment and some of the laborious activities we had to complete were:</p> <ol> <li>Create Route Tables with UDRs (user defined routes) </li> <li>Create IP Groups </li> </ol> <p>There were a significant number of entries for both resources and while it was possible to create these manually in the portal, I felt that it was better to create a PowerShell script to accelerate the creation and minimize typos and copy-and-paste errors. The 2 scripts I created for this are as follows.</p> <p><b><u><font size="5">Creating Route Tables and UDRs</font></u></b></p> <p>The PowerShell script I created, which can be found here in my GitHub repo: <a href="https://github.com/terenceluk/Azure/blob/main/PowerShell/Create-Route-Tables-and-UDRs.ps1">https://github.com/terenceluk/Azure/blob/main/PowerShell/Create-Route-Tables-and-UDRs.ps1</a>, will read an Excel file and create the route tables and the corresponding UDRs (all route tables should have the same UDRs). One of the conditions I’ve added in is an IF statement that checks whether the UDR to be added targets the same subnet the route table will be attached to. If it is the same, then the script will skip creating that UDR so we don’t end up routing traffic from the same subnet up to the firewall. The naming convention designed allows me to compare the Route Table and UDR names to determine if they match, but if your environment is different then you’ll need to adjust the check. 
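The skip-own-subnet condition can be illustrated as a small standalone function. The naming convention here (a route table named rt-&lt;subnet&gt; and routes named udr-to-&lt;subnet&gt;) is hypothetical and only stands in for whatever convention the Excel file uses:

```powershell
# Hypothetical naming convention: route table "rt-<subnet>", route "udr-to-<subnet>".
# Returns $true when a route would send a subnet's own traffic to the firewall,
# i.e. the route should be skipped for that route table.
function Test-IsOwnSubnetRoute {
    param(
        [string]$RouteTableName,
        [string]$RouteName
    )
    $subnet = $RouteTableName -replace '^rt-', ''
    return $RouteName -eq "udr-to-$subnet"
}

# In the real script, each Excel row would pass through a check like this
# before calling Add-AzRouteConfig / Set-AzRouteTable.
Test-IsOwnSubnetRoute -RouteTableName 'rt-app' -RouteName 'udr-to-app'  # $true  -> skip
Test-IsOwnSubnetRoute -RouteTableName 'rt-app' -RouteName 'udr-to-data' # $false -> create
```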
Here are screenshots of the sample spreadsheet that is read:</p> <a href="https://drive.google.com/uc?id=1MqN4ilnPxhtgjNns-ZhuKMncJs30nLMc"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=15BEEnSnuk1R_47UfGmLYCuLEI3KOtp4w" width="244" height="226" /></a><a href="https://drive.google.com/uc?id=1ZafWHVUcyFLYGNovY5DGJzugRuRnYVEe"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Ymz0nQJPVs5lWRoChfrkG3CfW01mCLmF" width="244" height="192" /></a> <p><b><u><font size="5">Create IPGroups</font></u></b></p> <p>There were many IP Groups that needed to be created as well because the environment had an IP Group for each subnet. The script that will read an Excel file and create the list of IP Groups can be found here at my GitHub repo: <a href="https://github.com/terenceluk/Azure/blob/main/PowerShell/Create-IP-Groups.ps1">https://github.com/terenceluk/Azure/blob/main/PowerShell/Create-IP-Groups.ps1</a></p> <p>Here are sample screenshots of the Excel file: </p> <a href="https://drive.google.com/uc?id=1jMNjG9Hgz-vhNSBOQDFrZmU5xTLs3qJu"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Prq6A4Ye5Li_XCnPExtlroouBj34sPB5" width="244" height="144" /></a>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-48808505788427090042023-07-11T08:09:00.001-04:002023-07-11T08:09:11.476-04:00Sysprep fails due to Notepad++ when preparing virtual machine for image capture to deploy Azure Virtual Desktop<p>One of the common issues I’ve continuously come across while preparing Windows 10 and Windows Server 2016 and later operating systems for virtual desktops or remote desktop services is when sysprep fails due to an installed application linked to a user account. 
I encountered this issue again last month with the application Notepad++ when preparing a Windows 11 Enterprise Multi-Session virtual machine for an Azure Virtual Desktop deployment. There are plenty of different PowerShell cmdlets that can be run in an attempt to fix the issue, but I find some of them leave the virtual machine in a state that I would no longer be confident deploying, so I wanted to document some of the steps I use for personal reference and to help anyone who may encounter a similar issue.</p> <p><b><u><font size="5">Problem</font></u></b></p> <p>You attempt to run sysprep after finishing the preparation of a master image:</p> <a href="https://drive.google.com/uc?id=1JT8K1KTAvFo_bJ7JlVXD7G9gV7YNTKpj"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Dn6lLT2It4mblG9CvvWltAuxA6kIHFVL" width="244" height="217" /></a> <p>Sysprep immediately fails with the error:</p> <p><b>Sysprep was not able to validate your Windows installation. <br />Review the log file at <br />%WINDIR%\System32\Sysprep\Panther\setupact.log for <br />details. After resolving the issue, use Sysprep to validate your installation again.</b></p> <a href="https://drive.google.com/uc?id=1Y8Mrk0zlLvJ7W79IfkrHJExXtshN2TGK"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ZXeZl0VEPZryeLYGNzs-RlxpS4YJpRsq" width="244" height="112" /></a> <p>Opening the <b>setupact.log</b> will reveal the following line:</p> <p><b>Error                 SYSPRP Package NotepadPlusPlus_1.0.0.0_neutral__7njy0v32s6xk6 was installed for a user, but not provisioned for all users. 
This package will not function properly in the sysprep image.</b></p> <a href="https://drive.google.com/uc?id=1t8NcD6IW9lg0MM-iA3OAthoJG72SaCg1"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=12Ckjs0HYr4A8xL_EzJPX7c_OVe3nYYpb" width="244" height="111" /></a> <p>Opening the <b>setuperr.log</b> will reveal the following lines:</p> <p><b>2023-05-05 07:20:10, Error                 SYSPRP BCD: BiUpdateEfiEntry failed c000000d</b></p> <p><b>2023-05-05 07:20:10, Error                 SYSPRP BCD: BiExportBcdObjects failed c000000d</b></p> <p><b>2023-05-05 07:20:10, Error                 SYSPRP BCD: BiExportStoreAlterationsToEfi failed c000000d</b></p> <p><b>2023-05-05 07:20:10, Error                 SYSPRP BCD: Failed to export alterations to firmware. Status: c000000d</b></p> <p><b><font color="#ff0000">2023-07-07 19:51:46, Error                 SYSPRP Package NotepadPlusPlus_1.0.0.0_neutral__7njy0v32s6xk6 was installed for a user, but not provisioned for all users. 
This package will not function properly in the sysprep image.</font></b></p> <p><b>2023-07-07 19:51:46, Error                 SYSPRP Failed to remove apps for the current user: 0x80073cf2.</b></p> <p><b>2023-07-07 19:51:46, Error                 SYSPRP Exit code of RemoveAllApps thread was 0x3cf2.</b></p> <p><b>2023-07-07 19:51:46, Error                 SYSPRP ActionPlatform::LaunchModule: Failure occurred while executing 'SysprepGeneralizeValidate' from C:\Windows\System32\AppxSysprep.dll; dwRet = 0x3cf2</b></p> <p><b>2023-07-07 19:51:46, Error                 SYSPRP SysprepSession::Validate: Error in validating actions from C:\Windows\System32\Sysprep\ActionFiles\Generalize.xml; dwRet = 0x3cf2</b></p> <p><b>2023-07-07 19:51:46, Error                 SYSPRP RunPlatformActions:Failed while validating Sysprep session actions; dwRet = 0x3cf2</b></p> <p><b>2023-07-07 19:51:46, Error      [0x0f0070] SYSPRP RunDlls:An error occurred while running registry sysprep DLLs, halting sysprep execution. 
dwRet = 0x3cf2</b></p> <p><b>2023-07-07 19:51:46, Error      [0x0f00d8] SYSPRP WinMain:Hit failure while pre-validate sysprep generalize internal providers; hr = 0x80073cf2</b></p> <a href="https://drive.google.com/uc?id=1N4Q1y2vT-72kE1CpJzefnX25x8bQHHkW"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1y3NLxIC1t8-IC3uhsMRG1PmA3beQsKtT" width="244" height="128" /></a> <p>Proceeding to uninstall Notepad++ from the image will allow sysprep to run and complete successfully, but this means the deployed virtual desktops would need the application installed manually.</p> <p><b><u><font size="5">Solution</font></u></b></p> <p>The first step to take for resolving this issue is to restore the virtual machine from a snapshot that had not failed on a sysprep because the sysprep process removes packages from the operating system and there will be times when:</p> <ol> <li>After fixing the Notepad++ application, sysprep would fail and error out on other native Microsoft applications </li> <li>You would notice that Notepad is no longer available on the virtual machine </li> <li>Other odd errors would occur </li> </ol> <p>It is better to troubleshoot and perform sysprep on a machine that has not experienced a half-executed, failed sysprep.</p> <p>Once a fresh snapshot is restored, we can now work on determining which accounts Notepad++ is linked to. 
This can be reviewed by starting PowerShell and executing the following cmdlet:</p> <p><b>Get-AppxPackage -AllUsers | Format-List -Property PackageFullName,PackageUserInformation</b></p> <p>The cmdlet above will list all packages installed and this example coincidentally places the Notepad++ package at the end of the output:</p> <a href="https://drive.google.com/uc?id=1iMpF2C5aGCMSlJrP9iyozvrHmVHzyqm1"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_BbWWinXC5rVB9RiDwQO7F_0-9SNYoc6" width="244" height="98" /></a> <p>If the package in question starts with a letter earlier than M (for Microsoft) and ends up nested within the long output, we can use the following cmdlet to filter the <b>PackageFullName</b> to what is being searched for:</p> <p><b>Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "NotepadPlusPlus*"} | Format-List -Property PackageFullName,PackageUserInformation</b></p> <p>With the package located, identify which accounts are listed as having the application installed. The screenshot above only lists one account but if there are more, the easiest approach is to delete all the accounts and their profiles. 
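Before deleting accounts and their profiles wholesale, it may also be worth trying the -AllUsers switch on Remove-AppxPackage, which newer Windows 10 and 11 builds support; treat this as a hedged sketch, since availability of the switch depends on the OS build:

```powershell
# Sketch (Windows-only, elevated session, and a build where Remove-AppxPackage
# supports -AllUsers): remove the per-user package for every user in one pass.
$pkg = Get-AppxPackage -AllUsers |
    Where-Object { $_.PackageFullName -like "NotepadPlusPlus*" }
if ($pkg) {
    Remove-AppxPackage -AllUsers -Package $pkg.PackageFullName
}
```

If the switch is not available on your build, the per-account removal described next still applies.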
If there is only one account listed and it is the built-in administrator account, you won’t be able to delete it because the following error will be displayed when you try to do so:</p> <p><b>The following error occurred while attempting to delete the user admin:</b></p> <p><b>Cannot perform this operation on built-in accounts.</b></p> <a href="https://drive.google.com/uc?id=18--hIJqoV6A2lR9l5PrV_Z9pmScTUOBV"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1-cmosFLpOCooHLr1YDAuoSWGO66kKCJz" width="244" height="112" /></a> <p>To get around this, log in as the account the Notepad++ install is linked to, launch PowerShell and execute the following cmdlet:</p> <p><b>Remove-AppxPackage -Package <packagefullname></b></p> <p>The following is the cmdlet that is used to remove Notepad++ from the account:</p> <p><strong>Remove-AppxPackage -Package NotepadPlusPlus_1.0.0.0_neutral__7njy0v32s6xk6</strong></p> <a href="https://drive.google.com/uc?id=1ZZWRAXjzcBY5W0FiOkzxI5IGY8KVsoB2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1lvTFvz9O_n1Kz2Fy9t6Ik-_wdn8WiL7L" width="244" height="122" /></a> <p>You should no longer find Notepad++ when executing the following cmdlet:</p> <p><b>Get-AppxPackage -AllUsers | Where-Object {$_.PackageFullName -like "NotepadPlusPlus*"} | Format-List -Property PackageFullName,PackageUserInformation</b></p> <a href="https://drive.google.com/uc?id=16W3HTcQ6blcqFyfDmChseN5iV5kPRvzW"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Du3OAgXWsHZozNT3Xc_89IZBKkxVi1HR" width="244" height="48" /></a> <p>Running sysprep should now complete so the virtual machine can be captured to an image for session host deployments.</p>Terence 
Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com3tag:blogger.com,1999:blog-2228947945609574437.post-13191570833317475342023-07-06T20:48:00.001-04:002023-08-20T23:39:28.552-04:00Converting Azure Firewall logs in JSON format created from Archive to a storage account diagnostic setting to CSV format<p>One of the clients I recently worked with had a requirement that all traffic traversing through the Azure Firewall needed to be stored for at least 6 months due to auditing requirements. Accomplishing this wasn’t difficult because it was a matter of either increasing the retention for the Log Analytics Workspace or sending the log files to a storage account for archiving. Given the long period of 6 months, I opted to set the Log Analytics workspace retention to 3 months and provide the remaining retention by sending the logs to a storage account:</p> <a href="https://drive.google.com/uc?id=1YnFb82NKKUCkcF-WhS6NJSxp6J2EKCRg"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1t9xZwAdMqYHcDSzIhpIv6-PHz0Gb1lgT" width="244" height="124" /></a> <a href="https://drive.google.com/uc?id=1llSclpfk6xLHkExa6P6xpfDrJnBszjej"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1NN4TG4OPsVWmojEg7JlXgI0KsfRu2Vgn" width="244" height="215" /></a> <p>The Firewall logs that are sent to the storage account will be stored in a container named <b>insights-logs-azurefirewall</b>:</p> <a href="https://drive.google.com/uc?id=13COlDAhAi9bvo5Kl5Em3CRRBZo6qBWqO"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1svD_07bW2_ggyf5ZYpx24R40uJvofP9i" width="244" height="82" /></a> <p>Navigating into this container will show that it contains a folder tree consisting of multiple subfolders identifying the subscription, then the name of the resource group containing the 
firewall, which also contains the VNet because it is a requirement to store the firewall resource in the same RG as the VNet:</p> <p><b>insights-logs-azurefirewall / resourceId= / SUBSCRIPTIONS / CCE9BD62-xxxx-xxxx-xxxx-xxxx51CE27DA / RESOURCEGROUPS / RG-CA-C-VNET-PROD / PROVIDERS / MICROSOFT.NETWORK / AZUREFIREWALLS / AFW-CA-C-PROD</b></p> <p>It then splits the logs into subfolders by:</p> <ul> <li>Year </li> <li>Month </li> <li>Day </li> <li>Hour </li> <li>Minute (only 1 folder labeled as 00) </li> </ul> <a href="https://drive.google.com/uc?id=1cncSmmlDshTd4N014j_Jfum402D2FCQW"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1cpFgHlxyEZesdiloqmwyzfU-uxBrEdIs" width="166" height="244" /></a> <p>Drilling all the way down to the minute folder will reveal a <b>PT1H.json</b> file of the <b>Append blob</b> type. This is the file that will contain the firewall traffic log entries:</p> <a href="https://drive.google.com/uc?id=1bEY_v0mTgU5n3pVtPd3NFPwcaGiAKSTW"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_X7_3bzJZ_6WpEUuP-wSky7xYgiTURjK" width="244" height="55" /></a> <p>While browsing through the content of the PT1H.json file, I immediately noticed that the format of the entries did not appear to conform to any of the JSON Specifications (RFC 4627, 7159, 8259) because, while I’m not very familiar with the JSON format, I could see that:</p> <ol> <li>The whole JSON file is missing an open square bracket at the beginning and a close square bracket at the end <b><- Line 1</b></li> <li>The nested properties values do not have an open square bracket before the brace and a close square bracket at the end of the close brace <b><- Line 5 and Line</b> <b>22</b></li> <li>The close brace for each entry does not have a comma that separates each log <b><- Line 24</b></li> </ol> <a 
href="https://drive.google.com/uc?id=1SvECB1uHv3F9xCRyKu63r7AbqsACzIgz"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1htyqtKvG98AR0Iydp_KcmcA02gcrZPhB" width="244" height="117" /></a> <p>Trying to paste this into a JSON validator (<a href="https://jsonformatter.curiousconcept.com/">https://jsonformatter.curiousconcept.com/</a>) would show it does not conform to any RFC format:</p> <a href="https://drive.google.com/uc?id=11bqR4MJ5CG_CW8spFG9qGoIY1FRLLiMh"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1tu-xPebuKr-TX2IxzyB2UpdqbIaS8aJE" width="244" height="226" /></a> <p>Reviewing the Microsoft documentation confirms that the format of blobs in a Storage Account is JSON Lines, where each record is delimited by a new line, with no outer records array and no commas between JSON records: <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-data-export?tabs=portal#storage-account">https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-data-export?tabs=portal#storage-account</a></p> <p>Further reading shows that this has been in place since November 1<sup>st</sup>, 2018: </p> <p><b>Prepare for format change to Azure Monitor platform logs archived to a storage account <br /></b><a href="https://learn.microsoft.com/en-us/previous-versions/azure/azure-monitor/essentials/resource-logs-blob-format">https://learn.microsoft.com/en-us/previous-versions/azure/azure-monitor/essentials/resource-logs-blob-format</a><b></b></p> <p>More reading about the JSON Lines format can be found here: <a href="https://jsonlines.org/">https://jsonlines.org/</a></p> <p>My objective was simply to use a PowerShell script to convert the JSON file into a CSV so it could be sent to the client for review, but my script would not work with the JSON Lines format. 
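To make the problem concrete, the sketch below patches a made-up two-record sample of the newline-delimited format into an RFC 8259 array and flattens the nested properties into CSV rows. The record contents are invented, and the simple brace-matching regex assumes one record per line — the pretty-printed PT1H.json shown in this post needs the more involved expressions covered next:

```powershell
# Toy two-record sample in the storage account's newline-delimited format
$badJson = @'
{ "category": "AzureFirewallApplicationRule", "properties": { "msg": "allow" } }
{ "category": "AzureFirewallNetworkRule", "properties": { "msg": "deny" } }
'@

# Insert a comma between records (assumes one record per line) and wrap in [ ]
$fixedJson = "[{0}]" -f ($badJson -replace '}\s*[\r\n]+\s*{', "},`n{")

# Parse the now-valid array and flatten the nested properties into CSV columns
$records = $fixedJson | ConvertFrom-Json
$rows = $records | ForEach-Object {
    [pscustomobject]@{
        category = $_.category
        msg      = $_.properties.msg
    }
}
$rows | ConvertTo-Csv -NoTypeInformation
```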
Fixing this by hand if there were 2 records wouldn’t be difficult, but these firewall logs have thousands of entries and I needed a way to automate the conversion. The whole process of getting this to work took quite a bit of my time so I wanted to write this blog post to help anyone who may come across the same challenge.</p> <p><b>Step #1 – Fixing the poorly formatted JSON file</b></p> <p>The first step was to fix the JSON Line formatted JSON file so it conforms to an RFC 8259 format. What this basically meant was addressing these 3 items:</p> <ol> <li>The whole JSON file is missing an open square bracket at the beginning and a close square bracket at the end <b><- Line 1</b></li> <li>The nested properties values do not have an open square bracket before the brace and a close square bracket at the end of the close brace <b><- Line 5 and Line</b> <b>22</b></li> <li>The close brace for each entry does not have a comma that separates each log <b><- Line 24</b></li> </ol> <p>I’ve reduced the JSON file to only 2 log entries to show all the changes required:</p> <ol> <li>Add an open [ bracket after properties": </li> <li>Add a close ] bracket at the end of properties } </li> <li>Add a comma after close } brace for each log entry but exclude last entry </li> <li>Add a bracket at the beginning of the JSON </li> <li>Add a bracket at the end of the JSON </li> </ol> <a href="https://drive.google.com/uc?id=1xemRXJQ9sc4VLBUjAloD8k3sapigxFMS"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1rSMuRAAcOT9qYwAO_jJod_MwCLrnXPfU" width="244" height="80" /></a> <p>The best way to approach this is to use Regex expressions to match the desired block or blocks of lines and add the desired brackets and/or comma. My days of using Regex go back to when I worked on voice deployments for OCS 2007, Lync Server, Skype for Business Server, and Teams Direct Routing. 
My role over the past few years does not include this product so if you (the reader) see a better way of writing these expressions, please feel free to provide suggestions in the comments.</p> <p><b>Add an open [ bracket after properties": and add a close ] bracket at the end of properties }</b></p> <p>The Regex expression to match all the contents in the nested properties block is: </p> <p><b>(?<="properties": )([\s\S]*?})</b></p> <p>This can be validated on a Regex validator such as: <a href="https://regexr.com/">https://regexr.com/</a></p> <a href="https://drive.google.com/uc?id=1mF4eWzeQDXqYZGzILriQGbEuOvM9Pru-"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1i3zW4T6BSPO2RIXzIFR-VM2ud4F34X3a" width="244" height="133" /></a> <p>We can use the following PowerShell regex replace function to add the missing open and close square brackets:</p> <p><b>$fixedJson = [regex]::Replace($badJson, $regexPattern, { param($match) "[{0}]" -f $match.Value })</b></p> <p><b>Add a comma after close } brace for each log entry but exclude last entry</b></p> <p>With the missing open and close square brackets added, we can use the output and the following regex expression to match all of the log entries to add a comma for separation <b><u>AND NOT</u></b> include the last log at the end of the entries:</p> <p><b>(?="category": )([\s\S]*?}]\s}\W)</b></p> <a href="https://drive.google.com/uc?id=1JYlaQWuzO-QPpHG7Boxf7bbhb1Vw1L7b"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1XyBVirtQoFrmhMKHdARqFryK5U_SYtKS" width="244" height="133" /></a> <p>Note that the last block for the log entry is excluded:</p> <p><a href="https://drive.google.com/uc?id=1fL73JMqjOr-JhZ-hTTe1umKq4GsFy7jT"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1GsV5AOmakjv1a3HQDvkOsI1QVBqIi43E" 
width="244" height="133" /></a> </p> <p>--------------------------------- <strong>Update August 21-2023</strong> ---------------------------------------</p> <p>I realized that the previous RegEx expression I used would fail to match scenarios where there are spaces or line breaks between the square and curly brackets so I’ve updated the expression for the script on GitHub and am adding the changes here.</p> <p><b>(?="category": )([\s\S]*?}[\s\S]*?][\s\S]*?}\W)</b></p> <p>The following is a breakdown of each section of the RegEx expression:</p> <p><b>(?="category": )</b> <-- Match the first block before spanning the text after this</p> <p><b>([\s\S]*?}[\s\S]*?][\s\S]*?}\W)</b> <-- This is to match everything from the category and to the end</p> <p><b>([\s\S]*?}</b> <-- Match everything that is a whitespace and not whitespace, words, digits and end at the curly bracket }</p> <p><b>[\s\S]*?]</b> <-- Match everything that is a whitespace and not whitespace, words, digits and end at the square bracket ]</p> <p><b>[\s\S]*?}</b> <-- Continue matching everything that is a whitespace and not whitespace, words, digits and end at the next curly bracket }</p> <p><b>\W)</b> <-- This excludes the last block</p> <p>----------------------------------------------------------------------------------------------------------------</p> <p>We can use the following PowerShell regex replace function to add the missing comma between entries:</p> <p><b>$fixedJson = [regex]::Replace($badJson, $regexPattern, { param($match) "{0}," -f $match.Value })</b></p> <p><b>Add a bracket at the beginning of the JSON and add a bracket at the end of the JSON</b></p> <p>With the comma added between each log, we can now proceed to add the open and close square bracket to the beginning and end of the file with the following regex expression: </p> <p><b>^([^$]+)</b></p> <a href="https://drive.google.com/uc?id=1adfYoj6zPxpht-Ic0XQcTQ6K-xTNofry"><img title="image" style="display: inline; background-image: none;" 
border="0" alt="image" src="https://drive.google.com/uc?id=1EOvvQe_FBvfwWpsbtuN_AivdFrtMW0I4" width="244" height="133" /></a> <p>We can use the following PowerShell regex replace function to add the missing open and close square brackets to the beginning and end:</p> <p><b>$fixedJson = [regex]::Replace($badJson, $regexPattern, { param($match) "[{0}]" -f $match.Value })</b></p> <p>With the missing formatting added, we should now be able to validate the JSON file:</p> <a href="https://drive.google.com/uc?id=11YQ5t7niSSC7C9GV8GBRl4M6ajMNmzeK"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=18_tblsUnzcMy2bpCsMePgh8vreebqoVq" width="244" height="226" /></a> <p><b>Step #2 – Create a PowerShell script that will read the Azure Firewall Storage Account JSON and convert to CSV</b><b></b></p> <p>With the Regex expressions defined and the missing brackets and braces addressed, the next step is to write a PowerShell script that will read the native JSON file, format the JSON so it is RFC 8259 compliant, parse through each entry and place the log entry details into the rows and columns of the CSV file.</p> <p>The script can be found in my GitHub repo: <a href="https://github.com/terenceluk/Azure/blob/main/Azure%20Firewall/Convert-JSON-Logs-to-CSV.ps1">https://github.com/terenceluk/Azure/blob/main/Azure%20Firewall/Convert-JSON-Logs-to-CSV.ps1</a></p> <p>The components of the script are as follows:</p> <p>1. The first portion where we use Regex to fix the JSON formatting</p> <a href="https://drive.google.com/uc?id=15UTbMJB1XVbdds5u4vmTeOSZOdwa1Tbv"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1HCNgcdkkNpRsZGY9Vw1PcapnM_LAP9jF" width="244" height="161" /></a> <p>2. 
Begin parsing the formatted JSON file:</p> <p><b>**Update the following 2 variables:</b></p> <ol> <li>$pathToJsonFile = "PT1H2.json" </li> <li>$pathToOutputFile = "PT1H2.csv"</li> </ol> <a href="https://drive.google.com/uc?id=1A23LIUF2NhRgpZPxQcp_I_h56o8-ALa7"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1lS_Mi7B-vrsTJ8D5pmVxppb9f0TUDgA-" width="155" height="244" /></a> <p>When writing the portion of the code used for parsing the JSON file, I noticed that there wasn’t an easy way for me to automatically read through the column headings to avoid defining them directly because there are different types of records in the JSON file with varying headings. This meant that in order to transfer all the records into a CSV, I would need to define all of the headings upfront. Since not all the headings will be used for every record, any entry that does not have a given heading will have that cell left blank.</p> <p>The end result of the export will look something like the following CSV:</p> <p><a href="https://drive.google.com/uc?id=1i6s22YidXh7Q_hgkKFXN0tx6Qza5XZiN"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Cew9WDRSledmcbSKbyllSqj6G86GnCX1" width="244" height="133" /></a><a href="https://drive.google.com/uc?id=1ObCq84ucjBCi8uO7XqOGmuGvp_vKB4vx"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1IBFeP-D7g8DaXnZXgtMGnl8C8EKf-S24" width="244" height="133" /></a><a href="https://drive.google.com/uc?id=1X3AnxrFuDBeDaJIubgSL6mYV0-vx0ypz"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1d-hidVlq8qGX53GSDBhNkqUBYHiBqq6z" width="244" height="133" /></a></p> <p>The diagnostics settings I selected for this example included the <b>Legacy Azure Diagnostics</b> category so the logs 
will have some redundant records where the legacy entries have the details in the Msg column, while the newer category will have the record details split into their own columns.</p> <p>I hope this blog post helps anyone who may be looking for a way to parse and create a CSV file from the Azure Firewall log JSON files. I’ll be writing a follow-up post in the future to demonstrate using a script to read the folders in the storage account so this doesn’t have to be done manually with every JSON file at every hour of the day.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-55801181827570291862023-07-04T05:52:00.001-04:002023-07-04T05:52:43.194-04:00Creating a Logic App that retrieves AAD sign-in events from Log Analytics and sends a report in an email with a CSV attachment and HTML table insert<p>Two of the common questions I’ve been asked since publishing the following post over a year ago:</p> <p><b>Monitoring, Alerting, Reporting Azure AD logins and login failures with Log Analytics and Logic Apps</b></p> <p><a href="http://terenceluk.blogspot.com/2022/02/monitoring-alerting-reporting-azure-ad.html">http://terenceluk.blogspot.com/2022/02/monitoring-alerting-reporting-azure-ad.html</a></p> <p>… are whether there was a way to:</p> <ol> <li>Provide the report as a CSV attachment </li> <li>Pretty up the table that is inserted into the email </li> </ol> <p>Providing the report as a CSV attachment is fairly easy but making the HTML table more aesthetically pleasing wasn’t. After trying a few methods and not being very successful, I ended up landing on using an Azure Function App that takes the report in JSON format, creates the HTML-formatted table with colour output, then returns it to the Logic App. 
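As a middle ground before standing up a Function App, PowerShell’s built-in ConvertTo-Html accepts a -Head block of CSS that is often enough to make the table presentable (note that -Title is ignored when -Head is supplied). A minimal sketch with made-up sign-in rows:

```powershell
# Sample rows standing in for the Log Analytics query results (made-up data)
$signIns = @(
    [pscustomobject]@{ User = 'alice@contoso.com'; Result = 'Success' }
    [pscustomobject]@{ User = 'bob@contoso.com';   Result = 'Failure' }
)

# Inline CSS passed via -Head styles the generated <table>
$css = @'
<style>
table { border-collapse: collapse; font-family: "Segoe UI", sans-serif; }
th { background: #0078d4; color: #fff; padding: 4px 8px; }
td { border: 1px solid #ddd; padding: 4px 8px; }
</style>
'@

$html = ($signIns | ConvertTo-Html -Head $css) -join "`n"
```

The resulting $html string can then be dropped into the email body, though per-cell colouring (for example red Failure rows) still needs custom generation like the Function App described in this post.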
The method isn’t very efficient but provides the desired result, so this post serves to demonstrate the configuration.</p> <p>The screenshot below shows two reports and emails sent out in the Logic App flow. The first <b>Run query and visualize results</b> and <b>Send an email (V2)</b> that is highlighted in red is what my previous post demonstrated, and it sends out an email that contains a plainly formatted HTML table. The second <b>Run query and list results,</b> <b>Create blob (V2), Convert JSON to HTML, Delete blob (V2), Initialize Variable, Set Variable, Create CSV table, Send an email (V2) 2 </b>that I highlighted in green are the additional steps to create a CSV file with the report and send an email with a coloured HTML table:</p> <a href="https://drive.google.com/uc?id=1tHsX9PGpevOtrpHOaa-HxgmbYFM9GZHV"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Qeq56HfYHqeMZ40uuG4Uw3gT3tAbTL0V" width="244" height="174" /></a> <p><b><font size="5">Step #1 - Create Storage Account</font></b></p> <p>While it is possible to send the full JSON directly to a Function App’s HTTP Trigger, logs exceeding the maximum size would fail, so I opted to first create a JSON file and temporarily place it onto a Storage Account container so it can be retrieved by a Function App for processing. The storage of the JSON can be permanent as well, but most environments I work with typically send AAD logs to a storage account for audit retention, so this design will only have the file stored for processing and then deleted afterwards.</p> <p>Begin by creating a Storage Account and a container that will store the JSON file.
For the purpose of this example, the container will be named: <b>integration</b></p> <a href="https://drive.google.com/uc?id=12rt_xL6X5ZducVVpXEyaXW_n0LqcOq5H"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=11D7EJpe6rcvdkDVPYDN3CqmoPlpiTwiU" width="244" height="78" /></a> <p>The Function App that will be created will temporarily place a file similar to the one shown in this screenshot: </p> <a href="https://drive.google.com/uc?id=19ZH-GrhKuCvn1cM2bMY-of7TSp3ediXk"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1VXD0kXPzLU20m5P8VksO6cix5cGJ_Img" width="244" height="48" /></a> <p>Due to the sensitivity of the data, we want to ensure that the container is not publicly accessible, so the <b>Public access level</b> should be configured as <b>Private (no anonymous access)</b>:</p> <a href="https://drive.google.com/uc?id=1haf6dtd9Noxlbe-5MlaMfPyuizsjaI3o"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1_vP_6q3Msop9vE0DXGWBcwBTXJWl-2te" width="244" height="118" /></a> <p>For improved security, I always prefer to disable <b>Allow storage account key access </b>(Shared Key authorization) and use Azure Active Directory (Azure AD) for authorization.
The method in which the Function App will securely access the Storage Account container is through a <b>managed identity</b> maintained by AAD, so unless there is a need to allow shared key authorization, we can go ahead and disable it:</p> <a href="https://drive.google.com/uc?id=1ZdEcwfUk0y37aooSnB2tVjbeVByHTZWS"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1C7Ud9x8Ap2dSQKBiVD6MR_EGNwLaRCxw" width="244" height="130" /></a> <p>You’ll notice that browsing into the container through the portal will now require the <b>Authentication method</b> to be configured as <b>Azure AD User Account</b>: </p> <a href="https://drive.google.com/uc?id=16SRerpAbVRBn0RZm-QnbJd9ZmKlp_Vk5"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1WidfS8UvAzImxdqn4uAwBcJm3VJRN4jU" width="244" height="69" /></a> <p><b><font size="5">Step #2 - Create Azure Function App</font></b></p> <p>With the storage account created, we can proceed to create the Azure Function App that will be triggered via HTTP with the URL of the JSON file passed to it.</p> <a href="https://drive.google.com/uc?id=1E6CfE_e_LVoAu-6Z0bJwVPqIaQ7lk6BP"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1q5-ydHfE11sDlgMDLBXO6Kt4ywYO0cl4" width="244" height="86" /></a> <p>Create a new function of the type<strong> HTTP Trigger</strong>:</p> <a href="https://drive.google.com/uc?id=1jb2tJ4XQX6u6tiMfBVUKbqRVnOCo0-OR"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1fVSudxSxpJlZNpewQdSlP_hB-_Q6KBkl" width="216" height="244" /></a> <p>Open the function, navigate to <b>Code + Test</b> and paste the code from my GitHub repo into the function: <a
href="https://github.com/terenceluk/Azure/blob/main/Function%20App/JSON-To-HTML-Function.ps1">https://github.com/terenceluk/Azure/blob/main/Function%20App/JSON-To-HTML-Function.ps1</a></p> <a href="https://drive.google.com/uc?id=1KSDHh1CByUq9lVvJ75N906o96IfHMLbA"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1UyJ2FOOB8fqVn1-_Fs3XRVma12DSUx4g" width="244" height="111" /></a> <p>Notable items in the code are the following:</p> <ol> <li>The container name is extracted from the full path to the JSON file with Regex </li> <li>The blob and storage account names are extracted from the full path to the JSON file with the <b>substring</b> and <b>indexOf</b> methods </li> <li>The function app expects the full URL path to be passed as JSON like the following: </li> </ol> <pre>{
  "body": "https://rgcacinfratemp.blob.core.windows.net/integration/AD-Report-06-29-2023.json"
}</pre> <p>Another way to define the storage account and container name for the function app is in the <b>Application settings</b>, but this hardcodes the values and requires manual updating:</p> <a href="https://drive.google.com/uc?id=1aaMGnzGV8yNEfnLz2kgcFqElawi2rCvZ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1gDOXMR4z9awduBYyF4N4IZqFH9CexmU6" width="244" height="146" /></a> <p>The function app uses two Az modules to authenticate as a managed identity and retrieve the JSON file. Rather than loading the full Az module, which I have never had luck with because the time it takes to download causes my function apps to time out, we will only load <b>Az.Accounts</b> and <b>Az.Storage</b>.
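</p> <p>The extraction logic in the notable items above boils down to slicing the blob URL into its host and path segments. A minimal sketch of the same idea (Python for illustration — the actual function does this in PowerShell with Regex and the IndexOf/Substring methods):</p>

```python
from urllib.parse import urlparse

def parse_blob_url(url):
    """Split a blob URL such as
    https://account.blob.core.windows.net/container/blob.json
    into its storage account, container, and blob names."""
    parsed = urlparse(url)
    account = parsed.netloc.split(".")[0]               # first host label
    container, _, blob = parsed.path.lstrip("/").partition("/")
    return account, container, blob
```

<p>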
Proceed to navigate to the <b>App files</b> blade, open <b>requirements.psd1</b> and edit the file as such:</p> <pre># This file enables modules to be automatically managed by the Functions service.
# See <a href="https://aka.ms/functionsmanageddependency">https://aka.ms/functionsmanageddependency</a> for additional information.
#
@{
  # For latest supported version, go to 'https://www.powershellgallery.com/packages/Az'.
  # To use the Az module in your function app, please uncomment the line below.
  # 'Az' = '10.*'
  'Az.Accounts' = '2.*'
  'Az.Storage' = '4.*'
}</pre> <a href="https://drive.google.com/uc?id=1k7V2m9tiyErE4ed33WUHGdjSVvPZfRom"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1GzLedBtDggvypbbSmZSGAcUgRLKWI8zL" width="244" height="119" /></a> <p>I’ve run into scenarios where the modules do not get downloaded or loaded properly, and the way I typically troubleshoot the issue is to navigate into Kudu for the function app to check which modules were or were not downloaded via the URL:</p> <p>https://json-to-html-converter.<b><font size="4">scm</font></b>.azurewebsites.net/</p> <a href="https://drive.google.com/uc?id=1gSGg3NNJn6W98DxMjKW9wJbqCUsVfhgf"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1j_sOFK6ISIx-68s5w2FyV7b36AS_lbbL" width="244" height="80" /></a><a href="https://drive.google.com/uc?id=1x7YaGWe3kC1ljbbzr0hh807Z07Vi-bE0"><img title="image" style="margin: 0px; display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1upB2EyvN92SP8cmvs_Vij1w5TlbA6IJG" width="244" height="87" /></a> <p>Once the function app code has been saved and the configuration updated, proceed to navigate to the <b>Identity </b>blade and turn on
<b>system managed identity</b>:</p> <a href="https://drive.google.com/uc?id=16gCDgobWGp_27vck16eX3z5uFzZNuWg7"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1YuHpPvZnMYbWZOgBxkhZ-Ex6jjh-IIU7" width="244" height="112" /></a> <p><b><font size="5">Step #3 - Create Logic App</font></b></p> <p>One of the key differences between the plain table report and the new report is that the old one uses <b>Run query and visualize results</b> to query Log Analytics for the report details, while the new report uses <b>Run query and list results</b> to query Log Analytics for the data. The <b>Run query and visualize results </b>action provides these output options:</p> <ul> <li>Html Table </li> <li>Pie Chart </li> <li>Time Chart </li> <li>Bar Chart </li> </ul> <a href="https://drive.google.com/uc?id=1FyjifdUOnKBK5NoXbRJJfR8aP2vs863y"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1kH5WRZi16W2mk6vgK_va7vw1nbG-oLt9" width="244" height="235" /></a> <p>In order to generate an output that will allow us to create a customized HTML table and CSV file, we would need to use the <b>Run query and list results</b> action, which generates a JSON file.
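</p> <p>As an aside, the heavy lifting of turning those listed results into a CSV is mechanical — a rough sketch of the equivalent logic (Python for illustration; the Logic App performs this natively with its <b>Create CSV table</b> action, and the column names below are made-up examples that depend on the Log Analytics query):</p>

```python
import csv
import io

def records_to_csv(records):
    """Write a list of dicts to CSV, using the union of all keys as the
    header row; records missing a column get an empty cell."""
    headers = []
    for rec in records:                      # preserve first-seen key order
        for key in rec:
            if key not in headers:
                headers.append(key)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=headers, restval="")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

<p>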
This JSON file allows us to create a blob on a storage account container that will be used to generate a customized HTML table, as well as create a CSV file:</p> <a href="https://drive.google.com/uc?id=18AY9_3-dPixrRgp31xjx7Af4pc5jYfJK"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1sLJriqCx2g0k83Hn4PeOUGObkhr_mc5I" width="183" height="244" /></a> <p>We want to create the JSON file with a meaningful name, so we’ll be using the concat function to name the file:</p> <p><b>concat('AD-Report-',formatDateTime(utcNow(), 'MM-dd-yyyy'),'.json')</b></p> <p>This expression will generate a file with the name <b>AD-Report-&lt;today’s date&gt;.json</b></p> <p>The blob content will be provided by the results from the <b>Run query and list results </b>action.</p> <a href="https://drive.google.com/uc?id=1ohEAKj0nYyAyqOYdnCSpYZygl99asFZx"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1bHB4rCLs6Q1xfspdh3G6O-5YXf6Nh5Qp" width="244" height="209" /></a> <p>Once the JSON file with the AAD logs is created and placed into a storage account, the Logic App will call an Azure Function App and pass the full URL path so the Function App can retrieve the JSON file, format the data into an HTML table, then return it to the Logic App. Upon receiving the HTML-formatted results, the Logic App will then delete the log file.
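</p> <p>For reference, the file-naming expression above is equivalent to this small sketch (Python for illustration):</p>

```python
from datetime import datetime, timezone

def report_blob_name(now=None):
    """Equivalent of concat('AD-Report-', formatDateTime(utcNow(),
    'MM-dd-yyyy'), '.json'), e.g. AD-Report-06-29-2023.json."""
    now = now or datetime.now(timezone.utc)
    return f"AD-Report-{now.strftime('%m-%d-%Y')}.json"
```

<p>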
The remaining 2 steps after obtaining the properly formatted HTML code are to create and set a variable so it can be used to send the logs as a table.</p> <a href="https://drive.google.com/uc?id=1t7D1jJBvTK7evWfHzF5dtuMvGJFmM7kc"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1B3lOFVHi2T6G2fzw8270k0iWreZ1iFEe" width="173" height="244" /></a> <p>With the HTML email report ready, we will then use the <b>Create CSV table</b> action to create a CSV file from the <b>Run Query and List Results</b> action and send the email:</p> <a href="https://drive.google.com/uc?id=1YYo7-8b4jdKYmlmNhJSsuc3T2aKbDKNq"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1wtn4KHprNQWy2MHAjJAhzAx9yRYGODG0" width="186" height="244" /></a> <p>The following is a screenshot of how the email is composed, with the <b>EmailBody</b> variable containing the HTML content, attaching the CSV table as an attachment and providing it the same name format:</p> <a href="https://drive.google.com/uc?id=1Z4FjxfgQ6hWbr5FbkAkX2Qr5KBzEoLse"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=189JqYK9PjFBY6fMguSpbQaxbYw6_y0D8" width="243" height="244" /></a> <p>Once the Logic App has been saved, proceed to navigate to the <b>Identity </b>blade and turn on <b>system managed identity</b>:</p> <a href="https://drive.google.com/uc?id=1ZE_aPuGnx5Lbax9RJDmgNm_gFxoyoeR7"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1KuCstJuhLgVmpSNhKeD9_d_11ciYCKLK" width="244" height="94" /></a> <p><b><font size="5">Step 4 – Assign managed identity for the Function App and Logic App permissions to the Storage Account</font></b></p> <p>The last step is to grant the managed identities the appropriate permissions to the storage account.</p>
<p>The <b>Azure Function App</b> will only need <b>Storage Blob Data Reader</b> because it only needs to retrieve the JSON file.</p> <p>The <b>Logic App </b>will need <b>Storage Blob Data Contributor</b> because it will need to write the JSON file to the storage account and then delete it afterwards.</p> <a href="https://drive.google.com/uc?id=1MYtW9BYgiYN7SL_2qculFp1L-RV9jZhN"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1W5XgDNm49frdpT4GmYeRCfe9ufxRAirR" width="244" height="40" /></a> <p><b><font size="5">Step 5 – Test Report</font></b></p> <p>Proceed to run the Logic App and the following report should arrive in the configured mailbox:</p> <a href="https://drive.google.com/uc?id=1hkE8mPKVfzv2qB_y27Q_valb1zel6Jz2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1YeRkO_rLeUXHtKNYGUu8KeuKCxV8wzkf" width="244" height="125" /></a> <p>Note that the <strong>CSS nth-child selector</strong> for even and odd rows does not work with Outlook, so while the generated HTML would display alternating blues for rows as shown in the screenshot below, the report sent to Outlook would not look the same.</p> <a href="https://drive.google.com/uc?id=1QJ3B5EzOtPlUG-VZrnDxoq0e5kn6Z2J0"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1oMdF0N1lrmbki0cq-qaoOwr5YU_6oPeD" width="244" height="133" /></a> <p><b><font size="5">Troubleshooting</font></b></p> <p>If the Logic App does not generate the report, the following PowerShell script can be used to troubleshoot by calling the Function App directly.</p> <p>GitHub: <a href="https://github.com/terenceluk/Azure/blob/main/Function%20App/Test-Calling-API.ps1">https://github.com/terenceluk/Azure/blob/main/Function%20App/Test-Calling-API.ps1</a></p> <pre>$Body = @{
path = "https://storageAccountName.blob.core.windows.net/integration/AD-Report-06-28-2023.json"
}

$Parameters = @{
Method = "POST"
Uri = "https://yourFunctionName.azurewebsites.net/api/Converter?code=xxxxxxxxxxxxm_Dnc_avHxxxxxxxxxxxxxxDH1A=="
Body = $Body | ConvertTo-Json
ContentType = "application/json"
}

Invoke-RestMethod @Parameters | Out-File "C:\Temp\Call-API.html"</pre> <p>The Function App URI can be located in the field shown in the screenshot below:</p> <a href="https://drive.google.com/uc?id=1ouI1jt-rTI1PRR9mERte7rLvfIID0MZ3"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1PKi6FrB9OwfNROKMKj38flLzdcVN2ljG" width="244" height="60" /></a>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-12518590260117251332023-06-23T06:40:00.001-04:002023-06-23T06:46:16.519-04:00Useful Kusto Query / KQL queries for Azure Firewall Troubleshooting<p>I do not often get the opportunity to do many hands-on deployments of Azure services on projects due to my role as an architect, so when I do, I tend to spend a lot of time working with the service to try to understand the ins and outs of the product.
One of my recent projects provided me the opportunity to deploy the Azure Firewall that I designed, and I noticed that there weren’t many Kusto query examples available for troubleshooting inbound and outbound traffic, so I wanted to post a link to my GitHub repo where I have built (and continue to build upon) KQL queries for querying Azure Firewall logs to monitor traffic: <a href="https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Azure-Firewall.kusto">https://github.com/terenceluk/Azure/blob/main/Kusto%20KQL/Azure-Firewall.kusto</a></p> <p>I tried to demonstrate customizations such as time zones, relative ranges (days ago), start and end times, and variables. These basic KQL queries helped me troubleshoot the Teams outbound traffic that was being blocked, as well as produce the weekly reporting I needed to deliver to the client. I hope this helps anyone who might be looking for example queries and can use these as a start.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-44589041710655307642023-06-23T06:39:00.001-04:002023-06-23T06:39:42.523-04:00Microsoft Teams audio calling fails with the error: “We ran into a problem – Try again in a few minutes” on Azure Virtual Desktop with Teams Media Optimization<p>One of the issues I recently encountered during an Azure Virtual Desktop deployment with Teams Media Optimization was where outbound calls from the virtual desktop would display the spinning wheel while constantly playing the dialing audio until the call fails with:</p> <p><b>We ran into a problem <br />Try again in a few minutes</b></p> <p><a href="https://drive.google.com/uc?id=11QlMcdxhMHvOmeSQu9pvyeuqaZO-CDns"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=18eY753hUxs5VAD9k_h0es-LQmM9WIlwG" width="244" height="154" /></a></p> <p><a
href="https://drive.google.com/uc?id=16X_8KuAXvKC8osj3SuaM1IutL6qQrTig"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1iZDSQRKkVA8_e5KJipMtC8nBq8ITFk_r" width="244" height="72" /></a></p> <p>I wasn’t sure whether the cause was the ordering of the software installation or the Remote Desktop client app, and after reinstalling all the components I was still receiving this error. I then remembered that the profile cache may be the cause, so I navigated to <b>%appdata%\Microsoft\Teams</b> to delete the files:</p> <a href="https://drive.google.com/uc?id=1SXyBs5gW_SfE_yAQj9D0lqB6ZIRb7ABn"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1CnHG1qoA1pV3uPsigh0yKVgYzmTpmUac" width="244" height="187" /></a> <p>I then tried dialing again and this corrected the issue. The issue took a bit of time to resolve, so I hope this short blog post will help anyone who may encounter this problem.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-41920271594580783892023-06-22T12:34:00.001-04:002023-06-22T12:38:55.938-04:00Azure Virtual Desktop Teams Media Optimization fails to display local client devices<p>I’ve configured Teams Media Optimization with Azure Virtual Desktop quite a few times in the past, as per the following Microsoft documentation:</p> <p><strong>Use Microsoft Teams on Azure Virtual Desktop <br /></strong><a href="https://learn.microsoft.com/en-us/azure/virtual-desktop/teams-on-avd">https://learn.microsoft.com/en-us/azure/virtual-desktop/teams-on-avd</a></p> <p>The configuration isn’t difficult and I never had any issues until recently, when I had to repeat the same steps for an environment I worked on.
After performing all the steps, I noticed that the settings in Teams would either display:</p> <p><strong>Audio Devices: Custom Setup <br />Speaker: None <br />Microphone: None</strong></p> <p>… which means no devices are redirected or optimized:</p> <a href="https://drive.google.com/uc?id=1gu9HAqrQ9yTaTIs6O9J9Ar4qjSat0c3U"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=171Y6eTe1771RqcfRzHnOi8dObEfMK956" width="244" height="241" /></a> <p>Or:</p> <p><b>Audio Devices: Custom Setup <br />Speaker: Remote Audio <br />Microphone: Remote Audio <br />Camera: Integrated Camera (Redirected)</b></p> <p>… which means the audio and video devices were being redirected, but not optimized. </p> <p><b>**Note</b> that redirection works if these RDP settings are configured:</p> <p><b>audiocapturemode:i:1 </b>Enables audio capture from the local device and redirection to an audio application in the remote session</p> <p><b>audiomode:i:0 </b>Plays sound on the local computer</p> <p><b>camerastoredirect:s:* </b>Redirects cameras</p> <a href="https://drive.google.com/uc?id=1c5_bHKYFSn6osoYkXaDh45SF6M2lDl-R"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1w3b8l3B-JWT0aR3bGtvSX20F7K2_kIqU" width="244" height="244" /></a> <p>After going through all the steps multiple times and not having any luck, I recalled an issue I experienced a long time ago: if I had logged into Teams on an Azure Virtual Desktop <b>BEFORE</b> configuring Microsoft Teams Media Optimization, the optimization would fail.
This generally wasn’t an issue for me as I always configure the optimization before rolling out the desktops, but in this instance I had not, so I went into the folder <b>%appdata%\Microsoft\Teams</b> to delete all the items and, lo and behold, it corrected the issue.</p> <a href="https://drive.google.com/uc?id=1O3zeFWG92myhegaShVWrZjabQu6fdvPr"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=18HtcULxV_qWfXGajrUu_cvzU0Xntb9Si" width="644" height="182" /></a> <p>I haven’t encountered this issue often, but it took up quite a bit of my time to troubleshoot, so I hope others with this issue will find this post and be able to resolve it more quickly.</p> <p>The versions of the applications I used for this deployment are:</p> <p><b>Microsoft Teams</b>: 1.6.00.11166</p> <p><b>Remote Desktop WebRTC Redirector Service</b>: 1.33.2302.07001</p> <p><b>Microsoft Remote Desktop</b>: 1.2.4337.0 (x64)</p> <p><b>Microsoft Visual C++ 2015-2022 Redistributable (x64)</b>: 14.36.32532.0</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-64946254637924425772023-06-13T09:21:00.001-04:002023-06-13T09:21:27.876-04:00Designing Azure Storage Account Regional Failover with Private Endpoints<p>I’ve had the opportunity to work on several projects over the past year to design disaster recovery from one Azure region to another.
One of the most common topics that comes up is how to handle storage accounts that are accessed through private endpoints and have public endpoints disabled:</p> <p><a href="https://drive.google.com/uc?id=1xt_iOVJt1Y3gHvxL103kDqgeRL7gYpya"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1W5Azr1xLv11QxYrB0qHGVI7dYHOEvlIw" width="244" height="133" /></a></p> <a href="https://drive.google.com/uc?id=17-i42lrzke1qHzN5rrCecy4hJrQJnjCA"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=15w_sWk0QA5ch7gjhFhLeaAw4UCLg_hjX" width="244" height="58" /></a> <p>The purpose of this blog post is to provide a walkthrough of possible methods to design regional failover with private endpoints.</p> <p><b><u><font size="5">Sample Environment</font></u></b></p> <p>Take the following topology as an example:</p> <a href="https://drive.google.com/uc?id=17AQseviAASWWsMfqonRoAUpiF13Omj3u"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1yxourPvzaIt1wwR6wW_qpgt8iVScz-ib" width="168" height="244" /></a> <p>In this topology, we have a storage account in the <b>East US</b> region that is configured with <b>Read-access geo-redundant storage (RA-GRS)</b> so all data written to it will automatically get written to the paired region in <b>West US</b>:</p> <a href="https://drive.google.com/uc?id=1KgAfNGC8ywIHlVFZJQG1wlcFwsPdS67K"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Cqod1cTrqcKEOwQEwY9qTdYx8CPUwJEz" width="244" height="190" /></a> <p>Since <b>Read Access</b> is configured, a secondary endpoint is available for read access on the replicated copy in the secondary region:</p> <a href="https://drive.google.com/uc?id=1a90Q-TJoWf5zQgvFHGeAjrIaaWU6XV2l"><img title="image" style="display: 
inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1mPiDUzycYB7GcLmhl-r2A7V26huu_rXU" width="244" height="156" /></a> <p>A private endpoint is provisioned in the <b>East US </b>region so the <b>vm-east-us-prod</b> virtual machine can access the storage account privately from its subnet address 10.1.0.4 to the private endpoint at 10.1.2.4 within the <b>vnet-east-us</b> VNet:</p> <a href="https://drive.google.com/uc?id=12X_AsCmEAkQU739YlSGajLvBEEYSxMNn"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1srJmHEyRSfdZoLg93rfn5cwzOCLKRKoA" width="150" height="244" /></a> <p>Although a secondary endpoint is available, it should not be mistaken for an endpoint that can be used for DR purposes, because it only allows read access via the public endpoint to the replicated copy in <b>West US</b> during normal operation. </p> <p>Notice that there is a pre-deployed virtual machine in the <b>West US</b> region that serves to provide continued access to the storage account in the event the <b>East US</b> region is unavailable. This type of setup is very common in most environments, as a DR failover region is typically pre-staged with networks that will host the resources needed to continue operations in the event the primary region is down.</p> <p><b><u><font size="5">Scenario #1 – Shared Private DNS Zone for Primary and Secondary Regions</font></u></b></p> <p>One common design that can be used between two regions is where the <b>Private DNS Zone</b> is shared between the VNets in the two regions.
This configuration allows for both VNets to use the same DNS zone for name resolution, and therefore to resolve the same private IP address configured for the private endpoint in the primary region, providing access to the storage account:</p> <a href="https://drive.google.com/uc?id=1sjy2ZXBIHNoq4YUIlziqSKHERJPtwqYN"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1W4ROezBWK-_epXO_TEKvYTwYqkkh319y" width="244" height="240" /></a> <p>In the diagram above, the secondary region’s virtual machine is placed in a VNet that is linked to the same private DNS zone:</p> <a href="https://drive.google.com/uc?id=1krpDRVlO6D83yXmzkY5lei2RxE2wQSdC"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1RBDPLYD9SkS0K5MNgKPQlJUJwqS6Dft_" width="244" height="77" /></a> <p>It is important to note that the reason we are able to link the two VNets to the same <b>Private DNS Zone</b> is because these are <b>Global</b> resources, even though they are placed in regional resource groups:</p> <a href="https://drive.google.com/uc?id=14bxFtF0GQbEgbrmQ8j_hKHlfB9EiTT5x"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1kxWIXxNExqnPry4S5fFYah2EgzyXfWE5" width="244" height="152" /></a> <p>This type of configuration means that attempting to resolve <b>easteusblobprod.privatelink.blob.core.windows.net</b> in both regions will direct the traffic to the private endpoint deployed in <b>East US</b>, and since the two regions have <b>Global VNet Peering</b> configured, the <b>West US</b> traffic will traverse that connection to the <b>East US</b> region.</p> <a href="https://drive.google.com/uc?id=14X4N3tuvwm0K7bSJIBLcUoEvrUNjbk2G"><img title="image" style="display: inline; background-image: none;" border="0" alt="image"
src="https://drive.google.com/uc?id=1uI7FvsACkEnrDxIA3S0L4HYPwBpaIReD" width="244" height="135" /></a> <p>In the event where the storage account is unavailable in <b>East US</b> or it has been manually failed over to the West US region, traffic will continue to be directed to the private endpoint in <b>East US</b>, then sent over a private link to the failed over storage account in the <b>West US </b>region, which now has become <b>LRS</b> (Locally-redundant storage):</p> <p><a href="https://drive.google.com/uc?id=1k5Q_K7KSM4KDb9y5oHGCzvZQLJIcFEgL"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ycsGXC9AUaLchVgY775GpaRPpkvfO5xj" width="244" height="169" /></a></p> <p><a href="https://drive.google.com/uc?id=15MtNW7f64NRhDIe3CHsrlju61-UQviwp"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1mg8rsWXdh-2LvdyR9jeysDjeaDflHdj-" width="244" height="240" /></a></p> <p>Such a design unfortunately would not provide the required access in the event of an <b>East US</b> regional failure because the primary private endpoint will no longer be available if <b>East</b> <b>US</b> becomes unavailable:</p> <a href="https://drive.google.com/uc?id=1SfrPU7hLMJyF8M7hEcTZEAYbEI1nxnS5"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=17ppja2XOPNcJVOTCwGcwxLYFk9Niw8gw" width="237" height="244" /></a> <p>A common design is to have a DR runbook that performs the following in the event of a regional failure:</p> <ol> <li>Provision a new private endpoint in the <b>West US</b> region </li> <li>Update the <b>Private DNS Zone</b>’s record to direct traffic to the new private endpoint </li> </ol> <a href="https://drive.google.com/uc?id=1HzhFWh1fYm-VzWwa4mK1BJ-dplJytAPI"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" 
src="https://drive.google.com/uc?id=1RELqqZJWLB2k5trVOyKm2SUA20oTrfwY" width="244" height="242" /></a> <p>This type of design requires manual steps to be executed but saves cost in the disaster recovery region: while a private endpoint costs $0.014 (CAD) per hour, which equates to around $10.22/month, larger environments can have many private endpoints, and charges for resources that are not actively used aren’t well received by organizations. Environments leveraging automation using Infrastructure as Code are great candidates for this type of design, as the resources and changes can be executed with little manual labour. Furthermore, disaster recovery solutions are not always automatically invoked, so having to provision private endpoints in the event of a catastrophic event is not uncommon. An example of this could be leveraging Azure Site Recovery to recover VMs with its recovery plan capability to execute Azure Automation runbooks.</p> <p><b><u><font size="5">Scenario #2 – Separate Private DNS Zone for Primary and Secondary Regions</font></u></b></p> <p>If there is a desire to pre-provision all resources to either fully automate or reduce the amount of manual labour involved in the event of a DR, it is possible to provision a private endpoint in the disaster recovery West US region that is linked to the storage account. The important design change here is that a second private DNS zone is created for the DR region and linked to the VNet, as shown in the diagram below:</p> <a href="https://drive.google.com/uc?id=19jQIPiCiUbILyopwOlfeLwXOwGdIg259"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1z8SZKP1D62eaJkO9_JrQFpTj-A0jdD7U" width="237" height="244" /></a> <p>Notice that the pre-provisioned private endpoint will now allow the virtual machine in the West US DR region to access the storage account through a private link rather than the global VNet peering.
I won’t go into the details, but I have had cross-region active/active deployments configured with such a design.</p> <p>Here is how the configuration looks in the Azure portal:</p> <p><a href="https://drive.google.com/uc?id=1wxsvE-Rp1ToqDxfwwW5b4Omk49BEnfyJ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1uUk-UB4qzLAR-pTgr_EILMXiOeQISBV1" width="244" height="61" /></a></p> <p><a href="https://drive.google.com/uc?id=1huBUfX9_xKBgrBq5hYF3n_pLTOz8vyz2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1FTTq8OYkUInc6cqGGXJyVF7sbKkML4tJ" width="244" height="89" /></a></p> <p><a href="https://drive.google.com/uc?id=1Nf2rjoY5vgOlFWhPbLs9EHKhQJFZ4iT0"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1c-FbznRZXF_E23nT5NqSg_vRWV1iTlg4" width="244" height="47" /></a></p> <p><a href="https://drive.google.com/uc?id=1j4KVFnV_ozjF_4DYHlSNadrb_umSQ2i2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1yVOXeV8c-txDuVppwm7DNBP10VCefRRg" width="244" height="115" /></a></p> <p>With the above design, a regional loss requires no manual configuration to access the storage account after it fails over to West US:</p> <a href="https://drive.google.com/uc?id=1FQ8POqy7pZba5AgzgzlFgU78viRI2ijg"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1sWbxgewBWsRDT4upq4fzRfgtR4MzgRA6" width="219" height="244" /></a> <p>In summary, this design removes the requirement of provisioning a private endpoint and updating DNS in the event of a disaster recovery. However, it does incur additional cost, as well as the overhead of maintaining multiple private DNS zones associated with the different VNets in each region.
There will also be additional considerations when there is on-premises hybrid connectivity to the Azure regions and traffic originating outside of Azure needs to reach the private endpoint.</p> <p>I hope this gives the reader a good idea of the designs available for providing private endpoint connectivity in the event of a disaster recovery.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-89121967696633385612023-06-13T09:11:00.001-04:002023-06-13T09:11:41.084-04:00PowerShell Script that will use the OneTimeSecret service to generate and return a URL to access a password<p>One of the frequent questions I have been asked after my post:</p> <p><b>Using Microsoft Forms and Logic App to create an automated submissions and approval process for Azure AD User Creation <br /></b><a href="http://terenceluk.blogspot.com/2023/04/using-microsoft-forms-and-logic-app-to.html">http://terenceluk.blogspot.com/2023/04/using-microsoft-forms-and-logic-app-to.html</a></p> <p>… was whether there is a more secure way to include the password of the newly created user in an email rather than just pasting it into the confirmation email. The main reason I chose to include the password in plain text is that the password is temporary and the user is required to change it upon successful logon. Nevertheless, I’ve always preached that passwords should never be included in email, so I would like to provide an alternative that better protects the password being shared.</p> <p>The method I would recommend is to use a service such as OneTimeSecret, which provides a link to a page containing the password; the link can only be opened once and has an expiry.
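</p> <p>As a rough illustration of the approach (not the script itself), creating a secret through the OneTimeSecret REST API is a single authenticated POST whose response contains a secret key used to build the one-time link. The Python sketch below assumes the public v1 <b>share</b> endpoint and uses a hypothetical account and API token:</p>

```python
import base64
import json
import urllib.parse
import urllib.request

API_BASE = "https://onetimesecret.com/api/v1"

def build_share_request(secret: str, ttl_seconds: int,
                        username: str, api_token: str) -> urllib.request.Request:
    """Build the POST to /v1/share: form-encoded body plus Basic auth header."""
    body = urllib.parse.urlencode({"secret": secret, "ttl": ttl_seconds}).encode()
    auth = base64.b64encode(f"{username}:{api_token}".encode()).decode()
    return urllib.request.Request(f"{API_BASE}/share", data=body,
                                  headers={"Authorization": f"Basic {auth}"})

def secret_url(secret_key: str) -> str:
    """The one-time link is the site URL plus the secret_key from the response."""
    return f"https://onetimesecret.com/secret/{secret_key}"

# Usage (requires a real OneTimeSecret account and API token):
#   req = build_share_request("Temp-P@ssw0rd!", 7 * 24 * 3600,
#                             "user@example.com", "YOUR_API_TOKEN")
#   with urllib.request.urlopen(req) as resp:
#       print(secret_url(json.load(resp)["secret_key"]))
```

<p>A PowerShell equivalent would typically make the same call with Invoke-RestMethod and return the resulting URL to the calling workflow.</p> <p>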
The following is a PowerShell script that can be used in an Automation Account with a webhook that receives a password, uses OneTimeSecret to create a link, and then returns that link.</p> <p>The PowerShell script can be found at my following GitHub repo: <a href="https://github.com/terenceluk/Azure/blob/main/PowerShell/Generate-OneTimeSecret-URL.ps1">https://github.com/terenceluk/Azure/blob/main/PowerShell/Generate-OneTimeSecret-URL.ps1</a></p> <a href="https://drive.google.com/uc?id=1cokZ9H7MF-O2GrLSi6KEGhNP8NAejpm2"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1MRAoi6udDVl6Ma5RzH9ITgut14aVPvuM" width="244" height="166" /></a>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0tag:blogger.com,1999:blog-2228947945609574437.post-24638901985860702752023-06-10T13:08:00.001-04:002023-06-13T09:03:42.539-04:00Attempting to join a Windows desktop to an Active Directory Domain Services (AD DS) domain fails with: "The following error occurred attempting to join the domain contoso.local": The specified network name is no longer available.<p>One of the projects I’ve been working on was a small Azure Virtual Desktop deployment for resources outside of Canada to securely access a VDI in Azure’s Canada Central region. To provide a “block all traffic and only allow whitelisted domains” solution, I opted to use the new Azure Firewall Basic SKU with Application Rules. Given that there wasn’t any ingress traffic originating from the internet for published applications and connectivity to the AVDs was going to be through Microsoft’s managed gateway, I decided to place the Azure Firewall in the same VNet as the virtual desktops and servers. This doesn’t conform to the usual hub-and-spoke topology, and the main reason for this is to avoid VNet-to-VNet peering costs.
For the security network design, I elected to send all traffic between subnets within the same VNet through the firewall for visibility and logging, so the default free flow of traffic within the same VNet is not allowed. The following is a diagram of the topology:</p> <a href="https://drive.google.com/uc?id=1hOgtLIfrhtkaR6d8PtTw3mNw7MfxDfJZ"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1ipdJlGgqQHVLlj6VHjHhBKWpZkVVBmU8" width="244" height="116" /></a> <p>Traffic originating from the AVD subnet containing the virtual desktops to the server subnet containing the AD DS servers is protected by the firewall. After placing the required route in the UDR associated with the AVD subnet and configuring the required client-to-server ports in the <b>Network rules</b> of the firewall policy:</p> <ul> <li>TCP and UDP Port <b>88</b> for Kerberos authentication. </li> <li>TCP and UDP Port <b>135</b> for client-to-domain-controller and domain-controller-to-domain-controller operations (RPC endpoint mapper). </li> <li>TCP Port <b>139</b> and UDP Port <b>138</b> for the File Replication Service between domain controllers. </li> <li>TCP and UDP Port <b>389</b> for LDAP to handle regular queries from client computers to the domain controllers. </li> <li>TCP and UDP Port <b>445</b> for SMB and the File Replication Service. </li> <li>TCP and UDP Port <b>464</b> for Kerberos password changes. </li> <li>TCP Port <b>3268</b> and <b>3269</b> for Global Catalog queries from client to domain controller. </li> <li>TCP and UDP Port <b>53</b> for DNS from domain controller to domain controller and from client to domain controller. 
</li> </ul> <a href="https://drive.google.com/uc?id=1NRVJr2_KDMyJ-g5OcFHtKbH99zH7mZaz"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Shlba_6xW4nb9e20v3Me3RwGLU-PvFKs" width="244" height="43" /></a> <p>… then proceeding to deploy the desktops with AVD, the domain join of the desktops would fail with the error message:</p> <p><b>VM has reported a failure when processing extension 'joindomain'. Error message: "Exception(s) occurred while joining Domain contoso.local</b></p> <p>Trying to manually join the desktops to the domain would display the following message:</p> <p><b>"The following error occurred attempting to join the domain contoso.local": The specified network name is no longer available.</b></p> <a href="https://drive.google.com/uc?id=1jQkqEQDkMkT5e6UmtzLYLIt8h6tsaUNz"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1V8n_qr__KnrxIyRoJ-y0cGfSchjXObzp" width="244" height="152" /></a> <p>Parsing through the Azure Firewall logs did not reveal any <b>Deny</b> activity, but I did notice that no return traffic was captured. It was then that I realized I had forgotten to associate the UDR that forces traffic from the server subnet to the VDI subnet through the firewall. 
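</p> <p>As an aside, the client-to-domain-controller port list above can also be captured as data — for example, to sanity-check a firewall policy programmatically. The sketch below is illustrative Python, not Azure Firewall rule syntax:</p>

```python
# AD DS client-to-DC ports (protocol, port, purpose), capturing the rule set above
AD_DS_RULES = [
    ("TCP", 88, "Kerberos authentication"), ("UDP", 88, "Kerberos authentication"),
    ("TCP", 135, "RPC endpoint mapper"), ("UDP", 135, "RPC endpoint mapper"),
    ("TCP", 139, "File Replication Service"), ("UDP", 138, "File Replication Service"),
    ("TCP", 389, "LDAP"), ("UDP", 389, "LDAP"),
    ("TCP", 445, "SMB"), ("UDP", 445, "SMB"),
    ("TCP", 464, "Kerberos password change"), ("UDP", 464, "Kerberos password change"),
    ("TCP", 3268, "Global Catalog"), ("TCP", 3269, "Global Catalog over SSL"),
    ("TCP", 53, "DNS"), ("UDP", 53, "DNS"),
]

def is_allowed(protocol: str, port: int) -> bool:
    """Check whether a protocol/port pair is covered by the AD DS allow list."""
    return any(p == protocol.upper() and n == port for p, n, _ in AD_DS_RULES)
```
<p>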
</p> <a href="https://drive.google.com/uc?id=1p11HyRSkah4p5bt5j4Ev-LrOfaD2xj1q"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1Dijsy3SSAIAz8H8hIT6El9BslKTKOi8V" width="244" height="128" /></a> <p>This meant that any traffic originating from the VDI subnet would be sent through the firewall:</p> <a href="https://drive.google.com/uc?id=1WYoX54fr8SiPo5Hg49apMYFzM8KIXtUb"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=1jJd4w8aYH6c9c8A87Vxbk-i5diVARkRd" width="244" height="128" /></a> <p>… while any traffic originating from the server subnet to the VDI subnet would simply flow subnet to subnet within the same VNet. I’m not completely sure why this would be a problem, given that return traffic should have gone back through the firewall and only new traffic initiated by the domain controllers would not.</p> <p>In any case, I went ahead and updated the server subnet to use the UDR that routes the traffic through the firewall, and the domain join operation succeeded. The firewall logs also began displaying the domain communication traffic to the AVD subnet.</p> <p>This probably would have been resolved when I completed the configuration, but I hope this blog post helps anyone who may encounter a similar issue.</p>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com1tag:blogger.com,1999:blog-2228947945609574437.post-68323446413897946892023-06-10T13:06:00.003-04:002023-06-10T13:11:47.452-04:00PowerShell script for updating the domain of Azure AD accounts<p>One of the projects I’ve been involved in took over a year for a decision to be made on the custom domain that would be used for user accounts and the services that would be offered. 
This meant that all the accounts used the <strong>@somecompany.onmicrosoft.com</strong> domain for a year during development, and when the time came to register and use the new domain, there were already hundreds of accounts. Using the portal.azure.com GUI wasn’t practical given the number of accounts, so I wrote a PowerShell script to update them. The script can be found at my GitHub repo here: <a title="https://github.com/terenceluk/Azure/blob/main/PowerShell/Update-Azure-AD-UPN-Domain.ps1" href="https://github.com/terenceluk/Azure/blob/main/PowerShell/Update-Azure-AD-UPN-Domain.ps1">https://github.com/terenceluk/Azure/blob/main/PowerShell/Update-Azure-AD-UPN-Domain.ps1</a></p> <a href="https://drive.google.com/uc?id=1r0Tie6IDOwDU5oN54PbYg8DNns_4waKv"><img title="image" style="display: inline; background-image: none;" border="0" alt="image" src="https://drive.google.com/uc?id=18jXOCoqjRZ0HvIsdACfo_4zhvBitnXNe" width="644" height="205" /></a>Terence Lukhttp://www.blogger.com/profile/02612575579652280306noreply@blogger.com0
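<p>For illustration, the core transformation such a script performs — swapping the UPN’s domain suffix while keeping the local part — can be sketched as follows (a Python sketch, not the PowerShell script itself):</p>

```python
def update_upn_domain(upn: str, new_domain: str) -> str:
    """Replace the domain suffix of a UserPrincipalName, keeping the local part."""
    local_part, sep, _old_domain = upn.rpartition("@")
    if not sep or not local_part:
        raise ValueError(f"not a valid UPN: {upn!r}")
    return f"{local_part}@{new_domain}"

# Example: migrate the initial *.onmicrosoft.com accounts to the custom domain
users = ["alice@somecompany.onmicrosoft.com", "bob@somecompany.onmicrosoft.com"]
updated = [update_upn_domain(u, "contoso.com") for u in users]
```

<p>The actual script applies the equivalent rename against each Azure AD account rather than on plain strings.</p>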