
Friday, May 13, 2022

Let's learn IaC using Terraform with GitHub Actions to deploy into Microsoft Azure

It has been a busy start to the year and I regret that I haven’t been able to dedicate more time to blogging, so I intend to catch up on my backlog over the following months. One of the topics I’ve been meaning to write about is how to use Terraform with GitHub Actions to deploy infrastructure in Azure. Both Terraform (IaC) and GitHub Actions (orchestration engine) are technologies I’ve been self-training on over the past few months, and I am excited to continue building on the knowledge I’ve acquired. Those who may not be familiar with IaC can refer to one of my previous posts here:

Infrastructure as Code in 15 Minutes PowerPoint Presentation
http://terenceluk.blogspot.com/2022/04/infrastructure-as-code-in-15-minutes.html

My journey through learning these two technologies has been challenging but very fulfilling, and the purpose of this post is to share the various features I came across and what I’ve been able to put together to demo all of them. I have to admit that I am not an expert and there may be better approaches, so please feel free to comment on this post.

A few of my colleagues have indicated that it would be beneficial to include more diagrams in my posts rather than just writing, so I have taken the time to create a series of diagrams to better illustrate the workflow and the Terraform code that is used.

What I’m trying to Demo

The components and features I’m trying to demo are the following:

  1. How to use Terraform for IaC
  2. How to execute terraform init, format, validate, plan, apply, destroy in a workflow
  3. How to use different .tfvars variables to deploy different environments: dev, stg, prod
  4. How to use GitHub Actions as an orchestration engine for pipelines
  5. How to initiate a workflow manually, on push, on pull request, and on completion of another workflow
  6. How to have the workflow store the Terraform .tfstate file in an Azure Storage Account Container
  7. How to set a dependency on a previous step
  8. How to use and reference custom self-written Terraform modules stored in a different repository and in subfolders (some may not know about the // separator used to access a module in a subdirectory, e.g. source = "github.com/terenceluk/terraform-modules//<module-folder>")
  9. How to use and reference a Terraform Registry module
  10. How to use the Super-Linter to scan the code
  11. How to get a branch name
  12. How to pass a branch name to a different step
  13. How to store and use secrets in GitHub
  14. How to configure different environments in GitHub
  15. How to configure a protection rule for a GitHub environment
  16. How to require an approval before executing a workflow
  17. How to configure an Azure Storage Account Container to store the .tfstate file

GitHub Repositories

Let me begin by providing the links to the GitHub repositories I will be using for the demonstration.

GitHub repository that contains the GitHub Actions workflows and the Terraform code for deploying the dev, stg, and prod environments:

https://github.com/terenceluk/terraform-k8s-acr-psql-vms-demo

GitHub Repository that contains the Terraform modules that are referenced and used for the deployment of Azure Kubernetes Service, Azure Container Registry, and PostgreSQL server and database:

https://github.com/terenceluk/terraform-modules

I’ve added as many comments as I could into the code to explain its purpose, in hopes that whoever reads it will understand how it functions. Feel free to fork the repos and test or modify them as you see fit.

GitHub Repository Branches

There will be 3 branches in the GitHub repo:

  • Dev
  • Stg
  • Prod
image

The Terraform code and workflows will be pushed directly to the dev branch for testing, then merged into stg and prod.

Terraform and GitHub Actions Code

The GitHub Actions YAML files will be stored in the mandatory .github/workflows directory of the repository.

The Terraform code will be split as follows.

terraform-k8s-acr-psql-vms-demo repository

  • The main.tf, output.tf, provider.tf, and variables.tf files are stored in the root
  • The .tfvars files containing the variable values for dev, stg, and prod environments are stored in the subfolder Variables
  • The main.tf references modules that are stored outside of this repository:
    • Another GitHub public repository named terraform-modules
    • A Terraform Registry module

terraform-modules repository

  • This repository contains 3 modules that are used to deploy:
    • Azure Container Registry
    • Azure Kubernetes Service
    • PostgreSQL Server and Database
image

What we are deploying with Terraform

The resources that will be deployed are the following:

  1. Azure Container Registry
  2. Azure Kubernetes Service
  3. PostgreSQL Server and Database
  4. Linux Virtual Machine
  5. Windows Virtual Machine
  6. VNet with subnets
  7. Management lock for the Azure Container Registry

Workflow: terraform-plan.yml and terraform-apply-dev.yml

The terraform-plan.yml and terraform-apply-dev.yml workflows are dispatched whenever there is a push to the dev branch of the GitHub repo. I have included a diagram below that walks through the process and will also list the flow here (a condensed YAML sketch of the two workflows follows the list):

  1. The user updates the Terraform or GitHub Actions YAML code and pushes it to the dev branch of the GitHub repo
  2. The terraform-plan.yml workflow starts, as it is configured to run on push to dev
  3. Two jobs are now executed in parallel:
    1. Get-Branch-Name determines which branch the code was pushed to
    2. The Super-Linter is downloaded and run to scan the code
  4. Several Terraform commands are executed:
    1. fmt is run on the Terraform code to ensure it is formatted properly
    2. validate is run on the Terraform code
    3. init is run to initialize Terraform and configure the Azure Storage Account Container that stores the .tfstate file
    4. plan is run to generate a plan with the appropriate terraform-dev.tfvars file
  5. Once the terraform-plan.yml workflow is complete, the terraform-apply-dev.yml workflow is initiated
  6. Get-Branch-Name starts by obtaining the previous workflow run’s conclusion
  7. If the previous workflow was not successful, the workflow ends; if it was successful, get the branch that is currently being worked on
  8. If the branch is not dev, the workflow ends; if it is the dev branch, set the environment to the GitHub dev-deploy environment and go to the next step
  9. The dev-deploy GitHub environment is configured with a protection rule that requires review and approval, so the reviewer will receive an email to approve or reject
  10. If the reviewer approves, proceed with the deploy, where the following are executed:
    1. fmt is run on the Terraform code to ensure it is formatted properly
    2. validate is run on the Terraform code
    3. init is run to initialize Terraform and configure the Azure Storage Account Container that stores the .tfstate file
    4. plan is run to generate a plan with the appropriate terraform-dev.tfvars file, and the -out switch is used to create the plan.tfdata file
    5. apply is executed with -auto-approve using the plan.tfdata file
  11. Resources will now get deployed to Azure
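
To make the chaining concrete, the following is a condensed, hypothetical sketch of how the two workflows could be wired together. The triggers, terraform commands, Variables/terraform-dev.tfvars file, plan.tfdata file, and dev-deploy environment come from the flow above; the job names, action versions, and exact step layout are illustrative assumptions, so refer to the repository for the actual files:

# .github/workflows/terraform-plan.yml (condensed sketch)
name: terraform-plan
on:
  push:
    branches: [dev]

jobs:
  get-branch-name:
    runs-on: ubuntu-latest
    outputs:
      branch: ${{ steps.branch.outputs.branch }}
    steps:
      # Strip the refs/heads/ prefix to get the plain branch name
      - id: branch
        run: echo "branch=${GITHUB_REF#refs/heads/}" >> "$GITHUB_OUTPUT"

  super-linter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: github/super-linter@v4
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  terraform-plan:
    needs: [get-branch-name, super-linter]  # dependency on the previous jobs
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      # The branch name is passed from the earlier job via its output
      - run: echo "Planning for branch ${{ needs.get-branch-name.outputs.branch }}"
      - run: terraform fmt -check
      # Backend and credential wiring via secrets is shown in the GitHub secrets section below
      - run: terraform init
      - run: terraform validate
      - run: terraform plan -var-file=Variables/terraform-dev.tfvars

# .github/workflows/terraform-apply-dev.yml (condensed sketch)
name: terraform-apply-dev
on:
  workflow_run:
    workflows: [terraform-plan]  # start when terraform-plan completes
    types: [completed]

jobs:
  deploy:
    # Gate on the previous run's conclusion and on the dev branch
    if: >
      github.event.workflow_run.conclusion == 'success' &&
      github.event.workflow_run.head_branch == 'dev'
    runs-on: ubuntu-latest
    environment: dev-deploy  # the protection rule forces a manual approval here
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init
      - run: terraform plan -var-file=Variables/terraform-dev.tfvars -out=plan.tfdata
      - run: terraform apply -auto-approve plan.tfdata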
image

The following is a screenshot of the jobs in the workflows and the process during the deployment:

image

What a pending review looks like:

image

The email a reviewer would receive:

image

The review prompt in GitHub:

image

The apply output when deploying infrastructure:

image

A successful deployment (note that the duration of 20h 13m 31s is because I left the review pending for over a day):

image

Workflow: terraform-apply.yml

The terraform-apply.yml workflow is executed upon completion of a pull request into stg or prod. It is much less complex, so I will simply include the two diagrams describing the process (a minimal trigger sketch follows the diagrams):

Staging:

image

Production:

image
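
For reference, here is a minimal, hypothetical sketch of the trigger, assuming the merged-pull-request pattern and a terraform-<env>.tfvars naming convention in the Variables folder (the actual workflow is in the repo):

# .github/workflows/terraform-apply.yml (condensed sketch)
name: terraform-apply
on:
  pull_request:
    types: [closed]
    branches: [stg, prod]

jobs:
  apply:
    # Run only when the pull request was actually merged, not just closed
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    environment: ${{ github.base_ref }}  # the stg or prod GitHub environment
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init
      # Pick the .tfvars file matching the target branch (stg or prod)
      - run: terraform apply -auto-approve -var-file=Variables/terraform-${{ github.base_ref }}.tfvars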

Workflow: terraform-destroy.yml

The terraform-destroy.yml workflow is dispatched manually when we want to tear down an environment. The following are a few screenshots of manually dispatching the workflow (a minimal sketch of the trigger follows the screenshots):

image

The output during a destroy of the infrastructure:

image

Successfully destroying the infrastructure:

image
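
A manually dispatched workflow only needs the workflow_dispatch trigger. Here is a minimal sketch, where the environment choice input and the tfvars naming are my own assumptions rather than the exact file in the repo:

# .github/workflows/terraform-destroy.yml (condensed sketch)
name: terraform-destroy
on:
  workflow_dispatch:
    inputs:
      environment:
        description: Environment to destroy
        type: choice
        options: [dev, stg, prod]

jobs:
  destroy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init
      # Remove everything described by the selected environment's variable file
      - run: terraform destroy -auto-approve -var-file=Variables/terraform-${{ github.event.inputs.environment }}.tfvars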

Setting up Azure Storage Account Container and Resource Group

With the walkthrough of the Terraform and GitHub Actions code complete, I would like to provide the steps required to set up the Azure Storage Account Container that will store the Terraform .tfstate file, as none of this would work without it.

We’ll be using the Azure CLI to configure this:

# Log into Azure

az login

image

image

# Define variables for the subscription ID, Azure region, resource group, storage account, container, and service principal

subscriptionID="xxxxxxxx-71c2-40f2-b3d4-xxxxxxxxxx"
resourceGroup="ca-cn-dev-demo-rg"
azureRegion="canadacentral"
storageAccount="cacndevdemost"
containerName="terraform-state"
servicePrincipalName="terraform-demo-sp" # example name; the original post does not show this assignment, but it is referenced by the create-for-rbac command below

image

# List available subscriptions

az account list

image

# Specify the subscription to use

az account set -s $subscriptionID

# Create an App Registration and corresponding Enterprise Application / Service Principal, and assign it the Contributor role on the subscription – Ref: https://docs.microsoft.com/en-us/cli/azure/ad/sp?view=azure-cli-latest

az ad sp create-for-rbac --name $servicePrincipalName --role Contributor --scopes /subscriptions/$subscriptionID --sdk-auth

Copy the clientId, clientSecret, and tenantId values.

image

Note that the following App Registration will be configured along with a secret in Azure AD:

image

The corresponding Enterprise application (Service Principal) will be created:

image

We’ll need to grant the Service Principal permissions on the subscription that Terraform will deploy resources to. Contributor is typically sufficient, but there are some configurations, such as Resource Locks, that require Owner:

image

# Create the resource group that will contain the storage account for saving the Terraform state

az group create -g $resourceGroup -l $azureRegion

image

# Create a new storage account and place it in the resource group

az storage account create -n $storageAccount -g $resourceGroup -l $azureRegion --sku Standard_LRS

image

The following storage account will be created:

image

# Create a container in the storage account to store the terraform state

az storage container create -n $containerName --account-name $storageAccount

image

The following container will be created, and once the workflows run, the .tfstate file will be stored here:

image

Setting up GitHub Secrets

Parameters such as service principal attributes, secrets, and storage account access keys should not be stored directly in the Terraform .tfvars files; they should instead be stored in the GitHub secrets vault and retrieved at runtime.

Proceed to navigate to the previously configured Storage Account’s Access keys and copy key1, as we’ll need to configure it in GitHub secrets:

image

For the purpose of this example, the dev environment will require the following secrets, as they are referenced in the workflows and Terraform code:

  • DEV_ARM_CLIENT_ID
  • DEV_ARM_CLIENT_SECRET
  • DEV_ARM_SUBSCRIPTION_ID
  • DEV_ARM_TENANT_ID
  • DEV_PSQL_ADMINISTRATOR_LOGIN_PASSWORD
  • DEV_PSQL_ADMIN_LOGIN
  • DEV_STORAGE_ACCESS_KEY
  • DEV_STORAGE_ACCOUNT_NAME
  • DEV_STORAGE_CONTAINER_NAME

Note that you will not be able to view the values of these secrets once they are configured in GitHub.

In addition to the DEV secrets, the STG and PROD secrets will also need to be configured for the other branches.

image
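
To show how these secrets are consumed, here is a hedged sketch of a Terraform init step that wires them into the azurerm backend and provider (the azurerm provider reads the ARM_* environment variables; the state file name is an assumption and the exact steps in the repo may differ):

# Sketch: passing the GitHub secrets to Terraform in a workflow step
- name: Terraform init with remote state in the Storage Account Container
  env:
    ARM_CLIENT_ID: ${{ secrets.DEV_ARM_CLIENT_ID }}
    ARM_CLIENT_SECRET: ${{ secrets.DEV_ARM_CLIENT_SECRET }}
    ARM_SUBSCRIPTION_ID: ${{ secrets.DEV_ARM_SUBSCRIPTION_ID }}
    ARM_TENANT_ID: ${{ secrets.DEV_ARM_TENANT_ID }}
  run: |
    terraform init \
      -backend-config="storage_account_name=${{ secrets.DEV_STORAGE_ACCOUNT_NAME }}" \
      -backend-config="container_name=${{ secrets.DEV_STORAGE_CONTAINER_NAME }}" \
      -backend-config="access_key=${{ secrets.DEV_STORAGE_ACCESS_KEY }}" \
      -backend-config="key=dev.terraform.tfstate"  # assumed state file name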

Setting up GitHub Environments

The last requirement for this demo is to set up the different environments in GitHub for the branches. It’s important to note that environments are NOT available in private repositories on free accounts, so you’ll need to use a public repo. This demo has the following environments configured:

  • dev-deploy
  • prod
  • dev
  • stg

The additional dev-deploy environment is really just a way for me to execute the plan step to verify the code is free of errors and then require a review and approval before initiating the deployment of the resources. This method likely isn’t best practice, but I thought I’d use it to demonstrate how to set the environment in the workflow to force an approval or rejection.

image

With the environments set up, dev-deploy is configured with the Required reviewers protection rule:

image

… and that’s it. I hope this is beneficial for anyone trying to learn Terraform and GitHub Actions. There are plenty of blog posts available, but I’ve noticed that some are not very clear on the steps, and I’ve spent countless hours troubleshooting the code from start to finish. The process can be very frustrating at times, but it’s also very satisfying when everything starts to work.

I’ll be working on another project that incorporates an actual application in the future and will share it when it’s ready.

PowerShell scripts for exporting an Azure subscription's Azure SQL Databases to an Excel file and using the updated Excel file to export/backup the databases

I’ve recently been involved in a manual migration of multiple subscriptions from one tenant to another due to an organizational change, and one of the components I was responsible for was the migration of the Azure SQL Databases. I had originally hoped to use the DMA (Data Migration Assistant), but attempting to select an Azure SQL Database as a source would throw an error indicating it was not supported. Given that there weren’t too many databases and only 2 would require an outage, we decided to perform an export/backup of the databases to a storage account and then import/restore them in the destination subscription.

While it is possible to manually export them via the GUI:

image

A more efficient way was to use PowerShell to export all of the subscription’s databases and their properties into an Excel file, update the Excel file with the SQL credentials, and then use PowerShell to read through the spreadsheet to export/back up the databases to a storage account.

The PowerShell script I created to export all of a subscription’s Azure SQL Database properties can be found here: https://github.com/terenceluk/Azure/blob/main/PowerShell/Export-All-Subscriptions-AzureSQLDatabases-To-Excel.ps1

The following screenshot is an example of the export:

image

Assuming that each database has different credentials, add the additional columns to store the SQL Authentication username and password:

  • Username
  • Password
image

With the spreadsheet updated, we can now use this PowerShell script to export/back up all of a subscription’s Azure SQL Databases to a storage account container: https://github.com/terenceluk/Azure/blob/main/PowerShell/Backup-AzureSQLDatabases.ps1

Hope this helps anyone who might be looking for a way to automate the process of exporting a subscription’s Azure SQL Databases to Excel and then using the list to back up the databases.

Tuesday, May 3, 2022

PowerShell script for exporting Microsoft Teams user configuration to an Excel file and importing user configuration with the updated Excel file

I used to work on Microsoft Teams Direct Routing voice deployments quite often before moving to a more Azure-focused role, so one of my ex-colleagues recently reached out to ask if I had a script to bulk-configure settings for user accounts to enable them for Enterprise Voice. On one deployment I had worked on, there were several hundred users who had already been configured for Enterprise Voice but had extensions that were not DIDs, because their existing PBX still forwarded inbound calls to the SBC and then to Teams. On the day of the cutover, when we moved the SIP trunks to the SBC, we had to bulk-update their LineURI field. The approach I took was twofold:

  1. Write a script that exported all of the users’ Teams configuration to an Excel file
  2. Write a script that imported user settings from the same spreadsheet after it was updated with the full DID extensions

The following is a sample of the export:

image

This spreadsheet was then updated with the appropriate LineUri values, which were then used by an import script to update the settings. In addition to updating the LineUri attribute, the script also enables Enterprise Voice and configures the dial plan and voice routing policy.

I don’t have much use for the scripts anymore given that I no longer work in the Teams space, but I wanted to share them in case anyone is looking for something like this.

The export script can be found on my GitHub: https://github.com/terenceluk/Microsoft-365/blob/main/Teams/Export-TeamsUserConfig.ps1

The import script can be found on my GitHub: https://github.com/terenceluk/Microsoft-365/blob/main/Teams/Import-TeamsUserConfig.ps1

Hope this helps!

Sunday, April 24, 2022

Infrastructure as Code in 15 Minutes PowerPoint Presentation

It’s finally April, and this month is typically when I perform a bit of spring cleaning on my laptop to keep files and folders from becoming too disorganized. One of the files I came across while sorting my documents folder is a PowerPoint I created a while back when I interviewed for a role where I was asked to create a presentation on a topic of my choice and present it to 5 interviewers. Rather than choosing a topic I was extremely fluent in, I decided to try my luck with something I was learning at the time: Infrastructure as Code with Terraform. I did not want the presentation to be too focused on a specific vendor, so I spent most of the time talking about the benefits of IaC before presenting Terraform as a solution. The window I had to work with was 30 minutes, so I kept the presentation short to leave time at the end for questions. The feedback I received was very positive, as 3 of the 5 interviewers expressed how much they liked my presentation. Given that this presentation was created on my own personal time, I thought it would be great to share it in case anyone is looking for material to introduce an audience to IaC. This specific role holds a special place in my heart because of the interviewers I had the opportunity to meet and how supportive every one of them was. The marathon of interviews was long but extremely gratifying, and I enjoyed the experience even though I wasn’t selected in the end.

Without further ado, the PowerPoint presentation can be downloaded here: https://docs.google.com/presentation/d/1v8X1e9RimDdkpiR01Rj5et_Mnip5n0u0/edit?usp=sharing&ouid=111702981669472586918&rtpof=true&sd=true

I will also paste the content of the presentation below along with the notes I used during the presentation. Enjoy!

image

Intro

Good afternoon everyone and thank you for attending this presentation. The topic I will be presenting is Infrastructure as Code in 15 minutes.

image

Agenda

The agenda will begin with a look at how we traditionally deploy infrastructure, followed by what Infrastructure as Code (also known as IaC) is, the benefits of IaC, imperative vs declarative, IaC with Terraform, IaC in DevOps pipelines, a sample setup, and finally Q&A.

image

Traditional infrastructure deployment

Infrastructure deployment has traditionally been performed through graphical user interfaces and scripts. As user-friendly as GUIs are, the obvious challenge is that the process is manual, time-consuming, and prone to errors made by the administrators performing the configuration. Maintaining consistency is difficult, which leads to configuration drift, and keeping multiple environments that are meant to mirror one another in lockstep is challenging. Scaling the environment is cumbersome (e.g. deploying more VM instances or adding new subnets). Lastly, there isn’t an easy way to document the environment other than screenshots and spreadsheets of configuration values.

Scripting adds a bit of automation, but scripts are often difficult to maintain over time.

image

What is Infrastructure as Code?

Infrastructure as Code is essentially the managing and provisioning of infrastructure through code. Leveraging code means we can automate the management of the infrastructure, whether that is creating new resources or modifying existing ones. Infrastructure as Code can be implemented imperatively or declaratively, an important distinction we will cover shortly.

image

Benefits of IaC

To further elaborate on the benefits of IaC: it is now possible to automate deployment not only in one cloud but across multiple clouds such as GCP, Azure, and AWS. The speed and efficiency of deployment are greatly increased because the process eliminates the administrator’s manual points and clicks; the process is also repeatable and consistent, allowing multiple environments to be deployed in lockstep. The code can easily be source controlled with versioning, which enables team collaboration through Continuous Integration. CI/CD pipelines can be used to develop and deploy the infrastructure, leveraging all the benefits of DevOps. Infrastructure management can be simplified and standardized through policies, and environments can be scaled with ease: think of tweaking a variable to scale from 1 to 100 rather than going into a GUI and deploying or cloning resources multiple times. Static application security testing, the process of reviewing source code to detect vulnerabilities, can now be performed rather than combing through deployment documentation or the GUI after the infrastructure is deployed. Manual labour is significantly reduced.

image

Imperative vs Declarative

One of the important aspects of IaC is the concept of imperative vs declarative. In simple terms, let’s say the end state or goal we want to achieve is to get to a pizza restaurant. Imperative can be described as “what to do” while declarative is “what is wanted.” Say we hop into a taxi to get to this end state. An example of imperative instructions would be to tell the taxi driver to go:

  • Forward 1 mile
  • Turn right
  • Forward 2 miles
  • Turn left
  • Forward 3 miles
  • Arrive at pizza restaurant

While declarative is:

  • Go to the pizza restaurant.

image

Let’s dissect the differences and outline them.

With imperative, the starting point matters because we are explicitly calling out each action needed to reach the end state. This makes it difficult to audit the steps and to detect drift when changes are made. Version control is challenging, if even possible: if the steps execute halfway and stop due to an error, you cannot repeat them without ending up in a completely different state. The logic can get very complex because ordering matters, and changing the destination state requires modifying the steps.

Declarative, on the other hand, allows the starting point to be anywhere because the engine carrying you to the end state handles the directions. Declarative is idempotent in nature, so you can run it as many times as you want without affecting the state. The code can also be run repeatedly in a pipeline to create multiple environments in lockstep. Having removed the detailed imperative steps, we can easily validate the configuration, detect any drift, and introduce version control. Lastly, we can change the destination without worrying about changing the steps.

image

IaC with Terraform

One of the popular IaC solutions currently on the market is Terraform. It is written in Go, uses a declarative language known as HashiCorp Configuration Language (HCL), and has multi-cloud support. It handles deployments to multiple clouds through the use of providers, of which there are approximately 1,521 currently available on the Terraform site. Terraform code is plain text and can be source controlled with Git or Terraform Cloud. Security can be introduced through RBAC, so multiple workspaces for different teams managing different environments, or components of them, can be restricted to changes in their own environments. Lastly, policies with automation can be introduced to provide control and governance.

image

IaC with DevOps Pipelines

What IaC enables, and what I feel is its most powerful aspect, is the use of pipelines. With IaC we can leverage the DevOps methodology with CI/CD pipelines to deploy infrastructure. Pipelines can be created that deploy only infrastructure, or they can incorporate the infrastructure deployment as part of an application release, in which case the IaC is only a small component of the pipeline. The flow diagram shown here is a simplified depiction of the process, as we can integrate many different stages into the pipelines, such as security scans and testing. This unlocks true automation and different release strategies.

image

Sample Setup

To demonstrate how we can fully automate the deployment of cloud resources, I have prepared a simple sample configuration that I will walk through in the following slides.

image

Prerequisites

We will assume that Jenkins along with the Terraform plugin is deployed, a GitHub repo with the Terraform deployment code is created, and a service principal (in this case for Azure) is set up for Jenkins so it can deploy resources. As shown in the screenshots, we have Jenkins, the Terraform plugin installed, the GitHub repo from which the Terraform code is pulled, and finally the service principal created in Azure.

image

Create Jenkins Pipeline

First, we’ll write a Jenkins pipeline with 4 stages for the infrastructure deployment:

  • Checkout, which checks out the code from the GitHub repo
  • Terraform init, which initializes Terraform and downloads the required providers
  • Terraform plan, which performs a dry run and outputs the proposed changes to the console
  • Terraform apply or destroy, which either deploys or removes the infrastructure

image

Parameterize the Jenkins pipeline

This simple setup requires administrator intervention to choose either apply or destroy, so we’ll configure a choice parameter for the pipeline. Note that we could also use triggers to automatically initiate the build on commits.

image

With the execution parameters set up, we will proceed to paste the code into the pipeline.

image

Build Pipeline

Then, with the pipeline configured, we’ll initiate the pipeline build interactively by choosing apply, and we can view the progress as shown in the screenshot above. Once the build is complete, we should see the resources in Azure.

This short demonstration only scratches the surface of the possibilities of IaC with pipelines. Another example would be a pipeline that deploys an application and includes the infrastructure build as a step to create the target infrastructure.

image

Ending

This concludes my IaC in 15 Minutes presentation. Thank you for attending, and feel free to ask any questions or share any comments.