
Azure DevOps – Commit URL on Azure Resource Tag (Part 1)


When you deploy an Azure resource like a VM or a storage account via Azure DevOps, you don't know from which commit in your repository it was deployed. The only thing you can assume is that it was probably the latest run of your pipeline which deployed that resource. There is no obvious connection between the ARM template in the repository and the actually deployed resource in Azure. Wouldn't it be nice if you had a link right on the resource that takes you to the specific commit in the Azure DevOps repository from which this resource was deployed or updated?

Well, this blog post tries to provide a solution to that specific problem.

Let's assume we had the information (commit URL) and want to add it to a resource. The only technique I can think of is adding a tag to that resource. Sounds good, BUT there are some limitations on tags. There is extensive documentation on tagging, found here.

    • Not all resource types support tags. To determine if you can apply a tag to a resource type, see Tag support for Azure resources.
    • Each resource or resource group can have a maximum of 50 tag name/value pairs. Currently, storage accounts only support 15 tags, but that limit will be raised to 50 in a future release. If you need to apply more tags than the maximum allowed number, use a JSON string for the tag value. The JSON string can contain many values that are applied to a single tag name. A resource group can contain many resources that each have 50 tag name/value pairs.
    • The tag name is limited to 512 characters, and the tag value is limited to 256 characters. For storage accounts, the tag name is limited to 128 characters, and the tag value is limited to 256 characters.
    • Generalized VMs don’t support tags.
    • Tags applied to the resource group are not inherited by the resources in that resource group.
    • Tags can’t be applied to classic resources such as Cloud Services.
    • Tag names can’t contain these characters: <, >, %, &, \, ?, /

As you can see, there are strict limitations on what we can put there. The idea I came up with is: why don't we just create a shortened URL and pass it to the ARM template using an Azure DevOps pipeline? Going down this road, we are in full control of all steps; deploying IaC is a common thing in today's world, and it is fully automated. What more do we want?

OK, let's start. First we need to do our homework and create a few things. Here are the high-level steps…

  1. Have a repository with the ARM template. In my case I am just deploying a storage account. If you are not familiar with ARM templates there is a good starting point with Azure Quickstart Templates.
  2. Next we create a service principal to connect the pipeline with Azure Key Vault and creating a service connection within Azure DevOps to deploy the ARM template.
  3. Next we need to create an Azure Key Vault to store all the secrets and strings we need.
  4. Then we use a public service called Rebrandly as a URL shortening service with our own custom domain.
  5. I have written a small PowerShell module to connect to the service. We need this module within the pipeline.
  6. Create an Azure Artifacts feed in Azure DevOps to upload the PowerShell module from step 5.
  7. Build the pipeline in YAML format.
  8. Execute the pipeline and see what happens.



In Azure DevOps I created a repository with the storage account ARM template. I added a parameter “commitURL” and pass that value into the tags section of the template…
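Since the screenshots are not reproduced here, a minimal sketch of what such a template could look like follows. The parameter name “commitURL” comes from the post; the storage account naming, API version and SKU are assumptions — the real template is in the linked repository.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "commitURL": {
      "type": "string",
      "metadata": { "description": "Short URL of the commit this resource was deployed from" }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[uniqueString(resourceGroup().id)]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "tags": {
        "commitURL": "[parameters('commitURL')]"
      }
    }
  ]
}
```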


That's all there is to it; at the end of this post I will provide the repository link where you can find all this stuff.

Service Principal

Next we need the service principal to connect from Azure DevOps to Azure, to access the Key Vault and to deploy the ARM template. In Cloud Shell, execute this command to create the service principal.

az ad sp create-for-rbac --name url-shortener-sp
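The screenshot of the output is not reproduced here; roughly, `az ad sp create-for-rbac` returns JSON of this shape (values redacted; the exact fields depend on the CLI version):

```json
{
  "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "url-shortener-sp",
  "password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
```

Keep the `appId` and `password` at hand — they are needed for the access policy and the service connection later on.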

The output will look like this…


That's it; we will set the specific permissions in a later step.

Key Vault

Create the Azure Key Vault using this command…

New-AzureRmKeyVault -Name url-shortener-kv -ResourceGroupName url-shortener -Location 'West Europe'

…and the output is like this…


…then we make sure the “Contributor” role is assigned on this Key Vault…


…next we need to give the service principal “Get” and “List” access to the Key Vault through the access policy…
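The screenshot shows this being done in the portal; the same access policy can be set from PowerShell. A sketch, assuming the AzureRM module of the time and the `appId` from the service principal output (a placeholder here):

```powershell
# Grant the pipeline's service principal read access to secrets.
# '<appId>' is a placeholder for the appId returned by 'az ad sp create-for-rbac'.
Set-AzureRmKeyVaultAccessPolicy -VaultName 'url-shortener-kv' `
    -ServicePrincipalName '<appId>' `
    -PermissionsToSecrets Get,List
```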


This is also all there is in this step.

Rebrandly URL Service

Navigate to Rebrandly, create a login and set up your custom domain. I am not going to document that here, because it is pretty self-explanatory and there is documentation available. It is important that you create a workspace and connect it to your DNS zone…


…in my case I added an A-record to get the subdomain…


The developer hub shows you how to access the API. Create a new API key and keep it in a safe place…


In order to access the API you need the domain ID, which is kind of tricky to get. First, get your workspace ID using the API explorer…


…then query your domain…


Finally we have all information to access the API…
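If you prefer the command line over the API explorer, the IDs can also be fetched with a short PowerShell snippet. This is a sketch against the Rebrandly REST API as documented at the time (the `/v1/domains` endpoint and the `apikey` header are taken from their developer docs; verify against the current documentation):

```powershell
$Key = '[API key here]'
$headers = @{ 'apikey' = $Key }

# List the domains registered in your account and show their IDs
Invoke-RestMethod -Uri 'https://api.rebrandly.com/v1/domains' -Headers $headers |
    Select-Object id, fullName
```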

If you want to test it using PowerShell, you can try it with these few lines of code…

$WorkspaceId='[Workspace Id here]'
$Key='[API key here]'
$DomainId='[Domain Id here]'
$Url='[Long URL to shorten here]'

# Rebrandly "create link" endpoint (see the developer documentation)
$uri = 'https://api.rebrandly.com/v1/links'
$requestHeaders = @{'Content-Type'='application/json';'apikey'=$Key;'workspace_id'=$WorkspaceId}
$body = @{'destination'=$Url;'domain'=@{'id'=$DomainId}} | ConvertTo-Json
$response = Invoke-WebRequest -Uri $uri -Headers $requestHeaders -Method Post -Body $body
($response.Content | ConvertFrom-Json).shortUrl

…or just use the code examples in the developer documentation

Package PowerShell Module

Luckily, I have already written a script module which we will use to interact with the Rebrandly API. We will use this module later on in the pipeline. It contains the nuspec file (for packaging with NuGet), the module file and the module manifest.


In order to make the module available in Azure DevOps we need to package it using NuGet. Basically we will use Azure Artifacts in Azure DevOps Services as a private repository for NuGet packages. There is awesome documentation here on how to do it, so I'll pack the module as described in the article…
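For orientation, a minimal nuspec file for such a module could look like this. The package id “ShortenURL” matches the module name used later in the pipeline; version, author and description are placeholders — the actual file is in the linked repository.

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>ShortenURL</id>
    <version>1.0.0</version>
    <authors>[your name]</authors>
    <description>PowerShell module to create short URLs via the Rebrandly API</description>
  </metadata>
</package>
```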


…go to Azure DevOps and create a new feed…


…then connect to the feed.

It is easier to create the feed in Azure DevOps first, because it will provide the correctly formatted commands…


…make sure you have downloaded nuget.exe to your working directory, in my case c:\temp, and follow the documentation above to “Add this feed” and “Push a package”…
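The pack-and-push sequence from the referenced documentation looks roughly like this (a sketch: the feed URL, username and folder names are placeholders, and Azure Artifacts accepts any string — conventionally `az` — as the NuGet API key, since authentication happens via the source credentials):

```powershell
cd c:\temp

# Package the module folder that contains the .nuspec file
.\nuget.exe pack .\ShortenURL\ShortenURL.nuspec

# Register the feed as a source (copy the exact command from "Connect to feed")
.\nuget.exe sources Add -Name 'AzureDevopsModules' -Source '<feed URL>' -Username '<user>' -Password '<PAT>'

# Push the package to the feed
.\nuget.exe push -Source 'AzureDevopsModules' -ApiKey az .\ShortenURL.1.0.0.nupkg
```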


…if everything works, you should have the package uploaded and see it in your feed…


Next we create a service connection in Azure DevOps.

Service Connection

Go to your project, mine is called “URL-Shortener”, navigate to “Project Settings” and create a new service connection using the service principal information you created previously…


Now we have set up all prerequisites, and as a last step we will add all the secrets from above into the Key Vault. But what do we have so far? OK, let's put all these things together.

Here I am listing all things we need:

  • Name of your DevOps organization, in my case tfsreturnone
    • ORGANIZATION: ‘tfsreturnone’
  • Name of the service connection in Azure DevOps
    • SUBSCRIPTION: ‘ServiceConnection (URL-Shortener)’
  • The value of the personal access token (PAT) which we created to access the Azure Artifacts
    • DEVOPS-PAT: ‘yctwgdxxxxxxxxxxxxxxxxxxxxxxxxxx’
  • User with which you can access the Azure DevOps project
  • Name for the module repository on your build server, just pick one
    • DEVOPS-MODULE-REPOSITORY: ‘AzureDevopsModules’
  • Name of the NuGet Artifacts stream
  • Rebrandly API Key
    • REBRANDLY-API-KEY: ‘b572xxxxxxxxxxxxxxxxxxxxx’
  • Rebrandly domain ID
    • REBRANDLY-DOMAIN-ID: ‘f87f5bxxxxxxxxxxxxxxxx’
  • Rebrandly workspace name
    • REBRANDLY-WORKSPACE: ‘stefanrothnet’

We add all this information into the Key Vault except the SUBSCRIPTION variable, because that variable holds the service connection name in Azure DevOps, and we need it in the pipeline before we can access the Key Vault at all…
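The secrets can be added in the portal or scripted; a sketch with the AzureRM-era cmdlet (the secret names match the list above, the values are the placeholders from that list):

```powershell
# Store each value as a Key Vault secret, named exactly as referenced in the pipeline
$secrets = @{
  'ORGANIZATION'             = 'tfsreturnone'
  'DEVOPS-PAT'               = '[PAT here]'
  'DEVOPS-MODULE-REPOSITORY' = 'AzureDevopsModules'
  'REBRANDLY-API-KEY'        = '[API key here]'
  'REBRANDLY-DOMAIN-ID'      = '[Domain id here]'
  'REBRANDLY-WORKSPACE'      = 'stefanrothnet'
}

foreach ($name in $secrets.Keys) {
    $value = ConvertTo-SecureString $secrets[$name] -AsPlainText -Force
    Set-AzureKeyVaultSecret -VaultName 'url-shortener-kv' -Name $name -SecretValue $value
}
```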



Finally we can build the pipeline. I used YAML to define the pipeline, which has just a Build stage, two jobs and three tasks. The “Create_URL” job will first download all Key Vault secrets and then execute a PowerShell script which downloads and installs the ShortenURL module from Azure DevOps and finally creates a short URL from the commit URL, using this command…

New-RebrandlyURL -Url "$(System.TeamProject)/_git/$(Build.Repository.ID)/commit/$(Build.SourceVersion)/" -Key $(REBRANDLY-API-KEY) -DomainId $(REBRANDLY-DOMAIN-ID) -WorkspaceId $(REBRANDLY-WORKSPACE) -Verbose

In the next job “Deploy_ARM_Template”, the short URL will be passed to the ARM template, which will add the information as tag on the Azure Storage account in Azure…


…the pipeline YAML file looks like this. Notice that the first two lines define the variable (SUBSCRIPTION) for the service connection…
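The screenshot of the YAML file is not reproduced here; a sketch of the pipeline shape described above follows. The task names (`AzureKeyVault@1`, `AzureResourceGroupDeployment@2`) are the standard built-in tasks, but file names and the wiring of the short URL between the jobs are assumptions — the real pipeline is in the linked repository.

```yaml
# First two lines: the variable holding the service connection name
variables:
  SUBSCRIPTION: 'ServiceConnection (URL-Shortener)'

stages:
- stage: Build
  jobs:
  - job: Create_URL
    steps:
    - task: AzureKeyVault@1          # pulls all secrets into pipeline variables
      inputs:
        azureSubscription: $(SUBSCRIPTION)
        KeyVaultName: 'url-shortener-kv'
        SecretsFilter: '*'
    - powershell: |
        # Install the ShortenURL module from the Azure Artifacts feed,
        # then shorten the commit URL (output-variable handoff omitted here)
        New-RebrandlyURL -Url "$(System.TeamProject)/_git/$(Build.Repository.ID)/commit/$(Build.SourceVersion)/" `
          -Key $(REBRANDLY-API-KEY) -DomainId $(REBRANDLY-DOMAIN-ID) -WorkspaceId $(REBRANDLY-WORKSPACE)
  - job: Deploy_ARM_Template
    dependsOn: Create_URL
    steps:
    - task: AzureResourceGroupDeployment@2
      inputs:
        azureSubscription: $(SUBSCRIPTION)
        resourceGroupName: 'url-shortener'
        csmFile: 'azuredeploy.json'
        overrideParameters: '-commitURL $(shortUrl)'   # short URL from the first job
```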


After successful execution of the pipeline, the storage account is deployed (and updated on subsequent runs) and it looks like this…


…and if I open this shortened URL in my browser, it redirects to the commit page from which this build started…


Phuhh, this is kind of a long and complicated post, but you finally made it through, and I hope you like it as much as I do. In my upcoming post I'll show you a cool solution which uses your private URL shortening service. Stay tuned!

Find here all necessary files on GitHub.
