.net, Azure

Load your work items into Azure DevOps Boards with .NET

This post is about how to write a .NET application to move work items from another source (e.g. JIRA, Excel etc.) into Azure Boards in Azure DevOps, and about a NuGet package I’ve built to hopefully make this a bit easier for anyone else doing the same.

So here’s a problem…

Let’s say you’ve convinced your boss to move your projects to Azure DevOps – great! You’re happy, and your team are happy, but before you can really start, there’s still some work to be done – migrating all the historical project data from your existing company systems…

Maybe your company has its own custom story/issue/bug tracking system (maybe it’s JIRA, maybe it’s Mantis, or something else), and you don’t want to lose or archive all that valuable content. You want to load all that content into your project’s Azure Board as well – how do you do that?

Use .NET with Azure Boards to solve this problem

I had exactly this problem recently – my project’s history was exported into one big CSV file, and I needed to get it into Azure Boards. I had loads of fields which I needed to keep, and I didn’t want to lose any of this…

…so I ‘.NET’ted my way out of trouble.

A bit of searching on the internet also led me to the option of bulk loading using Excel and the TFS Standalone Office Integration pack, but I’m a programmer and I prefer the flexibility of using code. Though, y’know, YMMV.

First I created a .NET Framework console application, and added a couple of NuGet packages for Azure DevOps:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

These are both projects that target .NET Framework, so I can’t use .NET Core for this yet.

With these included in my application, I now have access to objects which allow me to connect to Azure DevOps through .NET, and also connect to a work item client that allows me to perform create/read/update/delete operations on work items in my project’s board.

It’s pretty easy to load up my project history CSV into a list in a .NET application, so I knew I had all the puzzle pieces to solve this problem, I just needed to put them together.
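
For example, here’s a minimal sketch of that CSV-loading step – note that the ExportedWorkItem class and the column order here are hypothetical, and a real export with quoted or multi-line fields would justify a proper CSV parsing library rather than string.Split:

using System.Collections.Generic;
using System.IO;
using System.Linq;

public class ExportedWorkItem
{
    public string Title { get; set; }
    public string Description { get; set; }
}

public static class CsvLoader
{
    public static List<ExportedWorkItem> Load(string path)
    {
        // Skip(1) drops the header row; the field indexes below are assumptions
        // which you'd adjust to match your own export
        return File.ReadLines(path)
            .Skip(1)
            .Select(line => line.Split(','))
            .Select(fields => new ExportedWorkItem
            {
                Title = fields[0],
                Description = fields[1]
            })
            .ToList();
    }
}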

In order to connect to Azure DevOps and add items using .NET, I used:

  • The name of the project I want to add work items to – my project codename is “Corvette”
  • The URL of my Azure DevOps instance – https://dev.azure.com/jeremylindsay
  • My personal access token.

If you’ve not generated a personal access token in Azure DevOps before, check this link out for details on how to do it – it’s really straightforward from the Azure DevOps portal:

https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=vsts

I can now use the code below to connect to Azure DevOps and create a work item client.

// these types come from the Microsoft.TeamFoundation.WorkItemTracking.WebApi,
// Microsoft.VisualStudio.Services.Common and Microsoft.VisualStudio.Services.WebApi namespaces
var uri = new Uri("https://dev.azure.com/jeremylindsay");
var personalAccessToken = "[***my access token***]";
var projectName = "Corvette";
 
// authenticate with the personal access token...
var credentials = new VssBasicCredential("", personalAccessToken);
 
// ...then connect to Azure DevOps and get a client for work item tracking
var connection = new VssConnection(uri, credentials);
var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();

Next, I need to create what is basically a list of name and value pairs which describes the name of the work item field (e.g. title, description etc), and the value that I want to put in that field.

This link below describes the fields you can access through code:

https://docs.microsoft.com/en-us/azure/devops/reference/xml/reportable-fields-reference?view=vsts

It’s a little bit more complex than normal dictionaries or other key-value pair objects in .NET, but not that difficult. The work item client uses custom objects called JsonPatchDocuments and JsonPatchOperations. Also, the names of the fields are not intuitive out of the box, but given all that, I can still create a work item in .NET using the code below:

var bug = new JsonPatchDocument
{
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/System.Title",
        Value = "Spelling mistake on the home page"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.TCM.ReproSteps",
        Value = "Log in, look at the home page - there is a spelling mistake."
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Priority",
        Value = "1"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Severity",
        Value = "2 - High"
    }
};

Then I can add the bug to my Board with the code below:

var createdWorkItem = workItemTrackingHttpClient.CreateWorkItemAsync(bug, projectName, "Bug").Result;
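
The client isn’t limited to creating work items either – as a quick sketch, I can read the new item back by its Id, or update a field with another JsonPatchDocument (using the createdWorkItem variable from the line above):

// read the new work item back by its Id...
var retrievedWorkItem = workItemTrackingHttpClient.GetWorkItemAsync(createdWorkItem.Id.Value).Result;
 
// ...or update one of its fields with another patch document
var update = new JsonPatchDocument
{
    new JsonPatchOperation()
    {
        Operation = Operation.Replace,
        Path = "/fields/Microsoft.VSTS.Common.Priority",
        Value = "2"
    }
};
var updatedWorkItem = workItemTrackingHttpClient.UpdateWorkItemAsync(update, createdWorkItem.Id.Value).Result;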

Now this works and is very flexible, but I think my code could be made more readable and easier to use. So I refactored the code, moved most of it into a library, and uploaded it to NuGet here. My refactoring is pretty simple – I’m not going to go into lots of detail on how I did it, but if you’re interested, the code is up on GitHub here.

If you’d like to get this package, you can use the command below:

Install-Package AzureDevOpsBoardsCustomWorkItemObjects -pre

This package depends on the two NuGet packages I referred to earlier in this post, so they’ll be added automatically if you install my NuGet package.

This allows us to instantiate a bug object in a way that looks much more like creating a normal POCO, as shown below:

var bug = new AzureDevOpsBug
{
    Title = "Spelling mistake on the home page",
    ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
    Priority = AzureDevOpsWorkItemPriority.Medium,
    Severity = AzureDevOpsWorkItemSeverity.Low,
    AssignedTo = "Jeremy Lindsay",
    Comment = "First comment from me",
    Activity = "Development",
    AcceptanceCriteria = "This is the acceptance criteria",
    SystemInformation = "This is the system information",
    Effort = 13,
    Tag = "Cosmetic; UI Only"
};

And to push this bug to my Azure Board, I can use the code below, which is a little simpler than what I wrote previously.

using AzureDevOpsCustomObjects;
using AzureDevOpsCustomObjects.Enumerations;
using AzureDevOpsCustomObjects.WorkItems;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
            const string projectName = "Corvette";
 
            var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
            var bug = new AzureDevOpsBug
            {
                Title = "Spelling mistake on the home page",
                ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
                Priority = AzureDevOpsWorkItemPriority.Medium,
                Severity = AzureDevOpsWorkItemSeverity.Low,
                AssignedTo = "Jeremy Lindsay",
                Comment = "First comment from me",
                Activity = "Development",
                AcceptanceCriteria = "This is the acceptance criteria",
                SystemInformation = "This is the system information",
                Effort = 13,
                Tag = "Cosmetic; UI Only"
            };
 
            var createdBug = workItemCreator.Create(bug);
        }
    }
}

I’ve chosen to instantiate the bug with hard-coded text in the example above for clarity – but obviously you can instantiate the POCO any way you like, for example from a database, or perhaps parsing data out of a CSV file.
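
For example, here’s a sketch which combines the hypothetical CSV-loading code from earlier in this post with the package – the property mapping is an assumption which would depend on your own export’s columns:

// each ExportedWorkItem row from the earlier (hypothetical) CSV sketch becomes a bug on the Board
var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
foreach (var row in CsvLoader.Load("project-history.csv"))
{
    workItemCreator.Create(new AzureDevOpsBug
    {
        Title = row.Title,
        ReproSteps = row.Description,
        Priority = AzureDevOpsWorkItemPriority.Medium
    });
}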

Anyway, the image below shows the bug added to my Azure Board.

bug

Of course, Bugs are not the only type of work item – let’s say I want to add Product Backlog Items too. And there are many, many different fields used in Azure Boards, and I haven’t coded for all of them in my NuGet package. So:

  • I’ve also added a Product Backlog object into my NuGet package,
  • I’ve made the creation method generic, so it can detect the object type and work out what type of work item is being added to the Board, and
  • I’ve made the work item objects extensible, so users can add any fields which I haven’t coded for yet.

For example, the code below shows how to add a product backlog item and include a comment in the System.History field:

private static void Main(string[] args)
{
    const string uri = "https://dev.azure.com/jeremylindsay";
    const string personalAccessToken = "[[***my personal access token***]]";
    const string projectName = "Corvette";
 
    var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
    var productBacklogItem = new AzureDevOpsProductBacklogItem
    {
        Title = "Add reports for how many users log in each day",
        Description = "Need a new report with log in statistics.",
        Priority = AzureDevOpsWorkItemPriority.Low,
        Severity = AzureDevOpsWorkItemSeverity.Low,
        AssignedTo = "Jeremy Lindsay",
        Activity = "Development",
        AcceptanceCriteria = "This is the acceptance criteria",
        SystemInformation = "This is the system information",
        Effort = 13,
        Tag = "Reporting; Users"
    };
 
    // fields without a dedicated property can still be set with a JsonPatchOperation
    productBacklogItem.Add(
        new JsonPatchOperation
        {
            Operation = Operation.Add,
            Path = "/fields/System.History",
            Value = "Comment from product owner."
        }
    );
 
    var createdBacklogItem = workItemCreator.Create(productBacklogItem);
}

Obviously I could change the code to allow comments to be added through a property on the AzureDevOpsProductBacklogItem POCO, but this is just an example to demonstrate how it can be done by adding a JsonPatchOperation.

The image below shows the product backlog item successfully added to my Azure Board.

bug

Wrapping up

The Boards component of Azure DevOps is a useful and effective way to track your team’s work items. And if you want to populate a new Board with a list of existing bugs or backlog items, you can do this with .NET. A lot of these functions aren’t new – they were available in VSTS – but it’s still nice to see these powerful functions and libraries continue to be supported. Hopefully the NuGet package I’ve created to assist in the process will be useful to some of you who are working through the same migration challenges that I am. Obviously this NuGet package can still be improved a lot – it just covers Backlog Items and Bugs right now, and it’d be better if it flagged which fields are read-only – but it’s good enough to meet minimum viable standards for me right now, and maybe it’ll be helpful for you too.


About me: I regularly post about Microsoft technologies like Azure and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

Azure, GIS, Leaflet, Security, Web Development

Getting started with Azure Maps, using Leaflet to display roads and satellite images, and comparing different browser behaviours

In this post I’m going to describe how to use the new(ish) Azure Maps service from Microsoft with the Leaflet JavaScript library. Azure Maps provides its own API for Geoservices, but I have an existing application that uses Leaflet, and I wanted to try out using the Azure Maps tiling services.

Rather than just replicating the example that already exists on the excellent Azure Maps Code Samples site, I’ll go a bit further:

  • I’ll show how to display both the tiles with roads and those with aerial images
  • I’ll show how to switch between the layers using a UI component on the map
  • I’ll show how Leaflet can identify your present location
  • And I’ll talk about my experiences of location accuracy in Chrome, Firefox and Edge browsers on my desktop machine.

As usual, I’ve made my code open source and posted it to GitHub here.

First, use your Azure account to get your map API Key

I won’t go into lots of detail about this part – Microsoft have documented the process very well here. In summary:

  • If you don’t have an Azure account, there are instructions here on how to create one.
  • Create a Maps account within the Azure Portal and get your API Key (instructions here).

Once you have set up a resource group in Azure to manage your mapping services, you’ll be able to track usage and errors through the portal – I’ve pasted graphs of my usage and recorded errors below.

graphs

You’ll use this API Key to identify yourself to the Azure Maps tiling service. Azure Maps is not a free service – pricing information is here – although presently on the S0 tier there is an included free quantity of tiles and services.

API Key security is one area of Azure Maps that I would like to see enhanced – the API Key has to be rendered on the client somewhere in plain text and then passed back to the maps API. Even with HTTPS, the API Key could be easily intercepted by someone viewing the page source, or using a tool to read outgoing requests.

Many other tiling services use CORS to restrict which domains can make requests, but:

  • Azure Maps doesn’t do this at the time of writing, and
  • This isn’t real security anyway, because the Origin header can be easily modified (I know it’s a forbidden header name for browsers, but tools like curl can spoof the Origin). More discussion here and here.

So this isn’t a solved problem yet – I’d recommend you consider how you use your API Key very carefully and bear in mind that if you expose it on the internet you’re leaving your account open to abuse. There’s an open issue about this raised on GitHub and hopefully there will be an announcement soon.
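
In the meantime, one way to keep the key off the client altogether is to proxy the tile requests through your own server and append the subscription key server-side. The sketch below is a minimal ASP.NET Core controller illustrating the idea – the route and the "AzureMaps:SubscriptionKey" configuration entry are my own assumptions, not anything Azure Maps prescribes:

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
 
[Route("api/tiles")]
public class TileProxyController : Controller
{
    // reuse a single HttpClient rather than creating one per request
    private static readonly HttpClient Client = new HttpClient();
    private readonly string _subscriptionKey;
 
    public TileProxyController(IConfiguration configuration)
    {
        // the key lives in server-side configuration and never appears in the page source
        _subscriptionKey = configuration["AzureMaps:SubscriptionKey"];
    }
 
    [HttpGet("road/{zoom}/{x}/{y}")]
    public async Task<IActionResult> GetRoadTile(int zoom, int x, int y)
    {
        // append the subscription key on the server and stream the tile back to the browser
        var url = "https://atlas.microsoft.com/map/tile/png?api-version=1&layer=basic&style=main" +
                  $"&tileSize=512&zoom={zoom}&x={x}&y={y}&subscription-key={_subscriptionKey}";
        var tileBytes = await Client.GetByteArrayAsync(url);
        return File(tileBytes, "image/png");
    }
}
 
Leaflet would then be pointed at a URL like /api/tiles/road/{z}/{x}/{y} instead of atlas.microsoft.com, so the page source never contains the key – though you’d still want to protect the proxy endpoint itself from abuse.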

Next, set up your web page to use the Leaflet JS library

There’s a very helpful ‘getting started’ tutorial on the Leaflet website – I added the stylesheet and JavaScript to my webpage’s head using the code below.

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.1/dist/leaflet.css"
      integrity="sha512-Rksm5RenBEKSKFjgI3a41vrjkw4EVPlJ3+OiI65vTjIdo9brlAacEuKOiQ5OFh7cOI1bkDwLqdLw3Zg0cRJAAQ=="
      crossorigin="" />
 
<script src="https://unpkg.com/leaflet@1.3.1/dist/leaflet.js"
        integrity="sha512-/Nsx9X4HebavoBvEBuyp3I7od5tA0UzAxs+j83KgC8PU0kgB4XiK4Lfe4y4cgBtaRJQEIFCW+oC506aPT2L1zw=="
        crossorigin=""></script>

Now add the URLs to your JavaScript to access the tiling services

I’ve included some very simple JavaScript code below for accessing two Azure Maps services – the tiles which display roadmaps and also those which have satellite images.

function satelliteImageryUrl() {
    return "https://atlas.microsoft.com/map/imagery/png?api-version=1&style=satellite&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}
 
function roadMapTilesUrl() {
    return "https://atlas.microsoft.com/map/tile/png?api-version=1&layer=basic&style=main&TileFormat=pbf&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}

If you’re interested in reading more about these two tiling services, there’s more about the road map service here and more about the satellite image service here.

Now add the tiling layers to Leaflet and create the map

I’ve written a JavaScript function below which registers the two tiling layers (satellite and roads) with Leaflet. It also instantiates the map object, and attempts to identify the user’s location from the browser. Finally it registers a control which will appear on the map and list the available tiling services, allowing me to toggle between them on the fly.

var map;
 
function GetMap() {
    var subscriptionKey = '[[[**YOUR API KEY HERE**]]]';
 
    var satellite = L.tileLayer(satelliteImageryUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureSatelliteMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });
 
    var roads = L.tileLayer(roadMapTilesUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureRoadMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });
 
    // instantiate the map object and display the 'roads' layer
    map = L.map('myMap', { layers: [roads] });
 
    // attempt to identify the user's location from the browser
    map.locate({ setView: true, enableHighAccuracy: true });
    map.on('locationfound', onLocationFound);
 
    // create an array of the tiling base layers and their 'friendly' names
    var baseMaps = {
        "Azure Satellite Imagery": satellite,
        "Azure Roads": roads
    };
 
    // add a control to map (top-right by default) allowing the user to toggle the layer
    L.control.layers(baseMaps, null, { collapsed: false }).addTo(map);
}

Finally, I’ve added a div to my page which specifies the size of the map and gives it the Id “myMap” (which I’ve used in the JavaScript above when instantiating the map object), and I call the GetMap() method when the page loads.

<body onload="GetMap()">
    <div id="myMap" style="position:relative;width:900px;height:600px;"></div>
</body>

If the browser’s geolocation services have identified my location, I’ll also be given an accuracy in meters – the JavaScript below allows me to draw a circle on my map to indicate where the browser believes my location to be.

map.on('locationfound', onLocationFound);
 
function onLocationFound(e) {
    var radius = e.accuracy / 2;
 
    L.marker(e.latlng)
        .addTo(map)
        .bindPopup("You are within " + radius + " meters from this point")
        .openPopup();
 
    L.circle(e.latlng, radius).addTo(map);
}

And I’ve taken some screenshots of the results below – first of all the results in the MS Edge browser showing roads and landmarks near my location…

roads

…and swapping to the satellite imagery using the control at the top right of the map.

satellite

Results in Firefox and Chrome

When I ran this in Firefox and Chrome, I found that my location was identified with much less accuracy. I know both of these browsers use the Google geolocation API, while MS Edge uses the Windows Location API, so this might account for the difference on my Windows 10 machine, but I’d need to do more experimentation to understand it better. Obviously my laptop doesn’t have GPS hardware, so testing on a mobile phone would probably give very different results.

roads2

Wrapping up

We’ve seen how to use the Azure Maps tiling services with the Leaflet JS library to create a very basic web application which displays both road and landmark data, and also satellite aerial imagery. It seems that MS Edge is able to identify my location much more accurately than Firefox or Chrome on my Windows 10 desktop machine (within a 75m radius in Edge, versus a radius of over 3.114km in Firefox and Chrome) – however, your mileage may vary.

Finally, as I emphasised above, I’ve concerns about the security of a production application using an API Key in plain text inside my JavaScript, and hopefully Microsoft will deploy a solution with improved security soon.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net core, Azure, C# tip, Clean Code, Cloud architecture, Security

Simplifying Azure Key Vault and .NET Core Web App (includes NuGet package)

In my previous post I wrote about securing my application secrets using Azure Key Vault, and in this post I’m going to write about how to simplify the code that a .NET Core web app needs to use the Key Vault.

I previously went into a bit of detail about how to create a Key Vault and add a secret to that vault, and then add a Managed Service Identity to a web app. At the end of the post, I showed some C# code about how to access a secret inside a controller action.

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        var secret = await keyVaultClient.GetSecretAsync("https://mywebsitesecret.vault.azure.net/secrets/TheSecret").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}

While this works as an example of how to use the Key Vault in a .NET Core MVC application, it’s not amazingly pretty code – for any serious application, I wouldn’t have all this code in my controller.

I think it would be more logical to access my secrets at the time my web application starts up, and put them into the Configuration for the app. Therefore if I need them later, I can just inject an IConfiguration object into my class, and use that to get the secret values.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;

namespace MyWebApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }
 
        private static IWebHost BuildWebHost(string[] args)
        {
            return WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration(builder =>
                {
                    var azureServiceTokenProvider = new AzureServiceTokenProvider();
                    var keyVaultClient =
                        new KeyVaultClient(
                            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
                    builder.AddAzureKeyVault("https://mywebsitesecret.vault.azure.net/",
                            keyVaultClient, new DefaultKeyVaultSecretManager());
                })
                .UseStartup<Startup>()
                .Build();
        }
    }
}

But as you can see above, the code I need to add here to build my web host is not very clear either – it seemed a shame to lose all the good work done in .NET Core 2 to simplify this class.

So I’ve created a NuGet package to give me a simpler and cleaner interface – it’s uploaded to the NuGet repository here, and I’ve open-sourced the code on GitHub here.

As usual, you can install it pretty easily from the command line:

Install-Package Kodiak.Azure.WebHostExtension -prerelease

Now my BuildWebHost method looks much cleaner – I can just add the fluent extension AddAzureKeyVaultSecretsToConfiguration and pass in the URL of the vault.

private static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .AddAzureKeyVaultSecretsToConfiguration("https://myvaultname.vault.azure.net")
        .UseStartup<Startup>()
        .Build();
}
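
If you’re curious about what the extension does internally, it’s essentially the ConfigureAppConfiguration plumbing from earlier in this post wrapped into a fluent method – a sketch along these lines (the class name and method body here are illustrative; the real implementation is in the GitHub repository):

using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration.AzureKeyVault;
 
public static class KeyVaultWebHostExtensions
{
    public static IWebHostBuilder AddAzureKeyVaultSecretsToConfiguration(
        this IWebHostBuilder webHostBuilder, string vaultUrl)
    {
        // wrap the Key Vault configuration plumbing so callers get a one-line fluent method
        return webHostBuilder.ConfigureAppConfiguration(builder =>
        {
            var azureServiceTokenProvider = new AzureServiceTokenProvider();
            var keyVaultClient = new KeyVaultClient(
                new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
            builder.AddAzureKeyVault(vaultUrl, keyVaultClient, new DefaultKeyVaultSecretManager());
        });
    }
}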

I think this is a more elegant implementation, and now if I need to access the secret inside my controller’s action, I can use the cleaner code below.

using System.Diagnostics;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
 
namespace MyWebApplication.Controllers
{
    public class HomeController : Controller
    {
        private readonly IConfiguration _configuration;
 
        public HomeController(IConfiguration configuration)
        {
            _configuration = configuration;
        }
 
        public IActionResult Index()
        {
            ViewBag.Secret = _configuration["MySecret"];
 
            return View();
        }
    }
}

Summing up

Azure Key Vault (AKV) is a useful tool to help keep production application secrets secure, although like any tool it’s possible to misuse it – so don’t assume that just because you’re using AKV your secrets are secure. Always remember to examine threat vectors and their impacts.

There’s a lot of documentation on the internet about AKV, and a lot of it recommends using the client Id and secrets in your code – I personally don’t like this because it’s always risky to have any kind of secret in your code. The last couple of posts that I’ve written have been about how to use an Azure Managed Service Identity with your application to avoid secrets in your code, and how to simplify the C# code you might use in your application to access AKV secrets.

.net, Azure, Cloud architecture, Security

Using the Azure Key Vault to keep secrets out of your web app’s source code

Ahead of the Global Azure Bootcamp, I’ve been looking at how I could allow a distributed team to develop and deploy a web application which accesses an Azure SQL Server instance in a secure way. There are a few different ways that I could share credentials to access my Azure SQL database:

  • Environment variables – this keeps secrets (like passwords) out of the code and mitigates the risk that they’d be committed to source code. But environment variables are stored in plain text, so if the host is compromised, those secrets are exposed.
  • .NET Core Secret Manager tool – there is a NuGet package which allows the user to keep application secrets (like a password) in a JSON file stored in the user profile directory – again, this mitigates the risk that secrets would be committed to source code, but I’d still have to share the secret, and it would still be stored in plain text.

Neither of these options is ideal for me – I would rather grant access to my Azure SQL database by role, and not share passwords with developers – passwords that would need to be written down somewhere, either in JSON or in my automated deployment scripts. And even though the two options above mitigate the risk that passwords are committed to source code, they don’t eliminate it.

So I was pretty excited to read about Azure Key Vault (AKV) – a way to securely store secrets in the cloud and avoid any risk of secrets being committed to source code.

This page from Microsoft presents a few different user stories and describes how AKV meets these needs. (There’s more information on Stack Overflow here.)

But after reading the document here, I was a bit surprised that the implementation described still used the Secret Manager tool – it seemed like we were just swapping storing secrets in one place for storing them in another. I searched around to find how this could be done without the Secret Manager tool, and in numerous blog posts and videos I saw developers setting up a secret in AKV, but then copying a “client secret” from Azure into their code, which I thought really defeated the purpose of having a vault for secrets.

Fortunately I’ve found what I need to do to use AKV with my .NET Core web application and not have to add any secrets to code – I secure my application with a Managed Service Identity. I’ve described how to do this below, with the C# code I needed to use the feature.

It looks like there’s a lot of steps but they’re all very simple – this is a lot less complicated than I thought it would be!

How to keep the secrets out of your source code

  • First create a vault
  • Add a secret to your vault
  • Secure your app service using Managed Service Identity
  • Access the secret from your source code with a KeyVaultClient

I’ll cover each of these in turn, with code samples at the end to show how to access the AKV.

First create a vault

Open the Azure portal and log in – click on the “All services” menu item on the left hand side, and search for “key vault” – this should filter the options so you have a screen like the one below.

akv1

Once you’ve got the Key Vaults option, click on it to see a screen like the one below, which will list the Key Vaults in your subscription. To create a new vault, click on the “Add” button, highlighted in the image below.

akv2

This will open another “blade” (which I just see as jargon for a floating window) in the portal where you can enter information about your new vault.

As you can see in the image below, I’ve called my vault “MyWebsiteSecret”, and I’ve created a new resource group for it called “Development_Secret”. I’ve chosen the location to be “UK West”, and by default my user has been added as the first principal who has permission to access this.

akv3

I clicked on the Create button at the bottom of the screen, and the portal presents a toast at the top right to say my vault is in the process of being created.

akv4

Eventually this changes when the deployment has succeeded.

akv6

So the Azure portal screen now shows the list page again, and my new vault is on this page.

akv5

Add a secret to the vault

Now the vault is created, we can create a new secret in it. Click on the vault created in the previous step to see the details for this vault (shown below).

akv7

Now click on the “Secrets” menu item to open a blade showing secrets in this vault. Obviously, as I’ve just created it, there are no secrets yet. We can create one by clicking on the “Generate/Import” button, highlighted in the image below.

akv8

After clicking on the “Generate/Import” button, a new blade opens where you can enter details of your secret. I chose a name of “TheSecret”, entered a secret value which is masked, and entered in a bit of text for the Content Type to describe the type of secret.

akv9

Once I click on “Create” at the bottom of the blade, the site returns me to the list of secrets in this vault – but this time, you can see my secret in the list, as shown below.

akv10

Secure the app service using Managed Service Identity

I have deployed my .NET Core application to Azure previously – I won’t go into lots of detail about how to deploy a .NET Core application since it’s in a million other blog posts and videos – basically I created a new App Service through the Azure portal, and linked it to a .NET Core application on my GitHub profile. Now when I push code to that application on GitHub, Azure will automatically build and deploy it.

Obviously this isn’t a full deployment pipeline – but it works for this simple application.

But I do want to show how to create a Managed Service Identity for this application – as shown in the image below, I’ve searched for my App Service on Azure.

akv12

I selected my app service to open a blade with options for this service, and selected “Managed Service Identity”, as shown below. By default it’s off – I’ve drawn an arrow below beside the button I pressed to turn it on for the app service, and after that I clicked on Save to persist my changes.

akv13

Once it was saved, I needed to go back to the key vault and secret that I created earlier and select “Access Policies”, as shown below. As I mentioned earlier, my name is in there as having permission by default, but I want my application to have permission too – so I clicked on the “Add new” option, which I’ve highlighted with a red arrow below.

akv14

The blade below opens up – for the principal, I selected my app service (called “MyAppServiceForTestingVaults”) – by default nothing is selected, so you just need to click on the option to open another blade where you can search for your app service. It’ll only be available if you’ve correctly configured the Managed Service Identity as described above.

Also, I selected two “Secret permissions” from the dropdown – Get and List.

akv15

Once I click OK, I can now see that my application is in the list of app services which have access to the secret I created earlier.

akv16

Add code to my .NET application to access these secrets

I use the Azure Services Authentication Extension to simplify development with my Visual Studio account.

I’m going to choose a really simple example – modifying the Index action of a HomeController class in the default .NET Core MVC website. I also need to add a NuGet package to my project:

Install-Package Microsoft.Azure.Services.AppAuthentication -Version 1.1.0-preview

The code below allows me to authenticate myself to my Azure instance, and get the secret from my vault.

You can see there’s a URL specified in the action below – it’s in the format of:

https://<<name of the vault>>.vault.azure.net/secrets/<<name of the secret>>

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        var secret = await keyVaultClient.GetSecretAsync("https://mywebsitesecret.vault.azure.net/secrets/TheSecret").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}

 

Now I can just modify the Index.cshtml view and add some code to show the secret (as simple as adding @ViewBag.Secret into the cshtml). When I run the project locally, I can see that my application has been able to access the vault and decrypt my secret (as highlighted in the image below) without any client Id or client secret information in my code – this is because my machine recognises that I’m authenticated to access my own Azure instance.

I can also deploy this code to my Azure App Service and I’ll get the same results, because the application’s Managed Service Identity ensures that my application in Azure has permission to access the secret.

secret in image

Summing up

This was a really simple example, just to illustrate how to allow developers to access AKV secrets without having to add secret information to source code. Obviously, if a developer is determined to compromise security, they could decrypt passwords and disseminate them in another way – so we’d have to tighten up security for a real-world application. For example, we could have different secrets stored in different environmental resource groups as we promote our application from Dev to QA/Staging and finally to Production.
