.net, Azure

Load your work items into Azure DevOps Boards with .NET

This post is about how to write a .NET application to move work items from another source (e.g. JIRA, Excel etc.) into Azure Boards in Azure DevOps, and about a NuGet package I’ve built to hopefully make this a bit easier for anyone else doing the same.

So here’s a problem…

Let’s say you’ve convinced your boss to move your projects to Azure DevOps – great! You’re happy, and your team are happy, but before you can really start, there’s still some work to be done – migrating all the historical project data from your existing company systems.

Maybe your company has its own custom story/issue/bug tracking system (maybe it’s JIRA, maybe it’s Mantis, or something else), and you don’t want to lose or archive all that valuable content. You want to load all that content in your project’s Azure Board as well – how do you do that?

Use .NET with Azure Boards to solve this problem

I had exactly this problem recently – my project’s history was exported into one big CSV file, and I needed to get it into Azure Boards. There were loads of fields I needed to keep, and I didn’t want to lose any of that content…

…so I ‘.NET’ted  my way out of trouble.

A bit of searching on the internet also led me to the option of bulk loading using Excel and the TFS Standalone Office Integration pack, but I’m a programmer and I prefer the flexibility of using code. Though, y’know, YMMV.


First I created a .NET Framework console application, and added a couple of NuGet packages for Azure DevOps:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

These are both projects that target .NET Framework, so I can’t use .NET Core for this yet.

With these included in my application, I now have access to objects which allow me to connect to Azure DevOps through .NET, and also connect to a work item client that allows me to perform create/read/update/delete operations on work items in my project’s board.

It’s pretty easy to load up my project history CSV into a list in a .NET application, so I knew I had all the puzzle pieces to solve this problem, I just needed to put them together.

In order to connect to Azure DevOps and add items using .NET, I used:

  • The name of the project I want to add work items to – my project codename is “Corvette”
  • The URL of my Azure DevOps instance – https://dev.azure.com/jeremylindsay
  • My personal access token.

If you’ve not generated a personal access token in Azure DevOps before, check this link out for details on how to do it – it’s really straightforward from the Azure DevOps portal:

https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=vsts

I can now use the code below to connect to Azure DevOps and create a work item client.

var uri = new Uri("https://dev.azure.com/jeremylindsay");
var personalAccessToken = "[***my access token***]";
var projectName = "Corvette";
 
var credentials = new VssBasicCredential("", personalAccessToken);
 
var connection = new VssConnection(uri, credentials);
var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();

Next, I need to create what is basically a list of name and value pairs which describes the name of the work item field (e.g. title, description etc), and the value that I want to put in that field.

This link below describes the fields you can access through code:

https://docs.microsoft.com/en-us/azure/devops/reference/xml/reportable-fields-reference?view=vsts

It’s a little more complex than a normal dictionary or other key-value pair objects in .NET, but not that difficult. The work item client uses custom objects called JsonPatchDocuments and JsonPatchOperations. Also, the names of the fields are not intuitive out of the box, but given all that, I can still create a work item in .NET using the code below:

var bug = new JsonPatchDocument
{
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/System.Title",
        Value = "Spelling mistake on the home page"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.TCM.ReproSteps",
        Value = "Log in, look at the home page - there is a spelling mistake."
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Priority",
        Value = "1"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Severity",
        Value = "2 - High"
    }
};

Then I can add the bug to my Board with the code below:

var workItem = workItemTrackingHttpClient.CreateWorkItemAsync(bug, projectName, "Bug").Result;

Now this works and is very flexible, but I think my code could be made more readable and easier to use. So I refactored the code, moved most of it into a library, and uploaded it to NuGet here. My refactoring is pretty simple – I’m not going to go into lots of detail on how I did it, but if you’re interested the code is up on GitHub here.

If you’d like to get this package, you can use the command below:

Install-Package AzureDevOpsBoardsCustomWorkItemObjects -pre

This package depends on the two NuGet packages I referred to earlier in this post, so they’ll be added automatically if you install my NuGet package.

This allows us to instantiate a bug object in a way that looks much more like creating a normal POCO, as shown below:

var bug = new AzureDevOpsBug
{
    Title = "Spelling mistake on the home page",
    ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
    Priority = AzureDevOpsWorkItemPriority.Medium,
    Severity = AzureDevOpsWorkItemSeverity.Low,
    AssignedTo = "Jeremy Lindsay",
    Comment = "First comment from me",
    Activity = "Development",
    AcceptanceCriteria = "This is the acceptance criteria",
    SystemInformation = "This is the system information",
    Effort = 13,
    Tag = "Cosmetic; UI Only"
};

And to push this bug to my Azure Board, I can use the code below which is a little simpler than what I wrote previously.

using AzureDevOpsCustomObjects;
using AzureDevOpsCustomObjects.Enumerations;
using AzureDevOpsCustomObjects.WorkItems;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
            const string projectName = "Corvette";
 
            var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
            var bug = new AzureDevOpsBug
            {
                Title = "Spelling mistake on the home page",
                ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
                Priority = AzureDevOpsWorkItemPriority.Medium,
                Severity = AzureDevOpsWorkItemSeverity.Low,
                AssignedTo = "Jeremy Lindsay",
                Comment = "First comment from me",
                Activity = "Development",
                AcceptanceCriteria = "This is the acceptance criteria",
                SystemInformation = "This is the system information",
                Effort = 13,
                Tag = "Cosmetic; UI Only"
            };
 
            var createdBug = workItemCreator.Create(bug);
        }
    }
}

I’ve chosen to instantiate the bug with hard-coded text in the example above for clarity – but obviously you can instantiate the POCO any way you like: from a database, or perhaps by parsing data out of a CSV file, as sketched below.
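For example, a minimal sketch of loading bugs from a CSV export might look like the code below. The column layout here is an assumption (your export will have its own), and the naive String.Split won’t cope with quoted commas – a CSV library like CsvHelper is a better choice for real data:

// a minimal sketch, assuming a CSV export with the columns: Title, ReproSteps, AssignedTo
// (the column layout is hypothetical - adjust the indexes to match your own export)
// requires: using System.IO; using System.Linq;
var bugs = File.ReadAllLines("ProjectHistory.csv")
    .Skip(1) // skip the header row
    .Select(line => line.Split(',')) // naive split - use a CSV library for quoted fields
    .Select(columns => new AzureDevOpsBug
    {
        Title = columns[0],
        ReproSteps = columns[1],
        AssignedTo = columns[2]
    });

foreach (var importedBug in bugs)
{
    workItemCreator.Create(importedBug);
}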

Anyway, the image below shows the bug added to my Azure Board.

[Image: the new bug displayed on my Azure Board]

Of course, Bugs are not the only type of work item – let’s say I also want to add Product Backlog Items. And there are many, many different fields used in Azure Boards, and I haven’t coded for all of them in my NuGet package. So:

  • I’ve also added a Product Backlog object into my NuGet package,
  • I’ve made the creation method generic so it can detect the object type and work out what type of work item is being added to the Board
  • I’ve made the work item objects extensible so users can add any fields which I haven’t coded for yet.

For example, the code below shows how to add a product backlog item and include a comment in the System.History field:

private static void Main(string[] args)
{
    const string uri = "https://dev.azure.com/jeremylindsay";
    const string personalAccessToken = "[[***my personal access token***]]";
    const string projectName = "Corvette";

    var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);

    var productBacklogItem = new AzureDevOpsProductBacklogItem
    {
        Title = "Add reports for how many users log in each day",
        Description = "Need a new report with log in statistics.",
        Priority = AzureDevOpsWorkItemPriority.Low,
        Severity = AzureDevOpsWorkItemSeverity.Low,
        AssignedTo = "Jeremy Lindsay",
        Activity = "Development",
        AcceptanceCriteria = "This is the acceptance criteria",
        SystemInformation = "This is the system information",
        Effort = 13,
        Tag = "Reporting; Users"
    };

    productBacklogItem.Add(
        new JsonPatchOperation
        {
            Path = "/fields/System.History",
            Value = "Comment from product owner."
        }
    );

    var createdBacklogItem = workItemCreator.Create(productBacklogItem);
}

Obviously I could change the code to allow comments to be added through a property in the AzureDevOpsProductBacklogItem POCO – this is just an example to demonstrate how it can be done by adding a JsonPatchOperation.
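For illustration only, the sketch below shows what that change inside the package might look like – the method name is hypothetical, it just shows the mapping from a property to the patch operation:

// hypothetical sketch: a Comment property on AzureDevOpsProductBacklogItem...
public string Comment { get; set; }

// ...which the package could translate into the same patch operation shown above
private JsonPatchOperation CommentToOperation() =>
    new JsonPatchOperation
    {
        Operation = Operation.Add,
        Path = "/fields/System.History",
        Value = Comment
    };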

The image below shows the product backlog item successfully added to my Azure Board.

[Image: the product backlog item displayed on my Azure Board]

Wrapping up

The Boards component of Azure DevOps is a useful and effective way to track your team’s work items. And if you want to populate a new Board with a list of existing bugs or backlog items, you can do this with .NET. A lot of these functions aren’t new – they were available in VSTS – but it’s still nice to see these powerful functions and libraries continue to be supported. And hopefully the NuGet package I’ve created to assist in the process will be useful to some of you who are working through the same migration challenges that I am. Obviously this NuGet package can still be improved a lot – it just covers Backlog Items and Bugs right now, and it’d be better if it flagged those fields that are read-only – but it’s good enough to meet minimum viable standards for me right now, and maybe it’ll be helpful for you too.


About me: I regularly post about Microsoft technologies like Azure and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, .net core, Flurl, Polly

Using Polly and Flurl to improve your website

This post is about how to use The Polly Project to make a .NET website better. I use Flurl to consume RESTful web services, so there’s some Flurl-specific code later on, but I hope this post is useful to anyone who’s interested in learning what Polly is, what it’s for, and how it can help you.

So here’s a problem

Let’s pretend you run your business through a website, and part of your code calls out to a web service that another company supplies.

And, every once in a while, errors from this web service appear in your logs. Sometimes the HTTP status code is a 404 (not found), sometimes the code is a 503 (service unavailable), and other times you see a 504 (timeout). There’s no pattern, it goes away as quickly as it starts, and you’d really really like to get this fixed before customers start cancelling their subscriptions to your service.

You call up the business running the remote web service, and their answer is a bit… vague. Every so often they restart their web servers which takes their service down for a couple of seconds, and at certain times of the day they get spikes of traffic which causes their system to max out for up to 5 seconds at a time. They’re apologetic, and they expect to migrate to new, better infrastructure in about 6 months. But their only workaround is for you to re-query the service.

So you could be forgiven for going spare right now – this response doesn’t fix anything. This company is the only place you can get the data you need so you’re locked in. And you know your customers are seeing errors because it’s right there staring at you from your website logs. Asking your customers to ‘just hit refresh’ when they get an error is a great way to lose business and win a bad reputation.

You can use Polly to help solve this problem

When I first read about Polly a long while back, I was really interested but I wasn’t sure how I could apply it to the project I was working on.  What I wanted was to find a post that described a real world scenario that I could recognise and identify with, and how Polly would help with that.

Since then, I’ve worked on projects a little bit like the one I described above – one time when I’ve raised a ticket to say that we’re having intermittent problems with a web service, I’ve been told that the workaround is ‘hit refresh’. And since there’s a workaround, it’s only going to be raised as medium priority issue (which feels like a coded message for ‘we’re not even going to look at this’). This kind of thing drives me crazy and it’s exactly the kind of problem that Polly can at least mitigate.

I’ve also met people who are doing really interesting work with hardware devices in .NET, and need to be able to handle hardware that can only deal with single threads – Polly allows the application to handle occasions when it doesn’t receive an acknowledgement from the hardware by waiting for a while and then retrying.

Let’s get to some code

I’ve pushed all of the code below to a repo on my GitHub, so you can pull it locally and step through it yourself.

First, a couple of harnesses to simulate a flakey web-service

So I’ve written a simple (and really awful) web service project to simulate random transient errors. The service is just meant to return what day it is, but it’ll only work about two times out of three. The rest of the time it’ll return either a 404 (Not Found), a 503 (Service Unavailable), or it’ll hang for 10 seconds and then return a 504 (Gateway Timeout).

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
 
namespace WorldsWorstWebService.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class WeekDayController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            // Manufacture 404, 503 and 504 errors for about a third of all responses
            var randomNumber = new Random();
            var randomInteger = randomNumber.Next(0, 8);
 
            switch (randomInteger)
            {
                case 0:
                    Debug.WriteLine("Webservice:About to serve a 404...");
                    return StatusCode(StatusCodes.Status404NotFound);
 
                case 1:
                    Debug.WriteLine("Webservice:About to serve a 503...");
                    return StatusCode(StatusCodes.Status503ServiceUnavailable);
 
                case 2:
                    Debug.WriteLine("Webservice:Sleeping for 10 seconds then serving a 504...");
                    Thread.Sleep(10000);
                    Debug.WriteLine("Webservice:About to serve a 504...");
 
                    return StatusCode(StatusCodes.Status504GatewayTimeout);
                default:
                {
                    var formattedCustomObject = JsonConvert.SerializeObject(
                        new
                        {
                            WeekDay = DateTime.Today.DayOfWeek.ToString()
                        });
 
                    Debug.WriteLine("Webservice:About to correctly serve a 200 response");
 
                    return Ok(formattedCustomObject);
                }
            }
        }
    }
}

I’ve also written another web application project that consumes this service using Flurl.

If you’re interested in Flurl and Restful web services, I’ve written more about using it here.

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Flurl.Http;
using Microsoft.AspNetCore.Mvc;
using MyWebsite.Models;
 
namespace MyWebsite.Controllers
{
    public class HomeController : Controller
    {
        public async Task<IActionResult> Index()
        {
            try
            {
                var weekday = await "https://localhost:44357/api/weekday"
                    .GetJsonAsync<WeekdayModel>();
 
                Debug.WriteLine("[App]: successful");
 
                return View(weekday);
            }
            catch (Exception e)
            {
                Debug.WriteLine("[App]: Failed - " + e.Message);
                throw;
            }
        }
    }
}

So I carried out a simple experiment – running these projects and hitting my website 20 times – I mostly got successful responses, but I still got a load of failures. I’ve pasted the debug log below.

[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 504 (Gateway Timeout): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: Failed - Call failed with status code 404 (Not Found): GET https://localhost:44357/api/weekday

So out of 20 page hits, my test web app failed 6 times – about a 30% failure rate. That’s pretty poor (and roughly consistent with what we’d expect from the flakey web service).

Let’s say I don’t control the behaviour of the web service upstream of my web app, so I can’t change the reason why my web app is failing – but let’s see if Polly allows me to reduce the number of failures that my web app users see.

Wiring up Polly

First let’s design some rules, also known as ‘policies’

So what’s a ‘policy’? Basically it’s just a rule that’ll help mitigate the intermittent problem.

For example – the web service frequently delivers 404 and 503 messages, but it’s back up again quickly. So a policy could be:

Retry Policy: When the web service returns an unsuccessful HTTP code, wait a second and try again. If it still fails, wait three seconds and try again, and if it still fails, then wait five more seconds and try one more time. If it fails after that, the service is dead and we need to deal with the error.

We also know that the web service hangs for 10 seconds before delivering a 504 timeout message. I don’t want my customers to wait this long – after a couple of seconds I’d like my app to give up, and execute the ‘Retry Policy’ above.

Timeout Policy: When I’ve been waiting for a response for longer than 2 seconds, cut my losses and execute the Retry Policy.

Wrapping these policies together forms a ‘Policy Strategy’.

So the first step is to install the Polly nuget package to the web app project:

Install-Package Polly

Polly is an open source project hosted on GitHub, with a BSD licence. It’s also a member of the .NET Foundation.

So what would these policies look like in code? The timeout policy is like the code below, where we can just pass the number of seconds to wait as a parameter:

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(2);

There’s also an overload which takes a delegate – I’ve used it below to write some debug messages.

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(2, (context, timeSpan, task) =>
{
    Debug.WriteLine($"[App|Policy]: Timeout delegate fired after {timeSpan.Seconds} seconds");
    return Task.CompletedTask;
});

The retry policy is a little different from the timeout policy:

  • I first specify the conditions under which I should retry – there must be an unsuccessful HTTP status code, or there must be a timeout exception.
  • Then I can specify how to wait and retry – first wait 1 second before retrying, then wait 3 seconds, then wait 5 seconds.
  • Finally I’ve used the overload with a delegate to write comments to debug.
var retryPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .Or<TimeoutRejectedException>()
    .WaitAndRetryAsync(new[]
        {
            TimeSpan.FromSeconds(1),
            TimeSpan.FromSeconds(3),
            TimeSpan.FromSeconds(5)
        },
        (result, timeSpan, retryCount, context) =>
        {
            Debug.WriteLine($"[App|Policy]: Retry delegate fired, attempt {retryCount}");
        });

And I can bundle these policies together as a single policy strategy like this:

var policyStrategy = Policy.WrapAsync(RetryPolicy, TimeoutPolicy);

I’ve grouped these policies in their own class and pasted the code below.

public static class Policies
{
    private static TimeoutPolicy<HttpResponseMessage> TimeoutPolicy
    {
        get
        {
            return Policy.TimeoutAsync<HttpResponseMessage>(2, (context, timeSpan, task) =>
            {
                Debug.WriteLine($"[App|Policy]: Timeout delegate fired after {timeSpan.Seconds} seconds");
                return Task.CompletedTask;
            });
        }
    }
 
    private static RetryPolicy<HttpResponseMessage> RetryPolicy
    {
        get
        {
            return Policy
                .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
                .Or<TimeoutRejectedException>()
                .WaitAndRetryAsync(new[]
                    {
                        TimeSpan.FromSeconds(1),
                        TimeSpan.FromSeconds(3),
                        TimeSpan.FromSeconds(5)
                    },
                    (result, timeSpan, retryCount, context) =>
                    {
                        Debug.WriteLine(
                            $"[App|Policy]: Retry delegate fired, attempt {retryCount}");
                    });
        }
    }
 
    public static PolicyWrap<HttpResponseMessage> PolicyStrategy => Policy.WrapAsync(RetryPolicy, TimeoutPolicy);
}

Now I want to apply this Policy Strategy to every outgoing call to the 3rd party web service.

How do I apply these policies when I’m using Flurl?

One of the things I really like about using Flurl to consume 3rd party web services is that I don’t need to instantiate an HttpClient, or worry about running out of available sockets every time I make a call – Flurl handles all of this in the background for me.

But that also means it’s not immediately obvious how I can configure calls to the HttpClient used in the background so that my policy strategy is applied to each call.

Fortunately Flurl provides a way to do this by adding a few new classes to my web app project, and a configuration instruction. I can configure Flurl’s settings in my web app’s Startup file to make it use a different implementation of Flurl’s default HttpClientFactory (which overrides how HTTP messages are handled).

public void ConfigureServices(IServiceCollection services)
{
    //...other service configuration here
 
    FlurlHttp.Configure(settings => settings.HttpClientFactory = new PollyHttpClientFactory());
}

The PollyHttpClientFactory is an extension of Flurl’s default HttpClientFactory. This overrides how HttpMessages are handled, and instead uses our own PolicyHandler.

public class PollyHttpClientFactory : DefaultHttpClientFactory
{
    public override HttpMessageHandler CreateMessageHandler()
    {
        return new PolicyHandler
        {
            InnerHandler = base.CreateMessageHandler()
        };
    }
}

And the PolicyHandler is where we apply our rules (the policy strategy) to outgoing HTTP requests.

public class PolicyHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Policies.PolicyStrategy.ExecuteAsync(ct => base.SendAsync(request, ct), cancellationToken);
    }
}

Now let’s see if this improves things

With the policies applied to requests to the 3rd party web service, I repeated the earlier experiment and hit my application again 20 times.

[App]: successful
[App]: successful
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App]: successful
[App]: successful
[App]: successful

This time, my users would have experienced no application failures in those 20 page hits. But all those ‘[App|Policy]’ lines are the times that the web service failed, and our policy was to try again – which eventually led to a successful response from my web app.

In fact, I went on to hit the page 100 times and only saw two errors in total, so the total failure rate that my users experience is now about 2% – way better than the 30% failure rate experienced originally.

Obviously this is a very contrived example – real-world examples are likely to be a bit more complex. And your rules and policies will be different to mine. Instead of retrying, maybe you want to fall back to a different action (e.g. hit a different web service, pull from a cache etc.) – Polly has its own fallback mechanism for this, sketched below. You’ll have to design your own rules and policies to handle the particular failure modes that you face.
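For example, a fallback policy that serves a canned response once the retries are exhausted might look like the sketch below – the canned JSON body is just an assumption to suit this demo service (and it needs using directives for System.Net and System.Net.Http):

// a minimal sketch of a Polly fallback policy for this demo service
var fallbackPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .Or<TimeoutRejectedException>()
    .FallbackAsync(
        // the canned response served when everything else has failed
        new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("{ \"weekDay\": \"Unknown\" }")
        },
        onFallbackAsync: (result, context) =>
        {
            Debug.WriteLine("[App|Policy]: Fallback delegate fired");
            return Task.CompletedTask;
        });

// the first policy in a wrap is the outermost, so the fallback only
// fires once the retry policy inside it has given up
var policyStrategy = Policy.WrapAsync(fallbackPolicy, RetryPolicy, TimeoutPolicy);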

Wrapping up

I had a couple of aims when writing this post – first of all I wanted to come up with a couple of different scenarios for how Polly could be used in your application. I mostly work with web applications and web services, and I also like using Flurl for accessing these services, so that’s what this article focusses on. But I’ve just scratched the surface here – Polly can do way more than that. Check out the Polly Wiki to find out more about it, or look at the samples.

 


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, .net core, C# tip, Flurl

Comparing RestSharp and Flurl.Http while consuming a web service in .NET Core

Just before the holidays I was working on a .NET Core project that needed data available from some web services. I’ve done this a bunch of times previously, and always seem to spend a couple of hours writing code using the HttpClient object before remembering there are libraries out there that have done the heavy lifting for me.

So I thought I’d do a little write-up of a couple of popular library options that I’ve used – RestSharp and Flurl. I find that I learn quickest from reading example code, so I’ve written sample code showing how to use both of these libraries with a few different publicly available APIs.

I’ll look at three different services in this post:

  • api.postcodes.io – no authentication required, uses GET and POST verbs
  • api.nasa.gov – authentication via an API key passed in the query string
  • api.github.com – Basic Authentication required to access private repo information

And as an architect, I’m sometimes asked how to get started (and sometimes ‘why did you choose library X instead of library Y?’), so I’ve wrapped up with a comparison and which library I like best right now.

Reading data using RestSharp

This is a very mature and well-documented open source project (released under the Apache 2.0 licence), with the code available on GitHub. You can install the NuGet package in your project using package manager with the command:

Install-Package RestSharp

First – using the GET verb with RestSharp.

Using HTTP GET to return data from a web service

Using Postcodes.io

I’ve been working with mapping software recently – some of my data sources don’t have latitude and longitude for locations, and instead they only have a UK postcode. Fortunately I can use the free Postcodes.io RESTful web API to determine a latitude and longitude for each of the postcode values. I can either just send a postcode using a GET request to get the corresponding geocode (latitude and longitude) back, or I can use a POST request to send a list of postcodes and get a list of geocodes back, which speeds things up a bit with bulk processing.

Let’s start with a simple example – using the GET verb for a single postcode. I can request a geocode corresponding to a postcode from the Postcodes.io service through a browser with a URL like the one below:

https://api.postcodes.io/postcodes/IP1 3JR

This service doesn’t require any authentication, and the code below shows how to use RestSharp and C# to get data using a GET request.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes/IP1 3JR
var getRequest = new RestRequest("postcodes/{postcode}");
getRequest.AddUrlSegment("postcode""IP1 3JR");
 
// send the GET request and return an object which contains the API's JSON response
var singleGeocodeResponseContainer = client.Execute(getRequest);
 
// get the API's JSON response
var singleGeocodeResponse = singleGeocodeResponseContainer.Content;

The example above returns raw JSON content, which I can deserialise into a custom POCO, such as the one below.

public class GeocodeResponse
{
    public string Status { get; set; }

    public Result Result { get; set; }
}

public class Result
{
    public string Postcode { get; set; }

    public string Longitude { get; set; }

    public string Latitude { get; set; }
}

But I can do better than the code above – if I specify the GeocodeResponse type in the Execute method (as shown below), RestSharp uses the classes above and intelligently hydrates the POCO from the raw JSON content returned:

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes/OX495NU
var getRequest = new RestRequest("postcodes/{postcode}");
getRequest.AddUrlSegment("postcode""OX495NU");
 
// send the GET request and return an object which contains a strongly typed response
var singleGeocodeResponseContainer = client.Execute<GeocodeResponse>(getRequest);
 
// get the strongly typed response
var singleGeocodeResponse = singleGeocodeResponseContainer.Data;

Of course, not all APIs work in the same way, so here are another couple of examples of how to return data from different publicly available APIs.

NASA Astronomy Picture of the Day

This NASA API is also freely available, but slightly different from the Postcodes.io API in that it requires an API subscription key. NASA requires that the key is passed as a query string parameter, and RestSharp facilitates this with the AddQueryParameter method (as shown below).

This method of securing a service isn’t that unusual – goodreads.com/api also uses this method.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.nasa.gov/");
 
// specify the resource, e.g. https://api.nasa.gov/planetary/apod
var getRequest = new RestRequest("planetary/apod");
 
// Add the authentication key which NASA expects to be passed as a parameter
// This gives https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY
getRequest.AddQueryParameter("api_key""DEMO_KEY");
 
// send the GET request and return an object which contains the API's JSON response
var pictureOfTheDayResponseContainer = client.Execute(getRequest);
 
// get the API's JSON response
var pictureOfTheDayJson  = pictureOfTheDayResponseContainer.Content;

Again, I could create a custom POCO corresponding to the JSON structure and populate an instance of this by passing the type with the Execute method.
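For example, a POCO for a few of the fields in the APOD JSON might look like the sketch below – the property names mirror the title, explanation and url fields in the public APOD response, and RestSharp’s default deserializer matches them case-insensitively:

// a sketch of a POCO matching part of the APOD JSON response
public class PictureOfTheDay
{
    public string Title { get; set; }

    public string Explanation { get; set; }

    public string Url { get; set; }
}

// pass the type to Execute and read the strongly typed response
var pictureOfTheDay = client.Execute<PictureOfTheDay>(getRequest).Data;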

Github’s API

The GitHub API will return public data without any authentication, but if I provide Basic Authentication data it will also return extra information relevant to my profile, such as information about my private repositories.

RestSharp allows us to set an Authenticator property to specify the userid and password.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.github.com/");
 
// pass in user id and password 
client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", "[[my password]]");
 
// specify the resource that requires authentication
// e.g. https://api.github.com/users/jeremylindsayni
var getRequest = new RestRequest("users/jeremylindsayni");
 
// send the GET request and return an object which contains the API's JSON response
var response = client.Execute(getRequest);

Obviously you shouldn’t hard-code your password into your code – these are just examples of how to return data, not best practices. You might want to store your password in an environment variable (as sketched below), or you could do even better and use Azure Key Vault – I’ve written about how to do that here and here.
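As a quick sketch of the environment variable approach – the variable name below is just one I’ve made up for this example:

// read the password from an environment variable rather than hard-coding it
// ("GITHUB_PASSWORD" is a hypothetical name - use whatever your deployment sets)
var gitHubPassword = Environment.GetEnvironmentVariable("GITHUB_PASSWORD");
client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", gitHubPassword);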

Using the POST verb to obtain data from a web service

The code in the previous examples refers to GET requests – a POST request is slightly more complex.

The api.postcodes.io service has a few different endpoints – the one I described earlier only finds geocode information for a single postcode – but I’m also able to post a JSON list of up to 100 postcodes, and get corresponding geocode information back as a JSON list. The JSON needs to be in the format below:

{
   "postcodes" : ["IP1 3JR", "M32 0JG"]
}

Normally I prefer to manipulate data in C# structures, so I can add my list of postcodes to the object below.

public class PostCodeCollection
{
    public List<string> postcodes { get; set; }
}

I’m able to create a POCO object with the data I want to post to the body of the POST request, and RestSharp will automatically convert it to JSON when I pass the object into the AddJsonBody method.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");

// specify the resource, e.g. https://api.postcodes.io/postcodes
var postRequest = new RestRequest("postcodes", Method.POST, DataFormat.Json);

// instantiate and hydrate a POCO object with the list of postcodes we want geocode data for
var postcodes = new PostCodeCollection { postcodes = new List<string> { "IP1 3JR", "M32 0JG" } };
 
// add this POCO object to the request body, RestSharp automatically serialises it to JSON
postRequest.AddJsonBody(postcodes);
 
// send the POST request and return an object which contains JSON
var bulkGeocodeResponseContainer = client.Execute(postRequest);

One gotcha – RestSharp Serialization and Deserialization

One aspect of RestSharp that I don’t like is how the JSON serialisation and deserialisation works. RestSharp uses its own engine for processing JSON, but basically I prefer Json.NET for this. For example, if I use the default JSON processing engine in RestSharp, then my PostcodeCollection POCO needs to have property names which exactly match the JSON property names (including case sensitivity).

I’m used to working with Json.NET and decorating properties with attributes describing how to serialise into JSON, but this won’t work with RestSharp by default.

// THIS DOESN'T WORK WITH RESTSHARP UNLESS YOU ALSO USE **AND REGISTER** JSON.NET
public class PostCodeCollection
{
    [JsonProperty(PropertyName = "postcodes")]
    public List<string> Postcodes { get; set; }
}

Instead I need to override the default RestSharp serializer and instruct it to use Json.NET. The RestSharp maintainers have written about their reasons here and also here – and helped out by writing the code to show how to override the default RestSharp serializer. But personally I’d rather just use Json.NET the way I normally do, and not have to jump through an extra hoop to use it.

Reading Data using Flurl

Flurl is newer than RestSharp, but it’s still a reasonably mature and well documented open source project (released under the MIT licence). Again, the code is on Github.

Flurl is different from RestSharp in that it allows you to consume the web service by building a fluent chain of instructions.

You can install the nuget package in your project using package manager with the command:

Install-Package Flurl.Http

Using HTTP GET to return data from a web service

Let’s look at how to use the GET verb to read data from api.postcodes.io, api.nasa.gov and api.github.com.

First, using Flurl with api.postcodes.io

The code below searches for geocode data from the specified postcode, and returns the raw JSON response. There’s no need to instantiate a client, and I’ve written much less code than I wrote with RestSharp.

var singleGeocodeResponse = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .AppendPathSegment("IP1 3JR")
    .GetJsonAsync();

I also find using the POST method with postcodes.io easier with Flurl. Even though Flurl doesn’t have a built-in JSON serialiser, it’s easy for me to install the Json.NET package – this means I can now use a POCO like the one below…

public class PostCodeCollection
{
    [JsonProperty(PropertyName = "postcodes")]
    public List<string> Postcodes { get; set; }
}

… to fluently build up a POST request like the one below. I can also create my own custom POCO – GeocodeResponseCollection – which Flurl will automatically populate with the JSON fields.

var postcodes = new PostCodeCollection { Postcodes = new List<string> { "OX49 5NU", "M32 0JG" } };
 
var url = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .PostJsonAsync(postcodes)
    .ReceiveJson<GeocodeResponseCollection>();

Next, using Flurl with api.nasa.gov

As mentioned previously, NASA’s astronomy picture of the day requires a demo key passed in the query string – I can do this with Flurl using the code below:

var astronomyPictureOfTheDayJsonResponse = await "https://api.nasa.gov/"
    .AppendPathSegments("planetary", "apod")
    .SetQueryParam("api_key", "DEMO_KEY")
    .GetJsonAsync();

Again, it’s a very concise way of retrieving data from a web service.

Finally using Flurl with api.github.com

Lastly for this post, the code below shows how to use Flurl with Basic Authentication and the GitHub API.

var singleGeocodeResponse = await "https://api.github.com/"
    .AppendPathSegments("users""jeremylindsayni")
    .WithBasicAuth("jeremylindsayni""[[my password]]")
    .WithHeader("user-agent""csharp-console-app")
    .GetJsonAsync();

One interesting difference in this example between RestSharp and Flurl is that I had to send user-agent information to the Github API with Flurl – I didn’t need to do this with RestSharp.

Wrapping up

Both RestSharp and Flurl are great options for consuming RESTful web services – they’re both stable, the source for both is on GitHub, and there’s great documentation. They let me write less code and do the thing I want to do quickly, rather than spending ages writing my own code and tests.

Right now, I prefer working with Flurl, though the choice comes down to personal preference. Things I like are:

  • Flurl’s MIT licence
  • I can achieve the same results with less code, and
  • I can integrate Json.NET with Flurl out of the box, with no extra classes needed.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Non-functional Requirements, Security

Serve CSS and JavaScript from CDNs safely with subresource integrity (SRI) attributes

I’m building a web application at the moment which plots data on a map using the Leaflet JS framework. Leaflet JS is fantastic, and has a huge number of open-source community plugins which make it even more useful.

For these plugins, I can download them and host the JavaScript and CSS on my own website, but I’d prefer to use a CDN (Content Delivery Network) like CloudFlare. Using a service like this means I don’t have to host the files, and these files will be served to my users from a site that’s close to them.

Obviously this means that the CDN is now in control of my files.

How can I make sure these files on the CDN haven’t been tampered with before I serve them up to my users?

W3C.org recommends that “compromise of a third-party service should not automatically mean compromise of every site which includes its scripts”.

Troy Hunt wrote about this a while back and recommends using the ‘integrity’ attributes in script and link tags that reference subresources – supported browsers will calculate a hash of the file served by the CDN and compare that hash with the value in the integrity attribute. If they don’t match, the browser refuses to use the file.

The catch is that not all browsers support this – though coverage in modern browsers is pretty good. You can check out caniuse.com to see which browsers support the integrity attribute.

[Image: caniuse.com browser support table for subresource integrity]

How can I calculate the hash of my file to put in the integrity attribute?

I like to use Scott Helme’s utility at https://report-uri.com/home/sri_hash/ to create the hash of JavaScript and CSS files. This calculates 3 different hashes, using SHA256, SHA384 and SHA512.
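If you’d rather calculate a hash locally instead of using an online tool, a few lines of .NET will do it – this sketch computes the SHA384 variant for a local copy of the file:

using System;
using System.IO;
using System.Security.Cryptography;

// compute the sha384 subresource integrity value for a local copy of the file
var fileBytes = File.ReadAllBytes("leaflet.js");
using (var sha384 = SHA384.Create())
{
    var hash = Convert.ToBase64String(sha384.ComputeHash(fileBytes));
    Console.WriteLine($"sha384-{hash}");
}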

So instead of my script tag looking like this:

<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.js"></script>

My script tags now look like this:

<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.js" 
        integrity="sha256-tfcLorv/GWSrbbsn6NVgflWp1YOmTjyJ8HWtfXaOaJc= sha384-/I247jMyT/djAL4ijcbNXfX+PA8OZmkwzUr6Gotpgjz1Rxti1ZECG9Ne0Dj1pXrx sha512-nMMmRyTVoLYqjP9hrbed9S+FzjZHW5gY1TWCHA5ckwXZBadntCNs8kEqAWdrb9O7rxbCaA4lKTIWjDXZxflOcA==" 
        crossorigin="anonymous"></script>

This also works for CSS – without the integrity attribute it would look like the code below:

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.1/dist/leaflet.css" />

But a more secure version is below:

<link rel="stylesheet" 
      href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.css" integrity="sha256-YR4HrDE479EpYZgeTkQfgVJq08+277UXxMLbi/YP69o= sha384-BF7C732iE6WuqJMhUnTNJJLVvW1TIP87P2nMDY7aN2j2EJFWIaqK89j3WlirhFZU sha512-puBpdR0798OZvTTbP4A8Ix/l+A4dHDD0DGqYW6RQ+9jxkRFclaxxQb/SJAWZfWAkuyeQUytO7+7N4QKrDh+drA==" 
      crossorigin="anonymous">

Wrapping up

Hopefully this is useful information, and provides a guide on how to make sure your site doesn’t serve up JavaScript or CSS content that has been tampered with.

.net, C# tip, Visual Studio, Xamarin

How to detect nearby Bluetooth devices with .NET and Xamarin.Android

I’m working on a Xamarin.Android app at the moment – for this app, I need to detect what Bluetooth devices are available to my Android phone (so the user can choose which one to pair with).

For modern versions of Android, it’s not as simple as just using a BroadcastReceiver (although that is part of the solution). In this post I’ll write about the steps needed to successfully use the Bluetooth hardware on your Android phone with .NET.

One thing to note – I can test detecting Bluetooth devices by deploying my code directly onto an Android device, but I can’t use the Android emulator as it doesn’t have Bluetooth support.

As usual I’ve uploaded my code to GitHub (you can get it here).

Update AndroidManifest.xml with Bluetooth and Location permissions

First I had to make sure that my application told the device what hardware services it needed to access. For detecting and interacting with Bluetooth hardware, there are four services to add to the application AndroidManifest.xml:

  • Bluetooth
  • Bluetooth Admin
  • Access Coarse Location
  • Access Fine Location

When the application loads on the Android device for the first time, the user will be challenged to allow the application permission to use these hardware services.

I’ve pasted my AndroidManifest.xml file below – yours will look slightly different, but the important part is the four uses-permission elements near the bottom.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          android:versionCode="1"
          android:versionName="1.0"
          package="Bluetooth_Device_Scanner.Bluetooth_Device_Scanner">
  <uses-sdk android:minSdkVersion="23" android:targetSdkVersion="27" />
  <application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
  </application>
  <uses-permission android:name="android.permission.BLUETOOTH" />
  <uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
  <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>


List the Bluetooth devices which the Android device has already paired with

This part is very straightforward – remember that the code below will list to the console only the Bluetooth devices which have already been detected and paired with the Android device. It will not list other devices which haven’t been paired yet (I write about this later in the article).

Obviously you’d probably want to write the output to a UI component rather than write to the console, but for this article I wanted to cut this down to only the Bluetooth interactions and not focus on UI interactions.

if (BluetoothAdapter.DefaultAdapter != null && BluetoothAdapter.DefaultAdapter.IsEnabled)
{
    foreach (var pairedDevice in BluetoothAdapter.DefaultAdapter.BondedDevices)
    {
        Console.WriteLine(
            $"Found device with name: {pairedDevice.Name} and MAC address: {pairedDevice.Address}");
    }
}

There’s not much more to say about this – I can put this pretty much anywhere in the C# code and it’ll just work as expected.

List new Bluetooth devices by creating a BluetoothDeviceReceiver class that extends BroadcastReceiver

Next I wanted to list the Bluetooth devices that haven’t been paired with the Android device. I can do this by creating a receiver class, which extends the ‘BroadcastReceiver’ base class, and overrides the ‘OnReceive’ method – I’ve included the code for my class below.

using System;
using Android.Bluetooth;
using Android.Content;
 
namespace Bluetooth_Device_Scanner
{
    public class BluetoothDeviceReceiver : BroadcastReceiver
    {
        public override void OnReceive(Context context, Intent intent)
        {
            var action = intent.Action;
            
            if (action != BluetoothDevice.ActionFound)
            {
                return;
            }
 
            // Get the device
            var device = (BluetoothDevice)intent.GetParcelableExtra(BluetoothDevice.ExtraDevice);
 
            if (device.BondState != Bond.Bonded)
            {
                Console.WriteLine($"Found device with name: {device.Name} and MAC address: {device.Address}");
            }
        }
    }
}

This receiver class is registered with the application and told to activate when the Android device detects specific events – such as finding a new Bluetooth device. Xamarin.Android does this through something called an ‘Intent’. The code below shows how to register the receiver to trigger when a Bluetooth device is detected.

// Register for broadcasts when a device is discovered
_receiver = new BluetoothDeviceReceiver();
RegisterReceiver(_receiver, new IntentFilter(BluetoothDevice.ActionFound));

When the Android device finds a new Bluetooth device and calls the OnReceive method, the class checks that the event is definitely the right one (i.e. BluetoothDevice.ActionFound).

Then it checks that the devices are not already paired (i.e. ‘Bonded’), and again my class just writes some details to the console about the Bluetooth device that it’s found.

But we’re not quite done yet – there’s one more very important piece of code which is necessary for modern versions of Android.

Finally – check permissions are applied at runtime

This is the bit that is sometimes missed in other tutorials, and that’s possibly because this is only needed for more recent versions of Android, so older tutorials wouldn’t have needed this step.

Basically, even though the Access Coarse and Fine Location permissions are already specified in the AndroidManifest.xml file, if you’re targeting version 23 or later of the Android SDK you also need to check that the permissions have been granted at runtime. If they haven’t, you need to add code to prompt the user to grant these permissions.

There’s lots more about this topic on the Xamarin blog here.

I’ve pasted my MainActivity class below. This class:

  • Checks permissions,
  • Prompts the user for any permissions that are missing,
  • Registers the receiver to trigger when Bluetooth devices are detected, and
  • Starts scanning for Bluetooth devices.
using Android;
using Android.App;
using Android.Bluetooth;
using Android.Content;
using Android.Content.PM;
using Android.OS;
using Android.Support.V4.App;
using Android.Support.V4.Content;
 
namespace Bluetooth_Device_Scanner
{
    [Activity(Label = "Bluetooth Device Scanner", MainLauncher = true)]
    public class MainActivity : Activity
    {
        private BluetoothDeviceReceiver _receiver;
 
        protected override void OnCreate(Bundle savedInstanceState)
        {
            base.OnCreate(savedInstanceState);
 
            SetContentView(Resource.Layout.activity_main);
 
            const int locationPermissionsRequestCode = 1000;
 
            var locationPermissions = new[]
            {
                Manifest.Permission.AccessCoarseLocation,
                Manifest.Permission.AccessFineLocation
            };
 
            // check if the app has permission to access coarse location
            var coarseLocationPermissionGranted =
                ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation);

            // check if the app has permission to access fine location
            var fineLocationPermissionGranted =
                ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation);
 
            // if either is denied permission, request permission from the user
            if (coarseLocationPermissionGranted == Permission.Denied ||
                fineLocationPermissionGranted == Permission.Denied)
            {
                ActivityCompat.RequestPermissions(this, locationPermissions, locationPermissionsRequestCode);
            }
 
            // Register for broadcasts when a device is discovered
            _receiver = new BluetoothDeviceReceiver();
 
            RegisterReceiver(_receiver, new IntentFilter(BluetoothDevice.ActionFound));
 
            // start scanning for nearby Bluetooth devices
            BluetoothAdapter.DefaultAdapter.StartDiscovery();
        }
    }
}

Now the application will call the BluetoothDeviceReceiver class’s OnReceive method when it detects Bluetooth hardware.

Wrapping up

Hopefully this is useful to anyone writing a Xamarin.Android application that interacts with Bluetooth devices – I struggled with this for a while and wasn’t able to find an article which detailed all the pieces of the puzzle:

  • Update the manifest with the 4 required application permissions
  • Create a class that extends BroadcastReceiver,
  • Check at runtime that the location permissions have been granted and prompt the user if they haven’t, and
  • Register the receiver class and start discovery.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Azure, Cloud architecture, Security

Using the Azure Key Vault to keep secrets out of your web app’s source code

Ahead of the Global Azure Bootcamp, I’ve been looking at how I could allow a distributed team to develop and deploy a web application which accesses an Azure SQL Server instance in a secure way. There are a few different ways that I could share credentials to access my Azure SQL database:

  • Environment variables – this keeps secrets (like passwords) out of the code and mitigates the risk they’d be committed to source code. But Environment Variables are stored in plain text, so if the host is compromised, those secrets are lost.
  • .NET Core Secret Manager tool – there is a NuGet package which allows the user to keep application secrets (like a password) in a JSON file stored in the user profile directory – again, this mitigates the risk that secrets would be committed to source code, but I’d still have to share the secret with the team, and it would still be stored in plain text.

Neither of these options is ideal for me – I would rather grant access to my Azure SQL database by role, and not share passwords with developers which then need to be written down somewhere, either in JSON or in my automated deployment scripts. And even though the two options above mitigate the risk that passwords are committed to source code, they don’t eliminate the risk.

So I was pretty excited to read about Azure Key Vault (AKV) – a way to securely store secrets in the cloud and avoid any risk of secrets being committed to source code.

This page from Microsoft presents a few different user stories and how AKV meets these needs. (There’s more information on Stack Overflow here.)

But after reading the document here I was a bit surprised that the implementation described still used the Secret Manager tool – it seemed like we were just swapping storing secrets in one place for storing them in another. I searched around to find how this could be done without the Secret Manager tool, and in numerous blog posts and videos I saw developers setting up a secret in AKV, but then copying a “client secret” from Azure into their code – I thought this really defeated the purpose of having a vault for secrets.

Fortunately I’ve found what I need to do to use AKV with my .NET Core web application and not have to add any secrets to code – I secure my application with a Managed Service Identity. I’ve described how to do this below, with the C# code I needed to use the feature.

It looks like there are a lot of steps, but they’re all very simple – this is a lot less complicated than I thought it would be!

How to keep the secrets out of your source code.

  • First create a vault
  • Add a secret to your vault
  • Secure your app service using Managed Service Identity
  • Access the secret from your source code with a KeyVaultClient

I’ll cover each of these in turn, with code samples at the end to show how to access the AKV.

First create a vault

Open the Azure portal and log in – click on the “All services” menu item on the left hand side, and search for “key vault” – this should filter the options so you have a screen like the one below.

[Image: the Azure portal “All services” menu filtered to show Key Vaults]

Once you’ve got the Key Vaults option, click on it to see a screen like the one below which will list the Key Vaults in your subscription. To create a new vault, click on the “Add” button, highlighted in the document below.

[Image: the list of Key Vaults, with the “Add” button highlighted]

This will open another “blade” (which I just see as jargon for a floating window) in the portal where you can enter information about your new vault.

As you can see in the image below, I’ve called my vault “MyWebsiteSecret”, and I’ve created a new resource group for it called “Development_Secret”. I’ve chosen the location to be “UK West”, and by default my user has been added as the first principal who has permission to access this.

akv3

I clicked on the Create button at the bottom of the screen, and the portal presents a toast at the top right to say my vault is in the process of being created.

akv4

Eventually this changes when the deployment has succeeded.

akv6

So the Azure portal screen now shows the list page again, and my new vault is on this page.

akv5

Add a secret to the vault

Now the vault is created, we can create a new secret in it. Click on the vault created in the previous step to see the details for this vault (shown below).

akv7

Now click on the “Secrets” menu item to open a blade showing the secrets in this vault. Obviously, as I’ve just created it, there are no secrets yet. We can create one by clicking on the “Generate/Import” button, highlighted in the image below.

akv8

After clicking on the “Generate/Import” button, a new blade opens where you can enter details of your secret. I chose a name of “TheSecret”, entered a secret value which is masked, and entered a bit of text for the Content Type to describe the type of secret.

akv9

Once I click on “Create” at the bottom of the blade, the site returns me to the list of secrets in this vault – but this time, you can see my secret in the list, as shown below.

akv10
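
As an aside, both of these portal steps can be scripted with the Azure CLI – here’s a minimal sketch, assuming the az CLI is installed, you’re signed in, and the Development_Secret resource group already exists (the names match the ones I used above):

az keyvault create --name MyWebsiteSecret --resource-group Development_Secret --location ukwest

az keyvault secret set --vault-name MyWebsiteSecret --name TheSecret --value "<your secret value>"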

Secure the app service using Managed Service Identity

I’ve deployed my .NET Core application to Azure previously, and I won’t go into lots of detail about how to deploy a .NET Core application here since it’s covered in a million other blog posts and videos. Basically, I created a new App Service through the Azure portal and linked it to a .NET Core application on my GitHub profile, so when I push code to that application on GitHub, Azure automatically builds and deploys it.

Obviously this isn’t a full deployment pipeline – but it works for this simple application.

But I do want to show how to create a Managed Service Identity for this application – as shown in the image below, I’ve searched for my App Service on Azure.

akv12

I selected my app service to open a blade with options for this service, and selected “Managed Service Identity”, as shown below. By default it’s off – I’ve drawn an arrow below beside the button I pressed to turn it on for the app service, and after that I clicked on Save to persist my changes.

akv13

Once it was saved, I needed to go back to the key vault I created earlier and select “Access Policies”, as shown below. As I mentioned earlier, my name is in there as having permission by default, but I want my application to have permission too – so I clicked on the “Add new” option, which I’ve highlighted with a red arrow below.

akv14

The blade below opens up – for the principal, I selected my app service (called “MyAppServiceForTestingVaults”). By default nothing is selected, so you just need to click on the option to open another blade where you can search for your app service – it’ll only be available if you’ve correctly configured the Managed Service Identity as described above.

Also, I selected two “Secret permissions” from the dropdown – Get and List.

akv15

Once I click OK, I can see that my application is in the list of app services which have access to the secret I created earlier.

akv16
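
Again, if you prefer scripting, something like the Azure CLI sketch below should achieve the same result – I’m assuming the app service lives in the Development_Secret resource group (adjust to match yours), and the <principalId> placeholder is the object id returned by the first command:

az webapp identity assign --name MyAppServiceForTestingVaults --resource-group Development_Secret

az keyvault set-policy --name MyWebsiteSecret --object-id <principalId> --secret-permissions get list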

Add code to my .NET application to access these secrets

I use the Azure Services Authentication Extension so that, when I’m running the application locally, the code below can authenticate with my Visual Studio account rather than needing any credentials in the project.

I’m going to choose a really simple example – modifying the Index action of a HomeController class in the default .NET Core MVC website. I also need to add a NuGet package to my project:

Install-Package Microsoft.Azure.Services.AppAuthentication -Version 1.1.0-preview
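
The KeyVaultClient class used in the code below lives in a separate package – if it’s not already referenced by your project, I believe this is the package to add (worth verifying against your project’s dependencies):

Install-Package Microsoft.Azure.KeyVault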

The code below allows me to authenticate myself to my Azure instance, and get the secret from my vault.

You can see there’s a URL specified in the action below – it’s in the format of:

https://<<name of the vault>>.vault.azure.net/secrets/<<name of the secret>>

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        // AzureServiceTokenProvider uses the app's Managed Service Identity in Azure
        // (or my own credentials when running locally) - so no secrets live in the code.
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

        // The URL follows the format described above: https://<vault>.vault.azure.net/secrets/<secret>
        var secret = await keyVaultClient.GetSecretAsync("https://mywebsitesecret.vault.azure.net/secrets/TheSecret").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}

Now I can just modify the Index.cshtml view to show the secret (as simple as adding @ViewBag.Secret into the cshtml). When I run the project locally, I can see that my application has been able to access the vault and read my secret (highlighted in the image below) without any client id or client secret information in my code – this works because my machine recognises that I’m authenticated to access my own Azure instance.
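
For completeness, the view change is just a one-line addition to Views/Home/Index.cshtml – something like:

<p>The secret from Azure Key Vault is: @ViewBag.Secret</p>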

I can also deploy this code to my Azure App Service and I’ll get the same results, because the application’s Managed Service Identity ensures that my application in Azure has permission to access the secret.

secret in image

Summing up

This was a really simple example, just to illustrate how to allow developers to access AKV secrets without having to add secret information to source code. Obviously, if a developer is determined to compromise security, they could still read secrets and disseminate them another way – so we’d have to tighten up security for a real-world application. For example, we could store different secrets in different resource groups for each environment as we promote our application from Dev to QA/Staging and finally to Production.


.net, Cake, Continuous Integration, Raspberry Pi 3, UWP

Deploy a UWP application to a Windows 10 device from the command line with Cake

I’ve wanted to improve my continuous integration process for building, testing and deploying UWP applications for a while. For these UWP apps, I’ve been tied to using VS2017 for build and deploy operations – and VS2017 is great, but I’ve felt restricted by the ‘point and click’ nature of these operations.

Running automated tests for any .NET project is well documented, but until relatively recently I’ve not had a really good way to use a command line to:

  • build my UWP project and solution,
  • run tests for the solution,
  • build an .appxbundle file if the tests pass, and
  • deploy the appxbundle to my Windows 10 device.

Trying to find out what’s going on under the hood is the kind of challenge that’s catnip for me, and this is my chance to share what I’ve learned with the community.

This is part of a series of posts about building and deploying different types of C# applications to different operating systems.

If you follow this blog, you’ll probably know I normally deploy apps to my Raspberry Pi 3, but the principles in here could be applied to other Windows 10 devices, such as an Xbox or Windows Phone.

Step 1: Create the demo UWP and test projects

I’ll keep the description of this bit quick – I’ll just use the UWP template in Visual Studio 2017 – it’s only a blank white screen but that’s ok for this demonstration.

screenshot.1500076564

I’ve also created an empty unit test project – again the function isn’t important for this demonstration, we just need a project with a runnable unit test.

screenshot.1500076646

I’ve written a simple dummy ‘test’, shown below – this is just created for the purposes of demonstrating how Cake can run a Unit Test project written using MSTest:

using Microsoft.VisualStudio.TestTools.UnitTesting;
 
namespace UnitTestProject2
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            Assert.IsTrue(true);
        }
    }
}

Step 2: Let’s build our project and run the tests using Cake

Open a PowerShell prompt (I use the package manager console in VS2017) and navigate to the UWP project folder. Now get the Cake bootstrapper script and example Cake build file using the commands below:

Invoke-WebRequest http://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

Invoke-WebRequest https://raw.githubusercontent.com/cake-build/example/master/build.cake -OutFile build.cake

I edited the build.cake file to have the text below – this script cleans the binaries, restores the NuGet packages for the projects, builds them and runs the MSTests we created.

#tool nuget:?package=NUnit.ConsoleRunner&version=3.4.0
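// Note: the NUnit tool directive above is left over from the example
// build.cake - this script actually runs its tests with MSTest below.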
//////////////////////////////////////////////////////////////////////
// ARGUMENTS
//////////////////////////////////////////////////////////////////////

var target = Argument("target", "Default");
var configuration = Argument("configuration", "Release");

//////////////////////////////////////////////////////////////////////
// PREPARATION
//////////////////////////////////////////////////////////////////////

// Define directories.
var buildDir = Directory("./App3/bin") + Directory(configuration);

//////////////////////////////////////////////////////////////////////
// TASKS
//////////////////////////////////////////////////////////////////////

Task("Clean")
    .Does(() =>
{
    CleanDirectory(buildDir);
});

Task("Restore-NuGet-Packages")
    .IsDependentOn("Clean")
    .Does(() =>
{
    NuGetRestore("../App3.sln");
});

Task("Build")
    .IsDependentOn("Restore-NuGet-Packages")
    .Does(() =>
{
    if(IsRunningOnWindows())
    {
      // Use MSBuild
      MSBuild("../App3.sln", settings =>
        settings.SetConfiguration(configuration));
    }
    else
    {
      // Use XBuild
      XBuild("../App3.sln", settings =>
        settings.SetConfiguration(configuration));
    }
});

Task("Run-Unit-Tests")
    .IsDependentOn("Build")
    .Does(() =>
{
    MSTest("../**/bin/" + configuration + "/UnitTestProject2.dll");
});

//////////////////////////////////////////////////////////////////////
// TASK TARGETS
//////////////////////////////////////////////////////////////////////

Task("Default")
    .IsDependentOn("Run-Unit-Tests");

//////////////////////////////////////////////////////////////////////
// EXECUTION
//////////////////////////////////////////////////////////////////////

RunTarget(target);
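
With the script saved, I can kick off the build from the same PowerShell prompt using the bootstrapper – the -Target argument is optional and defaults to “Default”:

.\build.ps1 -Target Default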

Cake’s built-in benchmarking shows the order in which the tasks are executed:

Task                   Duration
--------------------------------------------------
Clean                  00:00:00.0124995 
Restore-NuGet-Packages 00:00:03.5300892 
Build                  00:00:00.8472346 
Run-Unit-Tests         00:00:01.4200992 
Default                00:00:00.0016743 
--------------------------------------------------
Total:                 00:00:05.8115968

And obviously if any of these steps had failed (for example if a test failed), execution would stop at this point.

Step 3: Building an AppxBundle in Cake

If I want to build an AppxBundle for a UWP project from the command line, I’d run the command below:

MSBuild ..\App3\App3.csproj /p:AppxBundle=Always /p:AppxBundlePlatforms="x86|arm" /Verbosity:minimal

These four arguments tell MSBuild:

  • The location of the csproj file that I want to target
  • I want to build the AppxBundle
  • I want to target x86 and ARM platforms (ARM doesn’t work on its own)
  • And that I want to minimise the verbosity of the output logs.

I could use StartProcess to get Cake to run MSBuild in a task, but Cake already has methods for MSBuild (and many of its parameters) baked in. For those parameters which Cake doesn’t know about, it’s very easy to use the WithProperty fluent method to add the argument’s parameter and value. The code below shows how I can implement the command to build the AppxBundle in Cake’s C# syntax.

var applicationProjectFile = @"../App3/App3.csproj";
 
// ...

MSBuild(applicationProjectFile, new MSBuildSettings
    {
        Verbosity = Verbosity.Minimal
    }
    .WithProperty("AppxBundle", "Always")
    .WithProperty("AppxBundlePlatforms", "x86|arm")
);

After this code runs in a task, an AppxBundle is generated in a folder in the project with the path:

AppPackages\App3_1.0.0.0_Debug_Test\App3_1.0.0.0_x86_arm_Debug.appxbundle

The path and file name aren’t massively readable, and are also likely to change, so I wrote a short method to search the project directories and return the path of the first AppxBundle found.

private string FindFirstAppxBundlePath()
{
    // Search every directory below the solution root for *.appxbundle files.
    var files = System.IO.Directory.GetFiles(@"..\", @"*.appxbundle", System.IO.SearchOption.AllDirectories);

    if (files.Length > 0)
    {
        return files[0];
    }
    else
    {
        throw new System.Exception("No appxbundle found");
    }
}
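
To tie these pieces together in the script, a Build-Appxbundle task might look something like the sketch below – the task name matches the dependency used by the deployment task in the next step, appxBundlePath is assumed to be a script-level variable, and I’ve made the task depend on Run-Unit-Tests so the tests have to pass first:

var appxBundlePath = "";

Task("Build-Appxbundle")
    .IsDependentOn("Run-Unit-Tests")
    .Does(() =>
{
    // Build the .appxbundle for x86 and ARM, as described above.
    MSBuild(applicationProjectFile, new MSBuildSettings
        {
            Verbosity = Verbosity.Minimal
        }
        .WithProperty("AppxBundle", "Always")
        .WithProperty("AppxBundlePlatforms", "x86|arm")
    );

    // Record where the generated bundle ended up for the deployment task.
    appxBundlePath = FindFirstAppxBundlePath();
});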

Now that I have the path to the AppxBundle, I’m ready to deploy it to my Windows device.

Step 4: Deploying the AppxBundle

Microsoft have provided a command line tool in the Windows 10 SDK for deploying AppxBundles – this tool is called WinAppDeployCmd. The syntax used to deploy an AppxBundle is:

WinAppDeployCmd install -file "\MyApp.appxbundle" -ip 192.168.0.1

It’s very straightforward to use a command line tool with Cake – I’ve blogged about this before, including how to use StartProcess to call an executable that Cake’s context is aware of.

But what about command line tools which Cake doesn’t know about? It turns out that it’s easy to register tools within Cake’s context – you just need to know the path to the tool, and the code below shows how to add the UWP app deployment tool to the context:

Setup(context => {
    context.Tools.RegisterFile(@"C:\Program Files (x86)\Windows Kits\10\bin\x86\WinAppDeployCmd.exe");
});

If you don’t have this tool on your development machine or CI box, it might be because you don’t have the Windows 10 SDK installed.

So with this tool in Cake’s context, it’s very simple to create a dedicated task and pull the details of this tool out of context for use with StartProcess, as shown below.

Task("Deploy-Appxbundle")
	.IsDependentOn("Build-Appxbundle")
	.Does(() =>
{
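    // appxBundlePath and raspberryPiIpAddress are assumed to be variables
    // defined earlier in the build.cake script.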
    FilePath deployTool = Context.Tools.Resolve("WinAppDeployCmd.exe");
 
    Information(appxBundlePath);
 
    var processSuccessCode = StartProcess(deployTool, new ProcessSettings {
        Arguments = new ProcessArgumentBuilder()
            .Append(@"install")
            .Append(@"-file")
            .Append(appxBundlePath)
            .Append(@"-ip")
            .Append(raspberryPiIpAddress)
        });
 
    if (processSuccessCode != 0)
    {
        throw new Exception("Deploy-Appxbundle: UWP application was not successfully deployed");
    }
});
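
And now we can run our Cake script to automatically build and deploy the UWP application – targeting the deployment task from the bootstrapper looks something like this, assuming raspberryPiIpAddress is set to your device’s IP address:

.\build.ps1 -Target Deploy-Appxbundle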

I’ve pasted the benchmarking statistics from the full run below.

Task                     Duration
--------------------------------------------------
Clean                    00:00:00.0821960
Restore-NuGet-Packages   00:00:09.7173174
Build                    00:00:01.5771689
Run-Unit-Tests           00:00:03.2204312
Build-Appxbundle         00:01:09.6506712
Deploy-Appxbundle        00:02:13.8439852
--------------------------------------------------
Total:                   00:03:38.0917699

And to prove it was actually deployed, here’s a screenshot of the list of apps on my Raspberry Pi (from the device portal) before running the script…

screenshot.1500907026

…and here’s one from after – you can see the UWP app was successfully deployed.

screenshot.1500907690

I’ve uploaded my project’s build.cake file into a public gist – you can copy this and change it to suit your particular project (I haven’t uploaded a full UWP project because sometimes people have issues with the *.pfx file).

Wrapping up

I’ve found it’s possible to build and deploy a UWP app using the command line, and beyond that it’s possible to integrate the build and deployment process into a Cake script. So even though I still create my application in VS2017 – and I’ll probably keep using VS2017 – it means that I have a much more structured and automated integration process.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!