.net, C# tip

Instantiating a C# object from a string using Activator.CreateInstance in .NET

Recently I hit an interesting programming challenge – I needed to write a library which can instantiate and use a C# class object from a second C# assembly.

Sounds simple enough…but the catch is that I’m only given some string information about the class at runtime, such as the class name, its namespace, and what assembly it belongs to.

Fortunately this is possible using the Activator.CreateInstance method in C#. First I need to format the namespace, class name and assembly name in a special way – as an assembly qualified name.

Let’s look at an example – the second assembly is called “MyTestProject” and the object I need to instantiate from my library looks like the one below.

namespace SampleProject.Domain
{
    public class MyNewTestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "My name is MyNewTestClass";
        }
    }
}

This leads to the assembly qualified name:

"SampleProject.Domain.MyNewTestClass, MyTestProject"

Note that the format here is along the lines of:

"{namespace}.{class name}, "{assembly name}"

Another way of finding this assembly qualified name is to run the code below:

Console.WriteLine(typeof(MyNewTestClass).AssemblyQualifiedName);

This will output something like:

SampleProject.Domain.MyNewTestClass, MyTestProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

There’s extra information about Version, Culture and PublicKeyToken – you might need this if you’re targeting different versions of a library, but in my simple example I don’t need it so I won’t elaborate further here.

Now I have the qualified name of the class, I can instantiate the object in my library using the Activator.CreateInstance, as shown below:

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType);
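
One thing to watch out for – Type.GetType returns null if it can’t resolve the type from that string (for example, if the assembly isn’t loaded or can’t be found), and Activator.CreateInstance will then throw an ArgumentNullException. A minimal defensive sketch of the same code:

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);
 
// Type.GetType returns null when the type or assembly can't be resolved
if (objectType == null)
{
    throw new InvalidOperationException($"Could not resolve type '{objectToInstantiate}'.");
}
 
var instantiatedObject = Activator.CreateInstance(objectType);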

But it’s a bit difficult to do anything useful with the instantiated object – at this point the code only knows it has the type of Object, and I suppose we could call the ToString() method, but for a new class that’s of limited use. How can we access properties and methods in the instantiated class?

One way is to use the dynamic keyword to manipulate the new object

We could use the dynamic keyword with the instantiated object, and set/get methods dynamically, like in the code below:

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

dynamic instantiatedObject = Activator.CreateInstance(objectType);
 
// set a property value
instantiatedObject.Name = "Test Name";
 
// get a property value
string name = instantiatedObject.Name;
 
// call a method - this outputs "My name is MyNewTestClass"
Console.Write(instantiatedObject.DoSpecialThing());

Another way to manipulate the instantiated object is through using a shared interface

We could make the original object implement an interface shared across all projects. If I add the interface below to my project…

namespace SampleProject.Domain
{
    public interface ITestClass
    {
        int Id { get; set; }
        string Name { get; set; }
        string DoSpecialThing();
    }
}

…then our original object could implement this interface:

namespace SampleProject.Domain
{
    public class MyNewTestClass : ITestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "My name is MyNewTestClass";
        }
    }
}

So if we happen to know at design time that the object we want to instantiate implements the ITestClass interface, we can access methods exposed by that interface – there’s no need to use the dynamic keyword now.

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType) as ITestClass;
 
// set a property value
instantiatedObject.Name = "Test Name";
 
// get a property value
var name = instantiatedObject.Name;
 
// call a method - this outputs "My name is MyNewTestClass"
Console.Write(instantiatedObject.DoSpecialThing());

And of course if I have another domain object which implements the same interface but has different behaviour, like the one below…

namespace SampleProject.Domain
{
    public class DifferentTestClass : ITestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "This is a different special thing";
        }
    }
}

…then I can use similar code to instantiate and manipulate the object – I just need to use the different object’s assembly qualified name:

const string objectToInstantiate = "SampleProject.Domain.DifferentTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType) as ITestClass;
 
// set a property value
instantiatedObject.Name = "Other Test Name";
 
// get a property value
string name = instantiatedObject.Name;
 
// call a method - but this now outputs "This is a different special thing"
Console.Write(instantiatedObject.DoSpecialThing());

Hopefully this helps anyone else facing a similar challenge – it’s worth bearing in mind that reflection is very powerful, but also can be a bit slower than other techniques.
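
On that performance point – if the same type names are being resolved over and over again, one simple mitigation (my own suggestion, not something from the example above) is to cache the resolved Type objects so that the reflection lookup only happens once per assembly qualified name. A rough sketch:

using System;
using System.Collections.Concurrent;
 
public static class CachedObjectFactory
{
    // cache of assembly qualified name -> resolved Type, to avoid repeated reflection lookups
    private static readonly ConcurrentDictionary<string, Type> TypeCache =
        new ConcurrentDictionary<string, Type>();
 
    public static object CreateInstance(string assemblyQualifiedName)
    {
        // only pay the Type.GetType cost the first time each name is seen
        var type = TypeCache.GetOrAdd(assemblyQualifiedName, name => Type.GetType(name, throwOnError: true));
 
        return Activator.CreateInstance(type);
    }
}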

.net, C# tip

An extension method for .NET enumerations that uses the DescriptionAttribute

Sometimes little improvements make a big difference to my day-to-day programming. Like when I discovered the String.IsNullOrEmpty method (I think it came into the .NET Framework way back in v2.0) – when I was testing for non-empty strings, I was able to use that super useful method without having to remember to write two comparisons (like myString != null && myString != string.Empty). I’ve probably used that bit of syntactical candy in every project I’ve worked on since then.

I work with enumerations all the time too. And a lot of the time, I need a text representation of the enumeration values which is a bit more complex than the value itself. This is where I find the DescriptionAttribute so useful – it’s an attribute supplied natively by .NET that gives me a place to add that text representation.

I’ve provided a simple example of this kind of enumeration with descriptions for each item below – maybe for a production application I’d localise the text, but you get the idea.

public enum Priority
{
    [Description("Highest Priority")]
    Top,
    [Description("MIiddle Priority")]
    Medium,
    [Description("Lowest Priority")]
    Low
}

But I’ve always felt I’d love to have native access to a method that would give me the description. Something like:

Priority.Medium.ToDescription();

Obviously it’s really easy to write an extension method like this, but every time I need it on a new project for a new client, I have to google how to use types and reflection to access the attribute, and then write that extension method. Then I think about the other extension methods that I might like to write (what about ToDisplayName(), that might be handy…), and how to make it generic so I can extend it later, and what about error handling…

…and anyway, this process always includes the thought, “I’ve lost count of how many times I’ve done this for my .NET enumerations, why don’t I write this down somewhere so I can re-use it?”

So I’ve written it down below.

using System;
using System.ComponentModel;
using System.Linq;
 
public static class EnumerationExtensions
{
    public static string ToDescription(this Enum enumeration)
    {
        var attribute = GetText<DescriptionAttribute>(enumeration);
 
        return attribute.Description;
    }
 
    public static T GetText<T>(Enum enumeration) where T : Attribute
    {
        var type = enumeration.GetType();
        
        var memberInfo = type.GetMember(enumeration.ToString());
 
        if (!memberInfo.Any())
            throw new ArgumentException($"No public members for the argument '{enumeration}'.");
 
        var attributes = memberInfo[0].GetCustomAttributes(typeof(T), false);
 
        if (attributes == null || attributes.Length != 1)
            throw new ArgumentException($"Can't find an attribute matching '{typeof(T).Name}' for the argument '{enumeration}'");
 
        return attributes.Single() as T;
    }
}
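
A quick usage sketch, using the Priority enumeration from earlier:

// writes "Highest Priority" to the console
Console.WriteLine(Priority.Top.ToDescription());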

I’ve split it into two methods, so if I want to create a ‘ToDisplayName()’ extension later, it’s really easy for me to do that. I might include it in a NuGet package so it’s even easier for me to re-use later. I’ll update this post if I do.
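
To illustrate how the generic GetText<T> helper makes that kind of extension easy to add, here’s one possible shape for a ToDisplayName() method. I’m assuming it reads the DisplayAttribute from System.ComponentModel.DataAnnotations (which, unlike DisplayNameAttribute, can be applied to enumeration members) – the choice of attribute is mine, not something from the original class.

using System;
using System.ComponentModel.DataAnnotations;
 
public static class EnumerationDisplayExtensions
{
    // reads the Name from a [Display(Name = "...")] attribute on the enumeration member
    public static string ToDisplayName(this Enum enumeration)
    {
        var attribute = EnumerationExtensions.GetText<DisplayAttribute>(enumeration);
 
        return attribute.Name;
    }
}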

Wrapping up

This was a quick blog post covering a simple area, about something that bugs me – having to rewrite a simple function each time I start a new project. I can imagine a bunch of reasons why this isn’t a native extension in .NET Framework/Core – and my implementation is an opinionated extension. I’m sure there’d be a lot of disagreement about what good practice is. Anyway, this class works for me – I hope it’s useful to you also.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, C# tip

Correctly reading encoded text with the StreamReader in .NET

So here’s a problem…

I’ve got a list of my team members in a text file which I need to parse and process in .NET. The file is pretty simple – it’s called MyTeamNames.txt and it contains the following names:

  • Adèle
  • José
  • Chloë
  • Auróra

I created the text file on a Windows 10 machine using Notepad, and saved the file with the default ANSI encoding.

ansi save

I’m going to read names from this text file using a .NET Framework StreamReader – there’s a simple and clear example on the docs.microsoft.com site describing how to do this. So I’ve written a spike of code to use a StreamReader – I’ve more or less copied directly from the link above – and it looks like this:

using System;
using System.IO;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main()
        {
            try
            {
                const string myTeamNamesFile = @"C:\Users\jeremy.lindsay\Desktop\MyTeamNames.txt";
 
                // Open the text file using a stream reader.
                using (var streamReader = new StreamReader(myTeamNamesFile))
                {
                    // Read the stream to a string, and write the string to the console.
                    var line = streamReader.ReadToEnd();
                    Console.WriteLine(line);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }
    }
}

But when I run the code, there’s a problem. I expected to see my team’s names written to the console – but instead all those names now have question marks scattered throughout them, as shown in the image below.

wrongnames

What’s gone wrong?

The StreamReader object and original text file need to have compatible encoding types

It’s pretty obvious that the question marks relate to the non-ASCII characters – each name on my list has either an acute accent, a grave accent, or an umlaut/diaeresis.

The problem in my .NET Framework console application is that my StreamReader is assuming that my file is encoded one way when it’s actually encoded in another way, and it doesn’t know what to do with some characters, so it uses a question mark as a default.

Can you detect the file’s encoding type with .NET?

Big thanks to Erich Brunner for pointing out a new bit of information to me about the default encoding type – I’ve updated this post to reflect his helpful steer.

It turns out detecting the file’s encoding type is quite a difficult thing to do in .NET. But as I mentioned earlier, I know I saved this file with the encoding type ANSI selected from the dropdown list in Notepad.

Interestingly, this doesn’t mean that I’ve saved the file as ANSI – this is a misnomer. From the MSDN glossary:

“The term “ANSI” as used to signify Windows code pages is a historical reference, but is nowadays a misnomer that continues to persist in the Windows community. The source of this comes from the fact that the Windows code page 1252 was originally based on an ANSI draft—which became International Organization for Standardization (ISO) Standard 8859-1. “ANSI applications” are usually a reference to non-Unicode or code page–based applications.”
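
As an aside, when a file does start with a byte order mark, the StreamReader can detect Unicode encodings automatically (byte order mark detection is on by default, and can be requested explicitly with the detectEncodingFromByteOrderMarks parameter). That won’t help with a BOM-less code page file like my “ANSI” one, but it can be a useful check – a minimal sketch, re-using the myTeamNamesFile constant from the earlier snippet:

using (var streamReader = new StreamReader(myTeamNamesFile, detectEncodingFromByteOrderMarks: true))
{
    // CurrentEncoding is only meaningful after the first read from the stream
    streamReader.Peek();
    Console.WriteLine(streamReader.CurrentEncoding.EncodingName);
}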

There are a couple of different options open to me here:

Change the file’s encoding type and save it with a specified encoding – e.g. UTF-8, or Unicode.

This is very straightforward – I’ve chosen to just select the UTF-8 option from the dropdown list in Notepad’s ‘Save As…’ dialog box.

save as utf-8

This time when I run the code above, the names are displayed correctly on the console, as shown below.

correct display

Alternatively, try using the StreamReader overload to specify the encoding type.

I can use an overload where I specify the encoding type, which comes from System.Text.Encoding.

var streamReader = new StreamReader(myTeamNamesFile, encodingType)

But what values do these encoding types resolve to? Well that depends on whether I use the .NET Framework or .NET Core. I’ve listed the values below, and notice that the Encoding.Default is different depending on whether you use the .NET Framework or .NET Core.

Note the values for “Encoding.Default” in particular, because this is a special case.

System.Encoding value     | .NET Framework encoding header name | .NET Core encoding header name
Encoding.Default          | Windows-1252 (on my machine)        | utf-8
Encoding.ASCII            | us-ascii                            | us-ascii
Encoding.BigEndianUnicode | utf-16BE                            | utf-16BE
Encoding.UTF32            | utf-32                              | utf-32
Encoding.UTF7             | utf-7                               | utf-7
Encoding.UTF8             | utf-8                               | utf-8
Encoding.Unicode          | utf-16                              | utf-16

So let’s say I use Encoding.Default in my .NET Framework console application, as shown in the code snippet below.

var streamReader = new StreamReader(myTeamNamesFile, Encoding.Default)

And now my names are correctly rendered in the Console, as shown in the image below. This makes sense – the text file with my team names was saved with “ANSI” encoding, which we know actually corresponds to Windows Code Page 1252. The default encoding on my own Windows machine turns out to also be the 1252 encoding (as noted in the table above), so I’m instructing my StreamReader to use the 1252 encoding when reading a file which has been encoded as “ANSI” (also known as 1252). They match up, and the text displays correctly.

correct display

Problem solved, right? Well, no, not really.

Microsoft actually do not recommend using Encoding.Default. From docs.microsoft.com:

“Different computers can use different encodings as the default, and the default encoding can change on a single computer. If you use the Default encoding to encode and decode data streamed between computers or retrieved at different times on the same computer, it may translate that data incorrectly. In addition, the encoding returned by the Default property uses best-fit fallback to map unsupported characters to characters supported by the code page. For these reasons, using the default encoding is not recommended.”

If I target .NET Core instead of .NET Framework in my console application – with exactly the same code – I’m back to displaying question marks in my console text.

wrongnames

So even though telling the StreamReader in my .NET Framework console application to use Encoding.Default seems to work, it’s a case of it only working on my machine – it might not work on someone else’s machine. It certainly doesn’t work in .NET Core.

So it seems to me that saving my original text file as UTF-8 or Unicode is a better option.
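
Once the file has been re-saved as UTF-8, I can also be explicit about the encoding in code rather than relying on any defaults – this is just the earlier snippet with Encoding.UTF8 passed to the StreamReader:

using (var streamReader = new StreamReader(myTeamNamesFile, Encoding.UTF8))
{
    var line = streamReader.ReadToEnd();
    Console.WriteLine(line);
}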

And as a final reason to save the text file to UTF-8 or Unicode, let’s say I add a new team member, called Łukasz. If I try to save my file with ANSI encoding type, I get this warning:

unicode warning

If I press on and save the file as ANSI, the text for “Łukasz” is changed to “Lukasz” (note the change in the first character). But if I save the file as UTF-8 or Unicode, the name stays the same, including the initial “Ł”.

Wrapping up

It’s pretty common to be asked to read and process text files with non-ASCII characters, and even though .NET provides some really helpful APIs to assist with this, compatibility issues can still occur. This is a complex area, with variations across the .NET Framework and .NET Core runtimes.

I think the best solution is to change the encoding type of the original document to be Unicode or UTF-8, rather than ANSI (more correctly, Windows Code Page 1252). Telling the StreamReader to use Encoding.Default also worked for me, but it might not work on someone else’s machine with different defaults, leading to incorrect translations.



.net, Azure, C# tip

Setting up relationships between work items on Azure DevOps boards, and using .NET to read these relationships

So here’s a question…

How do you set up relationships between work items, and display this relationship in Azure DevOps?

There’s a well-known relationship between epics, features, user stories and tasks/bugs in the Agile process, but on the ‘Work Items’ screen, Azure DevOps lists them without showing the relationship – like in the screen below.

workliist

Display relationships using the Backlogs view in Azure DevOps Boards

The thing is that the Work Items screen just lists all the work items – I prefer to view my work items in the Backlogs screen which visually represents the relationship between them.

But how do you set up relationships between work items?

Let’s work an example through from the start. I set up a new project in Azure DevOps – once I’ve created the project, I’m shown the Summary screen, which for me looks like the screenshot below.

project home

On the left hand navigation menu, I click on the ‘Boards’ item, and when this expands, I select the ‘Backlogs’ sub-item. This presents me with a screen where I’ve a few options, like the one at the top left – ‘New Work Item’.

backlog view

But when I click on this ‘New Work Item’ button, the pop-up only allows me to create a work item with type of ‘User Story’ (as shown below). This is not what I want – I want to create an Epic.

user story

To do this, I have to change some defaults in my Project Settings. At the bottom left of my screen, I click on the ‘Project settings’ button, and then select the ‘Team configuration’ sub-menu item which sits under the ‘Boards’ menu heading. This shows me a screen like the one below.

team configuration

There’s a section on this page which shows me the navigation levels available to me – by default, I don’t have the Epics checkbox ticked. So I can just tick the box as shown below to make this available. No need to click save anywhere – that setting is automatically saved back to the cloud.

backlogs with epic

Now if I go back to the ‘Backlogs’ menu item under ‘Boards’ in my project’s left-hand navigation menu, I need to select the dropdown list in the top right of the screen – my default setting here is ‘Stories’, but I can open the menu and now choose ‘Epics’ instead.

backlogs with epics dropdown

Now when I click on the ‘New Work Item’ button, I can create an Epic, and enter in the epic’s title, as shown below.

my epic title 2

And I’ve created my first epic in an Azure DevOps Board!

Ok, but what about nesting other items under that Epic?

There are a few different ways, but it’s straightforward (when you know how) – the way I like to do this is by selecting the ‘+’ button on the right hand side of my Epic. If you hover over this ‘+’ button, a tooltip appears that says ‘Add Feature’, and clicking on the button does exactly that.

add feature hover

A large dialog appears once you’ve clicked ‘+’ where you can add feature details – and note that in the bottom right of this dialog, there’s a ‘Related Work’ section, that shows the Epic we previously created as a parent.

new features

After clicking the blue ‘Save & Close’ button on the top right of the New Feature dialog, you’ll be taken back to the project board’s Backlog view, and you can see the feature that you just created below the epic we created previously, and it’s indented one place to the right to visually represent the parent-child relationship, as shown below.

add user story dropdown

And if you hover over the ‘+’ button to the left of the feature you just created, you’ll see the hint that this button now allows you to create a new user story. So if you click on ‘+’, you’ll have a similar experience to before, except the dialog that pops up is for a work item type of ‘User Story’. And you can see the relationship between this and the parent feature again by looking in the bottom right corner of the dialog, in the ‘Related Work’ section.

my new user story

And just to finish off this section, when you save that user story you’ll be taken to the backlog screen, and again see the user story sitting below its parent feature, indented one place to the right, as shown below. From this user story, you can click on the ‘+’ button on the left, and this time you’ve got a couple of options – either create a bug or a task with that user story as a parent.

show task and bug

I went ahead and created a task and a bug – the experience of creating them is identical to before, where a dialog pops up with the type you select, and any existing related work detailed in the bottom right of the dialog box. So the image below shows my 5 new work items (an epic, a feature, a story, a task and a bug), and it’s easy to see the relationship between them by how they’re indented relative to each other.

indented

What about getting these items in .NET – how do I find out which items they’re related to?

I’ve previously written about creating Azure DevOps work items using the .NET framework, and you can re-apply some of the same principles to read work items into .NET objects.

I created a .NET Framework console app and installed the required NuGet packages using:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

Then I used the code below to read back information about item 64 in my backlog – this is a user story which has a parent feature, and two children – a task and a bug.

So I expected the code below to tell me that there was a list of three relations in the workItem.Relations property.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // uh oh - reports there are zero relations!
        }
    }
}

But this code doesn’t show relationships – why isn’t it working?

It turns out that the GetWorkItemAsync method doesn’t return relations by default. Instead, the GetWorkItemAsync method has an overload where you can specify what extra information to return using a WorkItemExpand enumeration. In the code below I’ve chosen to return everything using:

expand: WorkItemExpand.All

But if I only wanted to return relations I could use:

expand: WorkItemExpand.Relations

The code below now correctly reports there are three items related to workitem 64.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId, expand: WorkItemExpand.All).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // now correctly reports there are 3 relations
        }
    }
}

The relations list now correctly reports the three related work items.
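
To see what those relations actually are, I can loop over the collection – each WorkItemRelation exposes the relationship type and the URL of the related work item. A small sketch that could replace the Console.WriteLine call in the code above:

foreach (var relation in workItem.Relations)
{
    // e.g. "System.LinkTypes.Hierarchy-Reverse" for the parent, "System.LinkTypes.Hierarchy-Forward" for children
    Console.WriteLine($"Relation type: {relation.Rel}, related work item url: {relation.Url}");
}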

Wrapping up

This post has been about how to create work items with a hierarchical relationship using the Azure DevOps web user interface, and view them using the Backlog view in Azure DevOps Boards. I’ve also written about how to read these items and the relationships between them using the .NET framework – I hope this helps!



.net, .net core, C# tip, Flurl

Comparing RestSharp and Flurl.Http while consuming a web service in .NET Core

Just before the holidays I was working on a .NET Core project that needed data available from some web services. I’ve done this a bunch of times previously, and always seem to spend a couple of hours writing code using the HttpClient object before remembering there are libraries out there that have done the heavy lifting for me.

So I thought I’d do a little write-up of a couple of popular library options that I’ve used – RestSharp and Flurl. I find that I learn quickest from reading example code, so I’ve written sample code showing how to use both of these libraries with a few different publicly available APIs.

I’ll look at three different services in this post:

  • api.postcodes.io – no authentication required, uses GET and POST verbs
  • api.nasa.gov – authentication via an API key passed in the query string
  • api.github.com – Basic Authentication required to access private repo information

And as an architect, I’m sometimes asked how to get started (and sometimes ‘why did you choose library X instead of library Y?’), so I’ve wrapped up with a comparison and which library I like best right now.

Reading data using RestSharp

This is a very mature and well documented open source project (released under the Apache 2.0 licence), with the code available on Github. You can install the nuget package in your project using package manager with the command:

Install-Package RestSharp

First – using the GET verb with RestSharp.

Using HTTP GET to return data from a web service

Using Postcodes.io

I’ve been working with mapping software recently – some of my data sources don’t have latitude and longitude for locations, and instead they only have a UK postcode. Fortunately I can use the free Postcodes.io RESTful web API to determine a latitude and longitude for each of the postcode values. I can either just send a postcode using a GET request to get the corresponding geocode (latitude and longitude) back, or I can use a POST request to send a list of postcodes and get a list of geocodes back, which speeds things up a bit with bulk processing.

Let’s start with a simple example – using the GET verb for a single postcode. I can request a geocode corresponding to a postcode from the Postcodes.io service through a browser with a URL like the one below:

https://api.postcodes.io/postcodes/IP1 3JR

This service doesn’t require any authentication, and the code below shows how to use RestSharp and C# to get data using a GET request.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes/IP1 3JR
var getRequest = new RestRequest("postcodes/{postcode}");
getRequest.AddUrlSegment("postcode""IP1 3JR");
 
// send the GET request and return an object which contains the API's JSON response
var singleGeocodeResponseContainer = client.Execute(getRequest);
 
// get the API's JSON response
var singleGeocodeResponse = singleGeocodeResponseContainer.Content;

The example above returns raw JSON content, which I can deserialise into a custom POCO, such as the one below.

public class GeocodeResponse
{
    public string Status { get; set; }
 
    public Result Result { get; set; }
}
 
public class Result
{
    public string Postcode { get; set; }
 
    public string Longitude { get; set; }
 
    public string Latitude { get; set; }
}

But I can do better than the code above – if I specify the GeocodeResponse type in the Execute method (as shown below), RestSharp uses the classes above and intelligently hydrates the POCO  from the raw JSON content returned:

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes/OX495NU
var getRequest = new RestRequest("postcodes/{postcode}");
getRequest.AddUrlSegment("postcode""OX495NU");
 
// send the GET request and return an object which contains a strongly typed response
var singleGeocodeResponseContainer = client.Execute<GeocodeResponse>(getRequest);
 
// get the strongly typed response
var singleGeocodeResponse = singleGeocodeResponseContainer.Data;

Of course, not all APIs work in the same way, so here are another couple of examples of how to return data from different publicly available APIs.

NASA Astronomy Picture of the Day

This NASA API is also freely available, but slightly different from the Postcodes.io API in that it requires an API subscription key. NASA requires that the key is passed as a query string parameter, and RestSharp facilitates this with the AddQueryParameter method (as shown below).

This method of securing a service isn’t that unusual – goodreads.com/api also uses this method.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.nasa.gov/");
 
// specify the resource, e.g. https://api.nasa.gov/planetary/apod
var getRequest = new RestRequest("planetary/apod");
 
// Add the authentication key which NASA expects to be passed as a parameter
// This gives https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY
getRequest.AddQueryParameter("api_key""DEMO_KEY");
 
// send the GET request and return an object which contains the API's JSON response
var pictureOfTheDayResponseContainer = client.Execute(getRequest);
 
// get the API's JSON response
var pictureOfTheDayJson  = pictureOfTheDayResponseContainer.Content;

Again, I could create a custom POCO corresponding to the JSON structure and populate an instance of this by passing the type with the Execute method.
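
For example, the picture of the day JSON includes fields like title, explanation and url, so a POCO along the lines of the one below could be hydrated by passing the type to Execute – I’ve only included a few of the fields here, and depending on the serialiser in use the property names may need to match the JSON field names exactly.

public class PictureOfTheDayResponse
{
    public string Title { get; set; }
 
    public string Explanation { get; set; }
 
    public string Url { get; set; }
}
 
// send the GET request and return an object which contains a strongly typed response
var pictureOfTheDayResponse = client.Execute<PictureOfTheDayResponse>(getRequest).Data;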

Github’s API

The Github API will return public data without any authentication, but if I provide Basic Authentication data it will also return extra information relevant to me about my profile, such as information about my private repositories.

RestSharp allows us to set an Authenticator property to specify the userid and password.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.github.com/");
 
// pass in user id and password 
client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", "[[my password]]");
 
// specify the resource that requires authentication
// e.g. https://api.github.com/users/jeremylindsayni
var getRequest = new RestRequest("users/jeremylindsayni");
 
// send the GET request and return an object which contains the API's JSON response
var response = client.Execute(getRequest);

Obviously you shouldn’t hard-code your password into your code – these are just examples of how to return data, not best practices. You might want to store your password in an environment variable, or you could do even better and use Azure Key Vault – I’ve written about how to do that here and here.
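
As a minimal illustration of the environment variable approach (the variable name GITHUB_PASSWORD here is just an example I’ve made up):

// read the password from an environment variable rather than hard-coding it
var gitHubPassword = Environment.GetEnvironmentVariable("GITHUB_PASSWORD");
 
client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", gitHubPassword);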

Using the POST verb to obtain data from a web service

The code in the previous example refers to GET requests  – a POST request is slightly more complex.

The api.postcodes.io service has a few different endpoints – the one I described earlier only finds geocode information for a single postcode – but I’m also able to post a JSON list of up to 100 postcodes, and get corresponding geocode information back as a JSON list. The JSON needs to be in the format below:

{
   "postcodes" : ["IP1 3JR", "M32 0JG"]
}

Normally I prefer to manipulate data in C# structures, so I can add my list of postcodes to the object below.

public class PostCodeCollection
{
    public List<string> postcodes { get; set; }
}

I’m able to create a POCO object with the data I want to post to the body of the POST request, and RestSharp will automatically convert it to JSON when I pass the object into the AddJsonBody method.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes
var postRequest = new RestRequest("postcodes", Method.POST, DataFormat.Json);
 
// instantiate and hydrate a POCO object with the list of postcodes we want geocode data for
var postcodes = new PostCodeCollection { postcodes = new List<string> { "IP1 3JR", "M32 0JG" } };
 
// add this POCO object to the request body, RestSharp automatically serialises it to JSON
postRequest.AddJsonBody(postcodes);
 
// send the POST request and return an object which contains JSON
var bulkGeocodeResponseContainer = client.Execute(postRequest);

One gotcha – RestSharp Serialization and Deserialization

One aspect of RestSharp that I don’t like is how the JSON serialisation and deserialisation works. RestSharp uses its own engine for processing JSON, but basically I prefer Json.NET for this. For example, if I use the default JSON processing engine in RestSharp, then my PostcodeCollection POCO needs to have property names which exactly match the JSON property names (including case sensitivity).

I’m used to working with Json.NET and decorating properties with attributes describing how to serialise into JSON, but this won’t work with RestSharp by default.

// THIS DOESN'T WORK WITH RESTSHARP UNLESS YOU ALSO USE **AND REGISTER** JSON.NET
public class PostCodeCollection
{
    [JsonProperty(PropertyName = "postcodes")]
    public List<string> Postcodes { get; set; }
}

Instead I need to override the default RestSharp serializer and instruct it to use Json.NET. The RestSharp maintainers have written about their reasons here and also here – and helped out by writing the code to show how to override the default RestSharp serializer. But personally I’d rather just use Json.NET the way I normally do, and not have to jump through an extra hoop to use it.

Reading Data using Flurl

Flurl is newer than RestSharp, but it’s still a reasonably mature and well documented open source project (released under the MIT licence). Again, the code is on Github.

Flurl is different from RestSharp in that it allows you to consume the web service by building a fluent chain of instructions.

You can install the nuget package in your project using package manager with the command:

Install-Package Flurl.Http

Using HTTP GET to return data from a web service

Let’s look at how to use the GET verb to read data from api.postcodes.io, api.nasa.gov and api.github.com.

First, using Flurl with api.postcodes.io

The code below searches for geocode data from the specified postcode, and returns the raw JSON response. There’s no need to instantiate a client, and I’ve written much less code than I wrote with RestSharp.

var singleGeocodeResponse = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .AppendPathSegment("IP1 3JR")
    .GetJsonAsync();
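
And if I’d rather work with a strongly typed result than raw JSON, Flurl can deserialise straight into a POCO – for example, re-using the GeocodeResponse class from the RestSharp section earlier:

var singleGeocodeResponse = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .AppendPathSegment("IP1 3JR")
    .GetJsonAsync<GeocodeResponse>();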

I also find using the POST method with postcodes.io easier with Flurl. Even though Flurl doesn’t have a built-in JSON serialiser, it’s easy for me to install the Json.NET package – this means I can now use a POCO like the one below…

public class PostCodeCollection
{
    [JsonProperty(PropertyName = "postcodes")]
    public List<string> Postcodes { get; set; }
}

… to fluently build up a POST request like the one below. I can also create my own custom POCO – GeocodeResponseCollection – which Flurl will automatically populate with the JSON fields.

var postcodes = new PostCodeCollection { Postcodes = new List<string> { "OX49 5NU", "M32 0JG" } };
 
var url = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .PostJsonAsync(postcodes)
    .ReceiveJson<GeocodeResponseCollection>();

Next, using Flurl with api.nasa.gov

As mentioned previously, NASA’s astronomy picture of the day requires a demo key passed in the query string – I can do this with Flurl using the code below:

var astronomyPictureOfTheDayJsonResponse = await "https://api.nasa.gov/"
    .AppendPathSegments("planetary""apod")
    .SetQueryParam("api_key""DEMO_KEY")
    .GetJsonAsync();

Again, it’s a very concise way of retrieving data from a web service.

Finally using Flurl with api.github.com

Lastly for this post, the code below shows how to use Flurl with Basic Authentication and the Github API.

var singleGeocodeResponse = await "https://api.github.com/"
    .AppendPathSegments("users""jeremylindsayni")
    .WithBasicAuth("jeremylindsayni""[[my password]]")
    .WithHeader("user-agent""csharp-console-app")
    .GetJsonAsync();

One interesting difference in this example between RestSharp and Flurl is that I had to send user-agent information to the Github API with Flurl – I didn’t need to do this with RestSharp.

Wrapping up

Both RestSharp and Flurl are great options for consuming Restful web services – they’re both stable, source for both is on Github, and there’s great documentation.  They let me write less code and do the thing I want to do quickly, rather than spending ages writing my own code and tests.

Right now, I prefer working with Flurl, though the choice comes down to personal preference. Things I like are:

  • Flurl’s MIT licence
  • I can achieve the same results with less code, and
  • I can integrate Json.NET with Flurl out of the box, with no extra classes needed.


.net, C# tip, Visual Studio, Xamarin

How to detect nearby Bluetooth devices with .NET and Xamarin.Android

I’m working on a Xamarin.Android app at the moment – for this app, I need to detect what Bluetooth devices are available to my Android phone (so the user can choose which one to pair with).

For modern versions of Android, it’s not as simple as just using a BroadcastReceiver (although that is part of the solution). In this post I’ll write about the steps needed to successfully use the Bluetooth hardware on your Android phone with .NET.

One thing to note – I can test detecting Bluetooth devices by deploying my code directly onto an Android device, but I can’t use the Android emulator as it doesn’t have Bluetooth support.

As usual I’ve uploaded my code to GitHub (you can get it here).

Update AndroidManifest.xml with Bluetooth and Location permissions

First I had to make sure that my application told the device what hardware services it needed to access. For detecting and interacting with Bluetooth hardware, there are four permissions to add to the application AndroidManifest.xml:

  • Bluetooth
  • Bluetooth Admin
  • Access Coarse Location
  • Access Fine Location

When the application loads on the Android device for the first time, the user will be challenged to allow the application permission to use these hardware services.

I’ve pasted my AndroidManifest.xml file below – yours will look slightly different, but the important part is the four uses-permission elements near the end.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          android:versionCode="1"
          android:versionName="1.0"
          package="Bluetooth_Device_Scanner.Bluetooth_Device_Scanner">
  <uses-sdk android:minSdkVersion="23" android:targetSdkVersion="27" />
  <application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
  </application>
  <uses-permission android:name="android.permission.BLUETOOTH" />
  <uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
  <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>

 

List the Bluetooth devices which the Android device has already paired with

This part is very straightforward – remember that the code below will only list to the console the Bluetooth devices which have already been detected and paired with the Android device. It will not list other devices which haven’t yet been paired with it (I write about this later in the article).

Obviously you’d probably want to write the output to a UI component rather than write to the console, but for this article I wanted to cut this down to only the Bluetooth interactions and not focus on UI interactions.

if (BluetoothAdapter.DefaultAdapter != null && BluetoothAdapter.DefaultAdapter.IsEnabled)
{
    foreach (var pairedDevice in BluetoothAdapter.DefaultAdapter.BondedDevices)
    {
        Console.WriteLine(
            $"Found device with name: {pairedDevice.Name} and MAC address: {pairedDevice.Address}");
    }
}

There’s not much more to say about this – I can put this pretty much anywhere in the C# code and it’ll just work as expected.

List new Bluetooth devices by creating a BluetoothDeviceReceiver class that extends BroadcastReceiver

Next I wanted to list the Bluetooth devices that haven’t been paired with the Android device. I can do this by creating a receiver class, which extends the ‘BroadcastReceiver’ base class, and overrides the ‘OnReceive’ method – I’ve included the code for my class below.

using System;
using Android.Bluetooth;
using Android.Content;
 
namespace Bluetooth_Device_Scanner
{
    public class BluetoothDeviceReceiver : BroadcastReceiver
    {
        // the device's default Bluetooth adapter - exposed so MainActivity can start discovery with it
        public static readonly BluetoothAdapter Adapter = BluetoothAdapter.DefaultAdapter;
 
        public override void OnReceive(Context context, Intent intent)
        {
            var action = intent.Action;
            
            if (action != BluetoothDevice.ActionFound)
            {
                return;
            }
 
            // Get the device
            var device = (BluetoothDevice)intent.GetParcelableExtra(BluetoothDevice.ExtraDevice);
 
            if (device.BondState != Bond.Bonded)
            {
                Console.WriteLine($"Found device with name: {device.Name} and MAC address: {device.Address}");
            }
        }
    }
}

This receiver class is registered with the application and told to activate when the Android device detects specific events – such as finding a new Bluetooth device. Xamarin.Android does this through something called an ‘Intent’. The code below shows how to register the receiver to trigger when a Bluetooth device is detected.

// Register for broadcasts when a device is discovered
_receiver = new BluetoothDeviceReceiver();
RegisterReceiver(_receiver, new IntentFilter(BluetoothDevice.ActionFound));

When the Android device finds a new Bluetooth device and calls the OnReceive method, the class checks that the event is definitely the right one (i.e. BluetoothDevice.ActionFound).

Then it checks that the device is not already paired (i.e. ‘Bonded’), and again my class just writes some details to the console about the Bluetooth device that it’s found.

But we’re not quite done yet – there’s one more very important piece of code which is necessary for modern versions of Android.

Finally – check permissions are applied at runtime

This is the bit that is sometimes missed in other tutorials, and that’s possibly because this is only needed for more recent versions of Android, so older tutorials wouldn’t have needed this step.

Basically, even though the Access Coarse and Fine Location permissions are already specified in the AndroidManifest.xml file, if you’re targeting version 23 or later of the Android SDK, you need to also check that the permissions are correctly set at runtime. If they aren’t, you need to add code to prompt the user to grant these permissions.

There’s lots more about this topic on the Xamarin blog here.

I’ve pasted my MainActivity class below. This class:

  • Checks permissions,
  • Prompts the user for any permissions that are missing,
  • Registers the receiver to trigger when Bluetooth devices are detected, and
  • Starts scanning for Bluetooth devices.
 
using Android;
using Android.App;
using Android.Bluetooth;
using Android.Content;
using Android.Content.PM;
using Android.OS;
using Android.Support.V4.App;
using Android.Support.V4.Content;
 
namespace Bluetooth_Device_Scanner
{
    [Activity(Label = "Bluetooth Device Scanner", MainLauncher = true)]
    public class MainActivity : Activity
    {
        private BluetoothDeviceReceiver _receiver;
 
        protected override void OnCreate(Bundle savedInstanceState)
        {
            base.OnCreate(savedInstanceState);
 
            SetContentView(Resource.Layout.activity_main);
 
            const int locationPermissionsRequestCode = 1000;
 
            var locationPermissions = new[]
            {
                Manifest.Permission.AccessCoarseLocation,
                Manifest.Permission.AccessFineLocation
            };
 
            // check if the app has permission to access coarse location
            var coarseLocationPermissionGranted =
                ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation);
 
            // check if the app has permission to access fine location
            var fineLocationPermissionGranted =
                ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation);
 
            // if either is denied permission, request permission from the user
            if (coarseLocationPermissionGranted == Permission.Denied ||
                fineLocationPermissionGranted == Permission.Denied)
            {
                ActivityCompat.RequestPermissions(this, locationPermissions, locationPermissionsRequestCode);
            }
 
            // Register for broadcasts when a device is discovered
            _receiver = new BluetoothDeviceReceiver();
 
            RegisterReceiver(_receiver, new IntentFilter(BluetoothDevice.ActionFound));
 
            BluetoothDeviceReceiver.Adapter.StartDiscovery();
        }
    }
}

Now the application will call the BluetoothDeviceReceiver class’s OnReceive method when it detects Bluetooth hardware.
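
One small housekeeping point that isn’t part of the original sample – it’s good practice to stop discovery and unregister the receiver when the activity is destroyed. The sketch below shows one way to do that inside MainActivity, assuming the same _receiver field as above:

protected override void OnDestroy()
{
    // stop scanning and stop listening for ActionFound broadcasts
    BluetoothAdapter.DefaultAdapter.CancelDiscovery();
    UnregisterReceiver(_receiver);
 
    base.OnDestroy();
}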

Wrapping up

Hopefully this is useful to anyone writing a Xamarin.Android application that interacts with Bluetooth devices – I struggled with this for a while and wasn’t able to find an article which detailed all the pieces of the puzzle:

  • Update the manifest with the 4 required application permissions
  • Create a class that extends BroadcastReceiver,
  • Check at runtime that the location permissions have been granted and prompt the user if they haven’t, and
  • Register the receiver class and start discovery.


.net core, Azure, C# tip, Clean Code, Cloud architecture, Security

Simplifying Azure Key Vault and .NET Core Web App (includes NuGet package)

In my previous post I wrote about securing my application secrets using Azure Key Vault, and in this post I’m going to write about how to simplify the code that a .NET Core web app needs to use the Key Vault.

I previously went into a bit of detail about how to create a Key Vault and add a secret to that vault, and then add a Managed Service Identity to a web app. At the end of the post, I showed some C# code about how to access a secret inside a controller action.

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        var secret = await keyVaultClient.GetSecretAsync("https://mywebsitesecret.vault.azure.net/secrets/TheSecret").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}

While this works for the purposes of an example of how to use it in a .NET Core MVC application, it’s not amazingly pretty code – for any serious application, I wouldn’t have all this code in my controller.

I think it would be more logical to access my secrets at the time my web application starts up, and put them into the Configuration for the app. Therefore if I need them later, I can just inject an IConfiguration object into my class, and use that to get the secret values.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;

namespace MyWebApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }
 
        private static IWebHost BuildWebHost(string[] args)
        {
            return WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration(builder =>
                {
                    var azureServiceTokenProvider = new AzureServiceTokenProvider();
                    var keyVaultClient =
                        new KeyVaultClient(
                            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
                    builder.AddAzureKeyVault("https://mywebsitesecret.vault.azure.net/",
                            keyVaultClient, new DefaultKeyVaultSecretManager());
                })
                .UseStartup<Startup>()
                .Build();
        }
    }
}

But as you can see above, the code I need to add here to build my web host is not very clear either – it seemed a shame to lose all the good work done in .NET Core 2 to simplify this class.

So I’ve created a NuGet package to allow me to have a simpler and cleaner interface – it’s uploaded to the NuGet repository here, and I’ve open sourced the code at GitHub here.

As usual you can install pretty easily from the command-line:

Install-Package Kodiak.Azure.WebHostExtension -prerelease

Now my BuildWebHost method looks much cleaner – I can just add the fluent extension AddAzureKeyVaultSecretsToConfiguration and pass in the URL of the vault.

private static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .AddAzureKeyVaultSecretsToConfiguration("https://myvaultname.vault.azure.net")
        .UseStartup<Startup>()
        .Build();
}

I think this is a more elegant implementation, and now if I need to access the secret inside my controller’s action, I can use the cleaner code below.

using System.Diagnostics;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
 
namespace MyWebApplication.Controllers
{
    public class HomeController : Controller
    {
        private readonly IConfiguration _configuration;
 
        public HomeController(IConfiguration configuration)
        {
            _configuration = configuration;
        }
 
        public IActionResult Index()
        {
            ViewBag.Secret = _configuration["MySecret"];
 
            return View();
        }
    }
}

Summing up

Azure Key Vault (AKV) is a useful tool to help keep production application secrets secure, although like any tool it’s possible to misuse it, so don’t assume that just because you’re using AKV your secrets are secured – always remember to examine threat vectors and impacts.

There’s a lot of documentation on the internet about AKV, and a lot of it recommends using the client Id and secrets in your code – I personally don’t like this because it’s always risky to have any kind of secret in your code. The last couple of posts that I’ve written have been about how to use an Azure Managed Service Identity with your application to avoid secrets in your code, and how to simplify the C# code you might use in your application to access AKV secrets.