.net, C# tip

An extension method for .NET enumerations that uses the DescriptionAttribute

Sometimes little improvements make a big difference to my day-to-day programming. Like when I discovered the String.IsNullOrEmpty method (I think it came into the .NET Framework way back in v2.0) – when I was testing for non-empty strings, I was able to use that super useful method without having to remember to write two comparisons (like myString != null && myString != string.Empty). I’ve probably used that bit of syntactical candy in every project I’ve worked on since then.

I work with enumerations all the time too. And a lot of the time, I need a text representation of the enumeration values which is a bit more complex than the value itself. This is where I find the DescriptionAttribute so useful – it’s a place supplied natively by .NET where I can add that text representation.

I’ve provided a simple example of this kind of enumeration with descriptions for each item below – maybe for a production application I’d localise the text, but you get the idea.

// the DescriptionAttribute lives in the System.ComponentModel namespace
using System.ComponentModel;
 
public enum Priority
{
    [Description("Highest Priority")]
    Top,
    [Description("MIiddle Priority")]
    Medium,
    [Description("Lowest Priority")]
    Low
}

But I’ve always felt I’d love to have native access to a method that would give me the description. Something like:

Priority.Medium.ToDescription();

Obviously it’s really easy to write an extension method like this, but every time I need it on a new project for a new client, I have to google how to use types and reflection to access the attribute, and then write that extension method. Then I think about the other extension methods that I might like to write (what about ToDisplayName()? That might be handy…), and how to make it generic so I can extend it later, and what about error handling…

…and anyway, this process always includes the thought, “I’ve lost count of how many times I’ve done this for my .NET enumerations – why don’t I write this down somewhere so I can re-use it?”

So I’ve written it down below.

using System;
using System.ComponentModel;
using System.Linq;
 
public static class EnumerationExtensions
{
    public static string ToDescription(this Enum enumeration)
    {
        var attribute = GetText<DescriptionAttribute>(enumeration);
 
        return attribute.Description;
    }
 
    public static T GetText<T>(Enum enumeration) where T : Attribute
    {
        var type = enumeration.GetType();
        
        var memberInfo = type.GetMember(enumeration.ToString());
 
        if (!memberInfo.Any())
            throw new ArgumentException($"No public members for the argument '{enumeration}'.");
 
        var attributes = memberInfo[0].GetCustomAttributes(typeof(T), false);
 
        if (attributes == null || attributes.Length != 1)
            throw new ArgumentException($"Can't find an attribute matching '{typeof(T).Name}' for the argument '{enumeration}'");
 
        return attributes.Single() as T;
    }
}

I’ve split it into two methods, so if I want to create a ‘ToDisplayName()’ extension later, it’s really easy for me to do that. I might include it in a NuGet package so it’s even easier for me to re-use later. I’ll update this post if I do.
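As an example of how that split pays off, a ToDisplayName() extension might look like the sketch below. This is just an assumption about how I’d build it – note that it uses the DisplayAttribute from System.ComponentModel.DataAnnotations, because the similarly-named DisplayNameAttribute can’t actually be applied to enum fields:

public static string ToDisplayName(this Enum enumeration)
{
    // assumes the enum members are decorated like [Display(Name = "Highest Priority")]
    var attribute = GetText<DisplayAttribute>(enumeration);
 
    return attribute.Name;
}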

Wrapping up

This was a quick blog post covering a simple area, about something that bugs me – having to rewrite a simple function each time I start a new project. I can imagine a bunch of reasons why this isn’t a native extension in .NET Framework/Core – and my implementation is an opinionated extension. I’m sure there’d be a lot of disagreement about what good practice is. Anyway, this class works for me – I hope it’s useful to you also.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, C# tip

Correctly reading encoded text with the StreamReader in .NET

So here’s a problem…

I’ve got a list of my team members in a text file which I need to parse and process in .NET. The file is pretty simple – it’s called MyTeamNames.txt and it contains the following names:

  • Adèle
  • José
  • Chloë
  • Auróra

I created the text file on a Windows 10 machine using Notepad, and I saved the file with the default ANSI encoding.

ansi save

I’m going to read names from this text file using a .NET Framework StreamReader – there’s a simple and clear example on the docs.microsoft.com site describing how to do this. So I’ve written a spike of code to use a StreamReader – I’ve more or less copied directly from the link above – and it looks like this:

using System;
using System.IO;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main()
        {
            try
            {
                const string myTeamNamesFile = @"C:\Users\jeremy.lindsay\Desktop\MyTeamNames.txt";
 
                // Open the text file using a stream reader.
                using (var streamReader = new StreamReader(myTeamNamesFile))
                {
                    // Read the stream to a string, and write the string to the console.
                    var line = streamReader.ReadToEnd();
                    Console.WriteLine(line);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }
    }
}

But when I run the code, there’s a problem. I expected to see my team’s names written to the console – but instead all those names now have question marks scattered throughout them, as shown in the image below.

wrongnames

What’s gone wrong?

The StreamReader object and original text file need to have compatible encoding types

It’s pretty obvious that the question marks relate to the non-ASCII characters – each name on my list has an acute accent, a grave accent, or an umlaut/diaeresis.

The problem in my .NET Framework console application is that my StreamReader is assuming that my file is encoded one way when it’s actually encoded in another way, and it doesn’t know what to do with some characters, so it uses a question mark as a default.

Can you detect the file’s encoding type with .NET?

Big thanks to Erich Brunner for pointing out a new bit of information to me about the default encoding type – I’ve updated this post to reflect his helpful steer.

It turns out detecting the file’s encoding type is quite a difficult thing to do in .NET. But as I mentioned earlier, I know I saved this file with the encoding type ANSI selected from the dropdown list in Notepad.
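For what it’s worth, if a file has a byte order mark (BOM), the StreamReader can at least detect that much. The sketch below (re-using the file path from earlier) shows one way to see what it detects – though a file saved as “ANSI” has no BOM, so this wouldn’t have helped in my scenario.

using (var streamReader = new StreamReader(myTeamNamesFile, Encoding.UTF8, detectEncodingFromByteOrderMarks: true))
{
    // CurrentEncoding is only meaningful after the reader has inspected the stream
    streamReader.Peek();
    Console.WriteLine(streamReader.CurrentEncoding.WebName);
}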

Interestingly, this doesn’t mean that I’ve saved the file as ANSI – this is a misnomer. From the MSDN glossary:

“The term “ANSI” as used to signify Windows code pages is a historical reference, but is nowadays a misnomer that continues to persist in the Windows community. The source of this comes from the fact that the Windows code page 1252 was originally based on an ANSI draft—which became International Organization for Standardization (ISO) Standard 8859-1. “ANSI applications” are usually a reference to non-Unicode or code page–based applications.”

There are a couple of different options open to me here:

Change the file’s encoding type and save it with a specified encoding – e.g. UTF-8, or Unicode.

This is very straightforward – I’ve chosen to just select the UTF-8 option from the dropdown list in NotePad’s ‘Save As…’ dialog box.

save as utf-8

This time when I run the code above, the names are displayed correctly on the console, as shown below.

correct display

Alternatively, try using the StreamReader overload to specify the encoding type.

I can use an overload where I specify the encoding type, which comes from System.Text.Encoding.

var streamReader = new StreamReader(myTeamNamesFile, encodingType);

But what values do these encoding types resolve to? Well, that depends on whether I use the .NET Framework or .NET Core. I’ve listed the values below – and notice that Encoding.Default is different depending on which runtime you use.

Note the row for “Encoding.Default” in particular, because this is a special case.

System.Encoding value       .NET Framework encoding header name   .NET Core encoding header name
Encoding.Default            Windows-1252 (on my machine)          utf-8
Encoding.ASCII              us-ascii                              us-ascii
Encoding.BigEndianUnicode   utf-16BE                              utf-16BE
Encoding.UTF32              utf-32                                utf-32
Encoding.UTF7               utf-7                                 utf-7
Encoding.UTF8               utf-8                                 utf-8
Encoding.Unicode            utf-16                                utf-16
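
If you want to check what these resolve to on your own machine, a quick console spike like the sketch below prints the header name for each encoding – the output for Encoding.Default is the one that will vary between runtimes.

using System;
using System.Text;
 
internal static class Program
{
    private static void Main()
    {
        // print the header name each encoding resolves to on the current runtime
        Console.WriteLine($"Encoding.Default: {Encoding.Default.HeaderName}");
        Console.WriteLine($"Encoding.ASCII: {Encoding.ASCII.HeaderName}");
        Console.WriteLine($"Encoding.BigEndianUnicode: {Encoding.BigEndianUnicode.HeaderName}");
        Console.WriteLine($"Encoding.UTF32: {Encoding.UTF32.HeaderName}");
        Console.WriteLine($"Encoding.UTF7: {Encoding.UTF7.HeaderName}");
        Console.WriteLine($"Encoding.UTF8: {Encoding.UTF8.HeaderName}");
        Console.WriteLine($"Encoding.Unicode: {Encoding.Unicode.HeaderName}");
    }
}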

So let’s say I use Encoding.Default in my .NET Framework console application, as shown in the code snippet below.

var streamReader = new StreamReader(myTeamNamesFile, Encoding.Default);

And now my names are correctly rendered in the Console, as shown in the image below. This makes sense – the text file with my team names was saved with “ANSI” encoding, which we know actually corresponds to Windows code page 1252. The default encoding on my own Windows machine turns out to also be the 1252 encoding (as shown in the table above), so I’m instructing my StreamReader to use the 1252 encoding when reading a file which has been encoded as “ANSI” (also known as 1252). They match up, and the text displays correctly.

correct display

Problem solved, right? Well, no, not really.

Microsoft actually do not recommend using Encoding.Default. From docs.microsoft.com:

“Different computers can use different encodings as the default, and the default encoding can change on a single computer. If you use the Default encoding to encode and decode data streamed between computers or retrieved at different times on the same computer, it may translate that data incorrectly. In addition, the encoding returned by the Default property uses best-fit fallback to map unsupported characters to characters supported by the code page. For these reasons, using the default encoding is not recommended.”

If I target .NET Core instead of .NET Framework in my console application – with exactly the same code – I’m back to displaying question marks in my console text.

wrongnames

So even though telling the StreamReader in my .NET Framework console application to use Encoding.Default seems to work, it’s a case of it only working on my machine – it might not work on someone else’s machine. It certainly doesn’t work in .NET Core.

So it seems to me that saving my original text file as UTF-8 or Unicode is a better option.

And as a final reason to save the text file to UTF-8 or Unicode, let’s say I add a new team member, called Łukasz. If I try to save my file with ANSI encoding type, I get this warning:

unicode warning

If I press on and save the file as ANSI, the text for “Łukasz” is changed to “Lukasz” (note the change in the first character). But if I save the file as UTF-8 or Unicode, the name stays the same, including the initial “Ł”.

Wrapping up

It’s pretty common to be asked to read and process text files with non-ASCII characters, and even though .NET provides some really helpful APIs to assist with this, compatibility issues can still occur. This is a complex area, with variations across the .NET Framework and .NET Core runtimes.

I think the best solution is to change the encoding type of the original document to be Unicode or UTF-8, rather than ANSI (more correctly, Windows Code Page 1252). Telling the StreamReader to use Encoding.Default also worked for me, but it might not work on someone else’s machine with different defaults, leading to incorrect translations.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Azure, C# tip

Setting up relationships between work items on Azure DevOps boards, and using .NET to read these relationships

So here’s a question…

How do you set up relationships between work items, and display this relationship in Azure DevOps?

There’s a well-known relationship between epics, features, user stories and tasks/bugs in the Agile process, but on the ‘Work Items’ screen, Azure DevOps lists them without showing the relationship – like in the screen below.

workliist

Display relationships using the Backlogs view in Azure DevOps Boards

The thing is that the Work Items screen just lists all the work items – I prefer to view my work items in the Backlogs screen which visually represents the relationship between them.

But how do you set up relationships between work items?

Let’s work an example through from the start. I set up a new project in Azure DevOps – once I created the project, I’m shown the Summary screen, and for me this looks like the screenshot below.

project home

On the left hand navigation menu, I click on the ‘Boards’ item, and when this expands, I select the ‘Backlogs’ sub-item. This presents me with a screen where I’ve a few options, like the one at the top left – ‘New Work Item’.

backlog view

But when I click on this ‘New Work Item’ button, the pop-up only allows me to create a work item with type of ‘User Story’ (as shown below). This is not what I want – I want to create an Epic.

user story

To do this, I have to change some defaults in my Project Settings. At the bottom left of my screen, I click on the ‘Project settings’ button, and then select the ‘Team configuration’ sub-menu item which sits under the ‘Boards’ menu heading. This shows me a screen like the one below.

team configuration

There’s a section on this page which shows me the navigation levels available to me – by default, I don’t have the Epics checkbox ticked. So I can just tick the box as shown below to make this available. No need to click save anywhere – that setting is automatically saved back to the cloud.

backlogs with epic

Now if I go back to the ‘Backlogs’ menu item under ‘Boards’ in my project’s left hand navigation menu, I need to select the dropdown list in the top right of the screen – my default setting here is ‘Stories’, but I can open the menu and choose ‘Epics’ instead.

backlogs with epics dropdown

Now when I click on the ‘New Work Item’ button, I can create an Epic, and enter in the epic’s title, as shown below.

my epic title 2

And I’ve created my first epic in an Azure DevOps Board!

Ok, but what about nesting other items under that Epic?

There are a few different ways, but it’s straightforward (when you know how) – the way I like to do this is by selecting the ‘+’ button on the right hand side of my Epic. If you hover over this ‘+’ button, a tooltip appears that says ‘Add Feature’, and clicking on the button does exactly that.

add feature hover

A large dialog appears once you’ve clicked ‘+’ where you can add feature details – and note that in the bottom right of this dialog, there’s a ‘Related Work’ section, that shows the Epic we previously created as a parent.

new features

After clicking the blue ‘Save & Close’ button on the top right of the New Feature dialog, you’ll be taken back to the project board’s Backlog view, and you can see the feature that you just created below the epic we created previously, and it’s indented one place to the right to visually represent the parent-child relationship, as shown below.

add user story dropdown

And if you hover over the ‘+’ button to the left of the feature you just created, you’ll see the hint that this button now allows you to create a new user story. So if you click on ‘+’, you’ll have a similar experience to before, except the dialog that pops up is for a work item type of ‘User Story’. And you can see the relationship between this and the parent feature again by looking in the bottom right corner of the dialog, in the ‘Related Work’ section.

my new user story

And just to finish off this section, when you save that user story you’ll be taken to the backlog screen, and again see the user story sitting below its parent feature, indented one place to the right, as shown below. From this user story, you can click on the ‘+’ button on the left, and this time you’ve got a couple of options – either create a bug or a task with that user story as a parent.

show task and bug

I went ahead and created a task and a bug – the experience of creating them is identical to before, where a dialog pops up for the type you select, with any existing related work detailed in the bottom right of the dialog box. So the image below shows my 5 new work items (an epic, a feature, a story, a task and a bug), and it’s easy to see the relationship between them by how they’re indented relative to each other.

indented

What about getting these items in .NET – how do I find out what items a work item is related to?

I’ve previously written about creating Azure DevOps work items using the .NET framework, and you can re-apply some of the same principles to read work items into .NET objects.

I created a .NET Framework console app and installed the required NuGet packages using:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

Then I used the code below to read back information about item 64 in my backlog – this is a user story which has a parent feature, and two children – a task and a bug.

So I expected the code below to tell me that there was a list of three relations in the workItem.Relations property.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // uh oh - reports there are zero relations!
        }
    }
}

But this code doesn’t show relationships – why isn’t it working?

It turns out that the GetWorkItemAsync method doesn’t return relations by default. Instead, the GetWorkItemAsync method has an overload where you can specify what extra information to return using a WorkItemExpand enumeration. In the code below I’ve chosen to return everything using:

expand: WorkItemExpand.All

But if I only wanted to return relations I could use:

expand: WorkItemExpand.Relations

The code below now correctly reports there are three items related to workitem 64.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId, expand: WorkItemExpand.All).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // now correctly reports there are 3 relations
        }
    }
}

The relations list now correctly reports the three related work items – the parent feature, the child task and the child bug.
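
If you want to see what those relations actually are, a sketch like the one below prints each one out – the Rel property holds the relationship type (for example, System.LinkTypes.Hierarchy-Forward for a child and System.LinkTypes.Hierarchy-Reverse for the parent), and Url points at the related work item.

foreach (var relation in workItem.Relations)
{
    // e.g. "System.LinkTypes.Hierarchy-Reverse: https://dev.azure.com/..."
    Console.WriteLine($"{relation.Rel}: {relation.Url}");
}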

Wrapping up

This post has been about how to create work items with a hierarchical relationship using the Azure DevOps web user interface, and how to view them using the Backlog view in Azure DevOps Boards. I’ve also written about how to read these items and the relationships between them using the .NET framework – I hope this helps!


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

GeoJson, GIS, SQL Server

Use SQL Server to import GeoJSON files, and convert geo-data back into GeoJSON

This post is about how to use SQL Server to load GeoJSON spatial data in a SQL Server table, and how to export geo-data from a SQL Server table back into valid GeoJSON.

So here’s a problem…

You’re working with some geodata which is stored in files formatted as JSON – specifically GeoJSON. You need to query this data, and make a few modifications too. But working with this textual data by hand is kind of tedious – it’d be much nicer to be able to query and manipulate it using software with dedicated JSON querying functions.

SQL Server’s JSON capabilities can help solve this problem

I’ve mentioned in my previous couple of blog posts that I’ve been working with geodata recently, specifically in the GeoJSON format. GeoJSON is just JSON, but it adheres to a particular standard to describe a geographical feature.

You can read more about GeoJSON at http://geojson.org/.

An example of a GeoJSON geographical feature is shown below:

{
  "type": "Feature",
  "properties": {
    "BuildingReference": "BR-123: City Hall",
    "Address": "Donegall Square",
    "City": "Belfast",
    "Postcode": "BT1 5GS",
    "CurrentStatus": "In Use"
  },
  "geometry": {
    "type": "Point",
    "coordinates": [
      -5.9301,
      54.5967
    ]
  }
}

The really interesting parts of this JSON object are the ‘properties‘ and ‘geometry‘, which tell us information about the feature, and where it is. The example above shows a geographical point with latitude and longitude, but it could also be a shape or a line.
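
For example, a line between the two Belfast locations used in this post could be described with a ‘LineString’ geometry, where ‘coordinates’ becomes an array of positions – a hypothetical fragment just to illustrate the shape of the JSON:

"geometry": {
  "type": "LineString",
  "coordinates": [
    [ -5.9301, 54.5967 ],
    [ -5.934, 54.5844 ]
  ]
}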

Below is an example of a GeoJSON FeatureCollection, which contains a couple of different features.

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "BuildingReference": "BR-123: City Hall",
        "Address": "Donegall Square",
        "City": "Belfast",
        "Postcode": "BT1 5GS",
        "CurrentStatus": "In Use"
      },
      "geometry": {
        "type": "Point",
        "coordinates": [
          -5.9301,
          54.5967
        ]
      }
    },
    {
      "type": "Feature",
      "properties": {
        "BuildingReference": "BR-456: Queen's University",
        "Address": "University Road",
        "City": "Belfast",
        "Postcode": "BT7 1NN",
        "CurrentStatus": "In Use"
      },
      "geometry": {
        "type": "Point",
        "coordinates": [
          -5.934,
          54.5844
        ]
      }
    }
  ]
}

If you’d like to see your GeoJSON feature collection represented on a map, check out http://geojson.io/.

So I’m working with files that have tens of thousands of features, and I need to run a few reports. For example, I need to know how many features per city are represented in the data. This would be a really easy query in T-SQL – but how can I get this data into SQL Server?

Fortunately SQL Server handles JSON really well. I can use the OPENROWSET keyword to bulk load my GeoJSON files into a variable, and it also provides the useful OPENJSON keyword, which allows me to parse a JSON string into its different components.

So if my GeoJSON is stored in a file named buildings.geojson, I can access the file using the code below, and represent it in SQL Server’s tabular format.

DECLARE @JSON nvarchar(max)
 
-- load the geojson into the variable
SELECT @JSON = BulkColumn
FROM OPENROWSET (BULK 'C:\Users\jeremy.lindsay\buildings.geojson', SINGLE_CLOB) as JSON
 
-- use OPENJSON to split the different JSON nodes into separate columns
SELECT
	*
FROM
OPENJSON(@JSON, '$.features')
	WITH (
		BuildingReference nvarchar(300) '$.properties.BuildingReference',
		Address nvarchar(300) '$.properties.Address',
		City nvarchar(300) '$.properties.City',
		Postcode nvarchar(300) '$.properties.Postcode',
		CurrentStatus nvarchar(300) '$.properties.CurrentStatus',
		Longitude nvarchar(300) '$.geometry.coordinates[0]',
		Latitude nvarchar(300) '$.geometry.coordinates[1]'
	)

Or if I want, I can just as easily load this data into a dedicated table, and represent each feature’s location as the SQL Server geography spatial type:

DROP TABLE IF EXISTS dbo.Buildings
 
CREATE TABLE dbo.Buildings
(
	Id int IDENTITY PRIMARY KEY,
	BuildingReference nvarchar(300),
	Address nvarchar(300),
	City nvarchar(300),
	Postcode nvarchar(300),
	CurrentStatus nvarchar(300),
	Coordinates GEOGRAPHY,
	Longitude nvarchar(100),
	Latitude nvarchar(100)
)
 
 
DECLARE @JSON nvarchar(max)
 
-- load the geojson into the variable
SELECT @JSON = BulkColumn
FROM OPENROWSET (BULK 'C:\Users\jeremy.lindsay\buildings.geojson', SINGLE_CLOB) as JSON
 
Insert Into dbo.Buildings (BuildingReference, Address, City, Postcode, CurrentStatus, Longitude, Latitude, Coordinates)
SELECT
	BuildingReference, 
	Address, 
	City,
	Postcode,
	CurrentStatus, 
	Longitude, 
	Latitude,
	geography::Point(Latitude, Longitude, 4326) AS Geography
FROM
OPENJSON(@JSON, '$.features')
	WITH (
		BuildingReference nvarchar(300) '$.properties.BuildingReference',
		Address nvarchar(300) '$.properties.Address',
		City nvarchar(300) '$.properties.City',
		Postcode nvarchar(300) '$.properties.Postcode',
		CurrentStatus nvarchar(300) '$.properties.CurrentStatus',
		Longitude nvarchar(300) '$.geometry.coordinates[0]',
		Latitude nvarchar(300) '$.geometry.coordinates[1]'
	)
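
And now the report I mentioned earlier – how many features there are per city – really is an easy T-SQL query against the table above:

-- count the number of features recorded against each city
SELECT City, COUNT(*) AS FeatureCount
FROM dbo.Buildings
GROUP BY City
ORDER BY FeatureCount DESC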

How about exporting from SQL Server back to GeoJSON?

So querying data in the table is really easy for me now – but how about the scenario where I have data in SQL Server, and I want to export the results of a SELECT query to GeoJSON format?

Fortunately we can use the JSON querying capabilities of SQL Server – I can suffix my query with ‘FOR JSON PATH’ to convert the results of a SELECT query from a tabular format to a JSON format, as shown below:

DECLARE @featureList nvarchar(max) =
(
	SELECT
		'Feature'                                           as 'type',
		BuildingReference                                   as 'properties.BuildingReference',
		Address                                             as 'properties.Address',
		City                                                as 'properties.City',
		Postcode                                            as 'properties.Postcode',
		CurrentStatus                                       as 'properties.CurrentStatus',
		Coordinates.STGeometryType()                        as 'geometry.type',
		JSON_QUERY('[' + Longitude + ', ' + Latitude + ']') as 'geometry.coordinates'
	FROM Buildings
		FOR JSON PATH
)

But this doesn’t get me a result that’s quite right – it’s just a JSON formatted list of GeoJSON features. To make this a properly formatted GeoJSON FeatureCollection, I need to give this list a name – ‘features’ – and specify the type as a ‘FeatureCollection’. Again this is reasonably straightforward with the built-in JSON querying features of SQL Server.

DECLARE @featureList nvarchar(max) =
(
	SELECT
		'Feature'                                           as 'type',
		BuildingReference                                   as 'properties.BuildingReference',
		Address                                             as 'properties.Address',
		City                                                as 'properties.City',
		Postcode                                            as 'properties.Postcode',
		CurrentStatus                                       as 'properties.CurrentStatus',
		Coordinates.STGeometryType()                        as 'geometry.type',
		JSON_QUERY('[' + Longitude + ', ' + Latitude + ']') as 'geometry.coordinates'
	FROM Buildings
		FOR JSON PATH
)
 
DECLARE @featureCollection nvarchar(max) = (
	SELECT 'FeatureCollection' as 'type',
	JSON_QUERY(@featureList)   as 'features'
	FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
)
 
SELECT @featureCollection

If you want to validate your GeoJSON, you can use a site like GeoJSONLint.com.

Wrapping up

SQL Server has great JSON querying capabilities, and I’ve found this really useful when I’ve combined this also with its support for geospatial querying. Hopefully this post is helpful to anyone working with spatial data in GeoJSON format.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Azure

Load your work items into Azure DevOps Boards with .NET

This post is about how to write a .NET application to move work items from another source (e.g. JIRA, Excel etc.) into Azure Boards in Azure DevOps, and about a NuGet package I’ve built to hopefully make it a bit easier for anyone else doing this as well.

So here’s a problem…

Let’s say you’ve convinced your boss to move your projects to Azure DevOps – great! You’re happy, and your team are happy, but before you can really start, there’s still some work to be done – migration of all the historical project data from your existing company systems…

Maybe your company has its own custom story/issue/bug tracking system (maybe it’s JIRA, maybe it’s Mantis, or something else), and you don’t want to lose or archive all that valuable content. You want to load all that content into your project’s Azure Board as well – how do you do that?

Use .NET with Azure Boards to solve this problem

I had exactly this problem recently – my project’s history was exported into one big CSV file, and I needed to get it into Azure Boards. There were loads of fields which I needed to keep, and I didn’t want to lose all this…

…so I ‘.NET’ted  my way out of trouble.

A bit of searching on the internet also led me to the option of bulk loading using Excel and the TFS Standalone Office Integration pack, but I’m a programmer and I prefer the flexibility of using code. Though, y’know, YMMV.

excel link

First I created a .NET Framework console application, and added a couple of NuGet packages for Azure DevOps:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

These are both projects that target .NET Framework, so I can’t use .NET Core for this yet.

With these included in my application, I now have access to objects which allow me to connect to Azure DevOps through .NET, and also connect to a work item client that allows me to perform create/read/update/delete operations on work items in my project’s board.

It’s pretty easy to load up my project history CSV into a list in a .NET application, so I knew I had all the puzzle pieces to solve this problem, I just needed to put them together.

In order to connect to Azure DevOps and add items using .NET, I used:

  • The name of the project I want to add work items to – my project codename is “Corvette”
  • The Url of my Azure DevOps instance – http://dev.azure.com/jeremylindsay
  • My personal access token.

If you’ve not generated a personal access token in Azure DevOps before, check this link out for details on how to do it – it’s really straightforward from the Azure DevOps portal:

https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=vsts

I can now use the code below to connect to Azure DevOps and create a work item client.

var uri = new Uri("http://dev.azure.com/jeremylindsay");
var personalAccessToken = "[***my access token***]";
var projectName = "Corvette";
 
var credentials = new VssBasicCredential("", personalAccessToken);
 
var connection = new VssConnection(uri, credentials);
var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();

Next, I need to create what is basically a list of name and value pairs which describes the name of the work item field (e.g. title, description etc), and the value that I want to put in that field.

This link below describes the fields you can access through code:

https://docs.microsoft.com/en-us/azure/devops/reference/xml/reportable-fields-reference?view=vsts

It’s a little bit more complex than normal dictionaries or other key-value pair objects in .NET, but not that difficult. The work item client uses custom objects called JsonPatchDocuments and JsonPatchOperations. Also, the names of the fields are not intuitive out of the box – but given all that, I can still create a work item in .NET using the code below:

var bug = new JsonPatchDocument
{
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/System.Title",
        Value = "Spelling mistake on the home page"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.TCM.ReproSteps",
        Value = "Log in, look at the home page - there is a spelling mistake."
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Priority",
        Value = "1"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Severity",
        Value = "2 - High"
    }
};

Then I can add the bug to my Board with the code below:

var createdBug = workItemTrackingHttpClient.CreateWorkItemAsync(bug, projectName, "Bug").Result;

Now this works and is very flexible, but I think my code could be made more readable and easy to use. So I refactored the code, moved most of it into library, and uploaded it to NuGet here. My refactoring is pretty simple – I’m not going to go into lots of detail on how I did it, but if you’re interested the code is up on GitHub here.

If you’d like to get this package, you can use the command below

Install-Package AzureDevOpsBoardsCustomWorkItemObjects -pre

This package depends on the two NuGet packages I referred to earlier in this post, so they’ll be added automatically if you install my NuGet package.

This allows us to instantiate a bug object in a way that looks much more like creating a normal POCO, as shown below:

var bug = new AzureDevOpsBug
{
    Title = "Spelling mistake on the home page",
    ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
    Priority = AzureDevOpsWorkItemPriority.Medium,
    Severity = AzureDevOpsWorkItemSeverity.Low,
    AssignedTo = "Jeremy Lindsay",
    Comment = "First comment from me",
    Activity = "Development",
    AcceptanceCriteria = "This is the acceptance criteria",
    SystemInformation = "This is the system information",
    Effort = 13,
    Tag = "Cosmetic; UI Only"
};

And to push this bug to my Azure Board, I can use the code below which is a little simpler than what I wrote previously.

using AzureDevOpsCustomObjects;
using AzureDevOpsCustomObjects.Enumerations;
using AzureDevOpsCustomObjects.WorkItems;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
            const string projectName = "Corvette";
 
            var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
            var bug = new AzureDevOpsBug
            {
                Title = "Spelling mistake on the home page",
                ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
                Priority = AzureDevOpsWorkItemPriority.Medium,
                Severity = AzureDevOpsWorkItemSeverity.Low,
                AssignedTo = "Jeremy Lindsay",
                Comment = "First comment from me",
                Activity = "Development",
                AcceptanceCriteria = "This is the acceptance criteria",
                SystemInformation = "This is the system information",
                Effort = 13,
                Tag = "Cosmetic; UI Only"
            };
 
            var createdBug = workItemCreator.Create(bug);
        }
    }
}

I’ve chosen to instantiate the bug with hard-coded text in the example above for clarity – but obviously you can instantiate the POCO any way you like, for example from a database, or perhaps parsing data out of a CSV file.
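
For example, a minimal sketch of loading bugs from a CSV export might look like the code below – this assumes a hypothetical file with a header row and three comma-separated columns (title, repro steps, assigned to), with no embedded commas; a real migration would use a proper CSV parsing library.

using System.IO;
using System.Linq;
 
// ...
 
var bugs = File.ReadLines(@"C:\ProjectHistory.csv")
    .Skip(1) // skip the header row
    .Select(line => line.Split(','))
    .Select(fields => new AzureDevOpsBug
    {
        Title = fields[0],
        ReproSteps = fields[1],
        AssignedTo = fields[2]
    });
 
foreach (var importedBug in bugs)
{
    var createdBug = workItemCreator.Create(importedBug);
}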

Anyway, the image below shows the bug added to my Azure Board.

bug

Of course, Bugs are not the only types of work item – let’s say I want to add Product Backlog Items also. And there are many, many different fields used in Azure Boards, and I haven’t coded for all of them in my NuGet package. So:

  • I’ve also added a Product Backlog object into my NuGet package,
  • I’ve made the creation method generic so it can detect the object type and work out what type of work item is being added to the Board
  • I’ve made the work item objects extensible, so users can add any fields which I haven’t coded for yet.

For example, the code below shows how to add a product backlog item, and include a comment in the System.History field:

private static void Main(string[] args)
{
    const string uri = "https://dev.azure.com/jeremylindsay";
    const string personalAccessToken = "[[***my personal access token***]]";
    const string projectName = "Corvette";
 
    var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
    var productBacklogItem = new AzureDevOpsProductBacklogItem
    {
        Title = "Add reports for how many users log in each day",
        Description = "Need a new report with log in statistics.",
        Priority = AzureDevOpsWorkItemPriority.Low,
        Severity = AzureDevOpsWorkItemSeverity.Low,
        AssignedTo = "Jeremy Lindsay",
        Activity = "Development",
        AcceptanceCriteria = "This is the acceptance criteria",
        SystemInformation = "This is the system information",
        Effort = 13,
        Tag = "Reporting; Users"
    };
 
    productBacklogItem.Add(
        new JsonPatchOperation
        {
            Path = "/fields/System.History",
            Value = "Comment from product owner."
        }
    );
 
    var createdBacklogItem = workItemCreator.Create(productBacklogItem);
}

Obviously I can change the code to allow addition of comments through a property in the AzureDevOpsProductBacklogItem POCO, but this is just an example to demonstrate how it can be done by adding a JsonPatchOperation.

The image below shows the product backlog item successfully added to my Azure Board.

bug

Wrapping up

The Boards component of Azure DevOps is a useful and effective way to track your team’s work items. And if you want to populate a new Board with a list of existing bugs or backlog items, you can do this with .NET. I guess a lot of these functions aren’t new – they were available in VSTS – but it’s still nice to see these powerful functions and libraries continue to be supported. And hopefully the NuGet package I’ve created to assist in the process will be useful to some of you who are working through the same migration challenges that I am. Obviously this NuGet package can still be improved a lot – it just covers Backlog Items and Bugs right now, and it’d be better if it flagged those fields that are read-only – but it’s good enough to meet minimum viable standards for me right now, and maybe it’ll be helpful for you too.


About me: I regularly post about Microsoft technologies like Azure and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

Non-functional Requirements, Security

Another good reason to use GitHub.com to host your .NET source code – automated NuGet package vulnerability scans

This quick post is about how you can use GitHub and the OSS Index to scan your project’s NuGet packages for vulnerabilities – a good example of how to perform application security testing early in the application life cycle (also known as ‘shift left‘).

So here’s a problem

You’re working on a .NET Core application, and obviously using some of the libraries provided by Microsoft. Behind the scenes, Microsoft’s security teams have found a security hole in one of these libraries and issued a patched NuGet package.

You wouldn’t always immediately update NuGet packages every time there’s an upgrade, but you definitely would upgrade if you knew there was a vulnerability in one of your dependencies.

How are you going to know that Microsoft have found and fixed a vulnerability, so you can prioritise an application upgrade?

GitHub is helping to solve this problem

A little while back I got a couple of notifications from GitHub that one of my projects refers to a Microsoft NuGet package – Microsoft.AspNetCore.All, version 2.0.5 – and the library contains a potential security hazard.

security-image

I was pretty surprised – I already use the Audit.NET extension within Visual Studio 2017 to audit my NuGet packages against the OSS index. This extension triggers an error at design time in Visual Studio if it detects that my project uses a package that has a known vulnerability.

If you haven’t checked out the Sonatype OSS index, I really encourage you to do so – it has a bunch of useful tools and information to identify if there are known security vulnerabilities lurking in your open source dependencies.

sonatype

GitHub helpfully provided me with a bit more detail on what the problem was – my project used version 2.0.5 of the Microsoft.AspNetCore.All NuGet package, and this version is vulnerable to a couple of forms of attack (denial of service and excess consumption of resources).

vulnerability

This makes me extremely glad to be using GitHub to store my code, as it directly highlights to me that there’s a potential hazard lurking in the project dependencies. Now I can do something about it – like upgrade my libraries to the patched version v2.0.9, and push the changes to GitHub.
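
The fix itself is just a version bump – either through the NuGet package manager, or by editing the PackageReference line in the csproj file by hand, along these lines:

<PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.9" />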

And after pushing the updated version, the security alerts disappear.

patched

Wrapping up

I like to ‘shift left‘ as much as possible with my application security testing – it’s unambiguously better to carry out security testing as early as possible in the application life cycle. Having an extra security test automatically built into my GitHub source code repository is a great addition to the security testing process.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, .net core, Flurl, Polly

Using Polly and Flurl to improve your website

This post is about how to use The Polly Project to make a .NET website better. I use Flurl to consume Restful web services so I’ve some Flurl specific code later on, but I hope this post is useful to anyone who’s interested in learning what Polly is, what it’s for and how it can help you.

So here’s a problem

Let’s pretend you run your business through a website, and part of your code calls out to a web service that another company supplies.

And, every once in a while, errors from this web service appear in your logs. Sometimes the HTTP status code is a 404 (not found), sometimes the code is a 503 (service unavailable), and other times you see a 504 (timeout). There’s no pattern, it goes away as quickly as it starts, and you’d really really like to get this fixed before customers start cancelling their subscriptions to your service.

You call up the business running the remote web service, and their answer is a bit… vague. Every so often they restart their web servers which takes their service down for a couple of seconds, and at certain times of the day they get spikes of traffic which causes their system to max out for up to 5 seconds at a time. They’re apologetic, and they expect to migrate to new, better infrastructure in about 6 months. But their only workaround is for you to re-query the service.

So you could be forgiven for going spare right now – this response doesn’t fix anything. This company is the only place you can get the data you need so you’re locked in. And you know your customers are seeing errors because it’s right there staring at you from your website logs. Asking your customers to ‘just hit refresh’ when they get an error is a great way to lose business and win a bad reputation.

You can use Polly to help solve this problem

When I first read about Polly a long while back, I was really interested but I wasn’t sure how I could apply it to the project I was working on.  What I wanted was to find a post that described a real world scenario that I could recognise and identify with, and how Polly would help with that.

Since then, I’ve worked on projects a little bit like the one I described above – one time when I raised a ticket to say that we were having intermittent problems with a web service, I was told that the workaround was to ‘hit refresh’. And since there’s a workaround, it’s only going to be raised as a medium priority issue (which feels like a coded message for ‘we’re not even going to look at this’). This kind of thing drives me crazy, and it’s exactly the kind of problem that Polly can at least mitigate.

I’ve also met people who are doing really interesting work with hardware devices in .NET, and need to be able to handle hardware that can only deal with single threads – Polly allows the application to handle occasions when it doesn’t receive an acknowledgement from the hardware by waiting for a while and then retrying.

Let’s get to some code

I’ve pushed all of the code below to a repo on my GitHub, so you can pull it locally and step through it yourself.

First, a couple of harnesses to simulate a flakey web-service

So I’ve written a simple (and really awful) web-service project to simulate random transient errors. The service is just meant to return what day it is, but it’ll only work about two times out of three. The rest of the time it’ll return either a 404 (Not Found), a 503 (Service Unavailable), or it’ll hang for 10 seconds and then return a 504 (Service timed out).

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
 
namespace WorldsWorstWebService.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class WeekDayController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            // Manufacture 404, 503 and 504 errors for about a third of all responses
            var randomNumber = new Random();
            var randomInteger = randomNumber.Next(0, 8);
 
            switch (randomInteger)
            {
                case 0:
                    Debug.WriteLine("Webservice:About to serve a 404...");
                    return StatusCode(StatusCodes.Status404NotFound);
 
                case 1:
                    Debug.WriteLine("Webservice:About to serve a 503...");
                    return StatusCode(StatusCodes.Status503ServiceUnavailable);
 
                case 2:
                    Debug.WriteLine("Webservice:Sleeping for 10 seconds then serving a 504...");
                    Thread.Sleep(10000);
                    Debug.WriteLine("Webservice:About to serve a 504...");
 
                    return StatusCode(StatusCodes.Status504GatewayTimeout);
                default:
                {
                    var formattedCustomObject = JsonConvert.SerializeObject(
                        new
                        {
                            WeekDay = DateTime.Today.DayOfWeek.ToString()
                        });
 
                    Debug.WriteLine("Webservice:About to correctly serve a 200 response");
 
                    return Ok(formattedCustomObject);
                }
            }
        }
    }
}

I’ve also written another web application project that consumes this service using Flurl.

If you’re interested in Flurl and Restful web services, I’ve written more about using it here.

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Flurl.Http;
using Microsoft.AspNetCore.Mvc;
using MyWebsite.Models;
 
namespace MyWebsite.Controllers
{
    public class HomeController : Controller
    {
        public async Task<IActionResult> Index()
        {
            try
            {
                var weekday = await "https://localhost:44357/api/weekday"
                    .GetJsonAsync<WeekdayModel>();
 
                Debug.WriteLine("[App]: successful");
 
                return View(weekday);
            }
            catch (Exception e)
            {
                Debug.WriteLine("[App]: Failed - " + e.Message);
                throw;
            }
        }
    }
}

So I carried out a simple experiment – running these projects and hitting my website 20 times, I mostly got successful responses, but I still got a load of failures. I’ve pasted the debug log below.

[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 504 (Gateway Timeout): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: Failed - Call failed with status code 404 (Not Found): GET https://localhost:44357/api/weekday

So out of 20 page hits, my test web app failed 6 times – about a 30% failure rate. That’s pretty poor (and about consistent with what we expect from the flakey web service).

Let’s say I don’t control the behaviour of the web services upstream of my web app, so I can’t change the reason why my web app is failing – but let’s see if Polly allows me to reduce the number of failures that my web app users see.

Wiring up Polly

First let’s design some rules, also known as ‘policies’

So what’s a ‘policy’? Basically it’s just a rule that’ll help mitigate the intermittent problem.

For example – the web service frequently delivers 404 and 503 messages, but it’s back up again quickly. So a policy could be:

Retry Policy: When the web services returns an unsuccessful HTTP code, wait a second and try again. If it still fails, wait three seconds and try again, and if it still fails, then wait five more seconds and try one more time. If it fails after that, the service is dead and we need to deal with the error.

We also know that the web service hangs for 10 seconds before delivering a 504 timeout message. I don’t want my customers to wait this long – after a couple of seconds, I’d like my app to give up, and execute the ‘Retry Policy’ above.

Timeout Policy: When I’ve been waiting for a response for longer than 2 seconds, cut my losses and execute the Retry Policy.

Wrapping these policies together forms a ‘Policy Strategy’.

So the first step is to install the Polly NuGet package in the web app project:

Install-Package Polly

Polly is an open source project hosted on GitHub, with a BSD licence. It’s also a member of the .NET Foundation.

So what would these policies look like in code? The timeout policy is like the code below, where we can just pass the number of seconds to wait as a parameter:

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(2);

There’s also an overload, and I’ve specified some debug messages using that below.

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(2, (context, timeSpan, task) =>
{
    Debug.WriteLine($"[App|Policy]: Timeout delegate fired after {timeSpan.Seconds} seconds");
    return Task.CompletedTask;
});

The retry policy is a little different from the timeout policy:

  • I first specify the conditions under which I should retry – there must be an unsuccessful HTTP status code, or there must be a timeout exception.
  • Then I can specify how to wait and retry – first wait 1 second before retrying, then wait 3 seconds, then wait 5 seconds.
  • Finally I’ve used the overload with a delegate to write comments to debug – this is shown in the code below.

var retryPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .Or<TimeoutRejectedException>()
    .WaitAndRetryAsync(new[]
        {
            TimeSpan.FromSeconds(1),
            TimeSpan.FromSeconds(3),
            TimeSpan.FromSeconds(5)
        },
        (result, timeSpan, retryCount, context) =>
        {
            Debug.WriteLine($"[App|Policy]: Retry delegate fired, attempt {retryCount}");
        });

And I can bundle these policies together as a single policy strategy like this:

var policyStrategy = Policy.WrapAsync(RetryPolicy, TimeoutPolicy);

The order matters here – the first policy passed to WrapAsync is the outermost one, so the retry policy wraps the timeout policy. That means the 2 second timeout applies to each individual try, and a timeout triggers the retry logic rather than failing the whole call.

I’ve grouped these policies in their own class and pasted the code below.

public static class Policies
{
    private static TimeoutPolicy<HttpResponseMessage> TimeoutPolicy
    {
        get
        {
            return Policy.TimeoutAsync<HttpResponseMessage>(2, (context, timeSpan, task) =>
            {
                Debug.WriteLine($"[App|Policy]: Timeout delegate fired after {timeSpan.Seconds} seconds");
                return Task.CompletedTask;
            });
        }
    }
 
    private static RetryPolicy<HttpResponseMessage> RetryPolicy
    {
        get
        {
            return Policy
                .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
                .Or<TimeoutRejectedException>()
                .WaitAndRetryAsync(new[]
                    {
                        TimeSpan.FromSeconds(1),
                        TimeSpan.FromSeconds(3),
                        TimeSpan.FromSeconds(5)
                    },
                    (result, timeSpan, retryCount, context) =>
                    {
                        Debug.WriteLine(
                            $"[App|Policy]: Retry delegate fired, attempt {retryCount}");
                    });
        }
    }
 
    public static PolicyWrap<HttpResponseMessage> PolicyStrategy => Policy.WrapAsync(RetryPolicy, TimeoutPolicy);
}

Now I want to apply this Policy Strategy to every outgoing call to the 3rd party web service.

How do I apply these policies when I’m using Flurl?

One of the things I really like about using Flurl to consume 3rd party web services is that I don’t need to instantiate an HttpClient, or worry about running out of available sockets every time I make a call – Flurl handles all of this in the background for me.

But that also means it’s not immediately obvious how I can configure calls to the HttpClient used in the background so that my policy strategy is applied to each call.

Fortunately Flurl provides a way to do this by adding a few new classes to my web app project, and a configuration instruction. I can configure Flurl’s settings in my web app’s Startup file to make it use a different implementation of Flurl’s default HttpClientFactory (which overrides how HTTP messages are handled).

public void ConfigureServices(IServiceCollection services)
{
    //...other service configuration here
 
    FlurlHttp.Configure(settings => settings.HttpClientFactory = new PollyHttpClientFactory());
}

The PollyHttpClientFactory is an extension of Flurl’s default HttpClientFactory. This overrides how HttpMessages are handled, and instead uses our own PolicyHandler.

public class PollyHttpClientFactory : DefaultHttpClientFactory
{
    public override HttpMessageHandler CreateMessageHandler()
    {
        return new PolicyHandler
        {
            InnerHandler = base.CreateMessageHandler()
        };
    }
}

And the PolicyHandler is where we apply our rules (the policy strategy) to outgoing HTTP requests.

public class PolicyHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Policies.PolicyStrategy.ExecuteAsync(ct => base.SendAsync(request, ct), cancellationToken);
    }
}

Now let’s see if this improves things

With the policies applied to requests to the 3rd party web service, I repeated the earlier experiment and hit my application again 20 times.

[App]: successful
[App]: successful
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App]: successful
[App]: successful
[App]: successful

This time, my users would have experienced no application failures in those 20 page hits. But all those orange lines are the times that the web service failed, and our policy was to try again – which eventually led to a successful response from my web app.

In fact, I went on to hit the page 100 times and only saw two errors in total, so the total failure rate that my users experience now is at about 2% – way better than the 30% failure rate experienced originally.

Obviously this is a very contrived example – real world examples are likely to be a bit more complex. And your rules and policies will be different to mine. Instead of retrying, maybe you want to fallback to a different action (e.g. hit a different web service, pull from a cache etc.) – and Polly has its own fallback mechanism to do this. You’ll have to design your own rules and policies to handle the particular failure modes that you face.
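
As a flavour of that, a fallback policy might look something like the hedged sketch below – if the wrapped policies still fail, it serves a canned response instead of throwing an exception (the stubbed JSON content here is hypothetical, and you’d need System.Net and System.Net.Http for HttpStatusCode and StringContent):

var fallbackPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .Or<TimeoutRejectedException>()
    .FallbackAsync(new HttpResponseMessage(HttpStatusCode.OK)
    {
        // hypothetical stubbed content to serve when every retry has failed
        Content = new StringContent("{ \"weekDay\": \"Unknown\" }")
    });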

Wrapping up

I had a couple of aims when writing this post – first of all, I wanted to come up with a couple of different scenarios for how Polly could be used in your application. I mostly work with web applications and web services, and I also like using Flurl for accessing these services, so that’s what this article focusses on. But I’ve just scratched the surface here – Polly can do way more than that. Check out the Polly Wiki to find out more about it, or look at the samples.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!