.net core, C# tip, MVC

Adding middleware to your .NET Core MVC pipeline to prettify HTML output with AngleSharp

I was speaking to a friend of mine recently about development and server side generated HTML, and they said that one thing they would love to do is improve how HTML code looks when it’s rendered. Often when they look at the HTML source of a page, the indentation is completely wrong, and there are huge amounts of whitespace and unexpected newlines.

And I agreed – I’ve seen that too. Sometimes I’ve been trying to debug an issue in the rendered output HTML, and one of the first things I do is format and indent the HTML code so I can read and understand it. And why not – if my C# classes weren’t indented logically, I’d find them basically unreadable. Why should my HTML be any different?

So it occurred to me that I might be able to find a way to write some middleware for my .NET Core MVC website that formats and indents rendered HTML for me by default.

This post is just a fun little experiment for me – I don’t know if the code is performant, or if it scales. Certainly on a production site I might want to minimise the amount of whitespace in my HTML to improve download speeds rather than just change the formatting.

Formatting and Indenting HTML

I’ve seen a few posts asking how to do this with HtmlAgilityPack – but even though HtmlAgilityPack is amazing, it won’t format HTML.

I’ve also seen people recommend a .NET wrapper for the Tidy library, but I’m going to use AngleSharp. AngleSharp is a .NET library that allows us to parse HTML, and contains a super useful formatter called PrettyMarkupFormatter.

var parser = new AngleSharp.Html.Parser.HtmlParser();
var document = parser.ParseDocument("<html><body>Hello, world</body></html>");
 
var sw = new StringWriter();
document.ToHtml(sw, new AngleSharp.Html.PrettyMarkupFormatter());
 
var indentedHtml = sw.ToString();

And I can encapsulate this in a function as below:

private static string PrettifyHtml(string newContent)
{
    var parser = new AngleSharp.Html.Parser.HtmlParser();
    var document = parser.ParseDocument(newContent);
 
    var sw = new StringWriter();
    document.ToHtml(sw, new AngleSharp.Html.PrettyMarkupFormatter());
    return sw.ToString();
}

Adding middleware to modify the HTML output

There’s lots of information on writing ASP.NET Core middleware here and I can build on this and the AngleSharp code to re-format the rendered HTML. The code below allows me to:

  • Check I’m in my development environment,
  • Read the rendered HTML from the response,
  • Correct the indentation using AngleSharp and the new PrettifyHtml method, and
  • Write the formatted HTML back to the Response.
if (env.IsDevelopment())
{
    app.Use(async (context, next) =>
    {
        var body = context.Response.Body;
 
        using (var updatedBody = new MemoryStream())
        {
            context.Response.Body = updatedBody;
 
            await next();
 
            context.Response.Body = body;
 
            updatedBody.Seek(0, SeekOrigin.Begin);
            var newContent = new StreamReader(updatedBody).ReadToEnd();
 
            await context.Response.WriteAsync(PrettifyHtml(newContent));
        }
    });
}

And now the HTML generated by my MVC application is formatted and indented correctly.
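
For completeness, here’s a minimal sketch of where this registration might sit – I’m assuming an ASP.NET Core 2.x-style Startup class (so IHostingEnvironment rather than the later IWebHostEnvironment), and the UseMvc route below is just the standard default, not something from a real project:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.Use(async (context, next) =>
        {
            // ...the response-rewriting code shown above goes here...
            await next();
        });
    }

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}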

Wrapping up

This post is really just a proof of concept and for fun – I’ve restricted the effect to my development environment in case it doesn’t scale well. But hopefully this is useful to anyone trying to format HTML, or intercept an HTML response to modify it.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

Continue reading

.net, C# tip

Instantiating a C# object from a string using Activator.CreateInstance in .NET

Recently I hit an interesting programming challenge – I need to write a library which can instantiate and use a C# class object from a second C# assembly.

Sounds simple enough…but the catch is that I’m only given some string information about the class at runtime, such as the class name, its namespace, and what assembly it belongs to.

Fortunately this is possible using the Activator.CreateInstance method in C#. First I need to format the namespace, class name and assembly name in a special way – as an assembly qualified name.

Let’s look at an example – the second assembly is called “MyTestProject” and the object I need to instantiate from my library looks like the one below.

namespace SampleProject.Domain
{
    public class MyNewTestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "My name is MyNewTestClass";
        }
    }
}

This leads to the assembly qualified name:

"SampleProject.Domain.MyNewTestClass, MyTestProject"

Note that the format here is along the lines of:

"{namespace}.{class name}, "{assembly name}"

Another way of finding this assembly qualified name is to run the code below:

Console.WriteLine(typeof(MyNewTestClass).AssemblyQualifiedName);

This will output something like:

SampleProject.Domain.MyNewTestClass, MyTestProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

There’s extra information about Version, Culture and PublicKeyToken – you might need to use this if you’re targeting different versions of a library, but in my simple example I don’t need it so I won’t elaborate further here.

Now I have the qualified name of the class, I can instantiate the object in my library using Activator.CreateInstance, as shown below:

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType);

But it’s a bit difficult to do anything useful with the instantiated object – the compiler only knows it has the type Object, and I suppose we could call the ToString() method, but for a new class that’s of limited use. How can we access properties and methods in the instantiated class?

One way is to use the dynamic keyword to manipulate the new object

We could use the dynamic keyword with the instantiated object, and set/get properties or call methods dynamically, like in the code below:

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

dynamic instantiatedObject = Activator.CreateInstance(objectType);
 
// set a property value
instantiatedObject.Name = "Test Name";
 
// get a property value
string name = instantiatedObject.Name;
 
// call a method - this outputs "My name is MyNewTestClass"
Console.Write(instantiatedObject.DoSpecialThing());
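
One caveat with the dynamic approach is worth calling out: member names are only resolved at runtime, so a typo compiles happily and then fails when the code executes. The sketch below shows what that failure looks like – the deliberately misspelled method name is my own example, and the exception type comes from the Microsoft.CSharp.RuntimeBinder namespace:

try
{
    // 'DoSpecialThings' (note the extra 's') doesn't exist on MyNewTestClass -
    // this compiles, but throws at runtime because dynamic defers member resolution
    instantiatedObject.DoSpecialThings();
}
catch (Microsoft.CSharp.RuntimeBinder.RuntimeBinderException exception)
{
    Console.WriteLine(exception.Message);
}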

Another way to manipulate the instantiated object is by using a shared interface

We could make the original object implement an interface shared across all projects. If I add the interface below to my project…

namespace SampleProject.Domain
{
    public interface ITestClass
    {
        int Id { get; set; }
        string Name { get; set; }
        string DoSpecialThing();
    }
}

…then our original object could implement this interface:

namespace SampleProject.Domain
{
    public class MyNewTestClass : ITestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "My name is MyNewTestClass";
        }
    }
}

So if we happen to know at design time that the object we want to instantiate implements the ITestClass interface, we can access methods exposed by that interface – there’s no need to use the dynamic keyword now.

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType) as ITestClass;
 
// set a property value
instantiatedObject.Name = "Test Name";
 
// get a property value
var name = instantiatedObject.Name;
 
// call a method - this outputs "My name is MyNewTestClass"
Console.Write(instantiatedObject.DoSpecialThing());

And of course if I have another domain object which implements the same interface but has different behaviour, like the one below…

namespace SampleProject.Domain
{
    public class DifferentTestClass : ITestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "This is a different special thing";
        }
    }
}

…then I can use similar code to instantiate and manipulate the object – I just need to use the different object’s assembly qualified name:

const string objectToInstantiate = "SampleProject.Domain.DifferentTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType) as ITestClass;
 
// set a property value
instantiatedObject.Name = "Other Test Name";
 
// get a property value
string name = instantiatedObject.Name;
 
// call a method - but this now outputs "This is a different special thing"
Console.Write(instantiatedObject.DoSpecialThing());

Hopefully this helps anyone else facing a similar challenge – it’s worth bearing in mind that reflection is very powerful, but can also be a bit slower than other techniques.
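
If that performance cost matters in your scenario, one simple mitigation – a sketch of my own, not something from the code above – is to cache the resolved Type, so the string lookup in Type.GetType only happens once per assembly qualified name:

using System;
using System.Collections.Concurrent;
 
public static class CachedInstantiator
{
    private static readonly ConcurrentDictionary<string, Type> TypeCache =
        new ConcurrentDictionary<string, Type>();
 
    // ITestClass is the shared interface from earlier in this post
    public static ITestClass Create(string assemblyQualifiedName)
    {
        // resolve the Type once, then re-use it for subsequent instantiations
        var objectType = TypeCache.GetOrAdd(assemblyQualifiedName, name => Type.GetType(name));
        return Activator.CreateInstance(objectType) as ITestClass;
    }
}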

.net, .net core, Azure, Azure DevOps, Azure DevOps Boards, Flurl

How to delete a TestCase from Azure DevOps boards using .NET, Flurl and the Azure DevOps Restful API

So here’s a problem…

I’ve been working with Azure DevOps and finding it really great – but every now and again I hit a roadblock and feel like I’m on the edge of what’s possible with the platform.

For instance – I’ve loaded work items and test cases into my development instance for some analysis before going to my production instance, and now I’d like to delete all of them. Sounds like a simple thing to do from the UI – I’ve selected multiple work items before, clicked on the ellipsis on one item and selected ‘Delete’ from the menu that appears.

bulk delete

Except that sometimes it doesn’t work out like that. Where’s the delete option gone in the menu below?

bulk delete fail

Right now you can only delete one test case at a time through the Azure DevOps web user interface

You can only delete test cases one at a time through the Azure DevOps web UI at the moment, and you can’t do it from the WorkItem list view. To delete a test case, select it to display its detailed view, and then select the ellipsis at the top right of this view to reveal an action menu (as shown below). From this menu you can select the option with the text ‘Permanently delete’.

delete test case

Then you’ll be presented with a dialog asking you to confirm the deletion and enter the Test Case ID to confirm your intent.

perm delete

This is a lot of work if you’ve got a few (or a few hundred) test cases to delete.

Fortunately, this isn’t the only option available – I can .NET my way out of trouble.

You can also use .NET, Flurl and the Azure DevOps Restful API to delete test cases

Azure DevOps also provides a Restful interface which has comprehensive coverage of the functions available through the web UI – and sometimes a bit more. This is one of those instances where using the Restful API gives me the flexibility that I’m looking for.

I’ve previously written about using libraries with .NET to simplify accessing Restful interfaces – one of my favourite libraries is Flurl, because it makes it really easy for me to construct a URI endpoint and call Restful verbs in a fluent way.

The code below shows a .NET method where I’ve called the Delete verb on a Restful endpoint – this allows me to delete test cases by Id from my Azure DevOps Board.

using System.Net.Http;
using System.Threading.Tasks;
using Flurl;
using Flurl.Http;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    public class TestCaseProcessor
    {
        public static async Task<HttpResponseMessage> Delete(int id, string projectUri, string projectName,
            string personalAccessToken)
        {
            var deleteUri = Url.Combine(projectUri, projectName, "_apis/test/testcases/", id.ToString(),
                "?api-version=5.0-preview.1");
 
            var responseMessage = await deleteUri
                .WithBasicAuth(string.Empty, personalAccessToken)
                .DeleteAsync();
 
            return responseMessage;
        }
    }
}

And it’s really easy to call this method, as shown in the code below – in addition to the test case ID, I just need to provide my Azure DevOps URI, my project name and a personal access token.

using System;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string testToken = "[[my personal access token]]";
            const string projectName = "Corvette";
            const int testCaseToDelete = 124;
 
            var responseMessage = TestCaseProcessor.Delete(testCaseToDelete, uri, projectName, testToken).Result;
 
            Console.WriteLine("Response code: " + responseMessage.StatusCode);
        }
    }
}

So now if I want to delete test cases in bulk, I just need to iterate through the list of IDs and call this method for each test case ID – which is much faster for me than deleting many test cases through the UI.
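
A minimal sketch of that loop is below – the IDs in the array are hypothetical ones for illustration:

var testCaseIdsToDelete = new[] { 124, 125, 126 }; // hypothetical test case IDs
 
foreach (var testCaseId in testCaseIdsToDelete)
{
    var responseMessage = TestCaseProcessor.Delete(testCaseId, uri, projectName, testToken).Result;
    Console.WriteLine($"Deleted test case {testCaseId}: {responseMessage.StatusCode}");
}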

Wrapping up

Deleting test cases from Azure DevOps is a bit more difficult through the web UI than deleting other types of WorkItems – fortunately the Restful interface is available, and I can use it from a .NET application to delete test cases quickly and easily. Hopefully this is useful to anyone who’s working with Azure DevOps Boards and needs to delete test cases.

.net, C# tip

An extension method for .NET enumerations that uses the DescriptionAttribute

Sometimes little improvements make a big difference to my day to day programming. Like when I discovered the String.IsNullOrEmpty method (I think it came into .NET Framework way back in v2.0) – when I was testing for non-empty strings, I was able to use that super useful language feature without having to remember to write two comparisons (like myString != null && myString != string.Empty). I’ve probably used that bit of syntactical candy in every project I’ve worked on since then.

I work with enumerations all the time too. And a lot of the time, I need a text representation of the enumeration values which is a bit more complex than the value itself. This is where I find the DescriptionAttribute so useful – it’s a place supplied natively by .NET where I can add that text representation.

I’ve provided a simple example of this kind of enumeration with descriptions for each item below – maybe for a production application I’d localise the text, but you get the idea.

// the Description attribute comes from the System.ComponentModel namespace
public enum Priority
{
    [Description("Highest Priority")]
    Top,
    [Description("Middle Priority")]
    Medium,
    [Description("Lowest Priority")]
    Low
}

But I’ve always felt I’d love to have native access to a method that would give me the description. Something like:

Priority.Medium.ToDescription();

Obviously it’s really easy to write an extension method like this, but every time I need it on a new project for a new client, I have to google for how to use types and reflection to access the attribute method, and then write that extension method. Then I think about the other extension methods that I might like to write (what about ToDisplayName(), that might be handy…), and how to make it generic so I can extend it later, and what about error handling…

…and anyway, this process always includes the thought, “I’ve lost count how many times I’ve done this for my .NET enumerations, why don’t I write this down somewhere so I can re-use it?”

So I’ve written it down below.

using System;
using System.ComponentModel;
using System.Linq;
 
public static class EnumerationExtensions
{
    public static string ToDescription(this Enum enumeration)
    {
        var attribute = GetText<DescriptionAttribute>(enumeration);
 
        return attribute.Description;
    }
 
    public static T GetText<T>(Enum enumeration) where T : Attribute
    {
        var type = enumeration.GetType();
        
        var memberInfo = type.GetMember(enumeration.ToString());
 
        if (!memberInfo.Any())
            throw new ArgumentException($"No public members for the argument '{enumeration}'.");
 
        var attributes = memberInfo[0].GetCustomAttributes(typeof(T), false);
 
        if (attributes == null || attributes.Length != 1)
            throw new ArgumentException($"Can't find an attribute matching '{typeof(T).Name}' for the argument '{enumeration}'");
 
        return attributes.Single() as T;
    }
}
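
Using it is as simple as the wish-list code from earlier – assuming the Priority enumeration above, this writes “Highest Priority” to the console:

Console.WriteLine(Priority.Top.ToDescription());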

I’ve split it into two methods, so if I want to create a ‘ToDisplayName()’ extension later, it’s really easy for me to do that. I might include it in a NuGet package so it’s even easier for me to re-use later. I’ll update this post if I do.
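
As a sketch of how that might look – and assuming the enumeration members are decorated with the DisplayAttribute from System.ComponentModel.DataAnnotations, since the older DisplayNameAttribute can’t be applied to enum fields – the extension could be as simple as:

// a sketch only - assumes enum members carry [Display(Name = "...")] from
// the System.ComponentModel.DataAnnotations namespace
public static string ToDisplayName(this Enum enumeration)
{
    var attribute = GetText<DisplayAttribute>(enumeration);
 
    return attribute.Name;
}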

Wrapping up

This was a quick blog post covering a simple area, about something that bugs me – having to rewrite a simple function each time I start a new project. I can imagine a bunch of reasons why this isn’t a native extension in .NET Framework/Core – and my implementation is an opinionated extension. I’m sure there’d be a lot of disagreement about what good practice is. Anyway, this class works for me – I hope it’s useful to you also.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, C# tip

Correctly reading encoded text with the StreamReader in .NET

So here’s a problem…

I’ve got a list of my team members in a text file which I need to parse and process in .NET. The file is pretty simple – it’s called MyTeamNames.txt and it contains the following names:

  • Adèle
  • José
  • Chloë
  • Auróra

I created the text file on a Windows 10 machine using Notepad, and saved the file with the default ANSI encoding.

ansi save

I’m going to read names from this text file using a .NET Framework StreamReader – there’s a simple and clear example on the docs.microsoft.com site describing how to do this. So I’ve written a spike of code to use a StreamReader – I’ve more or less copied directly from the link above – and it looks like this:

using System;
using System.IO;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main()
        {
            try
            {
                const string myTeamNamesFile = @"C:\Users\jeremy.lindsay\Desktop\MyTeamNames.txt";
 
                // Open the text file using a stream reader.
                using (var streamReader = new StreamReader(myTeamNamesFile))
                {
                    // Read the stream to a string, and write the string to the console.
                    var line = streamReader.ReadToEnd();
                    Console.WriteLine(line);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }
    }
}

But when I run the code, there’s a problem. I expected to see my team’s names written to the console – but instead all those names now have question marks scattered throughout them, as shown in the image below.

wrongnames

What’s gone wrong?

The StreamReader object and original text file need to have compatible encoding types

It’s pretty obvious that the question marks relate to the non-ASCII characters, as each name on my list has an accent, a grave, or an umlaut/diaeresis.

The problem in my .NET Framework console application is that my StreamReader is assuming that my file is encoded one way when it’s actually encoded in another way, and it doesn’t know what to do with some characters, so it uses a question mark as a default.

Can you detect the file’s encoding type with .NET?

Big thanks to Erich Brunner for pointing out a new bit of information to me about the default encoding type – I’ve updated this post to reflect his helpful steer.

It turns out detecting the file’s encoding type is quite a difficult thing to do in .NET. But as I mentioned earlier, I know I saved this file with the encoding type ANSI selected from the dropdown list in Notepad.
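
(One thing .NET does give us out of the box: the StreamReader constructor can detect a byte order mark for UTF-8 and UTF-16/UTF-32 files. That wouldn’t have helped here though, because an “ANSI” file has no byte order mark – the snippet below is just to show the parameter exists.)

// BOM detection is on by default, and only helps for files that actually have a BOM
var streamReader = new StreamReader(myTeamNamesFile, detectEncodingFromByteOrderMarks: true);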

Interestingly, this doesn’t mean that I’ve saved the file as ANSI – this is a misnomer. From the MSDN glossary:

“The term “ANSI” as used to signify Windows code pages is a historical reference, but is nowadays a misnomer that continues to persist in the Windows community. The source of this comes from the fact that the Windows code page 1252 was originally based on an ANSI draft—which became International Organization for Standardization (ISO) Standard 8859-1. “ANSI applications” are usually a reference to non-Unicode or code page–based applications.”

There are a couple of different options open to me here:

Change the file’s encoding type and save it with a specified encoding – e.g. UTF-8, or Unicode.

This is very straightforward – I’ve chosen to just select the UTF-8 option from the dropdown list in NotePad’s ‘Save As…’ dialog box.

save as utf-8

This time when I run the code above, the names are displayed correctly on the console, as shown below.

correct display

Alternatively, try using the StreamReader overload to specify the encoding type.

I can use an overload where I specify the encoding type, which comes from System.Text.Encoding.

var streamReader = new StreamReader(myTeamNamesFile, encodingType)

But what values do these encoding types resolve to? Well, that depends on whether I use the .NET Framework or .NET Core. I’ve listed the values below – notice that Encoding.Default is different depending on which one you use.

Note the values in the row for “Encoding.Default”, because this is a special case.

System.Encoding value        .NET Framework Encoding Header Name    .NET Core Encoding Header Name
Encoding.Default             Windows-1252 (on my machine)           utf-8
Encoding.ASCII               us-ascii                               us-ascii
Encoding.BigEndianUnicode    utf-16BE                               utf-16BE
Encoding.UTF32               utf-32                                 utf-32
Encoding.UTF7                utf-7                                  utf-7
Encoding.UTF8                utf-8                                  utf-8
Encoding.Unicode             utf-16                                 utf-16

So let’s say I use Encoding.Default in my .NET Framework console application, as shown in the code snippet below.

var streamReader = new StreamReader(myTeamNamesFile, Encoding.Default)

And now my names are correctly rendered in the Console, as shown in the image below. This makes sense – the text file with my team names was saved with “ANSI” encoding, which we know actually corresponds to Windows Code Page 1252. The default encoding on my own Windows machine turns out to also be the 1252 encoding (as shown in the table above), so I’m instructing my StreamReader to use the 1252 encoding when reading a file which has been encoded as “ANSI” (also known as 1252). They match up, and the text displays correctly.

correct display

Problem solved, right? Well, no, not really.

Microsoft actually do not recommend using Encoding.Default. From docs.microsoft.com:

“Different computers can use different encodings as the default, and the default encoding can change on a single computer. If you use the Default encoding to encode and decode data streamed between computers or retrieved at different times on the same computer, it may translate that data incorrectly. In addition, the encoding returned by the Default property uses best-fit fallback to map unsupported characters to characters supported by the code page. For these reasons, using the default encoding is not recommended.”

If I target .NET Core instead of .NET Framework in my console application – with exactly the same code – I’m back to displaying question marks in my console text.

wrongnames

So even though telling the StreamReader in my .NET Framework console application to use Encoding.Default seems to work, it’s a case of it only working on my machine – it might not work on someone else’s machine. It certainly doesn’t work in .NET Core.

So it seems to me that saving my original text file as UTF-8 or Unicode is a better option.
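
Once the file is saved as UTF-8, I can also be explicit about the encoding in code rather than relying on any defaults – a minimal sketch (Encoding.UTF8 comes from the System.Text namespace):

using (var streamReader = new StreamReader(myTeamNamesFile, Encoding.UTF8))
{
    var line = streamReader.ReadToEnd();
    Console.WriteLine(line);
}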

And as a final reason to save the text file to UTF-8 or Unicode, let’s say I add a new team member, called Łukasz. If I try to save my file with ANSI encoding type, I get this warning:

unicode warning

If I press on and save the file as ANSI, the text for “Łukasz” is changed to “Lukasz” (note the change in the first character). But if I save the file as UTF-8 or Unicode, the name stays the same, including the initial “Ł”.

Wrapping up

It’s pretty common to be asked to read and process text files with non-ASCII characters, and even though .NET provides some really helpful APIs to assist with this, compatibility issues can still occur. This is a complex area, with variations across the .NET Framework and .NET Core runtimes.

I think the best solution is to change the encoding type of the original document to be Unicode or UTF-8, rather than ANSI (more correctly, Windows Code Page 1252). Telling the StreamReader to use Encoding.Default also worked for me, but it might not work on someone else’s machine with different defaults, leading to incorrect translations.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Azure, C# tip

Setting up relationships between work items on Azure DevOps boards, and using .NET to read these relationships

So here’s a question…

How do you set up relationships between work items, and display this relationship in Azure DevOps?

There’s a well known relationship between epics, features, user stories and tasks/bugs in the Agile process, but on the ‘Work Items’ screen, Azure DevOps lists them without showing the relationship – like in the screen below.

workliist

Display relationships using the Backlogs view in Azure DevOps Boards

The thing is that the Work Items screen just lists all the work items – I prefer to view my work items in the Backlogs screen which visually represents the relationship between them.

But how do you set up relationships between work items?

Let’s work an example through from the start. I set up a new project in Azure DevOps – once I created the project, I’m shown the Summary screen, and for me this looks like the screenshot below.

project home

On the left hand navigation menu, I click on the ‘Boards’ item, and when this expands, I select the ‘Backlogs’ sub-item. This presents me with a screen where I’ve a few options, like the one at the top left – ‘New Work Item’.

backlog view

But when I click on this ‘New Work Item’ button, the pop-up only allows me to create a work item with type of ‘User Story’ (as shown below). This is not what I want – I want to create an Epic.

user story

To do this, I have to change some defaults in my Project Settings. At the bottom left of my screen, I click on the ‘Project settings’ button, and then select the ‘Team configuration’ sub-menu item which sits under the ‘Boards’ menu heading. This shows me a screen like the one below.

team configuration

There’s a section on this page which shows me the navigation levels available to me – by default, I don’t have the Epics checkbox ticked. So I can just tick the box as shown below to make this available. No need to click save anywhere – that setting is automatically saved back to the cloud.

backlogs with epic

Now if I go back to the ‘Backlogs’ menu item under ‘Boards’ in my project’s left hand navigation menu, I need to select the dropdown list in the top right of the screen – my default setting here is ‘Stories’, but I can open the menu and now choose ‘Epics’ instead.

backlogs with epics dropdown

Now when I click on the ‘New Work Item’ button, I can create an Epic, and enter in the epic’s title, as shown below.

my epic title 2

And I’ve created my first epic in an Azure DevOps Board!

Ok, but what about nesting other items under that Epic?

There are a few different ways, but it’s straightforward (when you know how) – the way I like to do this is by selecting the ‘+’ button on the right hand side of my Epic. If you hover over this ‘+’ button, a tooltip appears that says ‘Add Feature’, and clicking on the button does exactly that.

add feature hover

Once you’ve clicked the ‘+’ button, a large dialog appears where you can add feature details – and note that in the bottom right of this dialog, there’s a ‘Related Work’ section, which shows the Epic we previously created as a parent.

new features

After clicking the blue ‘Save & Close’ button on the top right of the New Feature dialog, you’ll be taken back to the project board’s Backlog view, and you can see the feature that you just created below the epic we created previously, and it’s indented one place to the right to visually represent the parent-child relationship, as shown below.

add user story dropdown

And if you hover over the ‘+’ button to the left of the feature you just created, you’ll see the hint that this button now allows you to create a new user story. So if you click on ‘+’, you’ll have a similar experience to before, except the dialog that pops up is for a work item type of ‘User Story’. And you can see the relationship between this and the parent feature again by looking in the bottom right corner of the dialog, in the ‘Related Work’ section.

my new user story

And just to finish off this section, when you save that user story you’ll be taken to the backlog screen, and again see the user story sitting below its parent feature, indented one place to the right, as shown below. From this user story, you can click on the ‘+’ button on the left, and this time you’ve got a couple of options – either create a bug or a task with that user story as a parent.

show task and bug

I went ahead and created a task and a bug – the experience of creating them is identical to before: a dialog pops up for the type you select, with any related work detailed in the bottom right of the dialog box. So the image below shows my 5 new work items (an epic, a feature, a story, a task and a bug), and it’s easy to see the relationship between them by how they’re indented relative to each other.

indented

What about getting these items in .NET – how do I find out what items they’re related to?

I’ve previously written about creating Azure DevOps work items using the .NET framework, and you can re-apply some of the same principles to read work items into .NET objects.

I created a .NET Framework console app and installed the required NuGet packages using:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

Then I used the code below to read back information about item 64 in my backlog – this is a user story which has a parent feature, and two children – a task and a bug.

So I expected the code below to tell me that there were a list of three relations in the workItem.Relations property.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // uh oh - reports there are zero relations!
        }
    }
}

But this code doesn’t show relationships – why isn’t it working?

It turns out that the GetWorkItemAsync method doesn’t return relations by default. Instead, it has an overload where you can specify what extra information to return, using the WorkItemExpand enumeration. In the code below I’ve chosen to return everything using:

expand: WorkItemExpand.All

But if I only wanted to return relations I could use:

expand: WorkItemExpand.Relations

The code below now correctly reports there are three items related to workitem 64.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId, expand: WorkItemExpand.All).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // now correctly reports there are 3 relations
        }
    }
}

The relations list now correctly reports the three related work items – the parent feature, and the child task and bug.
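
And if I want to see what those relations actually are, I can loop over the list – a short sketch using the Rel and Url properties exposed by each relation:

// each relation reports its link type and the URL of the related work item -
// hierarchy links appear as System.LinkTypes.Hierarchy-Forward/-Reverse
foreach (var relation in workItem.Relations)
{
    Console.WriteLine($"{relation.Rel}: {relation.Url}");
}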

Wrapping up

This post has been about how to create work items with a hierarchical relationship using the Azure DevOps web user interface, and view them using the Backlog view in Azure DevOps Boards. I’ve also written about how to read these items and relationships between them using the .NET framework – I hope this helps!


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

GeoJson, GIS, SQL Server

Use SQL Server to import GeoJSON files, and convert geo-data back into GeoJSON

This post is about how to use SQL Server to load GeoJSON spatial data in a SQL Server table, and how to export geo-data from a SQL Server table back into valid GeoJSON.

So here’s a problem…

You’re working with some geodata which is stored in files formatted as JSON – specifically GeoJSON. And you need to query this data, and make a few modifications too. But working with this textual data by hand is kind of tedious – it’d be much nicer to be able to query and manipulate this data using software with dedicated JSON querying functions.

SQL Server’s JSON capabilities can help solve this problem

I’ve mentioned in my previous couple of posts that I’ve been working with geodata recently, specifically in the GeoJSON format. GeoJSON is just JSON, but it adheres to a particular standard to describe a geographical feature.

You can read more about GeoJSON at http://geojson.org/.

An example of a GeoJSON geographical feature is shown below:

{
  "type": "Feature",
  "properties": {
    "BuildingReference": "BR-123: City Hall",
    "Address": "Donegall Square",
    "City": "Belfast",
    "Postcode": "BT1 5GS",
    "CurrentStatus": "In Use"
  },
  "geometry": {
    "type": "Point",
    "coordinates": [
      -5.9301,
      54.5967
    ]
  }
}

The really interesting parts of this JSON object are the ‘properties‘ and ‘geometry‘ nodes, which tell us information about the feature, and where it is. The example above shows a geographical point with latitude and longitude, but it could also be a shape or a line.

Below is an example of a GeoJSON FeatureCollection, which contains a couple of different features.

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "BuildingReference": "BR-123: City Hall",
        "Address": "Donegall Square",
        "City": "Belfast",
        "Postcode": "BT1 5GS",
        "CurrentStatus": "In Use"
      },
      "geometry": {
        "type": "Point",
        "coordinates": [
          -5.9301,
          54.5967
        ]
      }
    },
    {
      "type": "Feature",
      "properties": {
        "BuildingReference": "BR-456: Queen's University",
        "Address": "University Road",
        "City": "Belfast",
        "Postcode": "BT7 1NN",
        "CurrentStatus": "In Use"
      },
      "geometry": {
        "type": "Point",
        "coordinates": [
          -5.934,
          54.5844
        ]
      }
    }
  ]
}

If you’d like to see your GeoJSON feature collection represented on a map, check out http://geojson.io/.

So I’m working with files that have tens of thousands of features, and I need to run a few reports. For example, I need to know how many features per city are represented in the data. This would be a really easy query in T-SQL – but how can I get this data into SQL Server?

Fortunately SQL Server handles JSON really well. I can use the OPENROWSET keyword to bulk load my GeoJSON files into a variable, and it also provides the useful OPENJSON keyword, which allows me to parse a JSON string into its different components.

So if my GeoJSON is stored in a file named buildings.geojson, I can access the file using the code below, and represent it in SQL Server’s tabular format.

DECLARE @JSON nvarchar(max)
 
-- load the geojson into the variable
SELECT @JSON = BulkColumn
FROM OPENROWSET (BULK 'C:\Users\jeremy.lindsay\buildings.geojson', SINGLE_CLOB) as JSON
 
-- use OPENJSON to split the different JSON nodes into separate columns
SELECT
	*
FROM
OPENJSON(@JSON, '$.features')
	WITH (
		BuildingReference nvarchar(300) '$.properties.BuildingReference',
		Address nvarchar(300) '$.properties.Address',
		City nvarchar(300) '$.properties.City',
		Postcode nvarchar(300) '$.properties.Postcode',
		CurrentStatus nvarchar(300) '$.properties.CurrentStatus',
		Longitude nvarchar(300) '$.geometry.coordinates[0]',
		Latitude nvarchar(300) '$.geometry.coordinates[1]'
	)

Or if I wanted, I can just as easily load this data into a dedicated table, and represent the feature’s location as the SQL Server geography spatial type:

DROP TABLE IF EXISTS dbo.Buildings
 
CREATE TABLE dbo.Buildings
(
	Id int IDENTITY PRIMARY KEY,
	BuildingReference nvarchar(300),
	Address nvarchar(300),
	City nvarchar(300),
	Postcode nvarchar(300),
	CurrentStatus nvarchar(300),
	Coordinates GEOGRAPHY,
	Longitude nvarchar(100),
	Latitude nvarchar(100)
)
 
 
DECLARE @JSON nvarchar(max)
 
-- load the geojson into the variable
SELECT @JSON = BulkColumn
FROM OPENROWSET (BULK 'C:\Users\jeremy.lindsay\buildings.geojson', SINGLE_CLOB) as JSON
 
Insert Into dbo.Buildings (BuildingReference, Address, City, Postcode, CurrentStatus, Longitude, Latitude, Coordinates)
SELECT
	BuildingReference, 
	Address, 
	City,
	Postcode,
	CurrentStatus, 
	Longitude, 
	Latitude,
	geography::Point(Latitude, Longitude, 4326) AS Geography
FROM
OPENJSON(@JSON, '$.features')
	WITH (
		BuildingReference nvarchar(300) '$.properties.BuildingReference',
		Address nvarchar(300) '$.properties.Address',
		City nvarchar(300) '$.properties.City',
		Postcode nvarchar(300) '$.properties.Postcode',
		CurrentStatus nvarchar(300) '$.properties.CurrentStatus',
		Longitude nvarchar(300) '$.geometry.coordinates[0]',
		Latitude nvarchar(300) '$.geometry.coordinates[1]'
	)
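
And now the report I mentioned at the start – how many features there are per city – really is an easy T-SQL query against the new table:

-- count the features per city, most common city first
SELECT City, COUNT(*) AS FeatureCount
FROM dbo.Buildings
GROUP BY City
ORDER BY FeatureCount DESC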

How about exporting from SQL Server back to GeoJSON?

So querying data in the table is really easy for me now – but how about the scenario where I have data in SQL Server, and I want to export the results of a SELECT query to GeoJSON format?

Fortunately we can use the JSON querying capabilities of SQL Server – I can suffix my query with ‘FOR JSON PATH’ to convert the results of a SELECT query from a tabular format to a JSON format, as shown below:

DECLARE @featureList nvarchar(max) =
(
	SELECT
		'Feature'                                           as 'type',
		BuildingReference                                   as 'properties.BuildingReference',
		Address                                             as 'properties.Address',
		City                                                as 'properties.City',
		Postcode                                            as 'properties.Postcode',
		CurrentStatus                                       as 'properties.CurrentStatus',
		Coordinates.STGeometryType()                        as 'geometry.type',
		JSON_QUERY('[' + Longitude + ', ' + Latitude + ']') as 'geometry.coordinates'
	FROM Buildings
		FOR JSON PATH
)

But this doesn’t get me a result that’s quite right – it’s just a JSON formatted list of GeoJSON features. To make this a properly formatted GeoJSON FeatureCollection, I need to give this list a name – ‘features’ – and specify the type as a ‘FeatureCollection’. Again this is reasonably straightforward with the built-in JSON querying features of SQL Server.

DECLARE @featureList nvarchar(max) =
(
	SELECT
		'Feature'                                           as 'type',
		BuildingReference                                   as 'properties.BuildingReference',
		Address                                             as 'properties.Address',
		City                                                as 'properties.City',
		Postcode                                            as 'properties.Postcode',
		CurrentStatus                                       as 'properties.CurrentStatus',
		Coordinates.STGeometryType()                        as 'geometry.type',
		JSON_QUERY('[' + Longitude + ', ' + Latitude + ']') as 'geometry.coordinates'
	FROM Buildings
		FOR JSON PATH
)
 
DECLARE @featureCollection nvarchar(max) = (
	SELECT 'FeatureCollection' as 'type',
	JSON_QUERY(@featureList)   as 'Features'
	FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
)
 
SELECT @featureCollection
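
Before going any further, I can sanity-check the output inside SQL Server itself with the built-in ISJSON function – note that this only confirms the string is well-formed JSON, not that it conforms to the GeoJSON standard:

-- returns 1 if @featureCollection contains well-formed JSON
SELECT ISJSON(@featureCollection)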

If you want to validate your GeoJSON, you can use a site like GeoJSONLint.com.

Wrapping up

SQL Server has great JSON querying capabilities, and I’ve found this really useful when I’ve combined this also with its support for geospatial querying. Hopefully this post is helpful to anyone working with spatial data in GeoJSON format.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!