.net, Community, Hackathon

Running the first Global AI Bootcamp in Belfast

I’ve run the Belfast Azure User Group for a couple of years now, and try to arrange events for my community every couple of months. Here in Belfast, we’ve a really vibrant technical scene, and I love having the opportunity to give everyone a chance to participate in things like the Global Azure Bootcamp in 2018 and 2019, and for the first time this year, the Global AI Bootcamp.

In case you haven’t heard of it, the Global AI Bootcamp is a free, one-day event organized by communities across the world who are passionate about Artificial Intelligence on Microsoft Azure. The keynote video for the 2019 event is at this link, and sample sessions and workshops are available on GitHub here. The global organizers (Henk Boelman and Amy Boyd) have arranged for Microsoft to assist community organizers during the day by supplying Azure Passes (each pass is worth $50 and is valid for 7 days), and also ensuring that each attendee gets a free lunch from Subway on the day of the event. This is a massive help for community organizers like me – getting Subway meals means I have one less thing to worry about! – and the free passes ensure that everyone who wants to try something on Azure gets a chance to do that. And we were very fortunate this year to have ESO Solutions as a sponsor, who paid for the room hire.

This year, Belfast had three speakers in the morning:

Jonathan Armstrong spoke about the AI Revolution in Healthcare.

jonathan2.jpg

Mark Allan showed us how to make sense of your unstructured data with Azure.

mark

Yongyang Huo talked us through the Travelling Salesman Problem and how AI could be used to improve services (and have some fun too).

yongyang

And everyone was able to get a sticker as a memento of the event, and one lucky attendee won a copy of the book shown below (Practical Automated Machine Learning on Azure).

prize.jpg

In the afternoon many of the attendees stayed to try out the event’s workshops, featured on GitHub here. There was a range of skill levels, from people just trying it out for the first time to others who use Machine Learning every day at work, and I believe everyone was able to learn something from the talks – they were some of the best talks we’ve had so far at our user group.

For 2020, we’ve already got some great speakers and events lined up – our next event is in February, myself and some contacts are already arranging an Azure hackathon, and I’m looking forward to more Azure and DevOps Bootcamps in April and May next year!

.net, .net core, Azure, Azure DevOps, Azure Pipelines, Web Development

Everything as Code with Azure DevOps Pipelines: C#, ARM, and YAML: Part #1

Like a lot of developers, I’ve been using Azure DevOps to manage my CI/CD pipelines. I think (or I hope anyway) most developers now are using a continuous integration process – commit to a code repository like GitHub, which has a hook into a tool (like Azure DevOps Build Pipelines) that checks every change to make sure it compiles and doesn’t break any tests.

Just a note – this post doesn’t have any code, it’s more of an introduction to how I’ve found Azure Pipelines can make powerful improvements to a development process. It also looks at one of the new preview features, multi-stage build pipelines. I’ll expand on individual parts of this with more code in later posts.

Azure Pipelines have traditionally been split into two types – build and release pipelines.

oldpipelines

I like to think of the split in this way:

Build Pipelines

I’ve used these for Continuous Integration (CI) activities, like compiling code, running automated tests, creating and publishing build artifacts. These pipelines can be saved as YAML – a human readable serialisation language – and pushed to a source code repository. This means that one developer can pull another developer’s CI code and run exactly the same pipeline without having to write any code. One limitation is that Build Pipelines are a simple list of tasks/jobs – historically it’s not been possible to split these instructions into separate stages which can depend on one or more conditions (though later in this post, I’ll write more about a preview feature which introduces this capability).

Release Pipelines

I’ve used these for Continuous Deployment (CD) activities, like creating environments and deploying artifacts created during the Build Pipeline process to these environments. Azure Release Pipelines can be broken up into stages with lots of customized input and output conditions, e.g. you could choose to not allow artifacts to be deployed to an environment until a manual approval process is completed. At the time of writing this, Release Pipelines cannot be saved as YAML.

What does this mean in practice?

Let’s make up a simple scenario.

A client has defined a website that they would like my team to develop, and would like to see a demonstration of progress to a small group of stakeholders at the end of each two-week iteration. They’ve also said that they care about good practice – site stability and security are important aspects of system quality. If the demonstration validates their concept at the end of the initial phase, they’ll consider opening the application up to a wider audience on an environment designed to handle a lot more user load.

Environments

The pattern I often use for promoting code through environments is:

Development machines:

  • Individual developers work on their own machine (which can be either physical or virtual) and push code from here to branches in the source code repository.
  • These branches are peer reviewed and, once the review passes, merged into the master branch (which is my preference for this scenario – YMMV).

Integration environment (a.k.a. Development Environment, or Dev-Int):

  • If the Build Pipeline has completed successfully (e.g. code compiles and no tests break), then the Release Pipeline deploys the build artifacts to the Integration Environment (it’s critical that these artifacts are used all the way through the deployment process as these are the ones that have been tested).
  • This environment is pretty unstable – code is frequently pushed here. In my world, it’s most typically used by developers who want to check that their code works as expected somewhere other than their machine. It’s not really something that testers or demonstrators would want to use.
  • I also like to run vulnerability scanning software like OWASP ZAP regularly on this environment to highlight any security issues, or run a baseline accessibility check using something like Pa11y.

Test environment:

  • The same binary artifacts deployed to Integration (like a zipped up package of website files) are then deployed to the Test environment.
  • I’ve usually set up this environment for testers who use it for any manual testing processes. Obviously user journey testing is automated as much as possible, but sometimes manual testing is still required – e.g. for further accessibility testing.

Demonstration environment:

  • I only push to this environment when I’m pretty confident that the code does what I expect and I’m happy to present it to a room full of people.

And in this scenario, if the client wishes to open up to a wider audience, I’d usually recommend the following two environments:

Pre-production a.k.a. QA (Quality Assurance), or Staging

  • Traditionally security checks (e.g. penetration tests) are run on this environment first, as are final performance tests.
  • This is the last environment before Production, and the infrastructure should mirror the infrastructure on Production so that results of any tests here are indicative of behaviour on Production.

Production a.k.a. Live

  • This is the most stable environment, and where customers will spend most of their time when using the application.
  • Often there’ll also be a mirror of this environment for ‘Disaster Recovery’ (DR) purposes.

Obviously different teams will have different and more customized needs – for example, sometimes teams aren’t able to deploy more frequently than once every sprint. If an emergency bug fix is required it’s useful to have a separate environment to allow these bug fixes to be tested before production, without disrupting the development team’s deployment process.

Do we always need all of these environments?

The environments used depend on the user needs – there’s no strategy which works in all cases. For our simple fictitious case, I think we only need Integration, Testing and Demonstration.

Here’s a high level process using the Microsoft stack that I could use to help meet the client’s initial need (only deploying as far as a demonstration platform):

reference-architecture-diagram-2

  • Developers build a website matching client needs, often pushing new code and features to a source code repository (I usually go with either GitHub or Azure DevOps Repos).
  • Code is compiled and tested using Azure DevOps Pipelines, and then the tested artifacts are deployed to:
    • The Integration Environment, then
    • The Testing Environment, and then
    • The Demonstration Environment.
  • Each one of these environments lives in its own Azure Resource Group with identical infrastructure (e.g. web farm, web application, database and monitoring capabilities). These are built using Azure Resource Manager (ARM) templates.

Using Azure Pipelines, I can create a Release Pipeline to build artifacts and deploy to the three environments above in three separate stages, as shown in the image below.

stages8

But as mentioned previously, a limitation is that this Release Pipeline doesn’t exist as code in my source code repository.

Fancy All New Multi-Stage Pipelines

But recently Azure DevOps have introduced a preview feature where Build Pipelines have been renamed to “Pipelines” and gained some new functions.

If you want to see these in your Azure DevOps instance, log into your DevOps instance on Azure, and head over to the menu in the top right – select “Preview features”:

pipelinemenu

In the dialog window that appears, turn on the “Multi-stage Pipelines” option, highlighted in the image below:

multistagepipelines

Now your DevOps pipeline menu will look like the one below – note how the “Builds” sub-menu item has been renamed to Pipelines:

newpipelines

Now I’m able to use YAML to not only capture individual build steps, but also package them up into stages. The image below shows how I’ve started to mirror the Release Pipeline process above using YAML – I’ve built and deployed to the integration and testing environments.

build stages

I’ve also shown a fork in the process where I can run my OWASP ZAP vulnerability scanning tool after the site has been deployed on integration, at the same time as the Testing environment is being built and having artefacts deployed to it. The image below shows the tests that have failed and how they’re reported – I can select individual tests and add them as Bugs to Azure DevOps Boards.

failing tests

Microsoft have supplied some example YAML to help developers get started:

  • A simple Build -> Test -> Staging -> Production scenario.
  • A scenario with a stage that on completion triggers two stages, which then are both required for the final stage.

It’s a huge process improvement to be able to have my website source code and tests as C#, my infrastructure code as ARM templates, and my pipeline code as YAML.

For example, if someone deleted the pipeline (either accidentally or deliberately), it’s not really a big deal – we can recreate everything again in a matter of minutes. Or if the pipeline was acting in an unexpected way, I could spin up a duplicate of the pipeline and debug it safely away from production.

Current Limitations

Multi-stage pipelines are a preview feature in Azure DevOps, and personally I wouldn’t risk this with every production application yet. One major missing feature is the ability to manually approve progression from one stage to another, though I understand this is on the team’s backlog.

Wrapping Up

I really like how everything can live in source code – my full build and release process, both CI and CD, are captured in human readable YAML. This is incredibly powerful – code, tests, infrastructure and the CI/CD pipeline can be created as a template and new projects can be spun up in minutes rather than days. Additionally, I’m able to create and tear down containers which cover some overall system quality testing aspects, for example using the OWASP ZAP container to scan for vulnerabilities on the integration environment website.

As I mentioned at the start of this post, I’ll be writing more over the coming weeks about the example scenario in this post – with topics such as:

  • writing a multi-stage pipeline in YAML to create resource groups to encapsulate resources for each environment;
  • how to deploy infrastructure using ARM templates in each of the stages;
  • how to deploy artifacts to the infrastructure created at each stage;
  • how to use the OWASP ZAP tool to scan for vulnerabilities in the integration website, and the YAML code to do this.

 

.net, .net core, Azure, Cosmos

Test driving the Cosmos SDK v3 with .NET Core – Getting started with Azure Cosmos DB and .NET Core, Part #3

The Azure Cosmos team have recently released (and open sourced) a new SDK preview with some awesome features, as recently seen on the Azure Friday show on Channel 9. So I wanted to test drive the functions available in this SDK against the one I’ve been using (SDK v2.2.2) to see the differences.

In the last two parts of this series, I’ve looked at how to create databases and collections in the Azure Cosmos emulator using .NET Core and version 2.2.2 of the Cosmos SDK. I’ve also looked at how to carry out some string queries against documents held in collections.

Version 3 of the SDK is a preview – it’s not recommended for production use yet, and I’d expect a lot of changes between now and a production ready version.

And before you read my comparison…it’s all purely opinion based – sometimes one piece of code will look longer than another because of how I’ve chosen to write it. I’ve also written the samples as synchronous only – this is just because I want to focus on the SDK differences in this post, rather than explore an async/await topic.

Connecting to the Cosmos Emulator

Previously when I was setting up a connection to my Cosmos Emulator instance, I’d write something in C# like the code below.

#region Set up Document client
 
// Create the client connection using v2.2.2
client = new DocumentClient(
    new Uri(CosmosEndpoint),
    EmulatorKey,
    new ConnectionPolicy
    {
        ConnectionMode = ConnectionMode.Direct,
        ConnectionProtocol = Protocol.Tcp
    });
 
#endregion

Now I can connect using the code below – client instantiation in SDK 3 is cleaner, and has keywords relevant to Cosmos rather than its previous name of DocumentDB. This makes it easier to read and conveys intent much better.

#region Set up Cosmos client
 
// Create the configuration using SDK v3
var configuration = new CosmosConfiguration(CosmosEndpoint, EmulatorKey)
{
    ConnectionMode = ConnectionMode.Direct,
};
 
_client = new CosmosClient(configuration);
 
#endregion

Creating the database and collections

Looking at my code with the previous SDK…well there sure is a lot of it. And it works, so I guess that’s something. But creation of objects from the UriFactory adds a lot of noise, and I’ve previously hidden code like this in a facade class.

#region Create database, collection and indexing policy
 
// Set up database and collection Uris
var databaseUrl = UriFactory.CreateDatabaseUri(DatabaseId);
var naturalSiteCollectionUri = UriFactory.CreateDocumentCollectionUri(DatabaseId, NaturalSitesCollection);
 
// Create the database if it doesn't exist
client.CreateDatabaseIfNotExistsAsync(new Database { Id = DatabaseId }).Wait();
 
var naturalSitesCollection = new DocumentCollection { Id = NaturalSitesCollection };
 
// Create an indexing policy to make strings have a Ranged index.
var indexingPolicy = new IndexingPolicy();
indexingPolicy.IncludedPaths.Add(new IncludedPath
{
    Path = "/*",
    Indexes = new Collection<Microsoft.Azure.Documents.Index>()
    {
        new RangeIndex(DataType.String) { Precision = -1 }
    }
});
 
// Assign the index policy to our collection
naturalSitesCollection.IndexingPolicy = indexingPolicy;
 
// And create the collection if it doesn't exist
client.CreateDocumentCollectionIfNotExistsAsync(databaseUrl, naturalSitesCollection).Wait();
 
#endregion

The new code is much cleaner – no more UriFactories, and we again have keywords which are more relevant to Cosmos.

There are a few things I think are worth commenting on:

  • “Collections” are now “Containers” in the SDK, although they’re still Collections in the Data Explorer.
  • We can access the array of available databases from a “Databases” method accessible from the Cosmos client, and we can access available containers from a “Containers” method available from individual Cosmos databases. This object hierarchy makes much more sense to me than having to create everything from methods accessible from the DocumentClient in v2.2.2.
  • We now need to specify a partition key name for a container, whereas we didn’t need to do that in v2.2.2.
#region Create database, collection and indexing policy
 
// Create the database if it doesn't exist
CosmosDatabase database = _client.Databases.CreateDatabaseIfNotExistsAsync(DatabaseId).Result;
 
var containerSettings = new CosmosContainerSettings(NaturalSitesCollection, "/Name")
{
    // Assign the index policy to our container
    IndexingPolicy = new IndexingPolicy(new RangeIndex(DataType.String) { Precision = -1 })
};
 
CosmosContainer container = database.Containers.CreateContainerIfNotExistsAsync(containerSettings).Result;
 
#endregion

Saving items to a container

The code using the previous SDK is pretty clean already.

#region Create a sample document in our collection
 
// Let's instantiate a POCO with a local landmark
var giantsCauseway = new NaturalSite { Name = "Giant's Causeway" };
 
// Create the document in our database
client.CreateDocumentAsync(naturalSiteCollectionUri, giantsCauseway).Wait();
 
#endregion

But the new SDK improves on its predecessor by using a more logical object hierarchy – we create items from an “Items” array which is available from a container, and the naming conventions are also more consistent.

#region Create sample item in our container in SDK 3
 
// Let's instantiate a POCO with a local landmark
var giantsCauseway = new NaturalSite { Id = Guid.NewGuid().ToString(), Name = "Giant's Causeway" };
 
// Create the document in our database
container.Items.CreateItemAsync(giantsCauseway.Name, giantsCauseway).Wait();
 
#endregion

There are also a couple of changes in the SDK worth noting:

  • When creating the item, we also need to explicitly specify the value corresponding to the partition key.
  • My custom object now needs to have an ID property with type of string, decorated as a JsonProperty in the way shown in the code below. I didn’t need this with the previous SDK.
public class NaturalSite
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }
 
    public string Name { get; set; }
}

Querying a collection/container for exact and partial string matches

Using SDK 2.2.2, my code can look something like the sample below – I’ve used a query facade and can take advantage of SDK 2.2.2’s LINQ querying function.

#region Query collection for exact matches
 
// Instantiate with the DocumentClient and database identifier
var cosmosQueryFacade = new CosmosQueryFacade<NaturalSite>
{
    DocumentClient = client,
    DatabaseId = DatabaseId,
    CollectionId = NaturalSitesCollection
};
 
// We can look for strings that exactly match a search string
var sites = cosmosQueryFacade.GetItemsAsync(m => m.Name == "Giant's Causeway").Result;
 
foreach (var site in sites)
{
    Console.WriteLine($"The natural site name is: {site.Name}");
}
 
#endregion

But in the new SDK v3, there’s presently no LINQ query function. It’s high on the team’s list of ‘things to do next’, and in the meantime I can use parameterized queries to achieve the same result.

#region Query collection for exact matches using SDK 3
 
// Or we can use the new SDK, which uses the CosmosSqlQueryDefinition object
var sql = new CosmosSqlQueryDefinition("Select * from Items i where i.Name = @name")
                                                           .UseParameter("@name", "Giant's Causeway");
 
 
var setIterator = container.Items.CreateItemQuery<NaturalSite>(
                    sqlQueryDefinition: sql,
                    partitionKey: "Giant's Causeway");
 
while (setIterator.HasMoreResults)
{
    foreach (var site in setIterator.FetchNextSetAsync().Result)
    {
        Console.WriteLine($"The natural site name is: {site.Name}");
    }
}
 
#endregion

For partial string matches, previously I could use the built in LINQ functions as shown below.

#region Query collection for matches that start with our search string
 
// And we can search for strings that start with a search string,
// as long as we have strings set up to be Ranged Indexes
sites = cosmosQueryFacade.GetItemsAsync(m => m.Name.StartsWith("Giant")).Result;
 
foreach (var site in sites)
{
    Console.WriteLine($"The natural site name is: {site.Name}");
}
 
#endregion

And even though we don’t have LINQ functions yet in the new SDK v3, we can still achieve the same result with the SQL query shown in the code below.

#region Or query collection for matches that start with our search string using SDK 3
 
sql = new CosmosSqlQueryDefinition("SELECT * FROM Items i WHERE STARTSWITH(i.Name, @name)")
    .UseParameter("@name", "Giant");
 
setIterator = container.Items.CreateItemQuery<NaturalSite>(
    sqlQueryDefinition: sql,
    partitionKey: "Giant's Causeway");
 
while (setIterator.HasMoreResults)
{
    foreach (var site in setIterator.FetchNextSetAsync().Result)
    {
        Console.WriteLine($"The natural site name is: {site.Name}");
    }
}
 
#endregion

What I’d like to see next

The Cosmos team have said the SDK is a preview only – it’s not suitable for production use yet, even though it already has some very nice advantages over the previous SDK. I think the things I’d like to see in future iterations are:

  • LINQ querying – which I know is already on the backlog.
  • More support for “Request Unit” information, so I can get a little more insight into the cost of my queries.

Wrapping up

The new Cosmos SDK v3 looks really interesting – it allows me to write much cleaner code with clear intent. And even though it’s not production ready yet, I’m going to start trying to use it where I can so I’m ready to take advantage of the new features as soon as they’re more generally available, and supported. I hope this helps anyone else who’s thinking about trying out the new SDK – what would you like to see?

.net, .net core, Non-functional Requirements, Performance

Using async/await and Task.WhenAll to improve the overall speed of your C# code

Recently I’ve been looking at ways to improve the performance of some .NET code, and this post is about an async/await pattern that I’ve observed a few times that I’ve been able to refactor.

Every-so-often, I see code like the sample below – a single method or service which awaits the outputs of numerous methods which are marked as asynchronous.

await FirstMethodAsync();
 
await SecondMethodAsync();
 
await ThirdMethodAsync();

The three methods don’t seem to depend on each other in any way, and since they’re all asynchronous methods, it’s possible to run them in parallel. But for some reason, the implementation runs all three sequentially – the flow of execution awaits the first method running to completion, then the second, and then the third.

We might be able to do better than this.

Let’s look at an example

For this post, I’ve created a couple of sample methods which can be run asynchronously – they’re called SlowAndComplexSumAsync and SlowAndComplexWordAsync.

What these methods actually do isn’t important, so don’t worry about what function they serve – I’ve just contrived them to do something and be quite slow, so I can observe how my code’s overall performance alters as I do some refactoring.

First, SlowAndComplexSumAsync (below) adds a few numbers together, with some artificial delays to deliberately slow it down – this takes about 2.5s to run.

private static async Task<int> SlowAndComplexSumAsync()
{
    int sum = 0;
    foreach (var counter in Enumerable.Range(0, 25))
    {
        sum += counter;
        await Task.Delay(100);
    }
 
    return sum;
}

Next, SlowAndComplexWordAsync (below) concatenates characters together, again with some artificial delays to slow it down. This method usually takes about 4s to run.

private static async Task<string> SlowAndComplexWordAsync()
{
    var word = string.Empty;
    foreach (var counter in Enumerable.Range(65, 26))
    {
        word = string.Concat(word, (char) counter);
        await Task.Delay(150);
    }
 
    return word;
}

Running synchronously – the slow way

Obviously I can just prefix each method with the “await” keyword in a Main method marked with the async keyword, as shown below. This code basically just runs the two sample methods one after the other (despite the async/await cruft in the code).

private static async Task Main(string[] args)
{
    var stopwatch = new Stopwatch();
    stopwatch.Start();
 
    // This method takes about 2.5s to run
    var complexSum = await SlowAndComplexSumAsync();
 
    // The elapsed time will be approximately 2.5s so far
    Console.WriteLine("Time elapsed when sum completes..." + stopwatch.Elapsed);
 
    // This method takes about 4s to run
    var complexWord = await SlowAndComplexWordAsync();
    
    // The elapsed time at this point will be about 6.5s
    Console.WriteLine("Time elapsed when both complete..." + stopwatch.Elapsed);
    
    // These lines are to prove the outputs are as expected,
    // i.e. 300 for the complex sum and "ABC...XYZ" for the complex word
    Console.WriteLine("Result of complex sum = " + complexSum);
    Console.WriteLine("Result of complex letter processing " + complexWord);
 
    Console.Read();
}

When I run this code, the console output looks like the image below:

series

As can be seen in the console output, both methods run consecutively – the first one takes a bit over 2.5s, and then the second method runs (taking a bit over 4s), causing the total running time to be just under 7s (which is pretty close to the predicted duration of 6.5s).

Running asynchronously – the faster way

But I’ve missed a great opportunity to make this program run faster. Instead of running each method and waiting for it to complete before starting the next one, I can start them all together and await the Task.WhenAll method to make sure all methods are completed before proceeding to the rest of the program.

This technique is shown in the code below.

private static async Task Main(string[] args)
{
    var stopwatch = new Stopwatch();
    stopwatch.Start();
 
    // this task will take about 2.5s to complete
    var sumTask = SlowAndComplexSumAsync();
 
    // this task will take about 4s to complete
    var wordTask = SlowAndComplexWordAsync();
 
    // running them in parallel should take about 4s to complete
    await Task.WhenAll(sumTask, wordTask);

    // The elapsed time at this point will only be about 4s
    Console.WriteLine("Time elapsed when both complete..." + stopwatch.Elapsed);
 
    // These lines are to prove the outputs are as expected,
    // i.e. 300 for the complex sum and "ABC...XYZ" for the complex word
    Console.WriteLine("Result of complex sum = " + sumTask.Result);
    Console.WriteLine("Result of complex letter processing " + wordTask.Result);
 
    Console.Read();
}

And the outputs are shown in the image below.

parallel

The total running time is now only a bit over 4s – and this is way better than the previous time of around 7s. This is because we are running both methods in parallel, and making full use of the opportunity asynchronous methods present. Now our total execution time is only as slow as the slowest method, rather than being the cumulative time for all methods executing one after each other.
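The same pattern scales beyond two methods. If there’s a whole collection of independent asynchronous operations, the tasks can all be started first and then awaited together with a single Task.WhenAll call. The sketch below shows the idea – ProcessOrderAsync is just a hypothetical stand-in for any independent asynchronous method, not something from the examples above.

// Sketch only - assumes System.Collections.Generic, System.Linq and System.Threading.Tasks are in scope
private static async Task<int[]> ProcessAllOrdersAsync(IEnumerable<int> orderIds)
{
    // Start every task up front - nothing is awaited yet, so they all run concurrently
    var tasks = orderIds.Select(orderId => ProcessOrderAsync(orderId)).ToList();
 
    // Await them all together - the elapsed time is roughly that of the slowest task,
    // rather than the sum of all of them
    return await Task.WhenAll(tasks);
}
 
// Hypothetical stand-in for any independent asynchronous operation
private static async Task<int> ProcessOrderAsync(int orderId)
{
    await Task.Delay(500);
    return orderId * 2;
}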

Wrapping up

I hope this post has helped shine a little light on how to use the async/await keywords and how to use Task.WhenAll to run independent methods in parallel.

Obviously every case has its own merits – but if code has a series of asynchronous methods written so that each one has to wait for the previous one to complete, definitely check out whether the code can be refactored to use Task.WhenAll to improve the overall speed.

And maybe even more importantly, when designing an API surface, keep in mind that decoupling dependencies between methods might give developers using the API an opportunity to run these asynchronous methods in parallel.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, C# tip

Instantiating a C# object from a string using Activator.CreateInstance in .NET

Recently I hit an interesting programming challenge – I need to write a library which can instantiate and use a C# class object from a second C# assembly.

Sounds simple enough…but the catch is that I’m only given some string information about the class at runtime, such as the class name, its namespace, and what assembly it belongs to.

Fortunately this is possible using the Activator.CreateInstance method in C#. First I need to format the namespace, class name and assembly name in a special way – as an assembly qualified name.

Let’s look at an example – the second assembly is called “MyTestProject” and the object I need to instantiate from my library looks like the one below.

namespace SampleProject.Domain
{
    public class MyNewTestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "My name is MyNewTestClass";
        }
    }
}

This leads to the assembly qualified name:

"SampleProject.Domain.MyNewTestClass, MyTestProject"

Note that the format here is along the lines of:

"{namespace}.{class name}, "{assembly name}"

Another way of finding this assembly qualified name is to run the code below:

Console.WriteLine(typeof(MyNewTestClass).AssemblyQualifiedName);

This will output something like:

SampleProject.Domain.MyNewTestClass, MyTestProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

There’s extra information about Version, Culture and PublicKeyToken – you might need to use this if you’re targeting different versions of a library, but in my simple example I don’t need this so I won’t elaborate further here.

Now I have the qualified name of the class, I can instantiate the object in my library using the Activator.CreateInstance, as shown below:

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType);

But it’s a bit difficult to do anything useful with the instantiated object – the compiler only knows it as an Object, and I suppose we could call the ToString() method, but for a new class that’s of limited use. How can we access properties and methods in the instantiated class?

One way is to use the dynamic keyword to manipulate the new object

We could use the dynamic keyword with the instantiated object, and set/get methods dynamically, like in the code below:

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

dynamic instantiatedObject = Activator.CreateInstance(objectType);
 
// set a property value
instantiatedObject.Name = "Test Name";
 
// get a property value
string name = instantiatedObject.Name;
 
// call a method - this outputs "My name is MyNewTestClass"
Console.Write(instantiatedObject.DoSpecialThing());

Another way to manipulate the instantiated object is by using a shared interface

We could make the original object implement an interface shared across all projects. If I add the interface below to my project…

namespace SampleProject.Domain
{
    public interface ITestClass
    {
        int Id { get; set; }
        string Name { get; set; }
        string DoSpecialThing();
    }
}

…then our original object could implement this interface:

namespace SampleProject.Domain
{
    public class MyNewTestClass : ITestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "My name is MyNewTestClass";
        }
    }
}

So if we happen to know at design time that the object we want to instantiate implements the ITestClass interface, we can access methods exposed by that interface – there’s no need to use the dynamic keyword now.

const string objectToInstantiate = "SampleProject.Domain.MyNewTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType) as ITestClass;
 
// set a property value
instantiatedObject.Name = "Test Name";
 
// get a property value
var name = instantiatedObject.Name;
 
// call a method - this outputs "My name is MyNewTestClass"
Console.Write(instantiatedObject.DoSpecialThing());

And of course if I have another domain object which implements the same interface but has different behaviour, like the one below…

namespace SampleProject.Domain
{
    public class DifferentTestClass : ITestClass
    {
        public int Id { get; set; }
 
        public string Name { get; set; }
 
        public string DoSpecialThing()
        {
            return "This is a different special thing";
        }
    }
}

…then I can use similar code to instantiate and manipulate the object – I just need to use the different object’s assembly qualified name:

const string objectToInstantiate = "SampleProject.Domain.DifferentTestClass, MyTestProject";
 
var objectType = Type.GetType(objectToInstantiate);

var instantiatedObject = Activator.CreateInstance(objectType) as ITestClass;
 
// set a property value
instantiatedObject.Name = "Other Test Name";
 
// get a property value
string name = instantiatedObject.Name;
 
// call a method - but this now outputs "This is a different special thing"
Console.Write(instantiatedObject.DoSpecialThing());

Hopefully this helps anyone else facing a similar challenge – it’s worth bearing in mind that reflection is very powerful, but also can be a bit slower than other techniques.
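If the same types are created over and over, one simple way to claw back some of that cost is to cache the resolved Type objects, so the relatively expensive Type.GetType lookup only happens once per assembly qualified name. Here’s a minimal sketch of that idea – the CachedObjectFactory class is just illustrative, not part of any framework:

using System;
using System.Collections.Concurrent;
 
public static class CachedObjectFactory
{
    // Cache resolved types against their assembly qualified names
    private static readonly ConcurrentDictionary<string, Type> TypeCache =
        new ConcurrentDictionary<string, Type>();
 
    public static object CreateInstance(string assemblyQualifiedName)
    {
        // Type.GetType is only called the first time each name is seen
        var type = TypeCache.GetOrAdd(assemblyQualifiedName, name => Type.GetType(name, throwOnError: true));
 
        return Activator.CreateInstance(type);
    }
}

Creating another instance of the same class then becomes a dictionary lookup plus the Activator call, e.g. var instance = CachedObjectFactory.CreateInstance("SampleProject.Domain.MyNewTestClass, MyTestProject") as ITestClass;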

.net, .net core, Azure, Azure DevOps, Azure DevOps Boards, Flurl

How to delete a TestCase from Azure DevOps boards using .NET, Flurl and the Azure DevOps Restful API

So here’s a problem…

I’ve been working with Azure DevOps and finding it really great – but every now and again I hit a roadblock and feel like I’m on the edge of what’s possible with the platform.

For instance – I’ve loaded work items and test cases into my development instance for some analysis before going to my production instance, and now I’d like to delete all of them. Sounds like a simple thing to do from the UI – I’ve selected multiple work items before, clicked on the ellipsis on one item and selected ‘Delete’ from the menu that appears.

bulk delete

Except that sometimes it doesn’t work out like that. Where’s the delete option gone in the menu below?

bulk delete fail

Right now you can only delete one test case at a time through the Azure DevOps web user interface

You can only delete test cases one at a time through the Azure DevOps web UI at the moment, and you can’t do it from the WorkItem list view. To delete a test case, select the test case to display its detailed view, and then select the ellipsis at the top right of this view to reveal an action menu (as shown below). From there you can select the option with the text ‘Permanently delete’.

delete test case

Then you’ll be presented with a dialog asking you to confirm the deletion and enter the Test Case ID to confirm your intent.

perm delete

This is a lot of work if you’ve got a few (or a few hundred) test cases to delete.

Fortunately, this isn’t the only option available – I can .NET my way out of trouble.

You also can use .NET, Flurl and the Azure DevOps Restful API to delete test cases

Azure DevOps also provides a Restful interface which has comprehensive coverage of the functions available through the web UI – and sometimes a bit more. This is one of those instances where using the Restful API gives me the flexibility I’m looking for.

I’ve previously written about using libraries with .NET to simplify accessing Restful interfaces – one of my favourite libraries is Flurl, because it makes it really easy for me to construct a URI endpoint and call Restful verbs in a fluent way.

The code below shows a .NET method where I’ve called the Delete verb on a Restful endpoint – this allows me to delete test cases by Id from my Azure DevOps Board.

using System.Net.Http;
using System.Threading.Tasks;
using Flurl;
using Flurl.Http;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    public class TestCaseProcessor
    {
        public static async Task<HttpResponseMessage> Delete(int id, string projectUri, string projectName,
            string personalAccessToken)
        {
            var deleteUri = Url.Combine(projectUri, projectName, "_apis/test/testcases/", id.ToString(),
                "?api-version=5.0-preview.1");
 
            var responseMessage = await deleteUri
                .WithBasicAuth(string.Empty, personalAccessToken)
                .DeleteAsync();
 
            return responseMessage;
        }
    }
}

And it’s really easy to call this method, as shown in the code below – in addition to the test case ID, I just need to provide my Azure DevOps URI, my project name and a personal access token.

using System;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string testToken = "[[my personal access token]]";
            const string projectName = "Corvette";
            const int testCaseToDelete = 124;
 
            var responseMessage = TestCaseProcessor.Delete(testCaseToDelete, uri, projectName, testToken).Result;
 
            Console.WriteLine("Response code: " + responseMessage.StatusCode);
        }
    }
}

So now if I want to delete test cases in bulk, I just need to iterate through the list of IDs and call this method for each test case ID – which is much faster for me than deleting many test cases through the UI. A sketch of what that might look like is below.
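This is just a sketch, reusing the TestCaseProcessor class and the uri, projectName and testToken values from the snippets above – the IDs are made up for illustration, and a real version would probably want to check each response’s status code:

// Hypothetical list of test case IDs to remove
var testCaseIdsToDelete = new[] { 124, 125, 126 };
 
foreach (var testCaseId in testCaseIdsToDelete)
{
    var response = TestCaseProcessor.Delete(testCaseId, uri, projectName, testToken).Result;
 
    Console.WriteLine($"Deleted test case {testCaseId}: {response.StatusCode}");
}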

Wrapping up

Deleting test cases from Azure DevOps is a bit more difficult through the web UI than deleting other types of WorkItems – fortunately the Restful interface available is available, and I can use it with an application in .NET that can delete test cases quickly and easily. Hopefully this is useful to anyone who’s working with Azure DevOps Boards and needs to delete test cases.

.net, C# tip

An extension method for .NET enumerations that uses the DescriptionAttribute

Sometimes little improvements make a big difference to my day to day programming. Like when I discovered the String.IsNullOrEmpty method (I think it came into the .NET Framework way back in v2.0) – when I was testing for non-empty strings, I was able to use that super useful language feature without having to remember to write two comparisons (like myString != null && myString != string.Empty). I’ve probably used that bit of syntactical candy in every project I’ve worked on since then.

I work with enumerations all the time too. And a lot of the time, I need a text representation of the enumeration values which is a bit more complex than the value itself. This is where I find the DescriptionAttribute so useful – it’s a place supplied natively by .NET where I can add that text representation.

I’ve provided a simple example of this kind of enumeration with descriptions for each item below – maybe for a production application I’d localise the text, but you get the idea.

public enum Priority
{
    [Description("Highest Priority")]
    Top,
    [Description("MIiddle Priority")]
    Medium,
    [Description("Lowest Priority")]
    Low
}

But I’ve always felt I’d love to have native access to a method that would give me the description. Something like:

Priority.Medium.ToDescription();

Obviously it’s really easy to write an extension method like this, but every time I need it on a new project for a new client, I have to google for how to use types and reflection to access the attribute, and then write that extension method. Then I think about the other extension methods that I might like to write (what about ToDisplayName(), that might be handy…), and how to make it generic so I can extend it later, and what about error handling…

…and anyway, this process always includes the thought, “I’ve lost count of how many times I’ve done this for my .NET enumerations, why don’t I write this down somewhere so I can re-use it?”

So I’ve written it down below.

using System;
using System.ComponentModel;
using System.Linq;
 
public static class EnumerationExtensions
{
    // Reads the [Description] attribute applied to an enumeration member
    public static string ToDescription(this Enum enumeration)
    {
        var attribute = GetText<DescriptionAttribute>(enumeration);
 
        return attribute.Description;
    }
 
    public static T GetText<T>(Enum enumeration) where T : Attribute
    {
        var type = enumeration.GetType();
        
        var memberInfo = type.GetMember(enumeration.ToString());
 
        if (!memberInfo.Any())
            throw new ArgumentException($"No public members for the argument '{enumeration}'.");
 
        var attributes = memberInfo[0].GetCustomAttributes(typeof(T), false);
 
        if (attributes == null || attributes.Length != 1)
            throw new ArgumentException($"Can't find an attribute matching '{typeof(T).Name}' for the argument '{enumeration}'");
 
        return attributes.Single() as T;
    }
}

I’ve split it into two methods, so if I want to create a ‘ToDisplayName()’ extension later, it’s really easy for me to do that. I might include it in a NuGet package so it’s even easier for me to re-use later. I’ll update this post if I do.
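For example, a ToDisplayName() extension built on the same GetText<T> helper could look something like the sketch below – this assumes the enumeration members would be decorated with the DisplayAttribute from System.ComponentModel.DataAnnotations, rather than the DescriptionAttribute:

// Sketch only: assumes enum members are decorated like [Display(Name = "Highest Priority")]
// using the DisplayAttribute from System.ComponentModel.DataAnnotations
public static string ToDisplayName(this Enum enumeration)
{
    var attribute = GetText<DisplayAttribute>(enumeration);
 
    return attribute.Name;
}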

Wrapping up

This was a quick blog post covering a simple area, about something that bugs me – having to rewrite a simple function each time I start a new project. I can imagine a bunch of reasons why this isn’t a native extension in .NET Framework/Core – and my implementation is an opinionated extension. I’m sure there’d be a lot of disagreement about what good practice is. Anyway, this class works for me – I hope it’s useful to you also.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, C# tip

Correctly reading encoded text with the StreamReader in .NET

So here’s a problem…

I’ve got a list of my team members in a text file which I need to parse and process in .NET. The file is pretty simple – it’s called MyTeamNames.txt and it contains the following names:

  • Adèle
  • José
  • Chloë
  • Auróra

I created the text file on a Windows 10 machine using Notepad, and saved it with the default ANSI encoding.

ansi save

I’m going to read names from this text file using a .NET Framework StreamReader – there’s a simple and clear example on the docs.microsoft.com site describing how to do this. So I’ve written a spike of code to use a StreamReader – I’ve more or less copied directly from the link above – and it looks like this:

using System;
using System.IO;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main()
        {
            try
            {
                const string myTeamNamesFile = @"C:\Users\jeremy.lindsay\Desktop\MyTeamNames.txt";
 
                // Open the text file using a stream reader.
                using (var streamReader = new StreamReader(myTeamNamesFile))
                {
                    // Read the stream to a string, and write the string to the console.
                    var line = streamReader.ReadToEnd();
                    Console.WriteLine(line);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }
    }
}

But when I run the code, there’s a problem. I expected to see my team’s names written to the console – but instead all those names now have question marks scattered throughout them, as shown in the image below.

wrongnames

What’s gone wrong?

The StreamReader object and original text file need to have compatible encoding types

It’s pretty obvious that the question marks relate to the non-ASCII characters, and each name on my list has either an accent, a grave, or an umlaut/diaeresis.

The problem in my .NET Framework console application is that my StreamReader is assuming that my file is encoded one way when it’s actually encoded in another way, and it doesn’t know what to do with some characters, so it uses a question mark as a default.

Can you detect the file’s encoding type with .NET?

Big thanks to Erich Brunner for pointing out a new bit of information to me about the default encoding type – I’ve updated this post to reflect his helpful steer.

It turns out detecting the file’s encoding type is quite a difficult thing to do in .NET. But as I mentioned earlier, I know I saved this file with the encoding type ANSI selected from the dropdown list in Notepad.

Interestingly, this doesn’t mean that I’ve saved the file as ANSI – this is a misnomer. From the MSDN glossary:

“The term “ANSI” as used to signify Windows code pages is a historical reference, but is nowadays a misnomer that continues to persist in the Windows community. The source of this comes from the fact that the Windows code page 1252 was originally based on an ANSI draft—which became International Organization for Standardization (ISO) Standard 8859-1. “ANSI applications” are usually a reference to non-Unicode or code page–based applications.”
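In other words, the file is really just bytes in a Windows code page, with nothing in it that records which encoding was used. That’s also part of why detection is hard – about the best a program can do without a third-party library is look for a Unicode byte order mark (BOM) at the start of the file, and a code page file like mine doesn’t have one. The sketch below shows that BOM check, mostly to illustrate its limits (it assumes System.IO and System.Text are in scope):

// Only detects encodings that write a byte order mark - returns null otherwise
private static Encoding DetectEncodingFromBom(string path)
{
    var bom = new byte[4];
    using (var stream = File.OpenRead(path))
    {
        stream.Read(bom, 0, 4);
    }
 
    if (bom[0] == 0xEF && bom[1] == 0xBB && bom[2] == 0xBF) return Encoding.UTF8;
    if (bom[0] == 0xFF && bom[1] == 0xFE && bom[2] == 0x00 && bom[3] == 0x00) return Encoding.UTF32; // UTF-32 little endian
    if (bom[0] == 0xFF && bom[1] == 0xFE) return Encoding.Unicode;           // UTF-16 little endian
    if (bom[0] == 0xFE && bom[1] == 0xFF) return Encoding.BigEndianUnicode;  // UTF-16 big endian
 
    // No BOM - could be UTF-8 without a BOM, or an "ANSI" code page file
    return null;
}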

There are a couple of different options open to me here:

Change the file’s encoding type and save it with a specified encoding – e.g. UTF-8, or Unicode.

This is very straightforward – I’ve chosen to just select the UTF-8 option from the dropdown list in Notepad’s ‘Save As…’ dialog box.

save as utf-8

This time when I run the code above, the names are displayed correctly on the console, as shown below.

correct display

Alternatively, try using the StreamReader overload to specify the encoding type.

I can use an overload where I specify the encoding type, which comes from System.Text.Encoding.

var streamReader = new StreamReader(myTeamNamesFile, encodingType);

But what values do these encoding types resolve to? Well that depends on whether I use the .NET Framework or .NET Core. I’ve listed the values below, and notice that the Encoding.Default is different depending on whether you use the .NET Framework or .NET Core.

Note in particular the values for “Encoding.Default”, because this is a special case.

System.Encoding value       .NET Framework Encoding Header Name    .NET Core Encoding Header Name
Encoding.Default            Windows-1252 (on my machine)           utf-8
Encoding.ASCII              us-ascii                               us-ascii
Encoding.BigEndianUnicode   utf-16BE                               utf-16BE
Encoding.UTF32              utf-32                                 utf-32
Encoding.UTF7               utf-7                                  utf-7
Encoding.UTF8               utf-8                                  utf-8
Encoding.Unicode            utf-16                                 utf-16

So let’s say I use Encoding.Default in my .NET Framework console application, as shown in the code snippet below.

var streamReader = new StreamReader(myTeamNamesFile, Encoding.Default);

And now my names are correctly rendered in the Console, as shown in the image below. This makes sense – the text file with my team names was saved with “ANSI” encoding, which we know actually corresponds to Windows Code Page 1252. The default encoding on my own Windows machine turns out to also be the 1252 encoding (as shown in the table above), so I’m instructing my StreamReader to use the 1252 encoding when reading a file which has been encoded as “ANSI” (also known as 1252). They match up, and the text displays correctly.

correct display

Problem solved, right? Well, no, not really.

Microsoft actually do not recommend using Encoding.Default. From docs.microsoft.com:

“Different computers can use different encodings as the default, and the default encoding can change on a single computer. If you use the Default encoding to encode and decode data streamed between computers or retrieved at different times on the same computer, it may translate that data incorrectly. In addition, the encoding returned by the Default property uses best-fit fallback to map unsupported characters to characters supported by the code page. For these reasons, using the default encoding is not recommended.”

If I target .NET Core instead of .NET Framework in my console application – with exactly the same code – I’m back to displaying question marks in my console text.

wrongnames

So even though telling the StreamReader in my .NET Framework console application to use Encoding.Default seems to work, it’s a case of it only working on my machine – it might not work on someone else’s machine. It certainly doesn’t work in .NET Core.

So it seems to me that saving my original text file as UTF-8 or Unicode is a better option.
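Once the file is saved as UTF-8, I can also be explicit about that in the code rather than relying on any default – a small tweak to the earlier snippet:

// State the encoding the file was saved with, instead of relying on Encoding.Default
// (which varies by machine and runtime)
using (var streamReader = new StreamReader(myTeamNamesFile, Encoding.UTF8))
{
    var line = streamReader.ReadToEnd();
    Console.WriteLine(line);
}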

And as a final reason to save the text file to UTF-8 or Unicode, let’s say I add a new team member, called Łukasz. If I try to save my file with ANSI encoding type, I get this warning:

unicode warning

If I press on and save the file as ANSI, the text for “Łukasz” is changed to “Lukasz” (note the change in the first character). But if I save the file as UTF-8 or Unicode, the name stays the same, including the initial “Ł”.

Wrapping up

It’s pretty common to be asked to read and process text files with non-ASCII characters, and even though .NET provides some really helpful APIs to assist with this, compatibility issues can still occur. This is a complex area, with variations across the .NET Framework and .NET Core runtimes.

I think the best solution is to change the encoding type of the original document to be Unicode or UTF-8, rather than ANSI (more correctly, Windows Code Page 1252). Telling the StreamReader to use Encoding.Default also worked for me, but it might not work on someone else’s machine with different defaults, leading to incorrect translations.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Azure, C# tip

Setting up relationships between work items on Azure DevOps boards, and using .NET to read these relationships

So here’s a question…

How do you set up relationships between work items, and display this relationship in Azure DevOps?

There’s a well known relationship between epics, features, user stories and tasks/bugs in the Agile process, but on the ‘Work Items’ screen, Azure DevOps lists them without showing the relationship – like in the screen below.

workliist

Display relationships using the Backlogs view in Azure DevOps Boards

The thing is that the Work Items screen just lists all the work items – I prefer to view my work items in the Backlogs screen which visually represents the relationship between them.

But how do you set up relationships between work items?

Let’s work an example through from the start. I set up a new project in Azure DevOps – once I created the project, I’m shown the Summary screen, and for me this looks like the screenshot below.

project home

On the left hand navigation menu, I click on the ‘Boards’ item, and when this expands, I select the ‘Backlogs’ sub-item. This presents me with a screen where I’ve a few options, like the one at the top left – ‘New Work Item’.

backlog view

But when I click on this ‘New Work Item’ button, the pop-up only allows me to create a work item with type of ‘User Story’ (as shown below). This is not what I want – I want to create an Epic.

user story

To do this, I have to change some defaults in my Project Settings. At the bottom left of my screen, I click on the ‘Project settings’ button, and then select the ‘Team configuration’ sub-menu item which sits under the ‘Boards’ menu heading. This shows me a screen like the one below.

team configuration

There’s a section on this page which shows me the navigation levels available to me – by default, I don’t have the Epics checkbox ticked. So I can just tick the box as shown below to make this available. No need to click save anywhere – that setting is automatically saved back to the cloud.

backlogs with epic

Now if I go back to the ‘Backlogs’ menu item under ‘Boards’ in my project’s left hand navigation menu, I need to select the dropdown list in the top right of the screen – my default setting here is ‘Stories’, but I can open the menu and now choose ‘Epics’ instead.

backlogs with epics dropdown

Now when I click on the ‘New Work Item’ button, I can create an Epic, and enter in the epic’s title, as shown below.

my epic title 2

And I’ve created my first epic in an Azure DevOps Board!

Ok, but what about nesting other items under that Epic?

There are a few different ways, but it’s straightforward (when you know how) – the way I like to do this is by selecting the ‘+’ button on the right hand side of my Epic. If you hover over this ‘+’ button, a tooltip appears that says ‘Add Feature’, and clicking on the button does exactly that.

add feature hover

A large dialog appears once you’ve clicked ‘+’ where you can add feature details – and note that in the bottom right of this dialog, there’s a ‘Related Work’ section, that shows the Epic we previously created as a parent.

new features

After clicking the blue ‘Save & Close’ button on the top right of the New Feature dialog, you’ll be taken back to the project board’s Backlog view, and you can see the feature that you just created below the epic we created previously, and it’s indented one place to the right to visually represent the parent-child relationship, as shown below.

add user story dropdown

And if you hover over the ‘+’ button to the left of the feature you just created, you’ll see the hint that this button now allows you to create a new user story. So if you click on ‘+’, you’ll have a similar experience to before, except the dialog that pops up is for a work item type of ‘User Story’. And you can see the relationship between this and the parent feature again by looking in the bottom right corner of the dialog, in the ‘Related Work’ section.

my new user story

And just to finish off this section, when you save that user story you’ll be taken to the backlog screen, and again see the user story sitting below its parent feature, indented one place to the right, as shown below. From this user story, you can click on the ‘+’ button on the left, and this time you’ve got a couple of options – either create a bug or a task with that user story as a parent.

show task and bug

I went ahead and created a task and a bug – the experience of creating them is identical to before, where a dialog pops up for the type you select, and any existing related work is detailed in the bottom right of the dialog box. So the image below shows my 5 new work items (an epic, a feature, a story, a task and a bug), and it’s easy to see the relationship between them by how they’re indented relative to each other.

indented

What about getting these items in .NET – how do I find out what items they are related to?

I’ve previously written about creating Azure DevOps work items using the .NET framework, and you can re-apply some of the same principles to read work items into .NET objects.

I created a .NET Framework console app and installed the required NuGet packages using:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

Then I used the code below to read back information about item 64 in my backlog – this is a user story which has a parent feature, and two children – a task and a bug.

So I expected the code below to tell me that there was a list of three relations in the workItem.Relations property.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // uh oh - reports there are zero relations!
        }
    }
}

But this code doesn’t show relationships – why isn’t it working?

It turns out that the GetWorkItemAsync method doesn’t return relations by default. Instead, it has an overload where you can specify what extra information to return using a WorkItemExpand enumeration. In the code below I’ve chosen to return everything using:

expand: WorkItemExpand.All

But if I only wanted to return relations I could use:

expand: WorkItemExpand.Relations

The code below now correctly reports there are three items related to workitem 64.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId, expand: WorkItemExpand.All).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // now correctly reports there are 3 relations
        }
    }
}

The relations list now correctly reports the three related work items – the parent feature, the child task and the child bug.
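
If you want to inspect what those relations actually are, the short sketch below (reusing the workItem variable from the code above) loops through the Relations collection and writes out each link’s type and target URL. In my experience the parent link is reported with the reference name ‘System.LinkTypes.Hierarchy-Reverse’ and child links with ‘System.LinkTypes.Hierarchy-Forward’, but treat those names as an assumption and check the values your own organization returns.

// Sketch: list the link type and target of each relation on the work item.
// Reuses the workItem variable from the previous code sample.
foreach (var relation in workItem.Relations)
{
    Console.WriteLine($"Link type: {relation.Rel}"); // e.g. System.LinkTypes.Hierarchy-Forward for a child
    Console.WriteLine($"Target:    {relation.Url}"); // REST URL of the related work item
}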

Wrapping up

This post has been about how to create work items with a hierarchical relationship using the Azure DevOps web user interface, and how to view them using the Backlog view in Azure DevOps Boards. I’ve also written about how to read these items and the relationships between them using the .NET Framework – I hope this helps!


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Azure

Load your work items into Azure DevOps Boards with .NET

This post is about how to write a .NET application to move work items from another source (e.g. JIRA, Excel etc.) into Azure Boards in Azure DevOps, and about a NuGet package I’ve built to hopefully make it a bit easier for anyone else doing this as well.

So here’s a problem…

Let’s say you’ve convinced your boss to move your projects to Azure DevOps – great! You’re happy, and your team are happy, but before you can really start, there’s still some work to be done – migrating all the historical project data from your existing company systems…

Maybe your company has its own custom story/issue/bug tracking system (maybe it’s JIRA, maybe it’s Mantis, or something else), and you don’t want to lose or archive all that valuable content. You want to load all that content in your project’s Azure Board as well – how do you do that?

Use .NET with Azure Boards to solve this problem

I had exactly this problem recently – my project’s history was exported into one big CSV file, and I needed to get it into Azure Boards. There were loads of fields I needed to keep, and I didn’t want to lose all of that history…

…so I ‘.NET’ted  my way out of trouble.

A bit of searching on the internet also led me to the option of bulk loading using Excel and the TFS Standalone Office Integration pack, but I’m a programmer and I prefer the flexibility of using code. Though, y’know, YMMV.

excel link

First I created a .NET Framework console application, and added a couple of NuGet packages for Azure DevOps:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

These are both projects that target .NET Framework, so I can’t use .NET Core for this yet.

With these included in my application, I now have access to objects which allow me to connect to Azure DevOps through .NET, and also connect to a work item client that allows me to perform create/read/update/delete operations on work items in my project’s board.

It’s pretty easy to load up my project history CSV into a list in a .NET application, so I knew I had all the puzzle pieces to solve this problem – I just needed to put them together.

In order to connect to Azure DevOps and add items using .NET, I used:

  • The name of the project I want to add work items to – my project codename is “Corvette”
  • The URL of my Azure DevOps instance – https://dev.azure.com/jeremylindsay
  • My personal access token.

If you’ve not generated a personal access token in Azure DevOps before, check this link out for details on how to do it – it’s really straightforward from the Azure DevOps portal:

https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=vsts

I can now use the code below to connect to Azure DevOps and create a work item client.

var uri = new Uri("https://dev.azure.com/jeremylindsay");
var personalAccessToken = "[***my access token***]";
var projectName = "Corvette";
 
var credentials = new VssBasicCredential("", personalAccessToken);
 
var connection = new VssConnection(uri, credentials);
var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
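
As an aside, the samples in this post hard-code the personal access token for clarity, but you might prefer to read it from an environment variable instead – a minimal sketch is below, where AZURE_DEVOPS_PAT is just a variable name I’ve chosen for illustration.

// Sketch: read the personal access token from an environment variable
// instead of hard-coding it. AZURE_DEVOPS_PAT is an arbitrary name chosen here.
var personalAccessToken = Environment.GetEnvironmentVariable("AZURE_DEVOPS_PAT");
var credentials = new VssBasicCredential(string.Empty, personalAccessToken);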

Next, I need to create what is basically a list of name-value pairs describing each work item field I want to set (e.g. title, description etc.) and the value that I want to put in that field.

The link below describes the fields you can access through code:

https://docs.microsoft.com/en-us/azure/devops/reference/xml/reportable-fields-reference?view=vsts

It’s a little bit more complex than normal dictionaries or other key-value pair objects in .NET, but not that difficult. The work item client uses custom objects called JsonPatchDocuments and JsonPatchOperations. Also, the names of the fields are not intuitive out of the box, but given all that, I can still create a work item in .NET using the code below:

var bug = new JsonPatchDocument
{
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/System.Title",
        Value = "Spelling mistake on the home page"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.TCM.ReproSteps",
        Value = "Log in, look at the home page - there is a spelling mistake."
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Priority",
        Value = "1"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Severity",
        Value = "2 - High"
    }
};

Then I can add the bug to my Board with the code below:

var createdBug = workItemTrackingHttpClient.CreateWorkItemAsync(bug, projectName, "Bug").Result;
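
The call returns the created WorkItem, so a quick sanity check is to print out the Id that Azure DevOps assigned to it – a tiny example below.

// Quick check: print the Id that Azure DevOps assigned to the new bug.
Console.WriteLine($"Created bug with Id {createdBug.Id}");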

Now this works and is very flexible, but I think my code could be made more readable and easier to use. So I refactored the code, moved most of it into a library, and uploaded it to NuGet here. My refactoring is pretty simple – I’m not going to go into lots of detail on how I did it, but if you’re interested the code is up on GitHub here.

If you’d like to get this package, you can use the command below:

Install-Package AzureDevOpsBoardsCustomWorkItemObjects -pre

This package depends on the two NuGet packages I referred to earlier in this post, so they’ll be added automatically if you install my NuGet package.

This allows us to instantiate a bug object in a way that looks much more like creating a normal POCO, as shown below:

var bug = new AzureDevOpsBug
{
    Title = "Spelling mistake on the home page",
    ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
    Priority = AzureDevOpsWorkItemPriority.Medium,
    Severity = AzureDevOpsWorkItemSeverity.Low,
    AssignedTo = "Jeremy Lindsay",
    Comment = "First comment from me",
    Activity = "Development",
    AcceptanceCriteria = "This is the acceptance criteria",
    SystemInformation = "This is the system information",
    Effort = 13,
    Tag = "Cosmetic; UI Only"
};

And to push this bug to my Azure Board, I can use the code below which is a little simpler than what I wrote previously.

using AzureDevOpsCustomObjects;
using AzureDevOpsCustomObjects.Enumerations;
using AzureDevOpsCustomObjects.WorkItems;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
            const string projectName = "Corvette";
 
            var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
            var bug = new AzureDevOpsBug
            {
                Title = "Spelling mistake on the home page",
                ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
                Priority = AzureDevOpsWorkItemPriority.Medium,
                Severity = AzureDevOpsWorkItemSeverity.Low,
                AssignedTo = "Jeremy Lindsay",
                Comment = "First comment from me",
                Activity = "Development",
                AcceptanceCriteria = "This is the acceptance criteria",
                SystemInformation = "This is the system information",
                Effort = 13,
                Tag = "Cosmetic; UI Only"
            };
 
            var createdBug = workItemCreator.Create(bug);
        }
    }
}

I’ve chosen to instantiate the bug with hard-coded text in the example above for clarity – but obviously you can instantiate the POCO any way you like, for example from a database, or perhaps parsing data out of a CSV file.
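
For example, here’s a rough sketch of how rows from a CSV export could be mapped onto the AzureDevOpsBug POCO and pushed to the Board – note that the file path and column layout (title, repro steps, assignee) are entirely hypothetical, and splitting on commas is deliberately naive, so you’d want a proper CSV parser for real data.

// Rough sketch: map each row of a (hypothetical) CSV export to an AzureDevOpsBug
// and push it to the Board. The column order is an assumption, and splitting on
// commas won't cope with quoted fields - use a real CSV parser for production data.
foreach (var line in System.IO.File.ReadAllLines(@"C:\temp\exported-bugs.csv"))
{
    var columns = line.Split(',');

    var importedBug = new AzureDevOpsBug
    {
        Title = columns[0],
        ReproSteps = columns[1],
        AssignedTo = columns[2]
    };

    workItemCreator.Create(importedBug);
}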

Anyway, the image below shows the bug added to my Azure Board.

bug

Of course, Bugs are not the only type of work item – let’s say I want to add Product Backlog Items too. And there are many, many different fields used in Azure Boards, and I haven’t coded for all of them in my NuGet package. So:

  • I’ve added a Product Backlog Item object to my NuGet package,
  • I’ve made the creation method generic so it can detect the object type and work out what type of work item is being added to the Board,
  • I’ve made the work item objects extensible so users can add any fields which I haven’t coded for yet.

For example, the code below shows how to add a product backlog item and include a comment in the System.History field:

private static void Main(string[] args)
{
    const string uri = "https://dev.azure.com/jeremylindsay";
    const string personalAccessToken = "[[***my personal access token***]]";
    const string projectName = "Corvette";

    var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);

    var productBacklogItem = new AzureDevOpsProductBacklogItem
    {
        Title = "Add reports for how many users log in each day",
        Description = "Need a new report with log in statistics.",
        Priority = AzureDevOpsWorkItemPriority.Low,
        Severity = AzureDevOpsWorkItemSeverity.Low,
        AssignedTo = "Jeremy Lindsay",
        Activity = "Development",
        AcceptanceCriteria = "This is the acceptance criteria",
        SystemInformation = "This is the system information",
        Effort = 13,
        Tag = "Reporting; Users"
    };

    productBacklogItem.Add(
        new JsonPatchOperation
        {
            Path = "/fields/System.History",
            Value = "Comment from product owner."
        }
    );

    var createdBacklogItem = workItemCreator.Create(productBacklogItem);
}

Obviously I can change the code to allow addition of comments through a property in the AzureDevOpsProductBacklogItem POCO, but this is just an example to demonstrate how it can be done by adding a JsonPatchOperation.
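
Just to illustrate that idea (this is purely a sketch of one possible approach, not the actual package source), a write-only property like the one below could wrap the JsonPatchOperation internally so callers never need to know the field path:

// Purely illustrative - one way a comment-style property could wrap the
// JsonPatchOperation internally. This is not the actual package code.
public string History
{
    set
    {
        Add(new JsonPatchOperation
        {
            Operation = Operation.Add,
            Path = "/fields/System.History",
            Value = value
        });
    }
}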

The image below shows the product backlog item successfully added to my Azure Board.

bug

Wrapping up

The Boards component of Azure DevOps is a useful and effective way to track your team’s work items. And if you want to populate a new Board with a list of existing bugs or backlog items, you can do this with .NET. A lot of these functions aren’t new – they were available in VSTS – but it’s still nice to see these powerful functions and libraries continue to be supported. And hopefully the NuGet package I’ve created to assist in the process will be useful to some of you who are working through the same migration challenges that I am. Obviously this NuGet package can still be improved a lot – it just covers Backlog Items and Bugs right now, and it’d be better if it flagged which fields are read-only – but it’s good enough to meet minimum viable standards for me right now, and maybe it’ll be helpful for you too.


About me: I regularly post about Microsoft technologies like Azure and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!