.net core, Azure, Azure DevOps, Azure Pipelines

Everything as Code with Azure DevOps Pipelines: C#, ARM, and YAML: Part #3, resource groups and YAML templates

This series of posts is about how to use Azure DevOps and Pipelines to set up a basic web application, app service infrastructure, and CI/CD pipelines – with everything saved as code in our source code repository.

reference-architecture-diagram-2

Previously I’ve written about how to configure multi-stage pipelines in YAML – I created a simple skeleton of different stages for creating infrastructure and deploying a website to integration, test and demonstration environments. However, the skeleton YAML had no actual build/deploy functionality.

This time I’ll cover a couple of different topics:

  • Creating resource groups for each environment in our overall solution (integration, testing and demonstration).
  • Reducing code duplication by parameterising YAML templates, so that the main pipeline can call smaller YAML templates. This simplifies the overall structure.

As always, my code is available on GitHub here.

A simple YAML template

In the previous post, I created a series of skeleton stages in my pipeline that simply wrote text to the standard output using the echo command. Now I’d like my stages to start doing something useful – the first step in each stage is to create a resource group that can hold all the logically related pieces of infrastructure for each of the 3 environments (integration, test and demo).

It was pretty obvious that there was some similarity in the stages:

  • Build integration infrastructure and deploy the website to this
  • Build test infrastructure and deploy the website to this
  • Build demo infrastructure and deploy the website to this

This kind of repeating pattern is an obvious signal that the code could be parameterised and broken out into a re-usable block.

I’ve created a sample below of how I could do this – the block of code has 4 parameters:

  • name, for the name of the job;
  • vmImage, for the type of VM I want my agent to run on (e.g. Windows, Ubuntu, etc.);
  • displayName, to customise what I want the step to display in my pipeline;
  • resourceGroupName, for the name of the resource group.

Because I just want to change one thing at a time, the code below still just echoes text to standard output.

parameters:
  name: '' 
  vmImage: ''
  displayName: ''
  resourceGroupName: ''

jobs:
- job: ${{ parameters.name }}
  pool: 
    vmImage: ${{ parameters.vmImage }}
  steps:
  - script: echo create resource group for ${{ parameters.resourceGroupName }}
    displayName: ${{ parameters.displayName }}

And I can use this in my main pipeline file – azure-pipelines.yml – as shown below. It’s pretty intuitive – the “jobs” section has a “template” which is just the path to where I’ve saved the template in my source code repository, and then I list out the parameters I want to use for this job.

- stage: build_integration
  displayName: 'Build the integration environment'
  dependsOn: build
  jobs:
  - template: ci/templates/create-resource-group.yml
    parameters:
      name: create_infrastructure
      displayName: 'First step in building the integration infrastructure'
      vmImage: 'Ubuntu-16.04'
      resourceGroupName: integration

Using Tasks in Azure Pipelines – the Azure CLI task

I’ve updated my template in the code below – instead of simply writing what it’s supposed to do, it now uses an Azure CLI task to create a resource group.

The Azure CLI makes this kind of thing quite simple – you just need to tell it the name of your resource group, and where you want to host it. So if I wanted to give my resource group the name “development-rg” and host it in the “UK South” location, the command would be:

az group create -n development-rg -l uksouth

Open the pipeline in Azure DevOps as shown below, and place the cursor where you want to add the task. The “tasks” sidebar is open by default on the right hand side.

task1

In the “tasks” sidebar there’s a search box. I want to add an Azure CLI task so that’s the text I search for – it automatically filters and I’m left with only one type of task.

task2

After clicking on the “Azure CLI” task, the sidebar refreshes to show me what information I need to supply for this task:

  • the Azure subscription to use,
  • what kind of script to run (inline or a file), and
  • the script details (as shown below, I just put the simple resource group creation script inline)

task3

Finally, when I click the “Add” button at the bottom right of the screen, the YAML corresponding to the task is inserted into my pipeline.

It might not be positioned quite correctly – sometimes I have to hit tab to move it to the correct position. But the Azure DevOps editor helps out by putting a red wavy line under any elements it thinks are in the wrong place.

task4

I find this technique quite useful when I want to generate the YAML for a task, as it makes it easier for me to modify and parameterise it. I’d find it quite hard to write YAML for DevOps tasks from scratch, so I’ll take whatever help I can get.

I’ve tweaked the generated code to parameterise it, and included it as a task in my template below (highlighted in red). I always want to deploy my code to the “UK South” location so I’ve left that as an unparameterised value in the template.

parameters:
  name: '' 
  vmImage: ''
  displayName: ''
  resourceGroupName: ''
  subscription: ''

jobs:
- job: ${{ parameters.name }}
  pool: 
    vmImage: ${{ parameters.vmImage }}
  steps:
  - task: AzureCLI@1
    displayName: ${{ parameters.displayName }}
    inputs:
      azureSubscription: ${{ parameters.subscription }}
      scriptLocation: 'inlineScript'
      inlineScript: az group create -n ${{ parameters.resourceGroupName }}-rg -l uksouth

And as shown earlier, I can call it from my stage by just referencing the template location, and passing in all the parameters that need to be populated for the job to run.

trigger:
- master

variables:
  subscription: 'Visual Studio Professional with MSDN'

stages:

# previous stages...

- stage: build_integration
  displayName: 'Build the integration environment'
  dependsOn: build
  jobs:
  - template: ci/templates/create-resource-group.yml
    parameters:
      name: create_infrastructure
      displayName: 'First step in building the integration infrastructure'
      vmImage: 'Ubuntu-16.04'
      resourceGroupName: integration
      subscription: $(subscription)

# next stages...

And now we have a pipeline that builds the architecture shown below. Still no app service or website, but I’ll write about those in future posts.

resource-group-architecture-diagram

Ironically, there are now more lines of code in the solution than there would have been if I had just duplicated all the code! But that’s just because this is still pretty early skeleton code. As I build out the YAML to generate infrastructure and do more complex things, the value will become more apparent.

Service connections

Last note – if you take a fork of my repository and try to run this yourself, it’ll probably not run because my Azure subscription is called “Visual Studio Professional with MSDN” and your subscription is probably called something else. You could change the text of the variable in the azure-pipelines.yml file in your fork, or you could also set up a Service Connection with this name.

Setting one up is quite straightforward – hit “Project settings” which lives at the bottom left of your Azure DevOps project, which opens up a window with a list of settings. Select “Service connections” as shown below, and select the “Azure Resource Manager” option after clicking on “New service connection”.

serviceconnection1

This will open a dialog where you can specify the connection name (I chose to call mine “Visual Studio Professional with MSDN” but you could call it whatever you want), and you can also select your subscription from the dropdown list. I didn’t need to specify a resource group, and just hit OK. You might be asked for your Azure login and password during this process.

serviceconnection2

Whatever you decide to call your service connection is the text you can use in your pipeline to specify which Azure subscription to use.
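As a concrete sketch of how that name flows through (the connection name here is just the one I used – substitute your own), the service connection name is set as a pipeline variable and passed to the Azure CLI task:

```yaml
# The variable value must match the service connection name exactly
variables:
  subscription: 'Visual Studio Professional with MSDN'

# ...and the Azure CLI task uses it to authenticate against Azure
steps:
- task: AzureCLI@1
  inputs:
    azureSubscription: $(subscription)
    scriptLocation: 'inlineScript'
    inlineScript: az group list
```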

Wrapping up

There’s been a lot of words and pictures here to achieve only a little progress, but it’s a big foundational piece – instead of having a huge YAML pipeline which could become unmaintainable, we can break it up into logical parameterised code blocks. We’ve used the Azure CLI task as an example of how to do this. Next time I’ll write about deploying some ARM templates to create an Azure App Service in each of the three environments.

Azure, Azure DevOps, Azure Pipelines, YAML

Everything as Code with Azure DevOps Pipelines: C#, ARM, and YAML: Part #2 – multi-stage builds in YAML

I wrote a post recently introducing some of the work I’ve been doing with Azure DevOps, and ensuring my websites, tests, infrastructure and processes are stored as code in GitHub. This post stretches a bit beyond that introduction, and is about creating a multi-stage pipeline in YAML. I’m going to include lots of screenshots to help you follow along with what I’ve done.

I’m going to try to describe how to do small tasks in each of my next few posts – trying to do everything at once would just be overwhelming.

A multi-stage pipeline

In the last post I described a sample scenario – a client who wants a demonstration of the website I’m building for them, and who also values stability and system quality. I’ve suggested the architecture below:

reference-architecture-diagram-2

In order to implement this, I can imagine 8 stages in my pipeline:

  1. Build and Test the website code pushed to the source code repository;
  2. Build the Integration Environment;
  3. Deploy the website to the Integration Environment;
  4. Build the Test Environment;
  5. Deploy the website to the Test Environment;
  6. Build the Demo Environment;
  7. Deploy the website to the Demo Environment.

And as the 8th stage, I’d like a system quality test (perhaps vulnerability scanning or automated page accessibility checking) to complete after the website is deployed to the Integration Environment.
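Sketching those dependencies in YAML (the stage names here are placeholders – the real stage definitions come later in this post), the skeleton might look like this:

```yaml
# Skeleton of the stage dependency graph - jobs are omitted for brevity,
# and a real pipeline would need at least one job per stage.
stages:
- stage: build
- stage: build_integration
  dependsOn: build
- stage: deploy_to_integration
  dependsOn: build_integration
- stage: run_system_quality_tests
  dependsOn: deploy_to_integration   # the 8th stage forks off here
- stage: build_test
  dependsOn: deploy_to_integration
- stage: deploy_to_test
  dependsOn: build_test
- stage: build_demo
  dependsOn: deploy_to_test
- stage: deploy_to_demo
  dependsOn: build_demo
```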

Let’s see how we can create logical stages in the Azure Build Pipeline to manage each of these operations.

Creating the YAML skeleton in Azure DevOps

As I said earlier, I’m going to build this architecture a bit at a time. In this post, I’m only going to cover creating a skeleton of my build pipelines – creating a series of logically named stages with dependencies.

In the following posts I’ll start populating these stages with tasks like creating resource groups, building infrastructure, running tests and deploying websites to the infrastructure.

First let’s create the source code repository

I’ve created a publicly available GitHub repository called EverythingAsCode, which is where I’ll store all of my code.

I could have done this in Azure DevOps too, but I’ve chosen to work with GitHub this time.

Create a new project in Azure DevOps

The next step is for me to log into Azure DevOps and create a project. This is very straightforward – the normal pre-requisite is to set up an account and organisation, and log on to https://dev.azure.com/.

When I’ve logged in, I can see a “Create project” button at the top right of the screen, as shown below.

create project

This opens up a dialog window where I can specify the project name and its visibility.

create devops project

After clicking on “Create”, the project is created after a short while and I can see an empty template.

empty project

Follow the steps in the wizard to create, save and run a single stage Build Pipeline

I click on the Pipelines option in the left navigation menu and see the screen below:

first build pipeline

Remember from the first post in this series that I’m using the multi-stage pipeline preview feature – it’s very simple to switch this on and there’s instructions on how to do this in the first part.

multistagepipelines

I click on “Create Pipeline”, and I’m taken into a wizard to guide me through the process. The first step I’m asked to complete is to choose where my source code lives – I select GitHub because that’s where my “EverythingAsCode” repo lives, but there’s a good selection of alternative options as shown below:

where is your code

When I select the “GitHub” option, the next page shows me my GitHub repositories (I’m logged into GitHub so DevOps knows which ones belong to me). I select the “EverythingAsCode” repo.

select a repository

After selecting this repo, I’m redirected to GitHub where I’m asked to install the Azure Pipelines app for my repo – I scroll to the bottom and hit “Approve and Install”.

azurepipelines

After approving the install, I’m redirected back to my DevOps organisation to carry on with the Wizard.

configure your pipeline

I selected “Starter pipeline”, and I’m taken to the screen shown below – DevOps is showing me a YAML editor, with some simple steps that write text to the standard output.

review your yaml

I hit the “Save and run” button, and then I’m told that this YAML file will be saved in the root of my repository – this is great. My first pipeline is being saved as code in GitHub.

save and run

After I hit “Save and Run”, the code is pushed to my GitHub repo, and the sample pipeline starts running. After clicking on “Pipelines” in the left hand nav and selecting my pipeline, I can see the progress (as shown below).

first run

And if I click on the running job, I’m taken to another screen where I can see the individual steps within the job running. You can see in the screen below how the script has written “Hello, world” to the standard output.

build output

That’s my first fully coded pipeline completed.

github with pipeline

And finally, when I browse to the repository in GitHub, I can see that the YAML file has been saved as code in my source code repository, as shown above.

So that’s single stage – what about multi-stage?

Let’s edit our sample pipeline through the Azure DevOps YAML editor.

First I click on Pipelines in the left hand navigation menu, and see a list of pipelines for the project.

pipelines

The area in the red box above is clickable, so I click on it to see the pipeline detail, as shown below. On this detail page, there’s an “Edit” button in the top right (highlighted in a red box in the image below).

edit pipelines

After clicking on “Edit”, I’m taken to a screen which has a rich text editor that allows me to modify my YAML file.

edit yaml

So the starter pipeline was useful for testing that things were working, but it’s not really what I want. I pasted in the code shown below. This has just three stages right now (build the website, build the infrastructure, and deploy the website to the infrastructure), each of which presently just echoes text to standard output.

trigger:
- master

stages:
- stage: build
  displayName: 'Build and test the website'
  jobs:
  - job: run_build
    pool:
      vmImage: 'Ubuntu 16.04'
    steps:
    - script: echo Build
      displayName: 'First step in building the website skeleton'

- stage: build_integration
  displayName: 'Build the integration environment'
  dependsOn: build
  jobs:
  - job: create_infrastructure
    pool:
      vmImage: 'Ubuntu 16.04'
    steps:
    - script: echo Build Integration Infrastructure
      displayName: 'First step in building the integration infrastructure'

- stage: deploy_to_integration
  displayName: 'Deploy the website to the integration environment'
  dependsOn: build_integration
  jobs:
  - job: deploy_artefacts_to_integration
    pool:
      vmImage: 'Ubuntu 16.04'
    steps:
    - script: echo Deploy Website to Integration
      displayName: 'First step in deploying the website to integration'

And when I save and run this, I can see that my pipeline has changed in the screen below – I now have three distinct stages, and I can populate each of these with tasks like creating resource groups and deploying infrastructure.

building with 3 stages

Let’s take this further – in addition to the stages for building and deploying to the integration environment, I mentioned earlier that I’d also like stages for the Test and Demo environments.

I’ve added code into the azure-pipelines.yml file which is available in GitHub – rather than posting pages and pages of code, I recommend you check it out over at GitHub.

When I run my updated code through Azure DevOps, I can now see many more stages in my pipeline, as shown below.

lots more stages

So far it’s a very linear series of stages. But we’ve only done 7 of our 8 stages – what about the 8th, the one where I wanted to run my system quality tests? I want that to depend on the third stage (where I deploy my website to the integration infrastructure) but I don’t want any other stage to depend on this 8th stage (yet, anyway).

It turns out this is quite easy – I can just add in another stage to the YAML, specify what stage it depends on (highlighted in red in the code sample below), and make sure that no other stage depends on this one.

- stage: run_system_quality_tests
  displayName: 'Run the system quality tests'
  dependsOn: deploy_to_integration
  jobs:
  - job: run_non_functional_tests
    pool:
      vmImage: 'Ubuntu 16.04'
    steps:
    - script: echo Run system quality tests
      displayName: 'Running the system quality tests'

And now my pipeline has two stages after deploying the website to integration, with one being an end point to that fork, and the other continuing on to build and deploy to the Test and Demo instances.

added system quality step

Wrapping up

This post has been about creating multi-stage YAML for an Azure Build Pipeline. Next time I’ll write about how to populate these stages with useful Azure tasks.

.net, .net core, Azure, Azure DevOps, Azure Pipelines, Web Development

Everything as Code with Azure DevOps Pipelines: C#, ARM, and YAML: Part #1

Like a lot of developers, I’ve been using Azure DevOps to manage my CI/CD pipelines. I think (or I hope anyway) most developers now are using a continuous integration process – commit to a code repository like GitHub, which has a hook into a tool (like DevOps Build Pipelines) which checks every change to make sure it compiles, and doesn’t break any tests.

Just a note – this post doesn’t have any code, it’s more of an introduction to how I’ve found Azure Pipelines can make powerful improvements to a development process. It also looks at one of the new preview features, multi-stage build pipelines. I’ll expand on individual parts of this with more code in later posts.

Azure Pipelines have traditionally been split into two types – build and release pipelines.

oldpipelines

I like to think of the split in this way:

Build Pipelines

I’ve used these for Continuous Integration (CI) activities, like compiling code, running automated tests, creating and publishing build artifacts. These pipelines can be saved as YAML – a human readable serialisation language – and pushed to a source code repository. This means that one developer can pull another developer’s CI code and run exactly the same pipeline without having to write any code. One limitation is that Build Pipelines are a simple list of tasks/jobs – historically it’s not been possible to split these instructions into separate stages which can depend on one or more conditions (though later in this post, I’ll write more about a preview feature which introduces this capability).
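As a flavour of what that YAML looks like, a minimal CI Build Pipeline for a .NET Core solution might be something like the sketch below – the exact steps, task versions and project paths will vary from project to project:

```yaml
# Minimal CI sketch: compile and run tests on every push to master
trigger:
- master

pool:
  vmImage: 'Ubuntu 16.04'

steps:
- script: dotnet build --configuration Release
  displayName: 'Compile the solution'
- script: dotnet test
  displayName: 'Run the automated tests'
```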

Release Pipelines

I’ve used these for Continuous Deployment (CD) activities, like creating environments and deploying artifacts created during the Build Pipeline process to these environments. Azure Release Pipelines can be broken up into stages with lots of customized input and output conditions, e.g. you could choose to not allow artifacts to be deployed to an environment until a manual approval process is completed. At the time of writing this, Release Pipelines cannot be saved as YAML.

What does this mean in practice?

Let’s make up a simple scenario.

A client has defined a website that they would like my team to develop, and would like to see a demonstration of progress to a small group of stakeholders at the end of each two-week iteration. They’ve also said that they care about good practice – site stability and security are important aspects of system quality. If the demonstration validates their concept at the end of the initial phase, they’ll consider opening the application up to a wider audience on an environment designed to handle a lot more user load.

Environments

The pattern I often use for promoting code through environments is:

Development machines:

  • Individual developers work on their own machine (which can be either physical or virtual) and push code from here to branches in the source code repository.
  • These branches are peer reviewed and, subject to passing review, are merged into the master branch (which is my preference for this scenario – YMMV).

Integration environment (a.k.a. Development Environment, or Dev-Int):

  • If the Build Pipeline has completed successfully (e.g. code compiles and no tests break), then the Release Pipeline deploys the build artifacts to the Integration Environment (it’s critical that these artifacts are used all the way through the deployment process as these are the ones that have been tested).
  • This environment is pretty unstable – code is frequently pushed here. In my world, it’s most typically used by developers who want to check that their code works as expected somewhere other than their machine. It’s not really something that testers or demonstrators would want to use.
  • I also like to run vulnerability scanning software like OWASP ZAP regularly on this environment to highlight any security issues, or run a baseline accessibility check using something like Pa11y.

Test environment:

  • The same binary artifacts deployed to Integration (like a zipped up package of website files) are then deployed to the Test environment.
  • I’ve usually set up this environment for testers who use it for any manual testing processes. Obviously user journey testing is automated as much as possible, but sometimes manual testing is still required – e.g. for further accessibility testing.

Demonstration environment:

  • I only push to this environment when I’m pretty confident that the code does what I expect and I’m happy to present it to a room full of people.

And in this scenario, if the client wishes to open up to a wider audience, I’d usually recommend the following two environments:

Pre-production a.k.a. QA (Quality Assurance), or Staging

  • Traditionally security checks (e.g. penetration tests) are run on this environment first, as are final performance tests.
  • This is the last environment before Production, and the infrastructure should mirror the infrastructure on Production so that results of any tests here are indicative of behaviour on Production.

Production a.k.a. Live

  • This is the most stable environment, and where customers will spend most of their time when using the application.
  • Often there’ll also be a mirror of this environment for ‘Disaster Recovery’ (DR) purposes.

Obviously different teams will have different and more customized needs – for example, sometimes teams aren’t able to deploy more frequently than once every sprint. If an emergency bug fix is required it’s useful to have a separate environment to allow these bug fixes to be tested before production, without disrupting the development team’s deployment process.

Do we always need all of these environments?

The environments used depend on the user needs – there’s no strategy which works in all cases. For our simple fictitious case, I think we only need Integration, Testing and Demonstration.

Here’s a high level process using the Microsoft stack that I could use to help meet the client’s initial need (only deploying as far as a demonstration platform):

reference-architecture-diagram-2

  • Developers build a website matching client needs, often pushing new code and features to a source code repository (I usually go with either GitHub or Azure DevOps Repos).
  • Code is compiled and tested using Azure DevOps Pipelines, and then the tested artifacts are deployed to:
    • The Integration Environment, then
    • The Testing Environment, and then
    • The Demonstration Environment.
  • Each one of these environments lives in its own Azure Resource Group with identical infrastructure (e.g. web farm, web application, database and monitoring capabilities). These are built using Azure Resource Manager (ARM) templates.

Using Azure Pipelines, I can create a Release Pipeline to build artifacts and deploy to the three environments above in three separate stages, as shown in the image below.

stages8

But as mentioned previously, a limitation is that this Release Pipeline doesn’t exist as code in my source code repository.

Fancy All New Multi-Stage Pipelines

But recently Microsoft have introduced a preview feature, where Build Pipelines have been renamed to “Pipelines” and gained some new functions.

If you want to see these in your Azure DevOps instance, log into your DevOps instance on Azure, and head over to the menu in the top right – select “Preview features”:

pipelinemenu

In the dialog window that appears, turn on the “Multi-stage Pipelines” option, highlighted in the image below:

multistagepipelines

Now your DevOps pipeline menu will look like the one below – note how the “Builds” sub-menu item has been renamed to Pipelines:

newpipelines

Now I’m able to use YAML to not only capture individual build steps, but also to package them up into stages. The image below shows how I’ve started to mirror the Release Pipeline process above using YAML – I’ve built and deployed to the integration and testing environments.

build stages

I’ve also shown a fork in the process where I can run my OWASP ZAP vulnerability scanning tool after the site has been deployed on integration, at the same time as the Testing environment is being built and having artefacts deployed to it. The image below shows the tests that have failed and how they’re reported – I can select individual tests and add them as Bugs to Azure DevOps Boards.

failing tests

Microsoft have supplied some example YAML to help developers get started:

  • A simple Build -> Test -> Staging -> Production scenario.
  • A scenario with a stage that on completion triggers two stages, which then are both required for the final stage.
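That second scenario works because dependsOn can take a list of stages – a sketch might look like this (the stage names are made up for illustration):

```yaml
stages:
- stage: initial
- stage: fan_out_a
  dependsOn: initial
- stage: fan_out_b
  dependsOn: initial
- stage: final
  # 'final' only runs once both parallel stages have completed
  dependsOn:
  - fan_out_a
  - fan_out_b
```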

It’s a huge process improvement to be able to have my website source code and tests as C#, my infrastructure code as ARM templates, and my pipeline code as YAML.

For example, if someone deleted the pipeline (either accidentally or deliberately), it’s not really a big deal – we can recreate everything again in a matter of minutes. Or if the pipeline was acting in an unexpected way, I could spin up a duplicate of the pipeline and debug it safely away from production.

Current Limitations

Multi-stage pipelines are a preview feature in Azure DevOps, and personally I wouldn’t risk this with every production application yet. One major missing feature is the ability to manually approve progression from one stage to another, though I understand this is on the team’s backlog.

Wrapping Up

I really like how everything can live in source code – my full build and release process, both CI and CD, are captured in human readable YAML. This is incredibly powerful – code, tests, infrastructure and the CI/CD pipeline can be created as a template and new projects can be spun up in minutes rather than days. Additionally, I’m able to create and tear down containers which cover some overall system quality testing aspects, for example using the OWASP ZAP container to scan for vulnerabilities on the integration environment website.

As I mentioned at the start of this post, I’ll be writing more over the coming weeks about the example scenario in this post – with topics such as:

  • writing a multi-stage pipeline in YAML to create resource groups to encapsulate resources for each environment;
  • how to deploy infrastructure using ARM templates in each of the stages;
  • how to deploy artifacts to the infrastructure created at each stage;
  • using the OWASP ZAP tool to scan for vulnerabilities in the integration website, and the YAML code to do this.

 

.net, .net core, Azure, Azure DevOps, Azure DevOps Boards, Flurl

How to delete a TestCase from Azure DevOps boards using .NET, Flurl and the Azure DevOps Restful API

So here’s a problem…

I’ve been working with Azure DevOps and finding it really great – but every now and again I hit a roadblock and feel like I’m on the edge of what’s possible with the platform.

For instance – I’ve loaded work items and test cases into my development instance for some analysis before going to my production instance, and now I’d like to delete all of them. Sounds like a simple thing to do from the UI – I’ve selected multiple work items before, clicked on the ellipsis on one item and selected ‘Delete’ from the menu that appears.

bulk delete

Except that sometimes it doesn’t work out like that. Where’s the delete option gone in the menu below?

bulk delete fail

Right now you can only delete one test case at a time through the Azure DevOps web user interface

You can only delete test cases one at a time through the Azure DevOps web UI at the moment, and you can’t do it from the WorkItem list view. To delete a test case, select it to display its detailed view, and then select the ellipsis at the top right of this view to reveal an action menu (as shown below). Select the option with the text ‘Permanently delete’.

delete test case

Then you’ll be presented with a dialog asking you to confirm the deletion and enter the Test Case ID to confirm your intent.

perm delete

This is a lot of work if you’ve got a few (or a few hundred) test cases to delete.

Fortunately, this isn’t the only option available – I can .NET my way out of trouble.

You can also use .NET, Flurl and the Azure DevOps Restful API to delete test cases

Azure DevOps also provides a Restful interface which has comprehensive coverage of the functions available through the web UI – and sometimes a bit more. This is one of those instances where using the Restful API gives me the flexibility that I’m looking for.

I’ve previously written about using libraries with .NET to simplify accessing Restful interfaces – one of my favourite libraries is Flurl, because it makes it really easy for me to construct a URI endpoint and call Restful verbs in a fluent way.

The code below shows a .NET method where I’ve called the Delete verb on a Restful endpoint – this allows me to delete test cases by Id from my Azure DevOps Board.

using System.Net.Http;
using System.Threading.Tasks;
using Flurl;
using Flurl.Http;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    public class TestCaseProcessor
    {
        public static async Task<HttpResponseMessage> Delete(int id, string projectUri, string projectName,
            string personalAccessToken)
        {
            var deleteUri = Url.Combine(projectUri, projectName, "_apis/test/testcases/", id.ToString(),
                "?api-version=5.0-preview.1");
 
            var responseMessage = await deleteUri
                .WithBasicAuth(string.Empty, personalAccessToken)
                .DeleteAsync();
 
            return responseMessage;
        }
    }
}

And it’s really easy to call this method, as shown in the code below – in addition to the test case ID, I just need to provide my Azure DevOps URI, my project name and a personal access token.

using System;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string testToken = "[[my personal access token]]";
            const string projectName = "Corvette";
            const int testCaseToDelete = 124;
 
            var responseMessage = TestCaseProcessor.Delete(testCaseToDelete, uri, projectName, testToken).Result;
 
            Console.WriteLine("Response code: " + responseMessage.StatusCode);
        }
    }
}

So now if I want to delete test cases in bulk, I just need to iterate through the list of IDs and call this method for each test case ID – which is much faster for me than deleting many test cases through the UI.

Wrapping up

Deleting test cases from Azure DevOps is a bit more difficult through the web UI than deleting other types of WorkItems – fortunately a Restful interface is available, and I can use it from a .NET application to delete test cases quickly and easily. Hopefully this is useful to anyone who’s working with Azure DevOps Boards and needs to delete test cases.