.net, .net core, Azure, Azure DevOps, Azure DevOps Boards, Flurl

How to delete a TestCase from Azure DevOps boards using .NET, Flurl and the Azure DevOps Restful API

So here’s a problem…

I’ve been working with Azure DevOps and finding it really great – but every now and again I hit a roadblock and feel like I’m on the edge of what’s possible with the platform.

For instance – I’ve loaded work items and test cases into my development instance for some analysis before going to my production instance, and now I’d like to delete all of them. Sounds like a simple thing to do from the UI – I’ve selected multiple work items before, clicked on the ellipsis on one item and selected ‘Delete’ from the menu that appears.

bulk delete

Except that sometimes it doesn’t work out like that. Where’s the delete option gone in the menu below?

bulk delete fail

Right now you can only delete one test case at a time through the Azure DevOps web user interface

You can only delete test cases one at a time through the Azure DevOps web UI at the moment, and you can’t do it from the WorkItem list view. To delete a test case, select it to display its detailed view, and then select the ellipsis at the top right of this view to reveal an action menu (as shown below). From this menu you can select the option with the text ‘Permanently delete’.

delete test case

Then you’ll be presented with a dialog asking you to confirm the deletion and enter the Test Case ID to confirm your intent.

perm delete

This is a lot of work if you’ve got a few (or a few hundred) test cases to delete.

Fortunately, this isn’t the only option available – I can .NET my way out of trouble.

You also can use .NET, Flurl and the Azure DevOps Restful API to delete test cases

Azure DevOps also provides a Restful interface which has comprehensive coverage of the functions available through the web UI – and sometimes a bit more. This is one of those instances where using the Restful API gives me the flexibility that I’m looking for.

I’ve previously written about using libraries with .NET to simplify accessing Restful interfaces – one of my favourite libraries is Flurl, because it makes it really easy for me to construct a URI endpoint and call Restful verbs in a fluent way.

The code below shows a .NET method where I’ve called the Delete verb on a Restful endpoint – this allows me to delete test cases by Id from my Azure DevOps Board.

using System.Net.Http;
using System.Threading.Tasks;
using Flurl;
using Flurl.Http;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    public class TestCaseProcessor
    {
        public static async Task<HttpResponseMessage> Delete(int id, string projectUri, string projectName,
            string personalAccessToken)
        {
            // build the endpoint URI for the test case, e.g. {projectUri}/{projectName}/_apis/test/testcases/{id}?api-version=5.0-preview.1
            var deleteUri = Url.Combine(projectUri, projectName, "_apis/test/testcases/", id.ToString(),
                "?api-version=5.0-preview.1");
 
            // call the DELETE verb on the endpoint, authenticating with a personal access token passed as the Basic auth password
            var responseMessage = await deleteUri
                .WithBasicAuth(string.Empty, personalAccessToken)
                .DeleteAsync();
 
            return responseMessage;
        }
    }
}

And it’s really easy to call this method, as shown in the code below – in addition to the test case ID, I just need to provide my Azure DevOps URI, my project name and a personal access token.

using System;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string testToken = "[[my personal access token]]";
            const string projectName = "Corvette";
            const int testCaseToDelete = 124;
 
            var responseMessage = TestCaseProcessor.Delete(testCaseToDelete, uri, projectName, testToken).Result;
 
            Console.WriteLine("Response code: " + responseMessage.StatusCode);
        }
    }
}

So now if I want to delete test cases in bulk, I just need to iterate through the list of IDs and call this method for each test case ID – which is much faster for me than deleting many test cases through the UI.
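
A minimal sketch of that loop is below – the list of test case IDs is hypothetical, and I’m assuming an async calling method (otherwise calling .Result, as in the console example above, works too):

// hypothetical list of test case IDs to remove
var testCaseIdsToDelete = new List<int> { 124, 125, 126 };
 
foreach (var testCaseId in testCaseIdsToDelete)
{
    var responseMessage = await TestCaseProcessor.Delete(testCaseId, uri, projectName, testToken);
    Console.WriteLine($"Deleted test case {testCaseId}: {responseMessage.StatusCode}");
}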

Wrapping up

Deleting test cases from Azure DevOps is a bit more difficult through the web UI than deleting other types of WorkItems – fortunately the Restful interface is available, and I can use it from a .NET application to delete test cases quickly and easily. Hopefully this is useful to anyone who’s working with Azure DevOps Boards and needs to delete test cases.

.net, .net core, Flurl, Polly

Using Polly and Flurl to improve your website

This post is about how to use The Polly Project to make a .NET website better. I use Flurl to consume Restful web services, so there’s some Flurl-specific code later on, but I hope this post is useful to anyone who’s interested in learning what Polly is, what it’s for and how it can help you.

So here’s a problem

Let’s pretend you run your business through a website, and part of your code calls out to a web service that another company supplies.

And, every once in a while, errors from this web service appear in your logs. Sometimes the HTTP status code is a 404 (not found), sometimes the code is a 503 (service unavailable), and other times you see a 504 (timeout). There’s no pattern, it goes away as quickly as it starts, and you’d really really like to get this fixed before customers start cancelling their subscriptions to your service.

You call up the business running the remote web service, and their answer is a bit… vague. Every so often they restart their web servers which takes their service down for a couple of seconds, and at certain times of the day they get spikes of traffic which causes their system to max out for up to 5 seconds at a time. They’re apologetic, and they expect to migrate to new, better infrastructure in about 6 months. But their only workaround is for you to re-query the service.

So you could be forgiven for going spare right now – this response doesn’t fix anything. This company is the only place you can get the data you need so you’re locked in. And you know your customers are seeing errors because it’s right there staring at you from your website logs. Asking your customers to ‘just hit refresh’ when they get an error is a great way to lose business and win a bad reputation.

You can use Polly to help solve this problem

When I first read about Polly a long while back, I was really interested but I wasn’t sure how I could apply it to the project I was working on.  What I wanted was to find a post that described a real world scenario that I could recognise and identify with, and how Polly would help with that.

Since then, I’ve worked on projects a little bit like the one I described above – one time when I raised a ticket to say that we were having intermittent problems with a web service, I was told that the workaround was ‘hit refresh’. And since there was a workaround, it was only going to be raised as a medium priority issue (which feels like a coded message for ‘we’re not even going to look at this’). This kind of thing drives me crazy, and it’s exactly the kind of problem that Polly can at least mitigate.

I’ve also met people who are doing really interesting work with hardware devices in .NET, and need to be able to handle hardware that can only deal with single threads – Polly allows the application to handle occasions when it doesn’t receive an acknowledgement from the hardware by waiting for a while and then retrying.
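
As a rough illustration of that hardware scenario, a policy like the sketch below waits and retries a few times when a command times out – the device object and its SendCommand method are hypothetical stand-ins for whatever hardware API is in use, and the timings are only examples:

var hardwareRetryPolicy = Policy
    .Handle<TimeoutException>()
    .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(250 * attempt));
 
// 'device' and 'SendCommand' are hypothetical - substitute your own hardware API here
hardwareRetryPolicy.Execute(() => device.SendCommand("READ"));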

Let’s get to some code

I’ve pushed all of the code below to a repo in my Github, so you can pull it locally and step through it yourself.

First, a couple of harnesses to simulate a flakey web-service

So I’ve written a simple (and really awful) web-service project to simulate random transient errors. The service is just meant to return what day it is, but it’ll only work about two times out of three. The rest of the time it’ll return either a 404 (Not Found), a 503 (Service Unavailable), or it’ll hang for 10 seconds and then return a 504 (Service timed out).

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
 
namespace WorldsWorstWebService.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class WeekDayController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            // Manufacture 404, 503 and 504 errors for about a third of all responses
            var randomNumber = new Random();
            var randomInteger = randomNumber.Next(0, 8);
 
            switch (randomInteger)
            {
                case 0:
                    Debug.WriteLine("Webservice:About to serve a 404...");
                    return StatusCode(StatusCodes.Status404NotFound);
 
                case 1:
                    Debug.WriteLine("Webservice:About to serve a 503...");
                    return StatusCode(StatusCodes.Status503ServiceUnavailable);
 
                case 2:
                    Debug.WriteLine("Webservice:Sleeping for 10 seconds then serving a 504...");
                    Thread.Sleep(10000);
                    Debug.WriteLine("Webservice:About to serve a 504...");
 
                    return StatusCode(StatusCodes.Status504GatewayTimeout);
                default:
                {
                    var formattedCustomObject = JsonConvert.SerializeObject(
                        new
                        {
                            WeekDay = DateTime.Today.DayOfWeek.ToString()
                        });
 
                    Debug.WriteLine("Webservice:About to correctly serve a 200 response");
 
                    return Ok(formattedCustomObject);
                }
            }
        }
    }
}

I’ve also written another web application project that consumes this service using Flurl.

If you’re interested in Flurl and Restful web services, I’ve written more about using it here.

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Flurl.Http;
using Microsoft.AspNetCore.Mvc;
using MyWebsite.Models;
 
namespace MyWebsite.Controllers
{
    public class HomeController : Controller
    {
        public async Task<IActionResult> Index()
        {
            try
            {
                var weekday = await "https://localhost:44357/api/weekday"
                    .GetJsonAsync<WeekdayModel>();
 
                Debug.WriteLine("[App]: successful");
 
                return View(weekday);
            }
            catch (Exception e)
            {
                Debug.WriteLine("[App]: Failed - " + e.Message);
                throw;
            }
        }
    }
}

So I carried out a simple experiment – I ran these projects and hit my website 20 times. I mostly got successful responses, but I still got a load of failures. I’ve pasted the debug log below.

[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 504 (Gateway Timeout): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: Failed - Call failed with status code 404 (Not Found): GET https://localhost:44357/api/weekday

So out of 20 page hits, my test web app failed 6 times – about a 30% failure rate. That’s pretty poor (and about consistent with what we expect from the flakey web service).

Let’s say I don’t control the behaviour of the web services upstream of my web app, so I can’t change the reason why my web app is failing, but let’s see if Polly allows me to reduce the number of failures that my web app users see.

Wiring up Polly

First let’s design some rules, also known as ‘policies’

So what’s a ‘policy’? Basically it’s just a rule that’ll help mitigate the intermittent problem.

For example – the web service frequently delivers 404 and 503 messages, but it’s back up again quickly. So a policy could be:

Retry Policy: When the web services returns an unsuccessful HTTP code, wait a second and try again. If it still fails, wait three seconds and try again, and if it still fails, then wait five more seconds and try one more time. If it fails after that, the service is dead and we need to deal with the error.

We also know that the web service hangs for 10 seconds before delivering a 504 timeout message. I don’t want my customers to wait for this long – after a couple of seconds I’d like my app to give up, and execute the ‘Retry Policy’ above.

Timeout Policy: When I’ve been waiting for a response for longer than 2 seconds, cut my losses and execute the Retry Policy.

Wrapping these policies together forms a ‘Policy Strategy’.

So the first step is to install the Polly nuget package to the web app project:

Install-Package Polly

Polly is an open source project hosted on Github, with a BSD licence. It’s also a member of the .NET Foundation.

So what would these policies look like in code? The timeout policy is like the code below, where we can just pass the number of seconds to wait as a parameter:

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(2);

There’s also an overload that accepts a delegate, and I’ve used that below to write some debug messages.

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(2, (context, timeSpan, task) =>
{
    Debug.WriteLine($"[App|Policy]: Timeout delegate fired after {timeSpan.Seconds} seconds");
    return Task.CompletedTask;
});

The retry policy is a little different from the timeout policy:

  • I first specify the conditions under which I should retry – there must be an unsuccessful HTTP status code, or there must be a timeout exception.
  • Then I can specify how to wait and retry – first wait 1 second before retrying, then wait 3 seconds, then wait 5 seconds.
  • Finally I’ve used the overload with a delegate to write comments to debug, as shown in the code below.
 
var retryPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .Or<TimeoutRejectedException>()
    .WaitAndRetryAsync(new[]
        {
            TimeSpan.FromSeconds(1),
            TimeSpan.FromSeconds(3),
            TimeSpan.FromSeconds(5)
        },
        (result, timeSpan, retryCount, context) =>
        {
            Debug.WriteLine($"[App|Policy]: Retry delegate fired, attempt {retryCount}");
        });

And I can bundle these policies together as a single policy strategy like this:

var policyStrategy = Policy.WrapAsync(RetryPolicy, TimeoutPolicy);

I’ve grouped these policies in their own class and pasted the code below.

public static class Policies
{
    private static TimeoutPolicy<HttpResponseMessage> TimeoutPolicy
    {
        get
        {
            return Policy.TimeoutAsync<HttpResponseMessage>(2, (context, timeSpan, task) =>
            {
                Debug.WriteLine($"[App|Policy]: Timeout delegate fired after {timeSpan.Seconds} seconds");
                return Task.CompletedTask;
            });
        }
    }
 
    private static RetryPolicy<HttpResponseMessage> RetryPolicy
    {
        get
        {
            return Policy
                .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
                .Or<TimeoutRejectedException>()
                .WaitAndRetryAsync(new[]
                    {
                        TimeSpan.FromSeconds(1),
                        TimeSpan.FromSeconds(3),
                        TimeSpan.FromSeconds(5)
                    },
                    (delegateResult, timeSpan, retryCount, context) =>
                    {
                        Debug.WriteLine(
                            $"[App|Policy]: Retry delegate fired, attempt {retryCount}");
                    });
        }
    }
 
    public static PolicyWrap<HttpResponseMessage> PolicyStrategy => Policy.WrapAsync(RetryPolicy, TimeoutPolicy);
}

Now I want to apply this Policy Strategy to every outgoing call to the 3rd party web service.

How do I apply these policies when I’m using Flurl?

One of the things I really like about using Flurl to consume 3rd party web services is that I don’t need to instantiate an HttpClient, or worry about running out of available sockets every time I make a call – Flurl handles all of this in the background for me.

But that also means it’s not immediately obvious how I can configure calls to the HttpClient used in the background so that my policy strategy is applied to each call.

Fortunately Flurl provides a way to do this by adding a few new classes to my web app project, and a configuration instruction. I can configure Flurl’s settings in my web app’s Startup file to make it use a different implementation of Flurl’s default HttpClientFactory (which overrides how HTTP messages are handled).

public void ConfigureServices(IServiceCollection services)
{
    //...other service configuration here
 
    FlurlHttp.Configure(settings => settings.HttpClientFactory = new PollyHttpClientFactory());
}

The PollyHttpClientFactory is an extension of Flurl’s default HttpClientFactory. This overrides how HttpMessages are handled, and instead uses our own PolicyHandler.

public class PollyHttpClientFactory : DefaultHttpClientFactory
{
    public override HttpMessageHandler CreateMessageHandler()
    {
        return new PolicyHandler
        {
            InnerHandler = base.CreateMessageHandler()
        };
    }
}

And the PolicyHandler is where we apply our rules (the policy strategy) to outgoing HTTP requests.

public class PolicyHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Policies.PolicyStrategy.ExecuteAsync(ct => base.SendAsync(request, ct), cancellationToken);
    }
}

Now let’s see if this improves things

With the policies applied to requests to the 3rd party web service, I repeated the earlier experiment and hit my application again 20 times.

[App]: successful
[App]: successful
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App]: successful
[App]: successful
[App]: successful

This time, my users would have experienced no application failures in those 20 page hits. But all those ‘[App|Policy]’ lines are the times that the web service failed and our policy kicked in to try again – which eventually led to a successful response from my web app.

In fact, I went on to hit the page 100 times and only saw two errors in total, so the total failure rate that my users experience now is at about 2% – way better than the 30% failure rate experienced originally.

Obviously this is a very contrived example – real world examples are likely to be a bit more complex. And your rules and policies will be different to mine. Instead of retrying, maybe you want to fall back to a different action (e.g. hit a different web service, pull from a cache etc.) – and Polly has its own fallback mechanism to do this. You’ll have to design your own rules and policies to handle the particular failure modes that you face.
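
As a rough illustration, a fallback policy in Polly looks something like the sketch below – the canned HTTP response here is just an assumption about what a sensible default might be for this weekday service, and in a real application it might come from a cache or a secondary service instead:

var fallbackPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .Or<TimeoutRejectedException>()
    .FallbackAsync(new HttpResponseMessage(HttpStatusCode.OK)
    {
        // hypothetical canned response to return when the real service can't be reached
        Content = new StringContent("{ \"WeekDay\": \"Unknown\" }")
    });

A policy like this could then be wrapped into the policy strategy alongside the timeout and retry policies with Policy.WrapAsync.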

Wrapping up

I had a couple of aims when writing this post – first of all, I wanted to come up with a couple of different scenarios for how Polly could be used in your application. I mostly work with web applications and web services, and I also like using Flurl for accessing these services, so that’s what this article focusses on. But I’ve just scratched the surface here – Polly can do way more than that. Check out the Polly Wiki to find out more about it, or look at the samples.

 


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, .net core, C# tip, Flurl

Comparing RestSharp and Flurl.Http while consuming a web service in .NET Core

Just before the holidays I was working on a .NET Core project that needed data available from some web services. I’ve done this a bunch of times previously, and always seem to spend a couple of hours writing code using the HttpClient object before remembering there are libraries out there that have done the heavy lifting for me.

So I thought I’d do a little write up of a couple of popular library options that I’ve used – RestSharp and Flurl. I find that I learn quickest from reading example code, so I’ve written sample code showing how to use both of these libraries with a few different publicly available APIs.

I’ll look at three different services in this post:

  • api.postcodes.io – no authentication required, uses GET and POST verbs
  • api.nasa.gov – authentication via an API key passed in the query string
  • api.github.com – Basic Authentication required to access private repo information

And as an architect, I’m sometimes asked how to get started (and sometimes ‘why did you choose library X instead of library Y?’), so I’ve wrapped up with a comparison and which library I like best right now.

Reading data using RestSharp

This is a very mature and well documented open source project (released under the Apache 2.0 licence), with the code available on Github. You can install the nuget package in your project using package manager with the command:

Install-Package RestSharp

First – using the GET verb with RestSharp.

Using HTTP GET to return data from a web service

Using Postcodes.io

I’ve been working with mapping software recently – some of my data sources don’t have latitude and longitude for locations, and instead they only have a UK postcode. Fortunately I can use the free Postcodes.io RESTful web API to determine a latitude and longitude for each of the postcode values. I can either just send a postcode using a GET request to get the corresponding geocode (latitude and longitude) back, or I can use a POST request to send a list of postcodes and get a list of geocodes back, which speeds things up a bit with bulk processing.

Let’s start with a simple example – using the GET verb for a single postcode. I can request a geocode corresponding to a postcode from the Postcodes.io service through a browser with a URL like the one below:

https://api.postcodes.io/postcodes/IP1 3JR

This service doesn’t require any authentication, and the code below shows how to use RestSharp and C# to get data using a GET request.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes/IP1 3JR
var getRequest = new RestRequest("postcodes/{postcode}");
getRequest.AddUrlSegment("postcode", "IP1 3JR");
 
// send the GET request and return an object which contains the API's JSON response
var singleGeocodeResponseContainer = client.Execute(getRequest);
 
// get the API's JSON response
var singleGeocodeResponse = singleGeocodeResponseContainer.Content;

The example above returns raw JSON content, which I can deserialise into a custom POCO, such as the one below.

public class GeocodeResponse
{
    public string Status { get; set; }
 
    public Result Result { get; set; }
}
 
public class Result
{
    public string Postcode { get; set; }
 
    public string Longitude { get; set; }
 
    public string Latitude { get; set; }
}

But I can do better than the code above – if I specify the GeocodeResponse type in the Execute method (as shown below), RestSharp uses the classes above and intelligently hydrates the POCO  from the raw JSON content returned:

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes/OX495NU
var getRequest = new RestRequest("postcodes/{postcode}");
getRequest.AddUrlSegment("postcode", "OX495NU");
 
// send the GET request and return an object which contains a strongly typed response
var singleGeocodeResponseContainer = client.Execute<GeocodeResponse>(getRequest);
 
// get the strongly typed response
var singleGeocodeResponse = singleGeocodeResponseContainer.Data;

Of course, not all APIs work in the same way, so here are another couple of examples of how to return data from different publicly available APIs.

NASA Astronomy Picture of the Day

This NASA API is also freely available, but slightly different from the Postcodes.io API in that it requires an API subscription key. NASA requires that the key is passed as a query string parameter, and RestSharp facilitates this with the AddQueryParameter method (as shown below).

This method of securing a service isn’t that unusual – goodreads.com/api also uses this method.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.nasa.gov/");
 
// specify the resource, e.g. https://api.nasa.gov/planetary/apod
var getRequest = new RestRequest("planetary/apod");
 
// Add the authentication key which NASA expects to be passed as a parameter
// This gives https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY
getRequest.AddQueryParameter("api_key", "DEMO_KEY");
 
// send the GET request and return an object which contains the API's JSON response
var pictureOfTheDayResponseContainer = client.Execute(getRequest);
 
// get the API's JSON response
var pictureOfTheDayJson  = pictureOfTheDayResponseContainer.Content;

Again, I could create a custom POCO corresponding to the JSON structure and populate an instance of this by passing the type with the Execute method.
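
A rough sketch of what that POCO might look like is below – I’ve only included a few of the fields that the astronomy picture of the day JSON contains, and the class and property names here are my own assumptions:

public class PictureOfTheDayResponse
{
    public string Title { get; set; }
 
    public string Explanation { get; set; }
 
    public string Url { get; set; }
}
 
// pass the type to Execute to get a strongly typed response instead of raw JSON
var typedResponseContainer = client.Execute<PictureOfTheDayResponse>(getRequest);
var pictureOfTheDay = typedResponseContainer.Data;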

Github’s API

The Github API will return public data without any authentication, but if I provide Basic Authentication data it will also return extra information relevant to me about my profile, such as information about my private repositories.

RestSharp allows us to set an Authenticator property to specify the userid and password.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.github.com/");
 
// pass in user id and password 
client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", "[[my password]]");
 
// specify the resource that requires authentication
// e.g. https://api.github.com/users/jeremylindsayni
var getRequest = new RestRequest("users/jeremylindsayni");
 
// send the GET request and return an object which contains the API's JSON response
var response = client.Execute(getRequest);

Obviously you shouldn’t hard code your password into your code – these are just examples of how to return data, they’re not meant to be best practices. You might want to store your password in an environment variable, or you could do even better and use Azure Key Vault – I’ve written about how to do that here and here.
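
For example, reading the password from an environment variable is a one line change – the variable name below is just an example:

// read the password from an environment variable rather than hard coding it in source
var gitHubPassword = Environment.GetEnvironmentVariable("GITHUB_PASSWORD");
 
client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", gitHubPassword);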

Using the POST verb to obtain data from a web service

The code in the previous examples refers to GET requests – a POST request is slightly more complex.

The api.postcodes.io service has a few different endpoints – the one I described earlier only finds geocode information for a single postcode – but I’m also able to post a JSON list of up to 100 postcodes, and get corresponding geocode information back as a JSON list. The JSON needs to be in the format below:

{
   "postcodes" : ["IP1 3JR", "M32 0JG"]
}

Normally I prefer to manipulate data in C# structures, so I can add my list of postcodes to the object below.

public class PostCodeCollection
{
    public List<string> postcodes { get; set; }
}

I’m able to create a POCO object with the data I want to post to the body of the POST request, and RestSharp will automatically convert it to JSON when I pass the object into the AddJsonBody method.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes
var postRequest = new RestRequest("postcodes", Method.POST, DataFormat.Json);
 
// instantiate and hydrate a POCO object with the list of postcodes we want geocode data for
var postcodes = new PostCodeCollection { postcodes = new List<string> { "IP1 3JR", "M32 0JG" } };
 
// add this POCO object to the request body, RestSharp automatically serialises it to JSON
postRequest.AddJsonBody(postcodes);
 
// send the POST request and return an object which contains JSON
var bulkGeocodeResponseContainer = client.Execute(postRequest);

One gotcha – RestSharp Serialization and Deserialization

One aspect of RestSharp that I don’t like is how the JSON serialisation and deserialisation works. RestSharp uses its own engine for processing JSON, but basically I prefer Json.NET for this. For example, if I use the default JSON processing engine in RestSharp, then my PostCodeCollection POCO needs to have property names which exactly match the JSON property names (including case sensitivity).

I’m used to working with Json.NET and decorating properties with attributes describing how to serialise into JSON, but this won’t work with RestSharp by default.

// THIS DOESN'T WORK WITH RESTSHARP UNLESS YOU ALSO USE **AND REGISTER** JSON.NET
public class PostCodeCollection
{
    [JsonProperty(PropertyName = "postcodes")]
    public List<string> Postcodes { get; set; }
}

Instead I need to override the default RestSharp serializer and instruct it to use Json.NET. The RestSharp maintainers have written about their reasons here and also here – and helped out by writing the code to show how to override the default RestSharp serializer. But personally I’d rather just use Json.NET the way I normally do, and not have to jump through an extra hoop to use it.

Reading Data using Flurl

Flurl is newer than RestSharp, but it’s still a reasonably mature and well documented open source project (released under the MIT licence). Again, the code is on Github.

Flurl is different from RestSharp in that it allows you to consume the web service by building a fluent chain of instructions.

You can install the nuget package in your project using package manager with the command:

Install-Package Flurl.Http

Using HTTP GET to return data from a web service

Let’s look at how to use the GET verb to read data from api.postcodes.io, api.nasa.gov and api.github.com.

First, using Flurl with api.postcodes.io

The code below searches for geocode data from the specified postcode, and returns the raw JSON response. There’s no need to instantiate a client, and I’ve written much less code than I wrote with RestSharp.

var singleGeocodeResponse = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .AppendPathSegment("IP1 3JR")
    .GetJsonAsync();

I also find using the POST method with postcodes.io easier with Flurl. Even though Flurl doesn’t have a built-in JSON serialiser, it’s easy for me to install the Json.NET package – this means I can now use a POCO like the one below…

public class PostCodeCollection
{
    [JsonProperty(PropertyName = "postcodes")]
    public List<string> Postcodes { get; set; }
}

… to fluently build up a post request like the one below. I can also create my own custom POCO – GeocodeResponseCollection – which Flurl will automatically populate with the JSON fields.

var postcodes = new PostCodeCollection { Postcodes = new List<string> { "OX49 5NU", "M32 0JG" } };
 
var geocodeResponses = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .PostJsonAsync(postcodes)
    .ReceiveJson<GeocodeResponseCollection>();
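
The GeocodeResponseCollection class isn’t shown in this post – the sketch below is my own assumption of what it could look like, based on the shape of the bulk geocode JSON that postcodes.io returns (a status field and a list of query/result pairs), and it re-uses the Result class from the RestSharp example earlier:

public class GeocodeResponseCollection
{
    [JsonProperty(PropertyName = "status")]
    public int Status { get; set; }
 
    [JsonProperty(PropertyName = "result")]
    public List<GeocodeQueryResult> Results { get; set; }
}
 
public class GeocodeQueryResult
{
    [JsonProperty(PropertyName = "query")]
    public string Query { get; set; }
 
    [JsonProperty(PropertyName = "result")]
    public Result Result { get; set; }
}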

Next, using Flurl with api.nasa.gov

As mentioned previously, NASA’s astronomy picture of the day requires a demo key passed in the query string – I can do this with Flurl using the code below:

var astronomyPictureOfTheDayJsonResponse = await "https://api.nasa.gov/"
    .AppendPathSegments("planetary""apod")
    .SetQueryParam("api_key", "DEMO_KEY")
    .GetJsonAsync();

Again, it’s a very concise way of retrieving data from a web service.

Finally using Flurl with api.github.com

Lastly for this post, the code below shows how to use Flurl with Basic Authentication and the Github API.

var githubResponse = await "https://api.github.com/"
    .AppendPathSegments("users", "jeremylindsayni")
    .WithBasicAuth("jeremylindsayni", "[[my password]]")
    .WithHeader("user-agent", "csharp-console-app")
    .GetJsonAsync();

One interesting difference in this example between RestSharp and Flurl is that I had to send user-agent information to the Github API with Flurl – I didn’t need to do this with RestSharp.

Wrapping up

Both RestSharp and Flurl are great options for consuming Restful web services – they’re both stable, source for both is on Github, and there’s great documentation.  They let me write less code and do the thing I want to do quickly, rather than spending ages writing my own code and tests.

Right now, I prefer working with Flurl, though the choice comes down to personal preference. Things I like are:

  • Flurl’s MIT licence
  • I can achieve the same results with less code, and
  • I can integrate Json.NET with Flurl out of the box, with no extra classes needed.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net core, Azure, C# tip, Clean Code, Cloud architecture, Security

Simplifying Azure Key Vault and .NET Core Web App (includes NuGet package)

In my previous post I wrote about securing my application secrets using Azure Key Vault, and in this post I’m going to write about how to simplify the code that a .NET Core web app needs to use the Key Vault.

I previously went into a bit of detail about how to create a Key Vault and add a secret to that vault, and then add a Managed Service Identity to a web app. At the end of the post, I showed some C# code about how to access a secret inside a controller action.

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        var secret = await keyVaultClient.GetSecretAsync("https://mywebsitesecret.vault.azure.net/secrets/TheSecret").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}

While this works for the purposes of an example of how to use it in a .NET Core MVC application, it’s not amazingly pretty code – for any serious application, I wouldn’t have all this code in my controller.

I think it would be more logical to access my secrets at the time my web application starts up, and put them into the Configuration for the app. Therefore if I need them later, I can just inject an IConfiguration object into my class, and use that to get the secret values.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;

namespace MyWebApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }
 
        private static IWebHost BuildWebHost(string[] args)
        {
            return WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration(builder =>
                {
                    var azureServiceTokenProvider = new AzureServiceTokenProvider();
                    var keyVaultClient =
                        new KeyVaultClient(
                            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
                    builder.AddAzureKeyVault("https://mywebsitesecret.vault.azure.net/",
                            keyVaultClient, new DefaultKeyVaultSecretManager());
                })
                .UseStartup<Startup>()
                .Build();
        }
    }
}

But as you can see above, the code I need to add here to build my web host is not very clear either – it seemed a shame to lose all the good work done in .NET Core 2 to simplify this class.

So I’ve created a NuGet package to allow me to have a simpler and cleaner interface – it’s uploaded to the NuGet repository here, and I’ve open sourced the code at GitHub here.

As usual you can install pretty easily from the command-line:

Install-Package Kodiak.Azure.WebHostExtension -prerelease

Now my BuildWebHost method looks much cleaner – I can just add the fluent extension AddAzureKeyVaultSecretsToConfiguration and pass in the URL of the vault.

private static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .AddAzureKeyVaultSecretsToConfiguration("https://myvaultname.vault.azure.net")
        .UseStartup<Startup>()
        .Build();
}

I think this is a more elegant implementation, and now if I need to access the secret inside my controller’s action, I can use the cleaner code below.

using System.Diagnostics;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
 
namespace MyWebApplication.Controllers
{
    public class HomeController : Controller
    {
        private readonly IConfiguration _configuration;
 
        public HomeController(IConfiguration configuration)
        {
            _configuration = configuration;
        }
 
        public IActionResult Index()
        {
            ViewBag.Secret = _configuration["MySecret"];
 
            return View();
        }
    }
}

Summing up

Azure Key Vault (AKV) is a useful tool to help keep production application secrets secure, although like any tool it’s possible to mis-use it, so don’t assume that just because you’re using AKV that your secrets are secured – always remember to examine threat vectors and impacts.

There’s a lot of documentation on the internet about AKV, and a lot of it recommends using the client Id and secrets in your code – I personally don’t like this because it’s always risky to have any kind of secret in your code. The last couple of posts that I’ve written have been about how to use an Azure Managed Service Identity with your application to avoid secrets in your code, and how to simplify the C# code you might use in your application to access AKV secrets.

.net core, Cake, IOT, nuget, Raspberry Pi 3

Deploying a .NET Core 2 app to a Raspberry Pi with Cake and PuTTY saved sessions

I’ve written previously about creating a .NET Core 2 template for a simple IoT application which can be deployed to the Raspberry Pi. Recently I was asked if this could be extended to use PuTTY saved sessions, which I thought was an interesting challenge.

I’ve used Cake to help me deploy my .NET Core 2 code – you can see the build.cake file up on GitHub for my template – and this script uses an add-in called Cake.PuTTY to allow me to deploy cross platform .NET Code from my Windows development machine to the Linux based Raspbian Jessie system running on my Raspberry Pi.

I’ve always just supplied my Raspberry Pi’s IP address and username through configurable parameters in the build.cake script, as shown below.

///////////////////////////////////////////////////////////////////////
// ARGUMENTS (WITH DEFAULT PARAMETERS FOR LINUX (Ubuntu 16.04, Raspbian Jessie, etc)
///////////////////////////////////////////////////////////////////////
var runtime = Argument("runtime", "linux-arm");
var destinationIp = Argument("destinationPi", "192.168.1.73");
var destinationDirectory = Argument("destinationDirectory", @"/home/pi/DotNetConsoleApps/RaspbianTest");
var username = Argument("username", "pi");
var executableName = Argument("executableName", "HelloRaspbian");

And these are used in the Cake build script in the “Deploy” target – I’ve pasted a snippet of code below which does a few things I’ve listed below the code:

var destination = destinationIp + ":" + destinationDirectory;
var fileArray = files.Select(m => m.ToString()).ToArray();
Pscp(fileArray, destination, new PscpSettings
                                    { 
                                        SshVersion = SshVersion.V2, 
                                        User = username 
                                    }
);
 
var plinkCommand = "chmod u+x,o+x " + destinationDirectory + "/" + executableName;
Plink(username + "@" + destination, plinkCommand);
 
  • First, it uses the PSCP function (which securely copies files) and supplies the destination in the format of an IP address, then a colon, then the directory that I want to copy files to on the remote machine – as shown below:
"192.168.1.73:/home/pi/DotNetConsoleApps/RaspbianTest"
  • The snippet also supplies the username “pi” to the PSCP function – I supplied this username in the configurable parameter section described earlier.
  • Finally it uses the Plink function to run a custom linux command on remote files – in this case, I change the permissions of the file I need to run to make it executable. Again, here I need to specify a username, and IP address and a location in the format below:
"pi@192.168.1.73:/home/pi/DotNetConsoleApps/RaspbianTest"

So my existing mechanism to deploy a file is really tightly coupled to knowing the username and the destination IP address in the code.

Can we do better with a saved PuTTY session?

I plugged my IP address and username into PuTTY, and saved them as a session named “Raspbian”. I’ve included a couple of screenshots below showing where I entered key bits of data. The first one shows the IP address, and where I name the saved session as “Raspbian”.

Putty saved session with IP address

The second one shows where I enter the username that I’d like to log on with:

Putty saved session with username

Once this “Raspbian” session was saved, I needed to find out how to use it with PSCP and Plink tools.

Using PSCP with Cake.PuTTY

This turned out to be pretty easy – instead of passing the destination IP address, I just passed in the session name. So now instead of:

"192.168.1.73:/home/pi/DotNetConsoleApps/RaspbianTest"

I’ve got:

"Raspbian:/home/pi/DotNetConsoleApps/RaspbianTest"

And I don’t need to pass the username in the settings anymore, so my code in the Cake build script for PSCP looks like this:

var destination = sessionname + ":" + destinationDirectory;
var fileArray = files.Select(m => @"""" + m.ToString() + @"""").ToArray();
Pscp(fileArray, destination, new PscpSettings
                                    { 
                                        SshVersion = SshVersion.V2 
                                    }
);

Using Plink

This turned out to be a little harder – but not too much. I couldn’t get the Cake.PuTTY plugin to work for this, but fortunately I’m able to use the StartProcess C# method with Cake to just run a Plink command.

The command I’d like to run looks like:

plink -load Raspbian [[insert a custom linux command here]]

And the C# code for this, where sessionname = “Raspbian”, is pasted below:

var plinkCommand = "chmod u+x,o+x " + destinationDirectory + "/" + executableName;
 
StartProcess("plink"new ProcessSettings {
        Arguments = new ProcessArgumentBuilder()
            .Append(@"-load")
	    .Append(sessionname)
            .Append(plinkCommand)
        }
);

So now that I can deploy with a saved session, I create a new configurable parameter in my build.cake script called “sessionname”, and leave the username and IP address fields blank.

///////////////////////////////////////////////////////////////////////
// ARGUMENTS (WITH DEFAULT PARAMETERS FOR LINUX (Ubuntu 16.04, Raspbian Jessie, etc)
///////////////////////////////////////////////////////////////////////
var runtime = Argument("runtime", "linux-arm");
var destinationIp = Argument("destinationPi", "<<safe to leave blank>>");
var destinationDirectory = Argument("destinationDirectory", @"/home/pi/DotNetConsoleApps/RaspbianTest");
var username = Argument("username", "<<safe to leave blank>>");
var sessionname = Argument("sessionname", "Raspbian");
var executableName = Argument("executableName", "HelloRaspbian");

And use the target “DeployWithPuTTYSession”, which is detailed below:

Task("DeployWithPuTTYSession")
    .IsDependentOn("Publish")
    .Does(() =>
    {
        var files = GetFiles("./publish/*");
        
		var destination = sessionname + ":" + destinationDirectory;
		var fileArray = files.Select(m => @"""" + m.ToString() + @"""").ToArray();
		Pscp(fileArray, destination, new PscpSettings
					{ 
						SshVersion = SshVersion.V2 
					}
		);
 
		var plinkCommand = "chmod u+x,o+x " + destinationDirectory + "/" + executableName;
 
		StartProcess("plink"new ProcessSettings {
        		Arguments = new ProcessArgumentBuilder()
            		.Append(@"-load")
			.Append(sessionname)
            		.Append(plinkCommand)
        }
);

Updating the open source code, and how to use it

I’ve updated the source code on Github and also updated the NuGet package – you can check out my previous post on how to install this .NET Core template, and there are instructions on the project ReadMe, but the short version is that you can run the command below at a command prompt:

 dotnet new -i RaspberryPi.Template::*

And then you can create a new RaspberryPi IoT project (I called mine HelloRaspbian, yours can obviously be different) with the command:

dotnet new coreiot -n HelloRaspbian

From the generated project, run the command in the ReadMe.txt file to generate a build.ps1 file so we can run Cake:

Invoke-WebRequest http://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

Now open the build.cake file and update the parameters you want to use – the focus of this post is on a PuTTY session, and now we can update the sessionname parameter. Then in the project folder, run the command:

.\build.ps1 -target DeployWithPuTTYSession

This command will use your saved PuTTY session when deploying from a Windows machine to a Raspberry Pi running Linux, and allow you to deploy without specifying a username or IP address in your build.cake file.

Wrapping up

I’ve extended my RaspberryPi.Template for .NET Core to allow users to deploy to their Pi using a saved PuTTY session. Hopefully this small extension makes the template easier for the community to use.

.net core, arduino, Bifröst, IOT, Making, Raspberry Pi 3

Using .NET Core 2 on Raspbian Jessie to read serial data from an Arduino

I’ve previously written about how to use the System.IO.Ports library in .NET Core 2 to read serial data from an Arduino using a Windows 10 PC – but this library doesn’t work for Linux machines. This time I’m going to look at how to read data from an Arduino over USB with .NET Core running on a Raspberry Pi 3 with Raspbian Jessie.

There’s a few steps to getting this running.

  • First you’ll need:
    • A development machine (I’m using Windows 10),
    • A Raspberry Pi 3,
    • An Arduino,
    • A USB cable to link your Arduino and Raspberry Pi.
  • Next I’ll write a simple sketch and deploy it to my Arduino.
  • I’ll connect the Arduino to my Raspberry Pi 3, and check that the Pi can see my Arduino and the serial input using minicom.
  • Finally I’ll write and deploy a .NET Core application written in C# to my Raspberry Pi, and I’ll show how this application can read serial input from the Arduino.

As usual, I’ve uploaded all my code to GitHub and you can see it here.

For a few of the steps in this guide, I’ll refer to other sources – there’s not a lot of value in me writing a step by step guide for things which are commonly understood.

For example, I have a fresh install of Raspbian Jessie on my Raspberry Pi 3, and then I set up SSH on the Pi. I also use Putty, PSCP, Plink and Cake.net to deploy from my Windows machine to the Raspberry Pi (I’ve blogged in detail about this here).

Writing a sketch that writes serial data from the Arduino

In a previous post, I use VSCode to deploy a simple Arduino application that writes serial data. I could have used the Arduino IDE just as easily – and it’s widely known how to verify and upload sketches to the Arduino, so I’m not going to go into this part in great detail.

The code for the Arduino sketch is below:

int i = 0;
 
void setup(){
  Serial.begin(9600);
}
 
void loop(){
  Serial.print("Hello Pi ");
  Serial.println(i);
  delay(1000);
  i++;
}

One thing worth noting is that the Baud rate is 9600 – we’ll use this information later.

I can test this is writing serial data by plugging the Arduino into my Windows 10 development machine, and using VSCode or the Arduino IDE. I’ve shown a screenshot below of the serial monitor in my Arduino IDE, which just prints the text “Hello Pi” followed by a number. This confirms that my Arduino is writing data to the serial port.

screenshot.1502137097

Let’s test the Raspberry Pi can receive the serial data

Before writing a C# application in .NET Core to read serial data on my Raspberry Pi, I wanted to test that my Pi can receive data at all (obviously after connecting my Arduino to my Raspberry Pi using a USB cable).

This isn’t mandatory for this guide – I just wanted to definitely know that the Pi and Arduino communication was working before starting to write my C# application.

First, I need to find the name of the serial port used by the Arduino. I can find the names of serial ports by using PuTTY to SSH into my Pi 3, and then running the command:

ls -l /dev | grep dialout

Before connecting my Arduino UNO to my Raspberry Pi, this reports back two serial ports – ttyAMA0 and ttyS0.

screenshot.1502138314

After connecting my Arduino to my Raspberry Pi, this now reports back three serial ports – ttyACM0, ttyAMA0, and ttyS0.

screenshot.1502138155

Therefore I know the port used by my Arduino over USB is /dev/ttyACM0.

As an aside – not all Arduino boards will use the port /dev/ttyACM0. For example, I repeated this with my Arduino Nano and Arduino Duemilanove, and found they both use /dev/ttyUSB0, as shown below:

screenshot.1502211857

But for my Arduino Yun and my Arduino Primo, the port is /dev/ttyACM0.

screenshot.1502212282

So the point here is that you need to check what your port name is when you connect it to a Linux machine, like your Pi – the port name can be different, depending on what kind of hardware you connect.

Finally, if you’re interested in why “tty” is used in the Linux world for ports, check out this post.

Tools to read serial data

Minicom is a serial communication program which will confirm my Pi is receiving serial data from the Arduino. It’s very easy to install – I just used PuTTY to SSH into my Pi 3, and ran the command:

sudo apt-get install minicom

Then I was able to run the command below, using the port name (/dev/ttyACM0) and the Baud rate (9600).

minicom -b 9600 -o -D /dev/ttyACM0

If the Raspberry Pi is receiving serial data from the Arduino, it’ll be written to the SSH terminal.

Some posts I’ve read say it’s necessary to disable serial port logins to allow the Arduino to send messages to the Raspberry Pi, and modify files in the “/boot” directory – but on Jessie, I didn’t actually find this to be necessary – it all worked out of the box with my fresh install of Raspbian Jessie. YMMV.

Another alternative to prove serial communication is working is to install the Arduino IDE onto the Raspberry Pi, and just open the serial monitor on device. Again, installing the IDE on your Pi is very easy – just run the command below at a terminal:

sudo apt-get install arduino

This will even install an Arduino shortcut into the main Raspbian menu.

screenshot.1502139590.png

Once you’ve started the IDE and connected the Arduino to a USB port, select the serial port /dev/ttyACM0 (shown available on the Tools menu in the screenshot below):

screenshot.1502139935

Then open the serial monitor to check that the “Hello Pi” messages are coming through correctly (as shown below):

screenshot.1502140145

Writing the C# application

Now that I’m sure that the physical connection between the Arduino and Pi works, I can start writing the C# application.

TL:DR; I’ve uploaded my working code to GitHub here.

When writing my code, I wanted to stay close to the existing API provided by Microsoft in their library System.IO.Ports, which allows Windows machines to read from the serial port (I’ve blogged about this here). I was able to look at their source code on GitHub, and from this I designed the serial port interface below:

using Bifrost.IO.Ports.Core;
using System;
 
namespace Bifrost.IO.Ports.Abstractions
{
    public interface ISerialPort : IDisposable
    {
        int BaudRate { get; set; }
 
        string PortName { get; set; }
 
        bool IsOpen { get; set; }
 
        string ReadExisting();
 
        void Open();
        
        event SerialDataReceivedEventHandler DataReceived;
 
        void Close();
    }
}

I like interfaces because consumers can use this interface, and don’t care if I change my implementation behind the interface. It also makes my libraries more testable with mocking libraries. You can see this interface on GitHub here.
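
For example, a unit test could stub out the serial port entirely – the sketch below assumes the Moq library, and SerialDataConsumer and its ProcessLatestMessage method are hypothetical stand-ins for a class under test that takes an ISerialPort in its constructor:

// a minimal sketch using Moq - SerialDataConsumer and ProcessLatestMessage are hypothetical
var mockSerialPort = new Mock<ISerialPort>();
mockSerialPort.Setup(p => p.ReadExisting()).Returns("Hello Pi 1");
 
var consumer = new SerialDataConsumer(mockSerialPort.Object);
consumer.ProcessLatestMessage();
 
// verify the class under test read from the port exactly once
mockSerialPort.Verify(p => p.ReadExisting(), Times.Once());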

The next task was to design a .NET Core implementation for this interface which would work for Linux. I’ve previously done something similar to this for I2C communication, using P/Invoke calls (I’ve written about this here). After reading the documentation and finding some inspiration from another open source sample here, I knew I needed the following six P/Invoke calls:

[DllImport("libc", EntryPoint = "open")]
public static extern int Open(string portName, int mode);
 
[DllImport("libc", EntryPoint = "close")]
public static extern int Close(int handle);
 
[DllImport("libc", EntryPoint = "read")]
public static extern int Read(int handle, byte[] data, int length);
 
[DllImport("libc", EntryPoint = "tcgetattr")]
public static extern int GetAttribute(int handle, [Out] byte[] attributes);
 
[DllImport("libc", EntryPoint = "tcsetattr")]
public static extern int SetAttribute(int handle, int optionalActions, byte[] attributes);
 
[DllImport("libc", EntryPoint = "cfsetspeed")]
public static extern int SetSpeed(byte[] attributes, int baudrate);

These calls allow me to:

  • Open a port in read/write mode and get an integer handle to this port;
  • I can also get a list of attributes, specify the baudrate attribute, and then set these attributes.
  • Given the handle to the port, I can read from the port into an array of bytes.
  • Finally, I can also close the connection.

I’ll look at the most important elements below.

Opening the serial port

If we have instantiated a port with a name (/dev/ttyACM0) and a Baud rate (9600), we can use these P/Invoke calls in C# to open the port.

public void Open()
{
    int handle = Open(this.PortName, OPEN_READ_WRITE);
 
    if (handle == -1)
    {
        throw new Exception($"Could not open port ({this.PortName})");
    }
 
    SetBaudRate(handle);
 
    // give the port a couple of seconds to settle down before reading
    Task.Delay(2000).Wait();
 
    Task.Run(() => StartReading(handle));
}

You’ll notice that if the request to open the port is successful, it’ll return a non-negative integer, which will be the handle to the port that we’ll use throughout the rest of the class.

Setting the Baud rate is straightforward – we get the array of port attributes using the port’s handle, specify the Baud rate, and then send this array of attributes back to the device.

private void SetBaudRate(int handle)
{
    byte[] terminalData = new byte[256];
 
    GetAttribute(handle, terminalData);
    SetSpeed(terminalData, this.BaudRate);
    SetAttribute(handle, 0, terminalData);
}

I give the port a couple of seconds to settle down – I often find that the first few messages come through out of order, or with missing bytes – and then run the “StartReading” method in a separate thread using Task.Run.

Reading from the serial port

Reading from the port is quite straightforward too – given the handle, we just use the P/Invoke call “Read” to copy the serial data into a byte array which is stored as a member variable. Before invoking an event corresponding to a successful read, I check that there actually is valid data returned (i.e. the return value is non-negative), and that any data returned isn’t just a single newline character. If it passes this test, I pass control to the event handler for the DataReceived event.

private void StartReading(int handle)
{
    while (true)
    {
        Array.Clear(serialDataBuffer, 0, serialDataBuffer.Length);
 
        int lengthOfDataInBuffer = Read(handle, serialDataBuffer, SERIAL_BUFFER_SIZE);
 
        // make sure there is data in the buffer, and that it's not just a single newline character
        if (lengthOfDataInBuffer != -1 && !(lengthOfDataInBuffer == 1 && serialDataBuffer[0] == ASCII_NEWLINE_CODE))
        {
            DataReceived.Invoke(this, new SerialDataReceivedEventArgs());
        }
    }
}
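The ReadExisting() implementation isn’t shown above – assuming the Arduino sends plain ASCII text, and that the read loop stores the number of bytes it last received in a field, a minimal sketch could be as simple as this:

// Minimal sketch - assumes a lengthOfDataInBuffer field updated by the read loop,
// and a "using System.Text;" directive at the top of the file.
public string ReadExisting()
{
    return Encoding.ASCII.GetString(serialDataBuffer, 0, lengthOfDataInBuffer);
}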

Putting it all together

I’ve put my interfaces and implementations into separate .NET Standard libraries so that I can re-use them in my other .NET Core applications. And when I write a sample program for my Raspberry Pi to read from my Arduino, the implementation is very similar to the one that works for Windows x86/x64 devices reading from an Arduino (covered in this post).

using Bifrost.IO.Ports;
using Bifrost.IO.Ports.Core;
using System;
 
namespace SerialSample
{
    class Program
    {
        static void Main(string[] args)
        {
            var serialPort = new SerialPort()
            {
                PortName = "/dev/ttyACM0",
                BaudRate = 9600
            };
 
            // Subscribe to the DataReceived event.
            serialPort.DataReceived += SerialPort_DataReceived;
 
            // Now open the port.
            serialPort.Open();
 
            Console.ReadKey();
 
            serialPort.Close();
        }
 
        private static void SerialPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            var serialPort = (SerialPort)sender;
 
            // Read the data that's in the serial buffer.
            var serialdata = serialPort.ReadExisting();
 
            // Write to debug output.
            Console.Write(serialdata);
        }
    }
}

Once I’ve compiled my project, and deployed it to my Raspberry Pi (using my build.cake script), I can SSH into my Pi and run the .NET Core application – and this displays the “Hello Pi” serial output being sent by the Arduino, as expected.

screenshot.1502145572

Wrapping up

It’s possible to read serial data from an Arduino connected by USB to a Raspberry Pi 3 using C#. Obviously the code here is just a proof of concept – it doesn’t use handshaking, parity bits or stop bits, and it only reads from the Arduino, but writing back to the serial port could be achieved using the P/Invoke call to the “write” function. Hopefully this post is useful to anyone trying to use serial communications between a Raspberry Pi 3 and an Arduino.
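As a rough illustration of that last point, the sketch below shows what a write path might look like. This is my own guess rather than part of the library – the libc “write” function does exist with this shape, but the handle field and the WriteLine method are hypothetical:

// P/Invoke declaration for the libc "write" function.
[DllImport("libc", EntryPoint = "write")]
public static extern int Write(int handle, byte[] data, int length);
 
// Hypothetical write method - assumes the class stores the handle returned by Open().
public void WriteLine(string text)
{
    byte[] bytes = Encoding.ASCII.GetBytes(text + "\n");
 
    int bytesWritten = Write(this.handle, bytes, bytes.Length);
 
    if (bytesWritten == -1)
    {
        throw new Exception($"Could not write to port ({this.PortName})");
    }
}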


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

 

.net core, arduino, IOT

Using .NET Core 2 to read serial data from an Arduino UNO over USB

If you’ve worked with an Arduino and used a Windows development machine, you’ll probably have used the Arduino IDE to deploy code, and used the IDE’s built in serial monitor to read messages back from the Arduino device. And if you want to use these messages in a .NET application, there’s already good support in the .NET Framework – but what about .NET Core?

In this post, I’ll look at how to use VSCode to create a simple Arduino project which writes to a serial port, and how to deploy this to an Arduino Uno. Then I’ll write about how to use the preview System.IO.Ports NuGet package to read this data using .NET Core. I’ll finish up with some issues I’ve observed.

Note – this article about .NET Core targets Windows 10 32-bit/64-bit editions – this won’t work if you’re targeting ARM devices with Windows 10 IoT Core (i.e. win10-arm). I’ll cover targeting the ARM platform in a later post.

Also, you’ll obviously need an Arduino for this – I’ve found an Arduino UNO is ideal.

This post has two main sections – setting up an example project on an Arduino using VSCode which writes data to a serial port, and then accessing this data using a .NET Core application.

Writing data to a serial port with an Arduino

Setting up VSCode for Arduino development

First of all, you’ll need VSCode on your development machine – you can download it from here. I’ll not describe this in detail – it’s pretty much a point and click installer.

You’ll also need the Arduino IDE installed on your machine so VSCode can access the necessary libraries – you can download it from here. Again this is a very straightforward installation.

Next, install the Arduino extension for VSCode. There are great instructions here – but in summary, from VSCode:

  • Hit ‘Ctrl + P’
  • Type:
ext install vscode-arduino

At this point, you’ll have the Arduino extension for VSCode, presently at version 0.2.4.

Create an Arduino project in VSCode which writes to the serial port

From VSCode, you can create a new Arduino project with the following steps:

  • Create a folder on your development machine to hold the Arduino project – I’ve called mine ArduinoSerialExample
  • Open this folder in VSCode.
  • In VSCode, hit ‘Ctrl + Shift + P’ to open the VSCode Command Palette.
  • Type: Arduino: Initialize – VSCode will offer to create a file called app.ino.

screenshot.1501514685

  • Rename this to ArduinoSerialExample.ino. It’s important that this file (also known as an Arduino sketch) has the same name as the parent directory.
  • At this point, VSCode will ask what Arduino device is being used – I’m using an Arduino UNO, so I selected this from the list.

screenshot.1501514840

Your VSCode workspace is now initialised for Arduino development.

  • Update the code in the ArduinoSerialExample.ino file to have the contents shown below.
int i = 0;
 
void setup(){
  Serial.begin(9600);
}
 
void loop(){
  Serial.print("Hello Pi ");
  Serial.println(i);
  delay(1000);
  i++;
}

Important tip – if you copy and paste into VSCode, make sure you don’t accidentally copy unexpected unicode characters, as these will cause compiler errors.

  • Now hit ‘Ctrl + Shift + R’ to compile (also known as Verify) the sketch – if everything works, you should see output similar to the text below.
[Starting] Verify sketch - ArduinoSerialExample.ino
Loading configuration...
Initialising packages...
Preparing boards...
Verifying...

Sketch uses 1,868 bytes (5%) of program storage space. Maximum is 32,256 bytes.
Global variables use 198 bytes (9%) of dynamic memory, leaving 1,850 bytes for local variables. Maximum is 2,048 bytes.
[Done] Finished verify sketch - ArduinoSerialExample.ino

Test this project works with a real Arduino

There are a couple of final steps – connecting the physical Arduino to your development machine, and choosing the serial port.

If you look at the bottom right corner of VSCode, you should see that there’s still a prompt to select the serial port (as shown below).

screenshot.1501515586

Once I plugged my Arduino UNO into a USB port on my machine, I was able to click on the prompt (highlighted in a red box in the image above), and VSCode prompted me to select a serial port, as shown below.

screenshot.1501518211

I selected COM4, and this updates VSCode to show the serial port in the bottom right corner of the screen, as shown below.

screenshot.1501278934

I’m now ready to test that the Arduino is writing to the serial port.

I can upload the sketch to the Arduino by hitting ‘Ctrl + Shift + U’ – this will re-compile the sketch and upload it to the Arduino.

Next, open the command palette again by hitting ‘Ctrl + Shift + P’, type ‘Arduino: Open Serial Monitor’, and select the option to open the Serial Monitor from the dropdown list.

screenshot.1501519010

The serial monitor opens, and I’m able to see output being logged to the console from the Arduino through the serial port COM4, as shown below.

screenshot.1501518933

Accessing the serial port data on a PC using .NET Core

TL;DR: I’ve uploaded the project to GitHub here.

First set up the .NET Core 2 solution – a console project and a .NET Standard 2.0 class library

Create a new folder to hold your .NET solution. I like to manage solutions using the command line – I create a solution using the command:

dotnet new sln -n ReadSerialDataFromArduino

Inside this solution folder, create a new .NET Core console project – I do this using the command:

dotnet new console -n ReadFromArduino

Also create a new .NET Standard 2.0 library project inside the solution folder – again, I do this using the command:

dotnet new classlib -n ReadSerialInputFromUSB

Now we can add these two projects to the solution using the commands below:

dotnet sln add .\ReadFromArduino\ReadFromArduino.csproj
dotnet sln add .\ReadSerialInputFromUSB\ReadSerialInputFromUSB.csproj

And we can see the projects in the solution using the command below:

dotnet sln list

And this command presents the expected output of:

Project reference(s)
--------------------
ReadFromArduino\ReadFromArduino.csproj
ReadSerialInputFromUSB\ReadSerialInputFromUSB.csproj

Finally for this section, I want to add the library as a reference to my console application with the command:

dotnet add .\ReadFromArduino\ReadFromArduino.csproj reference .\ReadSerialInputFromUSB\ReadSerialInputFromUSB.csproj

Add the .NET Core System.IO.Ports preview package

The System.IO.Ports package (available here on nuget.org) allows access to the serial port through a .NET Core application. I can add this to my .NET Standard 2.0 class library by navigating into the ReadSerialInputFromUSB directory and running the command below:

dotnet add package System.IO.Ports --version 4.4.0-preview2-25405-01

So now the project structure is in place – we can add the bits of code that actually do things.

Let’s use C# to list what serial ports are available to us. I’ve created a class in the ReadSerialInputFromUSB project named SerialInformation, and added a static method called GetPorts().

using System;
using System.IO.Ports;
 
namespace ReadSerialInputFromUSB
{
    public class SerialInformation
    {
        public static void GetPorts()
        {
            Console.WriteLine("Serial ports available:");
            Console.WriteLine("-----------------------");
            foreach(var portName in SerialPort.GetPortNames())
            {
                Console.WriteLine(portName);
            }
        }
    }
}

And we can access this through the main method in the ReadFromArduino project:

using ReadSerialInputFromUSB;
 
namespace ReadFromArduino
{
    class Program
    {
        static void Main(string[] args)
        {
            SerialInformation.GetPorts();
        }
    }
}

If we build this and run the project (using dotnet build and dotnet run) the output is:

Serial ports available:
-----------------------
COM4

This is exactly what we’d expect from earlier, where VSCode identified COM4 as the port being used by the Arduino.

And if we want to get the data from the Arduino into a variable and write it to the debug output, we can do that by subscribing to the DataReceived event and using the ReadExisting() method on the serial port object, as shown below:

public void ReadFromPort()
{
    // Initialise the serial port on COM4.
    // obviously we would normally parameterise this, but
    // this is for demonstration purposes only.
    this.SerialPort = new SerialPort("COM4")
    {
        BaudRate = 9600,
        Parity = Parity.None,
        StopBits = StopBits.One,
        DataBits = 8,
        Handshake = Handshake.None
    };
 
    // Subscribe to the DataReceived event.
    this.SerialPort.DataReceived += SerialPortDataReceived;
 
    // Now open the port.
    this.SerialPort.Open();
}
 
private void SerialPortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var serialPort = (SerialPort)sender;
 
    // Read the data that's in the serial buffer.
    var serialdata = serialPort.ReadExisting();
 
    // Write to debug output.
    Debug.Write(serialdata);
}
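For completeness, the snippet above assumes that the SerialInformation class exposes the port it opens through a property – something like the outline below (my sketch of the surrounding class, noting that Debug.Write needs System.Diagnostics):

using System;
using System.Diagnostics;
using System.IO.Ports;
 
namespace ReadSerialInputFromUSB
{
    public class SerialInformation
    {
        // The port opened by ReadFromPort(), exposed so the caller can close it.
        public SerialPort SerialPort { get; private set; }
 
        // ... GetPorts(), ReadFromPort() and SerialPortDataReceived() as shown above ...
    }
}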

I can call this in my console project’s main method using the code below:

static void Main(string[] args)
{
    SerialInformation.GetPorts();
 
    var serialInformation = new SerialInformation();
 
    serialInformation.ReadFromPort();
 
    Console.ReadKey();
 
    serialInformation.SerialPort.Close();
}

So when I run this console application, the COM4 serial port is opened, and whatever it receives is written to the debug output.

You can see the source code for the System.IO.Ports library on GitHub in the CoreFX repository, and there’s access to the nightly builds on myget.org.

This library is great for connecting to (and reading from) serial ports using a .NET Core application running on a Windows x86/x64 machine. However, one issue is that this library doesn’t yet work with ARM – either for Windows 10 IoT Core or for Linux.

Wrapping up

Using the System.IO.Ports preview library available on NuGet, it’s possible to read from serial ports using a .NET Core 2 application on a Windows 32-bit/64-bit machine, and I’ve a very simple example of how to do this available on GitHub here. So far there’s not an implementation in the System.IO.Ports library which works for ARM architectures, but I’ll look at options for closing this gap in future posts.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!