.net, .net core, Flurl, Polly

Using Polly and Flurl to improve your website

This post is about how to use The Polly Project to make a .NET website better. I use Flurl to consume RESTful web services, so there’s some Flurl-specific code later on, but I hope this post is useful to anyone who’s interested in learning what Polly is, what it’s for and how it can help you.

So here’s a problem

Let’s pretend you run your business through a website, and part of your code calls out to a web service that another company supplies.

And, every once in a while, errors from this web service appear in your logs. Sometimes the HTTP status code is a 404 (not found), sometimes the code is a 503 (service unavailable), and other times you see a 504 (gateway timeout). There’s no pattern, it goes away as quickly as it starts, and you’d really really like to get this fixed before customers start cancelling their subscriptions to your service.

You call up the business running the remote web service, and their answer is a bit… vague. Every so often they restart their web servers which takes their service down for a couple of seconds, and at certain times of the day they get spikes of traffic which causes their system to max out for up to 5 seconds at a time. They’re apologetic, and they expect to migrate to new, better infrastructure in about 6 months. But their only workaround is for you to re-query the service.

So you could be forgiven for going spare right now – this response doesn’t fix anything. This company is the only place you can get the data you need so you’re locked in. And you know your customers are seeing errors because it’s right there staring at you from your website logs. Asking your customers to ‘just hit refresh’ when they get an error is a great way to lose business and win a bad reputation.

You can use Polly to help solve this problem

When I first read about Polly a long while back, I was really interested but I wasn’t sure how I could apply it to the project I was working on.  What I wanted was to find a post that described a real world scenario that I could recognise and identify with, and how Polly would help with that.

Since then, I’ve worked on projects a little bit like the one I described above – one time when I raised a ticket to say that we were having intermittent problems with a web service, I was told that the workaround was to ‘hit refresh’. And since there’s a workaround, it’s only going to be raised as a medium priority issue (which feels like a coded message for ‘we’re not even going to look at this’). This kind of thing drives me crazy, and it’s exactly the kind of problem that Polly can at least mitigate.

I’ve also met people who are doing really interesting work with hardware devices in .NET, and need to be able to handle hardware that can only deal with single threads – Polly allows the application to handle occasions when it doesn’t receive an acknowledgement from the hardware by waiting for a while and then retrying.
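
As a rough sketch of that pattern (this is purely illustrative and not code from one of those projects – SendCommandToDevice() is a hypothetical stand-in for the hardware call, and I’m assuming it throws a TimeoutException when no acknowledgement arrives), a simple synchronous retry with Polly might look like this:

var hardwareRetryPolicy = Policy
    .Handle<TimeoutException>()
    .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(500));
 
// retries the hypothetical device call up to 3 times, pausing half a second between attempts
hardwareRetryPolicy.Execute(() => SendCommandToDevice());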

Let’s get to some code

I’ve pushed all of the code below to a repo in my GitHub, so you can pull it locally and step through it yourself.

First, a couple of harnesses to simulate a flakey web service

So I’ve written a simple (and really awful) web service project to simulate random transient errors. The service is just meant to return what day it is, but it’ll only work about two times out of three. The rest of the time it’ll return either a 404 (Not Found), a 503 (Service Unavailable), or it’ll hang for 10 seconds and then return a 504 (Gateway Timeout).

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
 
namespace WorldsWorstWebService.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class WeekDayController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            // Manufacture 404, 503 and 504 errors for about a third of all responses
            var randomNumber = new Random();
            var randomInteger = randomNumber.Next(0, 8);
 
            switch (randomInteger)
            {
                case 0:
                    Debug.WriteLine("Webservice:About to serve a 404...");
                    return StatusCode(StatusCodes.Status404NotFound);
 
                case 1:
                    Debug.WriteLine("Webservice:About to serve a 503...");
                    return StatusCode(StatusCodes.Status503ServiceUnavailable);
 
                case 2:
                    Debug.WriteLine("Webservice:Sleeping for 10 seconds then serving a 504...");
                    Thread.Sleep(10000);
                    Debug.WriteLine("Webservice:About to serve a 504...");
 
                    return StatusCode(StatusCodes.Status504GatewayTimeout);
                default:
                {
                    var formattedCustomObject = JsonConvert.SerializeObject(
                        new
                        {
                            WeekDay = DateTime.Today.DayOfWeek.ToString()
                        });
 
                    Debug.WriteLine("Webservice:About to correctly serve a 200 response");
 
                    return Ok(formattedCustomObject);
                }
            }
        }
    }
}

I’ve also written another web application project that consumes this service using Flurl.

If you’re interested in Flurl and Restful web services, I’ve written more about using it here.

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Flurl.Http;
using Microsoft.AspNetCore.Mvc;
using MyWebsite.Models;
 
namespace MyWebsite.Controllers
{
    public class HomeController : Controller
    {
        public async Task<IActionResult> Index()
        {
            try
            {
                var weekday = await "https://localhost:44357/api/weekday"
                    .GetJsonAsync<WeekdayModel>();
 
                Debug.WriteLine("[App]: successful");
 
                return View(weekday);
            }
            catch (Exception e)
            {
                Debug.WriteLine("[App]: Failed - " + e.Message);
                throw;
            }
        }
    }
}

So I carried out a simple experiment – I ran both projects and hit my website 20 times. I mostly got successful responses, but I still got a load of failures. I’ve pasted the debug log below.

[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 504 (Gateway Timeout): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: Failed - Call failed with status code 503 (Service Unavailable): GET https://localhost:44357/api/weekday
[App]: successful
[App]: Failed - Call failed with status code 404 (Not Found): GET https://localhost:44357/api/weekday

So out of 20 page hits, my test web app failed 6 times – about a 30% failure rate. That’s pretty poor (and about consistent with what we expect from the flakey web service).

Let’s say I don’t control the behaviour of the web services upstream of my web app, so I can’t change the reason my web app is failing – but let’s see if Polly allows me to reduce the number of failures that my web app users see.

Wiring up Polly

First let’s design some rules, also known as ‘policies’

So what’s a ‘policy’? Basically it’s just a rule that’ll help mitigate the intermittent problem.

For example – the web service frequently delivers 404 and 503 messages, but it’s back up again quickly. So a policy could be:

Retry Policy: When the web service returns an unsuccessful HTTP code, wait a second and try again. If it still fails, wait three seconds and try again, and if it still fails, then wait five seconds and try one more time. If it fails after that, the service is dead and we need to deal with the error.

We also know that the web service hangs for 10 seconds before delivering a 504 timeout message. I don’t want my customers to wait this long – after a couple of seconds I’d like my app to give up, and execute the ‘Retry Policy’ above.

Timeout Policy: When I’ve been waiting for a response for longer than 2 seconds, cut my losses and execute the Retry Policy.

Wrapping these policies together forms a ‘Policy Strategy’.

So the first step is to install the Polly nuget package to the web app project:

Install-Package Polly

Polly is an open source project hosted on GitHub, with a BSD licence. It’s also a member of the .NET Foundation.

So what would these policies look like in code? The timeout policy is like the code below, where we can just pass the number of seconds to wait as a parameter:

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(2);

There’s also an overload that takes a delegate – I’ve used it below to write some debug messages.

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(2, (context, timeSpan, task) =>
{
    Debug.WriteLine($"[App|Policy]: Timeout delegate fired after {timeSpan.Seconds} seconds");
    return Task.CompletedTask;
});

The retry policy is a little different from the timeout policy:

  • I first specify the conditions under which I should retry – there must be an unsuccessful HTTP status code, or there must be a timeout exception.
  • Then I can specify how to wait and retry – first wait 1 second before retrying, then wait 3 seconds, then wait 5 seconds.
  • Finally I’ve used the overload with a delegate to write comments to debug.
var retryPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .Or<TimeoutRejectedException>()
    .WaitAndRetryAsync(new[]
        {
            TimeSpan.FromSeconds(1),
            TimeSpan.FromSeconds(3),
            TimeSpan.FromSeconds(5)
        },
        (result, timeSpan, retryCount, context) =>
        {
            Debug.WriteLine($"[App|Policy]: Retry delegate fired, attempt {retryCount}");
        });

And I can bundle these policies together as a single policy strategy like this:

var policyStrategy = Policy.WrapAsync(RetryPolicy, TimeoutPolicy);

I’ve grouped these policies in their own class and pasted the code below.

public static class Policies
{
    private static TimeoutPolicy<HttpResponseMessage> TimeoutPolicy
    {
        get
        {
            return Policy.TimeoutAsync<HttpResponseMessage>(2, (context, timeSpan, task) =>
            {
                Debug.WriteLine($"[App|Policy]: Timeout delegate fired after {timeSpan.Seconds} seconds");
                return Task.CompletedTask;
            });
        }
    }
 
    private static RetryPolicy<HttpResponseMessage> RetryPolicy
    {
        get
        {
            return Policy
                .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
                .Or<TimeoutRejectedException>()
                .WaitAndRetryAsync(new[]
                    {
                        TimeSpan.FromSeconds(1),
                        TimeSpan.FromSeconds(3),
                        TimeSpan.FromSeconds(5)
                    },
                    (delegateResult, timeSpan, retryCount, context) =>
                    {
                        Debug.WriteLine(
                            $"[App|Policy]: Retry delegate fired, attempt {retryCount}");
                    });
        }
    }
 
    public static PolicyWrap<HttpResponseMessage> PolicyStrategy => Policy.WrapAsync(RetryPolicy, TimeoutPolicy);
}

Now I want to apply this Policy Strategy to every outgoing call to the 3rd party web service.

How do I apply these policies when I’m using Flurl?

One of the things I really like about using Flurl to consume 3rd party web services is that I don’t need to instantiate an HttpClient, or worry about running out of available sockets every time I make a call – Flurl handles all of this in the background for me.

But that also means it’s not immediately obvious how I can configure calls to the HttpClient used in the background so that my policy strategy is applied to each call.

Fortunately Flurl provides a way to do this by adding a few new classes to my web app project, and a configuration instruction. I can configure Flurl’s settings in my web app’s Startup file to make it use a different implementation of Flurl’s default HttpClientFactory (which overrides how HTTP messages are handled).

public void ConfigureServices(IServiceCollection services)
{
    //...other service configuration here
 
    FlurlHttp.Configure(settings => settings.HttpClientFactory = new PollyHttpClientFactory());
}

The PollyHttpClientFactory is an extension of Flurl’s default HttpClientFactory. It overrides how HTTP messages are handled, so that they pass through our own PolicyHandler.

public class PollyHttpClientFactory : DefaultHttpClientFactory
{
    public override HttpMessageHandler CreateMessageHandler()
    {
        return new PolicyHandler
        {
            InnerHandler = base.CreateMessageHandler()
        };
    }
}

And the PolicyHandler is where we apply our rules (the policy strategy) to outgoing HTTP requests.

public class PolicyHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Policies.PolicyStrategy.ExecuteAsync(ct => base.SendAsync(request, ct), cancellationToken);
    }
}

Now let’s see if this improves things

With the policies applied to requests to the 3rd party web service, I repeated the earlier experiment and hit my application again 20 times.

[App]: successful
[App]: successful
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App|Policy]: Timeout delegate fired after 2000
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App]: successful
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App|Policy]: Retry delegate fired, attempt 1
[App|Policy]: Retry delegate fired, attempt 2
[App]: successful
[App]: successful
[App]: successful
[App]: successful

This time, my users would have experienced no application failures in those 20 page hits. All of the ‘[App|Policy]’ lines are the times that the web service failed and our policy kicked in to try again – which eventually led to a successful response from my web app.

In fact, I went on to hit the page 100 times and only saw two errors in total, so the total failure rate that my users experience now is at about 2% – way better than the 30% failure rate experienced originally.

Obviously this is a very contrived example – real world examples are likely to be a bit more complex. And your rules and policies will be different to mine. Instead of retrying, maybe you want to fall back to a different action (e.g. hit a different web service, pull from a cache etc.) – and Polly has its own fallback mechanism to do this. You’ll have to design your own rules and policies to handle the particular failure modes that you face.
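
To give a flavour of that, here’s a rough sketch of my own (it isn’t part of the repo above, and it assumes the System.Net and System.Net.Http namespaces are available alongside Polly) showing how a fallback policy could sit around the existing policy strategy and serve a canned response once the retries have given up:

var fallbackPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .Or<TimeoutRejectedException>()
    .FallbackAsync(
        cancellationToken => Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
        {
            // a canned 'default' body - in a real application this might come from a cache instead
            Content = new StringContent("{\"WeekDay\":\"Unknown\"}")
        }),
        result =>
        {
            Debug.WriteLine("[App|Policy]: Fallback delegate fired");
            return Task.CompletedTask;
        });
 
// the outermost policy goes first, so the fallback only fires once the retries are exhausted
var policyStrategy = Policy.WrapAsync(fallbackPolicy, RetryPolicy, TimeoutPolicy);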

Wrapping up

I had a couple of aims when writing this post – first of all I wanted to come up with a couple of different scenarios for how Polly could be used in your application. I mostly work with web applications and web services, and I also like using Flurl for accessing these services, so that’s what this article focusses on. But I’ve just scratched the surface here – Polly can do way more than that. Check out the Polly Wiki to find out more about it, or look at the samples.

 


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, .net core, C# tip, Flurl

Comparing RestSharp and Flurl.Http while consuming a web service in .NET Core

Just before the holidays I was working on a .NET Core project that needed data available from some web services. I’ve done this a bunch of times previously, and always seem to spend a couple of hours writing code using the HttpClient object before remembering there are libraries out there that have done the heavy lifting for me.

So I thought I’d do a little write up of a couple of popular library options that I’ve used – RestSharp and Flurl. I find that I learn quickest from reading example code, so I’ve written sample code showing how to use both of these libraries with a few different publicly available APIs.

I’ll look at three different services in this post:

  • api.postcodes.io – no authentication required, uses GET and POST verbs
  • api.nasa.gov – authentication via an API key passed in the query string
  • api.github.com – Basic Authentication required to access private repo information

And as an architect, I’m sometimes asked how to get started (and sometimes ‘why did you choose library X instead of library Y?’), so I’ve wrapped up with a comparison and which library I like best right now.

Reading data using RestSharp

This is a very mature and well documented open source project (released under the Apache 2.0 licence), with the code available on Github. You can install the nuget package in your project using package manager with the command:

Install-Package RestSharp

First – using the GET verb with RestSharp.

Using HTTP GET to return data from a web service

Using Postcodes.io

I’ve been working with mapping software recently – some of my data sources don’t have latitude and longitude for locations, and instead they only have a UK postcode. Fortunately I can use the free Postcodes.io RESTful web API to determine a latitude and longitude for each of the postcode values. I can either just send a postcode using a GET request to get the corresponding geocode (latitude and longitude) back, or I can use a POST request to send a list of postcodes and get a list of geocodes back, which speeds things up a bit with bulk processing.

Let’s start with a simple example – using the GET verb for a single postcode. I can request a geocode corresponding to a postcode from the Postcodes.io service through a browser with a URL like the one below:

https://api.postcodes.io/postcodes/IP1 3JR

This service doesn’t require any authentication, and the code below shows how to use RestSharp and C# to get data using a GET request.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes/IP1 3JR
var getRequest = new RestRequest("postcodes/{postcode}");
getRequest.AddUrlSegment("postcode", "IP1 3JR");
 
// send the GET request and return an object which contains the API's JSON response
var singleGeocodeResponseContainer = client.Execute(getRequest);
 
// get the API's JSON response
var singleGeocodeResponse = singleGeocodeResponseContainer.Content;

The example above returns raw JSON content, which I can deserialise into a custom POCO, such as the one below.

public class GeocodeResponse
{
    public string Status { get; set; }
 
    public Result Result { get; set; }
}
 
public class Result
{
    public string Postcode { get; set; }
 
    public string Longitude { get; set; }
 
    public string Latitude { get; set; }
}

But I can do better than the code above – if I specify the GeocodeResponse type in the Execute method (as shown below), RestSharp uses the classes above and intelligently hydrates the POCO from the raw JSON content returned:

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes/OX495NU
var getRequest = new RestRequest("postcodes/{postcode}");
getRequest.AddUrlSegment("postcode", "OX495NU");
 
// send the GET request and return an object which contains a strongly typed response
var singleGeocodeResponseContainer = client.Execute<GeocodeResponse>(getRequest);
 
// get the strongly typed response
var singleGeocodeResponse = singleGeocodeResponseContainer.Data;

Of course, not all APIs work in the same way, so here are another couple of examples of how to return data from different publicly available APIs.

NASA Astronomy Picture of the Day

This NASA API is also freely available, but slightly different from the Postcodes.io API in that it requires an API subscription key. NASA requires that the key is passed as a query string parameter, and RestSharp facilitates this with the AddQueryParameter method (as shown below).

This method of securing a service isn’t that unusual – goodreads.com/api also uses this method.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.nasa.gov/");
 
// specify the resource, e.g. https://api.nasa.gov/planetary/apod
var getRequest = new RestRequest("planetary/apod");
 
// Add the authentication key which NASA expects to be passed as a parameter
// This gives https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY
getRequest.AddQueryParameter("api_key", "DEMO_KEY");
 
// send the GET request and return an object which contains the API's JSON response
var pictureOfTheDayResponseContainer = client.Execute(getRequest);
 
// get the API's JSON response
var pictureOfTheDayJson = pictureOfTheDayResponseContainer.Content;

Again, I could create a custom POCO corresponding to the JSON structure and populate an instance of this by passing the type with the Execute method.
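
As a sketch of what that might look like (the property names below are my own guesses at a subset of the fields the APOD API returns – they’re not from an official client library):

public class PictureOfTheDayResponse
{
    public string Title { get; set; }
 
    public string Explanation { get; set; }
 
    public string Url { get; set; }
}
 
// passing the type to Execute<T> asks RestSharp to hydrate the POCO from the JSON response
var typedPictureOfTheDayContainer = client.Execute<PictureOfTheDayResponse>(getRequest);
var pictureOfTheDay = typedPictureOfTheDayContainer.Data;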

Github’s API

The Github API will return public data without any authentication, but if I provide Basic Authentication data it will also return extra information relevant to me about my profile, such as information about my private repositories.

RestSharp allows us to set an Authenticator property to specify the userid and password.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.github.com/");
 
// pass in user id and password 
client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", "[[my password]]");
 
// specify the resource that requires authentication
// e.g. https://api.github.com/users/jeremylindsayni
var getRequest = new RestRequest("users/jeremylindsayni");
 
// send the GET request and return an object which contains the API's JSON response
var response = client.Execute(getRequest);

Obviously you shouldn’t hard code your password into your code – these are just examples of how to return data, they’re not meant to be best practices. You might want to store your password in an environment variable, or you could do even better and use Azure Key Vault – I’ve written about how to do that here and here.
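
As a small illustration of the environment variable option (the variable name here is just an example I’ve made up):

// read the password at runtime rather than hard coding it into the source
var gitHubPassword = Environment.GetEnvironmentVariable("GITHUB_PASSWORD");
client.Authenticator = new HttpBasicAuthenticator("jeremylindsayni", gitHubPassword);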

Using the POST verb to obtain data from a web service

The code in the previous example refers to GET requests – a POST request is slightly more complex.

The api.postcodes.io service has a few different endpoints – the one I described earlier only finds geocode information for a single postcode – but I’m also able to post a JSON list of up to 100 postcodes, and get corresponding geocode information back as a JSON list. The JSON needs to be in the format below:

{
   "postcodes" : ["IP1 3JR", "M32 0JG"]
}

Normally I prefer to manipulate data in C# structures, so I can add my list of postcodes to the object below.

public class PostCodeCollection
{
    public List<string> postcodes { get; set; }
}

I’m able to create a POCO object with the data I want to post to the body of the POST request, and RestSharp will automatically convert it to JSON when I pass the object into the AddJsonBody method.

// instantiate the RestClient with the base API url
var client = new RestClient("https://api.postcodes.io");
 
// specify the resource, e.g. https://api.postcodes.io/postcodes
var postRequest = new RestRequest("postcodes", Method.POST, DataFormat.Json);
 
// instantiate and hydrate a POCO object with the list of postcodes we want geocode data for
var postcodes = new PostCodeCollection { postcodes = new List<string> { "IP1 3JR", "M32 0JG" } };
 
// add this POCO object to the request body, RestSharp automatically serialises it to JSON
postRequest.AddJsonBody(postcodes);
 
// send the POST request and return an object which contains JSON
var bulkGeocodeResponseContainer = client.Execute(postRequest);

One gotcha – RestSharp Serialization and Deserialization

One aspect of RestSharp that I don’t like is how the JSON serialisation and deserialisation works. RestSharp uses its own engine for processing JSON, but I prefer Json.NET for this. For example, if I use the default JSON processing engine in RestSharp, then my PostCodeCollection POCO needs to have property names which exactly match the JSON property names (including case sensitivity).

I’m used to working with Json.NET and decorating properties with attributes describing how to serialise into JSON, but this won’t work with RestSharp by default.

// THIS DOESN'T WORK WITH RESTSHARP UNLESS YOU ALSO USE **AND REGISTER** JSON.NET
public class PostCodeCollection
{
    [JsonProperty(PropertyName = "postcodes")]
    public List<string> Postcodes { get; set; }
}

Instead I need to override the default RestSharp serializer and instruct it to use Json.NET. The RestSharp maintainers have written about their reasons here and also here – and helped out by writing the code to show how to override the default RestSharp serializer. But personally I’d rather just use Json.NET the way I normally do, and not have to jump through an extra hoop to use it.

Reading Data using Flurl

Flurl is newer than RestSharp, but it’s still a reasonably mature and well documented open source project (released under the MIT licence). Again, the code is on Github.

Flurl is different from RestSharp in that it allows you to consume the web service by building a fluent chain of instructions.

You can install the nuget package in your project using package manager with the command:

Install-Package Flurl.Http

Using HTTP GET to return data from a web service

Let’s look at how to use the GET verb to read data from api.postcodes.io, api.nasa.gov, and api.github.com.

First, using Flurl with api.postcodes.io

The code below searches for geocode data from the specified postcode, and returns the raw JSON response. There’s no need to instantiate a client, and I’ve written much less code than I wrote with RestSharp.

var singleGeocodeResponse = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .AppendPathSegment("IP1 3JR")
    .GetJsonAsync();

I also find using the POST method with postcodes.io easier with Flurl. Even though Flurl doesn’t have a built-in JSON serialiser, it’s easy for me to install the Json.NET package – this means I can now use a POCO like the one below…

public class PostCodeCollection
{
    [JsonProperty(PropertyName = "postcodes")]
    public List<string> Postcodes { get; set; }
}

… to fluently build up a post request like the one below. I can also create my own custom POCO – GeocodeResponseCollection – which Flurl will automatically populate with the JSON fields.

var postcodes = new PostCodeCollection { Postcodes = new List<string> { "OX49 5NU", "M32 0JG" } };
 
var url = await "https://api.postcodes.io"
    .AppendPathSegment("postcodes")
    .PostJsonAsync(postcodes)
    .ReceiveJson<GeocodeResponseCollection>();
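
I haven’t shown GeocodeResponseCollection above – a rough sketch of it, based on the JSON that postcodes.io returns for bulk lookups and reusing the Result class from the RestSharp section earlier, might look like this:

public class GeocodeResponseCollection
{
    [JsonProperty(PropertyName = "status")]
    public int Status { get; set; }
 
    [JsonProperty(PropertyName = "result")]
    public List<BulkGeocodeResult> Results { get; set; }
}
 
public class BulkGeocodeResult
{
    [JsonProperty(PropertyName = "query")]
    public string Query { get; set; }
 
    [JsonProperty(PropertyName = "result")]
    public Result Result { get; set; }
}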

Next, using Flurl with api.nasa.gov

As mentioned previously, NASA’s astronomy picture of the day requires a demo key passed in the query string – I can do this with Flurl using the code below:

var astronomyPictureOfTheDayJsonResponse = await "https://api.nasa.gov/"
    .AppendPathSegments("planetary", "apod")
    .SetQueryParam("api_key", "DEMO_KEY")
    .GetJsonAsync();

Again, it’s a very concise way of retrieving data from a web service.

Finally using Flurl with api.github.com

Lastly for this post, the code below show how to use Flurl with Basic Authentication and the Github API.

var gitHubUserResponse = await "https://api.github.com/"
    .AppendPathSegments("users", "jeremylindsayni")
    .WithBasicAuth("jeremylindsayni", "[[my password]]")
    .WithHeader("user-agent", "csharp-console-app")
    .GetJsonAsync();

One interesting difference in this example between RestSharp and Flurl is that I had to send user-agent information to the Github API with Flurl – I didn’t need to do this with RestSharp.

Wrapping up

Both RestSharp and Flurl are great options for consuming RESTful web services – they’re both stable, the source for both is on GitHub, and there’s great documentation. They let me write less code and do the thing I want to do quickly, rather than spending ages writing my own code and tests.

Right now, I prefer working with Flurl, though the choice comes down to personal preference. Things I like are:

  • Flurl’s MIT licence
  • I can achieve the same results with less code, and
  • I can integrate Json.NET with Flurl out of the box, with no extra classes needed.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Non-functional Requirements, Security

Serve CSS and JavaScript from CDNs safely with subresource integrity (SRI) attributes

I’m building a web application at the moment which plots data on a map using the Leaflet JS framework. Leaflet JS is fantastic, and has a huge number of open-source community plugins which make it even more useful.

I can download these plugins and host the JavaScript and CSS on my own website, but I’d prefer to use a CDN (Content Delivery Network) like CloudFlare. Using a service like this means I don’t have to host the files myself, and these files will be served to my users from a location that’s close to them.

Obviously this means that the CDN is now in control of my files.

How can I make sure these files on the CDN haven’t been tampered with before I serve them up to my users?

W3C.org recommends that “compromise of a third-party service should not automatically mean compromise of every site which includes its scripts”.

Troy Hunt wrote about this a while back and recommends using the ‘integrity’ attributes in script and link tags that reference subresources – supported browsers will calculate a hash of the file served by the CDN and compare that hash with the value in the integrity attribute. If they don’t match, the browser doesn’t serve the file.

The catch is that not all browsers support this – though coverage in modern browsers is pretty good. You can check out caniuse.com to see which browsers support the integrity attribute.


How can I calculate the hash of my file to put in the integrity attribute?

I like to use Scott Helme’s utility at https://report-uri.com/home/sri_hash/ to create the hash of JavaScript and CSS files. This calculates 3 different hashes, using SHA256, SHA384 and SHA512.
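
If you’d rather not paste your files into a website, you can calculate the same value yourself – the little C# sketch below produces a sha384 hash in the format the integrity attribute expects (the file path is just an example, and you’d want to hash exactly the bytes the CDN serves):

using System;
using System.IO;
using System.Security.Cryptography;
 
class SriHashCalculator
{
    static void Main()
    {
        // hash the file and base64-encode the result, as the integrity attribute expects
        var fileBytes = File.ReadAllBytes("leaflet.js");
 
        using (var sha384 = SHA384.Create())
        {
            var hash = Convert.ToBase64String(sha384.ComputeHash(fileBytes));
            Console.WriteLine($"sha384-{hash}");
        }
    }
}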

So instead of my script tag looking like this:

<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.js"></script>

My script tags now look like this:

<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.js" 
        integrity="sha256-tfcLorv/GWSrbbsn6NVgflWp1YOmTjyJ8HWtfXaOaJc= sha384-/I247jMyT/djAL4ijcbNXfX+PA8OZmkwzUr6Gotpgjz1Rxti1ZECG9Ne0Dj1pXrx sha512-nMMmRyTVoLYqjP9hrbed9S+FzjZHW5gY1TWCHA5ckwXZBadntCNs8kEqAWdrb9O7rxbCaA4lKTIWjDXZxflOcA==" 
        crossorigin="anonymous"></script>

This also works for CSS – without the integrity attribute it would look like the code below:

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.1/dist/leaflet.css" />

But a more secure version is below:

<link rel="stylesheet" 
      href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.css" integrity="sha256-YR4HrDE479EpYZgeTkQfgVJq08+277UXxMLbi/YP69o= sha384-BF7C732iE6WuqJMhUnTNJJLVvW1TIP87P2nMDY7aN2j2EJFWIaqK89j3WlirhFZU sha512-puBpdR0798OZvTTbP4A8Ix/l+A4dHDD0DGqYW6RQ+9jxkRFclaxxQb/SJAWZfWAkuyeQUytO7+7N4QKrDh+drA==" 
      crossorigin="anonymous">

Wrapping up

Hopefully this is useful information, and provides a guide on how to make sure your site doesn’t serve up JavaScript or CSS content that has been tampered with.

.net, C# tip, Visual Studio, Xamarin

How to detect nearby Bluetooth devices with .NET and Xamarin.Android

I’m working on a Xamarin.Android app at the moment – for this app, I need to detect what Bluetooth devices are available to my Android phone (so the user can choose which one to pair with).

For modern versions of Android, it’s not as simple as just using a BroadcastReceiver (although that is part of the solution). In this post I’ll write about the steps needed to successfully use the Bluetooth hardware on your Android phone with .NET.

One thing to note – I can test detecting Bluetooth devices by deploying my code directly onto an Android device, but I can’t use the Android emulator as it doesn’t have Bluetooth support.

As usual I’ve uploaded my code to GitHub (you can get it here).

Update AndroidManifest.xml with Bluetooth and Location permissions

First I had to make sure that my application told the device what hardware services it needed to access. For detecting and interacting with Bluetooth hardware, there are four permissions to add to the application AndroidManifest.xml:

  • Bluetooth
  • Bluetooth Admin
  • Access Coarse Location
  • Access Fine Location

When the application loads on the Android device for the first time, the user will be challenged to allow the application permission to use these hardware services.

I’ve pasted my AndroidManifest.xml file below – yours will look slightly different, but the important part is the block of uses-permission elements near the bottom.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          android:versionCode="1"
          android:versionName="1.0"
          package="Bluetooth_Device_Scanner.Bluetooth_Device_Scanner">
  <uses-sdk android:minSdkVersion="23" android:targetSdkVersion="27" />
  <application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
  </application>
  <uses-permission android:name="android.permission.BLUETOOTH" />
  <uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />
  <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>

 

List the Bluetooth devices which the Android device has already paired with

This part is very straightforward – remember the code below will list to console only the Bluetooth devices which have already been detected and paired with the Android device. It will not list other devices which haven’t already been paired with each other (I write about this later in the article).

Obviously you’d probably want to write the output to a UI component rather than write to the console, but for this article I wanted to cut this down to only the Bluetooth interactions and not focus on UI interactions.

if (BluetoothAdapter.DefaultAdapter != null && BluetoothAdapter.DefaultAdapter.IsEnabled)
{
    foreach (var pairedDevice in BluetoothAdapter.DefaultAdapter.BondedDevices)
    {
        Console.WriteLine(
            $"Found device with name: {pairedDevice.Name} and MAC address: {pairedDevice.Address}");
    }
}

There’s not much more to say about this – I can put this pretty much anywhere in the C# code and it’ll just work as expected.

List new Bluetooth devices by creating a BluetoothDeviceReceiver class that extends BroadcastReceiver

Next I wanted to list the Bluetooth devices that haven’t been paired with the Android device. I can do this by creating a receiver class, which extends the ‘BroadcastReceiver’ base class, and overrides the ‘OnReceive’ method – I’ve included the code for my class below.

using System;
using Android.Bluetooth;
using Android.Content;
 
namespace Bluetooth_Device_Scanner
{
    public class BluetoothDeviceReceiver : BroadcastReceiver
    {
        public override void OnReceive(Context context, Intent intent)
        {
            var action = intent.Action;
            
            if (action != BluetoothDevice.ActionFound)
            {
                return;
            }
 
            // Get the device
            var device = (BluetoothDevice)intent.GetParcelableExtra(BluetoothDevice.ExtraDevice);
 
            if (device.BondState != Bond.Bonded)
            {
                Console.WriteLine($"Found device with name: {device.Name} and MAC address: {device.Address}");
            }
        }
    }
}

This receiver class is registered with the application and told to activate when the Android device detects specific events – such as finding a new Bluetooth device. Xamarin.Android does this through something called an ‘Intent’. The code below shows how to register the receiver to trigger when a Bluetooth device is detected.

// Register for broadcasts when a device is discovered
_receiver = new BluetoothDeviceReceiver();
RegisterReceiver(_receiver, new IntentFilter(BluetoothDevice.ActionFound));

When the Android device finds a new Bluetooth device and calls the OnReceive method, the class checks that the event is definitely the right one (i.e. BluetoothDevice.ActionFound).

Then it checks that the devices are not already paired (i.e. ‘Bonded’) and again my class just writes some details to the console about the Bluetooth device that it’s found.

But we’re not quite done yet – there’s one more very important piece of code which is necessary for modern versions of Android.

Finally – check permissions are applied at runtime

This is the bit that is sometimes missed in other tutorials, and that’s possibly because this is only needed for more recent versions of Android, so older tutorials wouldn’t have needed this step.

Basically even though the Access Coarse and Fine Location permissions are already specified in the AndroidManifest.xml file, if you’re using later than version 23 of the Android SDK, you need to also check that the permissions are correctly set at runtime. If they aren’t, you need to add code to prompt the user to grant these permissions.

There’s lots more about this topic on the Xamarin blog here.

I’ve pasted my MainActivity class below. This class:

  • Checks permissions,
  • Prompts the user for any permissions that are missing,
  • Registers the receiver to trigger when Bluetooth devices are detected, and
  • Starts scanning for Bluetooth devices.
using Android;
using Android.App;
using Android.Bluetooth;
using Android.Content;
using Android.Content.PM;
using Android.OS;
using Android.Support.V4.App;
using Android.Support.V4.Content;
 
namespace Bluetooth_Device_Scanner
{
    [Activity(Label = "Bluetooth Device Scanner", MainLauncher = true)]
    public class MainActivity : Activity
    {
        private BluetoothDeviceReceiver _receiver;
 
        protected override void OnCreate(Bundle savedInstanceState)
        {
            base.OnCreate(savedInstanceState);
 
            SetContentView(Resource.Layout.activity_main);
 
            const int locationPermissionsRequestCode = 1000;
 
            var locationPermissions = new[]
            {
                Manifest.Permission.AccessCoarseLocation,
                Manifest.Permission.AccessFineLocation
            };
 
            // check if the app has permission to access coarse location
            var coarseLocationPermissionGranted =
                ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation);
 
            // check if the app has permission to access fine location
            var fineLocationPermissionGranted =
                ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation);
 
            // if either is denied permission, request permission from the user
            if (coarseLocationPermissionGranted == Permission.Denied ||
                fineLocationPermissionGranted == Permission.Denied)
            {
                ActivityCompat.RequestPermissions(this, locationPermissions, locationPermissionsRequestCode);
            }
 
            // Register for broadcasts when a device is discovered
            _receiver = new BluetoothDeviceReceiver();
 
            RegisterReceiver(_receiver, new IntentFilter(BluetoothDevice.ActionFound));
 
            BluetoothAdapter.DefaultAdapter.StartDiscovery();
        }
    }
}

Now the application will call the BluetoothDeviceReceiver class’s OnReceive method when it detects Bluetooth hardware.
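
One small addition I’d suggest (it’s not in the code above, so treat it as an optional extra): unregister the receiver and cancel discovery when the activity is destroyed, so the scan doesn’t keep running after the user leaves the screen.

protected override void OnDestroy()
{
    base.OnDestroy();
 
    // stop scanning and release the receiver when the activity goes away
    BluetoothAdapter.DefaultAdapter?.CancelDiscovery();
    UnregisterReceiver(_receiver);
}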

Wrapping up

Hopefully this is useful to anyone writing a Xamarin.Android application that interacts with Bluetooth devices – I struggled with this for a while and wasn’t able to find an article which detailed all the pieces of the puzzle:

  • Update the manifest with the 4 required application permissions
  • Create a class that extends BroadcastReceiver,
  • Check at runtime that the location permissions have been granted and prompt the user if they haven’t, and
  • Register the receiver class and start discovery.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

Xamarin

Building Xamarin.Android projects and how to find the ‘missing Android SDKs required for building’.

Recently I’ve seen the error below when trying to build some sample Xamarin projects out-of-the-box in Visual Studio 2017.


Seems like a simple error to fix…just follow the instructions and double click on the message to install the missing Android SDKs, right?

On my machine, double-clicking on the error does not install the missing Android SDKs, which makes me a sad developer.

After a lot of searching, I found that double-clicking does nothing because my instance of VS2017 is configured not to auto-install the SDKs – and I expect I’m not the only one seeing this error, because VS2017 is set up this way by the default installation.

If you’re unlucky and are at this article because you’re seeing the same error as me and double clicking is doing nothing for you, you could try this – in Visual Studio go to Tools -> Options, and scroll down to the Xamarin settings. You’ll probably see an un-ticked checkbox setting with the name ‘Auto Install Android SDKs’.


I changed the value to ticked and hit OK.


After I changed this setting, I tried double clicking on the error message again, and this time the Android SDK licence window popped up immediately. I was able to follow through the SDK installation wizard and make the project compile.


So a pretty simple fix, but it’d be better if the error message said:

‘Make sure that Visual Studio is set to Auto Install Android SDKs in the Xamarin Android Settings options, and then double click on this message and follow the instructions’.

Azure, GIS, Leaflet, Security, Web Development

Getting started with Azure Maps, using Leaflet to display roads and satellite images, and comparing different browser behaviours

In this post I’m going to describe how to use the new(ish) Azure Maps service from Microsoft with the Leaflet JavaScript library. Azure Maps provides its own API for Geoservices, but I have an existing application that uses Leaflet, and I wanted to try out using the Azure Maps tiling services.

Rather than just replicating the example that already exists on the excellent Azure Maps Code Samples site, I’ll go a bit further:

  • I’ll show how to display both the tiles with roads and those with aerial images
  • I’ll show how to switch between the layers using a UI component on the map
  • I’ll show how Leaflet can identify your present location
  • And I’ll talk about my experiences of location accuracy in Chrome, Firefox and Edge browsers on my desktop machine.

As usual, I’ve made my code open source and posted it to GitHub here.

First, use your Azure account to get your map API Key

I won’t go into lots of detail about this part – Microsoft have documented the process very well here. In summary:

  • If you don’t have an Azure account, there are instructions here on how to create one.
  • Create a Maps account within the Azure Portal and get your API Key (instructions here).

Once you have set up a resource group in Azure to manage your mapping services, you’ll be able to track usage and errors through the portal.


You’ll use this API Key to identify yourself to the Azure Maps tiling service. Azure Maps is not a free service – pricing information is here – although presently on the S0 tier there is an included free quantity of tiles and services.

API Key security is one key area of Azure Maps that I would like to see enhanced – the API Key has to be rendered on the client somewhere in plain text and then passed back to the maps API. Even with HTTPS, the API Key could be easily intercepted by someone viewing the page source, or using a tool to read outgoing requests.

Many other tiling services use CORS to restrict which domains can make requests, but:

  • Azure Maps doesn’t do this at the time of writing and
  • This isn’t real security because the Origin header can be easily modified (I know it’s a forbidden header name for a browser but tools like cUrl can spoof the Origin). More discussion here and here.

So this isn’t a solved problem yet – I’d recommend you consider how you use your API Key very carefully and bear in mind that if you expose it on the internet you’re leaving your account open to abuse. There’s an open issue about this raised on GitHub and hopefully there will be an announcement soon.

Next, set up your web page to use the Leaflet JS library

There’s a very helpful ‘getting started‘ tutorial on the Leaflet website – I added the stylesheet and javascript to my webpage’s head using the code below.

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.1/dist/leaflet.css"
      integrity="sha512-Rksm5RenBEKSKFjgI3a41vrjkw4EVPlJ3+OiI65vTjIdo9brlAacEuKOiQ5OFh7cOI1bkDwLqdLw3Zg0cRJAAQ=="
      crossorigin="" />
 
<script src="https://unpkg.com/leaflet@1.3.1/dist/leaflet.js"
        integrity="sha512-/Nsx9X4HebavoBvEBuyp3I7od5tA0UzAxs+j83KgC8PU0kgB4XiK4Lfe4y4cgBtaRJQEIFCW+oC506aPT2L1zw=="
        crossorigin=""></script>

Now add the URLs to your JavaScript to access the tiling services

I’ve included some very simple JavaScript code below for accessing two Azure Maps services – the tiles which display roadmaps and also those which have satellite images.

function satelliteImageryUrl() {
    return "https://atlas.microsoft.com/map/imagery/png?api-version=1&style=satellite&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}
 
function roadMapTilesUrl() {
    return "https://atlas.microsoft.com/map/tile/png?api-version=1&layer=basic&style=main&TileFormat=pbf&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}

If you’re interested in reading more about these two tiling services, there’s more about the road map service here and more about the satellite image service here.

Now add the tiling layers to Leaflet and create the map

I’ve written a JavaScript function below which registers the two tiling layers (satellite and roads) with Leaflet. It also instantiates the map object, and attempts to identify the user’s location from the browser. Finally it registers a control which will appear on the map and list the available tiling services, allowing me to toggle between them on the fly.

var map;
 
function GetMap() {
    var subscriptionKey = '[[[**YOUR API KEY HERE**]]]';
 
    var satellite = L.tileLayer(satelliteImageryUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureSatelliteMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });
 
    var roads = L.tileLayer(roadMapTilesUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureRoadMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });
 
    // instantiate the map object and display the 'roads' layer
    map = L.map('myMap', { layers: [roads] });
 
    // attempt to identify the user's location from the browser
    map.locate({ setView: true, enableHighAccuracy: true });
    map.on('locationfound', onLocationFound);
 
    // create an array of the tiling base layers and their 'friendly' names
    var baseMaps = {
        "Azure Satellite Imagery": satellite,
        "Azure Roads": roads
    };
 
    // add a control to map (top-right by default) allowing the user to toggle the layer
    L.control.layers(baseMaps, null, { collapsed: false }).addTo(map);
}

Finally, I’ve added a div to my page which specifies the size of the map, gives it the Id “myMap” (which I’ve used in the JavaScript above when instantiating the map object), and I call the GetMap() method when the page loads.

<body onload="GetMap()">
    <div id="myMap" style="position:relative;width:900px;height:600px;"></div>
</body>

If the browser’s geolocation services have identified my location, I’ll also be given an accuracy in meters – the JavaScript below allows me to draw a circle on my map to indicate where the browser believes my location to be.

map.on('locationfound', onLocationFound);
 
function onLocationFound(e) {
    var radius = e.accuracy / 2;
 
    L.marker(e.latlng)
        .addTo(map)
        .bindPopup("You are within " + radius + " meters from this point")
        .openPopup();
 
    L.circle(e.latlng, radius).addTo(map);
}

I ran this first in the MS Edge browser, which showed the road and landmark tiles near my location, and I was able to swap to the satellite imagery using the layer control at the top right of the map.

Results in Firefox and Chrome

When I ran this in Firefox and Chrome, I found that my location was identified with much less accuracy. I know both of these browsers use the Google GeoLocation API and MS Edge uses the Windows Location API so this might account for the difference on my machine (Windows 10), but I’d need to do more experimentation to better understand. Obviously my laptop machine doesn’t have GPS hardware, so testing on a mobile phone probably would give very different results.


Wrapping up

We’ve seen how to use the Azure Maps tiling services with the Leaflet JS library, and create a very basic web application which uses the Azure Maps tiling services to display both road and landmark data, and also satellite aerial imagery. It seems to me that MS Edge is able to identify my location much more accurately on a desktop machine than Firefox or Chrome on my Windows 10 machine (within a 75m radius on Edge, and over 3.114km radius on Firefox and Chrome) – however, your mileage may vary.

Finally, as I emphasised above, I’ve concerns about the security of a production application using an API Key in plain text inside my JavaScript, and hopefully Microsoft will deploy a solution with improved security soon.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

GeoJson, GIS, SQL Server

How to list feature property information for GeoJSON data using SQL Server 2017 and OPENJSON

I’ve been importing GeoJSON files into tables in SQL Server 2017 – I’ve just had the files themselves without any metadata, so I’ve had to look into the file contents to work out what properties the listed features have, so that I can design the table schema.

After a while it got painful looking at the GeoJSON text and trying to parse it visually, and I wondered if I could write a SQL command to list the feature properties for me.

The SQL below allows me to import a GeoJSON file and quickly peek into the first feature in the collection of features, and list all the name-value pairs in the ‘property’ node.

Declare @geoJson nvarchar(max)
 
SELECT @geoJson = BulkColumn
FROM OPENROWSET (BULK 'C:\Users\jeremy.lindsay\Desktop\my_gis_data.json', SINGLE_CLOB) as importedGeoJson
 
SELECT *
FROM OPENJSON(@geoJson, '$.features[0].properties')

Now I can read the properties of the first feature much more easily.

I also get the values and the type SQL Server infers about the property data – that’s interesting but I don’t trust the type information too much, as that could be different in every feature in the collection. (Of course the properties could be different in every feature in the collection as well!)


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!