.net, .net core, Non-functional Requirements, Performance

Using async/await and Task.WhenAll to improve the overall speed of your C# code

Recently I’ve been looking at ways to improve the performance of some .NET code, and this post is about an async/await pattern I’ve observed a few times which I’ve been able to refactor.

Every so often, I see code like the sample below – a single method or service which awaits the outputs of numerous methods marked as asynchronous.

await FirstMethodAsync();
 
await SecondMethodAsync();
 
await ThirdMethodAsync();

The three methods don’t seem to depend on each other in any way, and since they’re all asynchronous methods, it’s possible to run them in parallel. But for some reason, the implementation runs all three sequentially – the flow of execution awaits the first method running and completing, then the second, and then the third.

We might be able to do better than this.
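As a sketch of what that refactor could look like, the three calls can be started first and then awaited together. The stub methods below just fake a little work with Task.Delay, standing in for the real independent methods:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    // Stubs standing in for the three independent asynchronous methods -
    // Task.Delay fakes a little bit of work.
    static Task FirstMethodAsync() => Task.Delay(50);
    static Task SecondMethodAsync() => Task.Delay(50);
    static Task ThirdMethodAsync() => Task.Delay(50);

    static async Task Main()
    {
        // Start all three operations without awaiting each one in turn...
        var first = FirstMethodAsync();
        var second = SecondMethodAsync();
        var third = ThirdMethodAsync();

        // ...then wait for them all to complete together.
        await Task.WhenAll(first, second, third);

        Console.WriteLine("All three completed");
    }
}
```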

Let’s look at an example

For this post, I’ve created a couple of sample methods which can be run asynchronously – they’re called SlowAndComplexSumAsync and SlowAndComplexWordAsync.

What these methods actually do isn’t important, so don’t worry about what function they serve – I’ve just contrived them to do something and be quite slow, so I can observe how my code’s overall performance alters as I do some refactoring.

First, SlowAndComplexSumAsync (below) adds a few numbers together, with some artificial delays to deliberately slow it down – this takes about 2.5s to run.

private static async Task<int> SlowAndComplexSumAsync()
{
    int sum = 0;
    foreach (var counter in Enumerable.Range(0, 25))
    {
        sum += counter;
        await Task.Delay(100);
    }
 
    return sum;
}

Next, SlowAndComplexWordAsync (below) concatenates characters together, again with some artificial delays to slow it down. This method usually takes about 4s to run.

private static async Task<string> SlowAndComplexWordAsync()
{
    var word = string.Empty;
    foreach (var counter in Enumerable.Range(65, 26))
    {
        word = string.Concat(word, (char) counter);
        await Task.Delay(150);
    }
 
    return word;
}
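As an aside, the expected outputs of both methods can be sanity-checked with a couple of plain LINQ expressions, without any of the async machinery:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Sum of the integers 0..24 - the value SlowAndComplexSumAsync returns.
        int sum = Enumerable.Range(0, 25).Sum();
        Console.WriteLine(sum);  // 300

        // Characters 65..90 are 'A'..'Z' - the word SlowAndComplexWordAsync builds.
        string word = string.Concat(Enumerable.Range(65, 26).Select(c => (char)c));
        Console.WriteLine(word);  // ABCDEFGHIJKLMNOPQRSTUVWXYZ
    }
}
```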

Running synchronously – the slow way

Obviously I can just prefix each method with the “await” keyword in a Main method marked with the async keyword, as shown below. This code basically just runs the two sample methods sequentially (despite the async/await cruft in the code).

private static async Task Main(string[] args)
{
    var stopwatch = new Stopwatch();
    stopwatch.Start();
 
    // This method takes about 2.5s to run
    var complexSum = await SlowAndComplexSumAsync();
 
    // The elapsed time will be approximately 2.5s so far
    Console.WriteLine("Time elapsed when sum completes..." + stopwatch.Elapsed);
 
    // This method takes about 4s to run
    var complexWord = await SlowAndComplexWordAsync();
    
    // The elapsed time at this point will be about 6.5s
    Console.WriteLine("Time elapsed when both complete..." + stopwatch.Elapsed);
    
    // These lines are to prove the outputs are as expected,
    // i.e. 300 for the complex sum and "ABC...XYZ" for the complex word
    Console.WriteLine("Result of complex sum = " + complexSum);
    Console.WriteLine("Result of complex letter processing " + complexWord);
 
    Console.Read();
}

When I run this code, the console output looks like the image below:

[Image: console output showing the two methods running in series]

As can be seen in the console output, both methods run consecutively – the first one takes a bit over 2.5s, and then the second method runs (taking a bit over 4s), causing the total running time to be just under 7s (which is pretty close to the predicted duration of 6.5s).

Running asynchronously – the faster way

But I’ve missed a great opportunity to make this program run faster. Instead of running each method and waiting for it to complete before starting the next one, I can start them all together and await the Task.WhenAll method to make sure all methods are completed before proceeding to the rest of the program.

This technique is shown in the code below.

private static async Task Main(string[] args)
{
    var stopwatch = new Stopwatch();
    stopwatch.Start();
 
    // this task will take about 2.5s to complete
    var sumTask = SlowAndComplexSumAsync();
 
    // this task will take about 4s to complete
    var wordTask = SlowAndComplexWordAsync();
 
    // running them in parallel should take about 4s to complete
    await Task.WhenAll(sumTask, wordTask);

    // The elapsed time at this point will only be about 4s
    Console.WriteLine("Time elapsed when both complete..." + stopwatch.Elapsed);
 
    // These lines are to prove the outputs are as expected,
    // i.e. 300 for the complex sum and "ABC...XYZ" for the complex word
    Console.WriteLine("Result of complex sum = " + sumTask.Result);
    Console.WriteLine("Result of complex letter processing " + wordTask.Result);
 
    Console.Read();
}

And the outputs are shown in the image below.

[Image: console output showing both methods running in parallel]

The total running time is now only a bit over 4s – way better than the previous time of around 7s. This is because we are running both methods in parallel, and making full use of the opportunity asynchronous methods present. Now our total execution time is only as long as the slowest method, rather than the cumulative time for all methods executing one after another.
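One small related note: once Task.WhenAll has completed, awaiting each task individually returns its result immediately – and unlike reading .Result, an await rethrows the original exception rather than an AggregateException if a task has faulted. A minimal self-contained sketch:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task<int> GetNumberAsync()
    {
        await Task.Delay(10);
        return 300;
    }

    static async Task<string> GetWordAsync()
    {
        await Task.Delay(20);
        return "ABC";
    }

    static async Task Main()
    {
        var numberTask = GetNumberAsync();
        var wordTask = GetWordAsync();

        await Task.WhenAll(numberTask, wordTask);

        // Both tasks have already completed, so these awaits return immediately.
        int number = await numberTask;
        string word = await wordTask;

        Console.WriteLine(number + " " + word);
    }
}
```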

Wrapping up

I hope this post has helped shine a little light on how to use the async/await keywords and how to use Task.WhenAll to run independent methods in parallel.

Obviously every case has its own merits – but if code has a series of asynchronous methods written so that each one has to wait for the previous one to complete, definitely check whether the code can be refactored to use Task.WhenAll to improve the overall speed.

And maybe even more importantly, when designing an API surface, keep in mind that decoupling dependencies between methods might give developers using the API an opportunity to run these asynchronous methods in parallel.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

Non-functional Requirements, Security

Another good reason to use GitHub.com to host your .NET source code – automated NuGet package vulnerability scans

This quick post is about how you can use GitHub and the OSS Index to scan your project’s NuGet packages for vulnerabilities – a good example of how to perform application security testing early in the application life cycle (also known as ‘shift left’).

So here’s a problem

You’re working on a .NET Core application, and obviously using some of the libraries provided by Microsoft. Behind the scenes, Microsoft’s security teams have found a security hole in one of these libraries and issued a patched NuGet package.

You wouldn’t always immediately update NuGet packages every time there’s an upgrade, but you definitely would upgrade if you knew there was a vulnerability in one of your dependencies.

How are you going to know that Microsoft have found and fixed a vulnerability, so you can prioritise an application upgrade?

GitHub is helping to solve this problem

A little while back I got a couple of notifications from GitHub that one of my projects refers to a Microsoft NuGet package – Microsoft.AspNetCore.All, version 2.0.5 – and the library contains a potential security hazard.

[Image: GitHub security alert notification]

I was pretty surprised – I already use the Audit.NET extension within Visual Studio 2017 to audit my NuGet packages against the OSS index. This extension triggers an error at design time in Visual Studio if it detects that my project uses a package that has a known vulnerability.

If you haven’t checked out the Sonatype OSS index, I really encourage you to do so – it has a bunch of useful tools and information to identify if there are known security vulnerabilities lurking in your open source dependencies.

[Image: the Sonatype OSS Index site]

GitHub helpfully provided me with a bit more detail on what the problem was – my project used version 2.0.5 of the Microsoft.AspNetCore.All NuGet package, and this version is vulnerable to a couple of forms of attack (denial of service and excess consumption of resources).

[Image: GitHub vulnerability detail for Microsoft.AspNetCore.All 2.0.5]

This makes me extremely glad to be using GitHub to store my code, as it directly highlights to me that there’s a potential hazard lurking in the project dependencies. Now I can do something about it – like upgrade my libraries to the patched version v2.0.9, and push the changes to GitHub.

And after pushing the updated version, the security alerts disappear.

[Image: the repository with no security alerts after patching]

Wrapping up

I like to ‘shift left’ as much as possible with my application security testing – it’s unambiguously better to carry out security testing as early as possible in the application life cycle. Having an extra security test automatically built into my GitHub source code repository is a great addition to the security testing process.



.net, Non-functional Requirements, Security

Serve CSS and JavaScript from CDNs safely with subresource integrity (SRI) attributes

I’m building a web application at the moment which plots data on a map using the Leaflet JS framework. Leaflet JS is fantastic, and has a huge number of open-source community plugins which make it even more useful.

For these plugins, I can download them and host the JavaScript and CSS on my own website, but I’d prefer to use a CDN (Content Delivery Network) like CloudFlare. Using a service like this means I don’t have to host the files, and these files will be served to my users from a location that’s close to them.

Obviously this means that the CDN is now in control of my files.

How can I make sure these files on the CDN haven’t been tampered with before I serve them up to my users?

W3C.org recommends that “compromise of a third-party service should not automatically mean compromise of every site which includes its scripts”.

Troy Hunt wrote about this a while back and recommends using the ‘integrity’ attribute in script and link tags that reference subresources – supported browsers will calculate a hash of the file served by the CDN and compare that hash with the value in the integrity attribute. If they don’t match, the browser refuses to load the file.

The catch is that not all browsers support this – though coverage in modern browsers is pretty good. You can check out caniuse.com to see which browsers support the integrity attribute.

[Image: caniuse.com browser support table for subresource integrity]

How can I calculate the hash of my file to put in the integrity attribute?

I like to use Scott Helme’s utility at https://report-uri.com/home/sri_hash/ to create the hash of JavaScript and CSS files. This calculates 3 different hashes, using SHA256, SHA384 and SHA512.
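If you’d rather compute the hash locally (say, from a downloaded copy of the file), .NET’s SHA384 class produces the same kind of digest – a minimal sketch, where the file content below is just a placeholder for the real file bytes (in practice you’d use something like File.ReadAllBytes):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class Program
{
    static void Main()
    {
        // Placeholder content - in practice this would be the bytes of the real
        // file, e.g. File.ReadAllBytes("leaflet.js").
        byte[] fileBytes = Encoding.UTF8.GetBytes("console.log('hello');");

        using (var sha384 = SHA384.Create())
        {
            byte[] hash = sha384.ComputeHash(fileBytes);

            // The integrity value is the algorithm name, a dash, then the
            // base64-encoded digest.
            Console.WriteLine("sha384-" + Convert.ToBase64String(hash));
        }
    }
}
```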

So instead of my script tag looking like this:

<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.js"></script>

My script tags now look like this:

<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.js" 
        integrity="sha256-tfcLorv/GWSrbbsn6NVgflWp1YOmTjyJ8HWtfXaOaJc= sha384-/I247jMyT/djAL4ijcbNXfX+PA8OZmkwzUr6Gotpgjz1Rxti1ZECG9Ne0Dj1pXrx sha512-nMMmRyTVoLYqjP9hrbed9S+FzjZHW5gY1TWCHA5ckwXZBadntCNs8kEqAWdrb9O7rxbCaA4lKTIWjDXZxflOcA==" 
        crossorigin="anonymous"></script>

This also works for CSS – without the integrity attribute it would look like the code below:

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.1/dist/leaflet.css" />

But a more secure version is below:

<link rel="stylesheet" 
      href="https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.3.4/leaflet.css" integrity="sha256-YR4HrDE479EpYZgeTkQfgVJq08+277UXxMLbi/YP69o= sha384-BF7C732iE6WuqJMhUnTNJJLVvW1TIP87P2nMDY7aN2j2EJFWIaqK89j3WlirhFZU sha512-puBpdR0798OZvTTbP4A8Ix/l+A4dHDD0DGqYW6RQ+9jxkRFclaxxQb/SJAWZfWAkuyeQUytO7+7N4QKrDh+drA==" 
      crossorigin="anonymous">

Wrapping up

Hopefully this is useful information, and provides a guide on how to make sure your site doesn’t serve up JavaScript or CSS content that has been tampered with.

.net, Non-functional Requirements, Performance

Measuring your code’s performance during development with BenchmarkDotNet – Part #2: Methods with parameters

Last time, I wrote about how to use BenchmarkDotNet (GitHub here; NuGet here) to measure code performance for a very simple method with no parameters. This time I’ll write about testing another scenario that I find is more common – methods with parameters.

Let’s start with a simple case – primitive parameters.

Methods with Primitive Parameters

Let’s write a method which takes an integer parameter and calculates the square.

I know that I could use the static System.Math.Pow(double x, double y) method for this instead of rolling my own – but more on this later.

I’ve written a little static method like this.

public class MathFunctions
{
    public static long Square(int number)
    {
        return number * number;
    }
}

Nothing wrong with that – but it’s not that easy to test with BenchmarkDotNet by decorating it with a simple [Benchmark] attribute, because I need to supply the number parameter.

There are a couple of ways to test this.

Refactor and use the Params attribute

Instead of passing the number as a parameter to the Square method, I can refactor the code so that Number is a property of the class, and the Square method uses this property.

public class MathFunctions
{
    public int Number { get; set; }
 
    public long Square()
    {
        return this.Number * this.Number;
    }
}

Now I can decorate Square method with the [Benchmark] attribute, and I can use the ParamsAttribute in BenchmarkDotNet to decorate the property with numbers that I want to test.

public class MathFunctions
{
    [Params(1, 2)]
    public int Number { get; set; }
        
    [Benchmark]
    public long Square()
    {
        return this.Number * this.Number;
    }
}

And then it’s very simple to execute a performance runner class like the code below:

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<MathFunctions>();
        }
    }
}

Which yields the results:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method | Number | Mean      | Error     | StdDev    | Median    |
------- |------- |----------:|----------:|----------:|----------:|
 Square | 1      | 0.0429 ns | 0.0370 ns | 0.0658 ns | 0.0001 ns |
 Square | 2      | 0.0035 ns | 0.0086 ns | 0.0072 ns | 0.0000 ns |

This mechanism has the advantage that you can specify a range of parameters and observe the behaviour for each of the values.

But I think it has a few disadvantages:

  • I’m a bit limited in the type of parameter that I can specify in an attribute. Primitives like integers and strings are easy, but instantiating a more complex data transfer object is harder.
  • I have to refactor my code to measure performance – you could argue that the refactored version is better code, but to me the code below is simple and has a clear intent:
var output = MathFunctions.Square(10);

Whereas I think the code below is more obtuse.

var math = new MathFunctions { Number = 10 };
var output = math.Square();
  • My source code has a tight dependency on the BenchmarkDotNet library, and the attributes add a little litter to the class.

Basically I’m not sure I’ve made my code better by refactoring it to measure performance. Let’s look at other techniques.

Separate performance measurement code into a specific test class

I can avoid some of the disadvantages of the technique above by creating a dedicated class to measure the performance of my method, as shown below.

public class MathFunctions
{
    public static long Square(int number)
    {
        return number * number;
    }
}
 
public class PerformanceTestMathFunctions
{
    [Params(1, 2)]
    public int Number { get; set; }
 
    [Benchmark]
    public long Measure_Speed_of_Square_Function()
    {
        return MathFunctions.Square(Number);
    }
}

So now I can run the code below to measure the performance of my method.

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<PerformanceTestMathFunctions>();
        }
    }
}

This time I’ve not had to refactor my original code, and I’ve moved the dependency from my source code under test to the dedicated test class. But I’m still a bit limited in what types of parameter I can supply to my test class.

Using GlobalSetup for methods with non-primitive data transfer object parameters

Let’s try benchmarking an example which is a bit more involved – how to measure the performance of some more math functions I’ve written which use Complex Numbers.

Complex numbers have nothing to do with BenchmarkDotNet – I’m just using them as an example of a non-trivial problem space, and showing how to run benchmark tests against it.

At school you might have done some work with Complex Numbers. These numbers have a real and imaginary component – which sounds weird if you’re not used to it, but they can be represented as:

1 + 2i

Where 1 is the real component, and 2 is the size of the ‘imaginary’ component.

If you want to calculate the magnitude of a complex number, you just use Pythagorean maths – namely:

  • Calculate the square of the real component, and the square of the imaginary component.
  • Add these two squares together.
  • The magnitude is the square root of the sum of the two squares.
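For the complex number 1 + 2i used throughout this example, those steps give 1² + 2² = 5, so the magnitude is √5 ≈ 2.236 – a quick check:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Magnitude of 1 + 2i: square both components, add, take the square root.
        double magnitude = Math.Sqrt(1 * 1 + 2 * 2);
        Console.WriteLine(magnitude);  // approximately 2.236
    }
}
```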

So I can represent a Complex Number in code in the object class shown below:

public class ComplexNumber
{
    public int Real { get; set; }
 
    public int Imaginary { get; set; }
}

And I can instantiate a complex number 1 + 2i with the code:

new ComplexNumber { Real = 1, Imaginary = 2 };

If I want to calculate the magnitude of this Complex Number, I can pass the ComplexNumber data transfer object as a parameter to a method shown below.

public class ComplexMathFunctions
{
    public static double Magnitude(ComplexNumber complexNumber)
    {
        return Math.Pow(Math.Pow(complexNumber.Real, 2)
                        + Math.Pow(complexNumber.Imaginary, 2), 0.5);
    }
}

But how do I benchmark this?

I can’t instantiate a ComplexNumber parameter in the Params attribute supplied by BenchmarkDotNet.

Fortunately there’s a GlobalSetup attribute – this is very similar to the Setup attribute used by some unit test frameworks, where we can arrange our parameters before they are used by a test.

The code below shows how to create a dedicated test class, and instantiate a Complex Number in the GlobalSetup method which is used in the method being benchmarked.

public class PerformanceTestComplexMathFunctions
{
    private ComplexNumber ComplexNumber;
 
    [GlobalSetup]
    public void GlobalSetup()
    {
        this.ComplexNumber = new ComplexNumber { Real = 1, Imaginary = 2 };
    }
 
    [Benchmark]
    public double Measure_Magnitude_of_ComplexNumber_Function()
    {
        return ComplexMathFunctions.Magnitude(ComplexNumber);
    }
}

This yields the results below:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method                                      | Mean     | Error    | StdDev    |
-------------------------------------------- |---------:|---------:|----------:|
 Measure_Magnitude_of_ComplexNumber_Function | 110.5 ns | 1.058 ns | 0.9897 ns |

I think this eliminates pretty much all the disadvantages I listed earlier, but does add a restriction that I’m only testing one instantiated value of the data transfer object parameter.

You might wonder why we need the GlobalSetup at all when we could just instantiate a local variable in the method under test – I don’t think we should do that because we’d also be including the time taken to set up the experiment in the method being benchmarked – which reduces the accuracy of the measurement.

Addendum

I was kind of taken aback by how slow my Magnitude function was, so I started playing with some different options – instead of using the built-in System.Math.Pow static method, I decided to calculate a square by just multiplying the base by itself. I also decided to use the System.Math.Sqrt function to calculate the square root, rather than raising the base to the power of 0.5. My refactored code is shown below.

public class ComplexMathFunctions
{
    public static double Magnitude(ComplexNumber complexNumber)
    {
        return Math.Sqrt(complexNumber.Real * complexNumber.Real 
                    + complexNumber.Imaginary * complexNumber.Imaginary);
    }
}

Re-running the test yielded the benchmark results below:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method                                      | Mean     | Error     | StdDev    |
-------------------------------------------- |---------:|----------:|----------:|
 Measure_Magnitude_of_ComplexNumber_Function | 4.192 ns | 0.0371 ns | 0.0347 ns |

So with a minor code tweak, the time taken to calculate the magnitude dropped from 110.5 nanoseconds to 4.192 nanoseconds. That’s a pretty big performance improvement. If I hadn’t been measuring this, I’d probably never have known that I could have improved my original implementation so much.

Of course, this performance improvement might only work for small integers – it could be that large integers have a different performance profile. But it’s easy to understand how we could set up some other tests to check this.

Wrapping up

This time I’ve written about how to use BenchmarkDotNet to measure the performance of methods which have parameters, even ones that are data transfer objects. The Params attribute can be useful sometimes for methods which have simple primitive parameters, and the GlobalSetup attribute can specify a method which sets up more complicated scenarios. I’ve also shown how we can create classes dedicated to testing individual methods, and keep benchmarking test references isolated in their own classes and projects.

This makes it really simple to benchmark your existing codebase, even code which wasn’t originally designed with performance testing in mind. I think it’s worth doing – even while writing this post, I unexpectedly discovered a simple way to change my example code that made a big performance improvement.

I hope you find this post useful in starting to measure the performance of your codebase. If you want to dig into understanding BenchmarkDotNet more, I highly recommend this post from Andrey Akinshin – it goes into lots more detail.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Non-functional Requirements, Performance

Measuring your code’s performance during development with BenchmarkDotNet – Part #1: Getting started

I was inspired to write a couple of posts after recently reading articles by Stephen Toub and Andrey Akinshin about BenchmarkDotNet from the .NET Blog, and I wanted to write about how I could use BenchmarkDotNet to understand my own existing codebase a little bit better.

A common programming challenge is how to manage complexity around code performance – a small change might have a large impact on application performance.

I’ve managed this in the past with page-level performance tests (usually written in JMeter) running on my integration server – and it works well.

However, these page-level performance tests only give me coarse grained results – if the outputs of the JMeter tests start showing a slowdown, I’ll have to do more digging in the code to find the problem. At this point, tools like ANTS or dotTrace are really good for finding the bottlenecks – but even with these, I’m reacting to a problem rather than managing it early.

I’d like to have more immediate feedback – I’d like to be able to perform micro-benchmarks against my code before and after I make small changes, and know right away if I’ve made things better or worse. Fortunately BenchmarkDotNet helps with this.

This isn’t premature optimisation – this is about how I can have a deeper understanding of the quality of code I’ve written. Also, if you don’t know if your code is slow or not, how can you argue that any optimisation is premature?

A simple example

Let’s take a simple example – say that I have a .NET Core website which has a single page that just generates random numbers.

Obviously this application wouldn’t be a lot of use – I’m deliberately choosing something conceptually simple so I can focus on the benchmarking aspects.

I’ve created a simple HomeController, which has an action called Index that returns a random number. This random number is generated from a service called RandomNumberGenerator.

Let’s look at the source for this. I’ve put the code for the controller below – this uses .NET Core’s built in dependency injection feature.

using Microsoft.AspNetCore.Mvc;
using Services;
 
namespace SampleFrameworkWebApp.Controllers
{
    public class HomeController : Controller
    {
        private readonly IRandomNumberGenerator _randomNumberGenerator;
        
        public HomeController(IRandomNumberGenerator randomNumberGenerator)
        {
            _randomNumberGenerator = randomNumberGenerator;
        }
 
        public IActionResult Index()
        {
            ViewData["randomNumber"] = _randomNumberGenerator.GetRandomNumber();
 
            return View();
        }
    }
}

The code below shows the RandomNumberGenerator – it uses the Random() class from the System library.

using System;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        public int GetRandomNumber()
        {
            return random.Next();
        }
    }
}

A challenge to make it “better”

But after a review, let’s say a colleague tells me that the System.Random class isn’t really random – it’s really only pseudo random, certainly not random enough for any kind of cryptographic purpose. If I want to have a really random number, I need to use the RNGCryptoServiceProvider class.

So I’m keen to make my code “better” – or at least make the output more cryptographically secure – but I’m nervous that this new class is going to make my RandomNumberGenerator class slower for my users. How can I measure the before and after performance without recording a JMeter test?

Using BenchmarkDotNet

With BenchmarkDotNet, I can just decorate the method being examined using the [Benchmark] attribute, and use this to measure the performance of my code as it is at the moment.

To make this attribute available in my Service project, I need to install a NuGet package, using the command below at the Package Manager Console:

Install-Package BenchmarkDotNet

The code for the RandomNumberGenerator class now looks like the code below – as you can see, it’s not changed much at all – just an extra library reference at the top, and a single attribute decorating the method I want to test.

using System;
using BenchmarkDotNet.Attributes;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        [Benchmark]
        public int GetRandomNumber()
        {
            return random.Next();
        }
    }
}

I like to keep my performance benchmarking code in a separate project (in the same way that I keep my unit tests in a separate project). That project is a simple console application, with a main class that looks like the code below (obviously I need to install the BenchmarkDotNet NuGet package in this project as well):

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<RandomNumberGenerator>();
        }
    }
}

And now if I run this console application at a command line, BenchmarkDotNet presents me with some experiment results like the ones below.

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728183 Hz, Resolution=366.5443 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


          Method | Mean     | Error     | StdDev    |
---------------- |---------:|----------:|----------:|
 GetRandomNumber | 10.41 ns | 0.0468 ns | 0.0365 ns |

As you can see above, my machine specifications are listed, and the experiment results suggest that my RandomNumberGenerator class presently takes about 10.41 nanoseconds to generate a random number.

So now I have a baseline – after I change my code to use the more cryptographically secure RNGCryptoServiceProvider, I’ll be able to run this test again and see if I’ve made it faster or slower.

How fast is the service after the code changes?

I’ve changed the service to use the RNGCryptoServiceProvider – the code is below.

using System;
using BenchmarkDotNet.Attributes;
using System.Security.Cryptography;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        [Benchmark]
        public int GetRandomNumber()
        {
            using (var randomNumberProvider = new RNGCryptoServiceProvider())
            {
                byte[] randomBytes = new byte[sizeof(Int32)];
 
                randomNumberProvider.GetBytes(randomBytes);
 
                return BitConverter.ToInt32(randomBytes, 0);
            }
        }
    }
}

And now, when I run the same performance test at the console, I get the results below. The code has become slower, and now takes 154.4 nanoseconds instead of 10.41 nanoseconds.

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728183 Hz, Resolution=366.5443 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


          Method | Mean     | Error    | StdDev   |
---------------- |---------:|---------:|---------:|
 GetRandomNumber | 154.4 ns | 2.598 ns | 2.028 ns |

So it’s more functionally correct, but unfortunately it has become a little slower. However, I can now go to my technical architect with a proposal to change the code, and present a more complete picture – they’ll not only be able to understand why my proposed code is more cryptographically secure, but I’ll also be able to show some solid metrics around the performance cost. With this data, they can make better decisions about what mitigations they might want to put in place.

How should I use these numbers?

A slowdown from about 10 to 150 nanoseconds doesn’t mean that the user’s experience deteriorates by a factor of 15 – remember that a single user’s experience spans the entire lifecycle of the page, so a single user should only see a slowdown of around 144 nanoseconds across a whole page refresh. Of course, a website will have many more users than just one at a time, and this is where our JMeter tests will be able to tell us more accurately how the page performance deteriorates at scales of hundreds or thousands of users.

Wrapping up

BenchmarkDotNet is a great open-source tool (sponsored by the .NET Foundation) that allows us to perform micro-benchmarking experiments on methods in our code. Check out more of the documentation here.

I’ve chosen to demonstrate BenchmarkDotNet with a very small service that has methods which take no parameters. The chances are that your code is more complex than this example, and you can structure your code so that you can pass parameters to BenchmarkDotNet – I’ll write more about these more complicated scenarios in the next post.
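As a small taste of that, BenchmarkDotNet’s [Params] attribute lets you run the same benchmark against several input values – a minimal sketch, where the class name and buffer sizes are illustrative values of my own:

```csharp
using System.Security.Cryptography;
using BenchmarkDotNet.Attributes;

public class ParameterisedBenchmark
{
    // BenchmarkDotNet repeats the benchmark once for each value listed here
    [Params(4, 16, 64)]
    public int BufferSize { get; set; }

    [Benchmark]
    public byte[] FillRandomBytes()
    {
        using (var provider = new RNGCryptoServiceProvider())
        {
            var buffer = new byte[BufferSize];
            provider.GetBytes(buffer);
            return buffer;
        }
    }
}
```

The summary table then contains one row per parameter value, which makes it easy to see how the cost scales with input size.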

Where I think BenchmarkDotNet is most valuable is that it changes the discussion in development teams around performance. Rather than changing code and hoping for the best – or worse, reacting to an unexpected performance drop affecting users – micro-benchmarking is part of the development process, and helps developers understand and mitigate code problems before they’re even pushed to an integration server.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net core, C# tip, MVC, Non-functional Requirements, Performance

Creating a RESTful Web API template in .NET Core 1.1 – Part #3: Improving the performance by using compression

One of the simplest and most effective improvements you can make to your website or web service is to compress the stream of data sent from the server. With .NET Core 1.1, it’s really simple to set this up – I’ve decided to include this in my template project, but the instructions below will work for any .NET Core MVC or Web API project.

Only really ancient browsers are going to have problems with gzip – I’m pretty happy to switch it on by default.

.NET Core 1.1 adds compression to the ASP.NET HTTP pipeline using some middleware in the Microsoft.AspNetCore.ResponseCompression package. Let’s look at how to add this to our .NET Core Web API project.

Step 1: Add the Microsoft.AspNetCore.ResponseCompression package

There are a few different ways to do this – I prefer to add packages from within PowerShell. From within Visual Studio (with my project open), I open a Package Manager Console and run:

Install-Package Microsoft.AspNetCore.ResponseCompression

(But it’s obviously possible to do this from within the NuGet package manager UI as well)

This will add the package to the Web API project, and you can see this in the project.json file (partially shown below).

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0",
    "Microsoft.AspNetCore.Routing": "1.1.0",
    "Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
    "Microsoft.Extensions.Configuration.Json": "1.1.0",
    "Microsoft.Extensions.Logging": "1.1.0",
    "Microsoft.Extensions.Logging.Console": "1.1.0",
    "Microsoft.Extensions.Logging.Debug": "1.1.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.1.0",
    "Microsoft.AspNetCore.ResponseCompression": "1.0.0"
  },
  ...

Step 2: Update and configure services in the project Startup.cs file

We now just need to add a couple of lines to the Startup.cs project file, which will:

  • Add the services available to the runtime container, and
  • Use the services in the HTTP pipeline at runtime.

The two lines I added are the calls to services.AddResponseCompression() and app.UseResponseCompression() in the code below.

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }
 
    public IConfigurationRoot Configuration { get; }
 
    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression();
 
        // Add framework services.
        services.AddMvc();
    }
 
    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        app.UseResponseCompression();
 
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();
 
        app.UseMvc();
    }
}

Now when I call my web service, all responses are zipped by default.

We can prove this by looking at the headers sent with the response – I’ve pasted a screenshot of the headers sent back when I call a GET method in my Web API service. There is a header named “Content-Encoding” which has the value “gzip” – this signals that the response has been zipped.

[Screenshot: response headers showing “Content-Encoding: gzip”]
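If you’d rather check this from code than from browser developer tools, a minimal sketch using HttpClient is below (the localhost URL and port are placeholders for wherever your service is running):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class CompressionCheck
{
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Tell the server we can accept a gzip-compressed body
            client.DefaultRequestHeaders.Add("Accept-Encoding", "gzip");

            var response = await client.GetAsync("http://localhost:5000/api/values");

            // The Content-Encoding response header reveals whether compression was applied
            bool isGzipped = response.Content.Headers.ContentEncoding.Contains("gzip");
            Console.WriteLine(isGzipped ? "Response was gzip compressed" : "Response was not compressed");
        }
    }
}
```

One caveat: if the handler’s AutomaticDecompression property is enabled, the Content-Encoding header is stripped after the body is decompressed, so leave it off for this particular check.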

Wrapping up

This is a really easy way to improve the performance of your website or your web service – this is one of the first things I configure when starting a new project.

Continue reading

.net, Accessibility, Non-functional Requirements, Visual Studio, Visual Studio Plugin, Web Development

How to use the Web Accessibility Checker for Visual Studio to help conform to accessibility guidelines

I’ve previously blogged about accessibility a few times, and I’d love to find a good way to identify accessibility issues from my development environment. So I was really interested to see that Mads Kristensen from Microsoft recently released the Web Accessibility Checker for Visual Studio 2015. This extension uses the aXe-core library to analyse code in Visual Studio.

The Visual Studio Gallery gives some good instructions on how to install and use this extension. It’s a pretty straightforward install – once you run your website, a list of non-conformances will appear in the Error List in VS 2015 (to see the Error List, go to the View Menu and select Error List from there).

Obviously this can’t identify every accessibility problem on your site, so fixing all the errors on this list isn’t going to guarantee your website is accessible. But one of the manifesto items from aXe-core’s github page states the tool aims to report zero false positives – so if aXe-core is raising an error, it’s worth investigating.

Let’s look at an example.

How does it report errors?

I’ve written some HTML code and pasted it below…ok, it’s some pretty ropey HTML code, with some really obvious accessibility issues.

<!DOCTYPE html>
<html>
<body>
    <form>
        This is simple text on a page.
 
        Here's a picture:
        <br />
        <img src="/image.png" />
        <br />
        And here's a button:
        <br />
        <button></button>
    </form>
</body>
</html>


Let’s see what the Web Accessibility Checker picks up:

[Screenshot: the accessibility errors reported in the Visual Studio Error List]

Four errors are reported:

  • No language attribute is specified in the HTML element – this is pretty easy to fix, and I’ve blogged about it before;
  • The <button> element has no text inside it;
  • The page has no <title> element;
  • The image does not have an alternative text attribute.

Note – these errors are first reported at application runtime, so don’t expect to see them while you’re writing your code, or just after compiling it.

If you want to discover more about any of these errors, the Error List has a column called “Code”, and clicking the text will take you to an explanation of what the problem is.

In addition, you can just double click on the description, and the VS editor focus will move to the line of code where the issue is.
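For reference, a version of the page with all four issues addressed might look like this (the title text, alt text, and button label are illustrative values of my own):

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <title>Simple demo page</title>
</head>
<body>
    <form>
        This is simple text on a page.

        Here's a picture:
        <br />
        <img src="/image.png" alt="A description of the image" />
        <br />
        And here's a button:
        <br />
        <button>Press me</button>
    </form>
</body>
</html>
```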

I’ve corrected some of the errors – why are they still in the Error List?

I found that the errors stayed in the list, even after starting to fix the issues. In order to clear the errors away, I found that I needed to right click on the Error List, and from the context menu select “Clear All Accessibility Errors”.

[Screenshot: the “Clear All Accessibility Errors” option in the Error List context menu]

When I hit refresh in my browser, I was able to see the remaining issues without the ones that I had fixed.

What more does this give me when compared to some of the existing accessibility tools?

Previously I’ve used tools like the HTML_CodeSniffer bookmarklet, which also report accessibility errors.

[Screenshot: the HTML_CodeSniffer bookmarklet reporting accessibility errors on a web page]

This is a great tool, but it will only point to the issues on the web page – the Web Accessibility Checker in VS2015 has the advantage of taking your cursor straight to the line of source code with the issue.

Conclusion

Obviously you can’t completely test whether a website is accessible using automated tools. But you can definitely use tools to check if certain rules are being adhered to in your code. Tools like the Web Accessibility Checker for VS2015 help you identify and locate accessibility issues in your code – and since it’s free, there’s no reason not to use it in your web application development process today.