.net, Non-functional Requirements, Performance

Measuring your code’s performance during development with BenchmarkDotNet – Part #2: Methods with parameters

Last time, I wrote about how to use BenchmarkDotNet (GitHub here, NuGet here) to measure code performance for a very simple method with no parameters. This time I’ll write about testing another scenario that I find more common – methods with parameters.

Let’s start with a simple case – primitive parameters.

Methods with Primitive Parameters

Let’s write a method which takes an integer parameter and calculates the square.

I know that I could use the static System.Math.Pow(double x, double y) method for this instead of rolling my own – but more on this later.

I’ve written a little static method like this.

public class MathFunctions
{
    public static long Square(int number)
    {
        return number * number;
    }
}

Nothing wrong with that – but it’s not that easy to test with BenchmarkDotNet by just decorating it with a simple [Benchmark] attribute, because I need to specify the number parameter.

There are a couple of ways to test this.

Refactor and use the Params attribute

Instead of passing the number as a parameter to the Square method, I can refactor the code so that Number is a property of the class, and the Square method uses this property.

public class MathFunctions
{
    public int Number { get; set; }
 
    public long Square()
    {
        return this.Number * this.Number;
    }
}

Now I can decorate the Square method with the [Benchmark] attribute, and I can use the ParamsAttribute in BenchmarkDotNet to decorate the property with the numbers that I want to test.

public class MathFunctions
{
    [Params(1, 2)]
    public int Number { get; set; }
        
    [Benchmark]
    public int Square()
    {
        return this.Number * this.Number;
    }
}

And then it’s very simple to execute a performance runner class like the code below:

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<MathFunctions>();
        }
    }
}

Which yields the results:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method | Number | Mean      | Error     | StdDev    | Median    |
------- |------- |----------:|----------:|----------:|----------:|
 Square | 1      | 0.0429 ns | 0.0370 ns | 0.0658 ns | 0.0001 ns |
 Square | 2      | 0.0035 ns | 0.0086 ns | 0.0072 ns | 0.0000 ns |

This mechanism has the advantage that you can specify a range of parameters and observe the behaviour for each of the values.
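For example, here’s a minimal sketch of the same class with a larger range of parameter values – BenchmarkDotNet runs the Square benchmark once for each value, and reports a separate row for each in the summary table:

using BenchmarkDotNet.Attributes;
 
public class MathFunctions
{
    // Each value below gets its own benchmark run and its own row in the results.
    [Params(1, 10, 100, 1000)]
    public int Number { get; set; }
 
    [Benchmark]
    public int Square()
    {
        return this.Number * this.Number;
    }
}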

But I think it has a few disadvantages:

  • I’m a bit limited in the type of parameter that I can specify in an attribute. Primitives like integers and strings are easy, but instantiating a more complex data transfer object is harder.
  • I have to refactor my code to measure performance – you could argue that the refactored version is better code, but to me the code below is simple and has a clear intent:
var output = MathFunctions.Square(10);

Whereas I think the code below is more obtuse.

var math = new MathFunctions { Number = 10 };
var output = math.Square();
  • My source code has a tight dependency on the BenchmarkDotNet library, and the attributes add a little litter to the class.

Basically I’m not sure I’ve made my code better by refactoring it to measure performance. Let’s look at other techniques.

Separate performance measurement code into a specific test class

I can avoid some of the disadvantages of the technique above by creating a dedicated class to measure the performance of my method, as shown below.

public class MathFunctions
{
    public static long Square(int number)
    {
        return number * number;
    }
}
 
public class PerformanceTestMathFunctions
{
    [Params(1, 2)]
    public int Number { get; set; }
 
    [Benchmark]
    public long Measure_Speed_of_Square_Function()
    {
        return MathFunctions.Square(Number);
    }
}

So now I can run the code below to measure the performance of my method.

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<PerformanceTestMathFunctions>();
        }
    }
}

This time I’ve not had to refactor my original code, and I’ve moved the dependency from my source code under test to the dedicated test class. But I’m still a bit limited in what types of parameter I can supply to my test class.

Using GlobalSetup for methods with non-primitive data transfer object parameters

Let’s try benchmarking an example which is a bit more involved – how to measure the performance of some more math functions I’ve written which use Complex Numbers.

Complex numbers are nothing to do with BenchmarkDotNet – I’m just using this as an example of a non-trivial problem space and how to run benchmark tests against it.

At school you might have done some work with Complex Numbers. These numbers have a real and imaginary component – which sounds weird if you’re not used to it, but they can be represented as:

1 + 2i

Where 1 is the real component, and 2 is the size of the ‘imaginary’ component.

If you want to calculate the magnitude of a complex number, you just use Pythagorean maths – namely:

  • Calculate the square of the real component, and the square of the imaginary component.
  • Add these two squares together.
  • The magnitude is the square root of the sum of the two squares.
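For example, the magnitude of 1 + 2i is √(1² + 2²) = √5, which is roughly 2.24.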

So I can represent a Complex Number in code in the object class shown below:

public class ComplexNumber
{
    public int Real { get; set; }
 
    public int Imaginary { get; set; }
}

And I can instantiate a complex number 1 + 2i with the code:

new ComplexNumber { Real = 1, Imaginary = 2 };

If I want to calculate the magnitude of this Complex Number, I can pass the ComplexNumber data transfer object as a parameter to a method shown below.

public class ComplexMathFunctions
{
    public static double Magnitude(ComplexNumber complexNumber)
    {
        return Math.Pow(Math.Pow(complexNumber.Real, 2) + Math.Pow(complexNumber.Imaginary, 2), 0.5);
    }
}

But how do I benchmark this?

I can’t instantiate a ComplexNumber parameter in the Params attribute supplied by BenchmarkDotNet.

Fortunately there’s a GlobalSetup attribute – this is very similar to the Setup attribute used by some unit test frameworks, where we can arrange our parameters before they are used by a test.

The code below shows how to create a dedicated test class, and instantiate a Complex Number in the GlobalSetup method which is used in the method being benchmarked.

public class PerformanceTestComplexMathFunctions
{
    private ComplexNumber ComplexNumber;
 
    [GlobalSetup]
    public void GlobalSetup()
    {
        this.ComplexNumber = new ComplexNumber { Real = 1, Imaginary = 2 };
    }
 
    [Benchmark]
    public double Measure_Magnitude_of_ComplexNumber_Function()
    {
        return ComplexMathFunctions.Magnitude(ComplexNumber);
    }
}

This yields the results below:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method                                      | Mean     | Error    | StdDev    |
-------------------------------------------- |---------:|---------:|----------:|
 Measure_Magnitude_of_ComplexNumber_Function | 110.5 ns | 1.058 ns | 0.9897 ns |

I think this eliminates pretty much all the disadvantages I listed earlier, but does add a restriction that I’m only testing one instantiated value of the data transfer object parameter.

You might wonder why we need the GlobalSetup at all when we could just instantiate a local variable in the method under test – I don’t think we should do that because we’d also be including the time taken to set up the experiment in the method being benchmarked – which reduces the accuracy of the measurement.
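Just to illustrate the point, the sketch below (with a made-up class name) is the version I’m trying to avoid – here the construction of the ComplexNumber would be measured on every iteration, along with the Magnitude call:

using BenchmarkDotNet.Attributes;
 
public class PerformanceTestComplexMathFunctionsInline
{
    [Benchmark]
    public double Measure_Magnitude_of_ComplexNumber_Function()
    {
        // Don't do this - the allocation and property assignments below are
        // included in every measured iteration, not just the Magnitude call.
        var complexNumber = new ComplexNumber { Real = 1, Imaginary = 2 };
        return ComplexMathFunctions.Magnitude(complexNumber);
    }
}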

Addendum

I was kind of taken aback by how slow my Magnitude function was, so I started playing with some different options – instead of using the built-in System.Math.Pow static method, I decided to calculate a square by just multiplying the base by itself. I also decided to use the System.Math.Sqrt function to calculate the square root, rather than raising the base to the power of 0.5. My refactored code is shown below.

public class ComplexMathFunctions
{
    public static double Magnitude(ComplexNumber complexNumber)
    {
        return Math.Sqrt(complexNumber.Real * complexNumber.Real 
                    + complexNumber.Imaginary * complexNumber.Imaginary);
    }
}

Re-running the test yielded the benchmark results below:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method                                      | Mean     | Error     | StdDev    |
-------------------------------------------- |---------:|----------:|----------:|
 Measure_Magnitude_of_ComplexNumber_Function | 4.192 ns | 0.0371 ns | 0.0347 ns |

So with a minor code tweak, the time taken to calculate the magnitude dropped from 110.5 nanoseconds to 4.192 nanoseconds. That’s a pretty big performance improvement. If I hadn’t been measuring this, I’d probably never have known that I could have improved my original implementation so much.

Of course, this performance improvement might only work for small integers – it could be that large integers have a different performance profile. But it’s easy to understand how we could set up some other tests to check this.
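As a minimal sketch of how I could do that (the class name and parameter values below are my own invention), the Params and GlobalSetup attributes can be combined – BenchmarkDotNet assigns the Params value before it calls the GlobalSetup method, so each run benchmarks a differently sized ComplexNumber:

using BenchmarkDotNet.Attributes;
 
public class PerformanceTestComplexMagnitudes
{
    private ComplexNumber ComplexNumber;
 
    // One full benchmark run per value, reported as separate rows in the summary.
    // Values are chosen so the squares still fit comfortably in an int.
    [Params(1, 1000, 30000)]
    public int Component { get; set; }
 
    [GlobalSetup]
    public void GlobalSetup()
    {
        // Component already holds the current Params value when this runs.
        this.ComplexNumber = new ComplexNumber { Real = Component, Imaginary = Component };
    }
 
    [Benchmark]
    public double Measure_Magnitude_of_ComplexNumber_Function()
    {
        return ComplexMathFunctions.Magnitude(ComplexNumber);
    }
}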

Wrapping up

This time I’ve written about how to use BenchmarkDotNet to measure the performance of methods which have parameters, even ones that are data transfer objects. The Params attribute can be useful sometimes for methods which have simple primitive parameters, and the GlobalSetup attribute can specify a method which sets up more complicated scenarios. I’ve also shown how we can create classes dedicated to testing individual methods, and keep benchmarking test references isolated in their own classes and projects.

This makes it really simple to benchmark your existing codebase, even code which wasn’t originally designed with performance testing in mind. I think it’s worth doing – even while writing this post, I unexpectedly discovered a simple way to change my example code that made a big performance improvement.

I hope you find this post useful in starting to measure the performance of your codebase. If you want to dig into understanding BenchmarkDotNet more, I highly recommend this post from Andrey Akinshin – it goes into lots more detail.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Non-functional Requirements, Performance

Measuring your code’s performance during development with BenchmarkDotNet – Part #1: Getting started

I was inspired to write a couple of posts after recently reading articles by Stephen Toub and Andrey Akinshin about BenchmarkDotNet from the .NET Blog, and I wanted to write about how I could use BenchmarkDotNet to understand my own existing codebase a little bit better.

A common programming challenge is how to manage complexity around code performance – a small change might have a large impact on application performance.

I’ve managed this in the past with page-level performance tests (usually written in JMeter) running on my integration server – and it works well.

However, these page-level performance tests only give me coarse grained results – if the outputs of the JMeter tests start showing a slowdown, I’ll have to do more digging in the code to find the problem. At this point, tools like ANTS or dotTrace are really good for finding the bottlenecks – but even with these, I’m reacting to a problem rather than managing it early.

I’d like to have more immediate feedback – I’d like to be able to perform micro-benchmarks against my code before and after I make small changes, and know right away if I’ve made things better or worse. Fortunately BenchmarkDotNet helps with this.

This isn’t premature optimisation – this is about how I can have a deeper understanding of the quality of code I’ve written. Also, if you don’t know if your code is slow or not, how can you argue that any optimisation is premature?

A simple example

Let’s take a simple example – say that I have a .NET Core website which has a single page that just generates random numbers.

Obviously this application wouldn’t be a lot of use – I’m deliberately choosing something conceptually simple so I can focus on the benchmarking aspects.

I’ve created a simple HomeController, which has an action called Index that returns a random number. This random number is generated from a service called RandomNumberGenerator.

Let’s look at the source for this. I’ve put the code for the controller below – this uses .NET Core’s built in dependency injection feature.

using Microsoft.AspNetCore.Mvc;
using Services;
 
namespace SampleFrameworkWebApp.Controllers
{
    public class HomeController : Controller
    {
        private readonly IRandomNumberGenerator _randomNumberGenerator;
        
        public HomeController(IRandomNumberGenerator randomNumberGenerator)
        {
            _randomNumberGenerator = randomNumberGenerator;
        }
 
        public IActionResult Index()
        {
            ViewData["randomNumber"] = _randomNumberGenerator.GetRandomNumber();
 
            return View();
        }
    }
}

The code below shows the RandomNumberGenerator – it uses the Random() class from the System library.

using System;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        public int GetRandomNumber()
        {
            return random.Next();
        }
    }
}

A challenge to make it “better”

But after a review, let’s say a colleague tells me that the System.Random class isn’t really random – it’s really only pseudo random, certainly not random enough for any kind of cryptographic purpose. If I want to have a really random number, I need to use the RNGCryptoServiceProvider class.

So I’m keen to make my code “better” – or at least make the output more cryptographically secure – but I’m nervous that this new class is going to make my RandomNumberGenerator class slower for my users. How can I measure the before and after performance without recording a JMeter test?

Using BenchmarkDotNet

With BenchmarkDotNet, I can just decorate the method being examined using the [Benchmark] attribute, and use this to measure the performance of my code as it is at the moment.

To make this attribute available in my Service project, I need to include a NuGet package in the project, which I can do by running the command below at the Package Manager Console:

Install-Package BenchmarkDotNet

The code for the RandomNumberGenerator class now looks like the code below – as you can see, it’s not changed much at all – just an extra library reference at the top, and a single attribute decorating the method I want to test.

using System;
using BenchmarkDotNet.Attributes;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        [Benchmark]
        public int GetRandomNumber()
        {
            return random.Next();
        }
    }
}

I like to keep my performance benchmarking code in a separate project (in the same way that I keep my unit tests in a separate project). That project is a simple console application, with a main class that looks like the code below (obviously I need to install the BenchmarkDotNet nuget package in this project as well):

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<RandomNumberGenerator>();
        }
    }
}

And now if I run this console application at a command line, BenchmarkDotNet presents me with some experiment results like the ones below.

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728183 Hz, Resolution=366.5443 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


          Method | Mean     | Error     | StdDev    |
---------------- |---------:|----------:|----------:|
 GetRandomNumber | 10.41 ns | 0.0468 ns | 0.0365 ns |

As you can see above, my machine specifications are listed, and the experiment results suggest that my RandomNumberGenerator class presently takes about 10.41 nanoseconds to generate a random number.

So now I have a baseline – after I change my code to use the more cryptographically secure RNGCryptoServiceProvider, I’ll be able to run this test again and see if I’ve made it faster or slower.

How fast is the service after the code changes?

I’ve changed the service to use the RNGCryptoServiceProvider – the code is below.

using System;
using BenchmarkDotNet.Attributes;
using System.Security.Cryptography;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        [Benchmark]
        public int GetRandomNumber()
        {
            using (var randomNumberProvider = new RNGCryptoServiceProvider())
            {
                byte[] randomBytes = new byte[sizeof(Int32)];
 
                randomNumberProvider.GetBytes(randomBytes);
 
                return BitConverter.ToInt32(randomBytes, 0);
            }
        }
    }
}

And now, when I run the same performance test at the console, I get the results below. The code has become slower, and now takes 154.4 nanoseconds instead of 10.41 nanoseconds.

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728183 Hz, Resolution=366.5443 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


          Method | Mean     | Error    | StdDev   |
---------------- |---------:|---------:|---------:|
 GetRandomNumber | 154.4 ns | 2.598 ns | 2.028 ns |

So it’s more functionally correct, and unfortunately it has become a little slower. But I can now go to my technical architect with a proposal to change the code, and present a more complete picture – they’ll not only be able to understand why my proposed code is more cryptographically secure, but I’ll also be able to show some solid metrics around the performance cost. With this data, they can make better decisions about what mitigations they might want to put in place.

How should I use these numbers?

A slow down from about 10 to 150 nanoseconds doesn’t mean that the user’s experience deteriorates by a factor of 15 – remember that in this case, a single user’s experience is over the entire lifecycle of the page, so really a single user should only see a slowdown of 140 nanoseconds over the time it takes to refresh the whole page. Obviously a website will have many more users than just one at a time, and this is where our JMeter tests will be able to tell us more accurately how the page performance deteriorates at scales of hundreds or thousands of users.

Wrapping up

BenchmarkDotNet is a great open-source tool (sponsored by the .NET Foundation) that allows us to perform micro-benchmarking experiments on methods in our code. Check out more of the documentation here.

I’ve chosen to demonstrate BenchmarkDotNet with a very small service that has methods which take no parameters. The chances are that your code is more complex than this example, and you can structure your code so that you can pass parameters to BenchmarkDotNet – I’ll write more about these more complicated scenarios in the next post.

Where I think BenchmarkDotNet is most valuable is that it changes the discussion in development teams around performance. Rather than changing code and hoping for the best – or worse, reacting to an unexpected performance drop affecting users – micro-benchmarking is part of the development process, and helps developers understand and mitigate code problems before they’re even pushed to an integration server.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net core, C# tip, MVC, Non-functional Requirements, Performance

Creating a RESTful Web API template in .NET Core 1.1 – Part #3: Improving the performance by using compression

One of the simplest and most effective improvements you can make to your website or web service is to compress the stream of data sent from the server. With .NET Core 1.1, it’s really simple to set this up – I’ve decided to include this in my template project, but the instructions below will work for any .NET Core MVC or Web API project.

Only really ancient browsers are going to have problems with gzip – I’m pretty happy to switch it on by default.

.NET Core 1.1 adds compression to the ASP.NET HTTP pipeline using some middleware in the Microsoft.AspNetCore.ResponseCompression package. Let’s look at how to add this to our .NET Core Web API project.

Step 1: Add the Microsoft.AspNetCore.ResponseCompression package

There are a few different ways to do this – I prefer to add packages from within PowerShell. From within Visual Studio (with my project open), I open a Package Manager Console, and run:

Install-Package Microsoft.AspNetCore.ResponseCompression

(But it’s obviously possible to do this from within the NuGet package manager UI as well)

This will add the package to the Web API project, and you can see this in the project.json file (partially shown below).

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0",
    "Microsoft.AspNetCore.Routing": "1.1.0",
    "Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
    "Microsoft.Extensions.Configuration.Json": "1.1.0",
    "Microsoft.Extensions.Logging": "1.1.0",
    "Microsoft.Extensions.Logging.Console": "1.1.0",
    "Microsoft.Extensions.Logging.Debug": "1.1.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.1.0",
    "Microsoft.AspNetCore.ResponseCompression": "1.0.0"
  },
  ...

Step 2: Update and configure services in the project Startup.cs file

We now just need to add a couple of lines to the Startup.cs project file, which will:

  • Add the services available to the runtime container, and
  • Use the services in the HTTP pipeline at runtime.

The two lines I added are the services.AddResponseCompression() and app.UseResponseCompression() calls in the code below.

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }
 
    public IConfigurationRoot Configuration { get; }
 
    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression();
 
        // Add framework services.
        services.AddMvc();
    }
 
    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        app.UseResponseCompression();
 
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();
 
        app.UseMvc();
    }
}

Now when I call my web service, all responses are zipped by default.

We can prove this by looking at the headers sent with the response – I’ve pasted a screenshot of the headers sent back when I call a GET method in my Web API service. There is a header named “Content-Encoding” which has the value “gzip” – this signals that the response has been zipped.

[Screenshot: the response headers returned by a GET request, showing Content-Encoding: gzip]
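If you’d rather check this programmatically than through browser tooling, a quick console sketch like the one below works too – the URL is just a hypothetical local endpoint, so substitute your own:

using System;
using System.Net.Http;
 
class GzipCheck
{
    static void Main()
    {
        // Hypothetical local endpoint - point this at any GET action in the service.
        const string url = "http://localhost:5000/api/values";
 
        using (var client = new HttpClient())
        {
            // Ask the server for a compressed response.
            client.DefaultRequestHeaders.Add("Accept-Encoding", "gzip");
 
            var response = client.GetAsync(url).GetAwaiter().GetResult();
 
            // Prints "gzip" when the response compression middleware is active.
            Console.WriteLine(string.Join(", ", response.Content.Headers.ContentEncoding));
        }
    }
}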

Wrapping up

This is a really easy way to improve the performance of your website or your web service – this is one of the first things I configure when starting a new project.


.net, Accessibility, Non-functional Requirements, Visual Studio, Visual Studio Plugin, Web Development

How to use the Web Accessibility Checker for Visual Studio to help conform to accessibility guidelines

I’ve previously blogged about accessibility a few times, and I’d love to find a good way to identify accessibility issues from my development environment. So I was really interested to see that Mads Kristensen from Microsoft recently released the Web Accessibility Checker for Visual Studio 2015. This extension uses the aXe-core library for analysing code in Visual Studio.

The Visual Studio Gallery gives some good instructions on how to install and use this extension. It’s a pretty straightforward install – once you run your website, a list of non-conformances will appear in the Error List in VS 2015 (to see the Error List, go to the View Menu and select Error List from there).

Obviously this can’t identify every accessibility problem on your site, so fixing all the errors on this list isn’t going to guarantee your website is accessible. But one of the manifesto items from aXe-core’s github page states the tool aims to report zero false positives – so if aXe-core is raising an error, it’s worth investigating.

Let’s look at an example.

How does it report errors?

I’ve written some HTML code and pasted it below…ok, it’s some pretty ropey HTML code, with some really obvious accessibility issues.

<!DOCTYPE html>
<html>
<body>
    <form>
        This is simple text on a page.
 
        Here's a picture:
        <br />
        <img src="/image.png" />
        <br />
        And here's a button:
        <br />
        <button></button>
    </form>
</body>
</html>

 

Let’s see what the Web Accessibility Checker picks up:

[Screenshot: the Visual Studio Error List showing the accessibility errors reported by the Web Accessibility Checker]

Four errors are reported:

  • No language attribute is specified in the HTML element. This is pretty easy to fix – I’ve blogged about this before;
  • The <button> element has no text inside it;
  • The page has no <title> element.
  • The image does not have an alternative text attribute.

Note – these errors are first reported at application runtime, so don’t expect to see them while you’re writing your code, or just after compiling it.

If you want to discover more about any of these errors, the Error List has a column called “Code”, and clicking the text will take you to an explanation of what the problem is.

In addition, you can just double click on the description, and the VS editor focus will move to the line of code where the issue is.

I’ve corrected some of the errors – why are they still in the Error List?

I found that the errors stayed in the list, even after starting to fix the issues. In order to clear the errors away, I found that I needed to right click on the Error List, and from the context menu select “Clear All Accessibility Errors”.

[Screenshot: the Error List context menu showing the “Clear All Accessibility Errors” option]

When I hit refresh in my browser, I was able to see the remaining issues without it showing the ones that I had fixed.

What more does this give me when compared to some of the existing accessibility tools?

Previously I’ve used tools like the HTML_CodeSniffer bookmarklet, which also report accessibility errors.

[Screenshot: accessibility errors reported by the HTML_CodeSniffer bookmarklet]

This is a great tool, but it will only point to the issues on the web page – the Web Accessibility Checker in VS2015 has the advantage of taking your cursor straight to the line of source code with the issue.

Conclusion

Obviously you can’t completely test whether a website is accessible using automated tools. But you can definitely use tools to check if certain rules are being adhered to in your code. Tools like the Web Accessibility Checker for VS2015 help you identify and locate accessibility issues in your code – and when it’s free, there’s no reason not to use it in your web application development process today.

.net, C# tip, IIS, MVC, Non-functional Requirements, Performance, Web Development

More performance tips for .NET websites which access data

I recently wrote about improving the performance of a website that accesses a SQL Server database using Entity Framework, and I wanted to follow up with a few more thoughts on optimising performance in an MVC website written in .NET. I’m coming towards the end of a project now where my team built an MVC 5 site, and accessed a database using Entity Framework. The engineers were all pretty experienced – scarred survivors of previous projects – so we were able to implement a lot of non-functional improvements during sprints as we went along. As our site was data driven, looking at that part was obviously important, but it wasn’t the only thing we looked at. I’ve listed a few of the other things we did during the project – some of these were one-off settings, and others were things we checked for regularly to make sure problems weren’t creeping in.

Compress, compress, compress

GZip your content! This makes a huge difference to your page size, and therefore to the time it takes to render your page. I’ve written about how to do this for a .NET site and test that it’s working here. Do it once at the start of your project, and you can forget about it after that (except occasionally when you should check to make sure someone hasn’t switched it off!)

Check your SQL queries, tune them, and look out for N+1 problems

As you might have guessed from one of my previous posts, we were very aware of how a few poorly tuned queries or some rogue N+1 problems could make a site grind to a halt once there were more than a few users. We tested with sample data which was the “correct size” – meaning that it was comparable with the projected size of the production database. This gave us a lot of confidence that the indexes we created in our database were relevant, and that our automated integration tests would highlight real N+1 problems. If you don’t have “real sized” data – as often happens where a development database just has a few sample rows – you can’t expect to discover real performance issues early.

Aside: Real sized data doesn’t have to mean real data – anonymised/fictitious data is just as good for performance analysis (and obviously way better from a security perspective).

Use MiniProfiler to find other ADO.NET bottlenecks

Just use it. Seriously, it’s so easy, read about it here. There’s even a NuGet package to make it even easier to include in your project. It automatically profiles ADO.NET calls, and allows you to profile individual parts of your application with a couple of simple lines of code (though I prefer to use this during debugging, rather than pushing those profile customisations into the codebase). It’s great for identifying slow parts of the site, and particularly good at identifying repeated queries (which is a giveaway symptom of the N+1 problem).
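As a rough sketch of how little code a custom profiling step takes (assuming MiniProfiler is already wired into the site – the class and the work inside it are made up for illustration):

using System.Threading;
using StackExchange.Profiling;
 
public class ReportBuilder
{
    public void BuildReport()
    {
        // Everything inside this using block shows up as a named, timed step
        // in the MiniProfiler results panel, alongside any ADO.NET calls it makes.
        using (MiniProfiler.Current.Step("Build the monthly report"))
        {
            Thread.Sleep(100); // stand-in for the real work / database access
        }
    }
}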

Reduce page bloat by optimising your images

We didn’t have many images in the site – but they were still worth checking. We used the Firefox Web Developer Toolbar plugin, and the “View Document Size” item from the “Information” menu. This gave us a detailed breakdown of all the images on the page being tested – and highlighted a couple of SVGs which had crept in unexpectedly. These were big files, and appeared in the site’s header, so every page would have been affected. They didn’t need to be SVGs, and it was a quick fix to change them to GIFs, which made every page served a lot smaller.

For PNGs, you can use the PNGOut utility to optimise images – and you can convert GIFs to PNG as well using this tool.

For JPEGs, read about progressive rendering here. This is something where your mileage may vary – I’ll probably write more about how to do this in Windows at a future time.

Minifying CSS and JavaScript

The Web Developer Toolbar saved us in another way – it identified a few issues with our JavaScript and CSS files. We were using the built-in Bundling feature of MVC to combine and minify our included scripts – I’ve written about how to do this here – and initially it looked like everything had worked. However, when we looked at the document size using the Web Developer Toolbar, we saw that some documents weren’t being minified. I wrote about the issue and solution here, but the main point was that the Bundling feature was failing silently, causing the overall page size to increase very significantly. So remember to check that bundling/minifying is actually working – just because you have it enabled doesn’t mean it’s being done correctly!
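For reference, here’s a minimal sketch of the kind of MVC bundle registration involved – the file paths are placeholders, and the EnableOptimizations line forces bundling and minification even in debug builds, which makes it easy to verify locally that the minified output really is being produced:

using System.Web.Optimization;
 
public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Placeholder script and style paths - substitute your own.
        bundles.Add(new ScriptBundle("~/bundles/jquery")
            .Include("~/Scripts/jquery-{version}.js"));
 
        bundles.Add(new StyleBundle("~/Content/css")
            .Include("~/Content/site.css"));
 
        // Forces bundling and minification even when compiled in debug mode,
        // so you can check the bundled output before it reaches an environment.
        BundleTable.EnableOptimizations = true;
    }
}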

Remember to put CSS at the top of your page, and include JavaScript files at the bottom.

Check for duplicated scripts and remove them

We switched off bundling and minification to see all the scripts being downloaded, and noticed that we had a couple of separate entries for the jQuery library, and also for some jQuery UI files. These were big files, and downloading them once is painful enough – never mind unnecessarily doing it again every time. It’s really worth checking to make sure you’re not doing this – not just for performance reasons, but because if you find this is happening, it’s also a sign that there’s maybe an underlying problem in your codebase. Finding it early gives you a chance to fix it.

Do you really need that 3rd party script?

We worked hard to make sure that we weren’t including libraries just for the sake of it. There might be some cool UI feature which is super simple to implement by just including that 3rd party library…but every one of those 3rd party libraries includes page size. Be smart about what you include.

Tools like JQuery UI even allow you to customise your script to be exactly as big or small as you need it to be.

Is your backup schedule causing your site to slow down?

I witnessed this on a previous project – one of our team had scheduled the daily database backup to happen after we went home…which led to some of our users in a later time zone seeing a performance deterioration for about half an hour at the same time every day. Rescheduling the daily backup to later in the day caused us no problems and removed a significant problem for our users.

Is someone else’s backup schedule causing your site to slow down?

There’s a corollary to the previous point – if you’re seeing a mysterious performance deterioration at the same time every day, and you’re absolutely sure it’s not something that you or your users are doing, check whether your site is on shared hosting – someone else’s scheduled job might be competing for the same hardware. When I contacted our hosts and requested that our company VMs were moved onto a different SAN, it miraculously cleared up a long-standing performance issue.

Summary

There are a few tips here which really helped us keep our pages feeling fast to our users (and some other tips that I’ve picked up over the years). We didn’t do all of this at the end of the project – this was something we focussed on all the way through. It’s really important to make sure you’re checking these things during sprints – and to make them part of your Definition of Done if possible.

.net, C# tip, Non-functional Requirements, Performance

Performance tips for database access and Entity Framework

One of the most common ‘gotchas’ in a development project is to forget about performance until there’s a problem. I’ve often heard people quote Knuth saying “premature optimisation is the root of all evil” – hinting that right now is too early to think about performance tuning.

Of course performance tuning and improvement is put off, and put off, and put off some more…until there’s a performance test in pre-production and everything fails. (That’s if you’re lucky – at least you’ve caught it before it goes to production. A lot of the time that’s the first place the issue is spotted).

I believe in making it work first before you make it work fast – but within that statement, there’s an implication that “working” and “working fast” are both necessary. Making it just work isn’t enough. And Knuth is being quoted out of context – the full quote is “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” (my emphasis). That’s small efficiencies, not big ones. He also says “In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering“. 12%!!

I’d like to share 3 tips that I’ve used to make a huge difference to the performance of a .NET application using Entity Framework. I’ve often heard people criticise Entity Framework as slow, but I’m staying out of the pointless endless religious arguments about whether it is or isn’t. All I can say is that from my experience, the performance bottleneck has never been Entity Framework’s fault – it’s either somewhere else, or the way that Entity Framework has been used.

Missing Indices

This is nothing to do with Entity Framework – this is a change to the database, not the .NET code. Entity Framework generates SQL behind the scenes and sends this to the database for execution, and it has no idea whether this SQL is going to perform a hugely expensive full table scan, or whether it’s going to use indices cleverly to prevent having to search every row in the database.

For me, this is the first port of call when someone says an application accessing a database is slow. SQL Server has some great tools to help with this – you can use SQL Profiler to record a trace file of all the SQL queries hitting a database over a period of time, and then use this trace file in Database Engine Tuning Advisor to identify which indices the engine thinks will make the biggest difference to your application.

I’ve seen amazing improvements result from this technique – 97% improvements are not uncommon. Again, it’s not really an Entity Framework tip, but it’s worth checking.

The “Select N+1” Problem

So again, not really an Entity Framework problem…yes, there’s a bit of a theme emerging here! This is something that’s common to a lot of ORMs.

Basically I think of the problem as being a side effect of “lazy loading”. For example, say your application queries a database about cars. Cars are represented by a “Car” POCO object, which contains a list of child objects of POCO type “Wheel”.

From your application, you might query by primary key for a car with registration plate “ABC 123”, which (hopefully) returns one object as the result. Then you access the “Wheels” property to get information about the car’s wheels.

If your database is logically normalised, you’ve probably made at least two queries here – the original one to get the car, and then another to get information about the wheels. If you then call a property from the “Wheel” object which makes up the list, you’ll probably make another database query to get that information.

This is actually a massive advantage of ORMs – you as a developer don’t have to do extra work to load in information about child objects, and the query only happens when the application asks for information about that object. It’s all abstracted away from you, and it’s called lazy-loading.
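To make the pattern concrete, here’s a sketch of how that plays out in code – the WheelReport class, the CarContext, and the Car and Wheel types (and their properties) are hypothetical, just following the example above:

using System;
using System.Linq;
 
public class WheelReport
{
    public void PrintWheelCounts()
    {
        using (var context = new CarContext())
        {
            // Query 1: load the cars.
            var cars = context.Cars.Where(c => c.Colour == "Red").ToList();
 
            foreach (var car in cars)
            {
                // Touching the lazy-loaded Wheels navigation property fires one
                // extra query per car - the classic "select N+1" pattern.
                Console.WriteLine(car.RegistrationPlate + " has " + car.Wheels.Count + " wheels");
            }
        }
    }
}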

There’s nothing wrong or evil with lazy-loading. Like any tool, it has a place and there are opportunities to mis-use it. Where I’ve seen it misused most is in the scenario where a developer:

  • returns an object from an Entity Framework call;
  • closes the session (i.e. connection to the database);
  • looks in the parent object for a child object, and gets an exception saying the session is closed;

The developer then does one of two things:

  • The developer moves all the logic into the method where the session is open because lazy loading fixes all their problems. This leads to a big mess of code. At some point – always – this code is copied and pasted, usually into a loop, leading to loads and loads of database queries. Because SQL Server is brilliant, it’s probably done all of these queries in a few seconds, and no-one really notices until it’s deployed to production and hundreds of users try to do this all at once and the site collapses. (Ok this is over dramatic – your performance testing events will catch this. Because of course you’re doing performance testing before going to production, aren’t you. Aren’t you?)
  • The better developer realises that moving all the code into one method is a bad idea, and even though lazy loading allows you to do this, it’s mis-using the technique. They read a few blogs, discover this thing called eager loading and write code like this:
var car = (from c in context.Cars.Include("Wheel")
            where c.RegistrationPlate == "ABC 123"
            select c).FirstOrDefault<Car>();

Entity Framework is smart enough to recognise what’s going on here – instead of doing a dumb query on the Car table, it joins to the Wheel table and sends one query out to get everything it needs for the Car and the Wheels.

So this is good – but in my career, almost every application has a much more complex relationship between object and database entities than just one simple parent and child. This leads to much more complex chains of queries.

One technique I’ve used successfully is to create a database view which includes everything needed for the application business method. I like using views because it gives me much more granular control over exactly what the joins are between tables, and also what fields are returned from the database. It also simplifies the Entity Framework code. But the biggest advantage is that the view becomes an interface – a contract really – between the database and the code. So if you have a DB expert who tells you “Look, your performance issues are down to how your database is designed – I can fix this, but if I do it’ll probably break your application”, you’ll be able to respond “Well, we query the database through a view, so as long as you’re able to create a view that has the same columns and output, you can change the database without affecting us.”
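A sketch of what that looks like from the Entity Framework side is below – the view name and columns are made up, but the idea is that the POCO maps to the view just as it would to a table, provided the view exposes a column that can act as a key:

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;
 
// Maps to a database view rather than a table - Entity Framework just
// generates SELECT statements against it, which is all we need here.
[Table("vw_CarWithWheelSummary")]
public class CarWithWheelSummary
{
    [Key]
    public int CarId { get; set; }
 
    public string RegistrationPlate { get; set; }
 
    public int WheelCount { get; set; }
}
 
public class ReportingContext : DbContext
{
    // Queried like any other DbSet, but read-only in practice.
    public DbSet<CarWithWheelSummary> CarWithWheelSummaries { get; set; }
}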

Of course if you’re using a database view, that means you won’t be able to update objects using Entity Framework because a view is read-only…which kind of defeats the purpose of using an ORM. However, if you have someone demanding a fix for a slow site, it’s a lot less intrusive to create and index a view than it is to re-engineer the application.

Note: I’m not advocating this as a magic bullet – it’s just a technique that sometimes has its place.

AsNoTracking

This is an Entity Framework setting. If you’re using views – or you know that your Entity Framework call won’t need to update the database – you can get an extra performance boost by using the AsNoTracking keyword.

var cars = context.Cars.AsNoTracking().Where(c => c.Color == "Red");

This will give you a performance boost if you’re returning large volumes of data, but less so for smaller volumes. Your mileage may vary – but remember you need to be sure you aren’t updating the context to use this.

Summary

  • Ignore the wisdom of the newsgroup posts that say “Entity Framework’s just slow, nothing you can do”;
  • Instead, run SQL Server Profiler on the database, and put the resulting trace file through SQL Server’s Database Engine Tuning Advisor to find indices that will improve the slowest queries;
  • Analyse the code to identify the “Select N+1” problem – there almost always is one of these in the code somewhere. If you want to find it, turn off lazy loading and run your tests.
  • If you’re returning large volumes of data into a read-only list, see if you can use AsNoTracking to squeeze a bit more performance from your application.

 

 

Non-functional Requirements, Security, Web Development

Use https://securityheaders.io to check your site’s header security in an instant

A while back I posted an article on how to improve the security of your site by configuring headers in IIS.

I thought I’d follow up on this with a quick post about a fantastic utility online – https://securityheaders.io/.

Plug your website URL into this site, and get a report immediately about how good your site headers are, and what you can do to tighten things up. The report is understandable, and every bit of information – whether that’s missing headers, or headers configured insecurely – will have a link to the site creator’s blog explaining what this means in great detail.

Sadly my blog – which is all managed by WordPress.com – comes out with an E rating. How embarrassing…one day I will find the time to host all this on my own domain.

Final hint – as you might expect, if you put https://securityheaders.io into the site, you’ll see what an A+ report looks like!