.net core, Azure, Azure Application Insights, C# tip, Instrumentation, MVC

Healthcheck endpoints in C# in MVC projects using ASP.NET Core, and writing results to Azure Application Insights

Every developer wants to build a system that never breaks, but in reality things go wrong. The best systems are built to expect that and handle problems gracefully, rather than just silently failing.

Maybe your database becomes unavailable (e.g. runs out of hard disk space) and your failover doesn’t work – or maybe a third party web service that you depend on stops working.

Sometimes your application can be programmed to recover from things going wrong – here’s my post on The Polly Project to find out more about one way of doing that – but when there’s a catastrophic failure that you can’t recover from, you want to be alerted as soon as it happens, rather than hear from a customer.

And it’s kind to provide a way for your customers to find out about the health of your system. As an example, just check out the monitoring hub below from Postcodes.io – this is a great example of being transparent about key system metrics like service status, availability, performance, and latency.


MVC projects in ASP.NET Core have a built in feature to provide information on the health of your website. It’s really simple to add it to your site, and this instrumentation comes packaged as part of the default ASP.NET Core toolkit. There are also some neat extensions available on NuGet to format the data as JSON, add a nice dashboard for these healthchecks, and finally to push the outputs to Azure Application Insights. As I’ve been implementing this recently, I wanted to share with the community how I’ve done it.

Scott Hanselman has blogged about this previously, but there have been some updates since he wrote about this which I’ve included in my post.

Returning system health from an ASP.NET Core v2.2 website

Before I start – I’ve uploaded all the code to GitHub here so you can pull the project and try yourself. You’ll obviously need to update subscription keys, instrumentation keys and connection strings for databases etc.

Edit your MVC site’s Startup.cs file and add the line below to the ConfigureServices method:
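This is a single call to the built-in health checks API – a minimal sketch of the method (your ConfigureServices will contain other registrations too):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Registers the health check services with the dependency injection container
    services.AddHealthChecks();

    services.AddMvc();
}
```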


And then add the line of code below to the Configure method.
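Again this is one line – a sketch of the Configure method, using the "/healthcheck" path that the rest of this post assumes:

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Exposes an endpoint at /healthcheck which reports Healthy, Degraded or Unhealthy
    app.UseHealthChecks("/healthcheck");

    app.UseMvc();
}
```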


That’s it. Now your website has a URL available to tell whether it’s healthy or not. When I browse to my local test site at the URL below…


..my site returns the word “Healthy”. (obviously your local test site’s URL will have a different port number, but you get the idea)

So this is useful, but it’s very basic. Can we amp this up a bit – let’s say we want to see a JSON representation of this? Or what about our database status? Fortunately, there’s a great series of libraries from Xabaril (available on GitHub here) which massively extends the core healthcheck functions.

Returning system health as JSON

First, install the AspNetCore.HealthChecks.UI NuGet package.

Install-Package AspNetCore.HealthChecks.UI

Now I can change the code in my Startup.cs file’s Configure method to specify some more options.

The code below changes the response output to be JSON format, rather than just the single word “Healthy”.

app.UseHealthChecks("/healthcheck", new HealthCheckOptions
{
    Predicate = _ => true,
    ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});

And as you can see in the image below, when I browse to the healthcheck endpoint I configured as “/healthcheck”, it’s now returning JSON:

healthcheck basic json

What about checking the health of other system components, like URIs, SQL Server or Redis?

Xabaril has got you covered here as well. For these three types of things, I just install the NuGet packages with the commands below:

Install-Package AspNetCore.HealthChecks.Uris
Install-Package AspNetCore.HealthChecks.Redis
Install-Package AspNetCore.HealthChecks.SqlServer

Check out the project’s ReadMe file for a full list of what’s available.

Then change the code in the ConfigureServices method in the project’s Startup.cs file.

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                      healthQuery: "SELECT 1;",
                      name: "Sql Server",
                      failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                  name: "Redis",
                  failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:59658/Home/Index"),
                     name: "Base URL",
                     failureStatus: HealthStatus.Degraded);

Obviously in the example above, I have my connection strings stored in my appsettings.json file.

When I browse to the healthcheck endpoint now, I get a much richer JSON output.

health json

Can this information be displayed in a more friendly dashboard?

We don’t need to just show JSON or text output – Xabaril allows the creation of a clear and simple dashboard to display the health checks in a user friendly form. I updated my code in the Startup.cs file – first of all, my ConfigureServices method now has the code below:

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                      healthQuery: "SELECT 1;",
                      name: "Sql Server",
                      failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                  name: "Redis",
                  failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:59658/Home/Index"),
                     name: "Base URL",
                     failureStatus: HealthStatus.Degraded);

services.AddHealthChecksUI(setupSettings: setup =>
{
    setup.AddHealthCheckEndpoint("Basic healthcheck", "https://localhost:59658/healthcheck");
});

And my Configure method also has the code below.

app.UseHealthChecks("/healthcheck", new HealthCheckOptions
{
    Predicate = _ => true,
    ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
});

app.UseHealthChecksUI();

Now I can browse to a new endpoint which presents the dashboard below:


health default ui
And if you don’t like the default CSS, you can configure it to use your own. Xabaril has an example of a css file to include here, and I altered my Configure method to the code below which uses this CSS file.

app.UseHealthChecks("/healthcheck", new HealthCheckOptions
{
    Predicate = _ => true,
    ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
})
   .UseHealthChecksUI(setup =>
   {
       // "dotnet.css" is an example file name - point this at your own stylesheet
       setup.AddCustomStylesheet("dotnet.css");
   });

And now the website is styled slightly differently, as you can see in the image below.

health styled ui

What happens when a system component fails?

Let’s break something. I’ve turned off SQL Server, and a few seconds later the UI automatically refreshes to show the overall system health status has changed – as you can see, the SQL Server check has been changed to a status of “Degraded”.

health degrades

And this same error appears in the JSON message.

health degraded json

Can I monitor these endpoints in Azure Application Insights?

Sure – but first make sure your project is configured to use Application Insights.

If you’re not familiar with Application Insights and .NET Core applications, check out some more information here.

If it’s not set up already, you can add the Application Insights Telemetry by right clicking on your project in the Solution Explorer window of VS2019, selecting “Add” from the context menu, and choosing “Application Insights Telemetry…”. This will take you through the wizard to configure your site to use Application Insights.


Once that’s done, I changed the code in my Startup.cs file’s ConfigureServices method to explicitly push to Application Insights, as shown in the snippet below:

services.AddHealthChecks()
        .AddSqlServer(connectionString: Configuration.GetConnectionString("SqlServerDatabase"),
                      healthQuery: "SELECT 1;",
                      name: "Sql Server",
                      failureStatus: HealthStatus.Degraded)
        .AddRedis(redisConnectionString: Configuration.GetConnectionString("RedisCache"),
                  name: "Redis",
                  failureStatus: HealthStatus.Degraded)
        .AddUrlGroup(new Uri("https://localhost:44398/Home/Index"),
                     name: "Base URL",
                     failureStatus: HealthStatus.Degraded)
        // Requires the AspNetCore.HealthChecks.Publisher.ApplicationInsights NuGet package
        .AddApplicationInsightsPublisher();

services.AddHealthChecksUI(setupSettings: setup =>
{
    setup.AddHealthCheckEndpoint("Basic healthcheck", "https://localhost:44398/healthcheck");
});

Now I’m able to view these results in Application Insights – the way I did this was:

  • First browse to portal.azure.com and click on the “Application Insights” resource which has been created for your web application (it’ll probably be top of the recently created resources).
  • Once that Application Insights blade opens, click on the “Metrics” menu item (highlighted in the image below):

app insights metrics

When the chart window opens – it’ll look like the image below – click on the “Metric Namespace” dropdown and select the “azure.applicationinsights” value (highlighted below).

app insights custom metric

Once you’ve selected the namespace to plot, choose the specific metric from that namespace. I find that the “AspNetCoreHealthCheckStatus” metric is most useful to me (as shown below).

app insights status

And finally I also choose to display the “Min” value of the status (as shown below), so if anything goes wrong the value plotted will be zero.

app insights aggregation

After this, you’ll have a graph displaying availability information for your web application. As you can see in the graph below, it’s pretty clear when I turned on my SQL Server instance again, as the application went from an overall health status of ‘Degraded’ to ‘Healthy’.

application insights

Wrapping up

I’ve covered a lot of ground in this post – from .NET Core 2.2’s built in HealthCheck extensions, building on that to use community content to check other site resources like SQL Server and Redis, adding a helpful dashboard, and finally pushing results to Azure Application Insights. I’ve also created a bootstrapper project on GitHub to help anyone else interested in getting started with this – I hope it helps you.


.net core, Azure, C# tip, Clean Code, Dependency Injection, Inversion of Control, MVC

Azure Cache as a session data store in C# and MVC

Using the HTTP Session is one of those things that provokes…opinions. A lot of developers think that sessions are evil. And whereas I understand some of the reasons why that’s a common viewpoint – I’ve had my share of problems with code that uses sessions in the past – I’d have to qualify any criticism by saying the problems were more about how I was using the technique, rather than any inherent problem with sessions as a concept.

For example, some instances where using an in-memory session store can cause problems are:

  • If you chuck lots of website data into an in-memory session, you’ll quickly eat up lots of RAM on your web server. This might eventually cause performance problems.
  • Sessions are typically short lived – often around 20 minutes – leading to a poor user experience after a period of inactivity (like being unexpectedly logged out).
  • Also, in a load balanced environment, users might experience issues – if their first request leads to a session being created on one web server, and then their next request is routed to a different (less busy) web server, it won’t know anything about their previous session. Apparently you can work around this by using sticky sessions…and my own experiences with the sticky sessions approach are best described as “mixed”. But YMMV.
  • If you’re not using SSL/TLS, your session might be vulnerable to man-in-the-middle attacks. But the easy answer to this is…use SSL.

Anyway – I think most people would agree with the basic need for one web page to access data entered on another web page. However, if you have high throughput, a need for a large session store, or a load balanced environment, then the out-of-the-box HTTP Session object might not be for you. But that doesn’t mean you can’t use sessions at all.

‘Azure Cache for Redis’ to the rescue

So even though my application isn’t in a load-balanced environment right now, I’d still like to make sure it’s easy to port to one in the future. So I’ve been looking for alternatives to using the Session object:

  • I could use Cookies, but I can’t store very much information in them.
  • I could use a SQL database, but this seems heavyweight for my need for short-lived, session-based information.
  • Something like a NoSQL store like Redis would suit very well – it’s super fast with low-latency, high-throughput performance.

I’ve written about using Redis as a fast-access data store a long time ago, but that post is out of date now and worth updating as there’s now a built-in Azure option – Azure Cache for Redis.

Spinning up Azure Cache for Redis

Check out the official docs for how to create a cache in Azure – it’s clearly described here with lots of screenshots to guide you through.

But how can I use Azure Cache for Redis in a website?

I don’t really like the ASP.NET implementation from the official documentation. It works, but there’s a lot of code in the controller’s action, and I’d like a cleaner solution. Ideally I’d like to inject an interface into my controller as a dependency, and use ASP.NET Core’s service container to instantiate the dependency on demand.

I found this really useful post from Simon Holman, and he also has created a super helpful example on GitHub. I tested this with an MVC project in .NET Core v2.2, and the implementation is very simple (check out Simon’s source code for exactly where to put these snippets).

  • Update the Startup.cs file’s ConfigureServices method after putting the connection string into your appsettings.json file:
services.AddDistributedRedisCache(options =>
{
    options.Configuration = Configuration.GetConnectionString("RedisCache");
});
services.AddSession();
  • Update the Startup.cs file’s Configure method to switch sessions on:
app.UseSession();
  • Here’s how to set data into the cache:
var sessionStartTime = DateTime.Now.ToLongTimeString();
HttpContext.Session.SetString("mysessiondata", sessionStartTime);
  • …get data from the cache:
var sessionStartTime = HttpContext.Session.GetString("mysessiondata");
  • …and remove data from the cache:
HttpContext.Session.Remove("mysessiondata");
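Pulling those snippets together, a minimal controller action using the Redis-backed session might look like the sketch below (the action and key names are just for illustration – check Simon’s sample for his exact code):

```csharp
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    public IActionResult Index()
    {
        // Write a value into the (Redis-backed) session...
        HttpContext.Session.SetString("mysessiondata", DateTime.Now.ToLongTimeString());

        // ...and it can be read back on this or any later request in the same session
        var mySessionData = HttpContext.Session.GetString("mysessiondata");

        return View();
    }
}
```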

Some things to bear in mind about Azure Cache for Redis

I wanted to dive into the details of Azure Cache a bit more, just to understand what’s actually going on beneath the hood when we’re reading from, and writing to, the session object.

You can see what session keys are saved in your Redis cache using the Azure portal

redis console arrow

Once you’re in the console, you run the command “scan 0 count 100 match *” to see up to the first 100 keys in your Redis cache.

redis console

From the screenshot above, I can see that I’ve got 16 sessions open.

The Guid in the Key is actually your Session’s *private* “_sessionKey”

In the image above, you can see a bunch of GUID objects which are the keys of the items in my Redis cache. And if you look at the 16th item in the list above, you can see that it corresponds to the private “_sessionKey” value, which is held in my HttpContext.Session object (compare with the VS2019 watch window below).

redis session

So this information is interesting…but I’m not sure how useful it is. Since that property is private, you can’t access it (well you can, but not easily, you have to use reflection). But it might be helpful to know this at debug time.

Browsers behave differently when in incognito mode

I thought I’d try the application with my browser in incognito mode. And every time I hit refresh on the browser when I was in incognito or private browsing mode, a new session key was created on the server – which meant it wasn’t able to obtain data from the session previously created in the same browser instance.

You can see the number of keys has hugely increased in the image below, corresponding to the number of times I hit refresh:

private window redis

But at least I can detect when the session isn’t available through the HttpContext.Session.IsAvailable property – when a session is available, the image below is what I can see in the session using a watch in the VS2019 debugger:

session available

And when a session isn’t available (such as when my browser is in incognito mode), this is what I see:

session unavailable

So at least I can programmatically distinguish between when the session can work for the user and when it can’t.
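Since IsAvailable is just a property on the session, guarding session access is straightforward – a sketch:

```csharp
string mySessionData;
if (HttpContext.Session.IsAvailable)
{
    mySessionData = HttpContext.Session.GetString("mysessiondata");
}
else
{
    // e.g. incognito/private browsing - fall back to a sessionless experience
    mySessionData = null;
}
```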

In summary, this behaviour had a couple of implications for me:

  • Session persistence didn’t work in incognito/private windows – values weren’t persistent in the same session across pages.
  • Hitting refresh a bunch of times in incognito will create lots of orphan session objects in your server, which might have security/availability implications for your application, especially if your sessions are large and fill up available memory.

Clearing down sessions was harder than I thought

HttpContext.Session.Clear() emptied my session, but didn’t delete the key from the server, as I could still see it in the Redis console.

In fact, the only way I was able to remove sessions held in Redis was to get right into the guts of the StackExchange.Redis package using the code below. I knew the exact session that I wanted to delete had the key "57154387-d8b7-c361-a174-9d27b6c6caae":

var connectionMultiplexer = StackExchange.Redis.ConnectionMultiplexer.Connect(Configuration.GetConnectionString("RedisCache"));
connectionMultiplexer.GetDatabase().KeyDelete("57154387-d8b7-c361-a174-9d27b6c6caae");

But this is only useful if you can get the exact session key that you want to delete, and that isn’t particularly easy. You could use reflection to get that private value like in the code below, but I get why that’s not something you might want to do.

var _sessionKey = typeof(DistributedSession)
                .GetField("_sessionKey", BindingFlags.NonPublic | BindingFlags.Instance)
                .GetValue(HttpContext.Session);

Wrapping up

I’ve talked a little bit about sessions in this post – they’re not a magic hammer, but also aren’t inherently a bad tool – maybe consider an alternative to in-memory sessions if you have large session objects, or are working in a load balanced environment. Azure Cache for Redis might be one of those alternatives. I’ve found it to be interesting and useful, and relatively easy to set up as an alternative to an in-memory session, but there are a few quirks – sessions may not work the way you expect them to for users who are incognito/using private browsing, and it’s hard to completely delete a session once it has been created.

.net core, C# tip, MVC

Adding middleware to your .NET Core MVC pipeline to prettify HTML output with AngleSharp

I was speaking to a friend of mine recently about development and server side generated HTML, and they said that one thing they would love to do is improve how HTML code looks when it’s rendered. Often when they look at the HTML source of a page, the indentation is completely wrong, and there are huge amounts of whitespace and unexpected newlines.

And I agreed – I’ve seen that too. Sometimes I’ve been trying to debug an issue in the rendered output HTML, and one of the first things I do is format and indent the HTML code so I can read and understand it. And why not – if my C# classes weren’t indented logically, I’d find them basically unreadable. Why should my HTML be any different?

So it occurred to me that I might be able to find a way to write some middleware for my .NET Core MVC website that formats and indents rendered HTML for me by default.

This post is just a fun little experiment for me – I don’t know if the code is performant, or if it scales. Certainly on a production site I might want to minimise the amount of whitespace in my HTML to improve download speeds rather than just change the formatting.

Formatting and Indenting HTML

I’ve seen a few posts asking how to do this with HtmlAgilityPack – but even though HtmlAgilityPack is amazing, it won’t format HTML.

I’ve also seen people recommend a .NET wrapper for the Tidy library, but I’m going to use AngleSharp. AngleSharp is a .NET library that allows us to parse HTML, and contains a super useful formatter called PrettyMarkupFormatter.

var parser = new AngleSharp.Html.Parser.HtmlParser();
var document = parser.ParseDocument("<html><body>Hello, world</body></html>");
var sw = new StringWriter();
document.ToHtml(sw, new AngleSharp.Html.PrettyMarkupFormatter());
var indentedHtml = sw.ToString();

And I can encapsulate this in a function as below:

private static string PrettifyHtml(string newContent)
{
    var parser = new AngleSharp.Html.Parser.HtmlParser();
    var document = parser.ParseDocument(newContent);
    var sw = new StringWriter();
    document.ToHtml(sw, new AngleSharp.Html.PrettyMarkupFormatter());
    return sw.ToString();
}

Adding middleware to modify the HTML output

There’s lots of information on writing ASP.NET Core middleware here and I can build on this and the AngleSharp code to re-format the rendered HTML. The code below allows me to:

  • Check I’m in my development environment,
  • Read the rendered HTML from the response,
  • Correct the indentation using AngleSharp and the new PrettifyHtml method, and
  • Write the formatted HTML back to the Response.
if (env.IsDevelopment())
{
    app.Use(async (context, next) =>
    {
        var body = context.Response.Body;
        using (var updatedBody = new MemoryStream())
        {
            context.Response.Body = updatedBody;
            await next();
            context.Response.Body = body;
            // Rewind the stream before reading the rendered HTML back out
            updatedBody.Seek(0, SeekOrigin.Begin);
            var newContent = new StreamReader(updatedBody).ReadToEnd();
            await context.Response.WriteAsync(PrettifyHtml(newContent));
        }
    });
}

And now the HTML generated by my MVC application is formatted and indented correctly.

Wrapping up

This post is really just a proof of concept and for fun – I’ve restricted the effect to my development environment in case it doesn’t scale well. But hopefully this is useful to anyone trying to format HTML, or intercept an HTML response to modify it.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

Continue reading

.net core, C# tip, MVC, Non-functional Requirements, Performance

Creating a RESTful Web API template in .NET Core 1.1 – Part #3: Improving the performance by using compression

One of the simplest and most effective improvements you can make to your website or web service is to compress the stream of data sent from the server. With .NET Core 1.1, it’s really simple to set this up – I’ve decided to include this in my template project, but the instructions below will work for any .NET Core MVC or Web API project.

Only really ancient browsers are going to have problems with gzip – I’m pretty happy to switch it on by default.

.NET Core 1.1 adds compression to the ASP.NET HTTP pipeline using some middleware in the Microsoft.AspNetCore.ResponseCompression package. Let’s look at how to add this to our .NET Core Web API project.

Step 1: Add the Microsoft.AspNetCore.ResponseCompression package

There are a few different ways to do this – I prefer to add packages from within PowerShell. From within Visual Studio (with my project open), I open a Package Manager Console and run:

Install-Package Microsoft.AspNetCore.ResponseCompression

(But it’s obviously possible to do this from within the NuGet package manager UI as well)

This will add the package to the Web API project, and you can see this in the project.json file (partially shown below).

  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0",
    "Microsoft.AspNetCore.Routing": "1.1.0",
    "Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
    "Microsoft.Extensions.Configuration.Json": "1.1.0",
    "Microsoft.Extensions.Logging": "1.1.0",
    "Microsoft.Extensions.Logging.Console": "1.1.0",
    "Microsoft.Extensions.Logging.Debug": "1.1.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.1.0",
    "Microsoft.AspNetCore.ResponseCompression": "1.0.0"
  }

Step 2: Update and configure services in the project Startup.cs file

We now just need to add a couple of lines to the Startup.cs project file, which will:

  • Add the services available to the runtime container, and
  • Use the services in the HTTP pipeline at runtime.

The lines I added are the AddResponseCompression and UseResponseCompression calls in the code below.

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);
        Configuration = builder.Build();
    }
    public IConfigurationRoot Configuration { get; }
    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddMvc();
        services.AddResponseCompression(); // added
    }
    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        app.UseResponseCompression(); // added
        app.UseMvc();
    }
}

Now when I call my web service, all responses are zipped by default.
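If you want more control, the registration also takes an options callback – for example, to compress MIME types beyond the defaults (the extra type below is just illustrative, and ResponseCompressionDefaults is the helper exposed by the package):

```csharp
using System.Linq;
using Microsoft.AspNetCore.ResponseCompression;

// In ConfigureServices: compress the default MIME types, plus SVG images
services.AddResponseCompression(options =>
{
    options.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(
        new[] { "image/svg+xml" });
});
```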

We can prove this by looking at the headers sent with the response – I’ve pasted a screenshot of the headers sent back when I call a GET method in my Web API service. There is a header named “Content-Encoding” which has the value “gzip” – this signals that the response has been zipped.


Wrapping up

This is a really easy way to improve the performance of your website or your web service – this is one of the first things I configure when starting a new project.

Continue reading

.net, C# tip, Clean Code, Dependency Injection, Inversion of Control, MVC, Solid Principles

How to use built-in dependency inversion in MVC6 and ASP.NET Core

I’ve previously posted about the new logging features in ASP.NET Core RC1 and MVC6. This time I’m going to write about how Microsoft now has dependency inversion baked into the new Core framework.

Dependency inversion is a well documented and understood principle – it’s what the D stands for in SOLID, and says that your code should only depend on abstractions, not concrete implementations. So plug your services into your application through interfaces.


In previous versions of MVC, I’ve needed to download a 3rd party library to assist with dependency inversion – these libraries are also sometimes called “containers”. Examples of containers I’ve used are NInject.MVC, Autofac, and Spring.NET.

In MVC6, Microsoft has entered this field, by including a simple container in the new version of ASP.NET. This isn’t intended to replicate all the features of other containers – but it provides dependency inversion features which may be suitable for many projects. This allows us to avoid adding a heavyweight 3rd party dependency to our solution (at least until there’s a feature we need from it).

Getting started

For our example, first create the default MVC6 web application in Visual Studio 2015.


Now let’s create a simple stubbed service and interface to get some users. We’ll save this in the “Services” folder of the project.

public interface IUserService
{
    IEnumerable<User> Get();
}

We’ll need a User object too – we’ll put this in the “Models” folder.

public class User
{
    public string Name { get; set; }
}

Let’s create a concrete implementation of this interface, and save this in the “Services” folder too.

public class UserService : IUserService
{
    public IEnumerable<User> Get()
    {
        return new List<User> { new User { Name = "Jeremy" } };
    }
}

Now modify the HomeController to allow us to display these users on the Index page – we need to change the constructor (to inject the interface as a class dependency), and to change the Index action to actually get the users.

public class HomeController : Controller
{
    private readonly IUserService _userService;
    public HomeController(IUserService userService)
    {
        _userService = userService;
    }
    public IActionResult Index()
    {
        var users = _userService.Get();
        return View(users);
    }
}

If we just run our project now, we’ll get an exception – the HomeController’s Index action is trying to get users, but the IUserService has not been instantiated yet.


We need to configure the services that the container knows about. This is where Microsoft’s new dependency inversion container comes in. You just need to add a single line of code in the ConfigureServices method in Startup.cs to make sure the controller is given a concrete instance of UserService when it asks the container “Can you give me something that implements IUserService?”

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddTransient<IUserService, UserService>();
}

If we run the project again now, we won’t get any exceptions – obviously we’d have to change the Index view to display the users.

Transient, Scoped, Singleton, Instance

In the example above, I used the “AddTransient” method to register the service. There are actually 4 options to register services:

  • AddTransient
  • AddScoped
  • AddSingleton
  • AddInstance

Which option you choose depends on the lifetime of your service:

  • Transient services are created each time they are called. This would be useful for a light service, or when you need to guarantee that every call to this service comes from a fresh instantiation (like a random number generator).
  • Scoped services are created once per request. Entity Framework contexts are a good example of this kind of service.
  • Singleton services are created once and then every request after that uses the service that was created the first time. A static calculation engine might be a good candidate for this kind of service.
  • Instance services are similar to Singleton services, but they’re created at application startup from the ConfigureServices method (whereas the Singleton service is only created when the first request is made). Instantiating the service at startup would be useful if the service is slow to start up, so this would save the site’s first user from experiencing poor performance.
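To make the lifetimes concrete, here’s how each registration looks in ConfigureServices, using the IUserService example from above (only one registration per interface would actually be used, so the alternatives are commented out):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // New instance every time the service is requested
    services.AddTransient<IUserService, UserService>();

    // One instance per HTTP request
    // services.AddScoped<IUserService, UserService>();

    // One instance, created on first request and shared thereafter
    // services.AddSingleton<IUserService, UserService>();

    // One instance, created right here at startup and shared thereafter
    // services.AddInstance<IUserService>(new UserService());
}
```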


Microsoft have added their own dependency inversion container to the new ASP.NET Core framework in MVC6. This should be good enough for the needs of many ASP.NET projects, and potentially allows us to avoid adding a heavyweight third party IoC container.

.net, C# tip, Clean Code, Dependency Injection, Inversion of Control, MVC

How to use NLog or Serilog with C# in ASP.NET Core

ASP.NET core is still pretty new – at the time of writing, it’s still only at Release Candidate 1. I downloaded it for the first time a few days ago to play with the sample projects, and was surprised (in a good way) by how much has changed in the default project for MVC6. Of course the standard way of using Models, Views and Controllers is still similar to how it was in recent versions of MVC – but, the project infrastructure and configuration options are unrecognisably different (at least to me).

One of the first things I do when I set up a new project is configure the instrumentation – namely logging. I’d read that a new feature of ASP.NET Core is that it provides built-in interfaces for logging – ILogger and ILoggerFactory.

This is a nice feature and provides me with an opportunity to write cleaner code. In previous versions of MVC, if I’d injected a logger interface into my controller classes, I still needed to introduce a dependency on a 3rd party library to every class that used this interface. So even though I’m injecting a dependency using an interface, if I changed logging library, I’d have to modify each of these classes anyway. Of course I could write a wrapper library for my 3rd party logging library, but I’d prefer not to have to write (and test) even more code.

Having the logging interface built into the framework gives me the opportunity to clean this up. So if I now want to add logging to my controller, I can write something like the code below. You can see this doesn’t have a dependency on a 3rd party library’s namespace – just a namespace provided by Microsoft.

using Microsoft.AspNet.Mvc;
using Microsoft.Extensions.Logging;

namespace WebApplication.Controllers
{
    public class HomeController : Controller
    {
        private readonly ILogger<HomeController> _logger;

        public HomeController(ILogger<HomeController> logger)
        {
            _logger = logger;
        }

        public IActionResult Index()
        {
            _logger.LogInformation("Home controller and Index action - logged");
            return View();
        }
    }
}
For this post, I created a default MVC6 project, and modified the HomeController to match the code above – the only additions were the Microsoft.Extensions.Logging namespace, the _logger field and constructor parameter, and the LogInformation call in the Index action.

So how can we integrate third party libraries into an MVC6 project?

Configure the default ASP.NET MVC6 project to use NLog

Let’s configure NLog first.

  • First, install the pre-release NLog nuget package from the Package Manager Console;
     Install-package NLog.Extensions.Logging -pre
  • Then we need to add a configuration file – nlog.config – to the root of our project. You can get a perfect example from github here – just remember to change the file locations in this config file to directories that exist in your environment.
  • Finally, modify the Startup.cs file’s Configure method by adding a couple of lines of code.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // add the NLog provider to the built-in logger factory;
    // NLog picks up the nlog.config file from the application's root directory
    loggerFactory.AddNLog();

    // ...rest of the default Configure method is unchanged...
}

Now just run the project – notice I didn’t need to make any changes to my HomeController class. My project created a log file named “nlog-all-2016-03-27.log” which has the text:

2016-03-27 00:27:29.3796|WebApplication.Controllers.HomeController|INFO|Home controller and Index action - logged
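For reference, a minimal nlog.config along the lines of the github example might look like the below – the fileName path is illustrative and should point at a directory that exists in your environment (the layout shown here matches the log entry format above):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- write every log event to a single dated file -->
    <target name="allfile" xsi:type="File"
            fileName="c:\temp\nlog-all-${shortdate}.log"
            layout="${longdate}|${logger}|${uppercase:${level}}|${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Trace" writeTo="allfile" />
  </rules>
</nlog>
```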

Configure the default ASP.NET MVC6 project to use Serilog

Let’s say for whatever reason – maybe you want to use message templates to structure your logging data – you decide that you’d prefer to use the Serilog library instead of NLog. What changes do I need to make to my project to accommodate this?

Previously, if I’d wanted to change logging library, I’d have had to change every class that logged something – probably remove a namespace inclusion of “using NLog” and add a new one of “using Serilog”, and maybe even change the methods used to log information.

But with ASP.NET Core, I don’t need to worry about that.

  • First I need to install a pre-release nuget package for Serilog;
     Install-package Serilog.Sinks.File -pre
  • Next, I need to modify the Startup.cs file in a couple of places – the first change goes into the Startup method:
public Startup(IHostingEnvironment env)
{
    // For Serilog - configure the static logger to write to a file
    Log.Logger = new LoggerConfiguration()
        .WriteTo.File(@"C:\users\jeremy\Desktop\log.txt")
        .CreateLogger();

    // ...rest of the default Startup constructor is unchanged...
}

The next change goes into the Configure method:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // plug Serilog into the built-in logger factory
    loggerFactory.AddSerilog();

    // ...rest of the default Configure method is unchanged...
}

That’s it – after running the project again, I had logs written to the file at C:\users\jeremy\Desktop\log.txt, showing the entry:

2016-03-27 00:01:46.923 +00:00 [Information] Home controller and Index action - logged
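As an aside, the message templates mentioned earlier also work through the framework’s ILogger interface, so a controller can log structured, named properties without referencing Serilog directly – the property names below are just examples:

```csharp
// {OrderId} and {ElapsedMs} become named properties attached to the log
// event, rather than just text baked into a formatted string
_logger.LogInformation("Processed order {OrderId} in {ElapsedMs} ms", orderId, elapsedMs);
```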

Obviously I can also safely remove the NLog packages and configuration at this point.


So you can see the new ASP.NET Core framework has made it super easy to swap out logging library dependencies. A big advantage for me is that the logging interface used by each file is now part of the framework that Microsoft provide, which means my classes aren’t tightly coupled to an implementation.

.net, C# tip, IIS, MVC, Non-functional Requirements, Performance, Web Development

More performance tips for .NET websites which access data

I recently wrote about improving the performance of a website that accesses a SQL Server database using Entity Framework, and I wanted to follow up with a few more thoughts on optimising performance in an MVC website written in .NET. I’m coming towards the end of a project now where my team built an MVC 5 site, and accessed a database using Entity Framework. The engineers were all pretty experienced (scarred survivors from previous projects), so we were able to implement a lot of non-functional improvements during sprints as we went along. As our site was data driven, looking at that part was obviously important, but it wasn’t the only thing we looked at. I’ve listed a few of the other things we did during the project – some of these were one off settings, and others were things we checked for regularly to make sure problems weren’t creeping in.

Compress, compress, compress

GZip your content! This makes a huge difference to your page size, and therefore to the time it takes to render your page. I’ve written about how to do this for a .NET site and test that it’s working here. Do it once at the start of your project, and you can forget about it after that (except occasionally when you should check to make sure someone hasn’t switched it off!)
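For an IIS-hosted site, enabling this can be as simple as a couple of lines in web.config (see the linked post for the detail, and for how to verify it’s actually working):

```xml
<system.webServer>
  <!-- compress both static files and dynamically generated responses -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>
```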

Check your SQL queries, tune them, and look out for N+1 problems

As you might have guessed from one of my previous posts, we were very aware of how a few poorly tuned queries or some rogue N+1 problems could make a site grind to a halt once there were more than a few users. We tested with sample data which was the “correct size” – meaning that it was comparable with the projected size of the production database. This gave us a lot of confidence that the indexes we created in our database were relevant, and that our automated integration tests would highlight real N+1 problems. If you don’t have “real sized data” – as often happens where a development database just has a few sample rows – then you can’t expect to discover real performance issues early.
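To illustrate the N+1 problem with a hypothetical Entity Framework model of blogs and posts – lazy loading inside a loop quietly issues one extra query per row, where eager loading issues a single query:

```csharp
// N+1: one query to fetch the blogs, then one additional query per blog
// each time the lazy-loaded Posts collection is touched in the loop
var blogs = context.Blogs.ToList();
foreach (var blog in blogs)
{
    Console.WriteLine(blog.Posts.Count);
}

// One query: eager load the related posts up front with Include
var blogsWithPosts = context.Blogs
    .Include(b => b.Posts)
    .ToList();
```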

Aside: Real sized data doesn’t have to mean real data – anonymised/fictitious data is just as good for performance analysis (and obviously way better from a security perspective).

Use MiniProfiler to find other ADO.NET bottlenecks

Just use it. Seriously, it’s so easy, read about it here. There’s even a nuget package to make it even easier to include in your project. It automatically profiles ADO.NET calls, and allows you to profile individual parts of your application with a couple of simple lines of code (though I prefer to use this during debugging, rather than pushing those profile customisations into the codebase). It’s great for identifying slow parts of the site, and particularly good at identifying repeated queries (which is a giveaway symptom of the N+1 problem).
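Profiling an individual part of the application really is just a couple of lines – a sketch of the kind of thing I’d use temporarily while debugging (productRepository here is a hypothetical data access class):

```csharp
// the wrapped work shows up as a named, timed step in the MiniProfiler UI
using (MiniProfiler.Current.Step("Load product list"))
{
    var products = productRepository.GetAll();
}
```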

Reduce page bloat by optimising your images

We didn’t have many images in the site – but they were still worth checking. We used the Firefox Web Developer Toolbar plugin, and the “View Document Size” item from the “Information” menu. This gave us a detailed breakdown of all the images on the page being tested – and highlighted a couple of SVGs which had crept in unexpectedly. These were big files, and appeared in the site’s header, so every page would have been affected. They didn’t need to be SVGs, and it was a quick fix to change it to a GIF which made every page served a lot smaller.

For PNGs, you can use the PNGOut utility to optimise images – and you can convert GIFs to PNG as well using this tool.

For JPEGs, read about progressive rendering here. This is something where your mileage may vary – I’ll probably write more about how to do this in Windows at a future time.

Minifying CSS and JavaScript

The Web Developer Toolbar saved us in another way – it identified issues with a few JavaScript and CSS files. We were using the built in Bundling feature of MVC to combine and minify our included scripts – I’ve written about how to do this here – and initially it looked like everything had worked. However, when we looked at the document size using the Web Developer Toolbar, we saw that some documents weren’t being minified. I wrote about the issue and solution here, but the main point was that the Bundling feature was failing silently, causing the overall page size to increase very significantly. So remember to check that bundling/minifying is actually working – just because you have it enabled doesn’t mean it’s being done correctly!
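If you’re using MVC’s bundling, the BundleConfig registrations are what’s worth double-checking – a minimal sketch, with illustrative bundle and file names:

```csharp
public static void RegisterBundles(BundleCollection bundles)
{
    // each bundle is combined and minified when optimisations are enabled
    bundles.Add(new ScriptBundle("~/bundles/site").Include("~/Scripts/site.js"));
    bundles.Add(new StyleBundle("~/bundles/css").Include("~/Content/site.css"));

    // without this (or with compilation debug="true" in web.config),
    // bundling and minification are silently skipped
    BundleTable.EnableOptimizations = true;
}
```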

Remember to put CSS at the top of your page, and include JavaScript files at the bottom.
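In layout terms, that means something like the below in your _Layout.cshtml (file names illustrative) – stylesheets in the head so the page renders styled content immediately, scripts just before the closing body tag so they don’t block rendering:

```html
<head>
    <!-- CSS first: the browser can style content as it renders -->
    <link rel="stylesheet" href="~/Content/site.css" />
</head>
<body>
    <!-- ...page content... -->

    <!-- scripts last: downloading them no longer blocks first paint -->
    <script src="~/Scripts/site.js"></script>
</body>
```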

Check for duplicated scripts and remove them

We switched off bundling and minification to see all the scripts being downloaded, and noticed that we had a couple of separate entries for the JQuery library, and also for some JQuery-UI files. These were big files, and downloading them once is painful enough, never mind unnecessarily doing it again every time. It’s really worth checking to make sure you’re not doing this – not just for performance reasons, but because if you find this is happening it’s also a sign that there’s maybe an underlying problem in your codebase. Finding it early gives you a chance to fix this.

Do you really need that 3rd party script?

We worked hard to make sure that we weren’t including libraries just for the sake of it. There might be some cool UI feature which is super simple to implement by just including that 3rd party library…but every one of those 3rd party libraries adds to your page size. Be smart about what you include.

Tools like JQuery UI even allow you to customise your script to be exactly as big or small as you need it to be.

Is your backup schedule causing your site to slow down?

I witnessed this on a previous project – one of our team had scheduled the daily database backup to happen after we went home…leading some of our users elsewhere in the world, in a later time zone, to see a performance deterioration for about half an hour at the same time every day. Rescheduling the daily backup to a different time of day caused us no problems and removed a significant problem for our users.

Is someone else’s backup schedule causing your site to slow down?

There’s a corollary to the previous point – if you’re seeing a mysterious performance deterioration at the same time every day and you’re absolutely sure it’s not something that you or your users are doing, check if your site is on shared hosting. When I contacted our hosts and requested that our company VMs were moved onto a different SAN, it miraculously cleared up a long-standing performance issue.


There’s a few tips here which really helped us keep our pages feeling fast to our users (and some other tips that I’ve picked up over the years). We didn’t do all of this at the end of the project, this was something we focussed on all the way through. It’s really important to make sure you’re checking these things during sprints – and part of your Definition of Done if possible.