Non-functional Requirements, Security

Another good reason to use GitHub to host your .NET source code – automated NuGet package vulnerability scans

This quick post is about how you can use GitHub and the OSS Index to scan your project’s NuGet packages for vulnerabilities – a good example of how to perform application security testing early in the application life cycle (also known as ‘shift left‘).

So here’s a problem

You’re working on a .NET Core application, and obviously using some of the libraries provided by Microsoft. Behind the scenes, Microsoft’s security teams have found a security hole in one of these libraries and issued a patched NuGet package.

You wouldn’t always immediately update NuGet packages every time there’s an upgrade, but you definitely would upgrade if you knew there was a vulnerability in one of your dependencies.

How are you going to know that Microsoft have found and fixed a vulnerability, so you can prioritise an application upgrade?

GitHub is helping to solve this problem

A little while back I got a couple of notifications from GitHub that one of my projects refers to a Microsoft NuGet package – Microsoft.AspNetCore.All, version 2.0.5 – and the library contains a potential security hazard.


I was pretty surprised – I already use the Audit.NET extension within Visual Studio 2017 to audit my NuGet packages against the OSS index. This extension triggers an error at design time in Visual Studio if it detects that my project uses a package that has a known vulnerability.

If you haven’t checked out the Sonatype OSS index, I really encourage you to do so – it has a bunch of useful tools and information to identify if there are known security vulnerabilities lurking in your open source dependencies.


GitHub helpfully provided me with a bit more detail on what the problem was – my project used version 2.0.5 of the Microsoft.AspNetCore.All NuGet package, and this version is vulnerable to a couple of forms of attack (denial of service and excess consumption of resources).


This makes me extremely glad to be using GitHub to store my code, as it directly highlights to me that there’s a potential hazard lurking in the project dependencies. Now I can do something about it – like upgrade my libraries to the patched version v2.0.9, and push the changes to GitHub.

And after pushing the updated version, the security alerts disappear.
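As an aside, newer versions of the .NET SDK (released after the time of writing) let you run a similar vulnerability check locally from the command line, which is a useful complement to GitHub’s alerts:

```shell
# Restore first so the full dependency graph is known
dotnet restore
# Report any referenced NuGet packages with known vulnerabilities;
# the second form also walks transitive dependencies
dotnet list package --vulnerable
dotnet list package --vulnerable --include-transitive
```

These commands draw on similar public advisory data, so they make a quick pre-push check.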


Wrapping up

I like to ‘shift left‘ as much as possible with my application security testing – it’s unambiguously better to carry out security testing as early as possible in the application life cycle. Having an extra security test automatically built into my GitHub source code repository is a great addition to the security testing process.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Non-functional Requirements, Security

Serve CSS and JavaScript from CDNs safely with subresource integrity (SRI) attributes

I’m building a web application at the moment which plots data on a map using the Leaflet JS framework. Leaflet JS is fantastic, and has a huge number of open-source community plugins which make it even more useful.

For these plugins, I could download them and host the JavaScript and CSS on my own website, but I’d prefer to use a CDN (Content Delivery Network) like CloudFlare. Using a service like this means I don’t have to host the files myself, and these files will be served to my users from a location that’s close to them.

Obviously this means that the CDN is now in control of my files – how can I make sure these files haven’t been tampered with before I serve them up to my users?

The W3C recommends that “compromise of a third-party service should not automatically mean compromise of every site which includes its scripts“.

Troy Hunt wrote about this a while back and recommends using the ‘integrity’ attributes in script and link tags that reference subresources – supported browsers will calculate a hash of the file served by the CDN and compare that hash with the value in the integrity attribute. If they don’t match, the browser doesn’t serve the file.

The catch is that not all browsers support this – though coverage in modern browsers is pretty good. You can check which browsers support the integrity attribute.


How can I calculate the hash of my file to put in the integrity attribute?

I like to use Scott Helme’s utility to create the hash of JavaScript and CSS files. This calculates 3 different hashes, using SHA256, SHA384 and SHA512.
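If you’d rather not upload your file to a third-party site, you can generate the same base64-encoded digest locally with openssl – here’s a sketch, using a throwaway file to stand in for the CDN-hosted script:

```shell
# create a sample file standing in for the CDN-hosted script
printf 'console.log("hello");' > plugin.js

# digest the file and base64-encode the raw bytes -
# the result is the value that follows "sha384-" in the integrity attribute
HASH=$(openssl dgst -sha384 -binary plugin.js | openssl base64 -A)
echo "integrity=\"sha384-$HASH\""
```

Swap `-sha384` for `-sha256` or `-sha512` to produce the other two hash variants.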

So instead of my script tag looking like this:

<script src=""></script>

My script tags now look like this:

<script src="" 
        integrity="sha256-tfcLorv/GWSrbbsn6NVgflWp1YOmTjyJ8HWtfXaOaJc= sha384-/I247jMyT/djAL4ijcbNXfX+PA8OZmkwzUr6Gotpgjz1Rxti1ZECG9Ne0Dj1pXrx sha512-nMMmRyTVoLYqjP9hrbed9S+FzjZHW5gY1TWCHA5ckwXZBadntCNs8kEqAWdrb9O7rxbCaA4lKTIWjDXZxflOcA=="></script>

This also works for CSS – without the integrity attribute it would look like the code below:

<link rel="stylesheet" href="" />

But a more secure version is below:

<link rel="stylesheet" 
      href="" integrity="sha256-YR4HrDE479EpYZgeTkQfgVJq08+277UXxMLbi/YP69o= sha384-BF7C732iE6WuqJMhUnTNJJLVvW1TIP87P2nMDY7aN2j2EJFWIaqK89j3WlirhFZU sha512-puBpdR0798OZvTTbP4A8Ix/l+A4dHDD0DGqYW6RQ+9jxkRFclaxxQb/SJAWZfWAkuyeQUytO7+7N4QKrDh+drA==" />

Wrapping up

Hopefully this is useful information, and provides a guide on how to make sure your site doesn’t serve up JavaScript or CSS content that has been tampered with.

Azure, GIS, Leaflet, Security, Web Development

Getting started with Azure Maps, using Leaflet to display roads and satellite images, and comparing different browser behaviours

In this post I’m going to describe how to use the new(ish) Azure Maps service from Microsoft with the Leaflet JavaScript library. Azure Maps provides its own API for Geoservices, but I have an existing application that uses Leaflet, and I wanted to try out using the Azure Maps tiling services.

Rather than just replicating the example that already exists on the excellent Azure Maps Code Samples site, I’ll go a bit further:

  • I’ll show how to display both the tiles with roads and those with aerial images
  • I’ll show how to switch between the layers using a UI component on the map
  • I’ll show how Leaflet can identify your present location
  • And I’ll talk about my experiences of location accuracy in Chrome, Firefox and Edge browsers on my desktop machine.

As usual, I’ve made my code open source and posted it to GitHub here.

First, use your Azure account to get your map API Key

I won’t go into lots of detail about this part – Microsoft have documented the process very well here. In summary:

  • If you don’t have an Azure account, there are instructions here on how to create one.
  • Create a Maps account within the Azure Portal and get your API Key (instructions here).

Once you have set up a resource group in Azure to manage your mapping services, you’ll be able to track usage and errors through the portal – I’ve pasted graphs of my usage and recorded errors below.


You’ll use this API Key to identify yourself to the Azure Maps tiling service. Azure Maps is not a free service – pricing information is here – although presently on the S0 tier there is an included free quantity of tiles and services.

API Key security is one key area of Azure Maps that I would like to be enhanced – the API Key has to be rendered on the client somewhere in plain text and then passed back to the maps API. Even with HTTPS, the API Key could be easily intercepted by someone viewing the page source, or using a tool to read outgoing requests.

Many other tiling services use CORS to restrict which domains can make requests, but:

  • Azure Maps doesn’t do this at the time of writing and
  • This isn’t real security because the Origin header can be easily modified (I know it’s a forbidden header name for browsers, but tools like cURL can spoof the Origin). More discussion here and here.

So this isn’t a solved problem yet – I’d recommend you consider how you use your API Key very carefully and bear in mind that if you expose it on the internet you’re leaving your account open to abuse. There’s an open issue about this raised on GitHub and hopefully there will be an announcement soon.

Next, set up your web page to use the Leaflet JS library

There’s a very helpful ‘getting started‘ tutorial on the Leaflet website – I added the stylesheet and JavaScript to my webpage’s head using the code below.

<link rel="stylesheet" href=""
      crossorigin="" />
<script src=""></script>

Now add the URLs to your JavaScript to access the tiling services

I’ve included some very simple JavaScript code below for accessing two Azure Maps services – the tiles which display roadmaps and also those which have satellite images.

function satelliteImageryUrl() {
    return "{z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}

function roadMapTilesUrl() {
    return "{z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}

If you’re interested in reading more about these two tiling services, there’s more about the road map service here and more about the satellite image service here.

Now add the tiling layers to Leaflet and create the map

I’ve written a JavaScript function below which registers the two tiling layers (satellite and roads) with Leaflet. It also instantiates the map object, and attempts to identify the user’s location from the browser. Finally it registers a control which will appear on the map and list the available tiling services, allowing me to toggle between them on the fly.

var map;
function GetMap() {
    var subscriptionKey = '[[[**YOUR API KEY HERE**]]]';
    var satellite = L.tileLayer(satelliteImageryUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureSatelliteMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });

    var roads = L.tileLayer(roadMapTilesUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureRoadMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });

    // instantiate the map object and display the 'roads' layer
    map = L.map('myMap', { layers: [roads] });
    // attempt to identify the user's location from the browser
    map.locate({ setView: true, enableHighAccuracy: true });
    map.on('locationfound', onLocationFound);
    // create an array of the tiling base layers and their 'friendly' names
    var baseMaps = {
        "Azure Satellite Imagery": satellite,
        "Azure Roads": roads
    };

    // add a control to map (top-right by default) allowing the user to toggle the layer
    L.control.layers(baseMaps, null, { collapsed: false }).addTo(map);
}

Finally, I’ve added a div to my page which specifies the size of the map, gives it the Id “myMap” (which I’ve used in the JavaScript above when instantiating the map object), and I call the GetMap() method when the page loads.

<body onload="GetMap()">
    <div id="myMap" style="position:relative;width:900px;height:600px;"></div>
</body>

If the browser GeoServices have identified my location, I’ll also be given an accuracy in meters – the JavaScript below allows me to draw a circle on my map to indicate where the browser believes my location to be.

map.on('locationfound', onLocationFound);

function onLocationFound(e) {
    var radius = e.accuracy / 2;
    L.marker(e.latlng).addTo(map)
        .bindPopup("You are within " + radius + " meters from this point")
        .openPopup();
    L.circle(e.latlng, radius).addTo(map);
}

And I’ve taken some screenshots of the results below – first of all the results in the MS Edge browser showing roads and landmarks near my location…


…and swapping to the satellite imagery using the control at the top right of the map.


Results in Firefox and Chrome

When I ran this in Firefox and Chrome, I found that my location was identified with much less accuracy. I know both of these browsers use the Google GeoLocation API and MS Edge uses the Windows Location API so this might account for the difference on my machine (Windows 10), but I’d need to do more experimentation to better understand. Obviously my laptop machine doesn’t have GPS hardware, so testing on a mobile phone probably would give very different results.


Wrapping up

We’ve seen how to use the Azure Maps tiling services with the Leaflet JS library, and create a very basic web application which uses the Azure Maps tiling services to display both road and landmark data, and also satellite aerial imagery. It seems to me that MS Edge is able to identify my location much more accurately on a desktop machine than Firefox or Chrome on my Windows 10 machine (within a 75m radius on Edge, and over 3.114km radius on Firefox and Chrome) – however, your mileage may vary.

Finally, as I emphasised above, I have concerns about the security of a production application using an API Key in plain text inside my JavaScript, and hopefully Microsoft will deploy a solution with improved security soon.


.net core, Azure, C# tip, Clean Code, Cloud architecture, Security

Simplifying Azure Key Vault and .NET Core Web App (includes NuGet package)

In my previous post I wrote about securing my application secrets using Azure Key Vault, and in this post I’m going to write about how to simplify the code that a .NET Core web app needs to use the Key Vault.

I previously went into a bit of detail about how to create a Key Vault and add a secret to that vault, and then add a Managed Service Identity to a web app. At the end of the post, I showed some C# code about how to access a secret inside a controller action.

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        var secret = await keyVaultClient.GetSecretAsync("").ConfigureAwait(false);

        ViewBag.Secret = secret.Value;

        return View();
    }
    // rest of the class...
}

While this works for the purposes of an example of how to use it in a .NET Core MVC application, it’s not amazingly pretty code – for any serious application, I wouldn’t want all this code in my controller.

I think it would be more logical to access my secrets at the time my web application starts up, and put them into the Configuration for the app. Therefore if I need them later, I can just inject an IConfiguration object into my class, and use that to get the secret values.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;

namespace MyWebApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        private static IWebHost BuildWebHost(string[] args)
        {
            return WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration(builder =>
                {
                    var azureServiceTokenProvider = new AzureServiceTokenProvider();
                    var keyVaultClient =
                        new KeyVaultClient(
                            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

                    builder.AddAzureKeyVault("<<your key vault URL>>",
                        keyVaultClient, new DefaultKeyVaultSecretManager());
                })
                .UseStartup<Startup>()
                .Build();
        }
    }
}
But as you can see above, the code I need to add here to build my web host is not very clear either – it seemed a shame to lose all the good work done in .NET Core 2 to simplify this class.

So I’ve created a NuGet package to allow me to have a simpler and cleaner interface – it’s uploaded to the NuGet repository here, and I’ve open sourced the code at GitHub here.

As usual you can install pretty easily from the command-line:

Install-Package Kodiak.Azure.WebHostExtension -prerelease

Now my BuildWebHost method looks much cleaner – I can just add the fluent extension AddAzureKeyVaultSecretsToConfiguration and pass in the URL of the vault.

private static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .AddAzureKeyVaultSecretsToConfiguration("<<your key vault URL>>")
        .UseStartup<Startup>()
        .Build();
}

I think this is a more elegant implementation, and now if I need to access the secret inside my controller’s action, I can use the cleaner code below.

using System.Diagnostics;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
namespace MyWebApplication.Controllers
{
    public class HomeController : Controller
    {
        private readonly IConfiguration _configuration;

        public HomeController(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public IActionResult Index()
        {
            ViewBag.Secret = _configuration["MySecret"];

            return View();
        }
    }
}

Summing up

Azure Key Vault (AKV) is a useful tool to help keep production application secrets secure, although like any tool it’s possible to mis-use it, so don’t assume that just because you’re using AKV that your secrets are secured – always remember to examine threat vectors and impacts.

There’s a lot of documentation on the internet about AKV, and a lot of it recommends using the client Id and secrets in your code – I personally don’t like this because it’s always risky to have any kind of secret in your code. The last couple of posts that I’ve written have been about how to use an Azure Managed Service Identity with your application to avoid secrets in your code, and how to simplify the C# code you might use in your application to access AKV secrets.

.net, Azure, Cloud architecture, Security

Using the Azure Key Vault to keep secrets out of your web app’s source code

Ahead of the Global Azure Bootcamp, I’ve been looking at how I could allow a distributed team to develop and deploy a web application to access an Azure SQL Server instance in a secure way. There are a few different ways that I could share credentials to access my Azure SQL database:

  • Environment variables – this keeps secrets (like passwords) out of the code and mitigates the risk they’d be committed to source code. But Environment Variables are stored in plain text, so if the host is compromised, those secrets are lost.
  • .NET Core Secret Manager tool – there is a NuGet package which allows the user to keep application secrets (like a password) in a JSON file stored in the user profile directory – again, this mitigates the risk that secrets would be committed to source code, but the secret is still stored in plain text and I’d still have to share it.
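For context, the Secret Manager workflow looks like this from the command line (the key name is just an example, and the project needs a UserSecretsId in its .csproj):

```shell
# run from the project directory; stores the value outside the project tree,
# in plain-text JSON under the user profile
dotnet user-secrets set "AzureSqlPassword" "not-a-real-password"
# show what's stored for this project
dotnet user-secrets list
```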

Neither of these options is ideal for me – I would rather grant access to my Azure SQL database by role, and not share passwords with developers which then need to be written down somewhere, either in JSON or in my automated deployment scripts. And even though the two options above mitigate the risk that passwords are committed to source code, they don’t eliminate the risk.

So I was pretty excited to read about Azure Key Vault (AKV) – a way to securely store secrets in the cloud and avoid any risk of secrets being committed to source code.

This page from Microsoft presents a few different user stories and how AKV meets these needs. (There’s more information on Stack Overflow here.)

But after reading the document here I was a bit surprised that the implementation described still used the Secret Manager tool – it seemed like we’re just swapping storing secrets in one place for another. I searched around to find how this could be done without the Secret Manager tool, and in numerous blog posts and videos I saw developers setting up a secret in AKV, but then copying a “client secret” from Azure into their code, and I thought this really defeated the purpose of having a vault for secrets.

Fortunately I’ve found what I need to do to use AKV with my .NET Core web application and not have to add any secrets to code – I secure my application with a Managed Service Identity. I’ve described how to do this below, with the C# code I needed to use the feature.

It looks like there’s a lot of steps but they’re all very simple – this is a lot less complicated than I thought it would be!

How to keep the secrets out of your source code

  • First create a vault
  • Add a secret to your vault
  • Secure your app service using Managed Service Identity
  • Access the secret from your source code with a KeyVaultClient

I’ll cover each of these in turn, with code samples at the end to show how to access the AKV.

First create a vault

Open the Azure portal and log in – click on the “All services” menu item on the left hand side, and search for “key vault” – this should filter the options so you have a screen like the one below.


Once you’ve got the Key Vaults option, click on it to see a screen like the one below which will list the Key Vaults in your subscription. To create a new vault, click on the “Add” button, highlighted in the image below.


This will open another “blade” (which I just see as jargon for a floating window) in the portal where you can enter information about your new vault.

As you can see in the image below, I’ve called my vault “MyWebsiteSecret”, and I’ve created a new resource group for it called “Development_Secret”. I’ve chosen the location to be “UK West”, and by default my user has been added as the first principal who has permission to access this.


I clicked on the Create button at the bottom of the screen, and the portal presents a toast at the top right to say my vault is in the process of being created.


Eventually this changes when the deployment has succeeded.


So the Azure portal screen now shows the list page again, and my new vault is on this page.


Add a secret to the vault

Now the vault is created, we can create a new secret in it. Click on the vault created in the previous step to see the details for this vault (shown below).


Now click on the “Secrets” menu item to open a blade showing secrets in this vault. Obviously as I’ve just created it, there are no secrets yet. We can create one by clicking on the “Generate/Import” button, highlighted in the image below.


After clicking on the “Generate/Import” button, a new blade opens where you can enter details of your secret. I chose a name of “TheSecret”, entered a secret value which is masked, and entered in a bit of text for the Content Type to describe the type of secret.


Once I click on “Create” at the bottom of the blade, the site returns me to the list of secrets in this vault – but this time, you can see my secret in the list, as shown below.


Secure the app service using Managed Service Identity

I have deployed my .NET Core application to Azure previously – I won’t go into lots of detail about how to deploy a .NET Core application since it’s in a million other blog posts and videos – basically I created a new App Service through the Azure portal, and linked it to a .NET Core application on my GitHub profile. Now when I push code to that application on GitHub, Azure will automatically build and deploy it.

Obviously this isn’t a full deployment pipeline – but it works for this simple application.

But I do want to show how to create a Managed Service Identity for this application – as shown in the image below, I’ve searched for my App Service on Azure.


I selected my app service to open a blade with options for this service, and selected “Managed Service Identity”, as shown below. By default it’s off – I’ve drawn an arrow below beside the button I pressed to turn it on for the app service, and after that I clicked on Save to persist my changes.


Once it was saved, I needed to go back to the key vault and secret that I created earlier and select “Access Policies”, as shown below. As I mentioned earlier, my name is in there as having permission by default, but I want my application to have permission too – so I clicked on the “Add new” option, which I’ve highlighted with a red arrow below.


The blade below opens up – for the principal, I selected my app service (called “MyAppServiceForTestingVaults”) – by default nothing is selected so you just need to click on the option to open another blade where you can search for your app service. It’ll only be available if you’ve correctly configured the Managed Service Identity as described above.

Also, I selected two “Secret permissions” from the dropdown – Get and List.


Once I click OK, I can now see that my application is in the list of app services which have access to the secret I created earlier.


Add code to my .NET application to access these secrets

I use the Azure Services Authentication Extension to simplify development with my Visual Studio account.

I’m going to choose a really simple example – modifying the Index action of a HomeController class in the default .NET Core MVC website. I also need to add a NuGet package to my project:

Install-Package Microsoft.Azure.Services.AppAuthentication -Version 1.1.0-preview

The code below allows me to authenticate myself to my Azure instance, and get the secret from my vault.

You can see there’s a URL specified in the action below – it’s in the format of:

https://<<name of the vault>>.vault.azure.net/secrets/<<name of the secret>>

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        var secret = await keyVaultClient.GetSecretAsync("").ConfigureAwait(false);

        ViewBag.Secret = secret.Value;

        return View();
    }
    // rest of the class...
}


Now I can just modify the Index.cshtml view and add some code to show the secret (as simple as adding @ViewBag.Secret into the cshtml) – and when I run the project locally, I can now see that my application has been able to access the vault and decrypt my secret (as highlighted in the image below) without any client Id or client secret information in my code – this is because my machine recognises that I’m authenticated to access my own Azure instance.

I can also deploy this code to my Azure App Service and I’ll get the same results, because the application’s Managed Service Identity ensures that my application in Azure has permission to access the secret.
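As a sanity check, the Azure CLI can read the same secret back while you’re logged in – the vault and secret names below are the ones created earlier in this post:

```shell
# requires the Azure CLI and an authenticated session (az login),
# plus Get permission on the vault's access policy
az keyvault secret show --vault-name MyWebsiteSecret --name TheSecret --query value
```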


Summing up

This was a really simple example, and it’s just to illustrate how to allow developers to access AKV secrets without having to add secret information to source code. If a developer is determined to compromise security, they could still decrypt passwords and disseminate them another way – so we’d have to tighten up security for a real-world application. For example, we could have different secrets stored in different environmental resource groups as we promote our application from Dev to QA/Staging and finally to Production.


.net core, Security

Creating a RESTful Web API template in .NET Core 1.1 – Part #4: Securing a service against XSS, clickjacking and drive-by downloads

Back in October 2015, I posted a short article with some information on HTTP response headers which, if configured correctly in IIS, can help protect your site.

I decided it was time to update this article for .NET Core 1.1 and Kestrel, with how to protect against cross-site scripting (XSS), clickjacking and drive-by downloads.

What does a security header scan say about the default Web API service?

I created a RESTful Web API service using the default project template in Visual Studio 2015, and uploaded it to Azure. I decided to use Scott Helme‘s site to check if there were any problems. Looks like there are a few…


The site helpfully gives a bit more detail on the specific problems.



Obviously the default project template serves no realistic purpose, and it’s not really fair to criticise a template intended as a programming starting point – all I’m doing here is charting a journey from a poor rating to a better one.

Start with the SecurityHeaders NuGet package

So we can start fixing these problems by adding some headers – and with .NET Core, it’s really easy with the SecurityHeaders NuGet package, written by Andrew Lock. This starts to give my websites and services protection with just a few lines of code.

I can install this package to the project in Visual Studio from the command line:

Install-Package NetEscapades.AspNetCore.SecurityHeaders

Then to add the three headers listed above, we change the Configure method in the web project’s Startup.cs file – simply add a collection of pre-defined policies, and apply these to the response.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    var policyCollection = new HeaderPolicyCollection()
        .AddFrameOptionsSameOrigin()     // prevent click-jacking
        .AddXssProtectionBlock()         // prevent cross-site scripting (XSS)
        .AddContentTypeOptionsNoSniff(); // prevent drive-by-downloads

    app.UseSecurityHeaders(policyCollection);

    // ...add other configuration here...
}

Finally, make sure the container uses the custom middleware by registering it in the “ConfigureServices” method in the Startup.cs file.

public void ConfigureServices(IServiceCollection services)
{
    services.AddCustomHeaders();
    // ...other services here...
}

Andrew has written a great post going into much more detail here and has open-sourced the code on GitHub here.

So now if I scan my service again, I get a better result:


Reduce XSS risks further by adding a content security policy

These three headers are a good start – but there’s room for improvement. As the image above suggests, I can improve things further using a Content-Security-Policy.

You can read lots more about a Content Security Policy here, but basically it adds an extra tier of protection against XSS for modern browsers by allowing you to specify what resources the site is allowed to load. The site here gives a good “starter policy” to use – shown below.

default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';

If I wanted to allow my site to use JavaScript resources from a CDN, I could modify this policy to specify the origin of the CDN – however, I’ll stick with the starter policy for now.
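For example – and the CDN origin here is just an illustrative assumption – allowing scripts and styles from one CDN while keeping everything else locked down would look like this:

```text
default-src 'none'; script-src 'self' https://cdnjs.cloudflare.com; connect-src 'self'; img-src 'self'; style-src 'self' https://cdnjs.cloudflare.com;
```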

Again, I can use the SecurityHeaders NuGet package to help me out – there’s a useful extension method called “AddCustomHeader” that I can use. I can modify the code above with just one extra line to add a content-security-policy.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    var policyCollection = new HeaderPolicyCollection()
        .AddFrameOptionsSameOrigin()     // prevent click-jacking
        .AddXssProtectionBlock()         // prevent cross-site scripting (XSS)
        .AddContentTypeOptionsNoSniff()  // prevent drive-by-downloads
        .AddCustomHeader("Content-Security-Policy", "default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';");

    app.UseSecurityHeaders(policyCollection);
}

So let’s look now at how the scanner rates our service:


Looking pretty good!

And for bonus marks

We’ve covered the major bases – but there were a couple of warnings about information being leaked by the server, namely the server type and the framework.


How do we get rid of these?

Unfortunately the SecurityHeaders NuGet package can’t help us here – these headers are added too late in the middleware pipeline for us to intercept. Fortunately we have a couple of extra tricks up our sleeve to get rid of these.

Remove the Server header

.NET Core has a built-in feature to remove this – we just modify the settings for the Kestrel web host in our application’s Program.cs file. The line we need is the UseKestrel call in the code snippet below.

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel(options => options.AddServerHeader = false) // stop Kestrel adding the Server header
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}

Remove the X-Powered-By header

This is a slightly strange one – we can’t remove this header in our C# code, because it appears to be added by IIS in Azure. Fortunately we can remove it by changing our web.config file and adding a customHeaders node, as shown in the snippet below:

<configuration>
    <!-- ... other web.config stuff goes here... -->
    <system.webServer>
        <httpProtocol>
            <customHeaders>
                <remove name="X-Powered-By" /> <!-- required because this header is added by IIS -->
            </customHeaders>
        </httpProtocol>
    </system.webServer>
</configuration>

So after deploying this, we can check what securityheaders.io says now:


Still an A+, and this time no more informational warnings.

You might still use IIS as a reverse proxy in front of a set of web-servers hosting Kestrel, and obviously you’ll still need to add these headers to IIS as described here.

Double checking

I thought it might be a good idea to check this against more than one site that benchmarks the security of your site’s headers – another useful one is from High-Tech Bridge, who provide a free header checking service. I got an A from this service – slightly lower than from securityheaders.io, because I’m on HTTP rather than HTTPS, and therefore the cookie dropped in by Azure isn’t secured. Obviously moving to HTTPS would solve that problem.

I found some interesting statistics on High-Tech Bridge’s site – they claim that only 2.4% of sites get an A grade for the security of their headers, while a staggering 85.3% don’t even have this basic level of security. It’s amazing that just a few lines of .NET code will get you into the top 2.4%.


Wrapping up

You can see that with just a few extra lines of .NET code, you can add significant and valuable extra protection to your site. I personally prefer doing as much as possible of this in my C# code, and the SecurityHeaders NuGet package written by Andrew Lock helps a lot. You can check the security of your own site’s headers using securityheaders.io, written by Scott Helme. I think these headers are an important addition to my Web API and App templates, as it bakes elements of security into the code from the earliest stages of a project.

.net, Clean Code, Security

Using C# to create a bitmap of a fingerprint from the BioMini scanner and Neurotec FFV SDK

Previously, I wrote about my experiences with the BioMini fingerprint scanner from Suprema and using it with the Neurotechnology Free Fingerprint Verification SDK. The last thing I mentioned in that post was that I’d research how to use these tools to scan a fingerprint and use .NET to generate a bitmap image of the finger that was scanned.

There is a C# sample application bundled with the Neurotechnology FFV SDK – this is a Windows Forms application, and has quite a lot of functionality built around Enrollment (the act of scanning a fingerprint), Verification (the act of comparing a fingerprint to another), and storing fingerprints in a local database.

At this point, I am really just interested in scanning the fingerprint and generating a bitmap of the scanned image. I wanted to cut through all of the extra sample application code, and write the absolute bare minimum code necessary to do this. I also wanted to make this code clean – potentially with a view to creating a NuGet package.

First I designed what the interface needed to look like – I wanted to enroll a fingerprint, and create a bitmap file. Also, I wanted to make sure that the resources used by the library were released, so I knew I needed to implement the IDisposable interface. I designed my interface to look like the code below.

public interface IFingerprintScanner : IDisposable
{
    void CreateBitmapFile(string path);
    void Enroll();
}

Next I needed an implementation of enrolling a fingerprint and generating the bitmap image.

Enrollment and image generation is pretty straightforward – the Neurotechnology FFV documentation is extremely useful for this. There are three steps:

  • First create a scanning engine, based on the Nffv object in the FFV SDK. This object takes three parameters – the verification database name and password (which I don’t care about, I just want to enroll), and a string representing the manufacturer code (for the BioMini scanner, the manufacturer code is “Suprema”);
_scanningEngine = new Nffv("FakeDatabaseName", "", _manufacturerCode);
  • Then call the Enroll method, which makes the scanner hardware switch on and wait for someone to put their finger on the scanner. This returns an NffvUser object, which contains the information scanned in.
_scannerUser = _scanningEngine.Enroll(_timeout, out engineStatus);
  • Finally, I can then call the GetBitmap() method on the NffvUser object, which returns a Bitmap object.
var image = _scannerUser.GetBitmap();

I decided to create a scanner class that was abstract, which would take the manufacturer code as a parameter – the class looks like the code below:

public abstract class AbstractFingerprintScanner : IDisposable, IFingerprintScanner
{
    private Nffv _scanningEngine;
    private NffvStatus engineStatus;

    private NffvUser _scannerUser;

    private uint _timeout;

    private string _manufacturerCode;

    public AbstractFingerprintScanner(uint timeout, string manufacturerCode)
    {
        _timeout = timeout;
        _manufacturerCode = manufacturerCode;
    }

    public void Enroll()
    {
        _scanningEngine = new Nffv("FakeDatabaseName", "", _manufacturerCode);
        // when this next line is executed, a signal is sent to the hardware fingerprint scanner to start detecting a fingerprint.
        _scannerUser = _scanningEngine.Enroll(_timeout, out engineStatus);
    }

    public void CreateBitmapFile(string path)
    {
        if (engineStatus == NffvStatus.TemplateCreated)
        {
            var image = _scannerUser.GetBitmap();
            image.Save(path);
        }
        else
        {
            throw new Exception(string.Format("Bitmap was not created - Enrollment result status: {0}", engineStatus));
        }
    }

    public void Dispose()
    {
        if (_scanningEngine != null)
        {
            _scanningEngine = null;
        }
    }
}
This means that I can create a very simple concrete instantiation of the BioMini fingerprint scanner software:

public class BioMiniFingerprintScanner : AbstractFingerprintScanner
{
    private static string SCANNER_MANUFACTURER = "Suprema";

    public BioMiniFingerprintScanner(uint timeout) : base(timeout, SCANNER_MANUFACTURER) { }
}

And finally, the code I need to enroll and create a print becomes simple also:

static void Main(string[] args)
{
    uint timeout = 10000;

    using (var scanner = new BioMiniFingerprintScanner(timeout))
    {
        scanner.Enroll();
        scanner.CreateBitmapFile(@"C:\temp\fingerprint.bmp"); // example output path
    }
}
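One nice side-effect of hiding the hardware behind IFingerprintScanner is that calling code can be exercised without a physical scanner. The sketch below is entirely hypothetical – a FakeFingerprintScanner that writes a placeholder file instead of talking to the BioMini hardware (the interface is repeated here so the sketch is self-contained):

```csharp
using System;
using System.IO;

public interface IFingerprintScanner : IDisposable
{
    void CreateBitmapFile(string path);
    void Enroll();
}

// Hypothetical fake: implements the same interface as the real scanner,
// so consuming code can be tested without the BioMini hardware.
public class FakeFingerprintScanner : IFingerprintScanner
{
    private bool _enrolled;

    // pretend a finger was scanned successfully
    public void Enroll() => _enrolled = true;

    public void CreateBitmapFile(string path)
    {
        if (!_enrolled)
            throw new InvalidOperationException("Enroll must be called first");
        // write the two "BM" magic bytes as a stand-in for a real bitmap
        File.WriteAllBytes(path, new byte[] { 0x42, 0x4D });
    }

    public void Dispose() { /* nothing to release in the fake */ }
}

public static class Demo
{
    public static void Main()
    {
        using (IFingerprintScanner scanner = new FakeFingerprintScanner())
        {
            scanner.Enroll();
            scanner.CreateBitmapFile("fake.bmp");
        }
        Console.WriteLine(File.Exists("fake.bmp")); // True
    }
}
```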
Non-functional Requirements, Security, Web Development

Use securityheaders.io to check your site’s header security in an instant

A while back I posted an article on how to improve the security of your site by configuring headers in IIS.

I thought I’d follow up on this with a quick post about a fantastic online utility – securityheaders.io.

Plug your website URL into this site, and get a report immediately about how good your site’s headers are, and what you can do to tighten things up. The report is understandable, and every bit of information – whether that’s missing headers, or headers configured insecurely – has a link to the site creator’s blog explaining what it means in great detail.

Sadly my blog – which is all hosted and managed for me – comes out with an E rating. How embarrassing… one day I will find the time to host all this on my own domain.

Final hint – as you might expect, if you put securityheaders.io into the site itself, you’ll see what an A+ report looks like!