.net, .net core, Azure, Azure DevOps, Azure Pipelines, Web Development

Everything as Code with Azure DevOps Pipelines: C#, ARM, and YAML: Part #1

Like a lot of developers, I’ve been using Azure DevOps to manage my CI/CD pipelines. I think (or I hope, anyway) most developers now use a continuous integration process – committing to a code repository like GitHub, which hooks into a tool (like Azure DevOps Build Pipelines) that checks every change to make sure it compiles and doesn’t break any tests.

Just a note – this post doesn’t have any code, it’s more of an introduction to how I’ve found Azure Pipelines can make powerful improvements to a development process. It also looks at one of the new preview features, multi-stage build pipelines. I’ll expand on individual parts of this with more code in later posts.

Azure Pipelines have traditionally been split into two types – build and release pipelines.

[Image: the traditional split between Build and Release pipelines]

I like to think of the split in this way:

Build Pipelines

I’ve used these for Continuous Integration (CI) activities, like compiling code, running automated tests, creating and publishing build artifacts. These pipelines can be saved as YAML – a human-readable serialisation language – and pushed to a source code repository. This means that one developer can pull another developer’s CI code and run exactly the same pipeline without having to write any code. One limitation is that Build Pipelines are a simple list of tasks/jobs – historically it’s not been possible to split these instructions into separate stages which can depend on one or more conditions (though later in this post, I’ll write more about a preview feature which introduces this capability).

Release Pipelines

I’ve used these for Continuous Deployment (CD) activities, like creating environments and deploying artifacts created during the Build Pipeline process to these environments. Azure Release Pipelines can be broken up into stages with lots of customized input and output conditions – for example, you could choose not to allow artifacts to be deployed to an environment until a manual approval process is completed. At the time of writing this, Release Pipelines cannot be saved as YAML.

What does this mean in practice?

Let’s make up a simple scenario.

A client has defined a website that they would like my team to develop, and would like to see a demonstration of progress to a small group of stakeholders at the end of each two-week iteration. They’ve also said that they care about good practice – site stability and security are important aspects of system quality. If the demonstration validates their concept at the end of the initial phase, they’ll consider opening the application up to a wider audience on an environment designed to handle a lot more user load.

Environments

The pattern I often use for promoting code through environments is:

Development machines:

  • Individual developers work on their own machine (which can be either physical or virtual) and push code from here to branches in the source code repository.
  • These branches are peer reviewed and, subject to passing that review, are merged into the master branch (which is my preference for this scenario – YMMV).

Integration environment (a.k.a. Development Environment, or Dev-Int):

  • If the Build Pipeline has completed successfully (e.g. code compiles and no tests break), then the Release Pipeline deploys the build artifacts to the Integration Environment (it’s critical that these artifacts are used all the way through the deployment process as these are the ones that have been tested).
  • This environment is pretty unstable – code is frequently pushed here. In my world, it’s most typically used by developers who want to check that their code works as expected somewhere other than their machine. It’s not really something that testers or demonstrators would want to use.
  • I also like to run vulnerability scanning software like OWASP ZAP regularly on this environment to highlight any security issues, or run a baseline accessibility check using something like Pa11y.

Test environment:

  • The same binary artifacts deployed to Integration (like a zipped up package of website files) are then deployed to the Test environment.
  • I’ve usually set up this environment for testers who use it for any manual testing processes. Obviously user journey testing is automated as much as possible, but sometimes manual testing is still required – e.g. for further accessibility testing.

Demonstration environment:

  • I only push to this environment when I’m pretty confident that the code does what I expect and I’m happy to present it to a room full of people.

And in this scenario, if the client wishes to open up to a wider audience, I’d usually recommend the following two environments:

Pre-production a.k.a. QA (Quality Assurance), or Staging

  • Traditionally security checks (e.g. penetration tests) are run on this environment first, as are final performance tests.
  • This is the last environment before Production, and the infrastructure should mirror the infrastructure on Production so that results of any tests here are indicative of behaviour on Production.

Production a.k.a. Live

  • This is the most stable environment, and where customers will spend most of their time when using the application.
  • Often there’ll also be a mirror of this environment for ‘Disaster Recovery’ (DR) purposes.

Obviously different teams will have different and more customized needs – for example, sometimes teams aren’t able to deploy more frequently than once every sprint. If an emergency bug fix is required, it’s useful to have a separate environment where the fix can be tested before production, without disrupting the development team’s deployment process.

Do we always need all of these environments?

The environments used depend on the user needs – there’s no strategy which works in all cases. For our simple fictitious case, I think we only need Integration, Testing and Demonstration.

Here’s a high level process using the Microsoft stack that I could use to help meet the client’s initial need (only deploying as far as a demonstration platform):

[Image: reference architecture diagram]

  • Developers build a website matching client needs, often pushing new code and features to a source code repository (I usually go with either GitHub or Azure DevOps Repos).
  • Code is compiled and tested using Azure DevOps Pipelines, and then the tested artifacts are deployed to:
    • The Integration Environment, then
    • The Testing Environment, and then
    • The Demonstration Environment.
  • Each one of these environments lives in its own Azure Resource Group with identical infrastructure (e.g. web farm, web application, database and monitoring capabilities). These are built using Azure Resource Manager (ARM) templates.

Using Azure Pipelines, I can create a Release Pipeline to build artifacts and deploy to the three environments above in three separate stages, as shown in the image below.

[Image: a Release Pipeline with separate stages for the Integration, Testing and Demonstration environments]

But as mentioned previously, a limitation is that this Release Pipeline doesn’t exist as code in my source code repository.

Fancy All New Multi-Stage Pipelines

Recently, though, a preview feature has been introduced in which Build Pipelines have been renamed to “Pipelines” and gained some new capabilities.

If you want to see these in your Azure DevOps instance, log in, head over to the menu in the top right, and select “Preview features”:

[Image: the Azure DevOps menu with the “Preview features” option]

In the dialog window that appears, turn on the “Multi-stage Pipelines” option, highlighted in the image below:

[Image: the “Multi-stage Pipelines” option in the Preview features dialog]

Now your DevOps pipeline menu will look like the one below – note how the “Builds” sub-menu item has been renamed to Pipelines:

[Image: the updated menu, with “Builds” renamed to “Pipelines”]

Now I’m able to use YAML not only to capture individual build steps, but also to package them up into stages. The image below shows how I’ve started to mirror the Release Pipeline process above using YAML – I’ve built and deployed to the Integration and Testing environments.

[Image: multi-stage pipeline build stages]

I’ve also shown a fork in the process where I can run my OWASP ZAP vulnerability scanning tool after the site has been deployed to Integration, at the same time as the Testing environment is being built and having artifacts deployed to it. The image below shows the tests that have failed and how they’re reported – I can select individual tests and add them as Bugs to Azure DevOps Boards.

[Image: failing OWASP ZAP tests reported in Azure DevOps]

Microsoft have supplied some example YAML to help developers get started:

  • A simple Build -> Test -> Staging -> Production scenario.
  • A scenario where one stage, on completion, triggers two stages, both of which are then required for the final stage.

It’s a huge process improvement to be able to have my website source code and tests as C#, my infrastructure code as ARM templates, and my pipeline code as YAML.

For example, if someone deleted the pipeline (either accidentally or deliberately), it’s not really a big deal – we can recreate everything again in a matter of minutes. Or if the pipeline was acting in an unexpected way, I could spin up a duplicate of the pipeline and debug it safely away from production.

Current Limitations

Multi-stage pipelines are a preview feature in Azure DevOps, and personally I wouldn’t risk this with every production application yet. One major missing feature is the ability to manually approve progression from one stage to another, though I understand this is on the team’s backlog.

Wrapping Up

I really like how everything can live in source code – my full build and release process, both CI and CD, are captured in human readable YAML. This is incredibly powerful – code, tests, infrastructure and the CI/CD pipeline can be created as a template and new projects can be spun up in minutes rather than days. Additionally, I’m able to create and tear down containers which cover some overall system quality testing aspects, for example using the OWASP ZAP container to scan for vulnerabilities on the integration environment website.

As I mentioned at the start of this post, I’ll be writing more over the coming weeks about the example scenario in this post – with topics such as:

  • writing a multi-stage pipeline in YAML to create resource groups to encapsulate resources for each environment;
  • how to deploy infrastructure using ARM templates in each of the stages;
  • how to deploy artifacts to the infrastructure created at each stage;
  • using the OWASP ZAP tool to scan for vulnerabilities in the integration website, and the YAML code to do this.

 

Azure, GIS, Leaflet, Security, Web Development

Getting started with Azure Maps, using Leaflet to display roads and satellite images, and comparing different browser behaviours

In this post I’m going to describe how to use the new(ish) Azure Maps service from Microsoft with the Leaflet JavaScript library. Azure Maps provides its own API for Geoservices, but I have an existing application that uses Leaflet, and I wanted to try out using the Azure Maps tiling services.

Rather than just replicating the example that already exists on the excellent Azure Maps Code Samples site, I’ll go a bit further:

  • I’ll show how to display both the tiles with roads and those with aerial images
  • I’ll show how to switch between the layers using a UI component on the map
  • I’ll show how Leaflet can identify your present location
  • And I’ll talk about my experiences of location accuracy in Chrome, Firefox and Edge browsers on my desktop machine.

As usual, I’ve made my code open source and posted it to GitHub here.

First, use your Azure account to get your map API Key

I won’t go into lots of detail about this part – Microsoft have documented the process very well here. In summary:

  • If you don’t have an Azure account, there are instructions here on how to create one.
  • Create a Maps account within the Azure Portal and get your API Key (instructions here).

Once you have set up a resource group in Azure to manage your mapping services, you’ll be able to track usage and errors through the portal – I’ve pasted graphs of my usage and recorded errors below.

[Image: usage and error graphs for the Maps account in the Azure Portal]

You’ll use this API Key to identify yourself to the Azure Maps tiling service. Azure Maps is not a free service – pricing information is here – although presently on the S0 tier there is an included free quantity of tiles and services.

API Key security is one area of Azure Maps that I would like to see enhanced – the API Key has to be rendered on the client somewhere in plain text and then passed back to the Maps API. Even with HTTPS, the API Key can easily be obtained by someone viewing the page source, or by using a tool to inspect outgoing requests.

Many other tiling services use CORS to restrict which domains can make requests, but:

  • Azure Maps doesn’t do this at the time of writing and
  • This isn’t real security anyway, because the Origin header can easily be modified (I know it’s a forbidden header name for a browser, but tools like curl can spoof the Origin). More discussion here and here.

So this isn’t a solved problem yet – I’d recommend you consider how you use your API Key very carefully and bear in mind that if you expose it on the internet you’re leaving your account open to abuse. There’s an open issue about this raised on GitHub and hopefully there will be an announcement soon.
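One mitigation worth considering in the meantime – and this is just a sketch of a general pattern, not anything specific to Azure Maps – is to keep the subscription key on your own server and proxy tile requests through a small backend endpoint, so the key never appears in client-side code. The sketch below assumes an ASP.NET Core backend; the route and class names are hypothetical.

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical tile proxy - Leaflet requests tiles from /api/tiles/{z}/{x}/{y}
// instead of calling atlas.microsoft.com directly with the subscription key.
[Route("api/tiles")]
public class TileProxyController : Controller
{
    private readonly IHttpClientFactory _clientFactory; // requires services.AddHttpClient() at startup
    private readonly string _subscriptionKey;

    public TileProxyController(IHttpClientFactory clientFactory, IConfiguration configuration)
    {
        _clientFactory = clientFactory;
        // the key lives in configuration (app settings, environment variable or Key Vault), not in the page
        _subscriptionKey = configuration["AzureMaps:SubscriptionKey"];
    }

    [HttpGet("{z}/{x}/{y}")]
    public async Task<IActionResult> GetRoadTile(int z, int x, int y)
    {
        var url = "https://atlas.microsoft.com/map/tile/png?api-version=1&layer=basic&style=main" +
                  $"&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={_subscriptionKey}";

        var client = _clientFactory.CreateClient();
        var tileBytes = await client.GetByteArrayAsync(url);

        return File(tileBytes, "image/png");
    }
}

The Leaflet tile layer would then point at your own endpoint (for example '/api/tiles/{z}/{x}/{y}'), and you can add whatever authentication or rate limiting you like to that endpoint.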

Next, set up your web page to use the Leaflet JS library

There’s a very helpful ‘getting started’ tutorial on the Leaflet website – I added the stylesheet and JavaScript to my webpage’s head using the code below.

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.1/dist/leaflet.css"
      integrity="sha512-Rksm5RenBEKSKFjgI3a41vrjkw4EVPlJ3+OiI65vTjIdo9brlAacEuKOiQ5OFh7cOI1bkDwLqdLw3Zg0cRJAAQ=="
      crossorigin="" />
 
<script src="https://unpkg.com/leaflet@1.3.1/dist/leaflet.js"
        integrity="sha512-/Nsx9X4HebavoBvEBuyp3I7od5tA0UzAxs+j83KgC8PU0kgB4XiK4Lfe4y4cgBtaRJQEIFCW+oC506aPT2L1zw=="
        crossorigin=""></script>

Now add the URLs to your JavaScript to access the tiling services

I’ve included some very simple JavaScript code below for accessing two Azure Maps services – the tiles which display roadmaps and also those which have satellite images.

function satelliteImageryUrl() {
    return "https://atlas.microsoft.com/map/imagery/png?api-version=1&style=satellite&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}
 
function roadMapTilesUrl() {
    return "https://atlas.microsoft.com/map/tile/png?api-version=1&layer=basic&style=main&TileFormat=pbf&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}

If you’re interested in reading more about these two tiling services, there’s more about the road map service here and more about the satellite image service here.

Now add the tiling layers to Leaflet and create the map

I’ve written a JavaScript function below which registers the two tiling layers (satellite and roads) with Leaflet. It also instantiates the map object, and attempts to identify the user’s location from the browser. Finally it registers a control which will appear on the map and list the available tiling services, allowing me to toggle between them on the fly.

var map;
 
function GetMap() {
    var subscriptionKey = '[[[**YOUR API KEY HERE**]]]';
 
    var satellite = L.tileLayer(satelliteImageryUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureSatelliteMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });
 
    var roads = L.tileLayer(roadMapTilesUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureRoadMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });
 
    // instantiate the map object and display the 'roads' layer
    map = L.map('myMap', { layers: [roads] });
 
    // attempt to identify the user's location from the browser
    map.locate({ setView: true, enableHighAccuracy: true });
    map.on('locationfound', onLocationFound);
 
    // create an array of the tiling base layers and their 'friendly' names
    var baseMaps = {
        "Azure Satellite Imagery": satellite,
        "Azure Roads": roads
    };
 
    // add a control to map (top-right by default) allowing the user to toggle the layer
    L.control.layers(baseMaps, null, { collapsed: false }).addTo(map);
}

Finally, I’ve added a div to my page which specifies the size of the map and gives it the id “myMap” (which I’ve used in the JavaScript above when instantiating the map object), and I call the GetMap() method when the page loads.

<body onload="GetMap()">
    <div id="myMap" style="position:relative;width:900px;height:600px;"></div>
</body>

If the browser’s geolocation services have identified my location, I’ll also be given an accuracy in meters – the JavaScript below allows me to draw a circle on my map to indicate where the browser believes my location to be.

map.on('locationfound', onLocationFound);
 
function onLocationFound(e) {
    var radius = e.accuracy / 2;
 
    L.marker(e.latlng)
        .addTo(map)
        .bindPopup("You are within " + radius + " meters from this point")
        .openPopup();
 
    L.circle(e.latlng, radius).addTo(map);
}

And I’ve taken some screenshots of the results below – first of all the results in the MS Edge browser showing roads and landmarks near my location…

[Image: road map tiles showing roads and landmarks near my location, in Microsoft Edge]

…and swapping to the satellite imagery using the control at the top right of the map.

[Image: the satellite imagery layer]

Results in Firefox and Chrome

When I ran this in Firefox and Chrome, I found that my location was identified with much less accuracy. I know both of these browsers use the Google GeoLocation API and MS Edge uses the Windows Location API so this might account for the difference on my machine (Windows 10), but I’d need to do more experimentation to better understand. Obviously my laptop machine doesn’t have GPS hardware, so testing on a mobile phone probably would give very different results.

[Image: the same location identified with much less accuracy in Firefox and Chrome]

Wrapping up

We’ve seen how to use the Azure Maps tiling services with the Leaflet JS library to create a very basic web application which displays both road and landmark data and also satellite aerial imagery. It seems to me that MS Edge is able to identify my location much more accurately than Firefox or Chrome on my Windows 10 desktop machine (within a 75m radius on Edge, and over a 3.114km radius on Firefox and Chrome) – however, your mileage may vary.

Finally, as I emphasised above, I have concerns about the security of a production application using an API Key in plain text inside my JavaScript, and hopefully Microsoft will deploy a solution with improved security soon.


About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

Web Development

Benchmarking your webpages for responsiveness, speed and security

Everyone knows that a website needs to be responsive, fast and secure – but how are your sites being tested? Are you doing it first, or leaving it to your users?

My next few posts are going to be a bit different to the ones I’ve done previously – I’m going to suggest a few links that might give you a shortcut to solving some common issues. In this post I’ll share some links that enable you to benchmark your webpages. It’s always better for you to find the kinds of problems these sites highlight – because if you don’t find them first, your users will.

I’ve often found these kind of sites to be really useful – occasionally they’ll flag something I’ve forgotten or not thought about, and I’ve found them most helpful when I’m auditing someone else’s site for snags.

Responsive Behaviour

Google have created an application which rates your site on how it responds on different device types (mobile, tablet, desktop).

https://testmysite.thinkwithgoogle.com/

[Image: testmysite.thinkwithgoogle.com results]

‘Mobile First’ is more than just testing for responsiveness – ideally sites today should be built to be Progressive Web Apps. If you don’t know what a Progressive Web App is, then check out the Wikipedia entry.

Google have created a plugin for their Chrome browser called “Lighthouse” – this will test your website to see if it meets the basic criteria of a progressive web app. This isn’t strictly a benchmarking site (although the plugin runs in a browser) – it’ll give you a score out of 100 for how your site rates against their best practices.

https://developers.google.com/web/updates/2016/12/lighthouse-dbw

[Image: a Lighthouse report]

Build for speed

There are a few really good benchmarking sites to allow you to see if there’s any immediate room for improvement on your pages. If you’ve forgotten to compress your JavaScript or CSS, or if you’re not gzip-ing your response stream, these sites will spot it.

PageSpeed from Google: https://developers.google.com/speed/pagespeed/insights/

[Image: PageSpeed Insights results]

GTmetrix: https://gtmetrix.com/

[Image: GTmetrix results]

Pingdom: https://tools.pingdom.com/

[Image: Pingdom results]

What about CSS?

I hate to say it, but sometimes CSS is treated like a poor relation when it comes to page optimisation. Just because we’ve minified the CSS and gzip-ed the stream doesn’t mean we’ve optimised it – there are often lots of CSS classes which aren’t used, or which contain errors – the links below help you see if your CSS is bloated or broken.

Parse CSS gives a nice minimal assessment and summary of the styles available to the page being tested, and is really useful as it provides a preview of these styles. The site doesn’t make any judgement on whether the CSS is good or bad – but you might find that a vast amount of the styles available to you actually aren’t needed.

 

[Image: Parse CSS results, previewing the styles available to the page]

If you want to read more about how to use this information to improve your site, check out CSS Purge at http://www.csspurge.com/. This gives a good explanation of why CSS bloat is damaging your site performance, and how to improve it.

Security checks

I’ve covered some of this in a previous post, but I’ll include the benchmarking sites that I like here again for completeness.

SecurityHeaders.io – this is a simple interface which checks whether your site’s headers could be tuned to be more secure against attack. It really helpfully links to explanations of jargon, and explains why it matters in plain English.

https://securityheaders.io

[Image: a securityheaders.io report]

High Tech Bridge also provide a site which runs a similar test.

https://www.htbridge.com/websec/

[Image: the High Tech Bridge websec report]

Wrapping Up

I hope these links and tools are useful – they’ll maybe give you a new way of looking at your site, and might point you in the direction of making it better.

 

.net core, C# tip, Web Development

Creating a RESTful Web API template in .NET Core 1.1 – Part #2: Improving search

Previously I’ve written about how to improve the default Web API template in .NET Core 1.1. Presently it’s really basic – we have methods for PUT, POST, DELETE and a couple of GET methods – one which returns all objects, and one which returns an object matching the integer ID passed to the method.

You probably know that in a real application, searching for objects through an API is a lot more complex:

  • Presently the default template just returns simple strings – but usually we’re going to return more complex objects, which we probably want to pass back in a structured format.
  • It would be nice if we could provide some level of validation feedback to the user if they do something wrong – for example, we might be able to support the following RESTful call:
/api/Values/123   // valid - searches by unique integer identifier

But we can’t support the following call, and should return something useful instead:

/api/Values/objectname   // invalid - several objects might have the same name
  • Usually we need more fine-grained control than “get one thing” or “get all the things”.

So let’s look at some code which addresses each of these to improve our template.

Format the results using JSON.net

I want my default template to be a useful starting point – not just to me, but also to other members on my team.

  • Most of the time we won’t be returning a simple string – so I’ve constructed a simple anonymous object to return.
  • I prefer to use JSON.net from Newtonsoft, which is pretty much the industry standard at this point, rather than waste time with a bunch of answers on StackOverflow describing other (slower and more complex) ways of doing it.
// GET api/values/5
[HttpGet("{id}")]
public IActionResult Get(int id)
{
    var customObject = new { id = id, name = "name" };
 
    var formattedCustomObject = JsonConvert.SerializeObject(customObject, Formatting.Indented);
 
    return Ok(formattedCustomObject);
}

Easy – and when we run our service and browse to api/values/5, we get the following JSON response.

{
  "id": 5,
  "name": "name"
}

Help the user when they make a common mistake

I’ve frequently made a mistake when manually modifying a GET call – I search by name when I should search by id.

/api/Values/123   // correct - searches by unique integer identifier

/api/Values/objectname   // wrong - this is the mistake I sometimes make

/api/Values?name=objectname   // correct - this is what I should have done

So what result do we presently get when I make this mistake? I’d expect some kind of error message…but actually we get this:

{
  "id": 0,
  "name": "name"
}

OK – our template is somewhat contrived, but somehow the text “objectname” has been converted to a zero. What’s happened?

Our GET action has tried to interpret the value “objectname” as an integer, because there’s only one GET action which accepts one parameter, and that parameter is an integer. Model binding can’t convert the string to an int, so the id parameter falls back to its default value of zero, and out comes our incorrect and unhelpful result.

But with .NET Core we can fix this – route templates are a pretty cool and configurable feature, and can do some useful type validation for us using Route Constraints. You can read more about these here – they allow us to create two different GET actions in our controller: one for when we’re passed an integer parameter, and one for when we’re passed a string.

First of all, let’s handle the correct case – passing an integer. I’ll rename my action to be “GetById” to be more explicit about the action’s purpose.

[HttpGet("{id:int:min(1)}", Name = "GetById")]
public IActionResult GetById([Required]int id)
{
    try
    {
        // Dummy search result - this would normally be replaced by another service call, perhaps to a database
        var customObject = new { id = id, name = "name" };
 
        var formattedCustomObject = JsonConvert.SerializeObject(customObject, Formatting.Indented);
 
        return Ok(formattedCustomObject);
    }
    catch (KeyNotFoundException)
    {
        Response.Headers.Add("x-status-reason"$"No resource was found with the unique identifier '{id}'.");
        return NotFound();
    }
}

In addition to the renamed action, you can see that the route constraint has changed to the code shown below:

[HttpGet("{id:int:min(1)}", Name = "GetById")]

So for the parameter named “id”, we’ve specified that it needs to be an integer, with a minimum value of 1. If the data doesn’t match those criteria…then this action won’t be recognised or called.

Now let’s handle the situation where someone tries to GET a text value – we’ll call this method “GetByName”.

[HttpGet("{name}")]
public IActionResult GetByName([Required]string name)
{
    Response.Headers.Add("x-status-reason"$"The value '{name}' is not recognised as a valid integer to uniquely identify a resource.");
    return BadRequest();
}

There’s a few things worth calling out here:

  • This method only returns a BadRequest() – so when we pass in a string like “objectname”, we get a 400 error.
  • Additionally, I pass an error message through the headers which I hope helpfully describes the mistake and how to correct it.
  • Finally, back in the GetById action, I’ve included a catch clause for when the id we search for hasn’t been found – in this case I return a helpful message in the response headers, and the HTTP status code 404 using the NotFound() method.

A more traditional method of error handling would be to check for all the errors in one method – but there’s a couple of reasons why I prefer to have two methods:

  • If the parameter to a single GET method is an integer, we will lose the string value passed by the client and have it replaced by a simple zero. I’d like to pass that string value back in an error message.
  • I think using two methods – one for the happy path, one for the exception – is more consistent with the single responsibility principle.

Allow custom search queries

So say we want to do something a bit more sophisticated than just search by Id or return all results – say we want to search our repository of data by a field like “name”.

The first step is to create a custom SearchOptions class.

public class SearchOptions
{
    public string Name { get; set; }
}

It’s easy to add custom properties to this class – say you want to search for people whose ages are between two limits, so you’d add properties like the ones below:

public int? LowerAgeLimit { get; set; }
 
public int? UpperAgeLimit { get; set; }

How do we use this “SearchOptions” class?

I’d expect a simple RESTful search query to look something like this:

/api/Values?name=objectname&lowerAgeLimit=20

If a GET method takes SearchOptions as a parameter, then thanks to the magic of MVC model binding, the searchOptions object will automatically be populated with the Name and LowerAgeLimit values specified in the querystring.

The method below shows what I mean. You can see that I’ve simply created a list of anonymous objects in the method to stand in for the search results – we’d obviously replace this with some kind of service call which accepts searchOptions as a parameter and uses that information to get some real search results.

[HttpGet]
public IActionResult Get([FromQuery, Required]SearchOptions searchOptions)
{
    var searchResults = new[]
                            {
                                new { id = 1, Name = "value 1" },
                                new { id = 2, Name = "value 2" }
                            }.ToList();
 
    var formattedResult = JsonConvert.SerializeObject(searchResults, Formatting.Indented);
 
    Response.Headers.Add("x-total-count", searchResults.Count.ToString());
 
    return Ok(formattedResult);
}

I’ve structured the results as JSON like I’ve shown previously, but one more thing that I’ve done is add another header which contains the total number of results – I’ve done this to present some helpful meta-information to the service consumer.
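To make that a little more concrete, here’s a hedged sketch of the kind of service the action above might call, applying the SearchOptions filters to a data source. The Person type and the in-memory list are purely hypothetical stand-ins for a real repository or database query, and this assumes the LowerAgeLimit/UpperAgeLimit properties from earlier have been added to SearchOptions.

using System.Collections.Generic;
using System.Linq;

// Hypothetical search service - in a real application the filtering would
// be pushed down to the database rather than applied to an in-memory list.
public class PersonSearchService
{
    private static readonly List<Person> People = new List<Person>
    {
        new Person { Id = 1, Name = "value 1", Age = 25 },
        new Person { Id = 2, Name = "value 2", Age = 40 }
    };

    public List<Person> Search(SearchOptions searchOptions)
    {
        IEnumerable<Person> results = People;

        if (!string.IsNullOrEmpty(searchOptions.Name))
        {
            results = results.Where(p => p.Name == searchOptions.Name);
        }

        if (searchOptions.LowerAgeLimit.HasValue)
        {
            results = results.Where(p => p.Age >= searchOptions.LowerAgeLimit.Value);
        }

        if (searchOptions.UpperAgeLimit.HasValue)
        {
            results = results.Where(p => p.Age <= searchOptions.UpperAgeLimit.Value);
        }

        return results.ToList();
    }
}

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
}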

Wrapping Up

So we’ve covered quite a lot of ground in this post – previously we ended with a very simple controller which returned HTTP status codes, but this time we have something a little more advanced:

  • We return complex objects using JSON;
  • We validate data passed to the GET method, passing back a 400 code and error message if the service is being used incorrectly;
  • We provide a mechanism to allow for more complex searching and pass some useful meta-data.

I’ve included the complete class below.

using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
 
namespace MyWebAPI.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IActionResult Get([FromQuery, Required]SearchOptions searchOptions)
        {
            // Dummy search results - this would normally be replaced by another service call, perhaps to a database
            var searchResults = new[]{
                                        new{ id=1, Name="value 1" },
                                        new{ id=2, Name="value 2"}
                                     }.ToList();
 
            var formattedResult = JsonConvert.SerializeObject(searchResults, Formatting.Indented);
 
            Response.Headers.Add("x-total-count", searchResults.Count.ToString());
 
            return Ok(formattedResult);
        }
 
        // GET api/values/5
        [HttpGet("{id:int:min(1)}", Name = "GetById")]
        public IActionResult GetById([Required]int id)
        {
            try
            {
                // Dummy search result - this would normally be replaced by another service call, perhaps to a database
                var customObject = new { id = id, name = "name" };
 
                var formattedCustomObject = JsonConvert.SerializeObject(customObject, Formatting.Indented);
 
                return Ok(formattedCustomObject);
            }
            catch (KeyNotFoundException)
            {
                Response.Headers.Add("x-status-reason"$"No resource was found with the unique identifier '{id}'.");
                return NotFound();
            }
        }
 
        [HttpGet("{name}")]
        public IActionResult GetByName([Required]string name)
        {
            Response.Headers.Add("x-status-reason"$"The value '{name}' is not recognised as a valid integer to uniquely identify a resource.");
            return BadRequest();
        }
 
        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]string value)
        {
            return Created($"api/Values/{value}", value);
        }
 
        // PUT api/values/5
        [HttpPut("{id}")]
        public IActionResult Put(int id, [FromBody]string value)
        {
            return Accepted(value);
        }
 
        // DELETE api/values/5
        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            return NoContent();
        }
    }
 
    // This should go into its own separate file - included here for simplicity
    public class SearchOptions
    {
        public string Name { get; set; }
    }
}

Obviously this is still a template – I’m aiming to include the absolute minimum amount of code to demonstrate how to do common useful things. Hopefully this is helpful to anyone reading this.

.net, Accessibility, Non-functional Requirements, Visual Studio, Visual Studio Plugin, Web Development

How to use the Web Accessibility Checker for Visual Studio to help conform to accessibility guidelines

I’ve previously blogged about accessibility a few times, and I’d love to find a good way to identify accessibility issues from my development environment. So I was really interested to see that Mads Kristensen from Microsoft recently released the Web Accessibility Checker for Visual Studio 2015. This extension uses the aXe-core library for analysing code in Visual Studio.

The Visual Studio Gallery gives some good instructions on how to install and use this extension. It’s a pretty straightforward install – once you run your website, a list of non-conformances will appear in the Error List in VS 2015 (to see the Error List, go to the View Menu and select Error List from there).

Obviously this can’t identify every accessibility problem on your site, so fixing all the errors on this list isn’t going to guarantee your website is accessible. But one of the manifesto items on aXe-core’s GitHub page states that the tool aims to report zero false positives – so if aXe-core is raising an error, it’s worth investigating.

Let’s look at an example.

How does it report errors?

I’ve written some HTML code and pasted it below… OK, it’s some pretty ropey HTML code, with some really obvious accessibility issues.

<!DOCTYPE html>
<html>
<body>
    <form>
        This is simple text on a page.
 
        Here's a picture:
        <br />
        <img src="/image.png" />
        <br />
        And here's a button:
        <br />
        <button></button>
    </form>
</body>
</html>

 

Let’s see what the Web Accessibility Checker picks up:

[Image: the accessibility errors reported in the Visual Studio Error List]

Four errors are reported:

  • No language attribute is specified in the HTML element. This is pretty easy to fix – I’ve blogged about this before;
  • The <button> element has no text inside it;
  • The page has no <title> element.
  • The image does not have an alternative text attribute.

Note – these errors are first reported at application runtime; don’t expect to see them while you’re writing your code, or just after compiling it.

If you want to discover more about any of these errors, the Error List has a column called “Code”, and clicking the text will take you to an explanation of what the problem is.

In addition, you can just double click on the description, and the VS editor focus will move to the line of code where the issue is.

I’ve corrected some of the errors – why are they still in the Error List?

I found that the errors stayed in the list, even after starting to fix the issues. In order to clear the errors away, I found that I needed to right click on the Error List, and from the context menu select “Clear All Accessibility Errors”.

[Image: the “Clear All Accessibility Errors” context menu option]

When I hit refresh in my browser, I was able to see the remaining issues without it showing the ones that I had fixed.

What more does this give me when compared to some of the existing accessibility tools?

Previously I’ve used tools like the HTML_CodeSniffer bookmarklet, which also report accessibility errors.

[Image: HTML_CodeSniffer bookmarklet results]

This is a great tool, but it will only point to the issues on the web page – the Web Accessibility Checker in VS2015 has the advantage of taking your cursor straight to the line of source code with the issue.

Conclusion

Obviously you can’t completely test whether a website is accessible using automated tools. But you can definitely use tools to check if certain rules are being adhered to in your code. Tools like the Web Accessibility Checker for VS2015 help you identify and locate accessibility issues in your code – and since it’s free, there’s no reason not to use it in your web application development process today.

.net, C# tip, IIS, MVC, Non-functional Requirements, Performance, Web Development

More performance tips for .NET websites which access data

I recently wrote about improving the performance of a website that accesses a SQL Server database using Entity Framework, and I wanted to follow up with a few more thoughts on optimising performance in an MVC website written in .NET. I’m coming towards the end of a project now where my team built an MVC 5 site and accessed a database using Entity Framework. The engineers were all pretty experienced – scarred survivors of previous projects – so we were able to implement a lot of non-functional improvements during sprints as we went along. As our site was data driven, looking at that part was obviously important, but it wasn’t the only thing we looked at. I’ve listed a few of the other things we did during the project – some of these were one-off settings, and others were things we checked for regularly to make sure problems weren’t creeping in.

Compress, compress, compress

GZip your content! This makes a huge difference to your page size, and therefore to the time it takes to render your page. I’ve written about how to do this for a .NET site and test that it’s working here. Do it once at the start of your project, and you can forget about it after that (except occasionally when you should check to make sure someone hasn’t switched it off!)
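The post linked above covers the configuration side of this; purely as an illustration of what’s going on, here’s a hedged sketch of an MVC action filter that applies GZip compression at application level – this isn’t necessarily how the linked post does it, and the attribute name is hypothetical.

using System.IO.Compression;
using System.Web.Mvc;

// Compresses the response with GZip when the client says it can accept it.
public class GZipCompressionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var request = filterContext.HttpContext.Request;
        var response = filterContext.HttpContext.Response;

        var acceptEncoding = request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(acceptEncoding) && acceptEncoding.Contains("gzip"))
        {
            response.AppendHeader("Content-Encoding", "gzip");
            response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
        }
    }
}

Whichever approach you use, check the response headers in your browser’s developer tools to confirm that Content-Encoding: gzip is actually being returned.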

Check your SQL queries, tune them, and look out for N+1 problems

As you might have guessed from one of my previous posts, we were very aware of how a few poorly tuned queries or some rogue N+1 problems could make a site grind to a halt once there were more than a few users. We tested with sample data which was the “correct size” – meaning that it was comparable with the projected size of the production database. This gave us a lot of confidence that the indexes we created in our database were relevant, and that our automated integration tests would highlight real N+1 problems. If you don’t have “real-sized data” – as often happens when a development database just has a few sample rows – then you can’t expect to discover real performance issues early.

Aside: Real-sized data doesn’t have to mean real data – anonymised/fictitious data is just as good for performance analysis (and obviously way better from a security perspective).
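To illustrate the shape of the N+1 problem we were watching for, here’s a hedged Entity Framework sketch – the Order/OrderLine entities and the context are hypothetical.

using System;
using System.Collections.Generic;
using System.Data.Entity; // Entity Framework 6 - brings in the lambda Include() extension
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public virtual ICollection<OrderLine> Lines { get; set; } // virtual enables lazy loading
}

public class OrderLine
{
    public int Id { get; set; }
    public int OrderId { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class NPlusOneExample
{
    public static void Run()
    {
        using (var context = new ShopContext())
        {
            // N+1: one query to load the orders, then one *additional* query per order
            // the first time each lazy-loaded Lines collection is touched.
            var orders = context.Orders.ToList();
            foreach (var order in orders)
            {
                Console.WriteLine(order.Lines.Count);
            }

            // Better: eager-load the related rows so everything comes back in a single query.
            var ordersWithLines = context.Orders.Include(o => o.Lines).ToList();
        }
    }
}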

Use MiniProfiler to find other ADO.NET bottlenecks

Just use it. Seriously, it’s so easy – read about it here. There’s even a NuGet package to make it even easier to include in your project. It automatically profiles ADO.NET calls, and allows you to profile individual parts of your application with a couple of simple lines of code (though I prefer to use this during debugging, rather than pushing those profile customisations into the codebase). It’s great for identifying slow parts of the site, and particularly good at identifying repeated queries (which is a giveaway symptom of the N+1 problem).
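For reference, once the package is installed and wired up (the startup code differs between MiniProfiler versions, so check the docs for that part), profiling an individual block looks roughly like the sketch below – LoadSearchResults() is a hypothetical placeholder.

using System.Web.Mvc;
using StackExchange.Profiling;

public class SearchController : Controller
{
    public ActionResult Index()
    {
        // MiniProfiler.Current is the profiler for the current request;
        // ADO.NET calls are profiled automatically once the profiler is wired up.
        var profiler = MiniProfiler.Current;

        using (profiler.Step("Load search results"))
        {
            // everything inside this block appears as a named step in the MiniProfiler popup
            LoadSearchResults();
        }

        return View();
    }

    private void LoadSearchResults()
    {
        // hypothetical placeholder for the slow work you want to measure
    }
}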

Reduce page bloat by optimising your images

We didn’t have many images in the site – but they were still worth checking. We used the Firefox Web Developer Toolbar plugin, and the “View Document Size” item from the “Information” menu. This gave us a detailed breakdown of all the images on the page being tested – and highlighted a couple of SVGs which had crept in unexpectedly. These were big files, and they appeared in the site’s header, so every page would have been affected. They didn’t need to be SVGs, and it was a quick fix to change them to GIFs, which made every page served a lot smaller.

For PNGs, you can use the PNGOut utility to optimise images – and you can convert GIFs to PNG as well using this tool.

For JPEGs, read about progressive rendering here. This is something where your mileage may vary – I’ll probably write more about how to do this in Windows at a future time.

Minifying CSS and JavaScript

The Web Developer Toolbar saved us in another way – it identified a few issues with JavaScript and CSS files. We were using the built-in Bundling feature of MVC to combine and minify our included scripts – I’ve written about how to do this here – and initially it looked like everything had worked. However, when we looked at the document size using the Web Developer Toolbar, we saw that some documents weren’t being minified. I wrote about the issue and solution here, but the main point was that the Bundling feature was failing silently, causing the overall page size to increase very significantly. So remember to check that bundling/minifying is actually working – just because you have it enabled doesn’t mean it’s being done correctly!
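For context, the bundle registration itself looks something like the sketch below (the bundle names and file paths are just examples). The EnableOptimizations flag is handy when you’re verifying locally that bundling and minification are really happening – the real check, though, is viewing the rendered page and confirming that a single minified, versioned URL is served for each bundle.

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        // Bundling/minification normally only happens when compilation debug="false" in web.config;
        // forcing it on here makes it easy to check the behaviour while developing.
        BundleTable.EnableOptimizations = true;
    }
}

The views then reference the bundles with @Scripts.Render("~/bundles/site") and @Styles.Render("~/Content/css").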

Remember to put CSS at the top of your page, and include JavaScript files at the bottom.

Check for duplicated scripts and remove them

We switched off bundling and minification to see all the scripts being downloaded, and noticed that we had a couple of separate entries for the jQuery library, and also for some jQuery UI files. These were big files, and downloading them once is painful enough, never mind unnecessarily doing it again every time. It’s really worth checking to make sure you’re not doing this – not just for performance reasons, but because if you find this is happening, it’s also a sign that there may be an underlying problem in your codebase. Finding it early gives you a chance to fix it.

Do you really need that 3rd party script?

We worked hard to make sure that we weren’t including libraries just for the sake of it. There might be some cool UI feature which is super simple to implement by just including that 3rd party library… but every one of those 3rd party libraries adds to your page size. Be smart about what you include.

Tools like jQuery UI even allow you to customise your script to be exactly as big or as small as you need it to be.

Is your backup schedule causing your site to slow down?

I witnessed this on a previous project – one of our team had scheduled the daily database backup to happen after we went home… leading some of our users in a later time zone to see a performance deterioration for about half an hour at the same time every day. Rescheduling the daily backup to later in the day caused us no problems and removed a significant problem for our users.

Is someone else’s backup schedule causing your site to slow down?

There’s a corollary to the previous point – if you’re seeing a mysterious performance deterioration at the same time every day, and you’re absolutely sure it’s not something that you or your users are doing, check whether your site is on shared hosting. When I contacted our hosts and requested that our company VMs be moved onto a different SAN, it miraculously cleared up a long-standing performance issue.

Summary

There are a few tips here which really helped us keep our pages feeling fast to our users (and some other tips that I’ve picked up over the years). We didn’t do all of this at the end of the project – it was something we focussed on all the way through. It’s really important to make sure you’re checking these things during sprints, and to make them part of your Definition of Done if possible.

Non-functional Requirements, Security, Web Development

Use https://securityheaders.io to check your site’s header security in an instant

A while back I posted an article on how to improve the security of your site by configuring headers in IIS.

I thought I’d follow up on this with a quick post about a fantastic utility online – https://securityheaders.io/.

Plug your website URL into this site, and you’ll immediately get a report on how good your site’s headers are and what you can do to tighten things up. The report is easy to understand, and every bit of information – whether that’s missing headers, or headers configured insecurely – links to the site creator’s blog explaining what it means in great detail.

Sadly my blog – which is all managed by WordPress.com – comes out with an E rating. How embarrassing…one day I will find the time to host all this on my own domain.

Final hint – as you might expect, if you put https://securityheaders.io into the site, you’ll see what an A+ report looks like!