.net core, Azure, C# tip, Clean Code, Cloud architecture, Security

Simplifying Azure Key Vault and .NET Core Web App (includes NuGet package)

In my previous post I wrote about securing my application secrets using Azure Key Vault, and in this post I’m going to write about how to simplify the code that a .NET Core web app needs to use the Key Vault.

I previously went into a bit of detail about how to create a Key Vault and add a secret to that vault, and then add a Managed Service Identity to a web app. At the end of the post, I showed some C# code about how to access a secret inside a controller action.

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        var secret = await keyVaultClient.GetSecretAsync("https://mywebsitesecret.vault.azure.net/secrets/TheSecret").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}

While this works as an example of how to use the Key Vault in a .NET Core MVC application, it’s not amazingly pretty code – for any serious application, I wouldn’t want all of this code in my controller.

I think it would be more logical to access my secrets at the time my web application starts up, and put them into the Configuration for the app. Therefore if I need them later, I can just inject an IConfiguration object into my class, and use that to get the secret values.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;

namespace MyWebApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }
 
        private static IWebHost BuildWebHost(string[] args)
        {
            return WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration(builder =>
                {
                    var azureServiceTokenProvider = new AzureServiceTokenProvider();
                    var keyVaultClient =
                        new KeyVaultClient(
                            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
                    builder.AddAzureKeyVault("https://mywebsitesecret.vault.azure.net/",
                            keyVaultClient, new DefaultKeyVaultSecretManager());
                })
                .UseStartup<Startup>()
                .Build();
        }
    }
}

But as you can see above, the code I need to add here to build my web host is not very clear either – it seemed a shame to lose all the good work done in .NET Core 2 to simplify this class.

So I’ve created a NuGet package to allow me to have a simpler and cleaner interface – it’s uploaded to the NuGet repository here, and I’ve open sourced the code on GitHub here.

As usual you can install pretty easily from the command-line:

Install-Package Kodiak.Azure.WebHostExtension -prerelease

Now my BuildWebHost method looks much cleaner – I can just add the fluent extension AddAzureKeyVaultSecretsToConfiguration and pass in the URL of the vault.

private static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .AddAzureKeyVaultSecretsToConfiguration("https://myvaultname.vault.azure.net")
        .UseStartup<Startup>()
        .Build();
}
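For anyone curious how such a fluent extension might be put together, below is a minimal sketch based only on the ConfigureAppConfiguration boilerplate shown earlier – the namespace and class name are my assumptions here, not necessarily what the actual package source on GitHub uses.

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;

namespace Kodiak.Azure.WebHostExtension
{
    public static class WebHostBuilderExtensions
    {
        // Wraps the Key Vault configuration boilerplate in a single fluent call.
        public static IWebHostBuilder AddAzureKeyVaultSecretsToConfiguration(
            this IWebHostBuilder hostBuilder, string vaultUrl)
        {
            return hostBuilder.ConfigureAppConfiguration(builder =>
            {
                // Uses the Managed Service Identity - no client id or secret in code.
                var tokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(
                    new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

                // Pulls every secret from the vault into the app's configuration.
                builder.AddAzureKeyVault(vaultUrl, keyVaultClient, new DefaultKeyVaultSecretManager());
            });
        }
    }
}
```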

I think this is a more elegant implementation, and now if I need to access the secret inside my controller’s action, I can use the cleaner code below.

using System.Diagnostics;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
 
namespace MyWebApplication.Controllers
{
    public class HomeController : Controller
    {
        private readonly IConfiguration _configuration;
 
        public HomeController(IConfiguration configuration)
        {
            _configuration = configuration;
        }
 
        public IActionResult Index()
        {
            ViewBag.Secret = _configuration["MySecret"];
 
            return View();
        }
    }
}
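One behaviour of the underlying Azure Key Vault configuration provider worth knowing about (this comes from the DefaultKeyVaultSecretManager, not anything in my package): Key Vault secret names can’t contain colons, so a double dash in a secret name is translated into the configuration section separator. The secret name below is a hypothetical example to illustrate the mapping.

```csharp
// Assuming a Key Vault secret named "ConnectionStrings--Default" exists,
// the configuration provider exposes it under the usual hierarchical key:
var connectionString = _configuration["ConnectionStrings:Default"];
```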

Summing up

Azure Key Vault (AKV) is a useful tool to help keep production application secrets secure, although like any tool it’s possible to mis-use it, so don’t assume that just because you’re using AKV that your secrets are secured – always remember to examine threat vectors and impacts.

There’s a lot of documentation on the internet about AKV, and a lot of it recommends using the client Id and secrets in your code – I personally don’t like this because it’s always risky to have any kind of secret in your code. The last couple of posts that I’ve written have been about how to use an Azure Managed Service Identity with your application to avoid secrets in your code, and how to simplify the C# code you might use in your application to access AKV secrets.

.net core, C# tip, MVC, Non-functional Requirements, Performance

Creating a RESTful Web API template in .NET Core 1.1 – Part #3: Improving the performance by using compression

One of the simplest and most effective improvements you can make to your website or web service is to compress the stream of data sent from the server. With .NET Core 1.1, it’s really simple to set this up – I’ve decided to include this in my template project, but the instructions below will work for any .NET Core MVC or Web API project.

Only really ancient browsers are going to have problems with gzip – I’m pretty happy to switch it on by default.

.NET Core 1.1 adds compression to the ASP.NET HTTP pipeline using some middleware in the Microsoft.AspNetCore.ResponseCompression package. Let’s look at how to add this to our .NET Core Web API project.

Step 1: Add the Microsoft.AspNetCore.ResponseCompression package

There are a few different ways to do this – I prefer to add packages from within PowerShell. From within Visual Studio (with my project open), I open the Package Manager Console and run:

Install-Package Microsoft.AspNetCore.ResponseCompression

(But it’s obviously possible to do this from within the NuGet package manager UI as well)

This will add the package to the Web API project, and you can see this in the project.json file (partially shown below).

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.1.0",
    "Microsoft.AspNetCore.Routing": "1.1.0",
    "Microsoft.AspNetCore.Server.IISIntegration": "1.1.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.1.0",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.1.0",
    "Microsoft.Extensions.Configuration.Json": "1.1.0",
    "Microsoft.Extensions.Logging": "1.1.0",
    "Microsoft.Extensions.Logging.Console": "1.1.0",
    "Microsoft.Extensions.Logging.Debug": "1.1.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.1.0",
    "Microsoft.AspNetCore.ResponseCompression": "1.0.0"
  },
  ...

Step 2: Update and configure services in the project Startup.cs file

We now just need to add a couple of lines to the Startup.cs project file, which will:

  • Add the services available to the runtime container, and
  • Use the services in the HTTP pipeline at runtime.

The lines I added are the services.AddResponseCompression() and app.UseResponseCompression() calls in the code below.

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
        Configuration = builder.Build();
    }
 
    public IConfigurationRoot Configuration { get; }
 
    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression();
 
        // Add framework services.
        services.AddMvc();
    }
 
    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        app.UseResponseCompression();
 
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();
 
        app.UseMvc();
    }
}

Now when I call my web service, all responses are zipped by default.

We can prove this by looking at the headers sent with the response – I’ve pasted a screenshot of the headers sent back when I call a GET method in my Web API service. There is a header named “Content-Encoding” which has the value “gzip” – this signals that the response has been zipped.

screenshot-1480371947
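If you’d rather check from code than from browser developer tools, a quick console sketch with HttpClient looks like the example below – the localhost URL is just a placeholder for wherever your service happens to be running.

```csharp
using System;
using System.Net.Http;

class CompressionCheck
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Ask for gzip explicitly - browsers do this automatically.
            client.DefaultRequestHeaders.AcceptEncoding.ParseAdd("gzip");

            // Placeholder URL - point this at your own running service.
            var response = client.GetAsync("http://localhost:5000/api/values").Result;

            // With response compression switched on, this prints "gzip".
            Console.WriteLine(string.Join(", ", response.Content.Headers.ContentEncoding));
        }
    }
}
```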

Wrapping up

This is a really easy way to improve the performance of your website or your web service – this is one of the first things I configure when starting a new project.

Continue reading

.net core, C# tip, Web Development

Creating a RESTful Web API template in .NET Core 1.1 – Part #2: Improving search

Previously I’ve written about how to improve the default Web API template in .NET Core 1.1. At present it’s really basic – we have methods for PUT, POST, DELETE and a couple of GET methods – one which returns all objects, and one which returns the object matching the integer ID passed to the method.

You probably know that in a real application, searching for objects through an API is a lot more complex:

  • Presently the default template just returns simple strings – but usually we’re going to return more complex objects, which we probably want to pass back in a structured format.
  • It would be nice if we could provide some level of validation feedback to the user if they do something wrong – for example, we can support the following RESTful call:

/api/Values/123   // valid - searches by unique integer identifier

    But we can’t support the following call, and should return something useful instead:

/api/Values/objectname   // invalid - several objects might have the same name

  • Usually we need more fine-grained control than “get one thing” or “get all the things”.

So let’s look at some code which addresses each of these to improve our template.

Format the results using JSON.net

I want my default template to be a useful starting point – not just to me, but also to other members on my team.

  • Most of the time we won’t be returning a simple string – so I’ve constructed a simple anonymous object to return.
  • I prefer to use Json.NET from Newtonsoft, which is pretty much the industry standard at this point, rather than waste time with a bunch of answers on StackOverflow describing other (slower and more complex) ways of doing it.
// GET api/values/5
[HttpGet("{id}")]
public IActionResult Get(int id)
{
    var customObject = new { id = id, name = "name" };
 
    var formattedCustomObject = JsonConvert.SerializeObject(customObject, Formatting.Indented);
 
    return Ok(formattedCustomObject);
}

Easy – and when we run our service and browse to api/values/5, we get the following JSON response.

{
  "id": 5,
  "name": "name"
}

Help the user when they make a common mistake

I’ve frequently made a mistake when manually modifying a GET call – I search by name when I should search by id.

/api/Values/123   // correct - searches by unique integer identifier

/api/Values/objectname   // wrong - this is the mistake I sometimes make

/api/Values?name=objectname   // correct - this is what I should have done

So what result do we presently get when I make this mistake? I’d expect some kind of error message…but actually we get this:

{
  "id": 0,
  "name": "name"
}

OK – our template is somewhat contrived, but somehow the text “objectname” has been converted to a zero. What’s happened?

Our GET action has tried to interpret the value “objectname” as an integer, because there’s only one GET action, and it accepts a single integer parameter. The model binder can’t parse the string as an int, so the parameter falls back to the default value of zero – and out comes our incorrect and unhelpful result.
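The zero isn’t magic – it’s just what happens when a string fails to parse as an int and the default value for the type is used instead, which this tiny standalone sketch demonstrates:

```csharp
using System;

class BinderDemo
{
    static void Main()
    {
        // Roughly what happens when "objectname" arrives for an int parameter:
        // parsing fails, so the variable keeps the default value for int, i.e. 0.
        int id;
        if (!int.TryParse("objectname", out id))
        {
            Console.WriteLine(id); // prints 0 - hence the surprising search result
        }
    }
}
```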

But with .NET Core we can fix this – route templates are a pretty cool and configurable feature, and can do some useful type validation for us using route constraints. You can read more about these here – they allow us to create two different GET actions in our controller: one for when we’re passed an integer parameter, and one for when we’re passed a string.

First of all, let’s handle the correct case – passing an integer. I’ll rename my action to be “GetById” to be more explicit about the action’s purpose.

[HttpGet("{id:int:min(1)}", Name = "GetById")]
public IActionResult GetById([Required]int id)
{
    try
    {
        // Dummy search result - this would normally be replaced by another service call, perhaps to a database
        var customObject = new { id = id, name = "name" };
 
        var formattedCustomObject = JsonConvert.SerializeObject(customObject, Formatting.Indented);
 
        return Ok(formattedCustomObject);
    }
    catch (KeyNotFoundException)
    {
        Response.Headers.Add("x-status-reason", $"No resource was found with the unique identifier '{id}'.");
        return NotFound();
    }
}

In addition to the renamed action, you can see that the route constraint has changed to the code shown below:

[HttpGet("{id:int:min(1)}", Name = "GetById")]

So for the parameter named “id”, we’ve specified that it needs to be an integer, with a minimum value of 1. If the data doesn’t match those criteria…then this action won’t be recognised or called.
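Constraints compose with colons, and there are others beyond int and min. As a hypothetical illustration (not part of the template), an action keyed by GUID could be constrained like this, so routing only matches it when the route value actually parses as a GUID:

```csharp
// Hypothetical example: this action only matches requests like
// GET api/values/9f1c0a5e-...; for anything else, routing moves on
// to the next candidate action.
[HttpGet("{id:guid}")]
public IActionResult GetByGuid(Guid id)
{
    return Ok(id.ToString());
}
```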

Now let’s handle the situation where someone tries to GET a text value – we’ll call this method “GetByName”.

[HttpGet("{name}")]
public IActionResult GetByName([Required]string name)
{
    Response.Headers.Add("x-status-reason", $"The value '{name}' is not recognised as a valid integer to uniquely identify a resource.");
    return BadRequest();
}

There are a few things worth calling out here:

  • This method only returns a BadRequest() – so when we pass in a string like “objectname”, we get a 400 error.
  • Additionally, I pass an error message through the headers which I hope helpfully describes the mistake and how to correct it.
  • Finally, back in the GetById method, I’ve included a catch clause for when the id we search for hasn’t been found – in this case I return a helpful message in the response headers, and the HTTP status code 404 using the NotFound() method.

A more traditional method of error handling would be to check for all the errors in one method – but there’s a couple of reasons why I prefer to have two methods:

  • If the parameter to a single GET method is an integer, we will lose the string value passed by the client and have it replaced by a simple zero. I’d like to pass that string value back in an error message.
  • I think using two methods – one for the happy path, one for the exception – is more consistent with the single responsibility principle.

Allow custom search queries

So say we want to do something a bit more sophisticated than just search by Id or return all results – say we want to search our repository of data by a field like “name”.

The first step is to create a custom SearchOptions class.

public class SearchOptions
{
    public string Name { get; set; }
}

It’s easy to add custom properties to this class – say you want to search for people whose ages are between two limits, so you’d add properties like the ones below:

public int? LowerAgeLimit { get; set; }
 
public int? UpperAgeLimit { get; set; }

How do we use this “SearchOptions” class?

I’d expect a simple RESTful search query to look something like this:

/api/Values?name=objectname&lowerAgeLimit=20

If a GET method takes a SearchOptions object as a parameter, then thanks to MVC model binding, the searchOptions object will be populated with the Name and LowerAgeLimit values specified in the querystring.

The method below shows what I mean. You can see that I’ve simply created a list of anonymous objects to pretend to be the search results – we’d obviously replace this with some kind of service call which would accept searchOptions as a parameter, and use that information to get some real search results.

[HttpGet]
public IActionResult Get([FromQuery][Required]SearchOptions searchOptions)
{
    var searchResults = new[]
                            {
                                new { id = 1, Name = "value 1" },
                                new { id = 2, Name = "value 2" }
                            }.ToList();
 
    var formattedResult = JsonConvert.SerializeObject(searchResults, Formatting.Indented);
 
    Response.Headers.Add("x-total-count", searchResults.Count.ToString());
 
    return Ok(formattedResult);
}

I’ve structured the results as JSON like I’ve shown previously, but one more thing that I’ve done is add another header which contains the total number of results – I’ve done this to present some helpful meta-information to the service consumer.

Wrapping Up

So we’ve covered quite a lot of ground in this post – previously we ended with a very simple controller which returned HTTP status codes, but this time we have something a little more advanced:

  • We return complex objects using JSON;
  • We validate data passed to the GET method, passing back a 400 code and error message if the service is being used incorrectly;
  • We provide a mechanism to allow for more complex searching and pass some useful meta-data.

I’ve included the complete class below.

using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
 
namespace MyWebAPI.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IActionResult Get([FromQuery][Required]SearchOptions searchOptions)
        {
            // Dummy search results - this would normally be replaced by another service call, perhaps to a database
            var searchResults = new[]{
                                        new{ id=1, Name="value 1" },
                                        new{ id=2, Name="value 2"}
                                     }.ToList();
 
            var formattedResult = JsonConvert.SerializeObject(searchResults, Formatting.Indented);
 
            Response.Headers.Add("x-total-count", searchResults.Count.ToString());
 
            return Ok(formattedResult);
        }
 
        // GET api/values/5
        [HttpGet("{id:int:min(1)}", Name = "GetById")]
        public IActionResult GetById([Required]int id)
        {
            try
            {
                // Dummy search result - this would normally be replaced by another service call, perhaps to a database
                var customObject = new { id = id, name = "name" };
 
                var formattedCustomObject = JsonConvert.SerializeObject(customObject, Formatting.Indented);
 
                return Ok(formattedCustomObject);
            }
            catch (KeyNotFoundException)
            {
                Response.Headers.Add("x-status-reason", $"No resource was found with the unique identifier '{id}'.");
                return NotFound();
            }
        }
 
        [HttpGet("{name}")]
        public IActionResult GetByName([Required]string name)
        {
            Response.Headers.Add("x-status-reason", $"The value '{name}' is not recognised as a valid integer to uniquely identify a resource.");
            return BadRequest();
        }
 
        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]string value)
        {
            return Created($"api/Values/{value}", value);
        }
 
        // PUT api/values/5
        [HttpPut("{id}")]
        public IActionResult Put(int id, [FromBody]string value)
        {
            return Accepted(value);
        }
 
        // DELETE api/values/5
        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            return NoContent();
        }
    }
 
    // This should go into its own separate file - included here for simplicity
    public class SearchOptions
    {
        public string Name { get; set; }
    }
}

Obviously this is still a template – I’m aiming to include the absolute minimum amount of code to demonstrate how to do common useful things. Hopefully this is helpful to anyone reading this.

.net, C# tip, Clean Code, Visual Studio

Creating a RESTful Web API template in .NET Core 1.1 – Part #1: Returning HTTP Codes

I’ve created RESTful APIs with the .NET framework and WebAPI before, but nothing commercial with .NET Core yet. .NET Core has been out for a little while now – version 1.1 was released at Connect(); //2016 – I’ve heard that some customers now are willing to experiment with this to achieve some of the potential performance and stability gains.

To prepare for new customer requests, I’ve been experimenting with creating a simple RESTful API with .NET Core to see how different it is to the alternative version with the regular .NET Framework…and I’ve found that it’s really pretty different.

I’ve already written about some of the challenges in upgrading from .NET Core 1.0 to 1.1 when creating a new project – this post is about how to start with the default template for Web API projects, and transform it into something that is more like a useful project to host RESTful microservices.

This first post in the series is about turning the default project into a good HTTP citizen and return HTTP status codes.

When I create a new WebAPI project using .NET Core 1.1 from the default Visual Studio template, a number of files are created in the project. The more interesting one is the “ValuesController” – this holds the standard verbs associated with RESTful services, GET, POST, PUT and DELETE. I’ve pasted the default code created below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
 
namespace MyWebAPI.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }
 
        // GET api/values/5
        [HttpGet("{id}")]
        public string Get(int id)
        {
            return "value";
        }
 
        // POST api/values
        [HttpPost]
        public void Post([FromBody]string value)
        {
        }
 
        // PUT api/values/5
        [HttpPut("{id}")]
        public void Put(int id, [FromBody]string value)
        {
        }
 
        // DELETE api/values/5
        [HttpDelete("{id}")]
        public void Delete(int id)
        {
        }
    }
}

However, one of the things I don’t like about this – and which would be very easy to change – is the return type of each verb. A good RESTful service should return HTTP status codes describing the result of the action – typically 200-range codes for success:

  • 200 – Request is Ok;
  • 201 – Resource created successfully;
  • 202 – Update accepted and will be processed (although may be rejected);
  • 204 – Request processed and there is no content to return.

Additionally, responses to RESTful actions will sometimes contain information:

  • 200 – OK – if the action is GET, the response will contain an object (or list of objects) which were requested.
  • 201 – Created – the response will contain the object which was created, and also the unique URI required to get that object.
  • 202 – Accepted – the response will contain the object for which an update was requested.
  • 204 – No content to return – this could be returned as a result of a delete request, where it would make no sense to return an object (as it theoretically no longer exists).

    As a brief aside, some writers disagree that the Delete request should return no content – for a HATEOAS application, I can see why returning an empty response is not helpful.

I think the default ValuesController would be more useful if it implemented a pattern of returning responses with correctly configured HTTP status codes, and I think the first step towards this would be to use the code below for the ValuesController (which – as a default template – obviously does nothing useful yet).

using Microsoft.AspNetCore.Mvc;
 
namespace MyWebAPI.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new string[] { "value1", "value2" });
        }
 
        // GET api/values/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            return Ok("value");
        }
 
        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]string value)
        {
            return Created($"api/Values/{value}", value);
        }
 
        // PUT api/values/5
        [HttpPut("{id}")]
        public IActionResult Put(int id, [FromBody]string value)
        {
            return Accepted(value);
        }
 
        // DELETE api/values/5
        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            return NoContent();
        }
    }
}

The main changes I’ve made so far are:

  • The return type of each action is now IActionResult, which allows for Http status codes to be returned.
  • For the GET actions, I’ve just wrapped the objects returned (which are simple strings) with the Ok result.
  • For the POST action, I’ve used the Created result object. This is different to OK because in addition to including an object, it also includes a URI pointing to the location of the object.
  • For the PUT action, I just wrapped the object returned with the Accepted result. The return type of Accepted is new in .NET Core v1.1 – this won’t compile if you’re targeting previous versions.
  • Finally, for the DELETE action, rather than returning void I’ve returned a NoContent result type.

I really like how .NET Core v1.1 bakes in creating great RESTful services in a clean and simple way, and prefer it to the way previously used in .NET. I’m planning a number of other posts which will focus on some functional and non-functional aspects of creating a clean RESTful service.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net core, C# tip

Upgrading from .NET Core 1.0 to 1.1 with Visual Studio 2015

I’ve been experimenting with .NET Core – and creating my new projects with Visual Studio 2015. There are a few gotchas – I think Microsoft understand that the numbering conventions need some work, and I’d expect these to be sorted out soon. I’ve found this a bit confusing, so I wanted to post my findings and workarounds in case anyone else was struggling too.

As is the case with a few of my posts – which cover preview technologies – I hope this information becomes obsolete soon, as new and more stable packages are released.

Some useful links are below:

  • The .NET Core Download page is here.
  • I’ve installed the .NET Core 1.1 SDK (64 bit) from here.
    • The file is called: dotnet-dev-win-x64.1.0.0-preview2-1-003177.exe

screenshot-1479660931

  • I’ve also installed the tools (Preview 2) for .NET Core 1.1 for Visual Studio 2015 from here.
    • The file is called: DotNetCore.1.0.1-VS2015Tools.Preview2.0.3.exe so unfortunately you can see it’s still marked as version 1.0.1.

screenshot-1479660318

When I had installed the new .NET Core SDK (v1.1), there was a new SDK folder on my hard disk at C:\Program Files\dotnet\sdk, as shown below. You can see I’ve also installed a couple of other previous versions of .NET Core, which can happily exist side by side.

screenshot-1479661066

It’s worth noting that the .NET Core 1.1 SDK has a folder that doesn’t actually contain the version number “1.1” – it’s still marked as “1.0.0”, but it at least has the sub-version 3177 present at the end of the folder name.

Creating a new project targeting .NET Core 1.1 with Visual Studio 2015

I opened Visual Studio 2015 and created a new Web API project targeting .NET Core. VS2015 doesn’t specify the target version at the time of project creation.

screenshot-1479661596

Immediately after creating the project, VS2015 alerted me that it was restoring packages, and it completed this successfully. However, if I open the global.json file for this project, we can see it does not target .NET Core v1.1.

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}

It’s a bit confusing to see which version is actually being targeted, as the version numbers are in such a state of flux – however, for version 1.1, I would expect the SDK version to be specified as 1.0.0-preview2-1-003177. I literally just copied this text from the folder name of the SDK I want to target, which is stored under “C:\Program Files\dotnet\sdk”.

The default project.json text created by Visual Studio for the project is pasted below – this all targets .NET Core v1.0:

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.1",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.0.1",
    "Microsoft.AspNetCore.Routing": "1.0.1",
    "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.Logging.Debug": "1.0.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0"
  },
 
  "tools": {
    "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview2-final"
  },
 
  "frameworks": {
    "netcoreapp1.0": {
      "imports": [
        "dotnet5.6",
        "portable-net45+win8"
      ]
    }
  },
 
  "buildOptions": {
    "emitEntryPoint": true,
    "preserveCompilationContext": true
  },
 
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },
 
  "publishOptions": {
    "include": [
      "wwwroot",
      "**/*.cshtml",
      "appsettings.json",
      "web.config"
    ]
  },
 
  "scripts": {
    "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ]
  }
}

We can prove this project configuration targets .NET Core 1.0 by building the project as it is using a PowerShell prompt – I’ve pasted the output from this build operation below:

PM> dotnet build
Project MyWebAPI (.NETCoreApp,Version=v1.0) will be compiled because expected outputs are missing
Compiling MyWebAPI for .NETCoreApp,Version=v1.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:02.8137094

You can see in the console output above that Version=v1.0 is presently targeted.

Upgrading to .NET Core 1.1

To upgrade, first I change the version specified in global.json to use the version “1.0.0-preview2-1-003177”.

{
  "projects": [ "src", "test" ],
  "sdk": {
    "version": "1.0.0-preview2-1-003177"
  }
}

Next, we need to update the project.json file in the web project itself. I needed to change this in two places:

  • The version of Microsoft.NETCore.App needs to be updated to 1.1.0 (from 1.0.1)
  • The framework “netcoreapp1.0” needs to be updated to “netcoreapp1.1”

I’ve pasted the corrected project.json below:

{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.1.0",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.0.1",
    "Microsoft.AspNetCore.Routing": "1.0.1",
    "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.Logging.Debug": "1.0.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0"
  },
 
  "tools": {
    "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview2-final"
  },
 
  "frameworks": {
    "netcoreapp1.1": {
      "imports": [
        "dotnet5.6",
        "portable-net45+win8"
      ]
    }
  },
 
  "buildOptions": {
    "emitEntryPoint": true,
    "preserveCompilationContext": true
  },
 
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },
 
  "publishOptions": {
    "include": [
      "wwwroot",
      "**/*.cshtml",
      "appsettings.json",
      "web.config"
    ]
  },
 
  "scripts": {
    "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ]
  }
}

We’re not done yet – if we build the project now we’ll see that version 1.1 is targeted, but we won’t see many of the changes in 1.1 until we update the APIs available to us. To do this, we need to update the NuGet packages – fortunately this is easy.

  • I open the Package Manager Console (by going to Tools -> NuGet Package Manager -> Package Manager Console) – first I check that my working directory is the root of my solution by typing “pwd”.
  • Next I type “Update-Package” to get the latest NuGet packages for the v1.1 configuration.
  • Next, I type “dotnet restore” (I think this step is technically unnecessary, but I like to run it to be sure I’ve updated the entire solution).
  • Finally, I change directory into my project by typing “cd .\src\MyWebAPI” (my project is called “MyWebAPI” – yours will be different), and then I type “dotnet build” to compile the project. The text below is written to the console.
PM> dotnet build
Project MyWebAPI (.NETCoreApp,Version=v1.1) will be compiled because inputs were modified
Compiling MyWebAPI for .NETCoreApp,Version=v1.1

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:01.6162872

As you can see, the version v1.1 is now targeted. I hope this helps anyone who’s trying to upgrade to the new SDK – I’ve pasted a link to some more official Microsoft information below.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!


.net, C# tip, UWP, Visual Studio

Testing your Windows App with Appium in Windows 10 and Visual Studio 2015

At Connect(); // 2016, Scott Hanselman’s keynote included a short description of a tool called Appium (presented by Stacey Doerr). This tool allows you to create and automate UI tests for Windows apps – not just UWP apps, but basically any app which runs on your Windows machine. Automated UI testing is definitely something that I’ve missed since moving from web development to UWP development, so I was quite excited to find a project that would help fill this gap.

As is often the case, getting started with new things is tricky – when I followed the current instructions from Microsoft, I hit some errors. That’s likely to be caused by my development machine’s setup – but you might hit the same issue. In this post, I’ll describe the process I followed to get Appium working, and I’ll also document the error messages I encountered along the way.

I hope that this blog post becomes irrelevant soon and that this isn’t an issue affecting a lot of people.

Installing and Troubleshooting Appium

Step 1 – Install Node.js

Install Node.js from here.

Step 2 – Open a PowerShell prompt as Administrator, and install Appium

From an elevated PowerShell prompt, run the command:

npm install -g appium

When I ran this command, the following warnings were printed to the screen – however I don’t think they’re anything to worry about:

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.12(node_modules\appium\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.0.15: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

Step 3 – From an elevated PowerShell prompt, run Appium

From an elevated PowerShell prompt, run the command:

appium

After a few seconds, the following text is printed to the screen.

Welcome to Appium v1.6.0
Appium REST http interface listener started on 0.0.0.0:4723

At this point I tried to run the tests in the Sample Calculator App provided by Appium on GitHub – found here. I used Visual Studio to run these tests, but found all 5 tests failed, and the following error was printed to the PowerShell prompt.

[Appium] Creating new WindowsDriver session
[Appium] Capabilities:
[Appium]   app: 'Microsoft.WindowsCalculator_8wekyb3d8bbwe!App'
[Appium]   platformName: 'Windows'
[Appium]   deviceName: 'WindowsPC'
[BaseDriver] The following capabilities were provided, but are not recognized by appium: app.
[BaseDriver] Session created with session id: dcfce8e7-9615-4da1-afc5-9fa2097673ed
[WinAppDriver] Verifying WinAppDriver is installed with correct checksum
[debug] [WinAppDriver] Deleting WinAppDriver session
[MJSONWP] Encountered internal error running command: 
	Error: Could not verify WinAppDriver install; re-run install
    at WinAppDriver.start$ (lib/winappdriver.js:35:13)
    at tryCatch (C:\Users\Jeremy\AppData\Roaming\npm\node_modules\appium\node_modules\babel-runtime\regenerator\runtime.js:67:40)
    at GeneratorFunctionPrototype.invoke [as _invoke] (C:\Users\Jeremy\AppData\Roaming\npm\node_modules\appium\node_modules\babel-runtime\regenerator\runtime.js:315:22)
    at GeneratorFunctionPrototype.prototype.(anonymous function) [as next] (C:\Users\Jeremy\AppData\Roaming\npm\node_modules\appium\node_modules\babel-runtime\regenerator\runtime.js:100:21)
    at GeneratorFunctionPrototype.invoke (C:\Users\Jeremy\AppData\Roaming\npm\node_modules\appium\node_modules\babel-runtime\regenerator\runtime.js:136:37)

For some reason on my machine, the WinAppDriver had not been installed correctly during the installation of Appium.

Step 4 – Manually install v0.5-beta of the WinAppDriver

This is pretty easy to fix – we can just grab the WinAppDriver installer from its GitHub site. But for version 1.6.0 of Appium, I found that it was important to select the correct version of WinAppDriver – specifically v0.5-beta, released on September 16 2016. Higher versions did not work for me with Appium v1.6.0.

Step 5 – Restart Appium from an elevated PowerShell prompt

Installing WinAppDriver v0.5-beta was a pretty simple process – I just double-clicked on the file and selected all the default options. Then I repeated Step 3 and restarted Appium from the elevated PowerShell prompt. Again, after a few seconds, the same message appeared.

Welcome to Appium v1.6.0
Appium REST http interface listener started on 0.0.0.0:4723

This time, when I ran the tests for the Sample Calculator App from GitHub they all passed. Also, the PowerShell prompt showed no errors – instead of saying that it couldn’t verify the WinAppDriver install, I got the message below:

[WinAppDriver] Verifying WinAppDriver is installed with correct checksum
[debug] [WinAppDriver] WinAppDriver changed state to 'starting'
[WinAppDriver] Killing any old WinAppDrivers, running: FOR /F "usebackq tokens=5" %a in (`netstat -nao ^| findstr /R /C:"4823 "`) do (FOR /F "usebackq" %b in (`TASKLIST /FI "PID eq %a" ^| findstr /I winappdriver.exe`) do (IF NOT %b=="" TASKKILL /F /PID %a))
[WinAppDriver] No old WinAppDrivers seemed to exist
[WinAppDriver] Spawning winappdriver with: undefined 4823/wd/hub
[WinAppDriver] [STDOUT] Windows Application Driver Beta listening for requests at: http://127.0.0.1:4823/wd/hub
[debug] [WinAppDriver] WinAppDriver changed state to 'online'

I was able to see the standard Windows Calculator appear, and a series of automated UI tests were carried out on the app.

How do I get automation information for these apps?

When you look at the Sample Calculator App and the basic scenarios for testing, you’ll see some code with some strange constant values – such as those in the snippet below.

DesiredCapabilities appCapabilities = new DesiredCapabilities();
appCapabilities.SetCapability("app", "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App");
appCapabilities.SetCapability("platformName", "Windows");
appCapabilities.SetCapability("deviceName", "WindowsPC");
CalculatorSession = new RemoteWebDriver(new Uri(WindowsApplicationDriverUrl), appCapabilities);
Assert.IsNotNull(CalculatorSession);
CalculatorSession.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(2));
 
// Make sure we're in standard mode
CalculatorSession.FindElementByXPath("//Button[starts-with(@Name, \"Menu\")]").Click();
OriginalCalculatorMode = CalculatorSession.FindElementByXPath("//List[@AutomationId=\"FlyoutNav\"]//ListItem[@IsSelected=\"True\"]").Text;
CalculatorSession.FindElementByXPath("//ListItem[@Name=\"Standard Calculator\"]").Click();

The code above shows that the test looks for an app with identifier:

“Microsoft.WindowsCalculator_8wekyb3d8bbwe!App”

It’s obvious this is for the Microsoft Windows Calculator app – but most of us won’t recognise the strange looking code appended at the end of this string. This is the application’s automation identifier.

In order to locate this identifier, start the standard Calculator application from within Windows (open a Run prompt and enter “Calc”).

There’s a tool shipped with Visual Studio 2015 called “Inspect” – it should normally be available at the location:

C:\Program Files (x86)\Windows Kits\10\bin\x86

Start Inspect.exe from the directory specified above. When you run the Inspect application, you’ll get a huge amount of information about the objects currently being managed by Windows 10 – when you drill into the tree view on the left side of the screen to see running applications, you can select “Calculator”, and on the right hand side a value for “AutomationId” will be shown – I’ve highlighted it in red below.

inspect

The other items – menus, buttons, and display elements – can also be obtained from this view when you select the corresponding menu, button or display elements – a particularly useful property is “Legacy|Accessible:Name” when identifying elements using the FindElementByXPath method.
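Pulling these pieces together – once you have the AutomationId and accessible names from Inspect, driving the app is just a matter of finding elements and acting on them. Here’s a sketch, continuing the sample test above; the element names and the “CalculatorResults” automation id are what Inspect reports for the standard Calculator on my machine, so verify them with Inspect on yours:

```csharp
// Illustrative sketch – verify these element names with Inspect on your machine.
CalculatorSession.FindElementByName("One").Click();
CalculatorSession.FindElementByName("Plus").Click();
CalculatorSession.FindElementByName("Two").Click();
CalculatorSession.FindElementByName("Equals").Click();

// Read the result back – "CalculatorResults" is the AutomationId that
// Inspect reports for the results display element
var result = CalculatorSession
    .FindElementByXPath("//Text[@AutomationId=\"CalculatorResults\"]")
    .Text;
```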

Conclusion

I hope this post is useful for anyone interested in automating UI tests for Windows apps, and particularly if you’re having problems getting Appium to work. There are some really useful sample apps on GitHub from Appium – I found coding for Windows apps a bit confusing to start with, but with a few key bits of information – like using the Inspect tool – you can start to tie together how the sample apps were written and how they work. This should get you up and running with your own automated UI test code. I’m excited about the opportunities this tool gives me to improve the quality of my applications – I hope this post helps you get started too.

.net, Accessibility, C# tip, UWP

How to use C# and the Windows.Media.SpeechSynthesis library to make your UWP app talk

This is a short post on the topic of building speech-enabled UWP apps for the Windows Store.

The features available through the Universal Windows Platform are pretty interesting – and also pretty incredible, when you consider you get these APIs for free as long as you’re building an app. One of these features is speech synthesis.

I find this particularly interesting because I’ve been researching some of Microsoft’s Cognitive Services – and one of these services is Text to Speech. These services are not free – at the time of writing, you get 5,000 free transactions per month, and after that it’s $4 per 1,000 transactions. This is pretty good value…but free is better. Also, I’ll show you that a lot less code is required for the offline app version – you can see the code that’s required to use the online API here.

So in this post, I’ll walk through the steps of how to get an app to talk to you.

Building the UI

First, open VS2015 and create a blank Windows 10 UWP app.


When the app has been created successfully, I’d like to create a UI where the top two thirds of the screen are used for user-entered text, and the bottom third is a button which will make the device read the text entered.

I can do this by defining Grid rows using the code below – this splits the screen into two rows, with the top row being twice the size of the bottom row.
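The row definitions themselves aren’t shown in the snippet below – a minimal sketch of the containing Grid might look like this (the star sizing of “2*” and “*” gives the top row twice the height of the bottom row):

```xml
<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="2*" />
        <RowDefinition Height="*" />
    </Grid.RowDefinitions>
    <!-- the TextBox and Button below sit inside this Grid -->
</Grid>
```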

<TextBox
    Grid.Column="0" 
    Grid.Row="0" 
    HorizontalAlignment="Stretch" 
    VerticalAlignment="Stretch"
    Width="Auto" 
    Height="Auto" 
    Name="textToSpeak"
    AcceptsReturn="True"
    Text="Enter text here."/>
<Button 
    Grid.Column="0" 
    Grid.Row="1" 
    HorizontalAlignment="Stretch" 
    VerticalAlignment="Stretch" 
    Width="Auto" 
    Click="Speak_Click">
        Speak
</Button>

Finally, we need to add the magic element – the MediaElement.

<MediaElement Name="media"  Visibility="Collapsed"/>

That’s the XAML part of the project completed.

Writing the code

The code is written to trigger speech synthesis of whatever is in the text box when the button is clicked. It’s pretty simple code – we instantiate the SpeechSynthesizer object in the page constructor, and then call a Talk method. This asynchronous method converts the text to a speech synthesis stream, and then sets the source of the MediaElement to this stream. Once that’s set, we can call the Play method of the MediaElement to hear the computer talk.

using System;
using Windows.Media.SpeechSynthesis;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
 
namespace SpeakingApp
{
    public sealed partial class MainPage : Page
    {
        SpeechSynthesizer speechSynthesizer;
 
        public MainPage()
        {
            InitializeComponent();
            speechSynthesizer = new SpeechSynthesizer();
        }
 
        private void Speak_Click(object sender, RoutedEventArgs e)
        {
            Talk(textToSpeak.Text);
        }
 
        private async void Talk(string message)
        {
            var stream = await speechSynthesizer.SynthesizeTextToStreamAsync(message);
            media.SetSource(stream, stream.ContentType);
            media.Play();
        }
    }
}

And that’s it – very simple code to allow your app to talk to the user. I hope you find this helpful.
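As an aside – you’re not limited to the default voice. A hedged sketch (assuming at least one other voice is installed on the device) of how you might pick a different voice from the installed set, using the speechSynthesizer field from the code above:

```csharp
using System.Linq;
using Windows.Media.SpeechSynthesis;

// SpeechSynthesizer.AllVoices lists every voice installed on the device;
// each VoiceInformation exposes DisplayName, Language and Gender.
var alternativeVoice = SpeechSynthesizer.AllVoices
    .FirstOrDefault(v => v.Gender == VoiceGender.Female);

// Only switch voice if a matching one was actually found
if (alternativeVoice != null)
{
    speechSynthesizer.Voice = alternativeVoice;
}
```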

.net, C# tip, Clean Code

How to use the FileSystemWatcher in C# to report file changes on disk

A useful feature supplied in .NET is the FileSystemWatcher object. If you need to know when changes are made to a directory (e.g. files being added, changed or deleted), this object allows you to capture an event describing what’s different just after the change is made.

Why is this useful?

There are a number of scenarios – a couple are:

  • You might want to audit changes made to a directory;
  • After files are copied to a directory, you might want to automatically process them according to a property of that file (e.g. one user might be scanning documents and saving those scans to a shared directory on your network, and this process could handle each file as it’s dropped into the directory by the scanner).

I’ve seen instances where developers allow a user to upload a file through a website, and keep lots of file processing code within their web application. One way to make the application cleaner would have been to separate the file processing concern away from the website.

How do you use it?

It’s pretty simple to use this class. I’ve written a sample program and pasted it below:

using System.IO;
using static System.Console;
using static System.ConsoleColor;
 
namespace FileSystemWatcherSample
{
    class Program
    {
        static void Main(string[] args)
        {
            // instantiate the object
            var fileSystemWatcher = new FileSystemWatcher();
 
            // Associate event handlers with the events
            fileSystemWatcher.Created += FileSystemWatcher_Created;
            fileSystemWatcher.Changed += FileSystemWatcher_Changed;
            fileSystemWatcher.Deleted += FileSystemWatcher_Deleted;
            fileSystemWatcher.Renamed += FileSystemWatcher_Renamed;
 
            // tell the watcher where to look
            fileSystemWatcher.Path = @"C:\Users\Jeremy\Pictures\Screenshots\";
 
            // You must add this line - this allows events to fire.
            fileSystemWatcher.EnableRaisingEvents = true;
 
            WriteLine("Listening...");
            WriteLine("(Press any key to exit.)");
            
            ReadKey();
        }
 
        private static void FileSystemWatcher_Renamed(object sender, RenamedEventArgs e)
        {
            ForegroundColor = Yellow;
            WriteLine($"A file has been renamed from {e.OldName} to {e.Name}");
        }
 
        private static void FileSystemWatcher_Deleted(object sender, FileSystemEventArgs e)
        {
            ForegroundColor = Red;
            WriteLine($"A file has been deleted - {e.Name}");
        }
 
        private static void FileSystemWatcher_Changed(object sender, FileSystemEventArgs e)
        {
            ForegroundColor = Green;
            WriteLine($"A file has been changed - {e.Name}");
        }
 
        private static void FileSystemWatcher_Created(object sender, FileSystemEventArgs e)
        {
            ForegroundColor = Blue;
            WriteLine($"A new file has been created - {e.Name}");
        }
    }
}

Are there any problems?

Well maybe I wouldn’t call them problems, but there’s certainly a few things that surprised me when I was using this utility.

As an example, when I took a screenshot and saved it to my Screenshots folder, I expected just one event to be raised – the Created event. But the picture below shows all the events that were actually raised.

File System Watcher

Let’s look at what happens:

  • First a file is created;
  • Then it’s somehow changed three times;
  • Then it’s renamed;
  • Then another file is created, and changed;
  • And finally, the original file is deleted.

This tells me something interesting about how my screenshot capture program works – but it also tells me to expect that the Created event will be fired twice when I take a single screenshot, so I’d have to code to prepare for that.
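One way to prepare for this noise – a sketch, not from the original post – is to narrow what the watcher reports using its Filter and NotifyFilter properties, and to ignore duplicate Changed events that arrive in quick succession for the same file:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

namespace FileSystemWatcherSample
{
    class FilteredWatcher
    {
        // Remember when we last saw a Changed event for each path
        static readonly Dictionary<string, DateTime> lastSeen = new Dictionary<string, DateTime>();

        static void Main()
        {
            var watcher = new FileSystemWatcher(@"C:\Users\Jeremy\Pictures\Screenshots\")
            {
                // Only raise events for .png files...
                Filter = "*.png",
                // ...and only for filename changes and content writes
                NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
            };

            watcher.Changed += (sender, e) =>
            {
                var now = DateTime.UtcNow;
                DateTime previous;
                // Skip duplicate Changed events within a one-second window
                if (lastSeen.TryGetValue(e.FullPath, out previous)
                    && (now - previous) < TimeSpan.FromSeconds(1))
                {
                    return;
                }
                lastSeen[e.FullPath] = now;
                Console.WriteLine($"Processing change to {e.Name}");
            };

            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }
    }
}
```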

Ashutosh Nilkanth has blogged a few more tips and notes on using this class.

Summary

The FileSystemWatcher class is a useful .NET tool for observing changes to a directory structure. Because this watches for changes at an operating system level, events might be called in unexpected ways. Therefore it makes sense to properly understand the operating system events called when changes are made to the directory you’re monitoring, and design your solution to handle the real events (rather than the ones you might logically expect).

 

C# tip, Computer Vision, OCR, Optical Character Recognition

Optical Character Recognition in C# – Part #3, using Microsoft Cognitive Services (formerly Project Oxford)

This is the third part of my series on Optical Character Recognition (OCR), and what options are available for .NET applications – particularly low cost options. The first part was about using the open source package Tesseract, and the second part was about using the Windows.Media.Ocr libraries available to applications on the Universal Windows Platform (UWP). This part is about using Microsoft’s Project Oxford – this has a component which could be described as ‘OCR as a Service’.

Since I started this series, Build 2016 has happened and a few things have changed. Project Oxford has been rebranded as part of a wider suite of API services, known as Microsoft Cognitive Services. These APIs offer functions including:

  • Computer Vision;
  • Speech;
  • Language;
  • Knowledge;
  • Search (better known as Bing services).

Microsoft have open sourced their client SDKs on GitHub here – this still carries some of the Project Oxford branding.

Getting started with OCR and Cognitive Services

In order to use OCR as a Service, you’ll need to get a subscription key from Microsoft. It’s pretty easy to get this, and you can sign up at this address, previewed below.

subscribe in seconds

I chose to sign up for the computer vision services (and also for Speech and Speaker Recognition previews). This allows me up to 5,000 transactions per month free of charge.

I’m able to view my subscriptions here, which shows me a screen like the one below.

subscription_dashboard

Let’s look at some code next.

Accessing OCR services using C#

In the previous two posts, I’ve been using a screenshot of one of my other blog posts – I want to keep using the same screenshot (shown below) in each of the three methods to be consistent.

sample_for_reading

As a reminder, Tesseract performed reasonably well, but wasn’t able to interpret the light grey text at the top of the page. The Windows.Media.Ocr library performed very well – it detected the grey text (although didn’t translate it very well), but the rest of the text was detected and interpreted perfectly.

I created a new C# console project to test Project Oxford. The next step was to get the necessary client packages from NuGet.

Install-Package Microsoft.ProjectOxford.Vision

Next, I ran the code below – this is a very simple test application. I’ve created an ImageToTextInterpreter class which basically wraps the asynchronous call to Microsoft’s servers. The text results come back as an “OcrResults” object, and I’ve written a simple static function to output the textual contents of this object to the console.

Remember to enter your own Subscription Key and image file path if you try the code below.

namespace CognitiveServicesConsoleApplication
{
    using Microsoft.ProjectOxford.Vision;
    using Microsoft.ProjectOxford.Vision.Contract;
    using System;
    using System.IO;
    using System.Linq;
    using System.Threading.Tasks;
    
    class Program
    {
        static void Main(string[] args)
        {
            Task.Run(async () =>
            {
                var cognitiveService = new ImageToTextInterpreter {
                    ImageFilePath = @"C:\Users\jeremy\Desktop\sample.png",
                    SubscriptionKey = "<<--[put your secret key here]-->>"
                };
 
                var results = await cognitiveService.ConvertImageToStreamAndExtractText();
 
                OutputToConsole(results);
             }).Wait();
        }
 
        private static void OutputToConsole(OcrResults results)
        {
            Console.WriteLine("Interpreted text:");
            Console.ForegroundColor = ConsoleColor.Yellow;
 
            foreach (var region in results.Regions)
            {
                foreach (var line in region.Lines)
                {
                    Console.WriteLine(string.Join(" ", line.Words.Select(w => w.Text)));
                }
            }
 
            Console.ForegroundColor = ConsoleColor.White;
            Console.WriteLine("Done.");
            Console.ReadLine();
        }
    }
 
    public class ImageToTextInterpreter
    {
        public string ImageFilePath { get; set; }
 
        public string SubscriptionKey { get; set; }
 
        const string UNKNOWN_LANGUAGE = "unk";
        
        public async Task<OcrResults> ConvertImageToStreamAndExtractText()
        {
            var visionServiceClient = new VisionServiceClient(SubscriptionKey);
 
            using (Stream imageFileStream = File.OpenRead(ImageFilePath))
            {
                return await visionServiceClient.RecognizeTextAsync(imageFileStream, UNKNOWN_LANGUAGE);
            }
        }
    }
}

I’ve pasted the results output to the console below – predictably, the result quality is almost identical to the results from the Windows.Media.Ocr test in Part #2 of the series (the online service probably uses the same algorithms as the UWP libraries). The light grey text at the top of the image is interpreted badly, but the rest of the text has been interpreted perfectly.

translated_text
Conclusion

I’ve tried three methods of OCR using .NET technology – Tesseract, Windows.Media.Ocr for UWP, and online Cognitive Services. Each of these has different advantages and disadvantages.

Tesseract interprets text reasonably well. Its big advantage is that this is a free and open source solution, which can be integrated into regular C# applications without any need to be online. However, there’s some complexity around setting up English language files.

Windows.Media.Ocr interpreted black text very well (although lower contrast text wasn’t interpreted quite as well). This can be used offline also. However, this can only be used with Windows Store Apps, which might not be suitable for every application.

Cognitive Services (Project Oxford) also interpreted text very well, and as it’s a regular web service, it can be used in any C# application (so both classic C# apps and UWP apps). However, these services require the application to be online to function. This is a commercial service which limits free use to 5,000 transactions per month; over this limit a purchase plan will apply.

.net, C# tip, Clean Code, Dependency Injection, Inversion of Control, MVC, Solid Principles

How to use built-in dependency inversion in MVC6 and ASP.NET Core

I’ve previously posted about the new logging features in ASP.NET Core RC1 and MVC6. This time I’m going to write about how Microsoft now has dependency inversion baked into the new Core framework.

Dependency inversion is a well documented and understood principle – it’s what the D stands for in SOLID, and says that your code should only depend on abstractions, not concrete implementations. So plug your services into your application through interfaces.


In previous versions of MVC, I’ve needed to download a 3rd party library to assist with dependency inversion – these libraries are also sometimes called “containers”. Examples of containers I’ve used are Ninject.MVC, Autofac, and Spring.NET.

In MVC6, Microsoft has entered this field, by including a simple container in the new version of ASP.NET. This isn’t intended to replicate all the features of other containers – but it provides dependency inversion features which may be suitable for many projects. This allows us to avoid adding a heavyweight 3rd party dependency to our solution (at least until there’s a feature we need from it).

Getting started

For our example, first create the default MVC6 web application in Visual Studio 2015.

webapp1.png

Now let’s create a simple stubbed service and interface to get some users. We’ll save this in the “Services” folder of the project.

public interface IUserService
{
    IEnumerable<User> Get();
}

We’ll need a User object too – we’ll put this in the “Models” folder.

public class User
{
    public string Name { get; set; }
}

Let’s create a concrete implementation of this interface, and save this in the “Services” folder too.

public class UserService : IUserService
{
    public IEnumerable<User> Get()
    {
        return new List<User>{ new User { Name = "Jeremy" } };
    }
}

Now modify the HomeController to allow us to display these users on the Index page – we need to change the constructor (to inject the interface as a class dependency), and to change the Index action to actually get the users.

public class HomeController : Controller
{
    private readonly IUserService _userService;
 
    public HomeController(IUserService userService)
    {
        _userService = userService;
    }
 
    public IActionResult Index()
    {
        var users = _userService.Get();
        return View(users);
    }
}

If we just run our project now, we’ll get an exception – the HomeController’s constructor asks for an IUserService, but we haven’t told the framework how to provide one yet.

error

We need to configure the services that the container knows about. This is where Microsoft’s new dependency inversion container comes in. You just need to add a single line of code in the ConfigureServices method in Startup.cs to make sure the controller is given a concrete instance of UserService when it asks the container “Can you give me something that implements IUserService?”

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddTransient<IUserService, UserService>();
}

If we run the project again now, we won’t get any exceptions – obviously we’d have to change the Index view to display the users.
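A minimal sketch of what that Index view change might look like – the markup below is illustrative (the original post doesn’t show it), and the MyWebApplication.Models namespace is an assumption you should adjust to your own project:

```
@model IEnumerable<MyWebApplication.Models.User>

<ul>
    @foreach (var user in Model)
    {
        <li>@user.Name</li>
    }
</ul>
```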

Transient, Scoped, Singleton, Instance

In the example above, I used the “AddTransient” method to register the service. There are actually four options for registering services:

  • AddTransient
  • AddScoped
  • AddSingleton
  • AddInstance

Which option you choose depends on the lifetime of your service:

  • Transient services are created each time they are called. This would be useful for a light service, or when you need to guarantee that every call to this service comes from a fresh instantiation (like a random number generator).
  • Scoped services are created once per request. Entity Framework contexts are a good example of this kind of service.
  • Singleton services are created once and then every request after that uses the service that was created the first time. A static calculation engine might be a good candidate for this kind of service.
  • Instance services are similar to Singleton services, but they’re created at application startup from the ConfigureServices method (whereas the Singleton service is only created when the first request is made). Instantiating the service at startup would be useful if the service is slow to start up, so this would save the site’s first user from experiencing poor performance.
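To make the lifetimes concrete, here’s a sketch of how each registration might look in ConfigureServices – the service names other than IUserService/UserService are invented purely for illustration:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // New instance every time the service is requested
    services.AddTransient<IUserService, UserService>();

    // One instance per HTTP request (hypothetical service name)
    services.AddScoped<IShoppingBasketService, ShoppingBasketService>();

    // One instance for the lifetime of the application,
    // created when first requested (hypothetical service name)
    services.AddSingleton<ICalculationEngine, CalculationEngine>();

    // One instance for the lifetime of the application,
    // created right now at startup (hypothetical service name)
    services.AddInstance<ISlowStartingService>(new SlowStartingService());
}
```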

Conclusion

Microsoft have added their own dependency inversion container to the new ASP.NET Core framework in MVC6. This should be good enough for the needs of many ASP.NET projects, and potentially allows us to avoid adding a heavyweight third party IoC container.