.net core, Azure, C# tip, Clean Code, Dependency Injection, Inversion of Control, MVC

Azure Cache as a session data store in C# and MVC

Using the HTTP Session is one of those things that provokes…opinions. A lot of developers think that sessions are evil. And while I understand some of the reasons why that’s a common viewpoint – I’ve had my share of problems with code that uses sessions in the past – I’d qualify any criticism by saying the problems were more about how I was using the technique, rather than any inherent problem with sessions as a concept.

For example, some instances where using an in-memory session store can cause problems are:

  • If you chuck lots of website data into an in-memory session, you’ll quickly eat up lots of RAM on your web server. This might eventually cause performance problems.
  • Sessions are typically short-lived – often around 20 minutes – which can lead to a poor user experience after a period of inactivity (like being unexpectedly logged out).
  • In a load-balanced environment, users might experience issues – if their first request leads to a session being created on one web server, and their next request is routed to a different (less busy) web server, that server won’t know anything about their previous session. You can work around this using sticky sessions…and my own experiences with the sticky-sessions approach are best described as “mixed”. But YMMV.
  • If you’re not using SSL/TLS, your session might be vulnerable to man-in-the-middle attacks – but the easy answer to this is to use SSL/TLS.

Anyway – I think most people would agree with the basic need for one web page to access data entered on another web page. However, if you have high throughput, a need for a large session store, or a load balanced environment, then the out-of-the-box HTTP Session object might not be for you. But that doesn’t mean you can’t use sessions at all.

‘Azure Cache for Redis’ to the rescue

Even though my application isn’t in a load-balanced environment right now, I’d still like to make sure it’s easy to port to one in the future. So I’ve been looking for alternatives to using the Session object:

  • I could use Cookies, but I can’t store very much information in them.
  • I could use a SQL database, but this seems heavyweight for short-lived, session-based information.
  • A NoSQL store like Redis would suit very well – it’s super fast, with low-latency, high-throughput performance.

I wrote about using Redis as a fast-access data store a long time ago, but that post is out of date now and worth updating, as there’s now a built-in Azure option – Azure Cache for Redis.

Spinning up Azure Cache for Redis

Check out the official docs for how to create a cache in Azure – it’s clearly described here with lots of screenshots to guide you through.

But how can I use Azure Cache for Redis in a website?

I don’t really like the ASP.NET implementation from the official documentation. It works, but there’s a lot of code in the controller’s action, and I’d like a cleaner solution. Ideally I’d like to inject an interface into my controller as a dependency, and use ASP.NET Core’s service container to instantiate the dependency on demand.

I found this really useful post from Simon Holman, and he has also created a super helpful example on GitHub. I tested this with an MVC project in .NET Core v2.2, and the implementation is very simple (check out Simon’s source code for exactly where to put these snippets).

  • Update the Startup.cs file’s ConfigureServices method after putting the connection string into your appsettings.json file:
services.AddDistributedRedisCache(options =>
{
    options.Configuration = Configuration.GetConnectionString("RedisCache");
});
 
services.AddSession();
  • Update the Startup.cs file’s Configure method:
app.UseSession();
  • Here’s how to set data into the cache:
var sessionstartTime = DateTime.Now.ToLongTimeString();
HttpContext.Session.SetString("mysessiondata", sessionstartTime);
  • …get data from the cache:
var sessionstartTime = HttpContext.Session.GetString("mysessiondata");
  • …and remove data from the cache:
HttpContext.Session.Remove("mysessiondata");
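For completeness, here’s a sketch of the “RedisCache” connection string that the ConfigureServices snippet above reads from appsettings.json – the host name and access key below are placeholders, so substitute your own cache’s values from the Azure portal:

```json
{
  "ConnectionStrings": {
    "RedisCache": "mycachename.redis.cache.windows.net:6380,password=<primary-access-key>,ssl=True,abortConnect=False"
  }
}
```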

Some things to bear in mind about Azure Cache for Redis

I wanted to dive into the details of Azure Cache a bit more, just to understand what’s actually going on beneath the hood when we’re reading from, and writing to, the session object.

You can see what session keys are saved in your Redis cache using the Azure portal

redis console arrow

Once you’re in the console, you run the command “scan 0 count 100 match *” to see up to the first 100 keys in your Redis cache.

redis console

From the screenshot above, I can see that I’ve got 16 sessions open.

The Guid in the Key is actually your Session’s *private* “_sessionKey”

In the image above, you can see a bunch of GUID objects which are the keys of the items in my Redis cache. And if you look at the 16th item in the list above, you can see that it corresponds to the private “_sessionKey” value, which is held in my HttpContext.Session object (compare with the VS2019 watch window below).

redis session

So this information is interesting…but I’m not sure how useful it is. Since that property is private, you can’t easily access it (you can, but you have to use reflection). But it might be helpful to know this at debug time.

Browsers behave differently when in incognito mode

I thought I’d try the application with my browser in incognito mode. And every time I hit refresh on the browser when I was in incognito or private browsing mode, a new session key was created on the server – which meant it wasn’t able to obtain data from the session previously created in the same browser instance.

You can see the number of keys has hugely increased in the image below, corresponding to the number of times I hit refresh:

private window redis

But at least I can detect when the session isn’t available through the HttpContext.Session.IsAvailable property – when a session is available, the image below is what I can see in the session using a watch in the VS2019 debugger:

session available

And when a session isn’t available (such as when my browser is in incognito mode), this is what I see:

session unavailable

So at least I can programmatically distinguish between when the session can work for the user and when it can’t.
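To make that distinction concrete, here’s a minimal sketch of a controller action guarding its session access with IsAvailable – this assumes the session middleware is registered as shown earlier in the post, and the fallback branch is just illustrative:

```csharp
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    public IActionResult Index()
    {
        if (HttpContext.Session.IsAvailable)
        {
            // The Redis-backed session is working - safe to read and write values
            HttpContext.Session.SetString("mysessiondata", DateTime.Now.ToLongTimeString());
        }
        else
        {
            // No usable session (e.g. some incognito/private-browsing visits) -
            // degrade gracefully instead of assuming session state exists
        }

        return View();
    }
}
```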

In summary, this behaviour had a couple of implications for me:

  • Session persistence didn’t work in incognito/private windows – values weren’t persisted across pages within the same browser session.
  • Hitting refresh a bunch of times in incognito will create lots of orphaned session objects on your server, which might have security/availability implications for your application, especially if your sessions are large and fill up available memory.

Clearing down sessions was harder than I thought

HttpContext.Session.Clear() emptied my session, but didn’t delete the key from the server, as I could still see it in the Redis console.

In fact, the only way I was able to remove sessions held in Redis was to get right into the guts of the StackExchange.Redis package using the code below – this worked because I knew the exact session that I wanted to delete had the key “57154387-d8b7-c361-a174-9d27b6c6caae”:

var connectionMultiplexer = StackExchange.Redis.ConnectionMultiplexer.Connect(Configuration.GetConnectionString("RedisCache"));
 
connectionMultiplexer.GetDatabase().KeyDelete("57154387-d8b7-c361-a174-9d27b6c6caae");

But this is only useful if you can get the exact session key that you want to delete, and that isn’t particularly easy. You could use reflection to get that private value, as in the code below, but I understand why that’s not something you might want to do.

var _sessionKey = typeof(DistributedSession)
                .GetField("_sessionKey", BindingFlags.NonPublic | BindingFlags.Instance)
                .GetValue(HttpContext.Session);
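Putting those two snippets together, here’s a hedged sketch of deleting the current user’s own session from Redis. Two caveats: it relies on the private “_sessionKey” implementation detail of DistributedSession, which could change in a future framework version, and it assumes the cache was registered without an instance-name prefix, so that the Redis key matches the session key exactly (as it did in my screenshots above).

```csharp
using System.Reflection;
using Microsoft.AspNetCore.Session;
using StackExchange.Redis;

// Read the private "_sessionKey" field from the current session via reflection
var sessionKey = (string)typeof(DistributedSession)
    .GetField("_sessionKey", BindingFlags.NonPublic | BindingFlags.Instance)
    .GetValue(HttpContext.Session);

// Then delete that key directly from the Redis cache
var connectionMultiplexer = ConnectionMultiplexer.Connect(
    Configuration.GetConnectionString("RedisCache"));
connectionMultiplexer.GetDatabase().KeyDelete(sessionKey);
```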

Wrapping up

I’ve talked a little bit about sessions in this post – they’re not a magic hammer, but nor are they an inherently bad tool. Consider an alternative to in-memory sessions if you have large session objects, or are working in a load-balanced environment – Azure Cache for Redis might be one of those alternatives. I’ve found it interesting, useful, and relatively easy to set up as an alternative to an in-memory session store. There are a few quirks, though – sessions may not work the way you expect for users who are in incognito/private browsing mode, and it’s hard to completely delete a session once it has been created.

.net core, Azure, C# tip, Clean Code, Cloud architecture, Security

Simplifying Azure Key Vault and .NET Core Web App (includes NuGet package)

In my previous post I wrote about securing my application secrets using Azure Key Vault, and in this post I’m going to write about how to simplify the code that a .NET Core web app needs to use the Key Vault.

I previously went into a bit of detail about how to create a Key Vault and add a secret to that vault, and then add a Managed Service Identity to a web app. At the end of the post, I showed some C# code about how to access a secret inside a controller action.

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        var secret = await keyVaultClient.GetSecretAsync("https://mywebsitesecret.vault.azure.net/secrets/TheSecret").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}

While this works as an example of how to use it in a .NET Core MVC application, it’s not amazingly pretty code – for any serious application, I wouldn’t have all this code in my controller.

I think it would be more logical to access my secrets at the time my web application starts up, and put them into the Configuration for the app. Therefore if I need them later, I can just inject an IConfiguration object into my class, and use that to get the secret values.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;

namespace MyWebApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }
 
        private static IWebHost BuildWebHost(string[] args)
        {
            return WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration(builder =>
                {
                    var azureServiceTokenProvider = new AzureServiceTokenProvider();
                    var keyVaultClient =
                        new KeyVaultClient(
                            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
                    builder.AddAzureKeyVault("https://mywebsitesecret.vault.azure.net/",
                            keyVaultClient, new DefaultKeyVaultSecretManager());
                })
                .UseStartup<Startup>()
                .Build();
        }
    }
}

But as you can see above, the code I need to add here to build my web host is not very clear either – it seemed a shame to lose all the good work done in .NET Core 2 to simplify this class.

So I’ve created a NuGet package to allow me to have a simpler and cleaner interface – it’s uploaded to the NuGet repository here, and I’ve open-sourced the code at GitHub here.

As usual you can install pretty easily from the command-line:

Install-Package Kodiak.Azure.WebHostExtension -prerelease

Now my BuildWebHost method looks much cleaner – I can just add the fluent extension AddAzureKeyVaultSecretsToConfiguration and pass in the URL of the vault.

private static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .AddAzureKeyVaultSecretsToConfiguration("https://myvaultname.vault.azure.net")
        .UseStartup<Startup>()
        .Build();
}

I think this is a more elegant implementation, and now if I need to access the secret inside my controller’s action, I can use the cleaner code below.

using System.Diagnostics;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
 
namespace MyWebApplication.Controllers
{
    public class HomeController : Controller
    {
        private readonly IConfiguration _configuration;
 
        public HomeController(IConfiguration configuration)
        {
            _configuration = configuration;
        }
 
        public IActionResult Index()
        {
            ViewBag.Secret = _configuration["MySecret"];
 
            return View();
        }
    }
}

Summing up

Azure Key Vault (AKV) is a useful tool to help keep production application secrets secure, although like any tool it’s possible to mis-use it, so don’t assume that just because you’re using AKV that your secrets are secured – always remember to examine threat vectors and impacts.

There’s a lot of documentation on the internet about AKV, and a lot of it recommends using the client Id and secrets in your code – I personally don’t like this because it’s always risky to have any kind of secret in your code. The last couple of posts that I’ve written have been about how to use an Azure Managed Service Identity with your application to avoid secrets in your code, and how to simplify the C# code you might use in your application to access AKV secrets.

.net, C# tip, Clean Code, Visual Studio

Creating a RESTful Web API template in .NET Core 1.1 – Part #1: Returning HTTP Codes

I’ve created RESTful APIs with the .NET Framework and Web API before, but nothing commercial with .NET Core yet. .NET Core has been out for a little while now – version 1.1 was released at Connect(); //2016 – and I’ve heard that some customers are now willing to experiment with it to achieve some of the potential performance and stability gains.

To prepare for new customer requests, I’ve been experimenting with creating a simple RESTful API with .NET Core to see how different it is to the alternative version with the regular .NET Framework…and I’ve found that it’s really pretty different.

I’ve already written about some of the challenges in upgrading from .NET Core 1.0 to 1.1 when creating a new project – this post is about how to start with the default template for Web API projects, and transform it into something that is more like a useful project to host RESTful microservices.

This first post in the series is about turning the default project into a good HTTP citizen that returns HTTP status codes.

When I create a new WebAPI project using .NET Core 1.1 from the default Visual Studio template, a number of files are created in the project. The most interesting one is the “ValuesController” – this holds the standard verbs associated with RESTful services: GET, POST, PUT and DELETE. I’ve pasted the default code created below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
 
namespace MyWebAPI.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }
 
        // GET api/values/5
        [HttpGet("{id}")]
        public string Get(int id)
        {
            return "value";
        }
 
        // POST api/values
        [HttpPost]
        public void Post([FromBody]string value)
        {
        }
 
        // PUT api/values/5
        [HttpPut("{id}")]
        public void Put(int id, [FromBody]string value)
        {
        }
 
        // DELETE api/values/5
        [HttpDelete("{id}")]
        public void Delete(int id)
        {
        }
    }
}

However, one of the things I don’t like about this – and it would be very easy to change – is the return type of each verb. A good RESTful service should return HTTP status codes describing the result of the action – typically 2xx codes for success:

  • 200 – Request is Ok;
  • 201 – Resource created successfully;
  • 202 – Update accepted and will be processed (although may be rejected);
  • 204 – Request processed and there is no content to return.

Additionally, responses to RESTful actions will sometimes contain information:

  • 200 – OK – if the action is GET, the response will contain an object (or list of objects) which were requested.
  • 201 – Created – the response will contain the object which was created, and also the unique URI required to get that object.
  • 202 – Accepted – the response will contain the object for which an update was requested.
  • 204 – No content to return – this could be returned as a result of a delete request, where it would make no sense to return an object (as it theoretically no longer exists).

    As a brief aside, some writers disagree that the Delete request should return no content – for a HATEOAS application, I can see why returning an empty response is not helpful.

I think the default ValuesController would be more useful if it implemented a pattern of returning responses with correctly configured HTTP status codes, and I think the first step towards this would be to use the default code below for the ValuesController (which – as a default template – obviously does nothing useful yet).

using Microsoft.AspNetCore.Mvc;
 
namespace MyWebAPI.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new string[] { "value1", "value2" });
        }
 
        // GET api/values/5
        [HttpGet("{id}")]
        public IActionResult Get(int id)
        {
            return Ok("value");
        }
 
        // POST api/values
        [HttpPost]
        public IActionResult Post([FromBody]string value)
        {
            return Created($"api/Values/{value}", value);
        }
 
        // PUT api/values/5
        [HttpPut("{id}")]
        public IActionResult Put(int id, [FromBody]string value)
        {
            return Accepted(value);
        }
 
        // DELETE api/values/5
        [HttpDelete("{id}")]
        public IActionResult Delete(int id)
        {
            return NoContent();
        }
    }
}

The main changes I’ve made so far are:

  • The return type of each action is now IActionResult, which allows for Http status codes to be returned.
  • For the GET actions, I’ve just wrapped the objects returned (which are simple strings) with the Ok result.
  • For the POST action, I’ve used the Created result object. This is different to OK because in addition to including an object, it also includes a URI pointing to the location of the object.
  • For the PUT action, I just wrapped the object returned with the Accepted result. The return type of Accepted is new in .NET Core v1.1 – this won’t compile if you’re targeting previous versions.
  • Finally, for the DELETE action, rather than returning void I’ve returned a NoContent result type.

I really like how .NET Core v1.1 bakes in creating great RESTful services in a clean and simple way, and I prefer it to the approach previously used in .NET. I’m planning a number of other posts which will focus on functional and non-functional aspects of creating a clean RESTful service.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Clean Code, Making, Raspberry Pi 3

A servo library in C# for Raspberry Pi – Part #3: Implementing the interface

Last time, I developed an interface that would allow me to control a servo directly from my Raspberry Pi 3, which is hosting Windows 10 IoT Core. In this post, I’ll describe an implementation of this interface. The code will be a cleaner implementation of the code I got working in Part #1 of the series.

Let’s look at the interface I described last time:

public interface IServoController : IDisposable
{
    int Frequency { get; set; }
 
    double MaximumDutyCycle { get; set; }
 
    double MinimumDutyCycle { get; set; }
 
    int ServoPin { get; set; }
 
    Task Connect();
 
    void Go();
 
    IServoController SetPosition(int degree);
 
    IServoController AllowTimeToMove(int pauseInMs);
}

Implementing the interface

The code implementation is quite straightforward – I needed to specify the control pin for the servo, and to check that the Lightning provider is being used – so I put these items in the constructor.

public ServoController(int servoPin)
{
    if (LightningProvider.IsLightningEnabled)
    {
        LowLevelDevicesController.DefaultProvider = LightningProvider.GetAggregateProvider();
    }
 
    ServoPin = servoPin;
}

When I set the position of the servo, I have to calculate what duty cycle is necessary to move the servo’s wiper to that position. This is a very simple calculation, given that we know the duty cycles necessary to move to the minimum (0 degree) and maximum (180 degree) positions. The difference between the two extreme duty cycle values divided by 180 is the incremental value corresponding to 1 degree of servo movement. Therefore, we just multiply this increment by the number of degrees we want to move from the starting position, add the minimum duty cycle value, and this gives us the duty cycle corresponding to the servo position we want.

public IServoController SetPosition(int degree)
{
    ServoGpioPin?.Stop();
 
    // For example:
    // minimum duty cycle = 0.03 (0.6ms pulse in a period of 20ms) = 0 degrees
    // maximum duty cycle = 0.12 (2.4ms pulse in a period of 20ms) = 180 degrees
    // degree is between 0 and 180
    // => 0.0005 per degree [(0.12 - 0.03) / 180]
 
    var pulseWidthPerDegree = (MaximumDutyCycle - MinimumDutyCycle) / 180;
 
    var dutyCycle = MinimumDutyCycle + pulseWidthPerDegree * degree;
    ServoGpioPin.SetActiveDutyCyclePercentage(dutyCycle);
 
    return this;
}

The full code for the class is below – it’s also available here.

public class ServoController : IServoController
{
    public ServoController(int servoPin)
    {
        if (LightningProvider.IsLightningEnabled)
        {
            LowLevelDevicesController.DefaultProvider = LightningProvider.GetAggregateProvider();
        }
 
        ServoPin = servoPin;
    }
 
    public int Frequency { get; set; } = 50;
 
    public double MaximumDutyCycle { get; set; } = 0.1;
 
    public double MinimumDutyCycle { get; set; } = 0.05;
 
    public int ServoPin { get; set; }
 
    public int SignalDuration { get; set; }
 
    private PwmPin ServoGpioPin { get; set; }
 
    public async Task Connect()
    {
        var pwmControllers = await PwmController.GetControllersAsync(LightningPwmProvider.GetPwmProvider());
 
        if (pwmControllers != null)
        {
            // use the on-device controller
            var pwmController = pwmControllers[1];
 
            // Set the frequency, defaulted to 50Hz
            pwmController.SetDesiredFrequency(Frequency);
 
            ServoGpioPin = pwmController.OpenPin(ServoPin);
        }
    }
 
    public void Dispose()
    {
        ServoGpioPin?.Stop();
    }
 
    public void Go()
    {
        ServoGpioPin.Start();
        Task.Delay(SignalDuration).Wait();
        ServoGpioPin.Stop();
    }
 
    public IServoController SetPosition(int degree)
    {
        ServoGpioPin?.Stop();
 
        // For example:
        // minimum duty cycle = 0.03 (0.6ms pulse in a period of 20ms) = 0 degrees
        // maximum duty cycle = 0.12 (2.4ms pulse in a period of 20ms) = 180 degrees
        // degree is between 0 and 180
        // => 0.0005 per degree [(0.12 - 0.03) / 180]
 
        var pulseWidthPerDegree = (MaximumDutyCycle - MinimumDutyCycle) / 180;
 
        var dutyCycle = MinimumDutyCycle + pulseWidthPerDegree * degree;
        ServoGpioPin.SetActiveDutyCyclePercentage(dutyCycle);
 
        return this;
    }
 
    public IServoController AllowTimeToMove(int pauseInMs)
    {
        this.SignalDuration = pauseInMs;
 
        return this;
    }
}

Using this code

There are three key things to remember:

  1. Enable the Microsoft Lightning Provider’s “Direct Memory Mapped Driver” through the Pi’s web interface – described under the “Runtime Requirements” heading at the URL: https://developer.microsoft.com/en-us/windows/iot/win10/LightningProviders.htm
  2. In your Windows UWP project, change your package.appxmanifest to enable the necessary capabilities. Change the Package root node to include the xmlns:iot namespace, and add “iot” to the Ignorable Namespaces, i.e.
    <Package
        xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
        xmlns:mp="http://schemas.microsoft.com/appx/2014/phone/manifest"
        xmlns:uap="http://schemas.microsoft.com/appx/manifest/uap/windows10"
        xmlns:iot="http://schemas.microsoft.com/appx/manifest/iot/windows10"
             IgnorableNamespaces="uap mp iot">

    Then add the iot:Capability and DeviceCapability to the Capabilities node, i.e.

    <Capabilities>
        <iot:Capability Name="lowLevelDevices" />
        <DeviceCapability Name="109b86ad-f53d-4b76-aa5f-821e2ddf2141" />
    </Capabilities>
  3. In your Windows UWP project:
    • Open the Reference Manager (to open the reference manager, right click on your project’s references and select “Add reference…”);
    • Expand “Universal Windows”;
    • Select “Extensions”;
    • Enable the “Windows IoT Extensions for the UWP”;
    • Click “OK”.

I’ve packaged the code into a NuGet package which is available here. I’ve also included a ReadMe for this library here.

So assuming that you’ve connected your servo’s control line to Pin GPIO 5 (pin 29 on the Pi 3) – then you can call a method like the one below to move to the 90 degree position:

private async void MoveServoToCentre()
{
    using (var servo = new ServoController(5))
    {
        await servo.Connect();
 
        servo.SetPosition(90).AllowTimeToMove(1000).Go();
    }
}

Conclusion

So that’s it for this series – obviously this is still Alpha code and I’ve only tested it on my own 9g Tower Pro servo. But hopefully this code and implementation will provide some inspiration for other makers out there who are trying to get a servo working with a Raspberry Pi 3 and Windows 10 IoT Core.

In the future, I’m planning to use the Adafruit servo driver to control several servos at once – this wouldn’t be possible with just the Raspberry Pi, as it can’t supply enough power to drive numerous devices like servos. I’ll write about this soon.

 

.net, Clean Code, Making, nuget, Raspberry Pi 3, UWP

A servo library in C# for Raspberry Pi – Part #2: Designing the interface, IServoController

Last time I posted an article describing a proof of concept for how to control a servo using a Raspberry Pi 3. This time, I want to improve the code so it’s better than just a rough proof of concept – I’d prefer to write a re-usable library. So the first step of my design is to build an interface through which I can talk to the servo.

There were a few principles that I wanted to adhere to:
1. Make the interface fluent (where it was sensible to do so);
2. Make the interface as small as possible;
3. Add public properties so this could be used for servos which have slightly different profiles.

Finally, I wanted to implement the IDisposable interface – this would be useful to close any connections if that was necessary.

Connecting to the servo

There are a couple of things that need to be done when setting up the servo:
1. Specify the GPIO pin on the Raspberry Pi which is going to output a PWM signal;

int ServoPin { get; set; }

2. Once we have specified the pin, we need to open a connection. I knew from my proof of concept code that this used asynchronous methods, so I needed the return type to be a Task (asynchronous methods in C# should return a Task rather than void).

Task Connect();

Moving the servo wiper

The first and most obvious thing to tell a servo to do is move to a particular rotational position. This position would be most commonly measured in degrees.

However, one issue is that the calling program has no means of knowing when the servo’s wiper has reached its position. In Arduino code, I’ve seen this handled by just putting a delay in after the instruction to move to a particular position.

I liked the idea of a chain of commands, which would tell the servo the position to move to, specify the amount of time allowed to move to this position, and then go.

// This moves to 45 degrees, allowing 500ms to reach that position
servo.SetPosition(45).AllowTimeToMove(500).Go();

So this ideal chain told me that I needed to have the methods:

void Go();
 
IServoController SetPosition(int degree);
 
IServoController AllowTimeToMove(int pauseInMs);

Altering frequency and duty cycle properties

My research told me that servos usually expect duty cycles of 5% to 10% to sweep from 0 to 180 degrees. However, I also found that these are idealised figures – in fact, with my own servos, I found that a better range of duty cycles went from 3% to 12%. So I realised that any servo controller probably needs public properties to set the frequency, and the minimum and maximum duty cycle values.

int Frequency { get; set; }
 
double MaximumDutyCycle { get; set; }
 
double MinimumDutyCycle { get; set; }
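To make the mapping from these properties to a servo position concrete, here’s a sketch of the calculation (the implementation in the previous part of this series does the same thing inside SetPosition) – the 3% and 12% figures below are just the values I measured for my own servos:

```csharp
// Illustrative duty-cycle bounds measured for my own servos
const double minimumDutyCycle = 0.03; // wiper at 0 degrees
const double maximumDutyCycle = 0.12; // wiper at 180 degrees

// Each degree of movement corresponds to a fixed duty-cycle increment
double DutyCycleFor(int degree) =>
    minimumDutyCycle + (maximumDutyCycle - minimumDutyCycle) / 180 * degree;

// DutyCycleFor(0) = 0.03, DutyCycleFor(90) = 0.075, DutyCycleFor(180) = 0.12
```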

The finished interface

So that described what I wanted my servo controller interface to look like – I’ve pasted the code for this below.

public interface IServoController : IDisposable
{
    int Frequency { get; set; }
 
    double MaximumDutyCycle { get; set; }
 
    double MinimumDutyCycle { get; set; }
 
    int ServoPin { get; set; }
 
    Task Connect();
 
    void Go();
 
    IServoController SetPosition(int degree);
 
    IServoController AllowTimeToMove(int pauseInMs);
}

Publishing the interface as a library

The final step I wanted to take was to publish this interface to NuGet. I decided to publish the interface in a separate package to the implementation, so that it would be easy to swap out the implementation if necessary.

Presently this interface is available here, and it can be downloaded from NuGet using the command:

Install-Package Magellanic.ServoController.Interfaces -Pre

It’s presently in an alpha (pre-release) status so the “-Pre” switch is needed for the moment.

Next time, I’ll write about how to implement this interface, and I’ll write a simple UWP app to test this.

.net, C# tip, Clean Code

How to use the FileSystemWatcher in C# to report file changes on disk

A useful feature supplied in .NET is the FileSystemWatcher object. If you need to know when changes are made to a directory (e.g. files being added, changed or deleted), this object allows you to capture an event describing what’s different just after the change is made.

Why is this useful?

There are a number of scenarios – here are a couple:

  • You might want to audit changes made to a directory;
  • After files are copied to a directory, you might want to automatically process them according to a property of that file (e.g. one user might be scanning files and saving those scans to a shared directory on your network, and this process could pick up the files as they’re dropped into the directory by the scanner).

I’ve seen instances where developers allow a user to upload a file through a website, and have lots of file-processing code within their web application. One way to make the application cleaner would have been to separate the file-processing concern away from the website.

How do you use it?

It’s pretty simple to use this class. I’ve written a sample program and pasted it below:

using System.IO;
using static System.Console;
using static System.ConsoleColor;
 
namespace FileSystemWatcherSample
{
    class Program
    {
        static void Main(string[] args)
        {
            // instantiate the object
            var fileSystemWatcher = new FileSystemWatcher();
 
            // Associate event handlers with the events
            fileSystemWatcher.Created += FileSystemWatcher_Created;
            fileSystemWatcher.Changed += FileSystemWatcher_Changed;
            fileSystemWatcher.Deleted += FileSystemWatcher_Deleted;
            fileSystemWatcher.Renamed += FileSystemWatcher_Renamed;
 
            // tell the watcher where to look
            fileSystemWatcher.Path = @"C:\Users\Jeremy\Pictures\Screenshots\";
 
            // You must add this line - this allows events to fire.
            fileSystemWatcher.EnableRaisingEvents = true;
 
            WriteLine("Listening...");
            WriteLine("(Press Enter to exit.)");
            
            ReadLine();
        }
 
        private static void FileSystemWatcher_Renamed(object sender, RenamedEventArgs e)
        {
            ForegroundColor = Yellow;
            WriteLine($"A file has been renamed from {e.OldName} to {e.Name}");
        }
 
        private static void FileSystemWatcher_Deleted(object sender, FileSystemEventArgs e)
        {
            ForegroundColor = Red;
            WriteLine($"A file has been deleted - {e.Name}");
        }
 
        private static void FileSystemWatcher_Changed(object sender, FileSystemEventArgs e)
        {
            ForegroundColor = Green;
            WriteLine($"A file has been changed - {e.Name}");
        }
 
        private static void FileSystemWatcher_Created(object sender, FileSystemEventArgs e)
        {
            ForegroundColor = Blue;
            WriteLine($"A new file has been created - {e.Name}");
        }
    }
}

Are there any problems?

Well, maybe I wouldn’t call them problems, but there are certainly a few things that surprised me when I was using this utility.

As an example, when I took a screenshot and saved it to my Screenshots folder, I expected just one event to be called – the Created event. But the picture below shows all the events that actually were called.

File System Watcher

Let’s look at what happens:

  • First a file is created;
  • Then it’s somehow changed three times;
  • Then it’s renamed;
  • Then another file is created, and changed;
  • And finally, the original file is deleted.

This tells me something interesting about how my screenshot capture program works – but it also tells me to expect that the Created event will be fired twice when I take a single screenshot, so I’d have to write code that allows for that.
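One common way to cope with these duplicate notifications is to debounce them – ignore a repeated event for the same path that arrives within a short window. This isn’t part of FileSystemWatcher itself; it’s a small helper you’d write yourself, and the 500ms window suggested below is an arbitrary choice:

```csharp
using System;
using System.Collections.Concurrent;

// Suppresses repeated notifications for the same file path that
// arrive within a configurable time window.
public class EventDebouncer
{
    private readonly TimeSpan _window;
    private readonly ConcurrentDictionary<string, DateTime> _lastSeen
        = new ConcurrentDictionary<string, DateTime>();

    public EventDebouncer(TimeSpan window)
    {
        _window = window;
    }

    // Returns true if the event should be processed, or false if it's
    // a duplicate for the same path arriving inside the window.
    public bool ShouldProcess(string path, DateTime now)
    {
        if (_lastSeen.TryGetValue(path, out var last) && now - last < _window)
        {
            return false;
        }

        _lastSeen[path] = now;
        return true;
    }
}
```

In the Changed handler you could then start with something like `if (!debouncer.ShouldProcess(e.FullPath, DateTime.UtcNow)) return;` where debouncer is constructed as `new EventDebouncer(TimeSpan.FromMilliseconds(500))`. Passing the timestamp in explicitly (rather than reading the clock inside the helper) also makes the logic easy to unit test.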

Ashutosh Nilkanth has blogged with another few tips and notes on using this class.

Summary

The FileSystemWatcher class is a useful .NET tool for observing changes to a directory structure. Because this watches for changes at an operating system level, events might be called in unexpected ways. Therefore it makes sense to properly understand the operating system events called when changes are made to the directory you’re monitoring, and design your solution to handle the real events (rather than the ones you might logically expect).

 

.net, C# tip, Clean Code, Dependency Injection, Inversion of Control, MVC, Solid Principles

How to use built-in dependency inversion in MVC6 and ASP.NET Core

I’ve previously posted about the new logging features in ASP.NET Core RC1 and MVC6. This time I’m going to write about how Microsoft now has dependency inversion baked into the new Core framework.

Dependency inversion is a well documented and understood principle – it’s what the D stands for in SOLID, and says that your code should only depend on abstractions, not concrete implementations. So plug your services into your application through interfaces.

In previous versions of MVC, I’ve needed to download a 3rd party library to assist with dependency inversion – these libraries are also sometimes called “containers”. Examples of containers I’ve used are NInject.MVC, Autofac, and Spring.NET.

In MVC6, Microsoft has entered this field by including a simple container in the new version of ASP.NET. This isn’t intended to replicate all the features of other containers – but it provides dependency inversion features which may be suitable for many projects. This allows us to avoid adding a heavyweight 3rd party dependency to our solution (at least until there’s a feature we need from it).

Getting started

For our example, first create the default MVC6 web application in Visual Studio 2015.

webapp1.png

Now let’s create a simple stubbed service and interface to get some users. We’ll save this in the “Services” folder of the project.

public interface IUserService
{
    IEnumerable<User> Get();
}

We’ll need a User object too – we’ll put this in the “Models” folder.

public class User
{
    public string Name { get; set; }
}

Let’s create a concrete implementation of this interface, and save this in the “Services” folder too.

public class UserService : IUserService
{
    public IEnumerable<User> Get()
    {
        return new List<User>{ new User { Name = "Jeremy" } };
    }
}

Now modify the HomeController to allow us to display these users on the Index page – we need to change the constructor (to inject the interface as a class dependency), and to change the Index action to actually get the users.

public class HomeController : Controller
{
    private readonly IUserService _userService;
 
    public HomeController(IUserService userService)
    {
        _userService = userService;
    }
 
    public IActionResult Index()
    {
        var users = _userService.Get();
        return View(users);
    }
}

If we just run our project now, we’ll get an exception – the HomeController’s Index action is trying to get users, but the container doesn’t yet know which concrete implementation to supply for IUserService.

error

We need to configure the services that the container knows about. This is where Microsoft’s new dependency inversion container comes in. You just need to add a single line of code in the ConfigureServices method in Startup.cs to make sure the controller is given a concrete instance of UserService when it asks the container “Can you give me something that implements IUserService?”

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddTransient<IUserService, UserService>();
}

If we run the project again now, we won’t get any exceptions – obviously we’d have to change the Index view to display the users.
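For completeness, an Index.cshtml along these lines would display the users – a minimal sketch (depending on your project, the @model directive may need the fully-qualified name of the User class, or a @using directive for its namespace):

```
@model IEnumerable<User>

<ul>
    @foreach (var user in Model)
    {
        <li>@user.Name</li>
    }
</ul>
```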

Transient, Scoped, Singleton, Instance

In the example above, I used the “AddTransient” method to register the service. There are actually four options for registering services:

  • AddTransient
  • AddScoped
  • AddSingleton
  • AddInstance

Which option you choose depends on the lifetime of your service:

  • Transient services are created each time they are called. This would be useful for a light service, or when you need to guarantee that every call to this service comes from a fresh instantiation (like a random number generator).
  • Scoped services are created once per request. Entity Framework contexts are a good example of this kind of service.
  • Singleton services are created once and then every request after that uses the service that was created the first time. A static calculation engine might be a good candidate for this kind of service.
  • Instance services are similar to Singleton services, but they’re created at application startup from the ConfigureServices method (whereas the Singleton service is only created when the first request is made). Instantiating the service at startup would be useful if the service is slow to start up, so this would save the site’s first user from experiencing poor performance.
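To put those side by side, registrations for each lifetime look like this in ConfigureServices – the service names here are purely illustrative, not from the project above:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // a new instance every time the service is resolved
    services.AddTransient<IRandomGenerator, RandomGenerator>();

    // one instance per HTTP request
    services.AddScoped<IDataContext, DataContext>();

    // one instance for the whole application, created on first use
    services.AddSingleton<ICalculationEngine, CalculationEngine>();

    // one instance for the whole application, created right now at startup
    services.AddInstance<IConfigReader>(new ConfigReader());
}
```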

Conclusion

Microsoft have added their own dependency inversion container to the new ASP.NET Core framework in MVC6. This should be good enough for the needs of many ASP.NET projects, and potentially allows us to avoid adding a heavyweight third party IoC container.