Jeremy Lindsay

.net core, Cake, Raspberry Pi 3

Running a .NET Core 2 app on Raspbian Jessie, and deploying to the Pi with Cake

I’ve managed to get .NET Core apps running on Windows 10 IoT Core, and on Ubuntu 16.04 (and also Ubuntu MATE), but until recently I’d never tried with Raspbian. I’ve read a few posts from people saying that they couldn’t get it to work, and a couple of nights ago I decided to bite the bullet and give it a go.

There’s a good post here on how to get the .NET Core runtime on a Raspberry Pi running Raspbian, and as always, there are a few tricks to getting things running from scratch. To augment this, I’ve created a “Hello World” application template for .NET Core Raspberry Pi on NuGet which I think will make things easier for the community.

At a very high level, the steps to getting a .NET Core 2 app on Raspbian are:

  • Install Raspbian onto an SD card and insert into your Raspberry Pi 3.
  • Set up SSH and test that you can log into your Raspberry Pi from your development machine.
  • Install .NET Core 2 onto the Raspberry Pi.
  • On your development machine, install the Raspberry Pi C# template for dotnet core.
  • Create a new console application using this template.
  • Deploy this application to your Pi running Raspbian using Cake.

I’ll run through each of these steps below.

Install Raspbian onto an SD card and insert into your Raspberry Pi 3

There are already great explanations of how to install the Raspbian OS onto a Raspberry Pi 3 – many people who have a Pi know how to do this already and I don’t really want to just repeat a well understood process here – so I’ve just put some useful links below:

Set up SSH and test that you can log into your Raspberry Pi from your development machine

Once you’ve set up Raspbian and booted your Pi to the desktop, you’ll need to allow SSH connections. These aren’t enabled by default but it’s very easy to configure this.

First open the main Raspberry Pi menu on your desktop as shown in the image below, and open the Preferences sub-menu to reveal the “Raspberry Pi Configuration” option.

[Screenshot: the Raspberry Pi desktop menu, with the Preferences sub-menu showing the "Raspberry Pi Configuration" option]

Open the “Raspberry Pi Configuration” screen and click on the “Interfaces” tab. There are lots of useful settings here, but the one we want to enable is SSH – click on the “Enabled” radio button as shown below, and then click OK. SSH is now enabled on your Pi.

[Screenshot: the Raspberry Pi Configuration screen, Interfaces tab, with SSH set to Enabled]

You’ll need to find the IP address of your Raspberry Pi – I think the easiest way is to open a terminal on your Pi and type:

hostname -I

This tells me that my Pi has the IP address 192.168.1.111.

Now we need to check you can log in from your development machine. I personally find it’s easiest to use PuTTY to do this. I’ve blogged about installing PuTTY before so I won’t repeat it all here – but a few tips are:

  • Download the PuTTY installer from here.
  • It makes life easier to add the installation directory to your machine’s path, so you can open PuTTY by just typing “putty” at a command prompt.

So open PuTTY, enter the Pi’s IP address and select the “Connection Type” to be SSH, as shown below:

[Screenshot: PuTTY configuration with the Pi's IP address entered and the SSH connection type selected]

When you click Open, a command prompt should open where you can type the username and password for the Pi 3.

[Screenshot: PuTTY terminal prompting for the Pi's username and password]

The default username and password combo for Raspbian is “pi” and “raspberry”, and you should change the default password as soon as possible.

Install .NET Core 2 onto the Raspberry Pi

There’s a straightforward set of commands that you can run through PuTTY to install .NET Core 2 onto your Pi running Raspbian – I’ve written them below:

# Update the Raspbian Jessie install
sudo apt-get -y update

# Install the packages necessary for .NET Core
sudo apt-get -y install libunwind8 gettext

# Download the nightly binaries for .NET Core 2
wget https://dotnetcli.blob.core.windows.net/dotnet/Runtime/release/2.0.0/dotnet-runtime-latest-linux-arm.tar.gz

# Create a folder to hold the .NET Core 2 installation
sudo mkdir /opt/dotnet

# Unzip the dotnet zip into the dotnet installation folder
sudo tar -xvf dotnet-runtime-latest-linux-arm.tar.gz -C /opt/dotnet

# set up a symbolic link to a directory on the path so we can call dotnet
sudo ln -s /opt/dotnet/dotnet /usr/local/bin

Now you can test this install by running the dotnet --info command to see the version installed on Raspbian.

[Screenshot: output of dotnet --info running on Raspbian]

On your development machine, install the Raspberry Pi C# template

Now that we have .NET Core 2 installed on our Raspbian, we can go back to our development machine to create an application to run on the Pi.

First, install the template for creating Raspberry Pi applications

 dotnet new -i RaspberryPi.Template::*

This will make a new template available to dotnet core – you can list all the installed templates with the command:

dotnet new --list

In the screenshot below, you can see there is now a new template called “Empty .NET Core IoT Project”, highlighted in red.

[Screenshot: output of dotnet new --list, with the "Empty .NET Core IoT Project" template highlighted]

Create a new console application using this template

It’s really easy to create a new console application now – just run the command below (obviously my application is called “HelloRaspbian”, but yours could be something different):

dotnet new coreiot -n HelloRaspbian

When you browse to this new application folder using your preferred development tool (mine is VSCode), you’ll see some files – we need to make a couple of changes.
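
Before making those changes, it's worth a quick look at the main Program.cs file that the template generates. A minimal sketch of what it's likely to contain is below – the exact generated code may differ, but it's essentially a console application that prints a greeting:

using System;

namespace HelloRaspbian
{
    class Program
    {
        static void Main(string[] args)
        {
            // the template just writes a simple greeting to the console
            Console.WriteLine("Hello Internet of Things!");
        }
    }
}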

First, run the command below to pull down the latest Cake build PowerShell file:

Invoke-WebRequest http://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

This command is also in the README.txt file which comes packaged with the application.

Now, open the build.cake file and you’ll see some defaults at the top of the file:

///////////////////////////////////////////////////////////////////////
// ARGUMENTS (WITH DEFAULT PARAMETERS FOR LINUX (Ubuntu 16.04, Raspbian Jessie, etc)
///////////////////////////////////////////////////////////////////////
var runtime = Argument("runtime", "linux-arm");
var destinationIp = Argument("destinationPi", "<>");
var destinationDirectory = Argument("destinationDirectory", @"<>");
var username = Argument("username", "<>");
var executableName = Argument("executableName", "HelloRaspbian");

Replace those placeholders with values that match your own environment – I’ve shown my settings below:

///////////////////////////////////////////////////////////////////////
// ARGUMENTS (WITH DEFAULT PARAMETERS FOR LINUX (Ubuntu 16.04, Raspbian Jessie, etc)
///////////////////////////////////////////////////////////////////////
var runtime = Argument("runtime", "linux-arm");
var destinationIp = Argument("destinationPi", "192.168.1.111");
var destinationDirectory = Argument("destinationDirectory", @"/home/pi/DotNetConsoleApps/RaspbianTest");
var username = Argument("username", "pi");
var executableName = Argument("executableName", "HelloRaspbian");

I’ve created a folder on the Pi to deploy my application to, using the command below at the PuTTY SSH prompt from my home directory (/home/pi/).

mkdir -p DotNetConsoleApps/RaspbianTest

Deploy this application to your Pi running Raspbian using Cake

Once I’ve replaced the placeholders in my Cake file, the only thing left to do is run the build.ps1 file from a PowerShell prompt.

[Screenshot: output of running build.ps1 from a PowerShell prompt]

To test this, go back to the PuTTY SSH prompt, navigate to your home directory, and run:

./DotNetConsoleApps/RaspbianTest/HelloRaspbian

And you’ll get a text output saying “Hello Internet of Things!”

[Screenshot: console output showing "Hello Internet of Things!"]

Wrapping up

I hope this post is useful to anyone trying to get a C# console application running on Raspbian. I think Raspbian is the default OS for Raspberry Pi users, so this should open up many development opportunities. My Raspberry Pi template makes creating the default console application easier, and Cake is a brilliant way to orchestrate the deployment process (rather than dragging and dropping files using tools like WinSCP, and having to change file permissions manually). I’ll be blogging more in the future about deploying IoT applications to this platform.

I’ve written a few posts now about how to deploy C# Raspberry Pi applications to Windows 10 IoT Core, Ubuntu, and Raspbian (all using Cake as the orchestration tool) – next time I’ll write about how to use Cake to automatically build a UWP AppxBundle and deploy that AppxBundle to Windows 10 IoT Core.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Cake, Performance

Calling a custom executable from Cake using StartProcess and ProcessSettings

I’ve previously written about how I’ve used Cake to orchestrate my build and deployment processes, and write the code for these processes in C# rather than PowerShell. This time I’ll write about how I’ve improved the speed of my deployment process by using custom tools which aren’t yet built into Cake.

Some background about my deployment process

A common part of a deployment process is copying files repetitively from a source to a destination, and Cake provides a good way to do this – the CopyFiles static method.

To use this, we just need to specify the source directory, the remote destination directory, and plug these in as parameters. I’ve written some sample code below showing how a “Deploy” task might move an application from a “publish” directory to a remote machine.

Task("Deploy")
    .Does(() =>
    {
	// The files I want to copy are in the publish directory - I use the
	// wildcard character to specify that I want to copy everything
	var source = @".\publish\*";
 
	// The destination is on my local network and accessible on the path below
	var destination = @"\\192.168.1.125\c$\ConsoleApps\DeployedApplication";
 
	// Now just plug in the source, destination
	// The boolean parameter ensures we preserve the folder structure
	CopyFiles(source, destination, true);
    });

This works well, but it also copies across every file, every time – it doesn’t matter whether the file has changed or not – and this is the slowest part of my deployment process. I would prefer to mirror my source and destination files, and only copy across files that have changed. This would speed up deployments across my local network.

Using RoboCopy to mirror directory structures

Microsoft have created a command line file copy utility which allows me to copy or mirror directory structures called RoboCopy (Robust File Copy) – it’ll only copy the files/directories that have changed, and this sounds like exactly what I need.

In copying files from source to destination, I’ve chosen the word “mirror” because RoboCopy needs to be passed a switch “/MIR”, which is short for mirror. From Microsoft’s TechNet documentation:

/MIR is an option to ROBOCOPY where you mirror a directory tree with all the subfolders including the empty directories and you purge files and folders on the destination server that no longer exists in source.

The command I’d need to mirror files has the format:

robocopy /MIR source_directory destination_directory

And to copy from my source directory

".\publish\"

to the destination on the C drive of a machine with IP address 192.168.1.125:

"\ConsoleApps\DeployedApplication"

I just need to plug these values in as arguments to the robocopy executable, as shown below:

robocopy /MIR ".\publish\" "\\192.168.1.125\c$\ConsoleApps\DeployedApplication"

But how can I use RoboCopy with Cake?

Turns out it’s quite easy with a few things in Cake which can help us.

  • We can use the StartProcess method – I can pass in the executable that I want to run (i.e. robocopy.exe), and I can also pass in the arguments for this executable.

I don’t need to pass in the full path to the executable because robocopy.exe is in a folder which is in my machine’s path, i.e. C:\Windows\System32

  • I can also clean up my code a little by keeping this code in its own method in the build.cake file, as shown below.

private void MirrorFiles(string source, string destination)
{
    StartProcess("robocopy", new ProcessSettings {
        Arguments = new ProcessArgumentBuilder()
            .Append(@"/MIR")
            .Append(source)
            .Append(destination)
        }
    );
}
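
One thing worth knowing about robocopy is that it uses exit codes differently from most tools – values from 0 to 7 indicate success (for example, 1 means files were copied), and values of 8 or above indicate a failure. StartProcess returns the process's exit code, so if I wanted the build to fail on a genuine robocopy error, I could extend the method along the lines of the sketch below (my own variation, not part of the original script):

private void MirrorFiles(string source, string destination)
{
    var exitCode = StartProcess("robocopy", new ProcessSettings {
        Arguments = new ProcessArgumentBuilder()
            .Append(@"/MIR")
            .Append(source)
            .Append(destination)
        }
    );

    // robocopy exit codes of 8 or above indicate that something could not be copied
    if (exitCode >= 8)
    {
        throw new Exception("RoboCopy failed with exit code " + exitCode);
    }
}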

Now I can adjust the Task shown previously (i.e. “Deploy”) to use this method instead:

Task("Deploy")
    .Does(() =>
    {
	// The files I want to copy are in the publish directory
	var source = @".\publish\";
 
	// The destination is on my local network and accessible on the path below
	var destination = @"\\192.168.1.125\c$\ConsoleApps\DeployedApplication";
 
	// Now just plug in the source, destination
	MirrorFiles(source, destination);
    });

What practical difference does this make?

In my application’s Cake build script, there’s very little difference – a new method to mirror files, and a slight change to a task to copy (or mirror) files across a network.

But the real advantage comes when we look at the time taken to deploy my application.

Another nice feature of Cake is that it writes out the time taken by each task which helps with performance benchmarking.

I’ve pasted the timings for each stage of my original deployment process below for when I just copy files instead of using robocopy:

Task                  Duration 
--------------------------------------------------
Clean                 00:00:00.2378497 
Restore               00:00:03.9107662 
Build                 00:00:05.2128133 
Publish               00:00:08.0715728 
Deploy                00:00:43.1527382 
Default               00:00:00.0021827 
--------------------------------------------------
Total:                00:01:00.5879229

Notice it took 43 seconds to deploy my application’s files from source to destination – around 70% of the total time. And every time I change my application and re-deploy, the time taken to carry out this operation is approximately the same, copying across files that have changed and also those that haven’t changed.

Let’s change the script to mirror files using robocopy (i.e. only copy across files that have changed since the last build) rather than just copying all files – I’ve pasted the new timings below:

Task                  Duration 
--------------------------------------------------
Clean                 00:00:00.2661543 
Restore               00:00:02.7529030 
Build                 00:00:04.7478403 
Publish               00:00:06.3981560 
Deploy                00:00:00.6685282 
Default               00:00:00.0033186 
--------------------------------------------------
Total:                00:00:14.8369004

It has gone from copying every file in 43 seconds to copying just the 5 files that changed in 0.66 seconds – this is a huge difference for me, making it much quicker and more convenient to make a change to my application, build it, and deploy it to my test rig.

Wrapping up

In this post I wanted to share with the community how flexible Cake is by demonstrating how I’ve used RoboCopy to speed up my deployments.

  • I’ve been able to switch out Cake’s built in copying feature, and instead use a local executable (that isn’t a core part of Cake or an addin) by passing it to the StartProcess method.
  • I’ve been able to write a private method in my C# build.cake script to keep the code clean.
  • Finally I’ve been able to use Cake’s default output to benchmark performance before and after my change.

Being able to extend the core features in Cake with StartProcess is really powerful – it’s not quite as re-useable as building a dedicated add-in, but it can still allow us to quickly customise and optimise build scripts.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Troubleshooting, Xamarin

Troubleshooting the default install of Xamarin with Visual Studio 2017 and Windows 10 Creators Update

Since I’ve recently started using Xamarin for cross platform development, I thought it would be nice to share with the community how I’ve got past some of the issues which tripped me up for a while as I was becoming familiar with it. This isn’t really a normal “getting started with Xamarin” tutorial – there are lots of them out there already, like this one – but hopefully anyone starting off with Xamarin will find it useful.

I use Windows 10 (and have installed the Creators Update) – this allows me to use Xamarin.Forms. You’ll know you’re on the Creators Update if your Windows version is 1703. If you’re on a different version of Windows, you might have a different experience to me (you can check your version by going to Windows Settings -> System -> About).

Before we begin – what’s Xamarin and why should I use it?

With Xamarin tools built into Visual Studio, developers can create native applications in C# for iOS, Android and Windows devices. So instead of writing and managing three different codebases for three different platforms, developers can just write their code once and deploy it to the different app stores.

Installing Xamarin tools for Visual Studio 2017

With Visual Studio 2017, it’s very easy – just open up the setup wizard, select the Xamarin tools (as shown below), and wait for the install to finish.

[Screenshot: the Visual Studio 2017 installer with the Xamarin workload selected]

It’s probably going to take a long time to install VS2017 with Xamarin – adding Xamarin to the base Visual Studio install makes it about 25GB bigger.

Tip: If you leave your machine to download and install Xamarin, it’s worth adjusting your power settings to make sure an unattended machine doesn’t switch off in the middle of the download – like mine did the first time (facepalm).

Creating a project with the default Xamarin template

This bit is straightforward to anyone who’s created a new project in Visual Studio 2017 before.

Select File -> New Project to open the dialog below, and choose a name for the project:

[Screenshot: the New Project dialog with the Cross Platform App project type selected]

After clicking OK on the dialog above (which chooses a Cross Platform App project type), the dialog will close and open a new project. I chose to use Xamarin.Forms (which allows developers to create cross platform user interfaces). I also chose to create a Shared Project because I only expect my code to be used in my application, rather than shared with other developers as a Portable Class Library (you can read more about the differences between Shared Projects and Portable Class Libraries here).

[Screenshot: the Cross Platform App options, with Xamarin.Forms and Shared Project selected]

When you click OK, the project and files will be created, and a window like the one below will appear with instructions for setting up the Mac Agent. (I don’t have a Mac and I’d need Visual Studio Enterprise to use this anyway, so I normally click on the “Don’t show this again” box in the bottom left corner).

[Screenshot: the Mac Agent instructions window]

Finally you’ll be prompted for the versions of Windows that you want the UWP flavour of your project to target. I normally just click OK here.

[Screenshot: the dialog asking which Windows versions the UWP project should target]

At this point, you’ll have a simple Xamarin solution in Visual Studio 2017, which contains 4 projects – one for iOS, one for Android, one for UWP, and one shared project.

Also notice that there is one file open in VS2017 after you create the solution – App.xaml.cs in the shared project. I’ll explain why this is relevant later.
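
For reference, the shared project’s App.xaml.cs looks roughly like the sketch below (the generated code will differ slightly, and the namespace will match whatever you called your project) – note the InitializeComponent call, which features in one of the errors later:

using Xamarin.Forms;

namespace MyXamarinApp
{
    public partial class App : Application
    {
        public App()
        {
            // loads the XAML defined in App.xaml
            InitializeComponent();

            // MainPage.xaml is the page containing the "Welcome to Xamarin Forms!" label
            MainPage = new MyXamarinApp.MainPage();
        }
    }
}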

And now for the gremlins ex machina

After this point, I hit a few snags. Things I wanted to do that didn’t work out of the box for me were:

  • Compile the application without error or warnings
  • Run the application in a Windows Phone Emulator
  • Run the application in an Android Emulator

I’ll run through some of the symptoms of problems I encountered trying the things above, and how I fixed them.

Compiling the solution led to multiple warnings and errors

Tip: Prepare to wait a while when building the solution for the first time – it needs to download a lot of NuGet packages.

Unfortunately my attempt to compile the project out of the box showed an error in the UWP project and a bunch of warning messages for the Android project.

Getting rid of the error CS0103 –  ‘InitializeComponent’ does not exist in the current context

The error reports “The name ‘InitializeComponent’ does not exist in the current context.”

[Screenshot: the CS0103 'InitializeComponent' error in the Error List]

I eventually noticed a couple of things that seemed a bit bizarre:

  • Even though I have an error, the message in the status bar in the bottom left reports “Rebuild All succeeded” – both can’t be right surely?
  • This error relates to the App.xaml.cs file which is open in the editor panel. When I opened MainPage.xaml.cs from the Shared Project in the VS2017 editor, I saw two errors (as shown in the image below).

[Screenshot: two 'InitializeComponent' errors reported while the *.xaml.cs files are open]

So these errors don’t seem to negatively affect the build, and if I really want to get rid of them, I can just close those files which gets rid of the errors (as shown below).

[Screenshot: the Error List clears after closing the open *.xaml.cs files]

Getting rid of warnings about $(TargetFrameworkVersion) mismatches

Three of the warnings I saw were very similar:

The $(TargetFrameworkVersion) for Xamarin.Forms.Platform.dll (v7.1) is 
greater than the $(TargetFrameworkVersion) for your project (v6.0). 
You need to increase the $(TargetFrameworkVersion) for your project.

The $(TargetFrameworkVersion) for Xamarin.Forms.Platform.Android.dll (v7.1) 
is greater than the $(TargetFrameworkVersion) for your project (v6.0). 
You need to increase the $(TargetFrameworkVersion) for your project.

The $(TargetFrameworkVersion) for FormsViewGroup.dll (v7.1) is greater 
than the $(TargetFrameworkVersion) for your project (v6.0). 
You need to increase the $(TargetFrameworkVersion) for your project.

The warning says I need to increase the TargetFrameworkVersion for my Android project, but when I look at the properties for this project, I can’t actually increase it past version 6.0 (Marshmallow).

[Screenshot: the Android project properties, with the target framework limited to version 6.0]

Fortunately we’re not at a dead end here – we can go to the Start Menu, and search for the “SDK Manager” for Android, which is installed with the Xamarin component of Visual Studio 2017 (shown below).

[Screenshot: the Android SDK Manager shortcut in the Start Menu]

Tip: Run the Android SDK Manager as administrator by right-clicking on the shortcut and select “More -> Run as administrator”. If you don’t run as administrator, you might get an error later when the program tries to create a temporary folder for downloads.

When I start the Android SDK Manager, it analyses the packages presently installed, and advises what needs to be updated. On my system, 10 packages needed to be installed or updated, as shown below.

[Screenshot: the Android SDK Manager listing 10 packages to install or update]

When I click on the “Install 10 packages…” button, another window appears asking me to accept the licence. I accepted the licence and clicked on “Install”.

[Screenshot: the licence acceptance dialog]

The installation and update procedure starts – this can take a few minutes.

[Screenshot: the Android SDK Manager installation progress]

Once it’s finished installing, let’s return to Visual Studio 2017  –  I restarted it, and then cleaned and rebuilt the solution. This time the warnings about $(TargetFrameworkVersion) mismatches are gone.

Getting rid of warning IDE0006 – “Error encountered while loading the project”

I sometimes found that I had a warning IDE0006 which advised “Error encountered while loading the project. Some project features, such as full solution analysis for the failed project and projects that depend on it, have been disabled”.

[Screenshot: warning IDE0006 in the Error List]

This usually happened just after I created a project, and I found that if I close VS2017, restart it, and re-open and rebuild the solution the warning disappears.

So to summarise, in order to compile the default project without errors or warnings:

  • Run the Android SDK manager as administrator, and install/update the recommended packages.
  • Restart Visual Studio 2017 and re-open the project.
  • Close all files from the shared project which have the type *.xaml.cs.

Running your application in the Windows Phone Emulator

Tip: If you want to run your application in an emulator, you’ll definitely need a 64-bit machine which allows hardware virtualisation. Your machine will also need to be pretty powerful, or you might find running an emulator to be unbearably slow.

I found this to be straightforward as soon as I installed a Windows Phone emulator.

If you don’t have any Windows Phone emulators installed, you can grab some from here: https://developer.microsoft.com/en-us/windows/downloads/sdk-archive

I changed the start-up project to the UWP project, and changed the debugging target to be one of the Windows Phone Mobile emulators.

[Screenshot: the UWP project set as the start-up project, with a Windows Phone Mobile emulator selected as the debugging target]

After hitting play (or F5) to start running the Windows UWP application in a Windows Phone emulator, I was prompted to set my machine into Developer mode to allow me to load apps – I just had to select the third option (“Developer mode”) as shown in the image below (you can access this screen from Start -> Settings -> For developers).

[Screenshot: the Windows "For developers" settings page with Developer mode selected]

But after changing this setting, everything worked well – no gremlins here. The phone emulator starts up after a couple of minutes, and I was easily able to see the Xamarin application in the list of apps installed to the phone emulator.

[Screenshot: the Xamarin app in the phone emulator's list of installed apps]

And when I run the Xamarin app in the emulator, I see the correct result – a simple form with a message saying “Welcome to Xamarin Forms!”

[Screenshot: the app running in the Windows Phone emulator, showing "Welcome to Xamarin Forms!"]

Running your application in the Android Emulator

Visual Studio 2017 comes packaged with several Android Emulators – you can see them if you change the target project to the Android one, and look at the dropdown list on its right.

[Screenshot: the list of Android emulators in the debug target dropdown]

Tip: I never use either of the two emulators which target ARM. I have never managed to successfully deploy a project to one of these emulators, even with a reasonably powerful machine.

If you don’t believe me, Visual Studio even gives you a warning if you try to use them:

[Screenshot: the warning Visual Studio shows when targeting an ARM emulator]

It’s much, much faster to target one of the emulators which targets x86.

Use the Android x86 emulator – but you need to turn Hyper-V off

You don’t need to uninstall Hyper-V to run the Android x86 emulator on Windows 10 – you just need to turn it off. The command to do this is very simple from a command prompt running as administrator:

bcdedit /set hypervisorlaunchtype off

Reboot for this setting change to take effect.

Of course it might not suit you to turn Hyper-V off on your machine – and another alternative is to deploy to an actual Android device – there’s some great instructions for this here: https://developer.xamarin.com/guides/android/getting_started/installation/set_up_device_for_development/

My experience was that I couldn’t successfully start and deploy my project to an Android emulator from Visual Studio 2017. However, I was able to start the Android emulator from the Android AVD Manager, available from the start menu (as shown below).

[Screenshot: the Android AVD Manager shortcut in the Start Menu]

When you start this program, you’ll see a dialog like the one below which lists the Android Virtual Devices available on your development machine.

[Screenshot: the Android Virtual Device Manager listing the available virtual devices]

Select one of the x86 emulators, and click on the “Start…” button. Accept the default options on the launch screen, and an Android phone emulator will start.

[Screenshot: the Android x86 emulator running]

Now go back to Visual Studio 2017. Select the emulator which you’ve just started in the dropdown list to the right of the green “Play” arrow. Now right-click on the Android project, and select “Deploy Solution”.

This should now deploy the Xamarin application to the Android emulator, as shown below (our app is in the top row, second column):

[Screenshot: the Xamarin app deployed to the Android emulator, in the top row, second column]

And when we click on the Xamarin application icon in the emulator, as expected we see the same screen as in the Windows Phone emulator which says “Welcome to Xamarin Forms!”

[Screenshot: the app running in the Android emulator, showing "Welcome to Xamarin Forms!"]

Wrapping up

This was just a quick post to help other Xamarin developers who are starting out avoid some of the headaches I had. And just to be really clear, I’m not criticising Xamarin or Visual Studio – getting code to work on 3 different platforms which are constantly changing and updating is pretty miraculous, and ultimately the things I had to do weren’t that big a deal to change.

There are already a few troubleshooting guides from Microsoft on Xamarin like this one. The tips below are things I didn’t find covered anywhere else.

  • Sometimes errors (for example, CS0103) are mis-reported by VS2017 for files which are open in the editor, particularly *.xaml.cs files from the Shared Project – try closing these files and rebuilding to see if the errors go away.
  • Other warnings appear after the project is first created (for example, IDE0006), but if you restart VS2017 and re-build the project, the warning disappears.
  • Opening the Android SDK Manager as administrator and updating the libraries you have on your development machine can help to remove warnings related to TargetFrameworkVersion mismatches – remember to restart VS2017 after the update, and then clean and rebuild your solution through VS2017.
  • Don’t use the Android ARM emulators on Windows 10 – instead start an x86 emulator from the Android AVD manager, and deploy from VS2017 to the emulator which is running.
  • If the x86 emulator won’t start, you might need to disable Hyper-V using the command “bcdedit /set hypervisorlaunchtype off”.

About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, .net core, Raspberry Pi 3

Automating .NET Core deployments to different platforms with Cake

Recently I’ve been using Cake to automate code deployments to Windows and Ubuntu devices.

Since .NET Core 2 became available, I’ve been able to write C# applications to work with devices which can host different operating systems – specifically the Raspberry Pi, where I’ve been targeting both Windows 10 IoT Core and Ubuntu 16.04 ARM.

I can deploy my code to hardware and test it there because I own a couple of Pi devices – each running one of the operating systems mentioned above. Each operating system requires code to be deployed in different ways:

  • The Windows 10 IoT Core device just appears on my home workgroup as another network location, and it’s easy to deploy the applications by copying files from my Windows development machine to a network share location.
  • For Ubuntu 16.04 ARM, it’s a bit more complicated – I need to use the PuTTY secure client program (PSCP) to get files from my Windows development machine to the device running Ubuntu, and then use Plink to make application files executable.

But I’ve not really been happy with how I’ve automated code deployment to these devices. I’ve used PowerShell scripts to manage the deployment – the scripts work fine, but I find there’s a bit of friction when jumping from programming in C# to PowerShell, and some of the dependencies between scripts are not really intuitive.

It’s also a bit difficult to explain to people how to use the scripts, and that’s always a sure sign of a code smell.

Recently I’ve found a better way to manage my build and deployment tasks. At my local .NET user group, we had a demonstration of Cake which is a tool that allows me to orchestrate my build and deployment process in C#. It looked like it could help remove some of my deployment issues – and I’ve written about my experiences with it below.

Getting started

There’s lots more detail on how to get started in the CakeBuild.net website here, but I’ll run through the process that I followed.

Create a project

I’ve previously created a simple project template for a Raspberry Pi which is in a custom nuget package (I’ve written more about that here). You can install the template from nuget by running the command below.

dotnet new -i RaspberryPi.Template::*

Once the template is installed, you can create a simple hello world application targeting the .NET Core 2 framework with the command below.

dotnet new coreiot -n SamplePi

You don’t have to create a project for a Raspberry Pi to follow along with this post – you could just use the regular dotnet command for a new console project:

dotnet new console -n HelloWorld

Create the bootstrapper script

After the project was created, I opened the project folder in VSCode, opened the Powershell terminal, and ran the code below.

Invoke-WebRequest http://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

This creates a new file in the root of my project called “build.ps1“. It won’t do anything useful yet until we’ve defined our build and deployment process (which we’ll do in the next few sections) – but this bootstrapping script takes care of lots of clever things for us later. It verifies our build script compiles, and it’ll automatically pull down any library and plugin dependencies we need.

Create a Cake build script

The build script – called build.cake – will contain all of the logic and steps needed to build and deploy my code. There’s already an example repository on GitHub which has a few common tasks already. Let’s use the script in that sample repository as our starting point and download it to our project using the PowerShell script below.

Invoke-WebRequest https://raw.githubusercontent.com/cake-build/example/master/build.cake -OutFile build.cake

At this point, if you’re using VSCode, your project should look something like the image below.

[Screenshot: the project open in VSCode after adding build.ps1]

Once you’ve loaded the example project’s build script – you can see it here – there’s a few things worth noting:

  • The build script uses C# (or more specifically, a C# domain specific language). This means there’s less friction between creating functional code in C# and orchestrating a build and deployment process in Cake. If you can write code in C#, you’ve got all the skills necessary to build and deploy your code using Cake.
  • Each section of the build and deployment process is nicely separated into logical modules, and it’s really clear for each step which tasks need to complete before that step can start. And because the code is written in a fluent style, clarity is already baked into the code.
  • The script shows how we can process arguments passed to the build script:
var target = Argument("target", "Default");

The Argument method defines a couple of things:

  1. “target” is the name of the parameter passed to the build.ps1 script
  2. “Default” is the value assigned to the C# target variable if nothing is specified.

So we could get our build script to use something different to the default value using:

.\build.ps1 -target Clean  # this would run the "Clean" task in our script, and all the tasks it depends on
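
Passing a different target works because the script's final line hands control to whichever task was requested. The example script ends with something along these lines (a sketch – the exact task names in the example repository may differ):

// the "Default" task just chains to the last task in the normal build flow
Task("Default")
    .IsDependentOn("Run-Unit-Tests");

// run whichever task name was passed in as the "target" argument
RunTarget(target);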

Customizing the build.cake script

Of course this build.cake file is just a sample to help me get started – I need to make a few changes for my own .NET Core 2 projects.

The build and deployment steps that I need to follow are listed below:

  • Clean the existing binary, object and publish directories
  • Restore missing nuget packages
  • Build the .NET Core code
  • Publish the application (targeting Windows or Ubuntu operating systems)
  • Deploy the application (targeting Windows or Ubuntu operating systems)

I’m going to write a different Cake task for each of the steps above.

Modify the build.cake script to clean the build directories

This is a very simple change to the existing build.cake file – I can specify the binary, object and publish directories using C# syntax, and then make a minor change to the task called “Clean” (which already exists in the build.cake file we created earlier).

var binaryDir = Directory("./bin");
var objectDir = Directory("./obj");
var publishDir = Directory("./publish");

// ...
Task("Clean")
    .Does(() =>
    {
        CleanDirectory(binaryDir);
        CleanDirectory(objectDir);
        CleanDirectory(publishDir);
    });

Modify the build.cake script to restore missing nuget packages

Again, there’s a task already in the build.cake file which could do this job for us called “Restore-nuget-packages”. This would work, but I’d like to clearly signal in code that I’m using the normal commands for .NET Core projects – “dotnet restore”.

DotNetCoreRestore is a method built into Cake (see the source code here).

I created the C# variable to hold the name of my project (csproj) file, and can call the task shown below.

var projectFile = "./SamplePi.csproj";

// ...
Task("Restore")
    .IsDependentOn("Clean")
    .Does(() =>
    {
        DotNetCoreRestore(projectFile);
    });

Notice how I’ve specified a dependency in the code, which requires that the “Clean” task runs before the “Restore” task can start.

Modify the build.cake script to build the project

The methods that Cake uses to restore and build projects are quite similar – I need to specify C# variables for the project file, and this time also what version of the .NET Core framework that I want to target. Of course this task depends on the “Restore” task we just created – but notice that we don’t need to specify the dependency on “Clean”, because that’s automatically inferred from the “Restore” dependency.

We also need to specify the framework version and build configuration – I’ve specified them as parameters with defaults of “netcoreapp2.0” and “Release” respectively.

var configuration = Argument("configuration", "Release");
var framework = Argument("framework", "netcoreapp2.0");

// ...
Task("Build")
    .IsDependentOn("Restore")
    .Does(() =>
    {
        var settings = new DotNetCoreBuildSettings
        {
            Framework = framework,
            Configuration = configuration,
            OutputDirectory = "./bin/"
        };
 
        DotNetCoreBuild(projectFile, settings);
    });

Modify the build.cake script to publish the project

This is a little bit more complex because there are different outputs depending on whether we want to target Windows (the win10-arm runtime) or Ubuntu (the ubuntu.16.04-arm runtime). But it’s still easy enough to do this – we just create an argument to allow the user to pass their desired runtime to the build script. I’ve decided to make the win10-arm runtime the default.

var runtime = Argument("runtime", "win10-arm");

// ...
Task("Publish")
    .IsDependentOn("Build")
    .Does(() =>
    {
        var settings = new DotNetCorePublishSettings
        {
            Framework = framework,
            Configuration = configuration,
            OutputDirectory = "./publish/",
            Runtime = runtime
        };
 
        DotNetCorePublish(projectFile, settings);
    });

Modify the build.cake script to deploy the project to Windows

I need to deploy to Windows and Ubuntu – I’ll consider these separately, looking at the easier one first.

As I mentioned earlier, it’s easy for me to deploy the published application to a device running Windows – since the device is on my network and I know the IP address, I can just specify the IP address of the device, and the directory that I want to deploy to. These can both be parameters that I pass to the build script, or set as defaults in the build.cake file.

var destinationIp = Argument("destinationPi", "192.168.1.125");
var destinationDirectory = Argument("destinationDirectory", @"c$\ConsoleApps\Test");
 
// ...
 
Task("Deploy")
    .IsDependentOn("Publish")
    .Does(() =>
    {
        var files = GetFiles("./publish/*");
 
        var destination = @"\\" + destinationIp + @"\" + destinationDirectory;
        CopyFiles(files, destination, true);
 
    });

Modify the build.cake script to deploy the project to Ubuntu

This is a bit more complex – remember that deploying from a Windows machine to an Ubuntu machine needs some kind of secure copy program. We also need to be able to modify the properties of some files on the remote device to make them executable. Fortunately a Cake add-in already exists which helps with both of these operations!

There are hundreds of add-ins for Cake – you can find more of them here and search the API here.

First, let’s structure the code to differentiate between deployment to a Windows device and deployment to an Ubuntu device. It’s easy enough to work out if we’re targeting the Windows or Ubuntu runtimes by looking at the start of the runtime passed as a parameter. I’ve written the skeleton of this task below.

Task("Deploy")
    .IsDependentOn("Publish")
    .Does(() =>
    {
        var files = GetFiles("./publish/*");
 
        if (runtime.StartsWith("win"))
        {
            var destination = @"\\" + destinationIp + @"\" + destinationDirectory;
            CopyFiles(files, destination, true);
        }
        else
        {
            // TODO: logic to deploy to Ubuntu goes here
        }
    });

I found an add-in for securely copying files called Cake.Putty – you can read more about the Cake.Putty library on Github here.

All we need to do to get Cake to pull the necessary libraries and tools is add one line to our build.cake script:

#addin "Cake.Putty"

That’s it – we don’t need to explicitly start any other downloads, or move files around – it’s very similar to how we’d include a “using” statement at the top of a C# class to make another library available in the scope of that class.

So next we want to understand how to use this add-in – I’ve found there’s good documentation on how to use the methods available in the plugin’s GitHub repository here.

From the documentation on how to use the PSCP command in the add-in, I need to pass two parameters:

  • a string array of file paths as the first parameter, and
  • the remote destination folder as the second parameter.

The second parameter is easy, but the first one is a bit tricky – there’s a function built into Cake called GetFiles(string path) but this returns an IEnumerable collection, which obviously is different to a string array – so I can’t use that.

But this is a great example of an area where I’m really able to take advantage of being able to write C# in the build script. I can easily convert the IEnumerable collection to a string array using LINQ, and pass this as the correctly typed parameter.

var destination = destinationIp + ":" + destinationDirectory;
var fileArray = files.Select(m => m.ToString()).ToArray();
Pscp(fileArray, destination, new PscpSettings
    {
        SshVersion = SshVersion.V2,
        User = username
    });

So now the deployment code has a very clear intent and is easily readable to a C# developer – a great advantage of using Cake.

Finally, I can use Plink (also available in the Cake.Putty add-in) to make the application executable on the remote machine – again we need to specify the file to make executable, and the location of this file, which is straightforward.

var plinkCommand = "chmod u+x,o+x " + destinationDirectory + "/SamplePi";
Plink(username + "@" + destination, plinkCommand);

So now our deployment task is written in C#, and can deploy to Windows or Ubuntu devices, as shown below.

Task("Deploy")
    .IsDependentOn("Publish")
    .Does(() =>
    {
        var files = GetFiles("./publish/*");
 
        if (runtime.StartsWith("win"))
        {
            var destination = @"\\" + destinationIp + @"\" + destinationDirectory;
            CopyFiles(files, destination, true);
        }
        else
        {
            var destination = destinationIp + ":" + destinationDirectory;
            var fileArray = files.Select(m => m.ToString()).ToArray();
            Pscp(fileArray, destination, new PscpSettings
                {
                    SshVersion = SshVersion.V2,
                    User = username
                }
            );
 
            var plinkCommand = "chmod u+x,o+x " + destinationDirectory + "/SamplePi";
            Plink(username + "@" + destination, plinkCommand);
        }
    });

I’ve noticed that this add-in doesn’t handle file paths which have spaces in them – but it works if the full file path has no spaces.

One last thing – I’ve included the parameters for a Windows deploy all the way through this post – however, if I wanted to change these, I could override the defaults by passing them to the ScriptArgs switch using a command like the one below:

.\build.ps1 
       -ScriptArgs '--runtime=ubuntu.16.04-arm', 
                   '--os=ubuntu', 
                   '--destinationPi=192.168.1.110', 
                   '--destinationDirectory=/home/ubuntu/ConsoleApps/Test', 
                   '--username=ubuntu', 
                   '--executableName=SamplePi' 
      -target Publish

I can pass values to the “-target” and “-configuration” parameters directly because they’re explicitly mentioned in the build.ps1 script – the rest have to be passed as a comma separated list of name-value pairs to the “-ScriptArgs” parameter. There’s a bit more on passing parameters to the build script on StackOverflow here.

I’ve pushed my new deployment scripts to GitHub here and the rest of this sample project to here.

Wrapping up

Cake allows me to write my build and deployment scripts in C# – this makes it much easier for developers who are familiar with C# to write automated deployment scripts. It also makes the dependencies between tasks really clear.

I’m much happier using this deployment mechanism rather than the one I had previously.  Cake especially helped me to deploy from a Windows development environment to a device running an Ubuntu operating system – and the principles I’ve learned and written about here don’t just apply to Raspberry Pi devices, I could use them if I wanted to develop a website in .NET Core on my Windows machine, and deploy to a web server running Linux.

Footnote: I’ve written about an improvement to the deployment process to Windows 10 IoT Core here – robocopy makes things much faster


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Non-functional Requirements, Performance

Measuring your code’s performance during development with BenchmarkDotNet – Part #2: Methods with parameters

Last time, I wrote about how to use BenchmarkDotNet (GitHub here, NuGet here) to measure code performance for a very simple method with no parameters. This time I’ll write about testing another scenario that I find is more common – methods with parameters.

Let’s start with a simple case – primitive parameters.

Methods with Primitive Parameters

Let’s write a method which takes an integer parameter and calculates the square.

I know that I could use the static System.Math.Pow(double x, double y) method for this instead of rolling my own – but more on this later.

I’ve written a little static method like this.

public class MathFunctions
{
    public static long Square(int number)
    {
        return number * number;
    }
}

Nothing wrong with that – but it’s not that easy to test with BenchmarkDotNet by just decorating it with a simple [Benchmark] attribute, because I need to specify the number parameter.

There are a couple of ways to test this.

Refactor and use the Params attribute

Instead of passing the number as a parameter to the Square method, I can refactor the code so that Number is a property of the class, and the Square method uses this property.

public class MathFunctions
{
    public int Number { get; set; }
 
    public long Square()
    {
        return this.Number * this.Number;
    }
}

Now I can decorate the Square method with the [Benchmark] attribute, and I can use the ParamsAttribute in BenchmarkDotNet to decorate the property with the numbers that I want to test.

public class MathFunctions
{
    [Params(1, 2)]
    public int Number { get; set; }
        
    [Benchmark]
    public int Square()
    {
        return this.Number * this.Number;
    }
}

And then it’s very simple to execute a performance runner class like the code below:

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<MathFunctions>();
        }
    }
}

Which yields the results:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method | Number | Mean      | Error     | StdDev    | Median    |
------- |------- |----------:|----------:|----------:|----------:|
 Square | 1      | 0.0429 ns | 0.0370 ns | 0.0658 ns | 0.0001 ns |
 Square | 2      | 0.0035 ns | 0.0086 ns | 0.0072 ns | 0.0000 ns |

This mechanism has the advantage that you can specify a range of parameters and observe the behaviour for each of the values.
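
For example, the Params attribute accepts any number of values (the ones below are arbitrary choices of mine), and BenchmarkDotNet will run the benchmark once for each value and report a separate row in the summary table:

[Params(1, 10, 1000, 1000000)]
public int Number { get; set; }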

But I think it has a few disadvantages:

  • I’m a bit limited in the type of parameter that I can specify in an attribute. Primitives like integers and strings are easy, but instantiating a more complex data transfer object is harder.
  • I have to refactor my code to measure performance – you could argue that the refactored version is better code, but to me the code below is simple and has a clear intent:
var output = MathFunctions.Square(10);

Whereas I think the code below is more obtuse.

var math = new MathFunctions { Number = 10 };
var output = math.Square();
  • My source code has a tight dependency on the BenchmarkDotNet library, and the attributes add a little litter to the class.

Basically I’m not sure I’ve made my code better by refactoring it to measure performance. Let’s look at other techniques.

Separate performance measurement code into a specific test class

I can avoid some of the disadvantages of the technique above by creating a dedicated class to measure the performance of my method, as shown below.

public class MathFunctions
{
    public static long Square(int number)
    {
        return number * number;
    }
}
 
public class PerformanceTestMathFunctions
{
    [Params(1, 2)]
    public int Number { get; set; }
 
    [Benchmark]
    public long Measure_Speed_of_Square_Function()
    {
        return MathFunctions.Square(Number);
    }
}

So now I can run the code below to measure the performance of my method.

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<PerformanceTestMathFunctions>();
        }
    }
}

This time I’ve not had to refactor my original code, and I’ve moved the dependency from my source code under test to the dedicated test class. But I’m still a bit limited in what types of parameter I can supply to my test class.

Using GlobalSetup for methods with non-primitive data transfer object parameters

Let’s try benchmarking an example which is a bit more involved – how to measure the performance of some more math functions I’ve written which use Complex Numbers.

Complex numbers are nothing to do with BenchmarkDotNet – I’m just using this as an example of a non-trivial problem space and how to run benchmark tests against it.

At school you might have done some work with Complex Numbers. These numbers have a real and imaginary component – which sounds weird if you’re not used to it, but they can be represented as:

1 + 2i

Where 1 is the real component, and 2 is the size of the ‘imaginary’ component.

If you want to calculate the magnitude of a complex number, you just use Pythagorean maths – namely:

  • Calculate the square of the real component, and the square of the imaginary component.
  • Add these two squares together.
  • The magnitude is the square root of the sum of the two squares.
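
For example, the magnitude of 1 + 2i is the square root of (1² + 2²) = √5, which is approximately 2.24.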

So I can represent a Complex Number in code in the object class shown below:

public class ComplexNumber
{
    public int Real { get; set; }

    public int Imaginary { get; set; }
}

And I can instantiate a complex number 1 + 2i with the code:

new ComplexNumber { Real = 1, Imaginary = 2 };

If I want to calculate the magnitude of this Complex Number, I can pass the ComplexNumber data transfer object as a parameter to a method shown below.

public class ComplexMathFunctions
{
    public static double Magnitude(ComplexNumber complexNumber)
    {
        return Math.Pow(Math.Pow(complexNumber.Real, 2)
                        + Math.Pow(complexNumber.Imaginary, 2), 0.5);
    }
}

But how do I benchmark this?

I can’t instantiate a ComplexNumber parameter in the Params attribute supplied by BenchmarkDotNet.

Fortunately there’s a GlobalSetup attribute – this is very similar to the Setup attribute used by some unit test frameworks, where we can arrange our parameters before they are used by a test.

The code below shows how to create a dedicated test class, and instantiate a Complex Number in the GlobalSetup method which is used in the method being benchmarked.

public class PerformanceTestComplexMathFunctions
{
    private ComplexNumber ComplexNumber;
 
    [GlobalSetup]
    public void GlobalSetup()
    {
        this.ComplexNumber = new ComplexNumber { Real = 1, Imaginary = 2 };
    }
 
    [Benchmark]
    public double Measure_Magnitude_of_ComplexNumber_Function()
    {
        return ComplexMathFunctions.Magnitude(ComplexNumber);
    }
}

This yields the results below:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method                                      | Mean     | Error    | StdDev    |
-------------------------------------------- |---------:|---------:|----------:|
 Measure_Magnitude_of_ComplexNumber_Function | 110.5 ns | 1.058 ns | 0.9897 ns |

I think this eliminates pretty much all the disadvantages I listed earlier, but does add a restriction that I’m only testing one instantiated value of the data transfer object parameter.

You might wonder why we need the GlobalSetup at all when we could just instantiate a local variable in the method under test – I don’t think we should do that because we’d also be including the time taken to set up the experiment in the method being benchmarked – which reduces the accuracy of the measurement.

Addendum

I was kind of taken aback by how slow my Magnitude function was, so I started playing with some different options – instead of using the built-in System.Math.Pow static method, I decided to calculate a square by just multiplying the base by itself. I also decided to use the System.Math.Sqrt function to calculate the square root, rather than raising the base to the power of 0.5. My refactored code is shown below.

public class ComplexMathFunctions
{
    public static double Magnitude(ComplexNumber complexNumber)
    {
        return Math.Sqrt(complexNumber.Real * complexNumber.Real 
                    + complexNumber.Imaginary * complexNumber.Imaginary);
    }
}

Re-running the test yielded the benchmark results below:

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728178 Hz, Resolution=366.5450 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


 Method                                      | Mean     | Error     | StdDev    |
-------------------------------------------- |---------:|----------:|----------:|
 Measure_Magnitude_of_ComplexNumber_Function | 4.192 ns | 0.0371 ns | 0.0347 ns |

So with a minor code tweak, the time taken to calculate the magnitude dropped from 110.5 nanoseconds to 4.192 nanoseconds. That’s a pretty big performance improvement. If I hadn’t been measuring this, I’d probably never have known that I could have improved my original implementation so much.

Of course, this performance improvement might only work for small integers – it could be that large integers have a different performance profile. But it’s easy to understand how we could set up some other tests to check this.
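
For instance, a test class like the sketch below (my own illustration, reusing the GlobalSetup approach from earlier) would measure the same function with much larger components – I've kept the values just below the point where squaring them would overflow an int:

public class PerformanceTestLargeComplexMathFunctions
{
    private ComplexNumber ComplexNumber;

    [GlobalSetup]
    public void GlobalSetup()
    {
        // 30,000 squared is 900,000,000, so the sum of the two squares still fits in an int
        this.ComplexNumber = new ComplexNumber { Real = 30000, Imaginary = 30000 };
    }

    [Benchmark]
    public double Measure_Magnitude_of_Large_ComplexNumber_Function()
    {
        return ComplexMathFunctions.Magnitude(ComplexNumber);
    }
}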

Wrapping up

This time I’ve written about how to use BenchmarkDotNet to measure the performance of methods which have parameters, even ones that are data transfer objects. The Params attribute can be useful sometimes for methods which have simple primitive parameters, and the GlobalSetup attribute can specify a method which sets up more complicated scenarios. I’ve also shown how we can create classes dedicated to testing individual methods, and keep benchmarking test references isolated in their own classes and projects.

This makes it really simple to benchmark your existing codebase, even code which wasn’t originally designed with performance testing in mind. I think it’s worth doing – even while writing this post, I unexpectedly discovered a simple way to change my example code that made a big performance improvement.

I hope you find this post useful in starting to measure the performance of your codebase. If you want to dig into understanding BenchmarkDotNet more, I highly recommend this post from Andrey Akinshin – it goes into lots more detail.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, Non-functional Requirements, Performance

Measuring your code’s performance during development with BenchmarkDotNet – Part #1: Getting started

I was inspired to write a couple of posts after recently reading articles by Stephen Toub and Andrey Akinshin about BenchmarkDotNet from the .NET Blog, and I wanted to write about how I could use BenchmarkDotNet to understand my own existing codebase a little bit better.

A common programming challenge is keeping track of your code’s performance as it changes – a small code change might have a large impact on overall application performance.

I’ve managed this in the past with page-level performance tests (usually written in JMeter) running on my integration server – and it works well.

However, these page-level performance tests only give me coarse grained results – if the outputs of the JMeter tests start showing a slowdown, I’ll have to do more digging in the code to find the problem. At this point, tools like ANTS or dotTrace are really good for finding the bottlenecks – but even with these, I’m reacting to a problem rather than managing it early.

I’d like to have more immediate feedback – I’d like to be able to perform micro-benchmarks against my code before and after I make small changes, and know right away if I’ve made things better or worse. Fortunately BenchmarkDotNet helps with this.

This isn’t premature optimisation – this is about how I can have a deeper understanding of the quality of code I’ve written. Also, if you don’t know if your code is slow or not, how can you argue that any optimisation is premature?

A simple example

Let’s take a simple example – say that I have a .NET Core website which has a single page that just generates random numbers.

Obviously this application wouldn’t be a lot of use – I’m deliberately choosing something conceptually simple so I can focus on the benchmarking aspects.

I’ve created a simple HomeController, which has an action called Index that returns a random number. This random number is generated from a service called RandomNumberGenerator.

Let’s look at the source for this. I’ve put the code for the controller below – this uses .NET Core’s built in dependency injection feature.

using Microsoft.AspNetCore.Mvc;
using Services;
 
namespace SampleFrameworkWebApp.Controllers
{
    public class HomeController : Controller
    {
        private readonly IRandomNumberGenerator _randomNumberGenerator;
        
        public HomeController(IRandomNumberGenerator randomNumberGenerator)
        {
            _randomNumberGenerator = randomNumberGenerator;
        }
 
        public IActionResult Index()
        {
            ViewData["randomNumber"= _randomNumberGenerator.GetRandomNumber();
 
            return View();
        }
    }
}

The code below shows the RandomNumberGenerator – it uses the Random class from the System namespace.

using System;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        public int GetRandomNumber()
        {
            return random.Next();
        }
    }
}
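The IRandomNumberGenerator interface itself isn’t shown in this post – a minimal version that’s consistent with the code above would look something like the sketch below (the real interface in the project may differ slightly):

namespace Services
{
    // Minimal interface matching how the service is used in HomeController
    // and implemented in RandomNumberGenerator above.
    public interface IRandomNumberGenerator
    {
        int GetRandomNumber();
    }
}

In the web project, the service would then be registered with the built-in dependency injection container in Startup.ConfigureServices – for example with something like services.AddTransient&lt;IRandomNumberGenerator, RandomNumberGenerator&gt;(); (the lifetime you choose might differ).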

A challenge to make it “better”

But after a review, let’s say a colleague tells me that the System.Random class isn’t truly random – it’s only pseudo-random, and certainly not random enough for any kind of cryptographic purpose. If I want a genuinely random number, I need to use the RNGCryptoServiceProvider class.

So I’m keen to make my code “better” – or at least make the output more cryptographically secure – but I’m nervous that this new class is going to make my RandomNumberGenerator class slower for my users. How can I measure the before and after performance without recording a JMeter test?

Using BenchmarkDotNet

With BenchmarkDotNet, I can just decorate the method being examined using the [Benchmark] attribute, and use this to measure the performance of my code as it is at the moment.

To make this attribute available in my Service project, I need to include the BenchmarkDotNet NuGet package – you can install it with the command below at the Package Manager Console:

Install-Package BenchmarkDotNet

The code for the RandomNumberGenerator class now looks like the code below – as you can see, it’s not changed much at all – just an extra library reference at the top, and a single attribute decorating the method I want to test.

using System;
using BenchmarkDotNet.Attributes;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        [Benchmark]
        public int GetRandomNumber()
        {
            return random.Next();
        }
    }
}

I like to keep my performance benchmarking code in a separate project (in the same way that I keep my unit tests in a separate project). That project is a simple console application, with a main class that looks like the code below (obviously I need to install the BenchmarkDotNet nuget package in this project as well):

using BenchmarkDotNet.Running;
using Services;
 
namespace PerformanceRunner
{
    class Program
    {
        static void Main(string[] args)
        {
            var summary = BenchmarkRunner.Run<RandomNumberGenerator>();
        }
    }
}
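One thing worth remembering: BenchmarkDotNet is designed to measure optimised code, so build and run this console application in the Release configuration – with the .NET Core CLI that’s something like the command below. If you run a Debug build, BenchmarkDotNet will complain that the assembly was built without optimisations and the results may not be reliable.

dotnet run -c Release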

And now if I run this console application at a command line, BenchmarkDotNet presents me with some experiment results like the ones below.

// * Summary *

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728183 Hz, Resolution=366.5443 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


          Method | Mean     | Error     | StdDev    |
---------------- |---------:|----------:|----------:|
 GetRandomNumber | 10.41 ns | 0.0468 ns | 0.0365 ns |

As you can see above, my machine specifications are listed, and the experiment results suggest that my RandomNumberGenerator class presently takes about 10.41 nanoseconds to generate a random number.

So now I have a baseline – after I change my code to use the more cryptographically secure RNGCryptoServiceProvider, I’ll be able to run this test again and see if I’ve made it faster or slower.

How fast is the service after the code changes?

I’ve changed the service to use the RNGCryptoServiceProvider – the code is below.

using System;
using BenchmarkDotNet.Attributes;
using System.Security.Cryptography;
 
namespace Services
{
    public class RandomNumberGenerator : IRandomNumberGenerator
    {
        private static Random random = new Random();
 
        [Benchmark]
        public int GetRandomNumber()
        {
            using (var randomNumberProvider = new RNGCryptoServiceProvider())
            {
                byte[] randomBytes = new byte[sizeof(Int32)];
 
                randomNumberProvider.GetBytes(randomBytes);
 
                return BitConverter.ToInt32(randomBytes, 0);
            }
        }
    }
}

And now, when I run the same performance test at the console, I get the results below. The code has become slower, and now takes 154.4 nanoseconds instead of 10.41 nanoseconds.

BenchmarkDotNet=v0.10.8, OS=Windows 10 Redstone 2 (10.0.15063)
Processor=Intel Core i7-2640M CPU 2.80GHz (Sandy Bridge), ProcessorCount=4
Frequency=2728183 Hz, Resolution=366.5443 ns, Timer=TSC
dotnet cli version=2.0.0-preview2-006127
 [Host] : .NET Core 4.6.25316.03, 64bit RyuJIT
 DefaultJob : .NET Core 4.6.25316.03, 64bit RyuJIT


          Method | Mean     | Error    | StdDev   |
---------------- |---------:|---------:|---------:|
 GetRandomNumber | 154.4 ns | 2.598 ns | 2.028 ns |

So it’s more functionally correct, but unfortunately it has become a little slower. However, I can now go to my technical architect with a proposal to change the code and present a more complete picture – they’ll be able to understand not only why my proposed code is more cryptographically secure, but also see some solid metrics around the performance cost. With this data, they can make better decisions about what mitigations they might want to put in place.

How should I use these numbers?

A slowdown from about 10 to 150 nanoseconds doesn’t mean that the user’s experience deteriorates by a factor of 15 – remember that a single user’s experience spans the entire lifecycle of the page request, so an individual user should only see a slowdown of about 140 nanoseconds in the time it takes to refresh the whole page. Obviously a website will have many more users than just one at a time, and this is where our JMeter tests will be able to tell us more accurately how the page performance deteriorates at scales of hundreds or thousands of users.

Wrapping up

BenchmarkDotNet is a great open-source tool (sponsored by the .NET Foundation) that allows us to perform micro-benchmarking experiments on methods in our code. Check out more of the documentation here.

I’ve chosen to demonstrate BenchmarkDotNet with a very small service that has methods which take no parameters. The chances are that your code is more complex than this example, and you can structure your code so that you can pass parameters to BenchmarkDotNet – I’ll write more about these more complicated scenarios in the next post.
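As a quick taste of what that looks like, the hypothetical sketch below uses BenchmarkDotNet’s Params attribute so that the same benchmark runs against several primitive input values – the next post goes into these scenarios, including data transfer object parameters, in more detail:

using BenchmarkDotNet.Attributes;
 
public class SumBenchmarks    // hypothetical example class
{
    // BenchmarkDotNet runs the benchmark once for each value listed here.
    [Params(10, 1000, 100000)]
    public int Count { get; set; }
 
    [Benchmark]
    public long SumIntegersUpToCount()
    {
        long total = 0;
        for (var i = 0; i < Count; i++)
        {
            total += i;
        }
        return total;
    }
}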

Where I think BenchmarkDotNet is most valuable is that it changes the discussion in development teams around performance. Rather than changing code and hoping for the best – or worse, reacting to an unexpected performance drop affecting users – micro-benchmarking is part of the development process, and helps developers understand and mitigate code problems before they’re even pushed to an integration server.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, .net core

Contributing to the .NET Core SDK source code for the first time, and how OSS helped me

The .NET Core source code has been open sourced on GitHub for a while now, and the community is free to raise issues and submit pull requests – though I’d not really expected that I’d ever actually need to. That’s mainly because I always expect that thousands of other talented developers will have tested the code paths I’m working with and found (and solved) those issues before me.

But shortly after I installed .NET Core 2.0.0 Preview 1, I found that all my .NET Core projects that I had written for Windows 10 IoT Core suddenly stopped working – and the reason was that the executable file wasn’t being generated any more after I published the project.

I tested the hell out of this – I originally suspected that I had done something wrong or different, and I really didn’t want to report an issue and then find I was the one who had actually made a mistake. But I eventually concluded that something had changed in the code, so I raised a bug under the title “Publishing to win10-arm or win8-arm doesn’t generate an exe file for a Console application“, and this ultimately led to me committing some test code to the .NET Core codebase.

So the fact that .NET Core is completely open source and receiving community contributions suddenly became extremely relevant to me – previously I’d have just had to suffer the problem.

None of this stuff I write about below is a particularly big deal – just a part of software development – but dipping my toe into the waters of a massively public open source project was, well, a bit nerve wracking.

In some ways I felt like when I start a new job, where I’ve joined a team that has patterns and practices that I’m not entirely familiar with – I’m always worried I’ll do something that makes things harder for other developers, invokes justified wrath… and reminds me that it’s only Imposter Syndrome if I’m not actually stupid.

None of the stuff I was worried about happened – and it was never going to happen. The .NET development team were super helpful, open and friendly, and encouraged me right from the start – and there were safety nets all along the way to stop anything bad happening. They even suggested a workaround to solve my problem on the same day I raised the issue, which helped me massively before the fix was merged in.

I’ve written about my experiences below – things I got right, and things I got wrong – hopefully this will be useful to other developers thinking about putting their toe in the same waters.

Tips for a good issue report

The first part of this was writing up the issue – I think that there are essentially three parts to a good issue report:

  • Steps to recreate the issue
  • Actual behaviour
  • Expected behaviour – don’t forget to say why you think this is the expected behaviour.

What sort of things do I need to do when submitting a pull request to the .NET Core repositories?

I wasn’t the developer who actually solved the issue – the .NET team get the credit for that – but I did see an opportunity to write a test to make sure the issue didn’t reoccur, and I submitted a PR for that code change.

First, fork the .NET Core SDK repository

This bit’s really easy – just click on the “Fork” button in the top right corner of the GitHub repository. This’ll create a fork of the original Microsoft source code in your own GitHub profile.

Clone the repo locally, and make sure you choose the correct branch to code against

I used TortoiseGit to clone the repository to my local development machine, and just started coding – and that turned out to be a bit too quick on the draw. I don’t think this is written down anywhere, but I should have targeted the release/2.0.0 branch.

How do I choose the right branch? I think the best way is to look at some recently closed pull requests, and see where the other developers are pushing their code.

With TortoiseGit, it’s easy to switch branches.

  • Right click on the root of the repo you’ve cloned, select “TortoiseGit > Switch/Checkout”.

screenshot.1497111687

  • A window will appear, where you can select the branch you want from a dropdown list. In the image below, you can see I’ve selected the release/2.0.0 branch. Click OK to switch your local repo to the code in this branch.

screenshot.1497111727

I initially (but wrongly) wrote my code against the default branch – in some repositories that’s possibly ok, but at the time of writing, the best branch to target in the .NET SDK repo is release/2.0.0. By the time I realised I should have targeted the release/2.0.0 branch and tried to switch to it, GitHub invited me to resolve lots of conflicts in files I hadn’t touched. Rather than trying to rebase and introducing lots of risk, I just closed the original pull request, selected the correct branch, and opened a new pull request which included my code change. Don’t make the same mistake I did!

Test that you can build the branch before making any changes

Once your locally cloned repository targets the correct branch, you should try building the code before making any changes. If it doesn’t build at this point or tests fail, then at least you know the problem isn’t caused by something you did.

In the root folder of the source for .NET Core’s SDK, there are three files which can be used to build the code:

  • build.cmd
  • build.ps1
  • build.sh

Open a command prompt, and run whichever of the three suits your platform.

If you find that the code doesn’t build or the tests don’t pass, check the build status on the repo’s home page.

Make your changes, commit them, and push the changes to the right branch in your remote fork on GitHub

Don’t forget your unit tests, make sure everything builds, and comment your changes appropriately.

Now create a pull request

From your forked repository, hit the “New Pull Request” button. Here are a few things that I think are useful to think about:

  • You’ll need to enter a comment – make sure it’s a useful one.
  • Describe why your change is valuable – does it fix an issue? Is it a unit test, related to another pull request?
  • If you can, link to an issue or pull request in the comment to give the reviewers some context.
  • I try not to submit a pull request which changes many files – lots of changes make it difficult to review. If you have to change lots of files, try to explain why it wasn’t possible to separate this out into smaller chunks.
  • And remember to open the pull request against the correct branch!

screenshot.1497100934

What happens when I submit the pull request?

Once you submit your first pull request, it’ll immediately be assigned a label “cla-required” by the dnfclas bot.

screenshot.1496089436

cla is short for “contribution licence agreement“.

dnfclas means “dot net foundation contribution licence agreement” and is the Pull Request Bot.

To proceed beyond this point, you need to click on the link to https://cla2.dotnetfoundation.org to sign a Contribution Licence Agreement. When you click on that link, you’ll be redirected to a page like this.

screenshot.1496089699

Sign in using your GitHub credentials, and you’ll be invited to enter some details and sign the agreement. If you sign it, you’ll eventually be shown a page like the one below.

screenshot.1496089798

At this point, the dnfclas bot automatically recognises that you’ve signed the agreement (you don’t need to tell it), and it updates the label in the pull request from “cla-required” to “cla-signed”. You’ll see this on your pull request as an update, similar to the one below.

screenshot.1496089456

As you might expect, there’s a series of integration environments where your pull request will be tested. For the .NET Core SDK continuous integration process, there are presently 10 environments where code is automatically tested:

  • OSX10.12 Debug
  • OSX10.12 Release
  • Ubuntu14.04 Debug
  • Ubuntu14.04 Release
  • Ubuntu16.04 Debug
  • Ubuntu16.04 Release
  • Windows_NT Debug
  • Windows_NT Release
  • Windows_NT_FullFramework Debug
  • Windows_NT_FullFramework Release

There are lots of dotnet repositories, and an issue which manifests itself in one repo might have the root cause in another one – and this was the case for me. The issue that I observed in the SDK actually started in the .NET CoreFx repository.

It takes a while for fixes in one repo to flow across to the other, so if you submit a unit test to one repo for a fix that lives somewhere else, the test might fail for a while – and that’ll stop it being merged in immediately.

So if you’re only submitting tests, expect that all the checks will fail until the code you’re covering with your unit test flows across to the .NET Core SDK continuous integration environment.

screenshot.1496092980

Once the fixed code has flowed through, you’ll see this (assuming your code works…):

screenshot.1496355381

The .NET Team will choose a reviewer for you – you don’t need to choose anyone

Finally – and probably most importantly – someone from the .NET Core SDK team will review your code. I think it’s mandatory (as well as courteous) to address any comments from your reviewer – these are helpful pointers from a team of super smart people who care about good code.

Other gotchas

One thing that caught me out was that GitHub marked some of the review comments as “outdated” (as shown below). I should have clicked on these – if I had, I would have seen a few comments that I hadn’t addressed.

screenshot.1496092853

Another thing is that I wish I’d had a copy of ReSharper on my development machine – one of the review comments was that I’d left an unused variable in my code, and ReSharper would have caught that error for me.

Wrapping up

So, much to my surprise, I’ve contributed to the .NET Core codebase – albeit in a very small way!

screenshot.1497101854

In summary, I was a bit nervous about submitting my first pull request to the .NET Core SDK repository – but I decided to create a simple test which covered a bug fix from the .NET team. Apart from signing a contribution licence agreement, this was a pretty standard process of submitting a pull request for review and automated testing. One really nice thing is that changes are tested not only against Windows, but also against different versions of Ubuntu and OSX. Also, if you’re about to submit your own pull request to a .NET Core repo, I’d recommend checking out other pull requests first as a guideline – and don’t forget to look at which branch the developers are merging to.

Hopefully this description of my experiences will help other developers thinking of contributing feel a bit more confident. I’d recommend to anyone thinking of making their first contribution, choose something small – it’ll help you get familiar with the process.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!