.net core, arduino, Bifröst, IOT, Making, Raspberry Pi 3

Using .NET Core 2 on Raspbian Jessie to read serial data from an Arduino

I’ve previously written about how to use the System.IO.Ports library in .NET Core 2 to read serial data from an Arduino using a Windows 10 PC – but this library doesn’t work for Linux machines. This time I’m going to look at how to read data from an Arduino over USB with .NET Core running on a Raspberry Pi 3 with Raspbian Jessie.

There are a few steps to getting this running.

  • First you’ll need:
    • A development machine (I’m using Windows 10),
    • A Raspberry Pi 3,
    • An Arduino,
    • A USB cable to link your Arduino and Raspberry Pi.
  • Next I’ll write a simple sketch and deploy it to my Arduino.
  • I’ll connect the Arduino to my Raspberry Pi 3, and check that the Pi can see my Arduino and the serial input using minicom.
  • Finally I’ll write and deploy a .NET Core application written in C# to my Raspberry Pi, and I’ll show how this application can read serial input from the Arduino.

As usual, I’ve uploaded all my code to GitHub and you can see it here.

For a few of the steps in this guide, I’ll refer to other sources – there’s not a lot of value in me writing a step by step guide for things which are commonly understood.

For example, I have a fresh install of Raspbian Jessie on my Raspberry Pi 3, and then I set up SSH on the Pi. I also use Putty, PSCP, Plink and Cake.net to deploy from my Windows machine to the Raspberry Pi (I’ve blogged in detail about this here).

Writing a sketch that writes serial data from the Arduino

In a previous post, I use VSCode to deploy a simple Arduino application that writes serial data. I could have used the Arduino IDE just as easily – and it’s widely known how to verify and upload sketches to the Arduino, so I’m not going to go into this part in great detail.

The code for the Arduino sketch is below:

int i = 0;

void setup() {
  Serial.begin(9600); // matches the 9600 Baud rate noted below
}

void loop() {
  Serial.print("Hello Pi ");
  Serial.println(i++);
  delay(1000); // pause briefly between messages
}

One thing worth noting is that the Baud rate is 9600 – we’ll use this information later.

I can test that this sketch is writing serial data by plugging the Arduino into my Windows 10 development machine, and using VSCode or the Arduino IDE. I’ve shown a screenshot below of the serial monitor in my Arduino IDE, which just prints the text “Hello Pi” followed by a number. This confirms that my Arduino is writing data to the serial port.


Let’s test the Raspberry Pi can receive the serial data

Before writing a C# application in .NET Core to read serial data on my Raspberry Pi, I wanted to test that my Pi can receive data at all (obviously after connecting my Arduino to my Raspberry Pi using a USB cable).

This isn’t mandatory for this guide – I just wanted to definitely know that the Pi and Arduino communication was working before starting to write my C# application.

First, I need to find the name of the serial port used by the Arduino. I can find the names of serial ports by using PuTTY to SSH into my Pi 3, and then running the command:

ls -l /dev | grep dialout

Before connecting my Arduino UNO to my Raspberry Pi, this reports back two serial ports – ttyAMA0 and ttyS0.


After connecting my Arduino to my Raspberry Pi, this now reports back three serial ports – ttyACM0, ttyAMA0, and ttyS0.


Therefore I know the port used by my Arduino over USB is /dev/ttyACM0.

As an aside – not all Arduino boards will use the port /dev/ttyACM0. For example, I repeated this with my Arduino Nano and Arduino Duemilanove, and found they both use /dev/ttyUSB0, as shown below:


But for my Arduino Yun and my Arduino Primo, the port is /dev/ttyACM0.


So the point here is that you need to check what your port name is when you connect it to a Linux machine, like your Pi – the port name can be different, depending on what kind of hardware you connect.
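That check can be sketched as a single guarded command – this is my own shorthand, assuming the common ACM/USB naming shown above:

```shell
# List candidate USB serial devices – UNO-style boards typically show up
# as /dev/ttyACM*, while Nano/Duemilanove-style boards use /dev/ttyUSB*.
ls /dev/ttyACM* /dev/ttyUSB* 2>/dev/null || echo "no USB serial devices found"
```

Running it before and after plugging in the Arduino makes the new device obvious.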

Finally, if you’re interested in why “tty” is used in the Linux world for ports, check out this post.

Tools to read serial data

Minicom is a serial communication program which will confirm my Pi is receiving serial data from the Arduino. It’s very easy to install – I just used PuTTY to SSH into my Pi 3, and ran the command:

sudo apt-get install minicom

Then I was able to run the command below, using the port name (/dev/ttyACM0) and the Baud rate (9600).

minicom -b 9600 -o -D /dev/ttyACM0

If the Raspberry Pi is receiving serial data from the Arduino, it’ll be written to the SSH terminal.
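If minicom isn’t installed, a rough alternative check can be sketched with stty and cat – these are standard Linux tools rather than anything from this post, and the sketch assumes the /dev/ttyACM0 port name and 9600 Baud rate found above:

```shell
PORT=/dev/ttyACM0
if [ -e "$PORT" ]; then
  stty -F "$PORT" 9600 raw -echo   # match the sketch's 9600 Baud rate
  timeout 5 cat "$PORT"            # dump five seconds of serial data
else
  echo "serial port $PORT not present"
fi
```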

Some posts I’ve read say it’s necessary to disable serial port logins to allow the Arduino to send messages to the Raspberry Pi, and modify files in the “/boot” directory – but on Jessie, I didn’t actually find this to be necessary – it all worked out of the box with my fresh install of Raspbian Jessie. YMMV.

Another way to prove that serial communication is working is to install the Arduino IDE onto the Raspberry Pi, and just open the serial monitor on the device. Again, installing the IDE on your Pi is very easy – just run the command below at a terminal:

sudo apt-get install arduino

This will even install an Arduino shortcut into the main Raspbian menu.


Once you’ve started the IDE and connected the Arduino to a USB port, select the serial port /dev/ttyACM0 (shown available on the Tools menu in the screenshot below):


Then open the serial monitor to check that the “Hello Pi” messages are coming through correctly (as shown below):


Writing the C# application

Now that I’m sure that the physical connection between the Arduino and Pi works, I can start writing the C# application.

TL;DR: I’ve uploaded my working code to GitHub here.

When writing my code, I wanted to stay close to the existing API provided by Microsoft in their library System.IO.Ports, which allows Windows machines to read from the serial port (I’ve blogged about this here). I was able to look at their source code on GitHub, and from this I designed the serial port interface below:

using Bifrost.IO.Ports.Core;
using System;

namespace Bifrost.IO.Ports.Abstractions
{
    public interface ISerialPort : IDisposable
    {
        int BaudRate { get; set; }
        string PortName { get; set; }
        bool IsOpen { get; set; }

        string ReadExisting();
        void Open();
        event SerialDataReceivedEventHandler DataReceived;
        void Close();
    }
}

I like interfaces because consumers can code against this interface, and don’t care if I change my implementation behind it. It also makes my libraries more testable with mocking libraries. You can see this interface on GitHub here.

The next task was to design a .NET Core implementation for this interface which would work for Linux. I’ve previously done something similar to this for I2C communication, using P/Invoke calls (I’ve written about this here). After reading the documentation and finding some inspiration from another open source sample here, I knew I needed the following six P/Invoke calls:

[DllImport("libc", EntryPoint = "open")]
public static extern int Open(string portName, int mode);

[DllImport("libc", EntryPoint = "close")]
public static extern int Close(int handle);

[DllImport("libc", EntryPoint = "read")]
public static extern int Read(int handle, byte[] data, int length);

[DllImport("libc", EntryPoint = "tcgetattr")]
public static extern int GetAttribute(int handle, [Out] byte[] attributes);

[DllImport("libc", EntryPoint = "tcsetattr")]
public static extern int SetAttribute(int handle, int optionalActions, byte[] attributes);

[DllImport("libc", EntryPoint = "cfsetspeed")]
public static extern int SetSpeed(byte[] attributes, int baudrate);

These calls allow me to:

  • Open a port in read/write mode, getting an integer handle to that port;
  • Get the port’s attributes, specify the Baud rate attribute, and set the attributes back on the port;
  • Read from the port into an array of bytes, given the handle to the port;
  • Finally, close the connection.

I’ll look at the most important elements below.

Opening the serial port

If we have instantiated a port with a name (/dev/ttyACM0) and a Baud rate (9600), we can use these P/Invoke calls in C# to open the port.

public void Open()
{
    int handle = Open(this.PortName, OPEN_READ_WRITE);

    if (handle == -1)
    {
        throw new Exception($"Could not open port ({this.PortName})");
    }

    SetBaudRate(handle);

    // Give the port a couple of seconds to settle down before reading
    Thread.Sleep(2000);

    Task.Run(() => StartReading(handle));
}

You’ll notice that if the request to open the port is successful, it’ll return a non-negative integer, which will be the handle to the port that we’ll use throughout the rest of the class.

Setting the Baud rate is straightforward – we get the array of port attributes using the port’s handle, specify the Baud rate, and then send this array of attributes back to the device.

private void SetBaudRate(int handle)
{
    byte[] terminalData = new byte[256];

    GetAttribute(handle, terminalData);
    SetSpeed(terminalData, this.BaudRate);
    SetAttribute(handle, 0, terminalData);
}

I give the port a couple of seconds to settle down – I often find that the first few messages come through out of order, or with missing bytes – and then run the “StartReading” method in a separate thread using Task.Run.

Reading from the serial port

Reading from the port is quite straightforward too – given the handle, we just use the P/Invoke call “Read” to copy the serial data into a byte array which is stored as a member variable. Before invoking an event corresponding to a successful read, I check that there actually is valid data returned (i.e. the return value is non-negative), and that any data returned isn’t just a single newline character. If it passes this test, I pass control to the event handler for the DataReceived event.

private void StartReading(int handle)
{
    while (true)
    {
        Array.Clear(serialDataBuffer, 0, serialDataBuffer.Length);

        int lengthOfDataInBuffer = Read(handle, serialDataBuffer, SERIAL_BUFFER_SIZE);

        // Make sure there is data in the buffer, and that it's not just a single newline character
        if (lengthOfDataInBuffer != -1 && !(lengthOfDataInBuffer == 1 && serialDataBuffer[0] == ASCII_NEWLINE_CODE))
        {
            DataReceived.Invoke(this, new SerialDataReceivedEventArgs());
        }
    }
}

Putting it all together

I’ve put my interfaces and implementations into separate .NET Standard libraries so that I can re-use them in my other .NET Core applications. And when I write a sample program for my Raspberry Pi to read from my Arduino, the implementation is very similar to the implementation that works for Windows x86/x64 devices reading from an Arduino (covered in this post).

using Bifrost.IO.Ports;
using Bifrost.IO.Ports.Core;
using System;

namespace SerialSample
{
    class Program
    {
        static void Main(string[] args)
        {
            var serialPort = new SerialPort()
            {
                PortName = "/dev/ttyACM0",
                BaudRate = 9600
            };

            // Subscribe to the DataReceived event.
            serialPort.DataReceived += SerialPort_DataReceived;

            // Now open the port.
            serialPort.Open();

            Console.ReadKey();
        }

        private static void SerialPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            var serialPort = (SerialPort)sender;

            // Read the data that's in the serial buffer.
            var serialdata = serialPort.ReadExisting();

            // Write to debug output.
            Console.Write(serialdata);
        }
    }
}

Once I’ve compiled my project, and deployed it to my Raspberry Pi (using my build.cake script), I can SSH into my Pi and run the .NET Core application – and this displays the “Hello Pi” serial output being sent by the Arduino, as expected.


Wrapping up

It’s possible to read serial data from an Arduino connected by USB to a Raspberry Pi 3 using C#. Obviously the code here is just a proof of concept – it doesn’t use handshaking, parity bits or stop bits, and it only reads from the Arduino, but writing back to the serial port could be achieved using the P/Invoke call to the “write” function. Hopefully this post is useful to anyone trying to use serial communications between a Raspberry Pi 3 and an Arduino.

About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!


arduino, IOT, Making, PCB Manufacture

Upload a sketch to an Arduino UNO with bluetooth using the Adafruit Bluefruit EZ-Link breakout

I’m working on ways to make my Arduino UNO communicate with my Raspberry Pi 3 over a serial connection, and I’ve used a USB cable to connect the two devices together.

But because the Type-B USB connector on the Arduino is being used, I don’t have a convenient way to upload sketches to the Arduino UNO from my development machine.

I can obviously disconnect the USB connector from the Arduino UNO and connect it to my development machine, but this gets kind of old after a while, and I’d like an easier way to upload sketches.

So I dug deep into my box of electronic bits and pieces, and fortunately found I’ve got another way to do this – I can add bluetooth capability to my Arduino UNO with the Adafruit Bluefruit EZ-link breakout board.

First connect the board to the Arduino UNO

I could have done this with wires and a breadboard, but I decided to make a custom shield with some copper clad PCB board and my Shapeoko.


Fortunately there was very little soldering to be done on my custom PCB – which is just as well, as my soldering could be better!



I designed the PCB according to the pinout recommended by Adafruit, which is:

EZ-Link    Arduino UNO
DSR        Not connected
Tx         Digital Pin 0 (Rx)
Rx         Digital Pin 1 (Tx)
DTR        1uF capacitor in series with Reset

Attaching the breakout board to a custom made PCB makes the whole device more self contained and I can also add other Arduino shields on top.

I found out afterwards that Adafruit actually sells something similar to this already – a Bluefruit EZ-Link Shield.

Now pair the Bluefruit board with a development machine

Adafruit provide some really good documentation for pairing the Bluefruit module with a Windows 7 machine – but I’ve written my own notes before for the Windows 10 operating system.

My Arduino is powered by the USB connection to my Raspberry Pi, so once I attached the shield to the Arduino the blue power light came on immediately. The next step was to pair the Bluefruit device with my development machine.

My machine has bluetooth v4.0 on board, but I could have used an after-market bluetooth dongle like this one – it’s important to get one that supports bluetooth v4.0, as some of the cheaper dongles do not support v4.0.

From Windows 10, I typed “Bluetooth” into the search box on the taskbar, and the results gave me the option to select “Bluetooth and other devices settings”.


I clicked on the “+” button beside the text “Add Bluetooth or other device”, which opened the screen below.


I clicked on the Bluetooth option, which opened the screen below, and this shows the Adafruit EZ-Link device.


I selected the Adafruit device for pairing, and received the success message below.


So at this point, it’s possible to upload sketches to the Arduino through the Bluetooth COM port – and in the image below, you can see that even though my Arduino isn’t connected by USB cable to my development machine, the serial port COM6 is available to me.


I talked about how to use VSCode to develop and deploy to the Arduino last time, and deploying over the Bluetooth COM port works exactly the same way.

And as far as the development machine is concerned, this is just another COM port, as if the Arduino were directly connected to the PC using a USB cable. I don’t need to do any further set up, and I can verify and upload my sketch to the Arduino UNO wirelessly through COM6 (it’ll probably be different on another machine). It’s a little bit slower to transfer a sketch over the air, but at least it’s wireless.

Wrapping up

This was a short post about how to wirelessly deploy sketches to the Arduino using the Adafruit Bluefruit EZ-Link breakout. When I can’t use the Arduino’s on board USB port, this device makes my Arduino development much more convenient. If you have this device in your toolkit, hopefully you find this post useful!

About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, .net core, Making, Raspberry Pi 3

Installing Ubuntu 16.04 on a Raspberry Pi 3, installing .NET Core 2, and running a sample .NET Core 2 app

I usually work with Windows 10 IoT Core on my Raspberry Pi 3, but recently I’ve started to think about how I could use .NET Core (which is portable across Windows and Linux) with an Ubuntu installation on a Raspberry Pi 3.

I’ve written previously about running a .NET Core “hello world” application on a Pi 3 with Windows IoT core. This time I’ll write a post for a .NET Core application deployed to Ubuntu 16.04 from a Windows 10 machine – the post below describes:

  • Installing Ubuntu 16.04 on a Raspberry Pi 3,
  • Installing .NET Core 2
  • Testing this installation
  • Creating a “hello world” app which is targeted at Ubuntu
  • Deploying this application to the Raspberry Pi 3, and
  • Finally running the application.

There are a few posts about different parts of this already, but I wasn’t able to find a single post which described all the steps. I’m not very familiar with Linux so some of these steps might be really obvious to more skilled Linux users.

Since writing this, I’ve written another post about deploying .NET Core 2 C# apps to Linux distros for the Raspberry Pi – check it out here.

Install Ubuntu 16.04 LTS

Download Ubuntu for the Raspberry Pi 3 ARM Processor

You can download the zipped up image file from here – this is listed as “Ubuntu Classic Server 16.04 for the Raspberry Pi 3” on this page (shown below, highlighted in red).


Once you’ve downloaded this zipped file, you’ll need to extract it (using a tool such as 7-zip).

Format the SD Card

If you’ve a brand new card you might not need to format it, but if you’ve used your card for a previous installation, I think the easiest way to format a card is to use the diskpart tool which is shipped with Windows. I’ve previously blogged about how to do this at the link below:


The image below shows a summary of how I formatted my disk:

  • First I call diskpart
  • Then I list the disks (using list disk)
  • Then I select the disk which is my SD card (using select disk 1, though your number might be different)
  • Then I clean the disk (using clean)
    • This sometimes fails with a permission error – I find that just calling clean again solves the problem
  • Then I create a primary partition on the cleaned disk (using create partition primary)
  • Finally I make this partition active (using active).
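The same steps can be collected into a diskpart script file – a sketch of my own, not from the original walkthrough, and it assumes the SD card really is disk 1 (always verify with list disk first, as clean is destructive):

```
rem clean-sd.txt – run from an administrator prompt with: diskpart /s clean-sd.txt
select disk 1
clean
create partition primary
active
```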


Write the Ubuntu image to the SD Card

Now that I have an un-zipped Ubuntu 16.04 image and a clean SD card, I need to flash this image to the card. I did this using a tool called “Win32DiskImager” which I downloaded from https://sourceforge.net/projects/win32diskimager/. There’s more information about this tool here: https://wiki.ubuntu.com/Win32DiskImager.

Another cool application for doing this is Etcher.io.

I browsed to the image file after opening “Win32 Disk Imager”, selected the drive letter associated with my SD card, and then I clicked on the “Write” button. It took my machine about 7 minutes to flash the image to my SD card.


Insert the SD card to the Raspberry Pi 3 and boot

Now I insert the SD card into my Raspberry Pi 3, and connect the USB power supply. The easiest way to see what happens when you boot your Raspberry Pi 3 is to connect it to an HDMI monitor – I am lucky enough to have one of these monitor types.

However, I’ve also done this without a monitor – I happen to know the wired IP address that my Raspberry Pi 3 always chooses – so if I insert the SD card into my Pi 3 and then switch it on, I can run “ping -t” against that address, and it’ll time out until the wired ethernet connects.


You might notice in the picture above that after the initial connection has been made, a few seconds later the connection drops once – I found this happens quite often, so before logging in with PuTTY I often leave the connection up for a few seconds to settle.

Connect to the Raspberry Pi 3 over ssh using PuTTY

I downloaded an installer for PuTTY from here – this allows me to SSH into my Raspberry Pi 3 from my Windows machine.

I find it helps to add the path to PuTTY to my machine path – I found the default path for the 64-bit installer to be “C:\Program Files\PuTTY“, which I then added to my machine’s path.

You can see your machine’s path from a PowerShell prompt using the command below:

Get-ChildItem -Path Env:Path | Select-Object -ExpandProperty Value

Once my path is updated, I’m able to type “putty” at a command prompt and a window opens like the one below:


I typed my Raspberry Pi 3’s IP address into the “Host Name” box in the window above. When I clicked on the “Open” button, the system shows me a window like the one below.


Since I was expecting this, I clicked on Yes, and a window opens asking for a username and password. The first time you log in, the username is ubuntu and the password is ubuntu – you’ll then be asked to change this password.


After I confirm the new password by typing it for the second time, the PuTTY connection closes and I need to SSH in again – this time with the new password.

At this point Ubuntu 16.04 is installed onto the Raspberry Pi 3 and ready to be used – I can verify this by using the command below:

lsb_release -a

This prints distribution specific information, as shown below:


Install .NET Core 2 on the Raspberry Pi 3

Running .NET Core on Linux is not a surprising thing anymore – but generally these installations are on machines with an underlying x86 or x64 architecture. The Raspberry Pi 3 has an ARM 32-bit architecture, which makes things a little bit more unusual.

Fortunately there are some preview builds of .NET Core 2 which run on Ubuntu and an ARM 32-bit architecture, which are available at https://github.com/dotnet/core-setup/ (shown below).


Update: Recently Microsoft updated this page – they’ve removed the Ubuntu-specific builds and replaced them with a generic Linux build targeting ARM processor types.


This is way better than before – it means we can now deploy to Raspbian as well as Ubuntu!

This part is reasonably straightforward – as long as you know the right steps. I’ve found lots of web posts which mention a few of the steps below, but many of them leave me halfway through the process, or produce error messages.

I’ve commented the commands below which I’ve found consistently get me from a clean install of Ubuntu 16.04 on a Raspberry Pi 3 to a working installation of .NET Core 2.

# Update Ubuntu 16.04
sudo apt-get -y update

# Install the packages necessary for .NET Core
sudo apt-get -y install libunwind8 libunwind8-dev gettext libicu-dev liblttng-ust-dev libcurl4-openssl-dev libssl-dev uuid-dev

# Download the latest binaries for .NET Core 2
wget https://dotnetcli.blob.core.windows.net/dotnet/Runtime/release/2.0.0/dotnet-runtime-latest-linux-arm.tar.gz

# Make a directory for .NET Core to live in
mkdir /home/ubuntu/dotnet

# Unzip the binaries into the directory we just created
tar -xvf dotnet-runtime-latest-linux-arm.tar.gz -C /home/ubuntu/dotnet

# Now add the path to the dotnet executable to the environment path
# This ensures the next time you log in, the dotnet exe is on your path
echo "PATH=\$PATH:/home/ubuntu/dotnet" >> dotnetcore.sh
sudo mv dotnetcore.sh /etc/profile.d

Then run the command below to add the path to the dotnet executable to the current session:

export PATH=$PATH:/home/ubuntu/dotnet


Test the .NET Core 2 installation

I can now test my installation by simply calling one command from my PuTTY prompt:


When I call this, I can see that I have version 2.0.0-preview1-001887-00 installed.
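As a sketch of that check – the exact dotnet flag here is my assumption rather than something stated above, and the guard covers machines without dotnet on the path:

```shell
if command -v dotnet >/dev/null 2>&1; then
  dotnet --version   # assumption: reports the installed version
else
  echo "dotnet not found on PATH"
fi
```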


Create a hello world .NET Core 2 app for Ubuntu 16.04 ARM 32

Installing .NET Core 2 is just the first step – now we have to create a working .NET Core 2 application which is targeted at Ubuntu 16.04.

Previously I’ve written about how to create a .NET Core 2 app on Windows, and deploy to a Raspberry Pi 3 running Windows 10 IoT Core here.

This post has all the instructions to create the executables targeted at Ubuntu – I’ll not repeat them all here. The tutorial creates an application called “coreiot” (because it’s an IoT application for .NET Core).

The block of code below shows the C# content of the application – running this application should print the text “Hello Internet of Things!”.

using System;

namespace RaspberryPiCore
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello Internet of Things!");
        }
    }
}

For an IoT application called “coreiot”, the command to create executables targeting Ubuntu 16.04 is:

dotnet publish -r ubuntu.16.04-arm

and all the files will be found in the folder:

\coreiot\bin\Debug\netcoreapp2.0\ubuntu.16.04-arm\publish

The next step is to deploy the files in this folder to the Raspberry Pi 3.

Deploy this application to the Raspberry Pi 3

First, I logged into the Raspberry Pi 3 using PuTTY and created a folder named “UbuntuHelloWorld”

mkdir UbuntuHelloWorld

One of the tools installed alongside PuTTY is called pscp, which allows files to be transferred from a Windows machine to a Linux machine.

From my Windows machine where I compiled the .NET Core 2 application in the previous step, I opened Powershell and browsed to the \coreiot\bin\Debug\netcoreapp2.0\ubuntu.16.04-arm\publish folder.

I then run the command below.

pscp -r * ubuntu@
  • The switch “-r” tells pscp to copy recursively.
  • The “*” symbol tells pscp to copy everything.
  • “ubuntu@” begins the destination: “ubuntu” is the username, followed by the IP address of the Raspberry Pi, and “/home/ubuntu/UbuntuHelloWorld” is the folder to copy the files to.
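The copy step can be sketched as a guarded command – PI_IP is a placeholder of my own for the Raspberry Pi’s actual IP address, not a value from this post:

```shell
# PI_IP is a placeholder – replace it with your Raspberry Pi's IP address.
PI_IP="REPLACE_WITH_PI_IP"
if [ "$PI_IP" = "REPLACE_WITH_PI_IP" ]; then
  echo "set PI_IP to your Pi's IP address before running this"
elif command -v pscp >/dev/null 2>&1; then
  # -r copies recursively; * copies everything in the publish folder
  pscp -r * "ubuntu@$PI_IP:/home/ubuntu/UbuntuHelloWorld"
else
  echo "pscp not installed on this machine"
fi
```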

After I run the command, I’m challenged for a password, and then the files are copied across from my Windows machine to my Raspberry Pi 3.


So now if I ssh into my Raspberry Pi 3 and look into the UbuntuHelloWorld folder, I can see all the files have been copied into this folder.


Finally, I need to make these files executable using the command below to allow me to run my .NET Core 2 application.

sudo chmod u+x *

Run the application

Now we’ve done all the hard work – it’s easy to run the application by just browsing to the UbuntuHelloWorld directory, and running the command:


As shown below, the application outputs the text “Hello Internet of Things!“.
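Those run steps can be sketched as a guarded script – the binary name coreiot is my assumption, taken from the project name used in the publish step:

```shell
if [ -d "$HOME/UbuntuHelloWorld" ]; then
  cd "$HOME/UbuntuHelloWorld"
  ./coreiot   # assumption: the published binary takes the project's name
else
  echo "UbuntuHelloWorld folder not found on this machine"
fi
```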


Footnote: After writing this post, I created a nuget library which simplifies much of this – check out my post about using it here.


This has been a long post – I made a lot of mistakes on the way to finding this series of steps, but I’ve found that following these steps reliably helps me get Ubuntu and .NET Core 2 running on my Raspberry Pi 3.

About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, HoloLens, Making, Unity, UWP

Coding for the HoloLens with Unity 5 – Part #2: Creating a simple Hello World project

Last time I looked at setting up my development environment to allow me to develop for the Microsoft HoloLens Emulator. This time, I’m going to create a project in Unity, add in a simple primitive object, and use some C# to do something interesting with this object.

Creating a new Unity 5 project

If you’ve installed Unity correctly, you’ll be shown a screen like the one below after you open Unity 5 HTP for the first time.


Click on the “New project” button, and the screen should change to one similar to the one below. I’ve chosen the name “HelloWorld” for my project, and I’ve saved it to my desktop.


After entering the name and location of the new Unity project, I clicked the “Create project” button and Unity shows the screen below. This is an (almost) empty project, which only has the project’s main camera and the default directional light.


The next step is to update the scene with some settings which make sense for a HoloLens app.

Updating the scene for the HoloLens

The default camera is set about 10m behind the origin point of the scene. We’re going to make a few changes to this camera using the Inspector tab on the right hand side.

  • First, I changed the position of the camera to (0, 0, 0), meaning X = 0, Y = 0, and Z = 0;
  • Next, in the Camera section, I changed the Clear Flags dropdown value to Solid Color;
  • Finally, I changed the Background property to Black (R = 0, G = 0, B = 0, A = 0).


These ensure that the camera – i.e. the point through which we will view the world with the HoloLens – is at the origin point.

Also, we’ve removed the default Skybox (i.e. background image), and any pixels rendered as black in our scene will appear as transparent in the HoloLens.

Add a cube

Now that we have the scene configured for the HoloLens, it’s time to add a simple object to our scene.

First, we right click on our Hierarchy pane on the left hand side, select “3d Object”, and then select “Cube” from the sub-menu that appears.


A simple cube should appear in the centre of the scene, like in the image below. If the cube does not appear in the correct place, make sure that the cube object appears in the Hierarchy menu at the same level of indentation as the Main Camera and the Directional Light.


Create a material

I’d like my cube to be a little more interesting than just a grey block – I’d like it to have a red colour. In Unity, we can achieve this by creating a Material asset and adding this component to the grey cube.

To create a material, I right click on the Assets node in the Project panel in the bottom left of the screen. From the context menu that appears, I select “Create”, and from the next menu that appears I select “Material”.


A new item is created and appears in the Assets panel – the cursor and focus are on this item, and I entered the value “Red”. Also, a grey ball appears in the bottom right hand corner. In the Inspector panel, I clicked on the color picker beside the “Albedo” label. In the pop up that appears, I selected a red colour, which updates the colour of the ball in the bottom right hand corner, as shown below.


Now that I’ve created a material, I can assign this to the cube. First I selected the Cube object in the Hierarchy panel. Next, I dragged the material named “Red” onto the Inspector panel on the right hand side. This is a surface that I can drag and drop components to. As soon as I drag the Red material to the Inspector for the cube, the cube turns red.


Moving the cube

It’s not very useful to have this cube surrounding our point of view – it makes more sense to have this sitting out in front of our point of view.

The easiest way to move the cube is to use the draggable axes which point outwards from the visible faces of the block. I clicked on the blue arrow – corresponding to the Z-direction – and dragged it forward about 3.5 units.


Just to make this block a little more visually interesting, I’d like to rotate it about its axes. To do this, I click on the rotate button in the top left hand corner (it’s the third button in the group of five, and is selected in the image below). The red cube now has a set of circles surrounding it, rather than the three arrows. You can click on these circles, and drag them to rotate the cube, as shown below.


That’s about it for the first section. You can preview what you’re going to see through the HoloLens by clicking on the Play button in the top-centre of the screen, which will show something like the screen below. The rotated cube floats in a black world, directly in front of our point of view.

Finally I saved the scene by hitting Ctrl+S, and typed in HelloWorld – you can see this in the assets panel.


Create a C# script to make the object rotate

Let’s take the complexity up a notch. We can write C# scripts and apply them to objects in our virtual world.

It’s very straightforward to create a script – right click on the Assets node in the Projects panel, and create a C# script from the context menus, as shown below.


I created a script called RotationScript. To edit it, I double-clicked on it. This opens VS2015 for me, though on your install it might open MonoDevelop.

I entered the code below:

using UnityEngine;

public class RotationScript : MonoBehaviour {

	public float YAxisRotationSpeed;

	// Update is called once per frame
	void Update () {
		this.transform.Rotate(0, YAxisRotationSpeed * Time.deltaTime, 0, Space.Self);
	}
}

The code above does one thing – each time the frame is updated by the rendering engine, the object that the script is applied to rotates a little around its own axes. Specifically in this case, I’ve specified the X-axis and Z-axis rotations to be zero, and the rotation around the Y-axis will be YAxisRotationSpeed degrees per second.

The code above refers to Time.deltaTime – this is a built-in Unity property that tells us how long (in seconds) it has been since the last frame. Therefore if we multiply the speed – YAxisRotationSpeed – by the amount of time that has passed – Time.deltaTime – the result is the number of degrees to rotate our cube by on this frame.
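To see why this makes the rotation frame-rate independent, here’s a quick sketch of the same arithmetic in plain C# (outside Unity – the frame rates and speed are just illustrative numbers): whatever the frame rate, the degrees accumulated over one second add up to YAxisRotationSpeed.

```csharp
using System;

class RotationMaths
{
    static void Main()
    {
        float yAxisRotationSpeed = 50.0f; // degrees per second

        // Simulate one second of Update() calls at two different frame rates
        foreach (int framesPerSecond in new[] { 30, 60 })
        {
            float deltaTime = 1.0f / framesPerSecond; // stands in for Time.deltaTime
            float totalDegrees = 0.0f;

            for (int frame = 0; frame < framesPerSecond; frame++)
            {
                totalDegrees += yAxisRotationSpeed * deltaTime;
            }

            // ~50 degrees after one second, whatever the frame rate
            Console.WriteLine($"{framesPerSecond} fps: {totalDegrees:F1} degrees");
        }
    }
}
```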

Once I saved the script in Visual Studio, I switched back to Unity. I selected my Cube in the Hierarchy panel, and then dragged the RotationScript to the Inspector for the Cube. In the property page which appears in the Inspector, I changed the value of the “Y Axis Rotation Speed” to be 50.


Now when I click on the Play button in Unity, I’m able to see the Game view of the scene again, but this time the cube is rotating about its own Y-axis.


Hello World!

It occurred to me that, with the simple skills learned in this post, I could do something quite interesting with Unity – instead of a rotating cube, I could add a sphere to the scene, apply a material which was an image of Earth, and show a rotating globe, which would be a much more appropriate “Hello, World” project. I could even add a second sphere to rotate around this one, which could represent the Moon.

I’m sure I’m not the first person to do this in Unity in a blog post – I’m sure there are many like it, but this one is mine.

  • As a first step, I clicked on the Cube object in my hierarchy and deleted it. This removed the red cube from my scene.
  • Next, I right-clicked on the Hierarchy panel, and selected “Create Empty”. This added an empty GameObject to the hierarchy.
  • Using the Transform panel in the Inspector for the GameObject, I changed the Z-position to be 4, thus placing the GameObject 4m in front of my point of view.


  • Next, I right clicked on the GameObject in the Hierarchy and added a sphere 3d Object. I renamed this “Earth”, and changed the X, Y, and Z scale values to be 2 (i.e. doubling its size). Notice how this is indented under GameObject, and also how its position in the Transform box in the Inspector is at (0, 0, 0). This means that its centre is at the origin of the parent GameObject, and changes to the position will move it relative to the parent GameObject.


  • Following this, I right clicked on the GameObject in the Hierarchy again, and added another 3d sphere – I named this object “Moon”, and changed the X, Y, and Z scale values to be 0.5 (i.e. halving its size). I also changed the X-position value to be 2, thus moving its centre 2m to the right of the centre of the “Earth” object.


  • Finally for this part, I selected the parent GameObject in the Hierarchy view, and dragged the “RotationScript” to the Inspector surface. In the property page which appeared in the Inspector, I changed the “Y Axis Rotation Speed” to be 50.

When I hit the Play button, I can see the animation rendered, and show a scene from this below.


I can see that both objects are rotating correctly – the larger central sphere is rotating about its own central vertical axis, and the smaller sphere is orbiting that same axis. However, it doesn’t look very good with the default white colour. I can improve this by using some free assets from the Unity Asset Store.
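The reason the Moon orbits when only the parent GameObject is rotated comes down to how child positions are expressed: the Moon’s position of (2, 0, 0) is relative to the parent, so as the parent rotates, that local offset is carried around in a circle. Here’s a plain C# sketch of that arithmetic (no Unity APIs – just a standard rotation matrix about the vertical axis, sign conventions aside):

```csharp
using System;

class OrbitMaths
{
    static void Main()
    {
        // The Moon sits 2 units from the parent GameObject's origin
        double localX = 2.0, localZ = 0.0;

        // Rotate the parent about its Y axis and compute the child's world position
        for (int degrees = 0; degrees <= 360; degrees += 90)
        {
            double radians = degrees * Math.PI / 180.0;
            double worldX = localX * Math.Cos(radians) + localZ * Math.Sin(radians);
            double worldZ = -localX * Math.Sin(radians) + localZ * Math.Cos(radians);

            // The child stays 2 units from the centre - it traces a circle
            double distance = Math.Sqrt(worldX * worldX + worldZ * worldZ);
            Console.WriteLine($"{degrees,3} degrees: ({worldX:F2}, {worldZ:F2}), distance {distance:F2}");
        }
    }
}
```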

Downloading assets from the Unity Asset Store

I searched the Unity Asset store through a browser – at http://www.assetstore.unity3d.com – for free renderings of Earth, and found the resource shown below (and linked to here).


I clicked on the “Open in Unity” button, and this switched my in-focus application to Unity. The Asset Store tab was open, and I was able to click on the “Download” button to acquire this resource (I did see a compatibility warning about how this was created with Unity 4). After a couple of pop-ups, I was shown the window below and chose to import one of the Earth material files, shown below.


After clicking on the “Import” button, this jpeg file appeared in my list of Assets, using its original directory structure.

I was able to select this from the Assets/EarthSimplePlanets/Textures folder in the Project panel, and drag the “EarthSimple1.jpg” file to the Inspector surface for the Earth sphere – the surface of the sphere then updated to look much more like the Earth.

Finally, I selected the GameObject from the Hierarchy, and tilted the Z-axis by -15 degrees, to give a slight planetary tilt. After hitting the Play button, the animation shows a white sphere rotating around a planet.

We could enhance this further by downloading more assets from the store for the Moon – a good candidate is the moon landscape linked to here – but for right now, I think this will look pretty good in our HoloLens mixed reality world.


Wrapping up

That’s it for this post – so far, we’ve:

  • created a new project with Unity,
  • added some primitive objects to this world,
  • changed these objects’ colours with materials,
  • added a C# script to make this object move,
  • arranged objects to make them orbit an axis outside the object, and
  • used the Unity Asset Store to download assets that make our model more realistic.

Next time, we’ll talk about actually deploying to the HoloLens emulator – there are a few tips and gotchas that I want to share to make other people’s journey a little smoother than mine.

Making, Raspberry Pi 3

Creating a NuGet package to simplify development for I2C devices

Last time, I looked at the MCP9808 temperature sensor, and how to get it to work with the Raspberry Pi 3 using the I2C protocol. As part of this development, I identified an initialisation function which I thought could be useful across a number of other I2C devices. I abstracted this out into a separate project, and I added a reference to this project from my MCP9808 project.

However, I’d prefer to refer to this class using a NuGet package – this allows me to pull the code for my MCP9808 project from GitHub, and know that it’ll work out of the box (rather than have to manually pull the code down from another repo on GitHub and refer to it).

The image below shows the contents of the NuGet package I’ve uploaded – there’s some package metadata, and also just one binary library, targeting the Windows 10 UAP.


I uploaded the NuGet package to the nuget.org site here, and I pull the reference into my project by either using the NuGet package manager through Visual Studio, or I can use the code:

Install-Package Magellanic.I2c -Pre

Of course the Visual Studio solution must target the ARM architecture for the Raspberry Pi, rather than just any CPU.


The code in this library specifies an abstract class, which the code for my other I2C devices can inherit from. I’ve shown below how I’ve used this library to simplify the initialisation code for the MCP9808 – the Initialize method is available when I extend the AbstractI2CDevice class.

public class MCP9808 : AbstractI2CDevice
{
    private const byte I2C_ADDRESS = 0x18;

    private byte[] DeviceIdAddress = new byte[] { 0x07 };

    public MCP9808()
    {
        this.DeviceIdentifier = new byte[2] { 0x04, 0x00 };
    }

    public override byte GetI2cAddress()
    {
        return I2C_ADDRESS;
    }

    //...other code...
}

I have specified the device’s unique I2C address, and the device’s identifier code. This means I can now simply initialise the device in an asynchronous way. I can also implement a method to check that the actual values held in the device’s identification registers match those in the datasheet (and therefore this can act as a good proxy to determine if I’ve successfully connected to the right device).
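I haven’t reproduced the library’s source here, but to illustrate the idea, a base class along these lines would support the pattern described above. Apart from GetI2cAddress and DeviceIdentifier (which appear in the MCP9808 code above), the member names and implementation details below are assumptions for illustration, not the actual Magellanic.I2c code.

```csharp
using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Devices.I2c;

// Sketch only - based on the description above, not the library source
public abstract class AbstractI2CDevice
{
    // Set by the subclass, e.g. { 0x04, 0x00 } for the MCP9808
    protected byte[] DeviceIdentifier { get; set; }

    protected I2cDevice Device { get; private set; }

    // Each device subclass knows its own fixed I2C bus address
    public abstract byte GetI2cAddress();

    public async Task Initialize()
    {
        var settings = new I2cConnectionSettings(GetI2cAddress())
        {
            BusSpeed = I2cBusSpeed.StandardMode
        };

        // Find the Raspberry Pi's I2C controller and open the device asynchronously
        var deviceSelector = I2cDevice.GetDeviceSelector();
        var i2cControllers = await DeviceInformation.FindAllAsync(deviceSelector);
        Device = await I2cDevice.FromIdAsync(i2cControllers[0].Id, settings);
    }

    // A device-specific implementation would read the identification
    // registers and compare them with DeviceIdentifier
    public abstract bool IsConnected();
}
```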

This makes the code for querying the device very simple.

private async Task WriteTemperatureSettingsToDebug()
{
    var temperatureSensor = new MCP9808();

    await temperatureSensor.Initialize();

    if (temperatureSensor.IsConnected())
    {
        var temperature = temperatureSensor.GetTemperature();

        Debug.WriteLine("Temperature = " + temperature);
    }
}

The full class for the MCP9808 is linked to here.

I hope this simple library will be helpful in controlling other I2C devices with my Raspberry Pi – the next device I’ll try to control will be the HMC588L digital compass.

.net, Making, Raspberry Pi 3, UWP

How to use a Microsoft LifeCam Studio with the Raspberry Pi 3 using C# and Windows 10 IoT Core

I’ve previously written about how to use the C# UWP APIs to access the camera on your Windows device. In the example code, I experimented with my Windows Phone (Nokia 1520) and my Windows 10 laptop, which has an integrated WebCam. Since I’ve recently been working with the Raspberry Pi 3 using Windows 10 IoT Core, I asked myself the question: Can I write the same C# UWP code and deploy it to a 64-bit laptop, an ARM Windows phone, and an ARM Raspberry Pi 3?

I decided to try the Microsoft LifeCam Studio with my Pi 3 – at least partly because it’s listed on the Compatible Hardware list, but it’s presently not “Microsoft Verified”. One definite use for a camera and my Raspberry Pi is a pretty standard one – I wanted to be able to use it to keep an eye on my 3d printer.

Designing the Interface

My usual design process for a component is to start defining the interface. I start small – rather than trying to think of every single possible thing that I (or others) might ever need, I just choose to define what I need for my use case. I also allow proof of concept code to influence me – it helps me move from purely theoretical requirements towards a practical and useable interface.

For my application, I wanted to initialise the camera and preview the display on different devices. I didn’t need to focus or save video (at this point anyway).

  • I knew the main thing I needed to do before previewing the output was to initialise the camera – from my previous work, I knew that UWP allows me to do this through the MediaCapture object asynchronously.
  • I also knew that I need to choose the camera that I wanted to initialise. Therefore it made sense to me that I needed to pass the camera’s device information to the initialisation method.
Task InitialiseCameraAsync(DeviceInformation cameraToInitialise);
  • In order to pass the camera device information, I knew I’d have to get this information somehow – for the phone I knew I’d probably need to get the back facing camera, but for the laptop or the Pi, I’d need to be able to get the first or default camera.
Task<DeviceInformation> GetCameraAtPanelLocation(Panel cameraLocation);
Task<DeviceInformation> GetDefaultCamera();
  • Finally for now, I knew that the MediaCapture object would certainly be needed. I actually really didn’t like the name “MediaCapture” – I thought this object should be named as a noun, rather than based on the verb “to capture”. I prefer the name of “ViewFinder”, because I think this is a more commonly understood term.
MediaCapture ViewFinder { get; set; }

So with all of this, I was in a position to define a draft interface for my UWP application.

namespace Magellanic.Camera.Interfaces
{
    public interface ICameraDevice : IDisposable
    {
        MediaCapture ViewFinder { get; set; }

        Task<DeviceInformation> GetCameraAtPanelLocation(Panel cameraLocation);

        Task<DeviceInformation> GetDefaultCamera();

        Task InitialiseCameraAsync(DeviceInformation cameraToInitialise);
    }
}

I’ve uploaded this project to GitHub, and I’ve created a NuGet project for this interface.

Implementing the interface

The next step was to create a library which implements this interface. I created a new Windows 10 UWP class library, and created a class called CameraDevice. I made this implement the interface I defined above, taking some of the implementation details from my previous post on how to use camera with a Windows phone.

public class CameraDevice : ICameraDevice
{
    public MediaCapture ViewFinder { get; set; }

    public void Dispose()
    {
        ViewFinder = null;
    }

    public async Task<DeviceInformation> GetCameraAtPanelLocation(Panel cameraPosition)
    {
        var cameraDevices = await GetCameraDevices();
        return cameraDevices.FirstOrDefault(c => c.EnclosureLocation?.Panel == cameraPosition);
    }

    public async Task<DeviceInformation> GetDefaultCamera()
    {
        var cameraDevices = await GetCameraDevices();
        return cameraDevices.FirstOrDefault();
    }

    public async Task InitialiseCameraAsync(DeviceInformation cameraToInitialise)
    {
        // create the MediaCapture object if it doesn't already exist
        ViewFinder = ViewFinder ?? new MediaCapture();
        await ViewFinder.InitializeAsync(
            new MediaCaptureInitializationSettings
            {
                VideoDeviceId = cameraToInitialise.Id
            });
    }

    private async Task<DeviceInformationCollection> GetCameraDevices()
    {
        return await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
    }
}

There’s not very much code here – this class is about allowing the user to choose a camera, and then allowing them to initialise it for use. I’ve uploaded this code to GitHub, and again released a NuGet package for it.

Building the UWP to access a camera

This part is the real proof of concept – can I write the same C# UWP code and deploy it to a 64-bit laptop, an ARM Windows phone, and an ARM Raspberry Pi 3?

I used VS2015 to create a new Windows 10 UWP Blank App. There were a few steps I needed to do:

  • I needed to change the capabilities in the app’s Package.appxmanifest to allow the UWP app to use the webcam and microphone features of the device. I’ve included the XML for this below.
  <DeviceCapability Name="webcam" />
  <DeviceCapability Name="microphone" />
  • I needed to modify the XAML of the MainPage.Xaml file to add a “CaptureElement”:
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <CaptureElement Name="PreviewControl" Stretch="Uniform"/>
</Grid>
  • I needed to install the NuGet package that I created earlier:

Install-Package Magellanic.Camera -Pre

  • Now that these were in place, I was able to add some events to the app’s MainPage.xaml.cs. All I wanted to do in this app was to initialise the camera preview asynchronously, so I knew the basic structure of the MainPage.xaml.cs would look like the code below:
public MainPage()
{
    this.InitializeComponent();
    Application.Current.Resuming += Application_Resuming;
    Application.Current.Suspending += Application_Suspending;
}

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    await InitialiseCameraPreview();
}

private async void Application_Resuming(object sender, object o)
{
    await InitialiseCameraPreview();
}

protected override void OnNavigatedFrom(NavigationEventArgs e) { }

private void Application_Suspending(object sender, SuspendingEventArgs e) { }

I coded the “InitialiseCameraPreview” method to initialise the camera, set the XAML source to a ViewFinder object, and then start previewing through the initialised ViewFinder. The only slight complication is that I try to get a rear facing camera first – and if that doesn’t work, I get the default device.

private CameraDevice _cameraDevice = new CameraDevice();

private async Task InitialiseCameraPreview()
{
    await _cameraDevice.InitialiseCameraAsync(await GetCamera());

    // Set the preview source for the CaptureElement
    PreviewControl.Source = _cameraDevice.ViewFinder;

    // Start viewing through the CaptureElement
    await _cameraDevice.ViewFinder.StartPreviewAsync();
}

private async Task<DeviceInformation> GetCamera()
{
    var rearCamera = await _cameraDevice.GetCameraAtPanelLocation(Windows.Devices.Enumeration.Panel.Back);
    var defaultCamera = await _cameraDevice.GetDefaultCamera();

    return rearCamera ?? defaultCamera;
}

So, given I had this application, it was time to try deploying it to the three devices.

Device 1 – my local machine

In VS2015, I set my configuration to be Release for x64, and started it on my local machine – this worked fine, showing the output from my laptop’s onboard webcam in an app window.

Device 2 – my Windows 10 Phone (Nokia 1520)

In VS2015, I set my configuration to be Release for ARM, and changed the deployment target to be “Device”. I connected my Windows Phone to my development machine using a micro USB cable, and deployed and ran the app – again, this worked fine, showing the output from the rear facing camera on screen.

Device 3 – my Raspberry Pi 3 and a Microsoft LifeCam Studio camera

I connected my LifeCam Studio device to a USB port on my Raspberry Pi, and then connected the Pi to my laptop via a micro USB cable to provide power. I allowed the device to boot up, using the Windows IoT client to view the Raspberry Pi’s desktop. In the screenshot below, you can see the LifeCam Studio listed as one of the attached devices.


In VS2015, I changed the deployment device to be a “Remote Machine” – this brought up the dialog where I have to select the machine to deploy to – I selected my Pi 3, which has the name minwinpc.


When I used VS2015 to deploy the app, the blue light on the webcam came on, and the Remote IoT Desktop app correctly previewed the output from the LifeCam Studio.


This is pretty amazing. I can use exactly the same codebase across 3 completely different device types, but which are all running Windows 10. Obviously the app I’ve developed is very simple – it only previews the output of a camera device – but it proves for me that the UWP is truly universal, not just for PCs and phones, but also for external IoT devices.


.net, 3d Printing, Making, Raspberry Pi 3, Robotics

3d printed robotic hand – Part #5, attaching the servos to fingers

Last time in this series, I verified that a servo would be a better way to control finger movement than using a solenoid. Since then:

  • I’ve been re-developing the base of the palm to hold servos, and
  • I’ve been researching how to control 4 servos using a single device, such as a Raspberry Pi.

Redesigning the palm

In my first attempt at powering the robotic hand, I had tried to fit in 4 bulky solenoids. This time, I’ve been trying to squeeze in four 9g Tower Pro servos. These are significantly smaller and lighter than the solenoids, but they present their own challenge. Whereas the main shaft of the solenoid retracted into its body, the servos control movement using a wiper blade, which sits outside the servo. There must be enough free space for this wiper blade to move freely.

I decided that the best way to do this was to put the servos on their sides, in stacks of two. I positioned the wipers on opposite sides. My current design for the palm is shown below:

  • The four knuckles are at the back of the diagram;
  • The two towers in the middle are to hold the four servos – I intend to secure the servos using a small plastic bar and three threaded bolts.
  • There is plenty of room towards the bottom of the palm to add another servo and mounting point for the thumb – but I’ve not designed this part yet.


I know it’s a little difficult to work out how the part above allows the knuckles to fit, and how it connects the servos to the fingers. I’ve included a couple of photos below from either side of the printed object which I hope will clear up how the parts connect together.



There are two different aspects to address – how the mechanical parts connect together, and how the electronics and programming work.

You can see it working so far in the embedded Vine below:


Getting everything on board the palm was pretty tight, as mentioned before. I connected the servo wipers to the fingers by linkages, which were bolted on. This was a very fiddly process. There’s a lot of friction in these linkages too.

Also, the servos are quite strong, but the fingers don’t have very much gripping power. I’m not sure how much I can do about this – the principle of moments is against me here.
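To put rough numbers on why the principle of moments is a problem, here’s a quick calculation. The torque figure and finger length below are assumptions for illustration (around 1.6 kg·cm is a commonly quoted figure for 9g hobby servos, and the fingertip distance is a guess), not measurements from my hand.

```csharp
using System;

class GripForceEstimate
{
    static void Main()
    {
        // Assumed holding torque for a 9g hobby servo (datasheet figures vary)
        double servoTorqueKgCm = 1.6;

        // Assumed distance from the knuckle pivot to the fingertip
        double fingerLengthCm = 8.0;

        // Principle of moments: force x distance = torque,
        // so the force available at the fingertip is torque / distance
        double gripForceKg = servoTorqueKgCm / fingerLengthCm;

        // Roughly 0.2 kg - which is why the fingers don't grip strongly
        Console.WriteLine($"Approximate fingertip force: {gripForceKg:F2} kg");
    }
}
```

In other words, even a servo that can hold 1.6 kg at 1 cm from its shaft only delivers a fraction of that at the end of a finger – the longer the finger, the weaker the grip.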

For the next version:

  • I’d like to try using bearings to reduce the friction in the rotating parts.
  • I need to find a better way to position the servos to allow more room.
  • I will make the fingers narrower and more rounded – I think that angling the knuckles so that the fingers weren’t just parallel was a good idea, but they clashed slightly when fully clenched shut.

Electronics and Software

I used the Raspberry Pi 3 and the Servo Hat that I researched in a previous post. This needed an external 6v supply to power the 4 servos, and I just used a supply I had in the house which transformed mains down to 6v. The Raspberry Pi and Hat are probably a bit big for any real application of this device – the Pi Zero might be better, although Windows 10 IoT Core isn’t available for this yet.

The other thing is a similar problem to the solenoids – right now, the finger is either extended, or clenched. This is an issue with the software, in that I haven’t programmed it so that I can regulate the speed of the fingers when they’re clenching.
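One way the software could regulate finger speed is to step the servo towards its target angle a little each tick, rather than jumping straight there. The sketch below is just an idea, not code I’m running yet – the setAngle callback stands in for whatever call the servo hat library actually exposes, and the tick duration of 20ms matches the standard 50Hz servo frame.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class FingerSpeedSketch
{
    // Move gradually from one angle to another at a controlled speed -
    // setAngle stands in for the servo hat's PWM call
    public static async Task MoveFingerAsync(
        Action<double> setAngle, double fromAngle, double toAngle,
        double degreesPerSecond, int tickMilliseconds = 20)
    {
        double degreesPerTick = degreesPerSecond * tickMilliseconds / 1000.0;
        double direction = toAngle >= fromAngle ? 1.0 : -1.0;
        double angle = fromAngle;

        while (Math.Abs(toAngle - angle) > degreesPerTick)
        {
            angle += direction * degreesPerTick;
            setAngle(angle);
            await Task.Delay(tickMilliseconds);
        }

        setAngle(toAngle); // finish exactly on the target angle
    }

    static async Task Main()
    {
        var angles = new List<double>();

        // Clench from 0 to 90 degrees at 90 degrees per second (about 1 second)
        await MoveFingerAsync(angles.Add, 0, 90, 90);

        Console.WriteLine($"Servo updated {angles.Count} times, final angle {angles[angles.Count - 1]}");
    }
}
```

Slowing the degreesPerSecond parameter down would give a gentler clench, which should also help with the gripping problem – a slower finger is less likely to bounce off the object it’s closing around.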

For the next version:

  • I’d like to re-write the software to control the speed of the fingers. This also means that I need some way of inputting what I want the speed to be. Right now I am not sure what that might be…an Xbox controller perhaps?
  • I’ll use 4 x 1.5v batteries instead of the external power supply to make the device more portable.


This second version of my robotic hand is much better than the first one – it’s a lot lighter, a lot smaller, and I have the ability to actually control the start and end positions of the fingers using software, rather than using springs to control the tensed and rested positions. I also need to work on the thumb – another good reason to try to make the mechanics a bit more compact.

Next time I’m going to re-design a lot of the 3d-printed parts. I’m a lot more familiar with the tools (like AutoDesk 123d Design), and I’ve learned a lot (from mistakes!) from the first couple of iterations.