.net, Cake, Continuous Integration, Raspberry Pi 3, UWP

Deploy a UWP application to a Windows 10 device from the command line with Cake

I’ve wanted to improve my continuous integration process for building, testing and deploying UWP applications for a while. For these UWP apps, I’ve been tied to using VS2017 for build and deploy operations – and VS2017 is great, but I’ve felt restricted by the ‘point and click’ nature of these operations.

Running automated tests for any .NET project is well documented, but until relatively recently I’ve not had a really good way to use a command line to:

  • build my UWP project and solution,
  • run tests for the solution,
  • build an .appxbundle file if the tests pass, and
  • deploy the .appxbundle to my Windows 10 device.

Trying to find out what’s going on under the hood is the kind of challenge that’s catnip for me, and this is my chance to share what I’ve learned with the community.

This is part of a series of posts about building and deploying different types of C# applications to different operating systems.

If you follow this blog, you’ll probably know I normally deploy apps to my Raspberry Pi 3, but the principles in here could be applied to other Windows 10 devices, such as an Xbox or Windows Phone.

Step 1: Create the demo UWP and test projects

I’ll keep the description of this bit quick – I’ll just use the UWP template in Visual Studio 2017 – it’s only a blank white screen but that’s ok for this demonstration.


I’ve also created an empty unit test project – again, the function isn’t important for this demonstration; we just need a project with a runnable unit test.


I’ve written a simple dummy ‘test’, shown below – it exists purely to demonstrate how Cake can run a unit test project written using MSTest:

using Microsoft.VisualStudio.TestTools.UnitTesting;
 
namespace UnitTestProject2
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            Assert.IsTrue(true);
        }
    }
}

Step 2: Let’s build our project and run the tests using Cake

Open a PowerShell prompt (I use the Package Manager Console in VS2017) and navigate to the UWP project folder. Now get the Cake bootstrapper script and an example Cake build file using the commands below:

Invoke-WebRequest http://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

Invoke-WebRequest https://raw.githubusercontent.com/cake-build/example/master/build.cake -OutFile build.cake

I edited the build.cake file to have the text below – this script cleans the binaries, restores the NuGet packages for the projects, builds them and runs the MSTests we created.

#tool nuget:?package=NUnit.ConsoleRunner&version=3.4.0
//////////////////////////////////////////////////////////////////////
// ARGUMENTS
//////////////////////////////////////////////////////////////////////

var target = Argument("target", "Default");
var configuration = Argument("configuration", "Release");

//////////////////////////////////////////////////////////////////////
// PREPARATION
//////////////////////////////////////////////////////////////////////

// Define directories.
var buildDir = Directory("./App3/bin") + Directory(configuration);

//////////////////////////////////////////////////////////////////////
// TASKS
//////////////////////////////////////////////////////////////////////

Task("Clean")
    .Does(() =>
{
    CleanDirectory(buildDir);
});

Task("Restore-NuGet-Packages")
    .IsDependentOn("Clean")
    .Does(() =>
{
    NuGetRestore("../App3.sln");
});

Task("Build")
    .IsDependentOn("Restore-NuGet-Packages")
    .Does(() =>
{
    if(IsRunningOnWindows())
    {
      // Use MSBuild
      MSBuild("../App3.sln", settings =>
        settings.SetConfiguration(configuration));
    }
    else
    {
      // Use XBuild
      XBuild("../App3.sln", settings =>
        settings.SetConfiguration(configuration));
    }
});

Task("Run-Unit-Tests")
    .IsDependentOn("Build")
    .Does(() =>
{
    MSTest("../**/bin/" + configuration + "/UnitTestProject2.dll");
});

//////////////////////////////////////////////////////////////////////
// TASK TARGETS
//////////////////////////////////////////////////////////////////////

Task("Default")
    .IsDependentOn("Run-Unit-Tests");

//////////////////////////////////////////////////////////////////////
// EXECUTION
//////////////////////////////////////////////////////////////////////

RunTarget(target);
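
With build.cake saved, the whole chain can be kicked off from the PowerShell prompt via the bootstrapper we downloaded earlier – assuming the standard bootstrapper’s defaults, it’s just:

.\build.ps1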

Cake’s built-in benchmarking shows the order in which the tasks are executed:

Task                   Duration
--------------------------------------------------
Clean                  00:00:00.0124995 
Restore-NuGet-Packages 00:00:03.5300892 
Build                  00:00:00.8472346 
Run-Unit-Tests         00:00:01.4200992 
Default                00:00:00.0016743 
--------------------------------------------------
Total:                 00:00:05.8115968

And obviously if any of these steps had failed (for example if a test failed), execution would stop at this point.

Step 3: Building an AppxBundle in Cake

If I wanted to build an AppxBundle for a UWP project from the command line, I’d run the command below:

MSBuild ..\App3\App3.csproj /p:AppxBundle=Always /p:AppxBundlePlatforms="x86|arm" /Verbosity:minimal

There are four arguments I’ve told MSBuild about:

  • the location of the .csproj file that I want to target;
  • that I want to build the AppxBundle;
  • that I want to target x86 and ARM platforms (ARM doesn’t work on its own);
  • and that I want to minimise the verbosity of the output logs.

I could use StartProcess to get Cake to run MSBuild in a task, but Cake already has methods for MSBuild (and many of its parameters) baked in. For those parameters which Cake doesn’t know about, it’s very easy to use the WithProperty fluent method to add the argument’s parameter and value. The code below shows how I can implement the command to build the AppxBundle in Cake’s C# syntax.

var applicationProjectFile = @"../App3/App3.csproj";
 
// ...

MSBuild(applicationProjectFile, new MSBuildSettings
    {
        Verbosity = Verbosity.Minimal
    }
    .WithProperty("AppxBundle", "Always")
    .WithProperty("AppxBundlePlatforms", "x86|arm")
);

After this code runs in a task, an AppxBundle is generated in a folder in the project with the path:

AppPackages\App3_1.0.0.0_Debug_Test\App3_1.0.0.0_x86_arm_Debug.appxbundle

The path and file name aren’t massively readable, and are also likely to change, so I wrote a short method to search the project directories and return the path of the first AppxBundle found.

private string FindFirstAppxBundlePath()
{
    // Search all the project directories for generated *.appxbundle files.
    var files = System.IO.Directory.GetFiles(@"..\", @"*.appxbundle", SearchOption.AllDirectories);
    
    if (files.Length > 0)
    {
        return files[0];
    }
    
    throw new System.Exception("No appxbundle found");
}
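
For context, here’s a minimal sketch of how the bundle build and this search method might be wired together into a task – the task name matches the Deploy-Appxbundle dependency shown later, though the exact wiring in your own script may differ:

var appxBundlePath = string.Empty;

Task("Build-Appxbundle")
    .IsDependentOn("Run-Unit-Tests")
    .Does(() =>
{
    // Build the x86/ARM AppxBundle for the UWP project.
    MSBuild(applicationProjectFile, new MSBuildSettings
        {
            Verbosity = Verbosity.Minimal
        }
        .WithProperty("AppxBundle", "Always")
        .WithProperty("AppxBundlePlatforms", "x86|arm")
    );

    // Record where the generated bundle landed so the deploy task can find it.
    appxBundlePath = FindFirstAppxBundlePath();
});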

Now that I have the path to the AppxBundle, I’m ready to deploy it to my Windows device.

Step 4: Deploying the AppxBundle

Microsoft have provided a command line tool in the Windows 10 SDK for deploying AppxBundles – this tool is called WinAppDeployCmd. The syntax used to deploy an AppxBundle is:

WinAppDeployCmd install -file "\MyApp.appxbundle" -ip 192.168.0.1

It’s very straightforward to use a command line tool with Cake – I’ve blogged about this before, including how to use StartProcess to call an executable which Cake’s context is aware of.

But what about command line tools which Cake doesn’t know about? It turns out that it’s easy to register tools within Cake’s context – you just need to know the path to the tool, and the code below shows how to add the UWP app deployment tool to the context:

Setup(context => {
    context.Tools.RegisterFile(@"C:\Program Files (x86)\Windows Kits\10\bin\x86\WinAppDeployCmd.exe");
});

If you don’t have this tool on your development machine or CI box, it might be because you don’t have the Windows 10 SDK installed.

So with this tool in Cake’s context, it’s very simple to create a dedicated task and pull the details of the tool out of the context for use with StartProcess, as shown below.

Task("Deploy-Appxbundle")
	.IsDependentOn("Build-Appxbundle")
	.Does(() =>
{
    FilePath deployTool = Context.Tools.Resolve("WinAppDeployCmd.exe");
 
    Information(appxBundlePath);
 
    var processSuccessCode = StartProcess(deployTool, new ProcessSettings {
        Arguments = new ProcessArgumentBuilder()
            .Append(@"install")
            .Append(@"-file")
            .Append(appxBundlePath)
            .Append(@"-ip")
            .Append(raspberryPiIpAddress)
        });
 
    if (processSuccessCode != 0)
    {
        throw new Exception("Deploy-Appxbundle: UWP application was not successfully deployed");
    }
});
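
A quick usage note: the standard Cake bootstrapper accepts a target parameter, so – assuming the stock build.ps1 – you can kick off the whole build-and-deploy chain by naming the final task:

.\build.ps1 -Target Deploy-Appxbundle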

And now we can run our Cake script to automatically build and deploy the UWP application – I’ve pasted the benchmarking statistics from Cake below.

Task                     Duration
--------------------------------------------------
Clean                    00:00:00.0821960
Restore-NuGet-Packages   00:00:09.7173174
Build                    00:00:01.5771689
Run-Unit-Tests           00:00:03.2204312
Build-Appxbundle         00:01:09.6506712
Deploy-Appxbundle        00:02:13.8439852
--------------------------------------------------
Total:                   00:03:38.0917699

And to prove it was actually deployed, here’s a screenshot of the list of apps on my Raspberry Pi (from the device portal) before running the script…


…and here’s one from after – you can see the UWP app was successfully deployed.


I’ve uploaded my project’s build.cake file into a public gist – you can copy this and change it to suit your particular project (I haven’t uploaded a full UWP project because sometimes people have issues with the *.pfx file).

Wrapping up

I’ve found it’s possible to build and deploy a UWP app using the command line, and beyond that it’s possible to integrate the build and deployment process into a Cake script. So even though I still create my application in VS2017 – and I’ll probably keep using VS2017 – it means that I have a much more structured and automated integration process.


About me: I regularly post about .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

.net, C# tip, UWP, Visual Studio

Testing your Windows App with Appium in Windows 10 and Visual Studio 2015

At Connect(); // 2016, Scott Hanselman’s keynote included a short description of a tool called Appium (presented by Stacey Doerr). This tool allows you to create and automate UI tests for Windows apps – not just UWP apps, but basically any app which runs on your Windows machine. Automated UI testing is definitely something that I’ve missed since moving from web development to UWP development, so I was quite excited to find a project that would help fill this gap.

As is often the case, getting started with new things is tricky – when I followed the current instructions from Microsoft, I found some errors occurred. That’s likely to be caused by my development machine’s set-up – but you might hit the same issue. In this post, I’ll describe the process I followed to get Appium working, and I’ll also document the error messages I found on the way.

I hope that this blog post becomes irrelevant soon and that this isn’t an issue affecting a lot of people.

Installing and Troubleshooting Appium

Step 1 – Install Node.js

Install Node.js from here.

Step 2 – Open a PowerShell prompt as Administrator, and install Appium

From an elevated PowerShell prompt, run the command:

npm install -g appium

When I ran this command, the following warnings were printed to the screen – however, I don’t think they’re anything to worry about:

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.12(node_modules\appium\node_modules\fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.0.15: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

Step 3 – From an elevated PowerShell prompt, run Appium

From an elevated PowerShell prompt, run the command:

appium

After a few seconds, the following text is printed to the screen.

Welcome to Appium v1.6.0
Appium REST http interface listener started on 0.0.0.0:4723

At this point I tried to run the tests in the Sample Calculator App provided by Appium on GitHub – found here. I used Visual Studio to run these tests, but found all 5 tests failed, and the following error was printed to the PowerShell prompt.

[Appium] Creating new WindowsDriver session
[Appium] Capabilities:
[Appium]   app: 'Microsoft.WindowsCalculator_8wekyb3d8bbwe!App'
[Appium]   platformName: 'Windows'
[Appium]   deviceName: 'WindowsPC'
[BaseDriver] The following capabilities were provided, but are not recognized by appium: app.
[BaseDriver] Session created with session id: dcfce8e7-9615-4da1-afc5-9fa2097673ed
[WinAppDriver] Verifying WinAppDriver is installed with correct checksum
[debug] [WinAppDriver] Deleting WinAppDriver session
[MJSONWP] Encountered internal error running command: 
	Error: Could not verify WinAppDriver install; re-run install
    at WinAppDriver.start$ (lib/winappdriver.js:35:13)
    at tryCatch (C:\Users\Jeremy\AppData\Roaming\npm\node_modules\appium\node_modules\babel-runtime\regenerator\runtime.js:67:40)
    at GeneratorFunctionPrototype.invoke [as _invoke] (C:\Users\Jeremy\AppData\Roaming\npm\node_modules\appium\node_modules\babel-runtime\regenerator\runtime.js:315:22)
    at GeneratorFunctionPrototype.prototype.(anonymous function) [as next] (C:\Users\Jeremy\AppData\Roaming\npm\node_modules\appium\node_modules\babel-runtime\regenerator\runtime.js:100:21)
    at GeneratorFunctionPrototype.invoke (C:\Users\Jeremy\AppData\Roaming\npm\node_modules\appium\node_modules\babel-runtime\regenerator\runtime.js:136:37)

For some reason, on my machine the WinAppDriver was not installed correctly during the installation of Appium.

Step 4 – Manually install v0.5-beta of the WinAppDriver

This is pretty easy to fix – we can just grab the WinAppDriver installer from its GitHub site. But for version 1.6.0 of Appium, I found that it was important to select the correct version of WinAppDriver – specifically v0.5-beta, released on September 16 2016. Higher versions did not work for me with Appium v1.6.0.

Step 5 – Restart Appium from an elevated PowerShell prompt

Installing WinAppDriver v0.5-beta was a pretty simple process – I just double-clicked on the file and selected all the default options. Then I repeated Step 3 and restarted Appium from the elevated PowerShell prompt. Again, after a few seconds, the same message appeared:

Welcome to Appium v1.6.0
Appium REST http interface listener started on 0.0.0.0:4723

This time, when I ran the tests for the Sample Calculator App from GitHub, they all passed. Also, the PowerShell prompt showed no errors – instead of saying that it couldn’t verify the WinAppDriver install, I got the message below:

[WinAppDriver] Verifying WinAppDriver is installed with correct checksum
[debug] [WinAppDriver] WinAppDriver changed state to 'starting'
[WinAppDriver] Killing any old WinAppDrivers, running: FOR /F "usebackq tokens=5" %a in (`netstat -nao ^| findstr /R /C:"4823 "`) do (FOR /F "usebackq" %b in (`TASKLIST /FI "PID eq %a" ^| findstr /I winappdriver.exe`) do (IF NOT %b=="" TASKKILL /F /PID %a))
[WinAppDriver] No old WinAppDrivers seemed to exist
[WinAppDriver] Spawning winappdriver with: undefined 4823/wd/hub
[WinAppDriver] [STDOUT] Windows Application Driver Beta listening for requests at: http://127.0.0.1:4823/wd/hub
[debug] [WinAppDriver] WinAppDriver changed state to 'online'

I was able to see the standard Windows Calculator appear, and a series of automated UI tests were carried out on the app.

How do I get automation information for these apps?

When you look at the Sample Calculator App and the basic scenarios for testing, you’ll see some code with some strange constant values – such as those in the snippet below.

DesiredCapabilities appCapabilities = new DesiredCapabilities();
appCapabilities.SetCapability("app", "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App");
appCapabilities.SetCapability("platformName", "Windows");
appCapabilities.SetCapability("deviceName", "WindowsPC");
CalculatorSession = new RemoteWebDriver(new Uri(WindowsApplicationDriverUrl), appCapabilities);
Assert.IsNotNull(CalculatorSession);
CalculatorSession.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(2));
 
// Make sure we're in standard mode
CalculatorSession.FindElementByXPath("//Button[starts-with(@Name, \"Menu\")]").Click();
OriginalCalculatorMode = CalculatorSession.FindElementByXPath("//List[@AutomationId=\"FlyoutNav\"]//ListItem[@IsSelected=\"True\"]").Text;
CalculatorSession.FindElementByXPath("//ListItem[@Name=\"Standard Calculator\"]").Click();

The code above shows that the test looks for an app with identifier:

“Microsoft.WindowsCalculator_8wekyb3d8bbwe!App”

It’s obvious this is for the Microsoft Windows Calculator app – but most of us won’t recognise the strange-looking code appended to the end of this string. This is the application’s automation identifier.

In order to locate this identifier, start the standard Calculator application from within Windows (open a Run prompt and enter “Calc”).

There’s a tool shipped with Visual Studio 2015 called “Inspect” – it should normally be available at this location:

C:\Program Files (x86)\Windows Kits\10\bin\x86

Start Inspect.exe from the directory specified above. When you run the Inspect application, you’ll get a huge amount of information about the objects currently being managed by Windows 10. When you drill into the tree view on the left side of the screen to see running applications, you can select “Calculator”, and on the right-hand side a value for “AutomationId” will be shown – I’ve highlighted it in red below.


The identifiers for the other items – menus, buttons, and display elements – can also be obtained from this view by selecting the corresponding element. A particularly useful property is “Legacy|Accessible:Name” when identifying elements using the FindElementByXPath method.
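
For example, here’s a hedged snippet showing how such a name might be used in an XPath query – it assumes Inspect reports “Five” as the accessible name of the calculator’s 5 key, so verify the value with Inspect first:

// Hypothetical example: find and click a button via its accessible name.
// (Assumes Inspect reports "Five" as the Legacy|Accessible:Name of the 5 key.)
CalculatorSession.FindElementByXPath("//Button[@Name=\"Five\"]").Click();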

Conclusion

I hope this post is useful for anyone interested in automating UI tests for Windows apps, particularly if you’re having problems getting Appium to work. There are some really useful sample apps on GitHub from Appium – I found coding for Windows apps a bit confusing to start off with, but with a few key bits of information – like using the Inspect tool – you can start to tie together how the sample apps were written and how they work. This should get you up and running with your own automated UI test code. I’m excited about the opportunities this tool gives me to improve the quality of my applications – I hope this post helps you get started too.

.net, HoloLens, UWP, Windows Store Apps

Coding for the HoloLens with Unity 5 – Part #8 – Adding an image to the HUD (and then changing it in C# code)

Last time, we looked at creating a simple HUD for the HoloLens, and displayed text with different colours in each of the corners of the viewable screen.

Obviously you won’t always want to just have text on your HUD – so this time we’re going to look at a very simple extension of this – adding an image to the HUD.

Let’s pick this up from where we left off in the last post. We’ve already created a HUD with text in the four corners of the viewable screen.


Say we want to add some kind of visual cue – for example, a status icon to show if there’s a WiFi connection.

Note that I’m not going to write code here to test if there actually is a WiFi connection for the HoloLens – I’m just looking at visual cues, with this as a possible application.

I’m going to delete the red Text UI element from the bottom right of the application, as this is where I’ve decided I want my image to appear.


Now I want to add a new UI element to the canvas – specifically a RawImage element. You can select this from the context menu, as shown below.


This will just add a new blank white image to your canvas, as shown in the scene below.


We obviously need to adjust this raw image to have the correct position, size, and source. We can do all of this in the Inspector panel. The panel below shows the defaults that my version of Unity gives.


First, I’d like to change the position of the image to be in the bottom right of the canvas. I can do this by clicking on the position icon (the part that looks like a crosshairs in the top left of the image above). Once I’ve clicked on it, I hit “Alt” on the keyboard to get an alternative menu, shown below.


Using the mouse, I select the icon – highlighted with a red box above – which positions the image in the bottom right of the canvas.

Now, I need to select an image to add – I’ve an image of a cloud which I’ll use to signify a connection to the cloud. This image is 100px by 100px, it’s a PNG, and it has a transparent background.

First I create a new folder called “Resources” under Assets in the Unity Project view. Then I right-click, select “Import New Asset…” and browse to where I have the cloud image saved.


Now I select the RawImage object which is stored under the Main Canvas object so I can see the RawImage Inspector panel. Right now, the Texture property of the RawImage is empty, but next I’ll drag the image from the Resources folder onto the Texture property.

The image below shows the cloud image rendered on our HUD canvas.


Now if we build this and deploy to the emulator, you’ll see the cloud image in your HUD.


Changing the image in code

Sometimes we’ll want to change our image in code, as dragging the image from the Resources folder to the Inspector panel at design time is not flexible enough.

Fortunately, doing this in code is pretty straightforward – we just have to define which image (or in Unity’s terms, which “Texture”) we want to display, and set the RawImage’s texture to it.

First, I add a new GameObject to the scene called “ScriptManagerCollection”.

Then I add another image to my Resources folder, called “NotConnected.png” – this image is what I’ll use when the WiFi is not connected.

Next, I add a new C# script to the Assets folder called “ImageManager”. I open ImageManager in Visual Studio, and add the code below.

using UnityEngine;
using UnityEngine.VR.WSA.Input;
using UnityEngine.UI;
 
public class ImageManager : MonoBehaviour {
 
    GestureRecognizer recognizer;
 
    public RawImage wifiConnection;
 
    // Use this for initialization
    void Start () {
        recognizer = new GestureRecognizer();
 
        recognizer.TappedEvent += Recognizer_TappedEvent;
 
        recognizer.StartCapturingGestures();
    }
 
    private void Recognizer_TappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        // Load the replacement texture from the Resources folder (no ".png" extension needed).
        var myGUITexture = (Texture2D)Resources.Load("NotConnected");
 
        wifiConnection.texture = myGUITexture;
    }
}

You can see that I’ve written some simple code which recognises a tap gesture, and changes the source of the wifiConnection image to be “NotConnected.png”.

Note how I’ve not had to add the “.png” extension to the name of the image.

I dragged this script to the ScriptManagerCollection GameObject in Unity, and selected this GameObject. The Inspector updates, and shows a public RawImage property called “Wifi Connection”. Drag the RawImage object from the canvas in the Hierarchy window to this property.


Now I can build this project, and run it in the HoloLens emulator.

So when the application runs first, it shows the cloud icon in the lower right of the screen:


And if I emulate a click gesture, the image changes to the “Not Connected” cloud icon.


Conclusion

So we can now integrate images – and changing images – into our HUD for the HoloLens. Next time I’m going to look at creating a complete application for the HoloLens using some of the tutorials I’ve created over the last few weeks.


.net, HoloLens, Unity, UWP

Coding for the HoloLens with Unity 5 – Part #7 – Creating a basic HUD

One of the elements of augmented reality that’s probably most widely known is a HUD – this is a Heads Up Display. If you’ve played an FPS computer game you’ll be familiar with this as the area of the screen that shows your health, or score, or the number of lives you have left in the game.

This isn’t really a hologram as such, but it’s still something we can develop for the HoloLens. The key is making sure the artefacts rendered by the HoloLens are kept in the same position in front of you – and essentially, it means making those artefacts child objects of the camera.

Let’s have a closer look.

Keeping an object in one place

I’ll demonstrate the principle of keeping an object in one place in the steps below – later we’ll look at how to render text.

First, create a new project in Unity for the HoloLens (I’ve previously described how to do this here).


Next, right click on the Main Camera object in the Hierarchy. Add a new Cube GameObject.


Change the position of this Cube object so that it’s 2m in front of you, and scale it to 0.1 of its original size. This should be a white cube, sitting 2m in front of the camera, which has sides of about 10cm in length.


If you now build this project and deploy it to the emulator, you’ll see a white cube as described above. If you try to move around in the emulator, nothing will happen (apparently). This is because the cube is in a static position in front of the camera, so even though you are moving, the cube moves with you.


Let’s prove this by adding another object. This time, add another cube to the main Hierarchy panel, but not as a child of the camera object. Make it 2m in front of you and 1m to the left, resize it to 0.1 scale, and add a material to colour the cube red (I write about how to change an object’s colour here).


Again, build this project, deploy to the emulator, and try to move around. This time you’ll be able to look around the red cube and move your position relative to it, but the white cube will stay in the same spot.


If you have a HoloLens, try deploying to the HoloLens and you’ll be able to see this more clearly – whereas you can walk around the red cube, the white cube stays still in front of you.

A more useful example

So having a white cube as a HUD isn’t very useful – but that was just to demonstrate how to keep an object in a static position in front of you. Now, let’s look at adding some text to our HUD.

Open the HUD project again, and remove the white and red cubes we created in the last step.

Now add a canvas object as a child of the Main Camera – this is available by right clicking on the Main Camera, selecting UI from the context menu, and then selecting Canvas from the fly-out menu.

  • Position the Canvas to be 1m in front of you – meaning change the Z position to be 1.
  • Change the width to 460, and height to 280.
  • Change the scale to be 0.001 for the X, Y and Z axes.
  • Also, change the Dynamic Pixels per Unit in the Canvas Scaler component from 1 to 10 (this makes the text we’ll add later less blurry).
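
If you’d rather script this set-up than do it in the Inspector, below is a minimal sketch of the same configuration in code – it assumes a public “canvas” field that you point at the camera’s child Canvas, and note that CanvasScaler lives in the UnityEngine.UI namespace:

using UnityEngine;
using UnityEngine.UI;

// A sketch of the canvas set-up above, done from code instead of the Inspector.
public class HudCanvasSetup : MonoBehaviour
{
    public Canvas canvas; // drag the camera's child Canvas here in the Inspector

    void Start()
    {
        var rect = canvas.GetComponent<RectTransform>();
        rect.localPosition = new Vector3(0f, 0f, 1f);          // 1m in front of the camera
        rect.sizeDelta = new Vector2(460f, 280f);              // width 460, height 280
        rect.localScale = new Vector3(0.001f, 0.001f, 0.001f); // scale 0.001 on each axis

        var scaler = canvas.GetComponent<CanvasScaler>();
        scaler.dynamicPixelsPerUnit = 10f;                     // makes the text less blurry
    }
}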


Next, add a Text GUI object as a child of this Canvas object (this is also available from the same UI menu).

  • Position this to be in the top left of the canvas using the Paragraph -> Alignment options.
  • Change the text to “Top Left”.
  • Change the font size to 14.
  • Change the colour to be something distinctive. I’ve used green in my example.
  • Make sure the positions in the X, Y and Z axes are all zero, and that the scales are all set to 1.
  • Finally, in the Text object’s Rect Transform component, ensure that the object is set to stretch in both vertical and horizontal directions.


Now build your project, and deploy it to the emulator.

This time, you should see some green text floating in the top left corner of your field of view.


If you can’t see this text, change the position from top left to centre – it may be that you need to adjust the canvas dimensions to be different from mine.

You can take this a bit further as I’ve shown in the picture below, where you can align text to different positions on the canvas.


This is a very powerful technique – you can use scripts to adjust this text depending on actions in your surroundings. Also, you aren’t constrained to just using text objects – you could use an image, or something else.
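
As a simple illustration of that, here’s a hedged sketch of a script that updates a HUD Text element at runtime – the class name, field name and displayed value are just examples:

using UnityEngine;
using UnityEngine.UI;

// Hypothetical example: update a HUD Text element from a script at runtime.
public class HudTextUpdater : MonoBehaviour
{
    public Text statusText; // drag one of the canvas Text objects here in the Inspector

    void Update()
    {
        // Replace this with whatever status you want to surface on the HUD.
        statusText.text = "Running for " + Time.time.ToString("F0") + "s";
    }
}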

Hopefully this is useful inspiration in creating a HUD for your HoloLens.

.net, HoloLens, Unity, UWP

Coding for the HoloLens with Unity 5 – Part #5: Creating holograms from prefabs at runtime using gestures

Up to now in this series, I’ve added holograms to my scene within Unity. But it’s much more useful to be able to create holograms at run-time. This tutorial will show how to create a pre-fabricated object (called a prefab) in Unity, and how to use a simple tap gesture to add this hologram prefab to your scene.

Creating a pre-fabricated object in Unity

Unity has an asset type called a prefab. This allows a GameObject to be created as a kind of global project asset which can be re-used numerous times in the project. Changing the prefab asset in one place also changes all instantiated occurrences of the asset in your scene.

Let’s create a simple object in a project hierarchy, and convert it to a prefab asset.

First, in a Unity project, right click on the Hierarchy surface and create a new Cube 3d object – call it “Cube”.

Next, right click on the Assets node in the Project surface, and create a new material (the picture below shows how to select Material from the context menu). Call the material “Blue”.


For this material, select the Albedo option, and from the colour chooser palette which appears, select a blue colour.


Now drag this material onto the “Cube” object in the Hierarchy view. The cube which is in the centre of the scene should now turn to a blue colour.


Next, right click on the Assets node in the Project view, and select the Create item in the context menu. From this, select the Prefab option.


Call this prefab object “BlueCube”. This will have the default icon of a white box.


If you now click on the Cube in the Hierarchy view, you can drag this onto the BlueCube prefab object. The icon will change from a white box to a blue box, previewing what the object looks like in our virtual world.


You have now created a prefab object – whenever you want to create a BlueCube object like this in your scene, you can just use the prefab object, instead of having to create a cube and assign a material to it each time. Additionally, if you want to change the object in some way – for example to change the size, orientation, or shade of blue – you can change the prefab object, and this change will be reflected across all instantiations of this prefab.

How can we create a prefab hologram at runtime?

Let’s start by deleting the cube object from the scene. Either click on the cube in the scene, or click on the “Cube” object in the Hierarchy view, and hit delete. The scene will now be empty.

Now let’s create a new C# script to help us manage creating holograms. Right click on the Assets panel, and create a new C# script called “CubeManager”. Now double click on this script to open up your preferred script editor (e.g. MonoDevelop or Visual Studio).

There are two things I want to do in this script – I need to capture a tap gesture, and when I detect a tap, I want to instantiate a “BlueCube” object 2m in front of where I’m presently looking.

First, add a public member GameObject variable to the CubeManager script called blueCubePrefab, as shown in the code below:

public class CubeManager : MonoBehaviour
{
    public GameObject blueCubePrefab;
}

Now we have to let our scene know about this script. Switch back to Unity, and right click on the Hierarchy panel – from the context menu, select “Create Empty”. Give this object the name “BlueCubeCollection”.

Drag the “CubeManager” C# script to the new “BlueCubeCollection” object. On the Inspector panel for the BlueCubeCollection object, you’ll see a new script property called “Cube Manager”.


Notice in the diagram above that the Cube Manager script has a variable called “Blue Cube Prefab”. Unity has created this property based on the public GameObject variable called “blueCubePrefab” in the C# script.

But also notice that the property has a value of “None” – whereas there’s a declaration, there’s no instantiation. We can fix this by dragging the BlueCube prefab we created earlier onto the textbox that says “None (Game Object)”. When you do this, the panel will change to look like the diagram below – notice that it now says “BlueCube” below.


Let’s go back to the C# script. In order to recognise gestures like a tap, the script needs to have a GestureRecognizer object. This object has an event called “TappedEvent”, and when this event is registered, we can start capturing gestures. The code below shows how this works.

using UnityEngine;
using UnityEngine.VR.WSA.Input;

public class CubeManager : MonoBehaviour
{
    public GameObject blueCubePrefab;
 
    GestureRecognizer recognizer;
 
    void Start()
    {
        recognizer = new GestureRecognizer();
 
        recognizer.TappedEvent += Recognizer_TappedEvent;
 
        recognizer.StartCapturingGestures();
    }
 
    private void Recognizer_TappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        // process the event.
    }
}

The last part of this is instantiating the new BlueCube object at a specific location. The key to this is the parameter headRay in the Recognizer_TappedEvent above. This headRay object has a couple of properties, which will help us position the new object – the properties are direction and origin. These are both of the type Vector3 – this object type is used for passing positions and directions.

  • headRay.origin gives us the position that the HoloLens wearer is at.
  • headRay.direction gives us the direction that the HoloLens wearer is looking.

Therefore, if we want to get the position 2m in front of the HoloLens, we can multiply the direction by 2, and add it to the origin value, like the code below:

var direction = headRay.direction;
 
var origin = headRay.origin;
 
var position = origin + direction * 2.0f;
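
As an aside, Unity’s Ray type offers a convenience method that does this arithmetic for you – a hedged equivalent of the three lines above:

// Equivalent sketch using Ray.GetPoint, which returns origin + direction * distance.
var position = headRay.GetPoint(2.0f);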

So now we have the position where we want to place our hologram.

Finally, we just need the code to instantiate the blueCubePrefab hologram. Fortunately, this is very easy.

Instantiate(blueCubePrefab, position, Quaternion.identity);

This call places an instance of the blueCubePrefab at the Vector3 position defined by position. The Quaternion.identity object simply means that the object is in the default rotation.

So the complete code for the CubeManager is below:

using UnityEngine;
using UnityEngine.VR.WSA.Input;

public class CubeManager : MonoBehaviour
{
    public GameObject blueCubePrefab;
 
    GestureRecognizer recognizer;
 
    void Start()
    {
        recognizer = new GestureRecognizer();
 
        recognizer.TappedEvent += Recognizer_TappedEvent;
 
        recognizer.StartCapturingGestures();
    }
 
    private void Recognizer_TappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        var direction = headRay.direction;
 
        var origin = headRay.origin;
 
        var position = origin + direction * 2.0f;
 
        Instantiate(blueCubePrefab, position, Quaternion.identity);
    }
}

Now we can build and run the project using the settings defined in my other post here. After running the project in Visual Studio through the HoloLens emulator – which initially showed an empty scene – I created a few boxes (using the Enter key to simulate an air-tap). I’ve navigated to the side to show these holograms.


So now we know how to create holograms at run-time from a prefabricated object using a gesture.


.net, HoloLens, Unity, UWP

Coding for the HoloLens with Unity 5 – Part #4: Preparing the Unity project for source code management

This will be a super short post, but something that I thought deserved its own post.

One thing I’ve noticed with Unity projects is that, by default, some of the files are created as binary files – for example, files in the “ProjectSettings” folder. This isn’t great for me if I want to commit files to GitHub or Subversion. I prefer to check in text files, so if a file changes, at least I can understand what changed.

To ensure files are generated as text, open the Unity editor, and go to Edit -> Project Settings -> Editor, which will open an Inspector panel in the Unity editor (shown below).


I’ve highlighted the values I changed in red above:

  • I’ve changed the default version control mode from Hidden Meta Files to “Visible Meta Files” – this means each asset (even binary) has a text file containing meta data, which is available through the file system. More information is available at this link.
  • I’ve also changed the Asset Serialization Mode from “Mixed” to “Force Text”.

After restarting Unity, you should notice that project settings and assets (such as prefabs) are now text files. I think this is more suitable for management in a code versioning system.

The only folders that I commit in my project are the “Assets”, “Library” and “ProjectSettings” folders. I choose to add all other folders and files to the ignore list.
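
For illustration, here’s a minimal sketch of what that ignore list might look like as a Git .gitignore – the folder names are assumptions based on a typical Unity HoloLens project (including the “App” build folder used elsewhere in this series):

# Keep Assets, Library and ProjectSettings; ignore folders Unity or the build regenerates
/Temp/
/Obj/
/App/
*.csproj
*.sln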

.net, HoloLens, Unity, UWP

Coding for the HoloLens with Unity 5 – Part #3: Deploying to the emulator (and using it)

Last time, I created a simple Hello World project in Unity 5 for the HoloLens. This time, I’ll deploy it to the HoloLens Emulator.

There are a few other things I’ll look at as well – how to use the emulator from your keyboard, and some hints on setting up your Unity HoloLens project for a code versioning system.

Building your Unity HoloLens project

Now that we have created and saved our Unity project, we’re ready to deploy it to our HoloLens. Inside the Unity environment, open the File menu and select “Build Settings…”.


This will open a screen similar to the one below. I’ve modified some of the settings:

  • I clicked on the “Add Open Scenes” to add the default “Hello World” scene;
  • I selected the platform “Windows Store”;
  • I’ve changed the SDK to “Universal 10”;
  • I’ve changed the UWP Build Type to D3D;
  • Finally I’ve checked the box beside “Unity C# Projects”.


Next, click on the button saying “Player Settings…” – the Inspector view will appear in the main Unity development environment.


The most important property page is “Other Settings”. Make sure that the “Virtual Reality Supported” box is ticked, and that the value under “Virtual Reality SDKs” is “Windows Holographic” (I’ve highlighted this in red below).


Now click on the “Build” button on the “Build Settings” window, which should still be on top of the Unity IDE. You’ll immediately be shown a folder dialog box, asking you to select a folder in which to create the Visual Studio project. I normally choose to create a folder named “App” alongside the other folders in the Unity Project. Choose this folder and hit the “Select Folder” button.


A few windows will pop up showing progress bars, and eventually a Windows Explorer window appears with the folder “App” selected. I opened the “App” folder, and double clicked on the “HelloWorld.sln” file, to open the solution in Visual Studio 2015.

Deploying to the emulator

When Visual Studio opens, ensure that the “Release” configuration is chosen, targeting the x86 architecture. I’ve selected the HoloLens Emulator as the deployment device, rather than a physical HoloLens.


After clicking on the button to start the HoloLens Emulator, the emulator starts but I see a warning message (shown below).


I select “Continue Debugging”, and the application starts in the emulator, showing a sphere orbiting the planet.


Using the emulator

The emulator is obviously not a perfect substitute for the real HoloLens hardware, though it’s still pretty good. One of the things I initially found difficult was how to navigate around the virtual world. If you’re familiar with gaming on a keyboard you’ll probably find it quite easy – forward, back, left and right can be controlled by W, S, A and D, and looking up, down, left and right can be controlled by the corresponding keyboard arrow keys.

I personally don’t really like using the keyboard to control the emulator, and fortunately an alternative option is to connect an Xbox 360 controller to my PC by USB – I find the game controller a much easier way to navigate in the virtual world.

There’s a set of instructions on how to use the emulator at this link, and full details on advanced emulator control at this link.

The emulator even offers an online device portal, available by clicking on the globe icon in the list of icons on the top right of the emulator – this just opens up a site in a browser, as shown below. There’s more information on how to use the device portal at this link.


All the information and functions available through the device portal are also available through a RESTful API – so for example, to see the device battery status, you can browse to:

http://<your device’s IP address>/api/power/battery

and the response will be a JSON representation of your battery status.

Full details of the RESTful API are available at this link.
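
As a hedged example of calling this API from PowerShell (the address below is a placeholder – substitute your own device’s IP address):

# Hypothetical example – set this to your own device's IP address
$deviceAddress = "192.168.0.15"
Invoke-RestMethod -Uri "http://$deviceAddress/api/power/battery"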

Wrapping up

So in the last few posts, we’ve created a simple Hello World project in Unity 5 for the HoloLens, and deployed it to the HoloLens emulator.

We’ve also looked at how to use the emulator, and how to see its Device Portal.

Next time, I’ll look at preparing our code for a code versioning system such as Subversion or GitHub.