.net, HoloLens

Coding for the HoloLens with Unity 5 – building a subtitling app

Last time I said I’d create a complete app for the HoloLens – I’ve been transcribing the steps in a written post, and found that trying to describe a long and quite complex procedure is difficult and not really that helpful to readers.

So instead, I’ve started a YouTube channel and uploaded some short videos describing the process there.

The app that I’ve built uses the HoloLens’s microphone to detect speech, and uses the DictationRecognizer component in C# to convert this speech into text. I then display this text in a subtitle box on a HUD which the HoloLens wearer can see. The point of this app – I call it “Holo Listener” – is to assist people who may have some hearing difficulties, and can use the HoloLens to get real-time subtitles for a conversation.
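
For a flavour of the approach (the videos walk through the real code – this is just a rough sketch, and names like SubtitleManager and subtitleText are my own invention), the core of the app hangs off Unity's DictationRecognizer something like this:

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Windows.Speech;

public class SubtitleManager : MonoBehaviour
{
    // Hypothetical UI Text element acting as the subtitle box - assigned in the Inspector.
    public Text subtitleText;

    DictationRecognizer dictationRecognizer;

    void Start()
    {
        dictationRecognizer = new DictationRecognizer();

        // Fires repeatedly while a phrase is still being spoken - good for "live" subtitles.
        dictationRecognizer.DictationHypothesis += (text) =>
        {
            subtitleText.text = text;
        };

        // Fires once the recognizer settles on a final phrase.
        dictationRecognizer.DictationResult += (text, confidence) =>
        {
            subtitleText.text = text;
        };

        dictationRecognizer.Start();
    }
}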

Obviously there are some limitations – the speech recognition software is good but not perfect, and the app works best in a quiet environment where the speaker is close to the person wearing the HoloLens.

Anyway, I hope these videos are a helpful demonstration of creating and developing an app from end-to-end.

Part #1 – Creating the app in Unity and setting up the UI components of the HUD

Part #2 – Switching the app on and off using the Tap gesture

Part #3 – Using the DictationRecognizer to convert speech and show subtitles

.net, HoloLens, UWP, Windows Store Apps

Coding for the HoloLens with Unity 5 – Part #8 – Adding an image to the HUD (and then changing it in C# code)

Last time, we looked at creating a simple HUD for the HoloLens, and displayed text with different colours in each of the corners of the viewable screen.

Obviously you won’t always want to just have text on your HUD – so this time we’re going to look at a very simple extension of this – adding an image to the HUD.

Let’s pick this up from where we left the last post. We’ve already created a HUD with text in the four corners, as shown in the emulator below.

screenshot.1472339560

Say we want to add some kind of visual cue – for example, a status icon to show if there’s a WiFi connection.

Note that I’m not going to write code here to test if there actually is a WiFi connection for the HoloLens – I’m just looking at visual cues, with this as a possible application.

I’m going to delete the red Text UI element from the bottom right of the application, as this is where I’ve decided I want my image to appear.

screenshot.1473006215.png

Now I want to add a new UI element to the canvas – specifically a RawImage element. You can select this from the context menu, as shown below.

screenshot.1473006281.png

This will just add a new blank white image to your canvas, as shown in the scene below.

screenshot.1473006376


We obviously need to give this raw image the correct position, size, and source. We can do all of this in the Inspector panel. The panel below shows the defaults that my version of Unity gives.

screenshot.1473006620

First, I’d like to change the position of the image to be in the bottom right of the canvas. I can do this by clicking on the position icon (the part that looks like a crosshairs in the top left of the image above). Once I’ve clicked on it, I hold “Alt” on the keyboard to reveal an alternative menu, shown below.

screenshot.1473006537

Using the mouse, I select the icon – highlighted with a red box above – which positions the image in the bottom right of the canvas.

Now, I need to select an image to add – I’ve an image of a cloud which I’ll use to signify a connection to the cloud. This image is 100px by 100px, it’s a PNG, and it has a transparent background.

First I create a new folder called “Resources” under Assets in the Unity Project view. Then I right-click, select “Import New Asset…” and browse to where I have the cloud image saved.

screenshot.1473007623

Now I select the RawImage object which is stored under the Main Canvas object so I can see the RawImage Inspector panel. Right now, the Texture property of the RawImage is empty, but next I’ll drag the image from the Resources folder onto the Texture property.

The image below shows the cloud image rendered on our HUD canvas.

screenshot.1473007987

Now if you build this and deploy to the emulator, you’ll see the cloud image in your HUD.

screenshot.1473008573

Changing the image in code

Sometimes we’ll want to change our image in code, as dragging the image from the Resources folder to the Inspector panel at design time is not flexible enough.

Fortunately, doing this in code is pretty straightforward – we just have to define which image (or in Unity’s terms, which “Texture”) we want to display, and set the RawImage’s texture to it.

First, I add a new GameObject to the scene called “ScriptManagerCollection”.

Then I add another image to my Resources folder, called “NotConnected.png” – this image is what I’ll use when the WiFi is not connected.

Next, I add a new C# script to the Assets called “ImageManager”. I open ImageManager in Visual Studio, and add the code below.

using UnityEngine;
using UnityEngine.VR.WSA.Input;
using UnityEngine.UI;
 
public class ImageManager : MonoBehaviour {

    GestureRecognizer recognizer;
 
    // The RawImage on the HUD canvas - assigned in the Inspector.
    public RawImage wifiConnection;
 
    // Use this for initialization
    void Start () {
        recognizer = new GestureRecognizer();
 
        recognizer.TappedEvent += Recognizer_TappedEvent;
 
        recognizer.StartCapturingGestures();
    }

    private void Recognizer_TappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        // Load the new texture from the Resources folder - note there's no ".png" extension.
        var myGUITexture = (Texture2D)Resources.Load("NotConnected");
 
        wifiConnection.texture = myGUITexture;
    }
}

You can see that I’ve written some simple code which recognises a tap gesture, and changes the source of the wifiConnection image to be “NotConnected.png”.

Note how I’ve not had to add the “.png” extension to the name of the image.

I dragged this script to the ScriptManagerCollection GameObject in Unity, and selected this GameObject. The Inspector updated to show a public RawImage property called “Wifi Connection”. I then dragged the RawImage object from the canvas in the Hierarchy window onto this property.

screenshot.1473010066

Now I can build this project, and run it in the HoloLens emulator.

When the application first runs, it shows the cloud icon in the lower right of the screen:

screenshot.1473008573

And if I emulate a click gesture, the image changes to the “Not Connected” cloud icon.

screenshot.1473010576

Conclusion

So we can now integrate images – and changing images – into our HUD for the HoloLens. Next time I’m going to look at creating a complete application for the HoloLens using some of the tutorials I’ve created over the last few weeks.


.net, HoloLens, Unity, UWP

Coding for the HoloLens with Unity 5 – Part #7 – Creating a basic HUD

One of the elements of augmented reality that’s probably most widely known is a HUD – this is a Heads Up Display. If you’ve played an FPS computer game you’ll be familiar with this as the area of the screen that shows your health, or score, or the number of lives you have left in the game.

This isn’t really a hologram as such, but it’s still something we can develop for the HoloLens. The key is making sure the artefacts rendered by the HoloLens are kept in the same position in front of you – and essentially, it means making those artefacts child objects of the camera.

Let’s have a closer look.

Keeping an object in one place

I’ll demonstrate the principle of keeping an object in one place in the steps below – later we’ll look at how to render text.

First, create a new project in Unity for the HoloLens (I’ve previously described how to do this here).

screenshot.1472152960

Next, right click on the Main Camera object in the Hierarchy. Add a new Cube GameObject.

screenshot.1472153013

Change the position of this Cube object so that it’s 2m in front of you, and scale it to 0.1 of its original size. This should be a white cube, sitting 2m in front of the camera, which has sides of about 10cm in length.

screenshot.1472153122
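
As an aside, if you’d rather script this step than set it up in the editor, a minimal sketch like this (my own illustration – attached to the cube, and assuming the standard Main Camera tag) achieves the same parent-and-position setup:

using UnityEngine;

public class HeadLockedCube : MonoBehaviour
{
    void Start()
    {
        // Parent the cube to the camera so it stays fixed in the wearer's view.
        transform.SetParent(Camera.main.transform);

        // 2m in front of the camera, scaled to roughly 10cm sides.
        transform.localPosition = new Vector3(0f, 0f, 2f);
        transform.localScale = Vector3.one * 0.1f;
    }
}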

If you now build this project and deploy it to the emulator, you’ll see a white cube as described above. If you try to move around in the emulator, it will seem like nothing is happening – this is because the cube is in a static position relative to the camera, so even though you are moving, the cube moves with you.

screenshot.1472157675

Let’s prove this by adding another object. This time, add another cube to the main Hierarchy panel, but not as a child of the camera object. Make it 2m in front of you and 1m to the left, resize it to 0.1 scale, and add a material to colour the cube red (I write about how to change an object’s colour here).

screenshot.1472157923

Again, build this project, deploy to the emulator, and try to move around. This time you’ll be able to look around the red cube and move your position relative to it, but the white cube will stay in the same spot.

screenshot.1472159248

If you have a HoloLens, try deploying to the HoloLens and you’ll be able to see this more clearly – whereas you can walk around the red cube, the white cube stays still in front of you.

A more useful example

So having a white cube as a HUD isn’t very useful – but that was just to demonstrate how to keep an object in a static position in front of you. Now, let’s look at adding some text to our HUD.

Open the HUD project again, and remove the white and red cubes we created in the last step.

Now add a canvas object as a child of the Main Camera – this is available by right clicking on the Main Camera, selecting UI from the context menu, and then selecting Canvas from the fly-out menu.

  • Position the Canvas to be 1m in front of you – meaning change the Z position to be 1.
  • Change the width to 460, and height to 280.
  • Change the scale to be 0.001 for the X, Y and Z axes.
  • Also, change the Dynamic Pixels per Unit in the Canvas Scaler component from 1 to 10 (this makes the text we’ll add later less blurry).

screenshot.1472338895
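
If you prefer to configure components from code, a rough equivalent of these canvas settings – a sketch of my own, attached to the Canvas object – would be:

using UnityEngine;
using UnityEngine.UI;

public class HudCanvasSetup : MonoBehaviour
{
    void Start()
    {
        // 1m in front of the camera, 460 x 280, scaled down to 0.001.
        var rectTransform = GetComponent<RectTransform>();
        rectTransform.localPosition = new Vector3(0f, 0f, 1f);
        rectTransform.sizeDelta = new Vector2(460f, 280f);
        rectTransform.localScale = new Vector3(0.001f, 0.001f, 0.001f);

        // Increase Dynamic Pixels per Unit to make the text less blurry.
        GetComponent<CanvasScaler>().dynamicPixelsPerUnit = 10f;
    }
}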

Next, add a Text GUI object as a child of this Canvas object (this is also available from the same UI menu).

  • Position this to be in the top left of the canvas using the Paragraph -> Alignment options.
  • Change the text to “Top Left”.
  • Change the font size to 14.
  • Change the colour to be something distinctive. I’ve used green in my example.
  • Make sure the positions in the X, Y and Z axes are all zero, and that the scales are all set to 1.
  • Finally, in the Text object’s Rect Transform component, ensure that the object is set to stretch in both vertical and horizontal directions.

screenshot.1472339683
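
Again, the same Text settings could be applied from a script if you prefer – another sketch of my own:

using UnityEngine;
using UnityEngine.UI;

public class HudTextSetup : MonoBehaviour
{
    void Start()
    {
        var text = GetComponent<Text>();
        text.text = "Top Left";
        text.fontSize = 14;
        text.color = Color.green;
        text.alignment = TextAnchor.UpperLeft;

        // Zero the position, reset the scale, and stretch across the whole canvas.
        var rectTransform = GetComponent<RectTransform>();
        rectTransform.anchorMin = Vector2.zero;
        rectTransform.anchorMax = Vector2.one;
        rectTransform.offsetMin = Vector2.zero;
        rectTransform.offsetMax = Vector2.zero;
        rectTransform.localScale = Vector3.one;
    }
}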

Now build your project, and deploy it to the emulator.

This time, you should see some green text floating in the top left corner of your field of view.

screenshot.1472339239

If you can’t see this text, change the position from top left to centre – it may be that you need to adjust the canvas dimensions to be different from mine.

You can take this a bit further as I’ve shown in the picture below, where you can align text to different positions on the canvas.

screenshot.1472339560

This is a very powerful technique – you can use scripts to adjust this text depending on actions in your surroundings. Also, you aren’t constrained to just using text objects – you could use an image, or something else.

Hopefully this is useful inspiration in creating a HUD for your HoloLens.

.net, HoloLens

Coding for the HoloLens with Unity 5 – Part #6 – How can I get my 3d model into the mixed reality world?

Unity is great for creating primitive objects, and the asset store is a great repository for finding prefabricated objects to use in your project. But what if you’ve got your own object that you’d like to see and share in a mixed reality world?

I’ve certainly done a lot of work using Autodesk 123d with my 3d printing projects, and I was interested to see if I could get some of my historical projects to display in the HoloLens – specifically my 3d printed prosthetic robot hand project.

So how can I get from Autodesk 123d to Unity, and render it as a static object in the HoloLens?

Starting with Autodesk 123d

This is just my CAD tool of choice – this isn’t a mandatory tool. This package creates files which have extension “123dx”, which is a proprietary Autodesk format.

However, you can export these to STL, an open format – I commonly use this format for 3d printing.

I located a 123d file of part of the robot hand I referred to earlier, and loaded it up in Autodesk 123d.

screenshot.1470687988

It’s worth noting that I’ve centred this on the origin of the blue grid, and that the front of the 3 fingers is orientated to align with Autodesk’s front view.

Next, I exported this to the STL format, using the menu item below:

screenshot.1470688084.png

Converting the STL to OBJ format

I’m sure there are a bunch of ways to do this – I chose to use an online conversion tool. This allowed me to upload an STL file, and then download an OBJ file.

The tool I use is http://www.greentoken.de/onlineconv/.

There are lots of options – another is: http://www.meshconvert.com/

From here, it’s a simple step into Unity.

Creating a Unity Prefab

I created a new project in Unity, and configured it for mixed reality. I’ve described the steps for this before, but to reiterate:

  • Create the project;
  • First, I changed the position of the camera to (0, 0, 0), meaning X = 0, Y = 0, and Z = 0;
  • Next, in the Camera section, I changed the Clear Flags dropdown value to Solid Color;
  • Finally, I changed the Background property to Black (R = 0, G = 0, B = 0, A = 0).


Once the base project was set up, I dragged the OBJ file from where I saved it on my hard-drive to the Assets folder in Unity. This showed a little preview icon of the hand, which showed me I was on the right track.

screenshot.1470683445
The prefab for the Fingers object is shown above.

Then I dragged this prefab object into the Hierarchy view as a static object, and moved it to be 1m in front of my field of view (i.e. I changed the Z-position to have value 1).

I then built the app, and deployed it to the HoloLens emulator. But there were a couple of immediate and obvious problems.

  • The object was many times bigger than I expected, and
  • The object was rotated by 90 degrees anti-clockwise around the X-axis, and 90 degrees anti-clockwise around the Y-axis.

Fixing Scaling and Rotation

There’s an element of guesswork here, but I think the default unit in Autodesk 123d is millimetres. When I export to STL and convert to OBJ, the unit of measurement isn’t stored – and when I load the file into Unity, where the default unit is metres, every 1mm in Autodesk 123d is interpreted as 1m in Unity. The object is therefore 1000 times too big, and I need to scale it by 0.001.

Regarding rotation, different CAD packages have different ideas of what “up” means. For me, it was pretty straightforward to rotate by -90 degrees in the X and Y axis to make the object render correctly.

screenshot.1470689047
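
For reference, the same corrections could be applied from a script instead of the Inspector – a sketch of my own, attached to the imported object:

using UnityEngine;

public class ImportCorrection : MonoBehaviour
{
    void Start()
    {
        // Autodesk 123d works in millimetres but Unity works in metres,
        // so scale the imported mesh down by a factor of 1000.
        transform.localScale = Vector3.one * 0.001f;

        // Compensate for the two packages disagreeing about which way is "up".
        transform.localRotation = Quaternion.Euler(-90f, -90f, 0f);
    }
}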

Once I implemented the scaling and rotations shown above, Unity showed the scene below:

screenshot.1470689019.png

When I run this program in the emulator with the modified values, I see the hologram below:

screenshot.1470689252

Finally, I loaded this app onto the physical HoloLens to look at it – it rendered pretty much perfectly, and identically to how it rendered in the HoloLens Emulator.

screenshot.1470690067

This opens up a new world of possibilities for me with the HoloLens – I’m not restricted to primitive objects within Unity, or using other people’s prefabs from the Unity Asset Store.

Conclusion

This is an incredibly powerful technique – you can create your own complex 3d objects, export them to a standard format like STL or OBJ, and then import them into Unity for display. Some post-processing was necessary – I found that I needed to scale the object down by 1000 times, and rotate it around a couple of different axes, but these are pretty trivial modifications to make. I chose to display this as a static object – but as I’ve discussed before, there’s no reason why this couldn’t be a dynamically generated object.


HoloLens

Using a HoloLens (rather than the emulator) – getting started and first thoughts

Thanks to Luke McNeice (Innovation Lead at Kainos), I’ve temporarily got access to a physical HoloLens. This is a great opportunity to make my augmented reality projects…well, more real.

This post will be about my experience with a real device – initial impressions, app deployment, and a few concluding thoughts about what the community needs to do to make this technology succeed.

Unboxing

It’s obvious, even from the packaging, that this is a premium product from Microsoft. And there’s no reason why it shouldn’t be – this device is aimed at developers, and ultimately developers are the people who will make it a success. From the minute you open the box, you’re reminded that this is a $3,000 device.

WP_20160807_15_29_46_Pro
The carry pouch.
WP_20160807_15_30_10_Pro
The device in the unzipped pouch.

Getting started and fitting

There’s a handy little booklet in the pouch, which gives instructions on how to turn the HoloLens on and how to fit it. When I first saw the device I was pretty concerned that it was aimed at someone with a smaller head, but I needn’t have worried – the band which fits around your head extends a lot.

WP_20160807_15_31_29_Pro
The HoloLens, with the headband angled up.

I wear glasses, and the first time I put the device on I found it quite difficult to fit – I knew I wasn’t meant to touch the lens, which meant I had to hold the device using the side-bands. This doesn’t feel very safe and I was more than a bit nervous that I was going to drop it (fortunately I didn’t).

Turning the HoloLens on for the first time

The on/off switch is at the back of the device on the left hand side. The first time you power the device on, you have to press and hold the button for 3 seconds (every time after that, it’s activated with a simple press).

WP_20160807_15_30_26_Pro
The micro USB port and the on/off switch.

The first thing I was greeted with was the ghostly floating text saying “Hello”, followed shortly by a message asking me to adjust the fitting of the HoloLens on my head so that I could see all four corners of a square. After that, Cortana automatically takes you through the set-up and calibration process, where you train the headset to recognise air-tap gestures, and the device shows you the “gesture windows”, also known as the HoloLens’s field of view.

During set-up, the only timezones available are American/Canadian zones, so be prepared for that.

Charging

If you need to charge the HoloLens, you can use a regular micro USB charger in the port, which is underneath the left hand leg of the device (shown above).

Deploying a UWP app from Visual Studio

Obviously you’ll need the software pre-requisites first – I’ve described these in a previous post. These include things like Visual Studio 2015, and Unity.

When you’re ready to deploy to the HoloLens, rather than the emulator, you’ll need to perform a few actions first:

  1. Open the Settings app on the HoloLens, go to the “Update” options, and enable Developer Mode on your HoloLens.
  2. From this screen, tap on the Pair button – a 6 digit PIN will be displayed. Make a note of this – you’ll need to enter it into Visual Studio later.
  3. In Visual Studio, select the Master x86 build configuration and choose Remote Machine. For Remote Machine, choose the HoloLens.
  4. When you deploy your first app, you’ll be challenged for a PIN – use the PIN you received in Step 2.

There’s more information from Microsoft on the set-up process at this link.

So what’s the experience like with the real device?

I want to give an honest review here – while part of me wants to gush about how amazing some of the experience is (and it really is), I feel I have to draw attention to the negative aspects of the experience too – after all, these are the things that need to change for the next version to improve.

It’s often said that Microsoft need three versions of something to really get it right, and this is only version 1 of the HoloLens – so I wasn’t that surprised to find that, alongside some really impressive features, there are also some pretty serious limitations. You have to want to make the HoloLens work for you.

I found that I needed to carefully adjust the positioning of the HoloLens on my head to see the full field of view – and I have to do that each time I put the device on. It’s not a big inconvenience, just a small frustration. Again, this might be down to my glasses. I found that replacing the default nose-rest with the longer one included in the carry pouch improved my experience a lot.

Also, it’s pretty heavy – I didn’t notice it so much when I first put it on, but after an hour or so you become very aware that it’s giving your neck muscles a workout.

But when I kept my head reasonably stable and dealt with holograms which fitted inside the field of view, the experience was pretty amazing – sometimes jaw-droppingly amazing. The resolution and crispness of the graphics were superb. A good example is browsing the internet with Edge (the MS browser) – the resolution was sharp enough to easily read text.

The sound is also excellent – I could clearly hear the speaker outputs above my ears. I’d like to experiment more with the 3d nature of the sound later.

WP_20160807_15_31_35_Pro
The spatial sound speakers are the red attachments under the HoloLens.

But I found it difficult not to feel a bit frustrated when holograms were cropped, or disappeared from the field of view. I wonder if some of this frustration comes from my history of playing computer games, where the character’s HUD typically projects information constantly around the edges of the screen – the experience with the HoloLens is the opposite, in that the edges of your field of view are where holograms get cropped.

To be fair, it’s possible that I will get used to this cropped field of view, and learn how to keep my viewpoint from straying away from the centre. As I said previously, you have to want the device to work – it takes a little bit of patience.

This device is aimed at developers, and Microsoft must be hoping that the development community will start building a corpus of apps for the HoloLens. I guess by the time Version 2 rolls around – which might be released to a worldwide market, rather than just a developer release in North America – we’ll see some significant changes and improvements.

Conclusion

Microsoft have allowed a limited release of the HoloLens at the first point where they think developers will be able to build meaningful content. It’s not really consumer-ready yet – problems with the field of view and weight are easy and obvious criticisms – and I think Microsoft know that. They want to avoid making the same mistake here as they did with the Kinect, where there was a huge consumer uptake without anything that consumers could really use it for. This time, they’re asking the market first: “What problems do you think this could solve, and can you build software to do that?”

For the HoloLens to succeed, developers have to keep building apps, writing libraries, and creating blog posts to encourage adoption. I think we’re still a while away from this being a mainstream device, but its success is in the community’s hands.

.net, HoloLens, Unity, UWP

Coding for the HoloLens with Unity 5 – Part #5: Creating holograms from prefabs at runtime using gestures

Up to now in this series, I’ve added holograms to my scene within Unity. But it’s much more useful to be able to create holograms at run-time. This tutorial will show how to create a pre-fabricated object (called a prefab) in Unity, and how to use a simple tap gesture to add this hologram prefab to your scene.

Creating a pre-fabricated object in Unity

Unity has an asset type called a prefab. This allows a GameObject to be created as a kind of global project asset which can be re-used numerous times in the project. Changing the prefab asset in one place also changes all instantiated occurrences of the asset in your scene.

Let’s create a simple object in a project hierarchy, and convert it to a prefab asset.

First, in a Unity project, right click on the Hierarchy surface and create a new Cube 3d object – call it “Cube”.

Next, right click on the Assets node in the Project surface, and create a new material (the picture below shows how to select Material from the context menu). Call the material “Blue”.

screenshot.1469374606

For this material, select the Albedo option and, from the colour chooser palette which appears, select a blue colour.

screenshot.1469374837

Now drag this material onto the “Cube” object in the Hierarchy view. The cube in the centre of the scene should now turn blue.

screenshot.1469375525

Next, right click on the Assets node in the Project view, and select the Create item in the context menu. From this, select the Prefab option.

screenshot.1469375699

Call this prefab object “BlueCube”. This will have the default icon of a white box.

screenshot.1469376023

If you now click on the Cube in the Hierarchy view, you can drag it onto the BlueCube prefab object. The icon will change from a white box to a blue box, previewing what the object looks like in our virtual world.

screenshot.1469376154

You have now created a prefab object – whenever you want to create a BlueCube object like this in your scene, you can just use the prefab object, instead of having to create a cube and assign a material to it each time. Additionally, if you want to change the object in some way – for example to change the size, orientation, or shade of blue – you can change the prefab object, and this change will be reflected across all instantiations of this prefab.

How can we create a prefab hologram at runtime?

Let’s start by deleting the cube object from the scene. Either click on the cube in the scene, or click on the “Cube” object in the Hierarchy view, and hit delete. The scene will now be empty.

Now let’s create a new C# script to help us manage creating holograms. Right click on the Assets panel, and create a new C# script called “CubeManager”. Now double click on this script to open up your preferred script editor (e.g. MonoDevelop or Visual Studio).

There are two things I want to do in this script – I need to capture a tap gesture, and when I detect a tap, I want to instantiate a “BlueCube” object 2m in front of where I’m presently looking.

First, add a public member GameObject variable to the CubeManager script called blueCubePrefab, as shown in the code below:

public class CubeManager : MonoBehaviour
{
    public GameObject blueCubePrefab;
}

Now we have to let our scene know about this script. Switch back to Unity, and right click on the Hierarchy panel – from the context menu, select “Create Empty”. Give this object the name “BlueCubeCollection”.

Drag the “CubeManager” C# script to the new “BlueCubeCollection” object. On the Inspector panel for the BlueCubeCollection object, you’ll see a new script property called “Cube Manager”.

screenshot.1469379685

Notice in the diagram above that the Cube Manager script has a variable called “Blue Cube Prefab”. Unity has created this property based on the public GameObject variable called “blueCubePrefab” in the C# script.

But also notice that the property has a value of “None” – there’s a declaration, but no instantiation. We can fix this by dragging the BlueCube prefab we created earlier onto the textbox that says “None (Game Object)”. When you do this, the panel will change to look like the diagram below – notice that it now says “BlueCube”.

screenshot.1469379987

Let’s go back to the C# script. In order to recognise gestures like a tap, the script needs to have a GestureRecognizer object. This object has an event called “TappedEvent”, and when this event is registered, we can start capturing gestures. The code below shows how this works.

using UnityEngine;
using UnityEngine.VR.WSA.Input;

public class CubeManager : MonoBehaviour
{
    public GameObject blueCubePrefab;
 
    GestureRecognizer recognizer;
 
    void Start()
    {
        recognizer = new GestureRecognizer();
 
        recognizer.TappedEvent += Recognizer_TappedEvent;
 
        recognizer.StartCapturingGestures();
    }
 
    private void Recognizer_TappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        // process the event.
    }
}

The last part of this is instantiating the new BlueCube object at a specific location. The key to this is the parameter headRay in the Recognizer_TappedEvent above. This headRay object has a couple of properties, which will help us position the new object – the properties are direction and origin. These are both of the type Vector3 – this object type is used for passing positions and directions.

  • headRay.origin gives us the position that the HoloLens wearer is at.
  • headRay.direction gives us the direction that the HoloLens wearer is looking.

Therefore, if we want to get the position 2m in front of the HoloLens, we can multiply the direction by 2, and add it to the origin value, like the code below:

var direction = headRay.direction;
 
var origin = headRay.origin;
 
var position = origin + direction * 2.0f;

So now we have the position where we want to place our hologram.

Finally, we just need the code to instantiate the blueCubePrefab hologram. Fortunately, this is very easy.

Instantiate(blueCubePrefab, position, Quaternion.identity);

This call places an instance of the blueCubePrefab at the Vector3 position defined by position. The Quaternion.identity object simply means that the object is in the default rotation.

So the complete code for the CubeManager is below:

using UnityEngine;
using UnityEngine.VR.WSA.Input;

public class CubeManager : MonoBehaviour
{
    public GameObject blueCubePrefab;
 
    GestureRecognizer recognizer;
 
    void Start()
    {
        recognizer = new GestureRecognizer();
 
        recognizer.TappedEvent += Recognizer_TappedEvent;
 
        recognizer.StartCapturingGestures();
    }
 
    private void Recognizer_TappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        var direction = headRay.direction;
 
        var origin = headRay.origin;
 
        var position = origin + direction * 2.0f;
 
        Instantiate(blueCubePrefab, position, Quaternion.identity);
    }
}

Now we can build and run the project using the settings defined in my other post here. When I ran the project from Visual Studio in the HoloLens emulator, it started with an empty scene – I created a few boxes (using the Enter key to simulate an air-tap), and then navigated to the side to show these holograms.

screenshot.1469385281

So now we know how to create holograms at run-time from a prefabricated object using a gesture.


.net, HoloLens, Unity, UWP

Coding for the HoloLens with Unity 5 – Part #4: Preparing the Unity project for source code management

This will be a super short post, but something that I thought deserved its own post.

One thing I’ve noticed with Unity projects is that, by default, some of the files are created as binary files – for example, files in the “ProjectSettings” folder. This isn’t great for me if I want to commit files to GitHub or Subversion. I prefer to check in text files, so if a file changes, at least I can understand what changed.

To ensure files are generated as text, open the Unity editor, and go to Edit -> Project Settings -> Editor, which will open an Inspector panel in the Unity editor (shown below).

screenshot.1468703470

I’ve highlighted the values I changed in red above:

  • I’ve changed the default version control mode from Hidden Meta Files to “Visible Meta Files” – this means each asset (even binary) has a text file containing meta data, which is available through the file system. More information is available at this link.
  • I’ve also changed the Asset Serialization Mode from “Mixed” to “Force Text”.

After restarting Unity, you should notice that project settings and assets (such as prefabs) are now text files. I think this is more suitable for management in a code versioning system.

The only folders that I commit in my project are the “Assets”, “Library” and “ProjectSettings” folders. I choose to add all other folders and files to the ignore list.
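
If you’re using Git, a .gitignore along these lines would match that approach – this is just a sketch, so adjust it for your own build outputs and IDE files:

# Generated by Unity
Temp/
Build/

# Generated by Visual Studio
obj/
*.csproj
*.sln
*.suo
*.user
*.userprefs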