.net, HoloLens, UWP, Windows Store Apps

Coding for the HoloLens with Unity 5 – Part #8 – Adding an image to the HUD (and then changing it in C# code)

Last time, we looked at creating a simple HUD for the HoloLens, and displayed text with different colours in each of the corners of the viewable screen.

Obviously you won’t always want to just have text on your HUD – so this time we’re going to look at a very simple extension of this – adding an image to the HUD.

Let’s pick this up from where we left off in the last post. We’ve already created a HUD with text in each of the four corners of the viewable screen.


Say we want to add some kind of visual cue – for example, a status icon to show if there’s a WiFi connection.

Note that I’m not going to write code here to test if there actually is a WiFi connection for the HoloLens – I’m just looking at visual cues, with this as a possible application.
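That said, if you did eventually want to drive the icon from a real connectivity check, the UWP API exposes one through the NetworkInformation class. A minimal sketch might look like the code below – note that ConnectivityHelper is just a hypothetical helper name, not part of this tutorial’s code.

```csharp
using Windows.Networking.Connectivity;

// Hypothetical helper - not part of this tutorial's code.
public static class ConnectivityHelper
{
    // GetInternetConnectionProfile() returns null when there's no
    // network connection at all, so we guard with the ?. operator.
    public static bool HasInternetAccess()
    {
        var profile = NetworkInformation.GetInternetConnectionProfile();
        return profile?.GetNetworkConnectivityLevel()
            == NetworkConnectivityLevel.InternetAccess;
    }
}
```

You could poll this (or subscribe to NetworkInformation.NetworkStatusChanged) and swap the HUD texture accordingly.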

I’m going to delete the red Text UI element from the bottom right of the application, as this is where I’ve decided I want my image to appear.


Now I want to add a new UI element to the canvas – specifically a RawImage element. You can select this from the context menu, as shown below.


This will just add a new blank white image to your canvas, as shown in the scene below.


We obviously need to adjust this raw image to the correct position and size, and give it the correct source. We can do all of this in the Inspector panel, which starts off showing the default values my version of Unity provides.


First, I’d like to change the position of the image to the bottom right of the canvas. I can do this by clicking on the anchor preset icon in the Inspector (the part that looks like a crosshair in the top left of the RawImage’s Rect Transform). Once the presets are open, I hold “Alt” on the keyboard to get an alternative menu which sets position as well as anchors.


Using the mouse, I select the bottom-right preset, which positions the image in the bottom right of the canvas.
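Incidentally, if you’d rather set this anchoring from code than through the Inspector, the same bottom-right preset can be applied to the RawImage’s RectTransform. The script below is just a sketch of the equivalent calls – “AnchorBottomRight” is a hypothetical script name, and the pivot choice is an assumption:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical script - attach to the RawImage to anchor it bottom-right.
public class AnchorBottomRight : MonoBehaviour
{
    void Start()
    {
        var rt = GetComponent<RawImage>().rectTransform;

        // Anchor and pivot at the bottom-right corner of the parent canvas...
        rt.anchorMin = new Vector2(1, 0);
        rt.anchorMax = new Vector2(1, 0);
        rt.pivot = new Vector2(1, 0);

        // ...and zero the offset so the image sits flush in that corner.
        rt.anchoredPosition = Vector2.zero;
    }
}
```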

Now, I need to select an image to add – I’ve an image of a cloud which I’ll use to signify a connection to the cloud. This image is 100px by 100px, it’s a PNG, and it has a transparent background.

First I create a new folder called “Resources” under Assets in the Unity Project view. Then I right-click, select “Import New Asset…” and browse to where I have the cloud image saved.


Now I select the RawImage object which is stored under the Main Canvas object so I can see the RawImage Inspector panel. Right now, the Texture property of the RawImage is empty, but next I’ll drag the image from the Resources folder onto the Texture property.

The image below shows the cloud image rendered on our HUD canvas.


Now if we build this and deploy it to the emulator, we’ll see the cloud image in the HUD.


Changing the image in code

Sometimes we’ll want to change our image in code, as dragging the image from the Resources folder to the Inspector panel at design time is not flexible enough.

Fortunately, doing this in code is pretty straightforward – we just have to define which image (or in Unity’s terms, which “Texture”) we want to display, and set the RawImage’s texture to it.

First, I add a new GameObject to the scene called “ScriptManagerCollection”.

Then I add another image to my Resources folder, called “NotConnected.png” – this image is what I’ll use when the WiFi is not connected.

Next, I add a new C# script called “ImageManager” to the Assets folder, open it in Visual Studio, and add the code below.

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.VR.WSA.Input;
 
public class ImageManager : MonoBehaviour {

    GestureRecognizer recognizer;
 
    public RawImage wifiConnection;
 
    // Use this for initialization
    void Start () {
        recognizer = new GestureRecognizer();
 
        recognizer.TappedEvent += Recognizer_TappedEvent;
 
        recognizer.StartCapturingGestures();
    }

    private void Recognizer_TappedEvent(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        var myGUITexture = (Texture2D)Resources.Load("NotConnected");
 
        wifiConnection.texture = myGUITexture;
    }
}

You can see that I’ve written some simple code which recognises a tap gesture, and changes the source of the wifiConnection image to be “NotConnected.png”.

Note that I haven’t had to add the “.png” extension to the name of the image – Resources.Load finds the asset by its name alone.

I dragged this script to the ScriptManagerCollection GameObject in Unity, and selected that GameObject. The Inspector updates, and shows a public RawImage property called “Wifi Connection”. I then dragged the RawImage object from the canvas in the Hierarchy window onto this property.


Now I can build this project, and run it in the HoloLens emulator.

So when the application first runs, it shows the cloud icon in the lower right of the screen.


And if I emulate a click gesture, the image changes to the “Not Connected” cloud icon.


Conclusion

So we can now integrate images – and changing images – into our HUD for the HoloLens. Next time I’m going to look at creating a complete application for the HoloLens using some of the tutorials I’ve created over the last few weeks.

 

.net, Accessibility, Cortana, UWP, Windows Store Apps

How to integrate Cortana with a Windows 10 UWP app in C#

Over the last few weeks, I’ve been writing a lot about how to use C# with the Raspberry Pi. I’m really interested in different ways that I can use software to interact with the physical world. Another interaction that I’m interested in is using voice commands, and recently I started looking into ways to use Cortana to achieve this. This post is an introduction to asking Cortana to control Windows apps.

In this post, I’ll look at the simple case of setting up a Windows app so that I can ask Cortana to start the app from my phone.

How does Cortana know what to listen for?

There is some seriously advanced technology in the Microsoft Cognitive Services, particularly software like LUIS – but for this simple case, I’ll store the voice commands Cortana listens for in an XML Voice Command Definition (VCD) file.

  • First we need to define a CommandSet – this has name and language attributes. The voice commands will only work for the CommandSet which has a language attribute matching that on the Windows 10 device. So if your Windows device has the language set to en-us, only the CommandSet matching that attribute will be used by Cortana.
  • We also can define an alternative name for the app as a CommandPrefix.
  • To help the user, we can provide an Example command.
  • The most interesting node in the file is Command:
    • Example: Windows shows examples for each individual command, and this node is where we can specify the examples.
    • ListenFor: These are the words Cortana listens for.
    • Feedback: This is what Cortana replies with.
    • Navigate: This is the XAML page that Cortana navigates to when it parses what you’ve said.

The app I’ve modified is my Electronic Resistance Calculator. I’ve added the file below – which I’ve named ‘ResistorCommands.xml’ – to the root of the project directory.

<?xml version="1.0" encoding="utf-8" ?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="EnglishCommands-us">
    <!-- The CommandPrefix provides an alternative name for your app -->
    <CommandPrefix>Resistor</CommandPrefix>
    <!-- The CommandSet Example appears beside your app's name in the global help -->
    <Example>Open</Example>
    <Command Name="OpenCommand">
      <Example>Open</Example>
      <ListenFor>Open</ListenFor>
      <Feedback>You got it!</Feedback>
      <Navigate Target="MainPage.xaml" />
    </Command>
  </CommandSet>
 
  <CommandSet xml:lang="en-gb" Name="EnglishCommands-gb">
    <!-- The CommandPrefix provides an alternative name for your app -->
    <CommandPrefix>Resistor</CommandPrefix>
    <!-- The CommandSet Example appears beside your app's name in the global help -->
    <Example>Open</Example>
    <Command Name="OpenCommand">
      <Example>Open</Example>
      <ListenFor>Open</ListenFor>
      <Feedback>I'm on it!</Feedback>
      <Navigate Target="MainPage.xaml" />
    </Command>
  </CommandSet>
</VoiceCommands>

Adding these voice commands to the Device Definition Manager

The Windows 10 VoiceCommandDefinitionManager is the resource that Cortana uses when trying to interpret the voice commands. It’s very straightforward to get the Voice Command Definition file from application storage, and then install this storage file into the VoiceCommandDefinitionManager.

We need to add those definitions at application start up, which we can do by overriding the OnNavigatedTo method in MainPage.xaml.cs.

private async Task AddVoiceCommandDefinitionsAsync()
{
    var storageFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///ResistorCommands.xml"));
    await VoiceCommandDefinitionManager.InstallCommandDefinitionsFromStorageFileAsync(storageFile);
}
        
protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    if (e.NavigationMode == NavigationMode.New)
    {
        await AddVoiceCommandDefinitionsAsync();
    }
}

At this point, we actually have enough code to allow us to ask Cortana to start our app.
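One detail worth knowing: when Cortana starts the app by voice, the activation arrives through the application’s OnActivated override rather than a normal launch. The sketch below – an assumption about how you might wire this up in App.xaml.cs, not code from this post – shows how to detect a voice activation and read which VCD command was spoken:

```csharp
using Windows.ApplicationModel.Activation;

// In App.xaml.cs
protected override void OnActivated(IActivatedEventArgs args)
{
    base.OnActivated(args);

    if (args.Kind == ActivationKind.VoiceCommand)
    {
        var voiceArgs = (VoiceCommandActivatedEventArgs)args;

        // RulePath[0] holds the Command Name from the VCD,
        // e.g. "OpenCommand" for the file above.
        string commandName = voiceArgs.Result.RulePath[0];

        // Navigate to the page named in the VCD's Navigate node here.
    }
}
```

For the simple “Open” case this isn’t strictly needed, but it becomes essential once the VCD defines more than one command.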

Running the app on a Windows 10 device

I added the ResistorCommands.xml VCD file to the root of the Electronic Resistance Calculator project, added the code snippet above to MainPage.xaml.cs, and ran the app in debug mode on my Nokia 1520 Windows 10 device.

When I activate Cortana, I can click on the hamburger menu and select Help in the top left to see the list of apps which can be controlled by voice commands. My Electronic Resistance Calculator is in the list, with the word “Open” displayed beside it as an example voice command.


If I click on the Resistor app, the phone shows a list of valid example commands. Because we’re just opening the app, there’s just one example – “Open”. Obviously we can do more complex things than this with a VCD, which I’ll show in a later post.


When I say “Resistor Open”, Cortana recognises this and replies with “I’m on it!” – the feedback specified for devices set to the language “en-gb” (which is correct for my device). After a short pause, the app starts.


In a later post, I’ll look at how to use the VCD to issue more complex voice commands.

.net, UWP, Windows Store Apps

How to use the camera on your device with C# in a UWP application: Part #4, cleaning up resources (and other bits)

In the final part of this series (here are links for Part 1, Part 2 and Part 3), I’ll describe how to apply some of the finishing touches to the application, such as handling application suspension and disposing of resources. I’ll also show how to make sure the screen doesn’t go to sleep while the app is running, and how to make sure the preview image rotates to fill the whole screen. I finish the post (and series) by including all the code necessary for this small project.

Resource disposal and application suspension

It’s always good practice to clean up resources when we aren’t using them, and two intensive resources used in this application are the _mediaCapture member variable, and the PreviewControl which is used in the XAML. A disposal method which we can call to release these is very simple and would look like the code below:

private void Dispose()
{
    if (_mediaCapture != null)
    {
        _mediaCapture.Dispose();
        _mediaCapture = null;
    }
 
    if (PreviewControl.Source != null)
    {
        PreviewControl.Source.Dispose();
        PreviewControl.Source = null;
    }
}

When we navigate away from the app, the Windows Mobile OS suspends it – and while it is suspended, the OS may also terminate the app to free resources for the device. Therefore we should always handle the event fired when the application transitions into suspension. When this event fires, it’s our only chance to do something (e.g. save data) before the app suspends – and fortunately, one of the event arguments gives us the opportunity to delay suspension so we can clean up resources.

The event registration for suspension looks like this:

Application.Current.Suspending += Application_Suspending;

My app’s suspension handler looks like the code below:

private void Application_Suspending(object sender, SuspendingEventArgs e)
{
    var deferral = e.SuspendingOperation.GetDeferral();
    Dispose();
    deferral.Complete();
}

Additionally, I have overridden the OnNavigatingFrom method, and call Dispose() from there too.

protected override void OnNavigatingFrom(NavigatingCancelEventArgs e)
{
    Dispose();
}

Stopping the app from going to sleep

Presently our app goes to sleep when there’s no active use, just the same as any other Windows Store app. This can be very annoying when we’re watching the preview control being updated! Fortunately Microsoft have given us an object to allow us to manage this – the DisplayRequest class. We can declare this as a member variable…

// This object allows us to manage whether the display goes to sleep 
// or not while our app is active.
private readonly DisplayRequest _displayRequest = new DisplayRequest();

…and then use it in the InitializeCameraAsync method to request that the app stays active when the user has navigated to it.

// Stop the screen from timing out.
_displayRequest.RequestActive();

Rotating the image to fill the screen

Finally, if you’ve built this app and deployed it to a phone, you’ll have seen that the camera previewer doesn’t actually fill the screen.

This is because the video feed has a default rotation stored in the feed’s meta data – but we can change this by detecting the device’s rotation and altering the meta data. Of course, if we have an external camera we don’t want to rotate the feed, so we have to treat these devices differently.

Let’s set up a couple of member variables, one to track whether the device is an external camera or not, and one to store the property name (a Guid) associated with rotation in the video feed meta data.

// Taken from https://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh868174.aspx
private static readonly Guid RotationKey = new Guid("C380465D-2271-428C-9B83-ECEA3B4A85C1");
 
private bool _externalCamera = false;

The code below is the asynchronous method which sets the meta data in the video feed. We call it as the last step in the InitializeCameraAsync() method.

private async Task SetPreviewRotationPropertiesAsync()
{
    // Only need to update the orientation if the camera is mounted on the device
    if (_externalCamera) return;
 
    // Calculate which way and how far to rotate the preview
    int rotation = ConvertDisplayOrientationToDegrees(DisplayInformation.GetForCurrentView().CurrentOrientation);
 
    // Get the property meta data about the video.
    var props = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview);
 
    // Change the meta data to rotate the preview to fill the screen with the preview.
    props.Properties.Add(RotationKey, rotation);
 
    // Now set the updated meta data into the video preview.
    await _mediaCapture.SetEncodingPropertiesAsync(MediaStreamType.VideoPreview, props, null);
}
 
// Taken from https://msdn.microsoft.com/en-us/windows/uwp/audio-video-camera/capture-photos-and-video-with-mediacapture
private static int ConvertDisplayOrientationToDegrees(DisplayOrientations orientation)
{
    switch (orientation)
    {
        case DisplayOrientations.Portrait:
            return 90;
        case DisplayOrientations.LandscapeFlipped:
            return 180;
        case DisplayOrientations.PortraitFlipped:
            return 270;
        case DisplayOrientations.Landscape:
        default:
            return 0;
    }
}

Finally, we add one more line to the InitializeCameraAsync method – this just tracks whether we’re connected to an external camera or not.

// Store whether the camera is onboard or external.
_externalCamera = backFacingDevice == null;

Conclusion

That’s it for this series – I’ve pasted the code below which includes everything we’ve covered over the last four parts. You might have to refer back to Part 1, Part 2 and Part 3 for some additional information on how to set up the UWP project. I hope this code is helpful to you – if I wanted to improve this further, I would probably refactor it to reduce the length of InitialiseCameraAsync method, and maybe try to create a CameraEngine class in a NuGet package.

I’ve been impressed with how much UWP gives you in such a small amount of code – 200 lines to preview a camera output, focus, rotate, and capture an image. It’s especially impressive that this app can run on my phone, and just as well on my laptop with an integrated webcam (I’d probably need to include a software button to let my laptop capture an image).

Anyway, I hope you have found this helpful and interesting!

using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.ApplicationModel;
using Windows.Devices.Enumeration;
using Windows.Foundation.Metadata;
using Windows.Graphics.Display;
using Windows.Media.Capture;
using Windows.Media.Devices;
using Windows.Media.MediaProperties;
using Windows.Phone.UI.Input;
using Windows.Storage;
using Windows.System.Display;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;
 
namespace BasicCamera
{
    public sealed partial class MainPage : Page
    {
        // Provides functionality to capture the output from the camera
        private MediaCapture _mediaCapture;
 
        // This object allows us to manage whether the display goes to sleep 
        // or not while our app is active.
        private readonly DisplayRequest _displayRequest = new DisplayRequest();
 
        // Taken from https://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh868174.aspx
        private static readonly Guid RotationKey = new Guid("C380465D-2271-428C-9B83-ECEA3B4A85C1");
 
        // Tells us if the camera is external or on board.
        private bool _externalCamera = false;
 
        public MainPage()
        {
            InitializeComponent();
 
            // https://msdn.microsoft.com/en-gb/library/windows/apps/hh465088.aspx
            Application.Current.Resuming += Application_Resuming;
            Application.Current.Suspending += Application_Suspending;
 
            if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
            {
                HardwareButtons.CameraHalfPressed += HardwareButtons_CameraHalfPressed;
                HardwareButtons.CameraPressed += HardwareButtons_CameraPressed;
            }
        }
 
        private void Application_Suspending(object sender, SuspendingEventArgs e)
        {
            var deferral = e.SuspendingOperation.GetDeferral();
            Dispose();
            deferral.Complete();
        }
 
        private async void Application_Resuming(object sender, object o)
        {
            await InitializeCameraAsync();
        }
 
        protected override async void OnNavigatedTo(NavigationEventArgs e)
        {
            await InitializeCameraAsync();
        }
 
        protected override void OnNavigatingFrom(NavigatingCancelEventArgs e)
        {
            Dispose();
        }
 
        private async Task InitializeCameraAsync()
        {
            if (_mediaCapture == null)
            {
                // Get the camera devices
                var cameraDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
 
                // try to get the back facing device for a phone
                var backFacingDevice = cameraDevices
                    .FirstOrDefault(c => c.EnclosureLocation?.Panel == Windows.Devices.Enumeration.Panel.Back);
 
                // but if that doesn't exist, take the first camera device available
                var preferredDevice = backFacingDevice ?? cameraDevices.FirstOrDefault();
 
                // Store whether the camera is onboard or external.
                _externalCamera = backFacingDevice == null;
 
                // Create MediaCapture
                _mediaCapture = new MediaCapture();
 
                // Stop the screen from timing out.
                _displayRequest.RequestActive();
 
                // Initialize MediaCapture and settings
                await _mediaCapture.InitializeAsync(
                    new MediaCaptureInitializationSettings
                    {
                        VideoDeviceId = preferredDevice.Id
                    });
 
                // Set the preview source for the CaptureElement
                PreviewControl.Source = _mediaCapture;
 
                // Start viewing through the CaptureElement 
                await _mediaCapture.StartPreviewAsync();
 
                // Set rotation properties to ensure the screen is filled with the preview.
                await SetPreviewRotationPropertiesAsync();
            }
        }
 
        private async void HardwareButtons_CameraHalfPressed(object sender, CameraEventArgs e)
        {
            // test if focus is supported
            if (_mediaCapture.VideoDeviceController.FocusControl.Supported)
            {
                // get the focus control from the _mediaCapture object
                var focusControl = _mediaCapture.VideoDeviceController.FocusControl;
 
                // try to get full range, but settle for the first supported one.
                var focusRange = focusControl.SupportedFocusRanges.Contains(AutoFocusRange.FullRange) ? AutoFocusRange.FullRange : focusControl.SupportedFocusRanges.FirstOrDefault();
 
                // try to get the focus mode for focussing just once, but settle for the first supported one.
                var focusMode = focusControl.SupportedFocusModes.Contains(FocusMode.Single) ? FocusMode.Single : focusControl.SupportedFocusModes.FirstOrDefault();
 
                // now configure the focus control with the range and mode as settings
                focusControl.Configure(
                    new FocusSettings
                    {
                        Mode = focusMode,
                        AutoFocusRange = focusRange
                    });
 
                // finally wait for the camera to focus
                await focusControl.FocusAsync();
            }
        }
 
        private async void HardwareButtons_CameraPressed(object sender, CameraEventArgs e)
        {
            // This is where we want to save to.
            var storageFolder = KnownFolders.SavedPictures;
 
            // Create the file that we're going to save the photo to.
            var file = await storageFolder.CreateFileAsync("sample.jpg", CreationCollisionOption.ReplaceExisting);
 
            // Update the file with the contents of the photograph.
            await _mediaCapture.CapturePhotoToStorageFileAsync(ImageEncodingProperties.CreateJpeg(), file);
        }
 
        private void Dispose()
        {
            if (_mediaCapture != null)
            {
                _mediaCapture.Dispose();
                _mediaCapture = null;
            }
 
            if (PreviewControl.Source != null)
            {
                PreviewControl.Source.Dispose();
                PreviewControl.Source = null;
            }
        }
 
        private async Task SetPreviewRotationPropertiesAsync()
        {
            // Only need to update the orientation if the camera is mounted on the device
            if (_externalCamera) return;
 
            // Calculate which way and how far to rotate the preview
            int rotation = ConvertDisplayOrientationToDegrees(DisplayInformation.GetForCurrentView().CurrentOrientation);
 
            // Get the property meta data about the video.
            var props = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview);
 
            // Change the meta data to rotate the preview to fill the screen with the preview.
            props.Properties.Add(RotationKey, rotation);
 
            // Now set the updated meta data into the video preview.
            await _mediaCapture.SetEncodingPropertiesAsync(MediaStreamType.VideoPreview, props, null);
        }
 
        // Taken from https://msdn.microsoft.com/en-us/windows/uwp/audio-video-camera/capture-photos-and-video-with-mediacapture
        private static int ConvertDisplayOrientationToDegrees(DisplayOrientations orientation)
        {
            switch (orientation)
            {
                case DisplayOrientations.Portrait:
                    return 90;
                case DisplayOrientations.LandscapeFlipped:
                    return 180;
                case DisplayOrientations.PortraitFlipped:
                    return 270;
                case DisplayOrientations.Landscape:
                default:
                    return 0;
            }
        }
    }
}
.net, Computer Vision, UWP, Visual Studio, Windows Store Apps

How to use the camera on your device with C# in a UWP application: Part #3, saving a picture

Previously in this series, we looked at how to preview your device’s camera output, and how to use a physical button to focus the camera.

This time I’d like to look at how to capture an image, and store it in a local device folder.

Adding the capability to save to the pictures folder

If you want to save pictures to one of the many standard Windows folders, you need to add this capability to the package manifest. In the VS2015 project which we’ve been building over the last two parts of this series, double click on the Package.appxmanifest file. In the list of capabilities, tick the box with the text “Pictures Library”.
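Behind the scenes, ticking this box just adds a capability element to the manifest XML – if you open Package.appxmanifest with the XML editor, you should see something like this inside the Capabilities node (the namespace prefix may vary with your SDK version):

```xml
<Capabilities>
  <uap:Capability Name="picturesLibrary" />
</Capabilities>
```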


Our application is now allowed to save to the Pictures library on our device.

Capture an image using the device button

In part 2, we set up our app to make the camera focus when the button is half pressed – after it has focussed, we’d like to fully press the button to capture the image presently being previewed. To do this, we need to handle the CameraPressed event in our code.

if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
    HardwareButtons.CameraHalfPressed += HardwareButtons_CameraHalfPressed;
    HardwareButtons.CameraPressed += HardwareButtons_CameraPressed;
}

The next step is to write the event handler.

Writing to “Known Folders”

The Windows UWP API has some functions already baked in that allow us to identify special folders in Windows, and save files to these folders.

To get these special folders, we use the static class “KnownFolders”. For each of these known folders, there are methods available to create files. These created files implement the IStorageFile interface – and fortunately, the _mediaCapture object has a method called CapturePhotoToStorageFileAsync, which allows us to save an image to a file implementing this interface. The code below for the event handler shows how it’s done.

private async void HardwareButtons_CameraPressed(object sender, CameraEventArgs e)
{
    // This is where we want to save to.
    var storageFolder = KnownFolders.SavedPictures;
 
    // Create the file that we're going to save the photo to.
    var file = await storageFolder.CreateFileAsync("sample.jpg", CreationCollisionOption.ReplaceExisting);
 
    // Update the file with the contents of the photograph.
    await _mediaCapture.CapturePhotoToStorageFileAsync(ImageEncodingProperties.CreateJpeg(), file);
}

So now we have a basic Windows application, which acts as a viewfinder, allows us to focus if the device is capable, and then allows us to save the presently displayed image to the special Windows SavedPictures folder. This is a pretty good app – and we’ve done it in about 100 lines of code (shown below). Not bad!

using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Foundation.Metadata;
using Windows.Media.Capture;
using Windows.Media.Devices;
using Windows.Media.MediaProperties;
using Windows.Phone.UI.Input;
using Windows.Storage;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;
 
namespace BasicCamera
{
    public sealed partial class MainPage : Page
    {
        // Provides functionality to capture the output from the camera
        private MediaCapture _mediaCapture;
 
        public MainPage()
        {
            InitializeComponent();
 
            Application.Current.Resuming += Application_Resuming;
 
            if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
            {
                HardwareButtons.CameraHalfPressed += HardwareButtons_CameraHalfPressed;
                HardwareButtons.CameraPressed += HardwareButtons_CameraPressed;
            }
        }
 
        private async void Application_Resuming(object sender, object o)
        {
            await InitializeCameraAsync();
        }
 
        protected override async void OnNavigatedTo(NavigationEventArgs e)
        {
            await InitializeCameraAsync();
        }
 
        private async Task InitializeCameraAsync()
        {
            if (_mediaCapture == null)
            {
                // Get the camera devices
                var cameraDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
 
                // try to get the back facing device for a phone
                var backFacingDevice = cameraDevices
                    .FirstOrDefault(c => c.EnclosureLocation?.Panel == Windows.Devices.Enumeration.Panel.Back);
 
                // but if that doesn't exist, take the first camera device available
                var preferredDevice = backFacingDevice ?? cameraDevices.FirstOrDefault();
 
                // Create MediaCapture
                _mediaCapture = new MediaCapture();
 
                // Initialize MediaCapture and settings
                await _mediaCapture.InitializeAsync(
                    new MediaCaptureInitializationSettings
                    {
                        VideoDeviceId = preferredDevice.Id
                    });
 
                // Set the preview source for the CaptureElement
                PreviewControl.Source = _mediaCapture;
 
                // Start viewing through the CaptureElement 
                await _mediaCapture.StartPreviewAsync();
            }
        }
 
        private async void HardwareButtons_CameraHalfPressed(object sender, CameraEventArgs e)
        {
            // test if focus is supported
            if (_mediaCapture.VideoDeviceController.FocusControl.Supported)
            {
                // get the focus control from the _mediaCapture object
                var focusControl = _mediaCapture.VideoDeviceController.FocusControl;
 
                // try to get full range, but settle for the first supported one.
                var focusRange = focusControl.SupportedFocusRanges.Contains(AutoFocusRange.FullRange) ? AutoFocusRange.FullRange : focusControl.SupportedFocusRanges.FirstOrDefault();
 
                // try to get the focus mode for focussing just once, but settle for the first supported one.
                var focusMode = focusControl.SupportedFocusModes.Contains(FocusMode.Single) ? FocusMode.Single : focusControl.SupportedFocusModes.FirstOrDefault();
 
                // now configure the focus control with the range and mode as settings
                focusControl.Configure(
                    new FocusSettings
                    {
                        Mode = focusMode,
                        AutoFocusRange = focusRange
                    });
 
                // finally wait for the camera to focus
                await focusControl.FocusAsync();
            }
        }
 
        private async void HardwareButtons_CameraPressed(object sender, CameraEventArgs e)
        {
            // This is where we want to save to.
            var storageFolder = KnownFolders.SavedPictures;
 
            // Create the file that we're going to save the photo to.
            var file = await storageFolder.CreateFileAsync("sample.jpg", CreationCollisionOption.ReplaceExisting);
 
            // Update the file with the contents of the photograph.
            await _mediaCapture.CapturePhotoToStorageFileAsync(ImageEncodingProperties.CreateJpeg(), file);
        }
    }
}

Of course, there’s still a bit more to be done – this code doesn’t handle resource clean-up, or deal with what happens when the application is suspended or loses focus. We’ll look at that next time.
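To give a flavour of what that clean-up involves, here’s a minimal sketch. This is an assumption about the shape of the eventual code rather than the finished implementation – the `_mediaCapture` field matches the code above, but the `CleanUpResourcesAsync` name and the suspension wiring are my own placeholders.

```csharp
// Sketch only: stop the preview and release the capture device when the
// app is suspended. CleanUpResourcesAsync is a placeholder name.
private async Task CleanUpResourcesAsync()
{
    if (_mediaCapture != null)
    {
        // Stop the preview before disposing of the capture object.
        await _mediaCapture.StopPreviewAsync();
        _mediaCapture.Dispose();
        _mediaCapture = null;
    }
}

// Registered in the page constructor – taking a suspension deferral keeps
// the app alive until the asynchronous clean-up has finished.
// Application.Current.Suspending += async (s, e) =>
// {
//     var deferral = e.SuspendingOperation.GetDeferral();
//     await CleanUpResourcesAsync();
//     deferral.Complete();
// };
```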

.net, Computer Vision, UWP, Visual Studio, Windows Store Apps

How to use the camera on your device with C# in a UWP application: Part #2, how to focus the preview

In the previous part of the series, we looked at how to preview your device’s camera output.

This part is about how to focus the device using C#. Not all devices will be capable of focussing – for example, a normal laptop webcam won’t be able to focus, but a Nokia 1520 can focus. Fortunately, we don’t need to guess – testing support for focussing is part of the API provided for Windows UWP apps. We can test this by using the “_mediaCapture” object, which we created in the code shown in Part #1.

if (_mediaCapture.VideoDeviceController.FocusControl.Supported)
{
    // Code here is executed if focus is supported by the device.
}

On my phone, I’d like to use the camera button when it’s half-pressed to focus the image. I’m able to do this in a UWP app, but I need to add a reference to a UWP extension library first.

Setting up mobile extension references

In the solution view in VS2015, right click on the “References” node, and select “Add Reference…”.

screenshot.1461183352

The window that appears is called the “Reference Manager”. On the left hand menu, expand the “Universal Windows” node, and select “Extensions”. In the list of extensions, tick the box for “Windows Mobile Extensions for the UWP”. Now click OK.

screenshot.1461183496

Testing for hardware buttons on the device, and handling events

We’ve now added a reference to a library which allows us to test for the availability of certain sensors and buttons which are specific to a mobile device, such as the hardware button used to take a picture.

if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
    // This code will only run if the HardwareButtons type is present.
}

The Camera button has three events – CameraPressed, CameraHalfPressed, and CameraReleased. I’m interested in intercepting the CameraHalfPressed event for focussing, so I’ve assigned the event handler in the code below, and put this in the constructor for the MainPage class.

if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
    HardwareButtons.CameraHalfPressed += HardwareButtons_CameraHalfPressed;
}

The event handler is shown below, including the snippet of code to test if focussing is supported.

private void HardwareButtons_CameraHalfPressed(object sender, CameraEventArgs e)
{
    if (_mediaCapture.VideoDeviceController.FocusControl.Supported)
    {
        // Focussing code is here.
    }
}

Focus range and focus mode

To focus the camera device, I need to configure the focus control of the _mediaCapture object – this means getting the focus mode and focus range. We can get the supported ranges and modes from the focus control object, and then assign these as settings. Finally, we need to call the asynchronous focus method. The code below shows how this works.

private async void HardwareButtons_CameraHalfPressed(object sender, CameraEventArgs e)
{
    // test if focus is supported
    if (_mediaCapture.VideoDeviceController.FocusControl.Supported)
    {
        // Get the focus control from the _mediaCapture object.
        var focusControl = _mediaCapture.VideoDeviceController.FocusControl;
 
        // Try to get full range autofocus, but settle for the first supported range.
        var focusRange = focusControl.SupportedFocusRanges.Contains(AutoFocusRange.FullRange) ? AutoFocusRange.FullRange : focusControl.SupportedFocusRanges.FirstOrDefault();
 
        // Try to get the focus mode for focussing just once, but settle for the first supported one.
        var focusMode = focusControl.SupportedFocusModes.Contains(FocusMode.Single) ? FocusMode.Single : focusControl.SupportedFocusModes.FirstOrDefault();
 
        // Now configure the focus control with the range and mode as settings.
        focusControl.Configure(
            new FocusSettings
            {
                Mode = focusMode,
                AutoFocusRange = focusRange
            });
 
        // Finally wait for the camera to focus.
        await focusControl.FocusAsync();
    }
}

So again, only a few lines of code are needed to register a button press event, and then configure the focus control. Hopefully this helps someone trying to set up focussing.
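The “try the preferred value, but settle for the first supported one” pattern appears twice in the handler above – once for the range and once for the mode. If you find yourself repeating it, it can be pulled out into a small generic helper. This is just a sketch: `PreferOrFirst` is a name I’ve made up, not anything in the UWP API.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class CaptureHelpers
{
    // Returns 'preferred' if it appears in 'supported'; otherwise falls back
    // to the first supported value (or default(T) if the list is empty).
    public static T PreferOrFirst<T>(IEnumerable<T> supported, T preferred)
    {
        var values = supported.ToList();
        return values.Contains(preferred) ? preferred : values.FirstOrDefault();
    }
}
```

With this in place, the two ternaries above would collapse to calls like `CaptureHelpers.PreferOrFirst(focusControl.SupportedFocusRanges, AutoFocusRange.FullRange)`.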

In the next part, I’ll look at how to change our code to actually capture an image when we fully press the camera button.

.net, UWP, Windows Store Apps

How to submit a UWP app to the Windows Store

If you follow me on twitter, you might have seen that I recently tweeted that I had published my first ever Windows Store app.

My plan wasn’t to submit this, retire and wait for the money to come rolling in (at least partly because I made the app free, and there’s no adverts in it). What I really wanted to do was create a UWP (Universal Windows Platform) app to see what the challenges were in creating and publishing something to the store.

I’ll not write about creating the app in this post – this time I want to focus mainly on what I needed to do to get a simple app to the store. I’ll also let you know some of the mistakes I made during the process so hopefully anyone reading this and thinking of submitting an app won’t repeat them.

I should also say there’s a huge amount of brilliant documentation on the Microsoft site about this process, in lots of separate pages – but I wanted to put together a post that showed how to follow the whole submission process from beginning to end.

Register as a developer

The first thing was to let Microsoft know I wanted to register as an app developer – I registered as an individual, and told them my location. There are further details on this process at this link, but it’s pretty straightforward.

There is a charge to register as a developer – this link takes you to a list of fees.

Register your app’s name

The next thing to do is log into your Developer Dashboard here and choose a name for your app – click on the “Create new app” link on the screen below. You can do this when you want to register your app, or you can do it before you’ve written a line of code – it’s just reserving the name you want your app to be listed under in the store.

You can always see the apps you’ve already begun processing – the app I released a couple of days ago is called “Electronic Resistance Calculator”, and I pre-registered that name which you can see on the left hand side of the screen below.

screenshot.1460496162

Once you’ve clicked to “Create new app”, you’re taken to a screen like the one below.

screenshot.1460496430

You can check the availability of the name you want to reserve, and if it’s up for grabs, you can reserve the name.

Once you’ve reserved it, this name will appear as a link on your Developer Dashboard along the left hand side as one of your apps.

Tell Microsoft about the app

Once you’ve developed your app, there’s a certification process to get your app into the store – I was a bit worried that this would be a complex process, but it actually was very straightforward.

I went back to my Developer Dashboard, and clicked on the app name in the left hand menu that I had reserved earlier. This took me to a screen like the one below.

screenshot.1460497657

The description on the screen says it all really – I clicked on “Start your submission” to begin the process. This took me to the screen below.

screenshot.1460497781

The headings are pretty self explanatory – you don’t even need to do them in order (although you do need to complete “Packages” before you can complete “Descriptions”).

This process did take me a little while to work through and complete – as you might guess, I clicked on each of the linked headings in the table, and these took me to a screen where I had to add information about my app. But even though the screens were quite long sometimes, they weren’t complicated or hard to fill out. I managed to complete all the submission questions in about 15 minutes.

Something I got wrong: In the “App properties” questions, I was asked what hardware capabilities my app required. Since I was building an app to be used on Mobile and Desktop, I checked both the Touch screen and Mouse boxes…this was a bad idea.

screenshot.1460498001

I clicked Touch screen because my mobile phone obviously has a touch screen…but what this meant was that I had accidentally told Microsoft that any device that runs my app needs BOTH a mouse and a touch screen (which isn’t correct). So desktop users who download my app (and don’t have a touch screen) will get an incorrect warning saying my app won’t function correctly on their device. My fault.

Uploading packages

When I came to the “Packages” section of the submission process, I was shown the screen below:

screenshot.1460498247

But how do I create a package? Turns out that I can use Visual Studio to do this.

Use Visual Studio 2015 to create the appxupload file

I had developed the application and tested it using the different emulators in Visual Studio 2015 (e.g. Windows Phone 10 emulators of various screen sizes, and as a desktop app too), and now I wanted to get this app packaged for the store submission process.

And fortunately this is very easy to do, using Visual Studio 2015. First I went to the “Project -> Store -> Create App Packages…” menu item.

screenshot.1460497235

This launched a wizard to help me through the process.

screenshot.1460296734

After clicking “Next” on the screen above, I was asked to enter my developer account details:

screenshot.1460296780

And after entering my details, I entered my mobile phone number and received a code which I entered into the wizard – this is a pretty standard method of authentication.

I was then shown a list of the names that I had reserved in the Windows Store in the screen below. I selected the app name I reserved earlier, and hit next.

screenshot.1460296973

Next up was the most important part of the process – creating the app package. Again this was a simple screen, in which I chose a few details like where I wanted to save the package locally, the version number, and which hardware architecture I wanted to target.

screenshot.1460297016

Once I had chosen all of these, I hit the “Create” button to start generating the package. This took a long time – I spent a lot of time watching the little icon in VS2015’s status bar.

screenshot.1460297259

Eventually the package was created, and the screen below appeared, which allowed me to launch the Windows App Certification Kit – this runs through a series of quality checks on my app, which saves me submitting something to the store which is going to be rejected because of some stupid thing I’ve forgotten to do.

screenshot.1460297480

On launching the kit, I was allowed to choose which tests to run – I’m probably always going to select all of these tests to reassure myself as much as possible before submitting an app.

screenshot.1460297564

Once I hit next, the certification checks proceeded.

screenshot.1460297595

And eventually, I saw a screen like the one below telling me that my app passed initial validation.

screenshot.1460297812

I returned to the “Packages” form in the online Developer Dashboard, and dragged the generated package (with the appxupload extension) to the correct area on the website.

screenshot.1460297901

And eventually, I was shown the screen below which confirmed the package had been uploaded. Nearly there.

screenshot.1460297926

I hit save, and at this point I was allowed to edit the “Descriptions” form – obviously enough, this is just a screen to enter a brief description of your app and some release notes.

screenshot.1460299774

Once I saved all this information, I was able to complete my part of the submission process, and I was able to see my submission progress.

screenshot.1460299932

I moved from “Pre-processing” to “Certification” within about 30 minutes.

screenshot.1460300570

One unwelcome surprise: When I returned to this page a few hours later, the message had changed from saying it would take a few hours to saying it would take a few days! There might have been a good reason for this, but it would have been a better user experience to understand why the length of time changed.

Some other observations

In the end, it was certified about 24 hours later which was fine. I received a couple of emails about this – one certifying that my app was suitable for all ages, and another confirming that I had passed certification, and that it should be available in the store in about 16 hours.

The email also gave me a link to my app which would work when it was published – I think that this link only works on devices with the Windows Store app (or at least, when I tried the link on a Windows 7 machine, I just got a blank browser page). It’d be good if this could be handled a bit better on devices without a Store app.

When the app became available in the store, the additional information section said a few things, a couple of which were surprises:

  • I only have permission to install this app on ten Windows 10 devices. I’m not sure why this restriction exists, although it seems plenty for me.
  • The app has permission to access my internet connection – this is my fault: when I went back to my app’s manifest in VS2015, I found that I had left this capability checked. It’s actually ticked by default when you create a new UWP app in VS2015, and since there’s no need for my app to access the internet, I should have unchecked it. I’ll fix this error (and the other mistakes) if I submit an update to the store for this app.
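For reference, that internet permission corresponds to a capability entry in the app’s Package.appxmanifest. Unticking “Internet (Client)” on the manifest’s Capabilities tab in VS2015 removes an entry like the one below (shown here as an illustration of the default for a new UWP project).

```xml
<Capabilities>
  <Capability Name="internetClient" />
</Capabilities>
```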

Summary

I hope this summary helps someone working through the process of building and submitting an app – it’s a straightforward process, especially if you have VS2015. I found there were a few unexpected things, but they’re pretty minor and easy to avoid or work around now that I know to expect them.