I’ve previously written about how to use the C# UWP APIs to access the camera on your Windows device. In the example code, I experimented with my Windows Phone (Nokia 1520) and my Windows 10 laptop, which has an integrated webcam. Since I’ve recently been working with the Raspberry Pi 3 using Windows 10 IoT Core, I asked myself the question: can I write the same C# UWP code and deploy it to a 64-bit laptop, an ARM Windows phone, and an ARM Raspberry Pi 3?

I decided to try the Microsoft LifeCam Studio with my Pi 3 – at least partly because it’s listed on the Compatible Hardware list, although it’s presently not “Microsoft Verified”. One obvious use for a camera attached to my Raspberry Pi is a pretty standard one – I wanted to be able to use it to keep an eye on my 3D printer.

Designing the Interface

My usual design process for a component is to start by defining the interface. I start small – rather than trying to think of every single thing that I (or others) might ever need, I just define what I need for my use case. I also allow proof-of-concept code to influence me – it helps me move from purely theoretical requirements towards a practical and usable interface.

For my application, I wanted to initialise the camera and preview the display on different devices. I didn’t need to focus or save video (at this point anyway).

  • I knew the main thing I needed to do before previewing the output was to initialise the camera – from my previous work, I knew that the UWP allows me to do this asynchronously through the MediaCapture object.
  • I also knew that I needed to choose which camera to initialise, so it made sense to pass the camera’s device information to the initialisation method.
Task InitialiseCameraAsync(DeviceInformation cameraToInitialise);
  • In order to pass the camera device information, I knew I’d have to get this information somehow – for the phone, I’d probably need to get the back-facing camera, but for the laptop or the Pi, I’d need to be able to get the first or default camera.
Task<DeviceInformation> GetCameraAtPanelLocation(Panel cameraLocation);
        
Task<DeviceInformation> GetDefaultCamera();
  • Finally for now, I knew that the MediaCapture object would certainly be needed. I actually really didn’t like the name “MediaCapture” – I thought this object should be named as a noun, rather than based on the verb “to capture”. I prefer the name of “ViewFinder”, because I think this is a more commonly understood term.
MediaCapture ViewFinder { get; set; }

So with all of this, I was in a position to define a draft interface for my UWP application.

namespace Magellanic.Camera.Interfaces
{
    public interface ICameraDevice : IDisposable
    {
        MediaCapture ViewFinder { get; set; }
 
        Task<DeviceInformation> GetCameraAtPanelLocation(Panel cameraLocation);
        
        Task<DeviceInformation> GetDefaultCamera();
 
        Task InitialiseCameraAsync(DeviceInformation cameraToInitialise);
    }
}

I’ve uploaded this project to GitHub, and I’ve created a NuGet package for this interface.

Implementing the interface

The next step was to create a library which implements this interface. I created a new Windows 10 UWP class library, and added a class called CameraDevice. I made this implement the interface defined above, taking some of the implementation details from my previous post on how to use the camera with a Windows phone.

public class CameraDevice : ICameraDevice
{
    public MediaCapture ViewFinder { get; set; }
 
    public void Dispose()
    {
        ViewFinder?.Dispose();
        ViewFinder = null;
    }
 
    public async Task<DeviceInformation> GetCameraAtPanelLocation(Panel cameraPosition)
    {
        var cameraDevices = await GetCameraDevices();
 
        return cameraDevices.FirstOrDefault(c => c.EnclosureLocation?.Panel == cameraPosition);
    }
 
    public async Task<DeviceInformation> GetDefaultCamera()
    {
        var cameraDevices = await GetCameraDevices();
 
        return cameraDevices.FirstOrDefault();
    }
 
    public async Task InitialiseCameraAsync(DeviceInformation cameraToInitialise)
    {
        // Create the MediaCapture object if one doesn't exist yet - ViewFinder
        // must be instantiated before it can be initialised.
        ViewFinder = ViewFinder ?? new MediaCapture();
 
        await ViewFinder.InitializeAsync(
            new MediaCaptureInitializationSettings
            {
                VideoDeviceId = cameraToInitialise.Id
            });
    }
 
    private async Task<DeviceInformationCollection> GetCameraDevices()
    {
        return await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
    }
}

There’s not very much code here – this class is about allowing the user to choose a camera, and then allowing them to initialise it for use. I’ve uploaded this code to GitHub, and again released a NuGet package for it.
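As an aside, when testing on an unfamiliar device – particularly the Pi – it can be useful to check which cameras the UWP can actually see before trying to initialise one. The snippet below is a hypothetical diagnostic helper rather than part of the library, and it assumes the usual using directives for System.Diagnostics, System.Threading.Tasks and Windows.Devices.Enumeration.

private async Task ListAttachedCamerasAsync()
{
    // Enumerate every attached video capture device...
    var cameras = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
 
    // ...and write out each device's name, and the panel (if any) that the
    // device reports being mounted on - external webcams like the LifeCam
    // Studio typically don't report an enclosure location at all.
    foreach (var camera in cameras)
    {
        Debug.WriteLine($"{camera.Name} - panel: {camera.EnclosureLocation?.Panel.ToString() ?? "none reported"}");
    }
}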

Building the UWP to access a camera

This part is the real proof of concept – can I write the same C# UWP code and deploy it to a 64-bit laptop, an ARM Windows phone, and an ARM Raspberry Pi 3?

I used VS2015 to create a new Windows 10 UWP Blank App. There were a few steps I needed to take:

  • I needed to change the capabilities in the app’s Package.appxmanifest to allow the UWP app to use the webcam and microphone features of the device. I’ve included the XML for this below.
<Capabilities>
  <DeviceCapability Name="webcam" />
  <DeviceCapability Name="microphone" />
</Capabilities>
  • I needed to modify the XAML of the MainPage.xaml file to add a “CaptureElement”:
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <CaptureElement Name="PreviewControl" Stretch="Uniform"/>
</Grid>
  • I needed to install the NuGet package that I created earlier.

Install-Package Magellanic.Camera -Pre

  • Now that these were in place, I was able to add some event handlers to the app’s MainPage.xaml.cs. All I wanted to do in this app was to initialise the camera preview asynchronously, so I knew the basic structure of the MainPage.xaml.cs would look like the code below:
public MainPage()
{
    this.InitializeComponent();
 
    Application.Current.Resuming += Application_Resuming;
    Application.Current.Suspending += Application_Suspending;
}
        
protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    await InitialiseCameraPreview();
}
 
private async void Application_Resuming(object sender, object o)
{
    await InitialiseCameraPreview();
}
 
protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    _cameraDevice.Dispose();
}
 
private void Application_Suspending(object sender, SuspendingEventArgs e)
{
    _cameraDevice.Dispose();
}
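One refinement worth mentioning here: when a UWP app suspends, any clean-up that takes time should be protected with a suspension deferral, so the app isn’t suspended part-way through it. The clean-up in this app is quick and synchronous, so the handler above gets away without one, but a minimal sketch of the suspending handler with a deferral would look like this:

private void Application_Suspending(object sender, SuspendingEventArgs e)
{
    // Ask the OS to hold off suspending the app until clean-up has completed.
    var deferral = e.SuspendingOperation.GetDeferral();
 
    _cameraDevice.Dispose();
 
    deferral.Complete();
}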

I coded the “InitialiseCameraPreview” method to initialise the camera, set the XAML source to a ViewFinder object, and then start previewing through the initialised ViewFinder. The only slight complication is that I try to get a rear-facing camera first – and if that doesn’t work, I fall back to the default device.

private CameraDevice _cameraDevice = new CameraDevice();
 
private async Task InitialiseCameraPreview()
{
    await _cameraDevice.InitialiseCameraAsync(await GetCamera());
 
    // Set the preview source for the CaptureElement
    PreviewControl.Source = _cameraDevice.ViewFinder;
 
    // Start viewing through the CaptureElement 
    await _cameraDevice.ViewFinder.StartPreviewAsync();
}
 
private async Task<DeviceInformation> GetCamera()
{
    var rearCamera = await _cameraDevice.GetCameraAtPanelLocation(Windows.Devices.Enumeration.Panel.Back);
 
    var defaultCamera = await _cameraDevice.GetDefaultCamera();
 
    return rearCamera ?? defaultCamera;
}
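One other thing to be aware of: strictly speaking, the preview should be stopped before the MediaCapture object is disposed of, so that the capture pipeline shuts down cleanly. A minimal sketch of how OnNavigatedFrom could do this (the same applies in the suspending handler):

protected override async void OnNavigatedFrom(NavigationEventArgs e)
{
    // Stop the preview before releasing the camera, so that the capture
    // pipeline is shut down cleanly rather than torn down mid-stream.
    if (_cameraDevice.ViewFinder != null)
    {
        await _cameraDevice.ViewFinder.StopPreviewAsync();
    }
 
    _cameraDevice.Dispose();
}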

So, given I had this application, it was time to try deploying it to the three devices.

Device 1 – my local machine

In VS2015, I set my configuration to be Release for x64, and started it on my local machine – this worked fine, showing the output from my laptop’s onboard webcam in an app window.

Device 2 – my Windows 10 Phone (Nokia 1520)

In VS2015, I set my configuration to be Release for ARM, and changed the deployment target to be “Device”. I connected my Windows Phone to my development machine using a micro USB cable, and deployed and ran the app – again, this worked fine, showing the output from the rear-facing camera on screen.

Device 3 – my Raspberry Pi 3 and a Microsoft LifeCam Studio camera

I connected my LifeCam Studio device to a USB port on my Raspberry Pi, and then connected the Pi to my laptop via a micro USB cable to provide power. I allowed the device to boot up, using the Windows IoT client to view the Raspberry Pi’s desktop. In the screenshot below, you can see the LifeCam Studio listed as one of the attached devices.

[Screenshot: the LifeCam Studio shown in the list of devices attached to the Raspberry Pi]

In VS2015, I changed the deployment device to be a “Remote Machine” – this brought up a dialog where I had to select the machine to deploy to, and I selected my Pi 3, which has the name minwinpc.

[Screenshot: selecting the Raspberry Pi (minwinpc) as the remote machine to deploy to in VS2015]

When I used VS2015 to deploy the app, the blue light on the webcam came on, and the Remote IoT Desktop app correctly previewed the output from the LifeCam Studio.

Conclusion

This is pretty amazing. I can use exactly the same codebase across three completely different device types, all of which are running Windows 10. Obviously the app I’ve developed is very simple – it only previews the output of a camera device – but it proves to me that the UWP is truly universal – not just for PCs and phones, but also for external IoT devices.