Azure, GIS, Leaflet, Security, Web Development

Getting started with Azure Maps, using Leaflet to display roads and satellite images, and comparing different browser behaviours

In this post I’m going to describe how to use the new(ish) Azure Maps service from Microsoft with the Leaflet JavaScript library. Azure Maps provides its own API for Geoservices, but I have an existing application that uses Leaflet, and I wanted to try out using the Azure Maps tiling services.

Rather than just replicating the example that already exists on the excellent Azure Maps Code Samples site, I’ll go a bit further:

  • I’ll show how to display both the tiles with roads and those with aerial images
  • I’ll show how to switch between the layers using a UI component on the map
  • I’ll show how Leaflet can identify your present location
  • And I’ll talk about my experiences of location accuracy in Chrome, Firefox and Edge browsers on my desktop machine.

As usual, I’ve made my code open source and posted it to GitHub here.

First, use your Azure account to get your map API Key

I won’t go into lots of detail about this part – Microsoft have documented the process very well here. In summary:

  • If you don’t have an Azure account, there are instructions here on how to create one.
  • Create a Maps account within the Azure Portal and get your API Key (instructions here).

Once you have set up a resource group in Azure to manage your mapping services, you’ll be able to track usage and errors through the portal – I’ve pasted graphs of my usage and recorded errors below.


You’ll use this API Key to identify yourself to the Azure Maps tiling service. Azure Maps is not a free service – pricing information is here – although presently on the S0 tier there is an included free quantity of tiles and services.

API Key security is one area of Azure Maps that I would like to see enhanced – the API Key has to be rendered on the client somewhere in plain text and then passed back to the maps API. Even with HTTPS, the API Key can easily be read by anyone viewing the page source, or by using a tool to inspect outgoing requests.

Many other tiling services use CORS to restrict which domains can make requests, but:

  • Azure Maps doesn’t do this at the time of writing and
  • This isn’t real security because the Origin header can be easily modified (I know it’s a forbidden header name for a browser but tools like cUrl can spoof the Origin). More discussion here and here.

So this isn’t a solved problem yet – I’d recommend you consider how you use your API Key very carefully and bear in mind that if you expose it on the internet you’re leaving your account open to abuse. There’s an open issue about this raised on GitHub and hopefully there will be an announcement soon.

Next, set up your web page to use the Leaflet JS library

There’s a very helpful ‘getting started‘ tutorial on the Leaflet website – I added the stylesheet and javascript to my webpage’s head using the code below.

<!-- the stylesheet href and script src are the URLs from the Leaflet 'getting started' tutorial -->
<link rel="stylesheet" href=""
      crossorigin="" />
<script src=""
        crossorigin=""></script>

Now add the URLs to your JavaScript to access the tiling services

I’ve included some very simple JavaScript code below for accessing two Azure Maps services – the tiles which display roadmaps and also those which have satellite images.

// tile endpoints for the Azure Maps render services (see the docs linked below)
function satelliteImageryUrl() {
    return "https://atlas.microsoft.com/map/imagery/png?api-version=1.0&style=satellite&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}

function roadMapTilesUrl() {
    return "https://atlas.microsoft.com/map/tile/png?api-version=1.0&layer=basic&style=main&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}

If you’re interested in reading more about these two tiling services, there’s more about the road map service here and more about the satellite image service here.

Now add the tiling layers to Leaflet and create the map

I’ve written a JavaScript function below which registers the two tiling layers (satellite and roads) with Leaflet. It also instantiates the map object, and attempts to identify the user’s location from the browser. Finally it registers a control which will appear on the map and list the available tiling services, allowing me to toggle between them on the fly.

var map;

function GetMap() {
    var subscriptionKey = '[[[**YOUR API KEY HERE**]]]';

    var satellite = L.tileLayer(satelliteImageryUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureSatelliteMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });

    var roads = L.tileLayer(roadMapTilesUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureRoadMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });

    // instantiate the map object and display the 'roads' layer
    map = L.map('myMap', { layers: [roads] });

    // attempt to identify the user's location from the browser
    map.locate({ setView: true, enableHighAccuracy: true });
    map.on('locationfound', onLocationFound);

    // create an array of the tiling base layers and their 'friendly' names
    var baseMaps = {
        "Azure Satellite Imagery": satellite,
        "Azure Roads": roads
    };

    // add a control to the map (top-right by default) allowing the user to toggle the layer
    L.control.layers(baseMaps, null, { collapsed: false }).addTo(map);
}

Finally, I’ve added a div to my page which specifies the size of the map, gives it the Id “mymap” (which I’ve used in the JavaScript above when instantiating the map object), and I call the GetMap() method when the page loads.

<body onload="GetMap()">
    <div id="myMap" style="position:relative;width:900px;height:600px;"></div>

If the browser’s geolocation service has identified my location, Leaflet will also give me an accuracy in meters – the JavaScript below allows me to draw a circle on my map to indicate where the browser believes my location to be.

map.on('locationfound', onLocationFound);

function onLocationFound(e) {
    var radius = e.accuracy / 2;
    // drop a marker at the detected location, with a popup describing the accuracy
    L.marker(e.latlng).addTo(map)
        .bindPopup("You are within " + radius + " meters from this point")
        .openPopup();
    // draw a circle whose radius matches the reported accuracy
    L.circle(e.latlng, radius).addTo(map);
}

And I’ve taken some screenshots of the results below – first of all the results in the MS Edge browser showing roads and landmarks near my location…


…and swapping to the satellite imagery using the control at the top right of the map.


Results in Firefox and Chrome

When I ran this in Firefox and Chrome, I found that my location was identified with much less accuracy. I understand both of these browsers use the Google geolocation API, whereas MS Edge uses the Windows Location API, which might account for the difference on my Windows 10 machine – but I’d need to do more experimentation to understand it better. Obviously my laptop doesn’t have GPS hardware, so testing on a mobile phone would probably give very different results.
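
If you want to experiment with this outside Leaflet, map.locate is built on the browser’s own geolocation API, and you can call that directly – a minimal sketch is below (this is the standard navigator.geolocation API; the accuracy value is reported in meters).

// query the browser's geolocation API directly and log the result
navigator.geolocation.getCurrentPosition(
    function (position) {
        console.log("Latitude: " + position.coords.latitude +
                    ", Longitude: " + position.coords.longitude +
                    ", Accuracy (m): " + position.coords.accuracy);
    },
    function (error) { console.log("Error: " + error.message); },
    { enableHighAccuracy: true }
);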


Wrapping up

We’ve seen how to use the Azure Maps tiling services with the Leaflet JS library, and create a very basic web application which uses the Azure Maps tiling services to display both road and landmark data, and also satellite aerial imagery. It seems to me that MS Edge is able to identify my location much more accurately on a desktop machine than Firefox or Chrome on my Windows 10 machine (within a 75m radius on Edge, and over 3.114km radius on Firefox and Chrome) – however, your mileage may vary.

Finally, as I emphasised above, I have concerns about the security of a production application using an API Key in plain text inside my JavaScript, and hopefully Microsoft will deploy a solution with improved security soon.

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

GeoJson, GIS, SQL Server

How to list feature property information for GeoJSON data using SQL Server 2017 and OPENJSON

I’ve been importing GeoJSON files into tables in SQL Server 2017 – I’ve just had the files themselves without any metadata, so I’ve had to have a look into the file contents to work out what the properties are of the features listed so I can design the table schema.

After a while it got painful looking at the GeoJSON text and trying to parse it visually, and I wondered if I could write a SQL command to list the feature properties for me.

The SQL below allows me to import a GeoJSON file and quickly peek into the first feature in the collection of features, listing all the name-value pairs in its ‘properties’ node.

Declare @geoJson nvarchar(max)

SELECT @geoJson = BulkColumn
FROM OPENROWSET (BULK 'C:\Users\jeremy.lindsay\Desktop\my_gis_data.json', SINGLE_CLOB) as importedGeoJson

-- list the name, value and inferred type of each property of the first feature
SELECT [key], [value], [type]
FROM OPENJSON(@geoJson, '$.features[0].properties')

Now I can read the properties of the first feature much more easily.

I also get the values, and the type SQL Server infers for each property – that’s interesting, but I don’t trust the type information too much, as it could be different in every feature in the collection. (Of course, the properties themselves could be different in every feature as well!)
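
If I’d rather pin the types down myself than rely on inference, OPENJSON’s WITH clause lets me project the properties as explicitly typed columns. A sketch is below – the property names (name and population) are hypothetical and would need to be replaced with the properties the query above actually lists.

-- 'name' and 'population' are hypothetical property names for illustration
SELECT features.*
FROM OPENJSON(@geoJson, '$.features')
WITH (
    name nvarchar(200) '$.properties.name',
    population int '$.properties.population'
) as features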

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

GeoJson, GIS, SQL Server

‘Invalid object name ‘OPENJSON’.’ – SQL Server doesn’t recognise OPENJSON when bulk importing files

Recently I’ve been working with the GIS extensions in SQL Server, and Ineeded to import a GeoJSON file into a SQL Server 2017 table.

There’s a very straightforward way to import a valid JSON file into SQL Server with the OPENROWSET command, as shown below.

Declare @geoJson nvarchar(max)
SELECT @geoJson = BulkColumn
FROM OPENROWSET (BULK 'C:\Users\jeremy.lindsay\Desktop\my_gis_data.json', SINGLE_CLOB) as importedGeoJson
SELECT left(@geoJson, 100)

The above SQL reads the GeoJSON file from my desktop, loads it into a variable, and allows me to view the leftmost 100 characters, just as a quick visual check that the GeoJSON data loaded into the variable correctly.

So far so good – the next step was to load this GeoJSON into a table. For this, I planned to use the OPENJSON command – but I noticed that my SQL Server Management Studio instance highlighted this command as unrecognised, as shown below.

I didn’t think too much of this – it happens sometimes – but of course when I ran the command, I got an error.

 'Invalid object name 'OPENJSON'.'

This made no sense to me – I was sure I had used the syntax correctly, I’d used it on other machines, and I’d recently upgraded to SQL Server 2017 on my development machine. After a bit of googling, I found that some other people had hit this issue on earlier versions of SQL Server, and that the database compatibility level needs to be 130 or higher. To identify my database’s compatibility level, I ran the command below:

SELECT compatibility_level  
FROM sys.databases WHERE name = 'SampleGis';

I was greatly surprised to see it was set to 120!

I decided to double check what version of SQL Server I was using with the command:

SELECT @@VERSION;
And was again greatly surprised when the answer came back telling me I was running SQL Server 2014.

As it turned out, I had an old instance of SQL Server 2014 still running on my development machine. After uninstalling it and trying again, the version correctly came back as SQL Server 2017, and the compatibility_level of my database was 140 – this time, OPENJSON worked correctly and I was able to view and query the GeoJSON file.
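
Incidentally, if your database really is stuck at an older compatibility level (rather than pointing at an old server instance, as mine was), my understanding is that you can raise the level directly with a statement like the one below – though check the implications for your database before changing it.

ALTER DATABASE SampleGis SET COMPATIBILITY_LEVEL = 140;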

About me: I regularly post about Microsoft technologies and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!

Hackathon, Meetup

Report on the Belfast 2018 Global Azure Bootcamp

I was lucky enough to have the opportunity to help organise the Belfast chapter of the 2018 Global Azure Bootcamp. This was the first time I’d organised a meetup like this – I’d run hackathons before, but not something that required me to find speakers, corporate sponsors, accommodation and swag for the attendees.

Kainos Software kindly agreed to sponsor the event location, which was in the beautiful building of Riddel Hall in Belfast.


Paul Breen was the other event organiser, and he secured sponsorship from Civica Digital which paid for lunch and refreshments – we couldn’t have had the event without both of these local sponsors.

We were also extremely lucky to have five knowledgeable local speakers agree to talk at the event and share their knowledge with the community.

First we heard Heather Campbell give a fascinating talk about her experiences with API management in Azure – the hall was really interested to hear about how Azure policies make fine-grained control super simple for API service providers.


Next we heard Peter Farrell talk about how to achieve scalability in Azure to make a performant solution while maintaining control of your costs – Peter had about 10 minutes of questions from an audience that was clearly interested in how to make their applications faster!


Following that, Paul Breen gave an introduction to Azure Functions – Paul gave a live demo of creating and running different types of functions, which really brought the topic to life for the audience (some feedback from after the event was that they wanted to hear more from Paul!)


Following this, Chris McAtackney and Connor Dickson from Automated Intelligence gave a very interesting and valuable talk on securing data in the Azure cloud – particularly important given the imminent start of the GDPR era.


Finally, Gareth Rooney rounded out the day with a talk about creating Azure infrastructure using code and ARM templates – this led to some of the audience feeding back to me later that they would like to participate in a hackathon building on some of the principles Gareth talked about.

Summing Up

I think this was the first Global Azure Bootcamp in Belfast, and since there was lots of positive feedback about the event I’m optimistic that we’ll be able to have another one next year. Thank you to all the speakers!

.net core, Azure, C# tip, Clean Code, Cloud architecture, Security

Simplifying Azure Key Vault and .NET Core Web App (includes NuGet package)

In my previous post I wrote about securing my application secrets using Azure Key Vault, and in this post I’m going to write about how to simplify the code that a .NET Core web app needs to use the Key Vault.

I previously went into a bit of detail about how to create a Key Vault and add a secret to that vault, and then add a Managed Service Identity to a web app. At the end of the post, I showed some C# code about how to access a secret inside a controller action.

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        // the argument to GetSecretAsync is the secret's URL, which I've not included here
        var secret = await keyVaultClient.GetSecretAsync("").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}

While this works for the purposes of an example of how to use it in a .NET Core MVC application, it’s not amazingly pretty code – for any serious application, I wouldn’t want all this code in my controller.

I think it would be more logical to access my secrets at the time my web application starts up, and put them into the Configuration for the app. Therefore if I need them later, I can just inject an IConfiguration object into my class, and use that to get the secret values.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureKeyVault;

namespace MyWebApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        private static IWebHost BuildWebHost(string[] args)
        {
            return WebHost.CreateDefaultBuilder(args)
                .ConfigureAppConfiguration(builder =>
                {
                    var azureServiceTokenProvider = new AzureServiceTokenProvider();
                    var keyVaultClient =
                        new KeyVaultClient(
                            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
                    // pass your vault's URL as the first argument
                    builder.AddAzureKeyVault("<your key vault URL>",
                        keyVaultClient, new DefaultKeyVaultSecretManager());
                })
                .UseStartup<Startup>()
                .Build();
        }
    }
}

But as you can see above, the code I need to add here to build my web host is not very clear either – it seemed a shame to lose all the good work done in .NET Core 2 to simplify this class.

So I’ve created a NuGet package to allow me to have a simpler and cleaner interface – it’s at uploaded to the NuGet repository here, and I’ve open sourced the code at GitHub here.

As usual you can install pretty easily from the command-line:

Install-Package Kodiak.Azure.WebHostExtension -prerelease

Now my BuildWebHost method looks much cleaner – I can just add the fluent extension AddAzureKeyVaultSecretsToConfiguration and pass in the URL of the vault.

private static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .AddAzureKeyVaultSecretsToConfiguration("<your key vault URL>")
        .UseStartup<Startup>()
        .Build();
}

I think this is a more elegant implementation, and now if I need to access the secret inside my controller’s action, I can use the cleaner code below.

using System.Diagnostics;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

namespace MyWebApplication.Controllers
{
    public class HomeController : Controller
    {
        private readonly IConfiguration _configuration;

        public HomeController(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public IActionResult Index()
        {
            // the secret is now just another configuration value
            ViewBag.Secret = _configuration["MySecret"];
            return View();
        }
    }
}

Summing up

Azure Key Vault (AKV) is a useful tool to help keep production application secrets secure, although like any tool it’s possible to misuse it – so don’t assume that just because you’re using AKV your secrets are secure. Always remember to examine threat vectors and impacts.

There’s a lot of documentation on the internet about AKV, and a lot of it recommends using the client Id and secrets in your code – I personally don’t like this because it’s always risky to have any kind of secret in your code. The last couple of posts that I’ve written have been about how to use an Azure Managed Service Identity with your application to avoid secrets in your code, and how to simplify the C# code you might use in your application to access AKV secrets.

.net, Azure, Cloud architecture, Security

Using the Azure Key Vault to keep secrets out of your web app’s source code

Ahead of the Global Azure Bootcamp, I’ve been looking how I could allow a distributed team to develop and deploy a web application to access an Azure SQL Server instance in a secure way. There are a few different ways that I could share credentials to access my Azure SQL database:

  • Environment variables – this keeps secrets (like passwords) out of the code and mitigates the risk that they’d be committed to source code. But environment variables are stored in plain text, so if the host is compromised, those secrets can be exposed.
  • .NET Core Secret Manager tool – there is a NuGet package which allows the user to keep application secrets (like a password) in a JSON file stored in the user profile directory – again, this mitigates the risk that secrets would be committed to source code, but I’d still have to share the secret, and it’s still stored in plain text. (There’s a sketch of how both options are wired up just below.)
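
For context, this is roughly how those two options plug into a .NET Core application’s configuration – a minimal sketch, assuming the Microsoft.Extensions.Configuration.EnvironmentVariables and Microsoft.Extensions.Configuration.UserSecrets packages, with Startup being the usual ASP.NET Core startup class:

using Microsoft.Extensions.Configuration;

// build configuration from both sources; later providers override earlier ones
var configuration = new ConfigurationBuilder()
    .AddEnvironmentVariables()   // option 1: secrets from environment variables
    .AddUserSecrets<Startup>()   // option 2: secrets from the Secret Manager store
    .Build();

// 'AzureSqlConnectionString' is a hypothetical key name for illustration
var connectionString = configuration["AzureSqlConnectionString"];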

Neither of these options is ideal for me – I would rather grant access to my Azure SQL database by role, and not share passwords with developers that need to be written down somewhere, whether in JSON or in my automated deployment scripts. And even though the two options above mitigate the risk that passwords are committed to source code, they don’t eliminate it.

So I was pretty excited to read about Azure Key Vault (AKV) – a way to securely store secrets in the cloud and avoid any risk of secrets being committed to source code.

This page from Microsoft presents a few different user stories and describes how AKV meets these needs.

(There’s more information on Stack Overflow here)

But after reading the document here I was a bit surprised that the implementation described still used the Secret Manager tool – it seemed like we were just swapping storing secrets in one place for storing them in another. I searched around to find how this could be done without the Secret Manager tool, and in numerous blog posts and videos I saw developers setting up a secret in AKV, but then copying a “client secret” from Azure into their code – which I thought really defeated the purpose of having a vault for secrets.
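
To be clear about the pattern I wanted to avoid, the sketch below shows roughly what those samples looked like (using the ADAL AuthenticationContext; the client Id and secret strings are placeholders, and are exactly the kind of thing I don’t want in source code):

using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// The anti-pattern: a client Id and client secret embedded in source code.
var keyVaultClient = new KeyVaultClient(async (authority, resource, scope) =>
{
    var authContext = new AuthenticationContext(authority);
    var credential = new ClientCredential("my-client-id", "my-client-secret"); // secrets in code!
    var token = await authContext.AcquireTokenAsync(resource, credential);
    return token.AccessToken;
});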

Fortunately I’ve found what I need to do to use AKV with my .NET Core web application and not have to add any secrets to code – I secure my application with a Managed Service Identity. I’ve described how to do this below, with the C# code I needed to use the feature.

It looks like there’s a lot of steps but they’re all very simple – this is a lot less complicated than I thought it would be!

How to keep the secrets out of your source code

  • First create a vault
  • Add a secret to your vault
  • Secure your app service using Managed Service Identity
  • Access the secret from your source code with a KeyVaultClient

I’ll cover each of these in turn, with code samples at the end to show how to access the AKV.

First create a vault

Open the Azure portal and log in – click on the “All services” menu item on the left hand side, and search for “key vault” – this should filter the options so you have a screen like the one below.


Once you’ve got the Key Vaults option, click on it to see a screen like the one below which will list the Key Vaults in your subscription. To create a new vault, click on the “Add” button, highlighted in the document below.


This will open another “blade” (which I just see as jargon for a floating window) in the portal where you can enter information about your new vault.

As you can see in the image below, I’ve called my vault “MyWebsiteSecret”, and I’ve created a new resource group for it called “Development_Secret”. I’ve chosen the location to be “UK West”, and by default my user has been added as the first principal who has permission to access this.


I clicked on the Create button at the bottom of the screen, and the portal presents a toast at the top right to say my vault is in the process of being created.


Eventually this changes when the deployment has succeeded.


So the Azure portal screen now shows the list page again, and my new vault is on this page.


Add a secret to the vault

Now the vault is created, we can create a new secret in it. Click on the vault created in the previous step to see the details for this vault (shown below).


Now click on the “Secrets” menu item to open a blade showing the secrets in this vault. Obviously, as I’ve just created it, there are no secrets yet – we can create one by clicking on the “Generate/Import” button, highlighted in the image below.


After clicking on the “Generate/Import” button, a new blade opens where you can enter details of your secret. I chose a name of “TheSecret”, entered a secret value which is masked, and entered in a bit of text for the Content Type to describe the type of secret.


Once I click on “Create” at the bottom of the blade, the site returns me to the list of secrets in this vault – but this time, you can see my secret in the list, as shown below.


Secure the app service using Managed Service Identity

I have deployed my .NET Core application to Azure previously – I won’t go into lots of detail about how to deploy a .NET Core application since it’s in a million other blog posts and videos – basically I created a new App Service through the Azure portal, and linked it to a .NET Core application on my GitHub profile. Now when I push code to that application on GitHub, Azure will automatically build and deploy it.

Obviously this isn’t a full deployment pipeline – but it works for this simple application.

But I do want to show how to create a Managed Service Identity for this application – as shown in the image below, I’ve searched for my App Service on Azure.


I selected my app service to open a blade with options for this service, and selected “Managed Service Identity”, as shown below. By default it’s off – I’ve drawn an arrow below beside the button I pressed to turn it on for the app service, and after that I clicked on Save to persist my changes.


Once it was saved, I needed to go back to the key vault and secret that I created earlier and select “Access Policies”, as shown below. As I mentioned earlier, my name is in there as having permission by default, but I want my application to have permission too – so I clicked on the “Add new” option, which I’ve highlighted with a red arrow below.


The blade below opens up – for the principal, I selected my app service (called “MyAppServiceForTestingVaults”). By default nothing is selected, so you just need to click on the option to open another blade where you can search for your app service. It’ll only be available if you’ve correctly configured the Managed Service Identity as described above.

Also, I selected two “Secret permissions” from the dropdown – Get and List.


Once I click OK, I can now see that my application is in the list of app services which have access to the secret I created earlier.


Add code to my .NET application to access these secrets

I use the Azure Services Authentication Extension to simplify development with my Visual Studio account.

I’m going to choose a really simple example – modifying the Index action of a HomeController class in the default .NET Core MVC website. I also need to add a NuGet package to my project:

Install-Package Microsoft.Azure.Services.AppAuthentication -Version 1.1.0-preview

The code below allows me to authenticate myself to my Azure instance, and get the secret from my vault.

You can see there’s a URL specified in the action below – it’s in the format of:

https://<<name of the vault>>.vault.azure.net/secrets/<<name of the secret>>
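
So for the vault and secret created earlier in this post, the URL would be something like:

https://mywebsitesecret.vault.azure.net/secrets/TheSecret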

public class HomeController : Controller
{
    public async Task<ActionResult> Index()
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
        // pass the secret's URL (in the format described above) to GetSecretAsync
        var secret = await keyVaultClient.GetSecretAsync("").ConfigureAwait(false);
        ViewBag.Secret = secret.Value;
        return View();
    }
    // rest of the class...
}


Now I can just modify the Index.cshtml view and add some code to show the secret (as simple as adding @ViewBag.Secret into the cshtml) – and when I run the project locally, I can see that my application has been able to access the vault and decrypt my secret (as highlighted in the image below) without any client Id or client secret information in my code. This is because my machine recognises that I’m authenticated to access my own Azure instance.
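
For completeness, the view change really is that small – something like the snippet below in Index.cshtml (the surrounding markup is hypothetical; only the @ViewBag.Secret part matters):

<div class="row">
    <p>The secret is: @ViewBag.Secret</p>
</div>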

I can also deploy this code to my Azure App Service and I’ll get the same results, because the application’s Managed Service Identity ensures that my application in Azure has permission to access the secret.


Summing up

This was a really simple example, and it’s just to illustrate how to allow developers to access AKV secrets without having to add secret information to source code. Obviously, if a developer is determined to compromise security, they could still decrypt passwords and disseminate them in another way – so we’d have to tighten up security for a real-world application. For example, we could have different secrets stored in different environmental resource groups as we promote our application from Dev to QA/Staging and finally to Production.


arduino, AZ3166, IOT, MXChip

Hacking with the MXChip AZ3166 Azure DevKit – a better PWM implementation

This is another post about the AZ3166 MXChip DevKit – this post builds on part 4 of my getting started series, and describes another way to implement pulse-width-modulation (PWM) in code.

You might wonder why I’m spending so much time with servos and this device – my personal use-case is that I’d like to use the sensors on the device to alter something in the physical world. To use a classic example: when the temperature falls below a certain value, I’d like to use a servo to change the position of a thermostat.


I normally use PWM for controlling servos where I’ve not been able to use a pre-packaged library (like the one that ships with the Arduino IDE). That package doesn’t work with the AZ3166 hardware, so in a previous post I described how to use PWM and the Arduino function analogWrite to control a servo position.

The code below describes how to control a servo by sending a PWM signal to the PWM_OUT pin, and allowing 1s for the wiper to reach its final position.

#define analogIn A2
int inputValue = 0;

void setup() { }

void loop()
{
  for (int i = 5; i < 31; i++)
  {
    analogWrite(PWM_OUT, i);
    delay(1000); // allow 1s for the wiper to reach its final position
    inputValue = analogRead(analogIn); // read the servo's position feedback
  }
}

My servo is a little bit special in that it has an extra pin that allows me to measure an analog value corresponding to the wiper position between 0 and 180, so I’m able to graph the position against the value that I pass to analogWrite.

The graph below shows the position of the servo against the value passed to analogWrite.

Analog Feedback servo - AZ3166 - input vs angle

I found a couple of things which were a bit odd:

  • The documentation for the device says I can control PWM through three pins corresponding to RGB_R, RGB_G, RGB_B – but I could only issue a PWM signal using analogWrite through physical pin 7 (corresponding to the software pin PWM_OUT). I could physically observe this PWM through the servo, and also through the green LED, but I couldn’t replicate this with the red or blue LEDs or their pins.
  • I can control the position of the servo by changing what value I pass to the analogWrite function – however, through a process of trial and error, I found that even though I can pass integers between 0 and 255 to analogWrite, the only values which allow me to control the servo position are between 5 and 31. Given there’s 180 degrees in a full servo sweep, that means I don’t have very much control over the servo’s angular position using analogWrite and the AZ3166. Also, guessing/trial and error isn’t a great way to achieve repeatable control.

Since writing that post, I’ve found a different way of controlling servos. The AZ3166 uses the MBED microcontroller library, mbed.h, which refers to another useful library, PwmOut.h.

I found the code for mbed.h on my machine at:


Also the code for PwmOut.h is on my machine at:


How to use the PwmOut.h functions to create and control a PWM signal

I found the comment by Arthur Ma on GitHub very helpful when writing this post.

So let’s say I wanted to instantiate one of the three pins available for PWM use – let’s choose PB4, which is shared with the red onboard LED, RGB_R – I could declare it in my arduino code in this way.

PwmOut myServoPin(RGB_R);

which is equivalent to:

PwmOut myServoPin(PB_4);

This just says “make pin PB_4 available to send a PWM signal”.

The next bit is really useful – I can control the frequency and period of the PWM signal using the functions available in the PwmOut.h library. So if I wanted my pin’s PWM signal to have a frequency of 50Hz, i.e. a period of 20ms (because 1/50 = 0.02), I could send the instruction:

myServoPin.period(0.02f);
The servo’s position is determined by the width of the pulse sent each cycle, and typically a pulse width of 1.5ms will turn the servo to its central (90 degree) position. So if I wanted to do this, I could send the instruction


1500 microseconds is the same as 1.5ms. The function pulsewidth_us allows me to set the width in microseconds, and pulsewidth_ms allows me to set it in milliseconds.

I wrote the program below and uploaded it to my AZ3166 to see how the analog signal varied with pulse width, and found I can control the position of the wiper much more accurately now, and I can control it on any of Pins 5, 7, and 10 (corresponding to RGB_R, RGB_G, and RGB_B).

#define analogIn A2
PwmOut myServoPin(RGB_R);
int inputValue = 0;
void setup() { myServoPin.period(0.02f); } // 50Hz signal (20ms period)
void loop()
{
  for (int i = 1; i < 3000; i++)
  {
    myServoPin.pulsewidth_us(i);       // sweep the pulse width
    inputValue = analogRead(analogIn); // read the wiper position
  }
}

I’ve included the graph of results below.

Analog Feedback servo - AZ3166 - input vs angle - using PwmOut

There are some limitations of the cheap hobby servo which are now obvious.

  • You can see that the servo only reliably sweeps between pulse-width values of about 690 and 2270 microseconds.
  • The servo jumps from 0 degrees to about a 40 degree position at a pulse width of 253 microseconds, after which there is a very linear relationship.
  • The servo output reports a higher than expected value quite often, as shown by the data points above the main trend line – with this knowledge, I can anticipate this behaviour and code for it.
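
With those measured limits, I can sketch a small helper that converts a requested angle into a pulse width. The 690–2270 microsecond range below comes from my measurements of this particular servo – treat those numbers as assumptions to calibrate for your own hardware.

// map an angle (0-180 degrees) onto the pulse-width range measured
// for this specific servo (690-2270 microseconds)
int angleToPulseWidthUs(int angle)
{
  return map(angle, 0, 180, 690, 2270); // Arduino's built-in map()
}

// usage: myServoPin.pulsewidth_us(angleToPulseWidthUs(90)); // centre the servo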

Wrapping up

I’ve discovered a better way of controlling pulse-width-modulation signals on the AZ3166 device using PwmOut – previously I used analogWrite with a single pin, and now I can use three pins, and control the period and pulse width to much higher degree of accuracy.