.net, .net core, Azure, Cosmos

Test driving the Cosmos SDK v3 with .NET Core – Getting started with Azure Cosmos DB and .NET Core, Part #3

The Azure Cosmos team have recently released (and open sourced) a new SDK preview with some awesome features, as seen on the Azure Friday show on Channel 9. So I wanted to test drive the functions available in this SDK against the one I’ve been using (SDK v2.2.2) to see the differences.

In the last two parts of this series, I’ve looked at how to create databases and collections in the Azure Cosmos emulator using .NET Core and version 2.2.2 of the Cosmos SDK. I’ve also looked at how to carry out some string queries against documents held in collections.

Version 3 of the SDK is a preview – it’s not recommended for production use yet, and I’d expect a lot of changes between now and a production ready version.

And before you read my comparison…it’s all purely opinion based – sometimes one piece of code will look longer than another because of how I’ve chosen to write it. I’ve also written the samples as synchronous only – this is just because I want to focus on the SDK differences in this post, rather than explore an async/await topic.

Connecting to the Cosmos Emulator

Previously when I was setting up a connection to my Cosmos Emulator instance, I’d write something in C# like the code below.

#region Set up Document client
 
// Create the client connection using v2.2.2
client = new DocumentClient(
    new Uri(CosmosEndpoint),
    EmulatorKey,
    new ConnectionPolicy
    {
        ConnectionMode = ConnectionMode.Direct,
        ConnectionProtocol = Protocol.Tcp
    });
 
#endregion

Now I can connect using the code below – client instantiation in SDK 3 is cleaner, and has keywords relevant to Cosmos rather than its previous name of DocumentDB. This makes it easier to read and conveys intent much better.

#region Set up Cosmos client
 
// Create the configuration using SDK v3
var configuration = new CosmosConfiguration(CosmosEndpoint, EmulatorKey)
{
    ConnectionMode = ConnectionMode.Direct,
};
 
_client = new CosmosClient(configuration);
 
#endregion

Creating the database and collections

Looking at my code with the previous SDK…well there sure is a lot of it. And it works, so I guess that’s something. But creation of objects from the UriFactory adds a lot of noise, and I’ve previously hidden code like this in a facade class.

#region Create database, collection and indexing policy
 
// Set up database and collection Uris
var databaseUrl = UriFactory.CreateDatabaseUri(DatabaseId);
var naturalSiteCollectionUri = UriFactory.CreateDocumentCollectionUri(DatabaseId, NaturalSitesCollection);
 
// Create the database if it doesn't exist
client.CreateDatabaseIfNotExistsAsync(new Database { Id = DatabaseId }).Wait();
 
var naturalSitesCollection = new DocumentCollection { Id = NaturalSitesCollection };
 
// Create an indexing policy to make strings have a Ranged index.
var indexingPolicy = new IndexingPolicy();
indexingPolicy.IncludedPaths.Add(new IncludedPath
{
    Path = "/*",
    Indexes = new Collection<Microsoft.Azure.Documents.Index>()
    {
        new RangeIndex(DataType.String) { Precision = -1 }
    }
});
 
// Assign the index policy to our collection
naturalSitesCollection.IndexingPolicy = indexingPolicy;
 
// And create the collection if it doesn't exist
client.CreateDocumentCollectionIfNotExistsAsync(databaseUrl, naturalSitesCollection).Wait();
 
#endregion

The new code is much cleaner – no more UriFactories, and we again have keywords which are more relevant to Cosmos.

There are a few things I think are worth commenting on:

  • “Collections” are now “Containers” in the SDK, although they’re still Collections in the Data Explorer.
  • We can access the array of available databases from a “Databases” method accessible from the Cosmos client, and we can access available containers from a “Containers” method available from individual Cosmos databases. This object hierarchy makes much more sense to me than having to create everything from methods accessible from the DocumentClient in v2.2.2.
  • We now need to specify a partition key name for a container, whereas we didn’t need to do that in v2.2.2.
#region Create database, collection and indexing policy
 
// Create the database if it doesn't exist
CosmosDatabase database = _client.Databases.CreateDatabaseIfNotExistsAsync(DatabaseId).Result;
 
var containerSettings = new CosmosContainerSettings(NaturalSitesCollection, "/Name")
{
    // Assign the index policy to our container
    IndexingPolicy = new IndexingPolicy(new RangeIndex(DataType.String) { Precision = -1 })
};
 
CosmosContainer container = database.Containers.CreateContainerIfNotExistsAsync(containerSettings).Result;
 
#endregion

Saving items to a container

The code using the previous SDK is pretty clean already.

#region Create a sample document in our collection
 
// Let's instantiate a POCO with a local landmark
var giantsCauseway = new NaturalSite { Name = "Giant's Causeway" };
 
// Create the document in our database
client.CreateDocumentAsync(naturalSiteCollectionUri, giantsCauseway).Wait();
 
#endregion

But the new SDK improves on its predecessor by using a more logical object hierarchy – we create items from an “Items” array which is available from a container, and the naming conventions are also more consistent.

#region Create sample item in our container in SDK 3
 
// Let's instantiate a POCO with a local landmark
var giantsCauseway = new NaturalSite { Id = Guid.NewGuid().ToString(), Name = "Giant's Causeway" };
 
// Create the document in our database
container.Items.CreateItemAsync(giantsCauseway.Name, giantsCauseway).Wait();
 
#endregion

There are also a couple of changes in the SDK worth noting:

  • When creating the item, we also need to explicitly specify the value corresponding to the partition key.
  • My custom object now needs to have an Id property of type string, decorated as a JsonProperty in the way shown in the code below. I didn’t need this with the previous SDK.
public class NaturalSite
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }
 
    public string Name { get; set; }
}

Querying a collection/container for exact and partial string matches

Using SDK 2.2.2, my code can look something like the sample below – I’ve used a query facade and can take advantage of SDK 2.2.2’s LINQ querying function.

#region Query collection for exact matches
 
// Instantiate with the DocumentClient and database identifier
var cosmosQueryFacade = new CosmosQueryFacade<NaturalSite>
{
    DocumentClient = client,
    DatabaseId = DatabaseId,
    CollectionId = NaturalSitesCollection
};
 
// We can look for strings that exactly match a search string
var sites = cosmosQueryFacade.GetItemsAsync(m => m.Name == "Giant's Causeway").Result;
 
foreach (var site in sites)
{
    Console.WriteLine($"The natural site name is: {site.Name}");
}
 
#endregion

But in the new SDK v3, there’s presently no LINQ query function. It’s high on the team’s list of ‘things to do next’, and in the meantime I can use parameterized queries to achieve the same result.

#region Query collection for exact matches using SDK 3
 
// Or we can use the new SDK, which uses the CosmosSqlQueryDefinition object
var sql = new CosmosSqlQueryDefinition("Select * from Items i where i.Name = @name")
    .UseParameter("@name", "Giant's Causeway");
 
var setIterator = container.Items.CreateItemQuery<NaturalSite>(
                    sqlQueryDefinition: sql,
                    partitionKey: "Giant's Causeway");
 
while (setIterator.HasMoreResults)
{
    foreach (var site in setIterator.FetchNextSetAsync().Result)
    {
        Console.WriteLine($"The natural site name is: {site.Name}");
    }
}
 
#endregion

For partial string matches, previously I could use the built in LINQ functions as shown below.

#region Query collection for matches that start with our search string
 
// And we can search for strings that start with a search string,
// as long as we have strings set up to be Ranged Indexes
sites = cosmosQueryFacade.GetItemsAsync(m => m.Name.StartsWith("Giant")).Result;
 
foreach (var site in sites)
{
    Console.WriteLine($"The natural site name is: {site.Name}");
}
 
#endregion

And even though we don’t have LINQ functions yet in the new SDK v3, we can still achieve the same result with the SQL query shown in the code below.

#region Or query collection for matches that start with our search string using SDK 3
 
sql = new CosmosSqlQueryDefinition("SELECT * FROM Items i WHERE STARTSWITH(i.Name, @name)")
    .UseParameter("@name", "Giant");
 
setIterator = container.Items.CreateItemQuery<NaturalSite>(
    sqlQueryDefinition: sql,
    partitionKey: "Giant's Causeway");
 
while (setIterator.HasMoreResults)
{
    foreach (var site in setIterator.FetchNextSetAsync().Result)
    {
        Console.WriteLine($"The natural site name is: {site.Name}");
    }
}
 
#endregion

What I’d like to see next

The Cosmos team have said the SDK is a preview only – it’s not suitable for production use yet, even though it already has some very nice advantages over the previous SDK. I think the things I’d like to see in future iterations are:

  • LINQ querying – which I know is already on the backlog.
  • More support for “Request Unit” information, so I can get a little more insight into the cost of my queries – the sketch below shows the kind of insight I mean, using what the v2.2.2 SDK already exposes.
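To illustrate that second point, here’s a minimal sketch using the v2.2.2 client and POCO from earlier in this post – the v2 SDK surfaces a RequestCharge property on its response objects, and I’d like the v3 SDK to make this kind of information at least as easy to get at.

// SDK v2.2.2 - the response object reports the Request Unit cost of the operation
var itemResult = client.CreateDocumentAsync(naturalSiteCollectionUri, giantsCauseway).Result;
Console.WriteLine($"That write cost {itemResult.RequestCharge} RUs");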

Wrapping up

The new Cosmos SDK v3 looks really interesting – it allows me to write much cleaner code with clear intent. And even though it’s not production ready yet, I’m going to start trying to use it where I can so I’m ready to take advantage of the new features as soon as they’re more generally available, and supported. I hope this helps anyone else who’s thinking about trying out the new SDK – what would you like to see?

.net core, Azure, Cosmos, NoSQL

Getting started with Azure Cosmos DB and .NET Core: Part #2 – string querying and ranged indexes

Last time I scratched the surface of creating databases and collections in Azure Cosmos using the emulator and some C# code written using .NET Core. This time I’m going to dig a bit deeper into how to query these databases and collections with C#, and show a few code snippets that I’m using to help remove cruft from my classes. I’m also going to write a little about Indexing Policies and how to use them to do useful string comparison queries.

Initializing Databases and Collections

I use the DocumentClient object to create databases and collections, and previously I used the CreateDatabaseAsync and CreateDocumentCollectionAsync methods to create databases and document collections.

But after running my test project a few times it got a bit annoying to keep having to delete the database from my local Cosmos instance before running my code, or have the code throw an exception.

Fortunately I’ve discovered the Cosmos SDK has a nice solution for this – a couple of methods which are named CreateDatabaseIfNotExistsAsync and CreateDocumentCollectionIfNotExistsAsync.

string DatabaseId = "LocalLandmarks";
string NaturalSitesCollection = "NaturalSites";
 
var databaseUrl = UriFactory.CreateDatabaseUri(DatabaseId);
var collectionUri = UriFactory.CreateDocumentCollectionUri(DatabaseId, NaturalSitesCollection);
 
client.CreateDatabaseIfNotExistsAsync(new Database { Id = DatabaseId }).Wait();
 
client.CreateDocumentCollectionIfNotExistsAsync(databaseUrl, new DocumentCollection { Id = NaturalSitesCollection }).Wait();

Now I can initialize my code repeatedly without having to tear down my database or handle exceptions.

What about querying by something more useful than the document resource ID?

Last time I wrote some code that took a POCO and inserted it as a document into the Cosmos emulator.

// Let's instantiate a POCO with a local landmark
var giantsCauseway = new NaturalSite { Name = "Giant's Causeway" };
 
// Add this POCO as a document in Cosmos to our natural site collection
var collectionUri = UriFactory.CreateDocumentCollectionUri(DatabaseId, NaturalSitesCollection);
var itemResult = client.CreateDocumentAsync(collectionUri, giantsCauseway).Result;

Then I was able to query the database for that document using the document resource ID.

// Use the ID to retrieve the object we just created
var document = client
    .ReadDocumentAsync(
        UriFactory.CreateDocumentUri(DatabaseId, NaturalSitesCollection, itemResult.Resource.Id))
    .Result;

But that’s not really useful to me – I’d rather query by a property of the POCO. For example, I’d like to query by the Name property, perhaps with an object instantiation and method signature like the suggestion below:

// Instantiate with the DocumentClient and database identifier
var cosmosQueryFacade = new CosmosQueryFacade<NaturalSite>
{
    DocumentClient = client,
    DatabaseId = DatabaseId,
    CollectionId = NaturalSitesCollection
};
 
// Querying one collection
var sites = cosmosQueryFacade.GetItemsAsync(m => m.Name == "Giant's Causeway").Result;

There’s a really useful sample project available with the Cosmos emulator which provided some code that I’ve adapted – you can access it from the Quickstart screen in the Data Explorer (available at https://localhost:8081/_explorer/index.html after you start the emulator). The image below shows how I’ve accessed the sample, which is available by clicking on the “Download” button after selecting the .NET Core tab.

sampleapp

The code below shows a query facade class that I have created – I can instantiate the object with parameters like the Cosmos DocumentClient, and the database identifier.

I’m going to be enhancing this Facade over the next few posts in this series, including how to use the new version 3.0 of the Cosmos SDK which has recently entered public preview.

public class CosmosQueryFacade<T> where T : class
{
    public string CollectionId { get; set; }
 
    public string DatabaseId { get; set; }
 
    public DocumentClient DocumentClient { get; set; }
 
    public async Task<IEnumerable<T>> GetItemsAsync(Expression<Func<T, bool>> predicate)
    {
        var documentCollectionUrl = UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId);
 
        var query = DocumentClient.CreateDocumentQuery<T>(documentCollectionUrl)
            .Where(predicate)
            .AsDocumentQuery();
 
        var results = new List<T>();
 
        while (query.HasMoreResults)
        {
            results.AddRange(await query.ExecuteNextAsync<T>());
        }
 
        return results;
    }
}

This class lets me query when I know the full name of the site. But what happens if I want to do a different kind of query – instead of exact comparison, what about something like “StartsWith”?

// Querying using LINQ StartsWith  
var sites = cosmosQueryFacade.GetItemsAsync(m => m.Name.StartsWith("Giant")).Result;

If I run this, I get an error:

An invalid query has been specified with filters against path(s) 
that are not range-indexed. 
Consider adding allow scan header in the request.

What’s gone wrong? The clue is in the error message – I don’t have the right indexes applied to my collection.

Indexing Policies in Cosmos

From Wikipedia, an index is a data structure that improves the speed of data retrieval from a database. But as we’ve seen from the error above, in Cosmos it’s even more than this. Certain types of index won’t permit certain types of comparison operation, and when I tried to carry out that operation, by default I got an error (rather than just a slow response).

One of the really well publicised benefits of Cosmos is that documents added to collections in an Azure Cosmos database are automatically indexed. And while that’s extremely powerful and useful, it’s not magic – Cosmos can’t know what indexes match my specific business logic, and won’t add them.

There are three types of indexes in Cosmos:

  • Hash, used for:
    • Equality queries e.g. m => m.Name == “Giant’s Causeway”
  • Range, used for:
    • Equality queries,
    • Comparison within a range, e.g. m => m.Age > 5, or m => m.Name.StartsWith(“Giant”)
    • Ordering e.g. OrderBy(m => m.Name)
  • Spatial – used for geo-spatial data – more on this in future posts.

So I’ve created a collection called “NaturalSites” in my Cosmos emulator, and added some data to it – but how can I find out what the present indexing policy is? That’s pretty straightforward – it’s all in the Data Explorer again. Go to the Explorer tab, expand the database to see its collections, and then click on the “Scale & settings” menu item – this will show you the indexing policy for the collection.

indexes

When I created the database and collection from C#, the indexing policy created by default is shown below:

{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*",
      "indexes": [
        {
          "kind": "Range",
          "dataType": "Number",
          "precision": -1
        },
        {
          "kind": "Hash",
          "dataType": "String",
          "precision": 3
        }
      ]
    }
  ],
  "excludedPaths": []
}

I can see that in the list of indexes for my collection, the dataType of String has an index of kind Hash. We know this index is good for equality comparisons, but as the error message from before suggests, we need this to be a Range index to be able to do more complex comparisons than just equality between two strings.

I can modify the index policy for the collection in C#, as shown below:

// Set up Uris to create database and collection
var databaseUri = UriFactory.CreateDatabaseUri(DatabaseId);
var constructedSiteCollectionUri = UriFactory.CreateDocumentCollectionUri(DatabaseId, ConstructedSitesCollection);
 
// Create the database
client.CreateDatabaseIfNotExistsAsync(new Database { Id = DatabaseId }).Wait();
 
// Create a document collection
var naturalSitesCollection = new DocumentCollection { Id = NaturalSitesCollection };
// Now create the policy to make strings a Ranged index
var indexingPolicy = new IndexingPolicy();
indexingPolicy.IncludedPaths.Add(new IncludedPath
{
    Path = "/*",
    Indexes = new Collection<Microsoft.Azure.Documents.Index>()
    {
        new RangeIndex(DataType.String) { Precision = -1 }
    }
});

// Now assign the policy to the document collection
naturalSitesCollection.IndexingPolicy = indexingPolicy;
 
// And finally create the document collection
client.CreateDocumentCollectionIfNotExistsAsync(databaseUri, naturalSitesCollection).Wait();

And now if I inspect the Data Explorer for this collection, the index policy created is shown below. As you can see, the kind of index now used for comparing the dataType String is a Range.

{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/*",
      "indexes": [
        {
          "kind": "Range",
          "dataType": "String",
          "precision": -1
        },
        {
          "kind": "Range",
          "dataType": "Number",
          "precision": -1
        }
      ]
    }
  ],
  "excludedPaths": []
}

So when I run the code below to look for sites that start with “Giant”, the code now works and returns objects rather than throwing an exception.

var sites = cosmosQueryFacade.GetItemsAsync(m => m.Name.StartsWith("Giant")).Result;

There are many more indexing examples here if you’re interested.

Wrapping up

I’ve taken a small step beyond the previous part of this tutorial, and I’m now able to query for strings that exactly and partially match values in my Cosmos database. As usual I’ve uploaded my code to GitHub and you can pull the code from here. Next time I’m going to try to convert my code to the new version of the SDK, which is now in public preview.


.net core, Azure, Cosmos

Getting started with Azure Cosmos DB and .NET Core: Part #1 – Installing the Cosmos emulator, writing and reading data

I’d like to start using Cosmos, and I have a bunch of questions about it – how to create databases, how to write to it and read from it, how I can use attachments and spatial data, how I can secure it, how I can test the code that uses it…and lots more. So I’m going to write a few posts over the coming weeks which hopefully will answer these questions, starting with some basics and moving to more advanced topics in later posts.

Can I trial Cosmos to help me understand it a bit more?

Fortunately Microsoft has an answer for this – they’ve provided a Cosmos emulator, and I can trial Cosmos without going near the Azure cloud.

The official Microsoft docs on the Cosmos Emulator are fantastic – you can install it locally or use a Docker image.

My own preference is to use the installer. I’ve tried using the Docker image, and this needs to download a Windows container totalling well over 5GB – which can take a long time.

docker_pull
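For reference, pulling the emulator image looked something like the command below when I tried it – I’m quoting the image name from memory, so check the current documentation in case it has changed:

docker pull microsoft/azure-cosmosdb-emulator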

The emulator’s installer is only about 50MB and I was able to get up and running with this a lot faster than with Docker containers. There were some snags when I installed it – after trying to run it for the first time, I got this message:

cosmos installer

But this was pretty easy to work around by just following the instruction in the message and running the emulator with the NoFirewall option:

Microsoft.Azure.Cosmos.Emulator.exe /NoFirewall

I prefer to manage the emulator from PowerShell – to do this, after installing the emulator I run the PowerShell command below to import the emulator’s module, which gives me some useful management commands.

Import-Module "$env:ProgramFiles\Azure Cosmos DB Emulator\PSModules\Microsoft.Azure.CosmosDB.Emulator"

And now I can control the emulator with the module’s built-in PowerShell commands.

powershell cosmos emulator control
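For example, the module includes cmdlets along the lines of those below – I’m quoting these from memory, so run Get-Command -Module Microsoft.Azure.CosmosDB.Emulator to see the definitive list on your machine.

# Check whether the emulator is Starting, Running or Stopped
Get-CosmosDbEmulatorStatus
 
# Stop and restart the emulator
Stop-CosmosDbEmulator
Start-CosmosDbEmulator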

The Cosmos Emulator’s Local Data Explorer

When I’ve started the emulator, I can browse to the URL below:

https://localhost:8081/_explorer/index.html

This opens the Emulator’s Data Explorer, which has some quickstart connection information, like connection strings and samples:

emulator control panel

But more interestingly, I can also browse to a data explorer which allows me to browse databases in my Cosmos emulator, and collections within these databases, using a SQL-like language. Of course after I install the emulator, there are no databases or collections – but let’s start writing some .NET Core code to change that.

data explorer
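Once the database and collection exist (we’ll create them with code below), a simple Data Explorer query against the collection might look like the SQL below – ‘c’ is just an alias for the documents in the collection:

SELECT * FROM c WHERE c.Name = "Giant's Causeway"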

Let’s write to, and read from, some Cosmos Databases and Collections with .NET Core

I’m going to write a very simple application to interact with the Cosmos Emulator. This isn’t production ready code – this is just to examine how we might carry out some common database operations using .NET Core and Azure Cosmos.

I’m using Visual Studio 2019 with the .NET Core 3.0 preview (3.0.100-preview-010184), and I’ve created an empty .NET Core Console application.

My sample application will be to store information about interesting places near me – so I’ve chosen to create a Cosmos database with the title “LocalLandmarks”. I’m going to create a collection in this database for natural landmarks, and in this first blog I’m only going to store the landmark name.

From my application, I need to install a NuGet package to access the Azure Cosmos libraries.

Install-Package Microsoft.Azure.DocumentDB.Core

First let’s set up some parameters and objects:

  • Our Cosmos Emulator endpoint is just https://localhost:8081;
  • We know from the Data Explorer that the emulator key (which is the same for everyone that uses the emulator) is:
C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
  • I’m going to call my database “LocalLandmarks”;
  • I’m going to call the collection of natural landmarks “NaturalSites”;
  • My POCO for natural landmarks can be very simple for now:
namespace CosmosEmulatorSample
{
    public class NaturalSite
    {
        public string Name { get; set; }
    }
}

So I can specify a few static readonly strings for my application:

private static readonly string CosmosEndpoint = "https://localhost:8081";
private static readonly string EmulatorKey = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==";
private static readonly string DatabaseId = "LocalLandmarks";
private static readonly string NaturalSitesCollection = "NaturalSites";

We can create a client to connect to our Cosmos Emulator using our specified parameters and the code below:

// Create the client connection
var client = new DocumentClient(
    new Uri(CosmosEndpoint), 
    EmulatorKey, 
    new ConnectionPolicy
    {
        ConnectionMode = ConnectionMode.Direct,
        ConnectionProtocol = Protocol.Tcp
    });

And now using this client we can create our “LocalLandmarks” database.

I’ve used the “Result” property to make many of the asynchronous functions synchronous, for simplicity in this introductory post.

// Create a new database in Cosmos
var databaseCreationResult = client.CreateDatabaseAsync(new Database { Id = DatabaseId }).Result;
Console.WriteLine("The database Id created is: " + databaseCreationResult.Resource.Id);

Within this database, we can also create a collection to store our natural landmarks.

// Now initialize a new collection for our objects to live inside
var collectionCreationResult = client.CreateDocumentCollectionAsync(
    UriFactory.CreateDatabaseUri(DatabaseId),
    new DocumentCollection { Id = NaturalSitesCollection }).Result;
 
Console.WriteLine("The collection created has the ID: " + collectionCreationResult.Resource.Id);

So let’s declare and initialize a NaturalSite object – an example of a natural landmark near me is the Giant’s Causeway.

// Let's instantiate a POCO with a local landmark
var giantsCauseway = new NaturalSite { Name = "Giant's Causeway" };

And I can pass this object to the Cosmos client’s “CreateDocumentAsync” method to write this to Cosmos, and I can specify the database and collection that I’m targeting in this method also.

// Add this POCO as a document in Cosmos to our natural site collection
var itemResult = client
    .CreateDocumentAsync(
        UriFactory.CreateDocumentCollectionUri(DatabaseId, NaturalSitesCollection), giantsCauseway)
    .Result;
 
Console.WriteLine("The document has been created with the ID: " + itemResult.Resource.Id);

At this point I could look at the Cosmos Emulator’s Data Explorer and see this in my database, as shown below:

cosmos with data

Finally I can read back from this NaturalSite collection by ID – I know the ID of the document I just created in Cosmos, so I can just call the Cosmos client’s “ReadDocumentAsync” method and specify the database Id, the collection I want to search in, and the document Id that I want to retrieve. I convert the results to a NaturalSite POCO, and then I can read properties back from it.

// Use the ID to retrieve the object we just created
var document = client
    .ReadDocumentAsync(
        UriFactory.CreateDocumentUri(DatabaseId, NaturalSitesCollection, itemResult.Resource.Id))
    .Result;
 
// Convert the document resource returned to a NaturalSite POCO
NaturalSite site = (dynamic)document.Resource;
 
Console.WriteLine("The returned document is a natural landmark with name: " + site.Name);

I’ve uploaded this code to GitHub here.

Wrapping up

In this post, I’ve written about the Azure Cosmos emulator which I’ve used to experiment with coding for Cosmos. I’ve written a little bit of very basic C# code which uses the Cosmos SDK to create databases and collections, write to these collections, and also read documents from collections by primary key. Of course this query might not be that useful – we probably don’t know the IDs of the documents saved to the database (and probably don’t care either as it’s non-semantic). In the next part of this series, I’ll write about querying Cosmos documents by object properties using .NET.



.net, .net core, Azure, Azure DevOps, Azure DevOps Boards, Flurl

How to delete a TestCase from Azure DevOps boards using .NET, Flurl and the Azure DevOps Restful API

So here’s a problem…

I’ve been working with Azure DevOps and finding it really great – but every now and again I hit a roadblock and feel like I’m on the edge of what’s possible with the platform.

For instance – I’ve loaded work items and test cases into my development instance for some analysis before going to my production instance, and now I’d like to delete all of them. Sounds like a simple thing to do from the UI – I’ve selected multiple work items before, clicked on the ellipsis on one item and selected ‘Delete’ from the menu that appears.

bulk delete

Except that sometimes it doesn’t work out like that. Where’s the delete option gone in the menu below?

bulk delete fail

Right now you can only delete one test case at a time through the Azure DevOps web user interface

You can only delete test cases one at a time through the Azure DevOps web UI at the moment, and you can’t do it from the WorkItem list view. To delete a test case, select the test case to display its detailed view, and then select the ellipsis at the top right of this view to reveal an action menu (as shown below). Select the option with the text ‘Permanently delete’.

delete test case

Then you’ll be presented with a dialog asking you to confirm the deletion and enter the Test Case ID to confirm your intent.

perm delete

This is a lot of work if you’ve got a few (or a few hundred) test cases to delete.

Fortunately, this isn’t the only option available – I can .NET my way out of trouble.

You also can use .NET, Flurl and the Azure DevOps Restful API to delete test cases

Azure DevOps also provides a Restful interface which has comprehensive coverage of the functions available through the web UI – and sometimes a bit more. This is one of those instances where using the Restful API gives me the flexibility that I’m looking for.

I’ve previously written about using libraries with .NET to simplify accessing Restful interfaces – one of my favourite libraries is Flurl, because it makes it really easy for me to construct a URI endpoint and call Restful verbs in a fluent way.

The code below shows a .NET method where I’ve called the Delete verb on a Restful endpoint – this allows me to delete test cases by Id from my Azure DevOps Board.

using System.Net.Http;
using System.Threading.Tasks;
using Flurl;
using Flurl.Http;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    public class TestCaseProcessor
    {
        public static async Task<HttpResponseMessage> Delete(int id, string projectUri, string projectName,
            string personalAccessToken)
        {
            var deleteUri = Url.Combine(projectUri, projectName, "_apis/test/testcases/", id.ToString(),
                "?api-version=5.0-preview.1");
 
            var responseMessage = await deleteUri
                .WithBasicAuth(string.Empty, personalAccessToken)
                .DeleteAsync();
 
            return responseMessage;
        }
    }
}

And it’s really easy to call this method, as shown in the code below – in addition to the test case ID, I just need to provide my Azure DevOps URI, my project name and a personal access token.

using System;
 
namespace DeleteTestCasesFromAzureDevOpsApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string testToken = "[[my personal access token]]";
            const string projectName = "Corvette";
            const int testCaseToDelete = 124;
 
            var responseMessage = TestCaseProcessor.Delete(testCaseToDelete, uri, projectName, testToken).Result;
 
            Console.WriteLine("Response code: " + responseMessage.StatusCode);
        }
    }
}

So now if I want to delete test cases in bulk, I just need to iterate through the list of IDs and call this method for each test case ID – which is much faster than deleting many test cases one by one through the UI. There’s a sketch of that loop below.
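This is just a sketch – the array of IDs is hypothetical, and in practice the IDs would probably come from a file or a query rather than being hard-coded.

// Hypothetical list of test case IDs to delete in bulk
var testCaseIds = new[] { 124, 125, 126 };
 
foreach (var testCaseId in testCaseIds)
{
    var response = TestCaseProcessor.Delete(testCaseId, uri, projectName, testToken).Result;
    Console.WriteLine($"Deleted test case {testCaseId}: {response.StatusCode}");
}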

Wrapping up

Deleting test cases from Azure DevOps is a bit more difficult through the web UI than deleting other types of WorkItems – fortunately the Restful interface is available, and I can use it from a .NET application to delete test cases quickly and easily. Hopefully this is useful to anyone who’s working with Azure DevOps Boards and needs to delete test cases.

.net, Azure, C# tip

Setting up relationships between work items on Azure DevOps boards, and using .NET to read these relationships

So here’s a question…

How do you set up relationships between work items, and display this relationship in Azure DevOps?

There’s a well-known relationship between epics, features, user stories and tasks/bugs in the Agile process, but on the ‘Work Items’ screen, Azure DevOps lists them without showing the relationship – like in the screen below.

workliist

Display relationships using the Backlogs view in Azure DevOps Boards

The thing is that the Work Items screen just lists all the work items – I prefer to view my work items in the Backlogs screen which visually represents the relationship between them.

But how do you set up relationships between work items?

Let’s work an example through from the start. I set up a new project in Azure DevOps – once I created the project, I’m shown the Summary screen, and for me this looks like the screenshot below.

project home

On the left hand navigation menu, I click on the ‘Boards’ item, and when this expands, I select the ‘Backlogs’ sub-item. This presents me with a screen where I’ve a few options, like the one at the top left – ‘New Work Item’.

backlog view

But when I click on this ‘New Work Item’ button, the pop-up only allows me to create a work item with type of ‘User Story’ (as shown below). This is not what I want – I want to create an Epic.

user story

To do this, I have to change some defaults in my Project Settings. At the bottom left of my screen, I click on the ‘Project settings’ button, and then select the ‘Team configuration’ sub-menu item which sits under the ‘Boards’ menu heading. This shows me a screen like the one below.

team configuration

There’s a section on this page which shows me the navigation levels available to me – by default, I don’t have the Epics checkbox ticked. So I can just tick the box as shown below to make this available. No need to click save anywhere – that setting is automatically saved back to the cloud.

backlogs with epic

Now if I go back to the ‘Backlogs’ menu item under ‘Boards’ in my project’s left hand navigation menu, I need to select the dropdown list in the top right of the screen – my default setting here is ‘Stories’, but I can open the menu and now choose ‘Epics’ instead.

backlogs with epics dropdown

Now when I click on the ‘New Work Item’ button, I can create an Epic, and enter in the epic’s title, as shown below.

my epic title 2

And I’ve created my first epic in an Azure DevOps Board!

Ok, but what about nesting other items under that Epic?

There are a few different ways, but it’s straightforward (when you know how) – the way I like to do this is by selecting the ‘+’ button on the right hand side of my Epic. If you hover over this ‘+’ button, a tooltip appears that says ‘Add Feature’, and clicking on the button does exactly that.

add feature hover

A large dialog appears once you’ve clicked ‘+’ where you can add feature details – and note that in the bottom right of this dialog, there’s a ‘Related Work’ section, that shows the Epic we previously created as a parent.

new features

After clicking the blue ‘Save & Close’ button on the top right of the New Feature dialog, you’ll be taken back to the project board’s Backlog view, and you can see the feature that you just created below the epic we created previously, and it’s indented one place to the right to visually represent the parent-child relationship, as shown below.

add user story dropdown

And if you hover over the ‘+’ button to the left of the feature you just created, you’ll see the hint that this button now allows you to create a new user story. So if you click on ‘+’, you’ll have a similar experience to before, except the dialog that pops up is for a work item type of ‘User Story’. And you can see the relationship between this and the parent feature again by looking in the bottom right corner of the dialog, in the ‘Related Work’ section.

my new user story

And just to finish off this section, when you save that user story you’ll be taken to the backlog screen, and again see the user story sitting below its parent feature, indented one place to the right, as shown below. From this user story, you can click on the ‘+’ button on the left, and this time you’ve got a couple of options – either create a bug or a task with that user story as a parent.

show task and bug

I went ahead and created a task and a bug – the experience of creating them is identical to before, where a dialog pops up with the type you select, and any existing related work detailed in the bottom right of the dialog box. So the image below shows my 5 new work items (an epic, a feature, a story, a task and a bug), and it’s easy to see the relationship between them by how they’re indented relative to each other.

indented

What about getting these items in .NET – how do I find out what items are related to?

I’ve previously written about creating Azure DevOps work items using the .NET framework, and you can re-apply some of the same principles to read work items into .NET objects.

I created a .NET Framework console app and installed the required NuGet packages using:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

Then I used the code below to read back information about item 64 in my backlog – this is a user story which has a parent feature, and two children – a task and a bug.

So I expected the code below to tell me that there was a list of three relations in the workItem.Relations property.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // uh oh - reports there are zero relations!
        }
    }
}

But this code doesn’t show relationships – why isn’t it working?

It turns out that the GetWorkItemAsync method doesn’t return relations by default. Instead, the GetWorkItemAsync method has an overload where you can specify what extra information to return using a WorkItemExpand enumeration. In the code below I’ve chosen to return everything using:

expand: WorkItemExpand.All

But if I only wanted to return relations I could use:

expand: WorkItemExpand.Relations

The code below now correctly reports there are three items related to workitem 64.

using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi.Models;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using System;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            // my unique organization's Azure DevOps uri
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
 
            var credentials = new VssBasicCredential(string.Empty, personalAccessToken);
 
            // connect to Azure DevOps
            var connection = new VssConnection(new Uri(uri), credentials);
            var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
 
            // get information about workitem 64
            const int workItemId = 64;
            var workItem = workItemTrackingHttpClient.GetWorkItemAsync(workItemId, expand: WorkItemExpand.All).Result;
 
            // get relations
            var relations = workItem.Relations;
            Console.WriteLine(relations.Count); // now correctly reports there are 3 relations
        }
    }
}

The relations list now correctly reports the three related work items.

Wrapping up

This post has been about how to create work items with a hierarchical relationship using the Azure DevOps web user interface, and view them using the Backlog view in Azure DevOps Boards. I’ve also written about how to read these items and relationships between them using the .NET framework – I hope this helps!



.net, Azure

Load your work items into Azure DevOps Boards with .NET

This post is about how to write a .NET application to move workitems from another source (e.g. JIRA, Excel etc) into Azure Boards in Azure DevOps, and a Nuget package I’ve built to hopefully make it a bit easier for anyone else doing this as well.

So here’s a problem…

Let’s say you’ve convinced your boss to move your projects to Azure DevOps – great! You’re happy, and your team are happy, but before you can really start, there’s still some work to be done – migration of all the historical project data from your existing company systems…

Maybe your company has its own custom story/issue/bug tracking system (maybe it’s JIRA, maybe it’s Mantis, or something else), and you don’t want to lose or archive all that valuable content. You want to load all that content in your project’s Azure Board as well – how do you do that?

Use .NET with Azure Boards to solve this problem

I had exactly this problem recently – my project’s history was exported into one big CSV file, and I needed to get it into Azure Boards. I had loads of fields which I needed to keep, and I didn’t want to lose all this…

…so I ‘.NET’ted my way out of trouble.

A bit of searching on the internet also led me to the option of bulk loading using Excel and the TFS Standalone Office Integration pack, but I’m a programmer and I prefer the flexibility of using code. Though, y’know, YMMV.

excel link

First I created a .NET Framework console application, and added a couple of NuGet packages for Azure DevOps:

Install-Package Microsoft.TeamFoundationServer.Client
Install-Package Microsoft.VisualStudio.Services.Client

These are both projects that target .NET Framework, so I can’t use .NET Core for this yet.

With these included in my application, I now have access to objects which allow me to connect to Azure DevOps through .NET, and also connect to a work item client that allows me to perform create/read/update/delete operations on work items in my project’s board.

It’s pretty easy to load up my project history CSV into a list in a .NET application (there’s a sketch of this below), so I knew I had all the puzzle pieces to solve this problem – I just needed to put them together.
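As an illustration only – the LegacyWorkItem class and the two-column file layout here are invented for this sketch, and a real export would justify a proper CSV parsing library rather than a naive Split:

using System.IO;
using System.Linq;
 
public class LegacyWorkItem
{
    public string Title { get; set; }
    public string Description { get; set; }
}
 
// Naive CSV load - assumes no commas or quotes inside field values
var workItems = File.ReadAllLines("history.csv")
    .Skip(1) // skip the header row
    .Select(line => line.Split(','))
    .Select(fields => new LegacyWorkItem
    {
        Title = fields[0],
        Description = fields[1]
    })
    .ToList();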

In order to connect to Azure DevOps and add items using .NET, I used:

  • The name of the project I want to add work items to – my project codename is “Corvette”
  • The Url of my Azure DevOps instance – http://dev.azure.com/jeremylindsay
  • My personal access token.

If you’ve not generated a personal access token in Azure DevOps before, check this link out for details on how to do it – it’s really straightforward from the Azure DevOps portal:

https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=vsts

I can now use the code below to connect to Azure DevOps and create a work item client.

var uri = new Uri("http://dev.azure.com/jeremylindsay");
var personalAccessToken = "[***my access token***]";
var projectName = "Corvette";
 
var credentials = new VssBasicCredential("", personalAccessToken);
 
var connection = new VssConnection(uri, credentials);
var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();

Next, I need to create what is basically a list of name and value pairs which describes the name of the work item field (e.g. title, description etc), and the value that I want to put in that field.

This link below describes the fields you can access through code:

https://docs.microsoft.com/en-us/azure/devops/reference/xml/reportable-fields-reference?view=vsts

It’s a little bit more complex than normal dictionaries or other key-value pair objects in .NET, but not that difficult. The work item client uses custom objects called JsonPatchDocuments and JsonPatchOperations. Also, the names of the fields are not intuitive out of the box, but given all that, I can still create a work item in .NET using the code below:

var bug = new JsonPatchDocument
{
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/System.Title",
        Value = "Spelling mistake on the home page"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.TCM.ReproSteps",
        Value = "Log in, look at the home page - there is a spelling mistake."
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Priority",
        Value = "1"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Severity",
        Value = "2 - High"
    }
};

Then I can add the bug to my Board with the code below:

workItemTrackingHttpClient.CreateWorkItemAsync(bug, projectName, "Bug").Result;

Now this works and is very flexible, but I think my code could be made more readable and easy to use. So I refactored the code, moved most of it into a library, and uploaded it to NuGet here. My refactoring is pretty simple – I’m not going to go into lots of detail on how I did it, but if you’re interested the code is up on GitHub here.

If you’d like to get this package, you can use the command below

Install-Package AzureDevOpsBoardsCustomWorkItemObjects -pre

This package depends on the two NuGet packages I referred to earlier in this post, so they’ll be added automatically if you install my NuGet package.

This allows us to instantiate a bug object in a way that looks much more like creating a normal POCO, as shown below:

var bug = new AzureDevOpsBug
{
    Title = "Spelling mistake on the home page",
    ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
    Priority = AzureDevOpsWorkItemPriority.Medium,
    Severity = AzureDevOpsWorkItemSeverity.Low,
    AssignedTo = "Jeremy Lindsay",
    Comment = "First comment from me",
    Activity = "Development",
    AcceptanceCriteria = "This is the acceptance criteria",
    SystemInformation = "This is the system information",
    Effort = 13,
    Tag = "Cosmetic; UI Only"
};

And to push this bug to my Azure Board, I can use the code below which is a little simpler than what I wrote previously.

using AzureDevOpsCustomObjects;
using AzureDevOpsCustomObjects.Enumerations;
using AzureDevOpsCustomObjects.WorkItems;
 
namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "https://dev.azure.com/jeremylindsay";
            const string personalAccessToken = "[[***my personal access token***]]";
            const string projectName = "Corvette";
 
            var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
            var bug = new AzureDevOpsBug
            {
                Title = "Spelling mistake on the home page",
                ReproSteps = "Log in, look at the home page - there is a spelling mistake.",
                Priority = AzureDevOpsWorkItemPriority.Medium,
                Severity = AzureDevOpsWorkItemSeverity.Low,
                AssignedTo = "Jeremy Lindsay",
                Comment = "First comment from me",
                Activity = "Development",
                AcceptanceCriteria = "This is the acceptance criteria",
                SystemInformation = "This is the system information",
                Effort = 13,
                Tag = "Cosmetic; UI Only"
            };
 
            var createdBug = workItemCreator.Create(bug);
        }
    }
}

I’ve chosen to instantiate the bug with hard-coded text in the example above for clarity – but obviously you can instantiate the POCO any way you like, for example from a database, or perhaps parsing data out of a CSV file.

Anyway, the image below shows the bug added to my Azure Board.

bug

Of course, Bugs are not the only type of work item – let’s say I want to add Product Backlog Items also. And there are many, many different fields used in Azure Boards, and I haven’t coded for all of them in my NuGet package. So:

  • I’ve also added a Product Backlog object into my NuGet package,
  • I’ve made the creation method generic so it can detect the object type and work out what type of work item is being added to the Board
  • I’ve made the work item objects extensible so users can add any fields which I haven’t coded for yet.

For example, the code below shows how to add a product backlog item and include a comment in the System.History field:

private static void Main(string[] args)
{
    const string uri = "https://dev.azure.com/jeremylindsay";
    const string personalAccessToken = "[[***my personal access token***]]";
    const string projectName = "Corvette";
 
    var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);
 
    var productBacklogItem = new AzureDevOpsProductBacklogItem
    {
        Title = "Add reports for how many users log in each day",
        Description = "Need a new report with log in statistics.",
        Priority = AzureDevOpsWorkItemPriority.Low,
        Severity = AzureDevOpsWorkItemSeverity.Low,
        AssignedTo = "Jeremy Lindsay",
        Activity = "Development",
        AcceptanceCriteria = "This is the acceptance criteria",
        SystemInformation = "This is the system information",
        Effort = 13,
        Tag = "Reporting; Users"
    };
 
    productBacklogItem.Add(
        new JsonPatchOperation
        {
            Path = "/fields/System.History",
            Value = "Comment from product owner."
        }
    );
 
    var createdBacklogItem = workItemCreator.Create(productBacklogItem);
}

Obviously I can change the code to allow addition of comments through a property in the AzureDevOpsProductBacklogItem POCO, but this is just an example to demonstrate how it can be done by adding a JsonPatchOperation.

The image below shows the product backlog item successfully added to my Azure Board.

bug

Wrapping up

The Boards component of Azure DevOps is a useful and effective way to track your team’s work items. And if you want to populate a new Board with a list of existing bugs or backlog items, you can do this with .NET. I guess a lot of these functions aren’t new – they were available in VSTS – but it’s still nice to see these powerful functions and libraries continue to be supported. And hopefully the NuGet package I’ve created to assist in the process will be useful to some of you who are working through the same migration challenges that I am. Obviously this NuGet package can still be improved a lot – it just covers Backlog Items and Bugs right now, and it’d be better if it flagged those fields that are read-only – but it’s good enough to meet minimum viable standards for me right now, and maybe it’ll be helpful for you too.



Azure, GIS, Leaflet, Security, Web Development

Getting started with Azure Maps, using Leaflet to display roads and satellite images, and comparing different browser behaviours

In this post I’m going to describe how to use the new(ish) Azure Maps service from Microsoft with the Leaflet JavaScript library. Azure Maps provides its own API for Geoservices, but I have an existing application that uses Leaflet, and I wanted to try out using the Azure Maps tiling services.

Rather than just replicating the example that already exists on the excellent Azure Maps Code Samples site, I’ll go a bit further:

  • I’ll show how to display both the tiles with roads and those with aerial images
  • I’ll show how to switch between the layers using a UI component on the map
  • I’ll show how Leaflet can identify your present location
  • And I’ll talk about my experiences of location accuracy in Chrome, Firefox and Edge browsers on my desktop machine.

As usual, I’ve made my code open source and posted it to GitHub here.

First, use your Azure account to get your map API Key

I won’t go into lots of detail about this part – Microsoft have documented the process very well here. In summary:

  • If you don’t have an Azure account, there are instructions here on how to create one.
  • Create a Maps account within the Azure Portal and get your API Key (instructions here).

Once you have set up a resource group in Azure to manage your mapping services, you’ll be able to track usage and errors through the portal – I’ve pasted graphs of my usage and recorded errors below.

graphs

You’ll use this API Key to identify yourself to the Azure Maps tiling service. Azure Maps is not a free service – pricing information is here – although presently on the S0 tier there is an included free quantity of tiles and services.

API Key security is one key area of Azure Maps that I would like to be enhanced – the API Key has to be rendered on the client somewhere in plain text and then passed back to the maps API. Even with HTTPS, the API Key could be easily intercepted by someone viewing the page source, or using a tool to read outgoing requests.

Many other tiling services use CORS to restrict which domains can make requests, but:

  • Azure Maps doesn’t do this at the time of writing, and
  • This isn’t real security, because the Origin header can be easily modified (I know it’s a forbidden header name for a browser, but tools like curl can spoof the Origin – there’s a sketch of this below). More discussion here and here.
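To illustrate the point, the command below claims an arbitrary Origin – the header value is a placeholder, and I’ve truncated the query string for readability:

# A browser won't let page script set the Origin header, but curl will send whatever we like
curl -H "Origin: https://any-domain-we-like.com" "https://atlas.microsoft.com/map/tile/png?api-version=1&..."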

So this isn’t a solved problem yet – I’d recommend you consider how you use your API Key very carefully and bear in mind that if you expose it on the internet you’re leaving your account open to abuse. There’s an open issue about this raised on GitHub and hopefully there will be an announcement soon.

Next, set up your web page to use the Leaflet JS library

There’s a very helpful ‘getting started‘ tutorial on the Leaflet website – I added the stylesheet and javascript to my webpage’s head using the code below.

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.1/dist/leaflet.css"
      integrity="sha512-Rksm5RenBEKSKFjgI3a41vrjkw4EVPlJ3+OiI65vTjIdo9brlAacEuKOiQ5OFh7cOI1bkDwLqdLw3Zg0cRJAAQ=="
      crossorigin="" />
 
<script src="https://unpkg.com/leaflet@1.3.1/dist/leaflet.js"
        integrity="sha512-/Nsx9X4HebavoBvEBuyp3I7od5tA0UzAxs+j83KgC8PU0kgB4XiK4Lfe4y4cgBtaRJQEIFCW+oC506aPT2L1zw=="
        crossorigin=""></script>

Now add the URLs to your JavaScript to access the tiling services

I’ve included some very simple JavaScript code below for accessing two Azure Maps services – the tiles which display roadmaps and also those which have satellite images.

function satelliteImageryUrl() {
    return "https://atlas.microsoft.com/map/imagery/png?api-version=1&style=satellite&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}
 
function roadMapTilesUrl() {
    return "https://atlas.microsoft.com/map/tile/png?api-version=1&layer=basic&style=main&TileFormat=pbf&tileSize=512&zoom={z}&x={x}&y={y}&subscription-key={subscriptionKey}";
}

If you’re interested in reading more about these two tiling services, there’s more about the road map service here and more about the satellite image service here.

Now add the tiling layers to Leaflet and create the map

I’ve written a JavaScript function below which registers the two tiling layers (satellite and roads) with Leaflet. It also instantiates the map object, and attempts to identify the user’s location from the browser. Finally it registers a control which will appear on the map and list the available tiling services, allowing me to toggle between them on the fly.

var map;
 
function GetMap() {
    var subscriptionKey = '[[[**YOUR API KEY HERE**]]]';
 
    var satellite = L.tileLayer(satelliteImageryUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureSatelliteMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });
 
    var roads = L.tileLayer(roadMapTilesUrl(), {
        attribution: '© ' + new Date().getFullYear() + ' Microsoft, © 1992 - ' + new Date().getFullYear() + ' TomTom',
        maxZoom: 18,
        tileSize: 512,
        zoomOffset: -1,
        id: 'azureRoadMaps',
        crossOrigin: true,
        subscriptionKey: subscriptionKey
    });
 
    // instantiate the map object and display the 'roads' layer
    map = L.map('myMap', { layers: [roads] });
 
    // attempt to identify the user's location from the browser
    map.locate({ setView: true, enableHighAccuracy: true });
    map.on('locationfound', onLocationFound);
 
    // create an array of the tiling base layers and their 'friendly' names
    var baseMaps = {
        "Azure Satellite Imagery": satellite,
        "Azure Roads": roads
    };
 
    // add a control to map (top-right by default) allowing the user to toggle the layer
    L.control.layers(baseMaps, null, { collapsed: false }).addTo(map);
}

Finally, I’ve added a div to my page which specifies the size of the map, gives it the Id “myMap” (which I’ve used in the JavaScript above when instantiating the map object), and I call the GetMap() method when the page loads.

<body onload="GetMap()">
    <div id="myMap" style="position:relative;width:900px;height:600px;"></div>
</body>

If the browser GeoServices have identified my location, I’ll also be given an accuracy in meters – the JavaScript below allows me to draw a circle on my map to indicate where the browser believes my location to be.

map.on('locationfound', onLocationFound);
 
function onLocationFound(e) {
    var radius = e.accuracy / 2;
 
    L.marker(e.latlng)
        .addTo(map)
        .bindPopup("You are within " + radius + " meters from this point")
        .openPopup();
 
    L.circle(e.latlng, radius).addTo(map);
}

And I’ve taken some screenshots of the results below – first of all the results in the MS Edge browser showing roads and landmarks near my location…

roads

…and swapping to the satellite imagery using the control at the top right of the map.

satellite

Results in Firefox and Chrome

When I ran this in Firefox and Chrome, I found that my location was identified with much less accuracy. I know both of these browsers use the Google GeoLocation API and MS Edge uses the Windows Location API so this might account for the difference on my machine (Windows 10), but I’d need to do more experimentation to better understand. Obviously my laptop machine doesn’t have GPS hardware, so testing on a mobile phone probably would give very different results.

roads2

Wrapping up

We’ve seen how to use the Azure Maps tiling services with the Leaflet JS library, and create a very basic web application which uses the Azure Maps tiling services to display both road and landmark data, and also satellite aerial imagery. It seems to me that MS Edge is able to identify my location much more accurately on a desktop machine than Firefox or Chrome on my Windows 10 machine (within a 75m radius on Edge, and over 3.114km radius on Firefox and Chrome) – however, your mileage may vary.

Finally, as I emphasised above, I’ve concerns about the security of a production application using an API Key in plain text inside my JavaScript, and hopefully Microsoft will deploy a solution with improved security soon.

