The posts tagged with 'Azure' are listed below. You can get to each post by clicking the title in the list.

Pushing Docker Containers to Azure Container Registry with Visual Studio Code

Visual Studio Code offers some great extensions for use when you're developing your applications on Azure or using Azure resources. You can find the collection of Azure extensions in the Visual Studio Marketplace. I'm investigating ways I can use these extensions together to build apps. This post will walk through how you can use Visual Studio Code to build a set of Docker images. Then those images will be published to the Azure Container Registry, so I can share or deploy them later. I'll use the Docker extension for Visual Studio Code to see the images once they're published.

Starting with the SmartHotel360 Microservices

Given the recent work I've been doing with the SmartHotel360 code and demonstrations, I had the back-end microservices built on my machine, so I'll use those images for this demonstration. You can learn more about SmartHotel360 by watching some of the great videos we've recorded on the Visual Studio Toolbox show with Robert Green. Our demo leader Erika introduces SmartHotel360, then Maria and I talk about the Web & Serverless areas of SmartHotel360 with .NET Core 2 and Azure Functions, and Robert talks to James about the SmartHotel360 mobile apps. We have a few more shows coming, so stay tuned to Visual Studio Toolbox.

Build the Images

The screen shot below shows the SmartHotel360 docker-compose.yml file open in Visual Studio Code. Since I have the Docker extension installed, I can see the base images I've got from other builds on my machine. Note that none of the actual SmartHotel360 microservice images have been built yet, so none of them are visible.

No images built

The first step is to run docker-compose build to build all of the images. This will take a few minutes to build the numerous images that comprise all of the SmartHotel360 back-end microservices.

docker-compose build

Once the images are built, they'll show up in the Docker Visual Studio Code extension's tree view. I especially love that this is automatic, and I don't even have to manually refresh the list.

Images visible in VS Code

Composing the Databases

The docker-compose.yml file indicates we'll be creating a PostgreSQL database and a Microsoft SQL Server database. You can learn more about running SQL Server in containers from the Connect(); 2017 session, Use SQL Server 2017 in Docker containers for your CI/CD process.

I prefer to make sure my data tier services are up first, so I'll run docker-compose up sql-data reviews-data tasks-data in the Visual Studio Code terminal.
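For orientation, here's a hypothetical sketch of what the data-tier portion of a docker-compose.yml like this might look like. The service names match the ones I just started, but the images and settings are placeholders, not the actual SmartHotel360 definitions:

```yaml
version: '3'

services:
  # Microsoft SQL Server database (placeholder image and settings)
  sql-data:
    image: microsoft/mssql-server-linux
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "<strong-password-here>"

  # PostgreSQL databases for the reviews and tasks services (placeholders)
  reviews-data:
    image: postgres
  tasks-data:
    image: postgres
```

Running docker-compose up sql-data reviews-data tasks-data starts only the named services, which is why the data tier can come up before the rest of the app.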

Data services building

Once these images are built, I see the containers light up in the Visual Studio Code Docker extension's tree view.

Data services contained and running

Now, I'll run a docker-compose up to fire up the remaining images. Each of these provides a specific unit of functionality to the SmartHotel360 public web app or to the individual mobile apps. If you peruse the code, you'll see Java, Node.js, and C# code dispersed throughout the source. This demonstrates how a team of developers can use a variety of technologies and frameworks to isolate their own functionality, yet share in the construction of a community of dependent microservices.

Building the APIs

Now that the containers are built I can see all of them in the Docker tools for Visual Studio Code. The handy context tools allow for easy management of the images and containers, and the integrated command line gives me a place to enter those frequently-used Docker commands.

APIs and Data services are ready

Deploying to Azure Container Registry (ACR)

I want to make use of Azure Container Registry to store the SmartHotel360 private images, so I can share or deploy them later. I'll use the Docker extension for Visual Studio Code to see the images once they're published.

Below is a screen shot of the SmartHotel360 private container registry, provided as an ACR instance.

SmartHotel360 private registry

Note in particular that the Admin user property is disabled. This setting is well documented, as are the implications of using it (or not using it). The short story is, with it disabled I can't really access my registry outside of my cloud resources.

Admin account disabled

As long as the Admin user is disabled, I won't be able to see my ACR instances and the repositories they contain.

No registries found

So when I first get started, I'll want to enable Admin user (you can always turn this off later, and passwords can be regenerated whenever you need).

Then I'll use the username and password to publish images from the command line. As I do this, I'll see my container images lighting up in the Azure Registry pane in Visual Studio Code's Docker extension.

Enabling the Admin user

Now my Azure Container Registry (ACR) instances light up, and I can begin using the Docker command line interface from within Visual Studio Code to push images into Azure.

Registry visible

Now I'll enter a few commands into the terminal that will log me into my Azure Container Registry instance and allow me to publish a Docker image into ACR. There, it'll be ready for deployment to App Service, Azure Container Service, or Azure Kubernetes Service.

Pushing images to ACR

Once the image has been published, you can see it as a repository in the ACR tree view node in the Docker panel in Visual Studio Code.

See the repository in VS Code

Other Ideas for using Visual Studio Code & Azure?

This post shows just one simple thing you can do with Visual Studio Code and Azure together. If you have other ideas you'd like to see, drop me a comment below, or send me a message on twitter to let me know what sorts of things you'd like to see.

Easily configuring Azure Functions with Visual Studio Code

Visual Studio Code offers some great extensions for use when you're developing your applications on Azure or using Azure resources. You can find the collection of Azure extensions in the Visual Studio Marketplace. I'm investigating ways I can use these extensions together to build apps. This post demonstrates how, when you're developing an Azure Function using Visual Studio Code's Azure Functions extension, you can also use the Azure Storage extension to make configuration a snap.

The Azure Functions extension makes it really easy to create a new Functions project (red arrow) in your workspace's active directory. It also provides a button in the Functions Explorer pane that makes it easy to create new Function files in my project (green arrow).

Function toolbar buttons

In my case, I want to create a Function that wakes up when new blobs are dropped into a blob container. You can learn more about blob-triggered functions in the Azure Functions documentation.

Create a blob trigger function

During the handy Function-creation process, I'm asked if I'd like to configure my Function project with the storage account's connection string. I don't have this yet, as I'm still experimenting, so I click the close button.

Skip this

The new Function code opens up in Visual Studio Code, and I'm ready to start coding. At this point I'm curious what'll happen, so I go ahead and hit F5 to start debugging.

Blob Triggered Function code

Debugging immediately throws an error to let me know I should've given it a connection string.

Humorous narrator: If you're like me, you don't always let the tools do their jobs for you, accept the defaults, and pray things will be okay. As a result, I tend to do a lot of configuring (which I later replace with devops fanciness). This "I'll do it later myself" attitude results in my hastily copying connection strings and accidentally pasting them into emails later. Keep reading and I'll save you hours of repeatedly copying and improperly pasting your connection string.

The good news is that the Azure Storage tools make it easy to fix the Missing value for AzureWebJobsStorage in local.settings.json error message shown below.

Missing value for AzureWebJobsStorage in local.settings.json error

Whether I forgot to create a Storage account, or didn't have one and wanted to create it later, or I'm migrating from my localhost Storage emulated environment to a live one in Azure, the tools make it easy.

By simply expanding the subscription so I can see all of my Storage Accounts, I can right-click the account I'm after and select the Copy Connection String context menu.

Copy the connection string

The results can be pasted into my local.settings.json file in a jiffy.
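For context, local.settings.json is plain JSON, so the pasted value simply becomes the AzureWebJobsStorage entry. A sketch of the shape (the account name and key here are placeholders, not a real connection string):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
  }
}
```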

Paste the connection string

And with that, I've configured my Azure Function, and it runs just fine in the debugger.

Running fine now

Other Ideas for using Visual Studio Code & Azure?

This post shows just one simple thing you can do with Visual Studio Code and Azure together. If you have other ideas you'd like to see, drop me a comment below, or send me a message on twitter to let me know what sorts of things you'd like to see.

Important Updates regarding Azure Tools for Visual Studio Code V2

Some time has passed since the last update of the Azure Tools for Visual Studio Code, and in that time there have been some great advancements in the extensions available to Azure developers who use Visual Studio Code. Some amazing new extensions have appeared in the Azure category of the Visual Studio Marketplace that make it easy and fun to party with Azure.

V2 will remove some features

The short version is that the official extensions - Azure App Service and Azure Storage - provide much better experiences for their service areas than does our (myself, Christos Matskas, and Ian Philpot) extension. So we're removing the redundant functionality from the Azure Tools for Visual Studio Code in lieu of the official extensions' features. There will be a notice of this change when the extension is updated.

The official extensions are so much easier to use, offer richer experiences and more discoverable features, and the authentication experience is great. Plus, the official extensions share certain components, yet revise independently. Smaller pieces, easier autonomy for the individual feature owners. Yay extensibility!

Future Plans

The great folks in the Azure SDK and Visual Studio Code teams have worked together to create not only a great set of extensions for Azure developers, but they've also given extension authors a common dependency to get started, the Azure Account extension. There's guidance for developers who want to write Azure extensions for Visual Studio Code, too.

With so many improvements in the community of extensions being developed around this great SDK, and new directions in how authors contribute Azure features, our team has some opportunities ahead for distributing the individual resource management areas into separate, smaller extensions. Obviously, there's a huge benefit to retrofitting our extension with the new common underpinnings, too. We'll keep everyone updated via updates to the extension, and link back to this blog as things evolve.

Question - to depend or not?

There are various schools of thought on adding arbitrary dependencies for feature redirection. My personal feeling is that I'd be receptive to an extension author taking a dependency on a later extension that improves or replaces the original feature. But some find this intrusive.

What's your opinion? Leave a comment. Given the potential intrusiveness of taking a dependency on the official extensions, I'll hold off for now; but since it'd be my own personal preference, I'd implement this next if the community prefers it.

Querying Azure Cosmos DB using serverless Node.js

A few days ago I blogged about using Functions, .NET, and Cosmos DB's Graph API together, and as I pointed out in an update to that post, the experience of working with Cosmos DB's Graph API was exciting and offered some interesting opportunities and mental exercises. The first [somewhat constructive] criticism I received on the initial post was from my friend and colleague Burke Holland, who asked "where's the Node.js love?," to which I replied "I wanted to use the VS Functions tools and I'm more comfortable with .NET and..." before Burke cut me off mid-message with clear judgment and purpose:

"Nobody puts Node in the corner."

I can't ignore a strong argument that conjures that gem of filmography and choreography, so the second post in the series will give Node.js a break from the battlefield, escorting it right back onto the dancefloor. I'll show you how to build a quick-and-dirty serverless Azure Function that uses the open-source gremlin-secure NPM package to query the same Cosmos DB Graph database from the original post and article Azure Cosmos DB's Graph API .NET Client. The Node.js post will give us an HTTP API we can call when we want to get a list of all the people who know one specific person.
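Before diving in, here's the Gremlin traversal in question, isolated into a tiny runnable sketch. The buildKnowsQuery helper and the "thomas" id are illustrative, not part of the actual Function:

```javascript
// Build the Gremlin traversal that finds everyone who "knows" a given person.
// The helper name and sample id are hypothetical; the traversal string matches
// the query the Function will issue against the Cosmos DB Graph.
function buildKnowsQuery(personId) {
    return "g.V().outE('knows').where(inV().has('id','" + personId + "'))";
}

console.log(buildKnowsQuery("thomas"));
// → g.V().outE('knows').where(inV().has('id','thomas'))
```

Reading it right to left: find vertices whose id matches, then walk backwards along incoming 'knows' edges to their sources.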

Part 2 of the Serverless & Schemaless Series

Functions and Cosmos DB's nature together create a Serverless & Schemaless Scenario, and the opportunities this scenario provides for agile developers dealing with evolving structures and processes seem vast. This post is one in what I call the Serverless & Schemaless Series:

  1. Querying Azure Cosmos DB's Graph API using an Azure Function - Walks through creating a .NET Function that queries Cosmos DB's Graph API using the Cosmos DB Graph API .NET SDK.
  2. Querying Azure Cosmos DB using serverless Node.js (this post)
  3. TBD
  4. TBD

Creating an Azure Function with Node.js

The Azure Functions JavaScript developer guide has a wealth of information on some of the internal Function SDKs, as well as how to do things like reading environment variables into your code that you configure in the Azure portal, and more. This is definitely recommended reading for more context on some of the lower-level details we won't cover in this post.

Let's dive in and create a new Azure Function, using JavaScript as the language du jour and the HttpTrigger template as a getting-started point.

Create a new trigger

Once the new Function has been created and deployed into your Azure subscription and resource group, the Functions editing experience will open in the portal. Note a few key areas of the window:

  1. The code editor, where we can make updates to our serverless code.
  2. The Test area on the right side of the code editor. This dialog is useful for executing test commands against an Azure Function. The HttpTrigger template expects a name parameter, so the Request body widget provides a quick way of editing the request content.
  3. The View Files tab. It provides quick access to the *.js, *.json, and other source code files that comprise a Function.

The new Function in the editor

The code below is also available in my fork of the original code used in the article that got me started. Copy this code and paste it into the code editor in the Azure portal. Portions of the source code below were borrowed (okay, stolen and bastardized) from the Cosmos DB team's great code sample on Getting Started with Node.js and the Graph API.

var Gremlin = require("gremlin-secure");

// read the configuration from the environment settings
const config = {
    "database" : process.env["database"],
    "collection" : process.env["collection"],
    "AuthKey" : process.env["AuthKey"],
    "nodeEndpoint" : process.env["nodeEndpoint"]
};

// handle the http request
module.exports = function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    if ( || (req.body && {

        // create a new client
        const client = Gremlin.createClient(443, config.nodeEndpoint,
            {
                "user": "/dbs/" + config.database + "/colls/" + config.collection,
                "password": config.AuthKey
            });

        // execute the query
        client.execute("g.V().outE('knows').where(inV().has('id','" + ( || + "'))", { }, (err, results) => {
            if (err) return console.error(err);

            // return the results
            context.res = {
                // status: 200, /* Defaults to 200 */
                body: results
            };
            context.done();
        });
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
        context.done();
    }
};

Once you paste in the code the editor should look more like this.


Note in particular these lines from the source code for the Node.js Function. These lines of code read from the environment variables for the Function, which can be set using the Azure portal.

// read the configuration from the environment settings
const config = {
    "database" : process.env["database"],
    "collection" : process.env["collection"],
    "AuthKey" : process.env["AuthKey"],
    "nodeEndpoint" : process.env["nodeEndpoint"]
};

This way, you can avoid putting the settings' values into your source code and your source control repository. To set up these variables, go to the Application Settings blade of your Function (or any other App Service). You can get to the Application Settings blade from the Platform features blade in your Function app.

Getting to Application Settings

Each of the Application Settings are available to the Node.js code as environment variables, accessible using the process.env[name] coding pattern. Note the AuthKey setting - it is the same key that's being used by the .NET code from the first post in this series. Since the .NET Function and the Node.js Function would be running in the same App Service instance, both can share these Application settings.

That said, the Gremlin NPM package for Node.js expects a slightly different URL structure than the .NET client, so the nodeEndpoint Application Setting will function to provide the Node.js Function the URL it'll reach out to when communicating with the Graph API.
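Here's the process.env pattern in miniature. The setting value is stubbed in-process so the sketch runs anywhere; in a real Function the value would come from the portal's Application Settings:

```javascript
// Stand-in for an App Setting configured in the Azure portal (hypothetical value).
process.env["nodeEndpoint"] = "";

// The same process.env[name] pattern the Function uses to read its configuration.
const nodeEndpoint = process.env["nodeEndpoint"];
console.log(nodeEndpoint);
```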

Configuring environment variables in the portal

Installing Dependencies via NPM

Since the Node.js code for the Azure Function will make use of the gremlin-secure NPM package to issue queries over to the Cosmos DB Graph, I'll need to install that and any other dependent NPM packages into my Function before it'll run properly.

If I were using a Git deploy model here, the Kudu deployment engine is smart enough to install NPM packages for me. But I'm not using Git deploy for this particular App Service; I'm building it using the web-based Azure portal code editor and testing tools.

Luckily, Kudu and App Service make it just as easy to install NPM packages using the portal as it is using Git deploy. As with any proper Node.js app, I'll need to add a package.json file to my app so that I can inform NPM which dependent packages the code will need. So the next step is to create a new file in my Function's file store named package.json.


Next, add the following code to the package.json file you just added to the Function. The code is also in the repository containing this blog's sample code.

  "name": "node-gremlin",
  "version": "0.0.0",
  "description": "",
  "homepage": "",
  "author": {
    "name": "username",
    "email": "",
    "url": ""
  "main": "lib/index.js",
  "keywords": [
    "cosmos db",
  "license": "MIT",
  "dependencies": {
    "gremlin-secure": "^1.0.2"

The screen shot below shows what the package.json looks like once I added it to my own function. Note, I've customized my author metadata and added keywords to the file's content; you may want to customize these to reflect your own... um... meta?


On the Platform Features blade there's an icon in the Development Tools area called Console. Clicking this icon will bring up an interactive console window, right in the browser. By executing NPM commands using this console it is possible to install all the dependencies your app needs manually.


The screen shot below shows how I first need to cd into the correct folder to find the package.json I just created. Once I find that folder and move into it, executing an npm install will bring down all the NPM dependencies needed by the Node.js code.


When I flip back to the View files blade in the portal, I can see that the node_modules folder has been created. The dependencies have been installed, so now I can safely execute the Function and test it out using the portal's development tools.


Now that all the dependencies have been installed I can test the function directly in the Azure portal.

Testing the Function

The Test segment of the Azure portal is great for making sure Functions operate as expected. The test tool helps with debugging and validating that things are working properly.

Using the Request body area of the portal, I can customize the JSON content that will be POSTed over HTTP to the Function endpoint.


Note the JSON below in the bottom-right corner of the Azure portal. This JSON was returned from the Function based on the request body passed into the server-side function.


This is the second post in a series about Azure Functions and Cosmos DB, using the Graph API to access the Cosmos DB data. We've covered .NET and Node.js Functions that wrap around the Cosmos DB database, providing a variety of HTTP operations that clients can call to consume the data in the Cosmos DB Graph.

The next post will probably highlight how a single client app could be built to reach out to these back-end Functions, and we'll take a look at how these items could be automated in a later post. If you have any inquiries, requests, or ideas, feel free to use the comment tools below to throw something out for me to include in the series, and I'll try to add some content and additional posts.

This is an exciting time for doing serverless microservices to make use of a variety of back-end storage containers in Azure, and I'm enjoying learning a few new tricks. Thanks for reading, and happy coding!

Querying Azure Cosmos DB's Graph API using an Azure Function

Azure Cosmos DB is an exciting new way to store data. It offers a few approaches to data storage, one of which has been mysterious to me - Graph APIs. Cosmos DB has support for dynamic graph api structures and allows developers to use the rich Gremlin query syntax to access and update data. When I read the great walk-through article on describing how to use Azure Cosmos DB's Graph API .NET Client I was really excited how dynamic my data could be in the Cosmos DB Graph API. I'd been looking for some demo subject matter to try out the new Azure Functions tools for Visual Studio, so I decided to put these three new ideas together to build an Azure Function that allows a serverless approach to searching a Cosmos DB via the Graph API.

Update: I've made a few updates below after doing some due diligence on a dependency-related exception I observed during local debugging. If the SDKs evolve, I'll update the post and make additional updates in this area.

Introducing the Serverless & Schemaless Series

Authoring this post and learning about Graph API was really exciting, and like all good technologists I found myself becoming a little obsessed with the Graph and the opportunities the world of Functions and Cosmos DB has to offer. Functions and Cosmos DB's nature together create a Serverless & Schemaless Scenario, and the opportunities this scenario provides for agile developers dealing with evolving structures and processes seem vast. This post is one in what I call the Serverless & Schemaless Series:

  1. Querying Azure Cosmos DB's Graph API using an Azure Function (this post)
  2. Querying Azure Cosmos DB using serverless Node.js - Walks through creating a Node.js Function that queries Cosmos DB's Graph API using an open-source Gremlin package
  3. TBD
  4. TBD

Building a Function using Visual Studio

The article I mentioned earlier, Azure Cosmos DB's Graph API .NET Client, has a companion GitHub repository containing some great getting-started code. The console application project basically sets up a Cosmos DB database with some sample person data. I forked the companion repository into my own fork here, which contains the basic Function code I'll describe below. So if you just want the code and aren't interested in my verbose discussion, you can find it here.

First step should be obvious - we need to add a Function to the solution.


Once the Function project is created there are a few NuGet related updates and installs we'll need to make. To make sure we're using the latest and greatest Functions SDK, it'd be a good idea to update the Microsoft.NET.Sdk.Functions package.


The Cosmos DB Graph API team was nice enough to give us a .NET Client SDK, so we should use it. Use the handy NuGet tools to install the Microsoft.Azure.Graphs package.


During debugging, I noticed a few runtime errors indicating that the Mono.CSharp assemblies couldn't be found. I presume this has something to do with the emulated environment, but don't quote me on that. I followed up with Donna Malayeri, one of the awesome Program Managers on the Functions team, to get some details here, thinking the reference might indicate a func.exe issue or potential emulator variance. She confirmed there's no dependency on Mono.CSharp in the Functions emulator.

So then I checked in with Andrew Liu, one of the awesome Program Managers in the Cosmos DB team. He confirmed that one of the dependencies in the Cosmos DB SDK is Mono.CSharp. My debugging experience did error with this dependency mentioned during a Graph API call, come to think of it.

I mention all these great folks not to name-drop, but so you know how to find them if you have questions, too. They're super receptive to feedback and love making their products better, so hit them up if you have ideas.

Either way - to debug this guy locally you'll need to install the Mono.CSharp package.


Once the dependencies are all in place (see the NuGet node in the Solution Explorer below for what-it-should-look-like), we'll need to write some code. To do this, add a new Function item to the project. I've named mine Search.cs, since the point of this Function will be to provide database searching.


The Function will respond to HTTP requests, so the Http Trigger template is appropriate here. We want this Function to be "wide open," too, so we'll set the Access rights menu to be Anonymous, which lets everyone through.


Once Search.cs is added to the Function project, add these using statements to the top of the file.

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;
using Microsoft.Azure.Graphs;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

Once those have been added, replace the Function's class code with the code below. The code will simply search the Cosmos DB database using the Graph API for either all the people, or for the specific person identified via the name querystring parameter.

public static class Search
{
    static string endpoint = ConfigurationManager.AppSettings["Endpoint"];
    static string authKey = ConfigurationManager.AppSettings["AuthKey"];

    [FunctionName("Search")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req,
        TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        // the person objects will be free-form in structure
        List<dynamic> results = new List<dynamic>();

        // open the client's connection
        using (DocumentClient client = new DocumentClient(
            new Uri(endpoint),
            authKey,
            new ConnectionPolicy
            {
                ConnectionMode = ConnectionMode.Direct,
                ConnectionProtocol = Protocol.Tcp
            }))
        {
            // get a reference to the database the console app created
            Database database = await client.CreateDatabaseIfNotExistsAsync(
                new Database
                {
                    Id = "graphdb"
                });

            // get an instance of the database's graph
            DocumentCollection graph = await client.CreateDocumentCollectionIfNotExistsAsync(
                UriFactory.CreateDatabaseUri("graphdb"),
                new DocumentCollection { Id = "graphcollz" },
                new RequestOptions { OfferThroughput = 1000 });

            // build a gremlin query based on the existence of a name parameter
            string name = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
                .Value;

            IDocumentQuery<dynamic> query = (!String.IsNullOrEmpty(name))
                ? client.CreateGremlinQuery<dynamic>(graph, string.Format("g.V('{0}')", name))
                : client.CreateGremlinQuery<dynamic>(graph, "g.V()");

            // iterate over all the results and add them to the list
            while (query.HasMoreResults)
            {
                foreach (dynamic result in await query.ExecuteNextAsync())
                {
                    results.Add(result);
                }
            }
        }

        // return the list with an OK response
        return req.CreateResponse<List<dynamic>>(HttpStatusCode.OK, results);
    }
}

The code is basically the same connection logic as in the original console application which seeded the database, with a simple query to retrieve the matching records.

Debugging the Function Locally

Now that the code is complete, the Functions local debugging tools and emulator can be used to run the code locally so we can test it out.

Before the code will run properly, it must be configured for local execution with the Cosmos DB connection information. The local.settings.json file can be used to configure the Function for local execution much in the same way the App.config file is used to configure the original console application for execution.
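As a sketch, my local.settings.json ends up shaped something like this. The Endpoint and AuthKey values are placeholders for the real Cosmos DB connection information, not actual keys:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "Endpoint": "<your-cosmos-db-endpoint-uri>",
    "AuthKey": "<your-cosmos-db-auth-key>"
  }
}
```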


Once the Function app has been configured locally so it knows how to find the Cosmos DB database, hitting F5 will launch the local debugging tool (which probably has the funkiest name of all time), func.exe, with the Function code hosted and ready for use.

At the end of the initial output from func.exe, I'll see that my Function is being hosted at localhost:7071. This will be helpful to test it in a client.


To test my Function, I'll use Visual Studio Code with Huachao Mao's excellent extension, REST Client. REST Client offers local or remote HTTP request capability in a single right-click. I'll add the URL of my person search function and execute the HTTP request.
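For reference, a REST Client request for this scenario is just a one-line .http file. The /api/Search route here is an assumption based on my Function's name:

```http
GET http://localhost:7071/api/Search
```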


I'm immediately presented with the raw HTTP response from the locally-running Function! Headers, JSON body content, everything.


By adding the name querystring parameter with a value I know to be in the database, I can filter the results the Function returns.


Once the Function is validated and seems to be working properly, the last step is publishing it to Azure App Service and configuring it to run in the cloud.

Publishing the Function to Azure

Luckily, publishing is nearly a single mouse-click. By right-clicking the project, I can see the familiar Publish context menu item.


I'm ready to publish this to the Cloud so it can be tested running in a publicly available scenario. So I'll select the first option, Azure Function App and enable the Create New radio button here so I can create a brand new Function in my Azure subscription.


The publish panel opens next, allowing me to name my Function. I'll opt for creating a new Consumption-based App Service Plan since I intend on using the pay-per-use billing method for my serverless Function. In addition, I'll create a new Storage Account to use with my Function in case I ever need support for Blobs, Tables, or Queues to trigger execution of other functionality I've not yet dreamed up.


Clicking the Create button in the dialog will result in all the resources being created in my Azure subscription. Then, Visual Studio will download a publish profile file (which is a simple XML file) that it'll use for knowing how to publish the Function code the next time I want to do so.

Once the Function is published, I can flip to the Azure Portal blade for my Function. There, I'll see a link to the Function's Application settings. I'll need to go here, as this is where I'll configure the live Function for connectivity to the Cosmos DB database with my Person data.


Just as I did earlier in the console application's App.config file and in the Function app's local.settings.json file, I'll need to configure the published Function with the Endpoint and AuthKey values appropriate for my Cosmos DB database. This way, I never have to check in configuration code that contains my keys - I can configure them in the portal and be sure they're not stored in source control.
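For reference, the shape of the local configuration looks something like the sketch below (placeholder values only — the real Endpoint and AuthKey come from your Cosmos DB account's Keys blade). In the portal, the same two name/value pairs go into the Function's Application settings:

```json
{
  "IsEncrypted": false,
  "Values": {
    "Endpoint": "https://<your-cosmos-account>.documents.azure.com:443/",
    "AuthKey": "<your-cosmos-auth-key>"
  }
}
```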


Once the Function is configured properly in my Azure subscription, I can again use the Visual Studio Code REST Client extension to query my publicly-available Function URL.



This post summarized how to write a basic Azure Function to search a super-small Cosmos DB database using the Graph API, but there's so much more opportunity here. I enjoyed this quick-and-dirty introduction to using Cosmos DB and Functions together - it really drives home the flexibility of using a serverless back-end together with a schemaless data storage mechanism. These two tools are powerful when used together, to enable really rapid, fluid evolution of an API that can evolve around the underlying data structure.

I'll definitely be investigating Cosmos DB and Azure Functions together for some upcoming side project ideas on my backlog, and encourage you to take a look at it. The sample code in my fork of the repository - though kind of brute-force still - demonstrates how easy it is to get up and running with a serverless front-end atop a Graph database with worldwide distribution capability.

Azure Tools for Visual Studio Code 1.2.0 - ARM Export, Batch, Telemetry

Today you can download the new version (v1.2.0) of the Azure Tools for Visual Studio Code (ATVSC). A team of contributors have collaborated, responded to issues reported on our public GitHub repository, and spent some time cleaning up some of the less-than-desirable early code in the extension to make contributions easier and more isolated. This post will describe the new features in v1.2.0. In addition to doing some refactoring to ease extension contributions, we've added a ton of new features in this version.

Export Template

In 1.2.0 we've added support for exporting existing resource groups to ARM templates saved in your workspace. First, you invoke the Export command using the palette.

Export command

Then you select an existing resource group.

Select a resource group

A few seconds later, the resource group's contents are downloaded as an ARM template and stored into your current workspace's arm-templates folder.

Export command

Note: As of the 1.2.0 release time frame there are a few kinks in the particular Azure API call we're using; certain details of your resources might not be persisted exactly right. You can use the great features contained in the Azure Resource Manager Tools extension (which is bundled with this extension) to make tweaks. The API owners are working on making great improvements to this functionality so it'll improve in future releases of the back-end API.

Azure Batch account creation

Christos Matskas, a colleague and fellow Azure speaker rich in the arts of FOSS contributions, submitted some great features like Key Vault in 1.1.0, and he continued to contribute in 1.2.0, adding support for Azure Batch.

From within Visual Studio Code you can use the Create Azure Batch command from the palette, shown below, to create new Azure Batch accounts. Future releases may add support for scripting against your Batch account, creating Jobs, and so forth. Feel free to send the team requests for additional Batch features via our GitHub Issues page.

Create Key Vault

Telemetry Collection

This release introduces the collection of basic usage telemetry. We're using Application Insights to collect and understand how customers are using the extension. To disable the collection of telemetry data, simply set the azure.enableTelemetry configuration setting to false as shown below.

How to disable usage telemetry
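For reference, the setting is a single entry in your VS Code settings.json:

```json
{
  "azure.enableTelemetry": false
}
```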

Note: No performance degradation has occurred as a result of this addition, and no private customer information is being persisted. Our telemetry code tracks the name of the call being made (like CreateAppService) and the GUID-based subscription id being affected. We capture the subscription ID so we can understand the frequency of customer usage; the ID can't be used to identify individual customers. No customer-identifying data, passwords, connection strings, or resource names are being persisted to our telemetry system.

Existing Features

You can learn more about the ATVSC in the initial announcement post or on the ATVSC marketplace page. Here's a bullet list of the other features the tools provide to make it easy for developers to party in the cloud.


Thanks for taking the time to learn about all the new features we've put into the ATVSC. If you're a Visual Studio Code user, take a moment to try the tools out. Send us a GitHub issue if you have any ideas. Your votes or comments are also welcome on the marketplace site, if you're finding the tools useful.

Happy coding!

The Deploy to Azure Button

Update: Two awesome developers from the Azure Websites team, David Ebbo  and Elliott Hamai, took the idea of Deploy to Azure, looked at the code, laughed and pointed at me, and used all kinds of new APIs in Azure to bring the idea to life in a manner that will provide far more value than I'd imagined. Keep reading to learn about the prototype I built, but make sure to learn about the real thing, and try it out yourself. 

Over the weekend I received an email from my friend and colleague Devin Rader. In the email he directed me to a neat feature Heroku offers, called the Heroku Button, and asked me if there was anything in the works like this for Azure. I wasn't aware of anything that was in the works at the time, and given that I'm one of the program managers on the Azure SDK team who builds the Azure Management Libraries, I figured this was a challenge for which I'd be well-suited. So I replied to Devin to let him know I was inspired by the idea and wanted to build it myself. This blog post will introduce you to the idea of the Deploy to Azure Button. My good buddy Vittorio Bertocci has a complementary post on how the identity flow works in Deploy to Azure, as he and I consulted on the idea, so make sure to check that out too.

What is the Deploy to Azure Button?

The idea of the Deploy to Azure button is to make it dirt simple for owners of pre-packaged web applications to give their potential customers one-click deployment to Azure. As a web application's author, I could place a button in my GitHub repository's readme file to give users a visual cue that one-click deployment is possible. Here's a screen shot of the Deploy to Azure button being used in a demonstration repository I created for an early demo of Azure Websites, the "Hello All Worlds" demo.


See that gorgeous blue button? If I click that button, the GitHub repository page URL will be passed as the HTTP referrer and the associated Git repository URI can pretty easily be guessed. Here's the Deploy to Azure site with this GitHub repository as its referring URL. Note the Git repository URL is pre-populated into the form.
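Guessing the Git URI from the referring page is simple for GitHub-hosted projects — appending .git to the repository path generally yields a clonable URL. A rough Python sketch of the idea (illustrative only, not the actual server-side code):

```python
from urllib.parse import urlparse

def guess_git_url(referrer: str) -> str:
    """Guess a clonable Git URL from a GitHub repository page URL."""
    parsed = urlparse(referrer)
    # Keep only the first two path segments: /owner/repository
    owner_repo = "/".join(parsed.path.strip("/").split("/")[:2])
    return f"https://{parsed.netloc}/{owner_repo}.git"

url = guess_git_url("https://github.com/someuser/some-repo")
# → "https://github.com/someuser/some-repo.git"
```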


Once I provide a name for my site and select a region, the site name is verified as one that's available. If it isn't I'm informed as such.


Once I find a site name I like, and that is available, clicking the "Deploy to Azure" button will submit the form. The form's data is collected and posted back to a Web API controller, which in turn bubbles up status information about the process of cloning the code, creating the site, and then deploying the site's code via SignalR. As the site is deployed, I'm provided real-time status.


Once the site's been deployed, a button is added to the form that I can use to pop open the new site in a new tab or browser window.


I've also added a Deploy to Azure button to my fork of my good buddy Mads' MiniBlog source code, which I've frequently used as a test case for the idea of enabling SaaS with the Management Libraries.


Video is so much more effective at conveying the idea of the Deploy to Azure button, so I've created 3-minute video walking through it on YouTube and embedded it below.

Want to use the Deploy to Azure Button?

Feel free! The app is live and accepting requests today. I'd encourage you to grab the button below or use the direct link to the button in your own GitHub repositories. Just put the button below on your repository and link over to the app, and it will do the rest.


Below I describe how Deploy to Azure works, as well as put forth a few caveats of the app in its current state, so keep reading to understand some of the limitations of Deploy to Azure, as well as some of the plans we have for its future.

How it Works

There are a lot of ideas we could build out of the Deploy to Azure idea, but the code isn't too elegant just yet. The idea was to prove how easy it'd be to enable a one-click deployment story directly from a public repository. Now that we're there, we're seeing a lot of other ideas popping up.

For the time being I'm doing some really simple Git tricks on the server side and then some even simpler tricks on the deployment side. I'll go into the identity stuff later, but the Deploy to Azure code base started with Vittorio's sample on using the new OWIN middleware with OpenID Connect and multi-tenancy.

The workflow of Deploy to Azure is pretty simple. I'll walk through it at a very high level in this post, then dive a little deeper into the code of the site to explain how it works. The code for the site is open-source, too, so feel free to check out the GitHub repository where the code for Deploy to Azure is stored if you'd like to see more. Feel free to submit a pull request, too, if you feel you can make it better.

  1. A user is authenticated to their Azure account during the AAD handshake, driven by the OpenId Connect OWIN middleware
  2. The OWIN middleware hands over an authentication code to ADAL, which uses it to obtain a new AAD token for accessing the Azure Management API
  3. Once a token is obtained, MAML clients can be used to communicate with the Azure management APIs
  4. The list of regions available to a user's subscription are retrieved and displayed in the form's menu
  5. When a user submits the form the data is sent to a Web API controller
  6. The Web API controller clones the Git repository down to a new folder on the server side
  7. The Web API controller creates an instance of the Web Site Management Client and a site is created
  8. The site's publish profiles are pulled
  9. The site is deployed via Web Deploy back up to Azure Websites

The diagram below demonstrates this process visually.
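The server-side portion of the steps above (6 through 9) can be sketched as a simple pipeline. Everything below is a hypothetical stand-in for illustration, not the actual Deploy to Azure code:

```python
def deploy_to_azure(git_url, site_name, region, report):
    """Clone, create, fetch publish profile, deploy — with status bubbled
    up to the caller (the real app pushes these messages over SignalR)."""
    report("Cloning repository...")
    local_path = clone_repository(git_url)      # step 6
    report(f"Creating site {site_name} in {region}...")
    create_web_site(site_name, region)          # step 7
    profile = get_publish_profile(site_name)    # step 8
    report("Deploying via Web Deploy...")
    web_deploy(local_path, profile)             # step 9
    report("Done!")

# Minimal stand-ins so the sketch runs end to end:
def clone_repository(url): return "/tmp/clone"
def create_web_site(name, region): pass
def get_publish_profile(name): return {"site": name}
def web_deploy(path, profile): pass

messages = []
deploy_to_azure("https://github.com/someuser/some-repo.git",
                "mysite", "West US", messages.append)
```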


This code isn't perfect, though, and Deploy to Azure should be considered a beta release. We have some ideas on how to make it better. New APIs are being released frequently, and during my discussions of this prototype with David Ebbo I learned of some upcoming APIs that will replace some of this functionality and add some features to the Deploy to Azure application. For now, consider Deploy to Azure a prototype of something awesome that we might push to the next level in the upcoming weeks.

Deploy to Azure is in Beta

As I mention above, Deploy to Azure has a few caveats. I'll cut to the chase real quick and break down some of the limitations of Deploy to Azure. I know you're excited and you want to start using the button, but first I feel I must clarify a few of the limitations it has in this first iteration.


Deploy to Azure uses a multi-tenant Active Directory application. This way, users can allow the application access to their own Azure subscriptions and allow it to spin up new sites on their behalf. Since Deploy to Azure uses the multi-tenancy functionality of Azure Active Directory and isn't an official Microsoft application, the login functionality is limited to Azure Active Directory users. This means you can't log in using your personal Microsoft accounts. Instead, you have to create an Active Directory user who is a Global Administrator of your Active Directory domain. Below I go into a little more detail on the identity aspect of Deploy to Azure and link off to a complementary post Vittorio Bertocci wrote up to describe how that portion of Deploy to Azure works.

No Solution Builds

Since the code for Deploy to Azure just clones the repository for a site and then publishes it, everything you want to deploy must be in your repository. Whereas Kudu will facilitate the solution build prior to publishing, which pulls in your NuGet packages, Deploy to Azure simply clones and publishes. This is one area where Mr. Ebbo and I are joining forces and working together to augment Deploy to Azure with more Kudu-like functionality in a later release.

Customizing Deployments

That said, it is highly probable that a Git repository might contain a site and other stuff unrelated to the site. In the case of MiniBlog, for instance, the actual web site that is MiniBlog is contained in a sub folder called "Website." Given this, if I simply re-publish the entire repository up to Azure, the site obviously won't work. For this reason, I've given users of the Deploy to Azure button a JSON file that the server-side code checks during deployments. In the screen shot below from my MiniBlog fork, you'll see two things highlighted. One is the Website folder, which contains the MiniBlog site source code.


See the arrow in the screen shot above? That arrow points to the file named deploytoazure.json. This file has a specific property in it that the Deploy to Azure code checks at run-time. The screen shot below shows this file in GitHub.


Once the Git repository has been cloned, I check for the presence of the deploytoazure.json file in the root of the cloned source. If the file exists, I open it up and check the value of the subdirectoryWithWebsite property. Then, I use the value of that property to determine which folder I'll publish up to the site. This gives developers a little more control over how the deployment works.
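A sketch of that check in Python (the real server-side code is C#, and this is illustrative only — subdirectoryWithWebsite is the property name the app actually looks for):

```python
import json
import os

def resolve_publish_folder(clone_root: str) -> str:
    """Publish the subfolder named in deploytoazure.json when the file
    exists at the root of the clone; otherwise publish the whole clone."""
    settings_path = os.path.join(clone_root, "deploytoazure.json")
    if os.path.exists(settings_path):
        with open(settings_path) as f:
            settings = json.load(f)
        subdir = settings.get("subdirectoryWithWebsite")
        if subdir:
            return os.path.join(clone_root, subdir)
    return clone_root
```

For MiniBlog, the file would contain {"subdirectoryWithWebsite": "Website"}, so the "Website" subfolder is what gets published.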

I'd imagine a later iteration of Deploy to Azure including other settings and flags in this file, but for now, the path to the web site code was really all I needed.

The Identity Component

One part of my own product I'd not really mastered was working through some complex customer scenarios where the Azure Management Libraries would be used. Repeatedly customers asked me for server-side examples using ADAL and MAML together. The Deploy to Azure button was a perfect scenario for me to learn more about the code our customers would need to author to take advantage of these two together. I knew multi-tenancy would be crucial to Deploy to Azure - I'll host it in one subscription, but users of other (or multiple) subscriptions will want to deploy web applications into their own subscriptions, not into mine. So Deploy to Azure would have to allow for multi-tenant authorization, and I'd need to be able to get the user's AAD token in my code, since the Management Libraries' TokenCloudCredential class needs a bearer token at construction.

I spent the weekend learning some more about Azure Active Directory. By learning more about AAD, I really meant to say "emailing my colleague Vittorio Bertocci." Vittorio and I are working on a lot of things together now - the Azure Management Libraries, Java, Visual Studio, and basically everywhere else where the notion of identity is important in the conversation. Vittorio was interested in offering my little project some support. My first question - how to get the token on the server side once a user was authenticated via AAD - was answered via Vittorio's excellent sample using the new OWIN middleware with OpenID Connect and multi-tenancy. The code in this repository was the starting point, in fact, for Deploy to Azure. I just added the functionality once I knew all the identity bits were wired up properly and I could grab the token.


As Deploy to Azure evolved into a reality and this blog post came together, Vittorio offered to write a complementary post explaining the details of the AAD-related functionality in Deploy to Azure. His post explains the entire end-to-end of the identity flow in the Deploy to Azure button process really well. I encourage you to continue reading over at Vittorio's post on the topic.

Next Steps

As I pointed out above, Deploy to Azure began as an idea and evolved pretty quickly. It has been a lot of fun to build, and in so doing I've successfully created an example of how you could use the ADAL library along with the Azure Management Libraries on the server side. We're discussing more features and ideas to add to Deploy to Azure. I'll post another, more technical post that walks through the code in more detail, but this post's purpose is to introduce you to the idea of the button and to invite you to try it out. Feel free to fork the code, too, and submit a pull request or issues that you run into as you're using it.

Announcing the General Availability of the Microsoft Azure Management Libraries for .NET

I’d like to officially introduce you to the 1.0 release of the Microsoft Azure Management Libraries. The official announcement of the libraries came out a few days ago on the Microsoft Azure Service Updates blog. Update: Jeff Wilcox wrote up an excellent piece introducing the Management Libraries, in which he covers a lot of ground.

As I was busy travelling and presenting at the //build conference in San Francisco and enjoying my son Gabriel’s 6th birthday, I was a little tied up and unable to get this post out, but it gave me time to shore up a few little things, publish the code, and prepare a surprise for you that I’ll describe below in this post. Let’s just say I wanted to make it as easy as possible for you to get up and running with the 1.0 bits, since I’m so proud of all the work our teams have put into it. This week at the //build/ 2014 conference I presented a session with my buddy Joe Levy on the many new automation features we’ve added to Microsoft Azure. You can watch our //build/ session on Channel 9, which covers all of the topics from the slide image below. Joe and I talked about the Automation Stack in Microsoft Azure, from the SDK Common NuGet package up through the Microsoft Azure Management Libraries for .NET and into PowerShell, and how the new Microsoft Azure Automation Service sits atop all of it for true PowerShell-as-a-service automation that you can use for just about anything.


Demonstrating the Management Libraries

My part of the session was primarily focused on how developers can make use of the Management Libraries (MAML, for short) for various scenarios. I’ve created 2 GitHub projects containing the code for these demos, and I also have another surprise I’ll discuss later in the post. First, the demos!

Integration Testing

One scenario in which I think MAML has a lot to offer is to enable integration testing. Imagine having a Web Site that talks to a Storage Account to display data to the user in HTML. Pretty common scenario that can have all sorts of problems. Connection string incorrectness, a dead back-end, misconfiguration – you never know what could happen. Good integration tests offer more confidence that “at least the environment is right and everything is configured properly.” This scenario is represented by the code in the MAML Integration Testing code repository. Using xUnit tests and MAML together, I was able to automate the entire process of:

  1. Creating a web site
  2. Publishing the web site’s code
  3. Creating a storage account
  4. Getting that storage account’s connection string
  5. Saving data to the storage account that I intend on displaying on the web site
  6. Configuring the web site’s connection string so that it can find the storage account and pull the data for display
  7. Hitting the web site and verifying it displays the correct information
  8. Deleting the storage account
  9. Deleting the web site

If this sounds like a common practice for your Microsoft Azure web apps, you might get some value from this demo, as it could streamline your entire process of integration testing. Here’s the best part – if you’re not really an Azure storage person, and your typical scenario involves a non-cloud-hosted ASP.NET web site that talks to SQL Server, you could still make use of MAML for your own integration tests. Simply use the SQL Management Client to fire up a Microsoft Azure SQL Database, insert a few records, and do basically the rest of the integration testing “stuff” but set up your page to read from the database instead of the storage account. Then, whether you’re deploying your production site to Microsoft Azure or not, you can make sure it all works using a scorched-earth testing environment.

Enabling SaaS

Cloud is a great place for software-as-a-service vendors. In typical SaaS situations, a customer can hit a web site, provide some information, and voilà, their newly-customized web site is all ready. The final demonstration I did during the //build/ session was geared towards these sorts of scenarios. In my demo at //build/, I demonstrated this sort of scenario by creating an MVC application I called MiniBlogger, for it generates live MiniBlog sites running in Microsoft Azure. When the user clicks the button, a Web API controller is invoked using JavaScript. The controller code makes a few calls out to the Microsoft Azure REST API using MAML. It first verifies the site name is available and if not, the user is provided a subtle visual cue that their requested site name isn’t available:


When the user finds a name they like that’s also not already in use, they can create the site. As the API controller iterates over each step of the process it sends messages to a SignalR Hub (yes, I can work SignalR in anywhere), and the user is provided real-time status on the process of the site being created and deployed.


Once the deployment is complete, the site pops up in a new browser, all ready for use. The code for this demo is also on GitHub, so fork it and party.

Get Your Very Own MAML Project Template (the surprise)

In this session I made use of Sayed and Mads’ work on SideWaffle and Template Builder to create a Visual Studio Extension that makes it easy to get up and running with MAML. Sayed and Mads have long thought SideWaffle would be great for coming up with canned presentations, and this was my first attempt at delivering on their goal. I asked them both tons of questions throughout the process, so first and foremost, thanks to them for SideWaffle and their patience as I fumbled through aspects of getting the hang of using it.

You can get the Microsoft Azure Management Libraries extension now in the Visual Studio Extensions Gallery. I’ve also created a little YouTube video demonstrating its usage. In five minutes, you can have a running Console Application that creates Virtual Machines in Microsoft Azure.

This Visual Studio Extension I created contains a few elements. First, it has a project template that references all of the MAML NuGet packages and the Active Directory Authentication Library NuGet package, which are dependencies for the demonstration. When you install the extension you’ll get a new project template like the one highlighted below.


The project is a basic Console Application, but with all the MAML/ADAL NuGets referenced. Also contained within the extension are five item templates and six code snippets that walk you through the process of authoring code that will result in the following workflow:

  1. Authenticate to Microsoft Azure using Azure Active Directory
  2. Retrieve a list of Microsoft Azure subscriptions that the authenticated user can access
  3. Find a specific subscription and associate the AAD token with that subscription
  4. Create a new Cloud Service in the subscription
  5. Create a new Storage Account in the subscription
  6. Get the list of Virtual Machine images containing a filter (this is provided in the snippets as a parameter)
  7. Create a Virtual Machine running in the newly-created Cloud Service container, using the VHD of the image selected earlier
  8. Deploy the Virtual Machine and start it up
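Step 3 above — pairing the AAD token with a chosen subscription — is worth calling out, since it's what turns an authenticated user into usable credentials. A rough Python sketch of the selection logic (hypothetical shapes, not the actual MAML/ADAL types):

```python
def select_subscription(subscriptions, display_name, aad_token):
    """Find a subscription by name and bundle it with the token the
    management clients will present as credentials."""
    match = next(s for s in subscriptions if s["displayName"] == display_name)
    return {"subscriptionId": match["subscriptionId"], "token": aad_token}

creds = select_subscription(
    [{"displayName": "Dev", "subscriptionId": "1111"},
     {"displayName": "Prod", "subscriptionId": "2222"}],
    "Prod", "aad-token")
```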

The screen shot below is from my own instance of Visual Studio testing out the item templates. I’m on step 2 in this screen shot, about to add the class that facilitates the subscription-selection process described above.


Likewise, here’s my code being edited during step 2. Note how the snippet is really doing the work, and the comments provide guidance.


Each step of the process is pretty-well documented. I tried really hard to think of the easiest way to help the Microsoft Azure community get up and running with MAML following our 1.0 release, and this extension seemed to be the best answer I could come up with. I hope you find it as helpful as I think you’ll find it, but I welcome any feedback you may have on the extension and how it could be improved. Same thing for MAML – we’re all about taking feedback, so let us know what we can do to make the future better for you as you automate everything in Microsoft Azure.

Congrats to the Team

I’d definitely like to congratulate my team, and all the teams in Microsoft Azure who brought their awesome in full force this year in preparation for //build/. We had some great releases, amazing announcements, and heard great things from the community. Happy coding!

Rebuilding the SiteMonitR using Azure WebJobs SDK

Some time back, I created the SiteMonitR sample to demonstrate how SignalR could be used to tie together a Web Site and a Cloud Service. Since then Azure has evolved quite rapidly, as have the ASP.NET Web API and SignalR areas. One recent addition to the arsenal of tools available to application developers in Azure is WebJobs. Similar to the traditional Worker Role, a WebJob allows for the continuous or triggered execution of program logic. The main difference between a Worker Role and a WebJob is that the latter runs not in the context of a separate Cloud Service, but as a resource of a Web Site. WebJobs also simplify development of these routine middleware programs, since the only requirement on the developer is to reference the WebJobs NuGet package. Developers can write basic console applications with methods that, when decorated with attributes from the WebJobs SDK, will execute at appropriate times or on schedules. You can learn more about the basics of WebJobs via the introductory article, the WebJobs SDK documentation, or from Hanselman’s blog post on the topic.

In this blog post, I’m going to concentrate on how I’ve used WebJobs and some of my other favorite technologies together to re-create the SiteMonitR sample. I’ve forked the original GitHub repository into my own account to provide you with access to the new SiteMonitR code. Once I wrap up this post I’ll also update the MSDN Code Sample for SiteMonitR, so if you prefer a raw download you’ll have that capability. I worked pretty closely with Pranav Rastogi, Mike Stall, and the rest of the WebJobs team as I worked through this re-engineering process. They’ve also recorded an episode of Web Camps TV on the topic, so check that out if you’re interested in more details. Finally, Sayed and Mads have developed a prototype of some tooling features that could make developing and deploying WebJobs easier. Take a look at this extension and give us all feedback on it, as we’re trying to conceptualize the best way to surface WebJobs tooling and we’d love to have your input on how to make the whole process easier.

Application Overview

SiteMonitR is a very simple application written for a very simple situation. I wanted to know the status of my web sites during a period when my [non-cloud] previous hosting provider wasn’t doing so well with keeping my sites live. I wrote this app up and had a monitor dedicated to it, and since then the app has served as a proving ground for each new wave of technology I’d like to learn. This implementation of the application obviously makes use of WebJobs to queue up various points in the workflow of site-monitoring and logging. During this workflow the code also updates a SignalR-enabled dashboard to provide constant, real-time visibility into the health of a list of sites. The workflow diagram below represents how messages move throughout SiteMonitR.


The goal for the UX was to keep it elegant and simple. Bootstrap made this easy in the last version of the application, and given the newest release of Bootstrap and the templates Pranav and his team made available to us in Visual Studio 2013, it seemed like a logical choice for this new version of SiteMonitR. I didn’t change much from the last release aside from making it even more simple than before. I’ve been listening to the team, and the community, rave about AngularJS but I hadn’t made the time to learn it, so this seemed like a great opportunity. I found I really love AngularJS after reworking this app from Knockout in the previous version. I’ve tried a lot of JavaScript frameworks, and I’m pretty comfortable saying that right now I’m in love with AngularJS. It really is simple, and fun to use.


The entire solution is comprised of 4 projects. Two of these projects are basic console applications I use as program logic for the WebJobs. One is [duh] a Web Application, and the last is a Common project that has things like helper methods, constants, and configuration code. Pretty basic structure, that when augmented with a few NuGet packages and some elbow grease, makes for a relatively small set of code to have to understand.


With a quick high-level introduction to the application out of the way, I’ll introduce the WebJob methods, and walk through the code and how it works.

Code for Harnessing WebJobs

One of the goals for WebJobs was to provide ASP.NET developers a method of reaching in and doing things with Azure’s storage features without requiring developers to learn much about how Azure storage actually works. The team’s architects thought (rightfully) that by providing convenience attributes covering the abstract use-cases common to many Azure scenarios, more developers would have the basic functionality they need to use storage without actually needing to learn how to use storage. Sometimes the requirement of having to learn a new API to use a feature diminishes that feature’s usefulness (I know, it sounds crazy, right?).

So, Mike and Pranav and the rest of the team came up with a series of attributes that are explained pretty effectively in their SDK documentation. I’m going to teach via demonstration here, so let’s just dive in and look at the first method. This method, CheckSitesFunction, lives in the executable code for a WebJob that will be executed on a schedule. Whenever the scheduler service wakes up this particular job, the method below will execute with two parameters being passed in. The first parameter references a table of SiteRecord objects, the second is a storage queue into which the code will send messages.


You could’ve probably guessed what I’m going to do next. Iterate over all the records in the table, grab the URL of the site that needs to be pinged, then ping it and send the results to a storage queue. The out parameter in this method is actually a queue itself. So the variable resultList below is literally going to represent the list of messages I’m planning on sending into that storage queue.

Now, if you’re obsessive like me you’ll probably have that extra monitor set up just to keep tabs on all your sites, but that’s not the point of this WebJob. As the code executes, I’m also going to call out to the Web API controller in the web site via the UpdateDashboard method. I’ll cover that in more detail later, but that’s mainly to provide the user with real-time visibility into the health of the sites being checked. Realistically all that really matters is the log data for the site health, which is why I’m sending it to a queue to be processed. I don’t want to slow down the iterative processing by needing to wait for the whole process so I queue it up and let some other process handle it.


In addition to the scheduled WebJob, there’s another one that runs on events. Specifically, this WebJob wakes up whenever messages land in specific queues it observes. The method signatures, with appropriately-decorated attributes specifying which queues to watch and which tables to process, are shown in the code below.


One method in particular, AddSite, runs whenever a user event sends a message into a queue used to receive requests to add sites to the list being watched. The user triggers this use-case from the SiteMonitR dashboard: a message containing a URL is sent to a queue, and this method wakes up and executes. Whenever a user sends a message containing a string URL value for the site they’d like to monitor, that value is saved to the storage table provided in the second parameter. As you can see from the method below, there’s no code that makes explicit use of the storage API or SDK; the table is just an instance of an IDictionary implementation to which I’m adding items.
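
A sketch of what AddSite could look like is below; the queue and table names, and the key-generation strategy, are assumptions, but the shape mirrors the description above.

```csharp
// Sketch only: runs whenever a URL string lands on the "addsite" queue.
// The table binding is just an IDictionary keyed by (PartitionKey, RowKey),
// so adding a row requires no storage SDK code at all.
public static void AddSite(
    [QueueInput("addsite")] string url,
    [Table("sites")] IDictionary<Tuple<string, string>, SiteRecord> sites)
{
    // Partition key / row key for the new record; the real code may derive
    // these differently (e.g., from the URL itself).
    var key = new Tuple<string, string>("sites", Guid.NewGuid().ToString());
    sites.Add(key, new SiteRecord { Url = url });
}
```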


The SaveSiteLogEntry method below is similar to the AddSite method. It has a pair of parameters. One of these parameters represents the incoming queue watched by this method; the second represents a table into which data will be stored. In this example, however, the first parameter isn’t a primitive type, but rather a custom type I wrote within the SiteMonitR code. This variation shows the richness of the WebJob API: when messages land on this queue they’re deserialized into object instances of type SiteResult that are then handled by this method. This is a lot easier than needing to write my own polling mechanism to sit between my code and the storage queue. The WebJob service takes care of that for me, and all I need to worry about is how I handle incoming messages. That removes a good bit of the ceremony of working with the storage SDK; of course, the trade-off is that I have little to no control over the inner workings of the storage functionality.
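
A sketch of SaveSiteLogEntry is below; the queue/table names and key scheme are assumptions, but note how the host hands the method an already-deserialized SiteResult.

```csharp
// Sketch only: the WebJobs host deserializes each queue message into a
// SiteResult instance before invoking the method; no polling code required.
public static void SaveSiteLogEntry(
    [QueueInput("siteresults")] SiteResult result,
    [Table("sitelog")] IDictionary<Tuple<string, string>, SiteResult> log)
{
    // Partition by site URL, row-key by timestamp so the log rows for a
    // site group together and sort chronologically.
    var key = new Tuple<string, string>(
        result.Url, DateTime.UtcNow.Ticks.ToString("d19"));

    log.Add(key, result);
}
```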

That’s the beauty of it. In a lot of application code, the plumbing doesn’t really matter in the beginning. All that matters is that all the pieces work together.


Finally, there’s one more function that deletes sites. This function, like the others, takes a first parameter decorated by the QueueInput attribute to represent a queue that’s being watched by the program. The final two parameters in the method represent the two tables from which data will be deleted. First the site record is deleted, then the logs that have been stored up for that site are deleted.
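
A sketch of the delete function follows; again, queue and table names are assumptions, and the real code may match records by key rather than by scanning.

```csharp
// Sketch only: a queue-triggered delete that touches two tables.
public static void DeleteSite(
    [QueueInput("deletesite")] string url,
    [Table("sites")] IDictionary<Tuple<string, string>, SiteRecord> sites,
    [Table("sitelog")] IDictionary<Tuple<string, string>, SiteResult> log)
{
    // Delete the site record first...
    foreach (var key in sites.Where(s => s.Value.Url == url)
                             .Select(s => s.Key)
                             .ToList())
    {
        sites.Remove(key);
    }

    // ...then remove any log entries stored up for that site
    // (the log sketch above partitions rows by URL).
    foreach (var key in log.Keys.Where(k => k.Item1 == url).ToList())
    {
        log.Remove(key);
    }
}
```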


The SiteMonitR Dashboard

The UX of SiteMonitR is built using Web API, SignalR, and AngularJS. This section will walk through this part of the code and provide some visibility into how the real-time updates work, as well as how the CRUD functionality is exposed via a Web API Controller. This controller’s add, delete, and list methods are shown below. Note, in this part of the code I’ll actually be using the Storage SDK via a utility class resident in the Web Project.


Remember the scheduled WebJob from the discussion earlier? During that section I mentioned the Web API side of the equation and how it would be used with SignalR to provide real-time updates to the dashboard from the WebJob code running in a process external to the web site itself. In essence, the WebJob programs simply make HTTP GET/POST calls over to the Web API side to the methods below. Both of these methods are pretty simple; they just hand off the information they obtained during the HTTP call from the WebJob up to the UX via bubbling up events through SignalR.


The SignalR Hub being called actually has no code in it. It’s just a Hub the Web API uses to bubble events up to the UX in the browser.
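
Assuming a hub name like SiteMonitRHub (the actual name comes from the SiteMonitR source), the empty hub and the pattern the Web API uses to raise events through it look roughly like this:

```csharp
// The hub itself is empty; it exists only so browsers have something to
// connect to and the server has a context to broadcast through.
public class SiteMonitRHub : Hub
{
}

// Elsewhere, in the Web API controller, events bubble up to connected
// dashboard clients via the hub context (method name is an assumption):
public void UpdateDashboard(SiteResult result)
{
    var hub = GlobalHost.ConnectionManager.GetHubContext<SiteMonitRHub>();
    hub.Clients.All.siteChecked(result);
}
```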


The code the WebJobs use to call the Web API makes use of a client I’ll be investigating in more detail in the next few weeks. I’ve been so busy in my own project work that I’ve had little time to keep up with some of the awesome updates coming from the Web API team recently, so I was excited to have time to tinker with the Web API Client NuGet’s latest release, which makes it dirt simple to call out to a Web API from client code. In this case, my client code is running in the scheduled WebJob. The utility code the WebJob calls, which in turn calls the Web API controller, is below.
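
A sketch of that utility code is below, using HttpClient plus the PostAsJsonAsync extension from the Web API Client package; the route, app setting name, and method name are assumptions.

```csharp
// Sketch only: calling the dashboard's Web API from the WebJob. The
// PostAsJsonAsync extension method comes from the Web API Client NuGet
// (System.Net.Http.Formatting).
public static async Task SendResultToDashboard(SiteResult result)
{
    // The dashboard site's URL, stored in configuration (assumed name).
    var dashboardUrl = ConfigurationManager.AppSettings["SiteMonitRUrl"];

    using (var client = new HttpClient { BaseAddress = new Uri(dashboardUrl) })
    {
        // Serialize the SiteResult as JSON and post it to the controller.
        var response = await client.PostAsJsonAsync("api/sites/update", result);
        response.EnsureSuccessStatusCode();
    }
}
```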


As I mentioned, I’m using AngularJS in the dashboard’s HTML code. I love how similar AngularJS’s templating is to Handlebars. I didn’t have to learn a whole lot here, aside from how to use the ng-class attribute with potentially multiple values. The data-binding logic on that line defines the color of the box indicating the site’s status. I love this syntax and how easy AngularJS makes it to update UX elements when models change. I don’t think I’ve ever had it so easy, and I’ve really enjoyed the logical nature of AngularJS and how everything just seems to work.


Another thing that’s nice about AngularJS is well explained by Ravi in his blog post on a Better Way of Using ASP.NET SignalR with AngularJS. Basically, since services in AngularJS are singletons, and since AngularJS has a few great ways of doing injection, one can easily wire in a service that connects to the SignalR Hub. Then this wrapper service can fire JavaScript methods that are handled by Angular controllers on the client. This approach felt very much like the DI/IoC patterns I’ve taken for granted in C# but never really used much in JavaScript. Things like abstracting a SignalR connection behind a service are really elegant when done with AngularJS. The code below shows the service I inject into my Angular controller that does just this: it handles SignalR events and bubbles them up to the JavaScript code on the client.


Speaking of my AngularJS controller, here it is. This first code is how the Angular controller makes HTTP calls to the Web API running on the server side. Pretty simple.


Here’s how the Angular controller handles the events bubbled up from the Angular service that abstracts the SignalR connection back to the server. Whenever events fire from within that service they’re handled by the controller methods, which re-bind the HTML UX and keep the user up-to-date in real-time on what’s happening with their sites.


With that terse examination of the various moving parts of the SiteMonitR application code out of the way, the time has come to learn how it can be deployed.

Deploying SiteMonitR

There are three steps in the deployment process for getting the site and the WebJobs running in Azure, and the whole set of code can be configured in one final step. The first step should be self-explanatory; the site needs to be published to Azure Web Sites. I won’t go through the details of publishing a web site in this post. There are literally hundreds of resources out there on the topic, from Web Camps TV episodes to Sayed’s blog, as well as demo videos on AzureConf, //build/ keynotes, and so on. Suffice it to say that publishing and remote debugging ASP.NET sites in Azure Web Sites is pretty simple: just right-click, select Publish, and follow the steps.


Once the code is published the two WebJob executables need to be zipped up and deployed. First, I’ll pop out of Visual Studio to the bin/Release folder of my event-driven WebJob. Then I’ll select all the required files, right-click them, and zip them up into a single zip file.


Then in the portal’s widget for the SiteMonitR web site I’ll click the WebJobs tab in the navigation bar to get started creating a new WebJob.


I’ll give the WebJob a name, then select the zip file and specify how the WebJob should run. I’ll select Run Continuously for the event-driven WebJob. It has code in it that holds the process open while it waits for incoming queue messages, so this selection is the right fit.


Next I’ll zip up the output of the scheduled WebJob’s code.


This time, when I’m uploading the zip file containing the WebJob, I’ll select the Run on a Schedule option from the How to Run drop-down.


Then I’ll set up the schedule for my WebJob. In this example I’m going to be a little obsessive and run the process to check my sites every fifteen minutes. So, every fifteen minutes the scheduled task responsible for checking the sites will wake up, check all the sites, and enqueue the results of each check so they can be logged. If a user were sitting on the dashboard they’d observe this check happen every fifteen minutes for each site.


The two WebJobs are listed in the portal once they’re created. Controls for executing manual or scheduled WebJobs as well as stopping those currently or continuously running appear at the bottom of the portal.



The final step in deployment (as always) is configuration. SiteMonitR was built for Azure, so it can be entirely configured from within the portal. The first step in configuring SiteMonitR is to create a Storage Account for it to use for storing data in tables and for the queue being used to send and receive messages. An existing storage account could be used, since each object created in the storage account is prefixed with sitemonitr. That said, some may prefer an isolated storage account; if so, create a storage account in the portal.


Once the storage account has been created, copy the name and either the primary or secondary key so that you can build a storage account connection string. That connection string needs to be used, as does the URL of my site (in this case, a live instance of this code), as connection strings and an appSetting (respectively). See below for how to configure these items right from within the Azure portal.


Once these values are configured, the site should be ready to run.

Running SiteMonitR

To test it out, open the site in another browser instance, then go to the WebJobs tab of the site and select the scheduled task. Then, run it from within the portal and you should see the HTML UX reacting as the sites are checked and their status sent back to the dashboard from the WebJob code.


Speaking of dashboards, the WebJobs feature itself has a nice dashboard I can use to check on the status of my WebJobs and their history. I’ll click the Logs link in the WebJobs tab to get to the dashboard.


The WebJobs Dashboard shows all of my jobs running, or the history for those that’ve already executed.



I’m really enjoying my time spent with WebJobs up to this point. At the original time of this post, WebJobs was in its first alpha release stage, so it’s still pretty early. I see huge potential for WebJobs for customers who have a small middle tier or a scheduled task that needs to happen; in those cases a Worker Role could be overkill. WebJobs is a great middle-of-the-road approach that gives web application owners and developers scheduled or event-driven programs enabling a second tier (or multiple tiers) without extra infrastructure. I’ve really enjoyed playing with WebJobs, learning the SDK, and I look forward to more interesting things coming out in this space.

Writing a Windows Phone 8 Application that uses the Azure Management Libraries for on-the-go Cloud Management

Since the initial release of the Azure Management Libraries (WAML) as Portable Class Libraries (PCL, or pickles as I like to call them) we’ve had tons of questions from community members like Patriek asking how Windows Phone applications can be written that make use of the cloud management functionality WAML offers. At first glance it would seem that WAML is “unsupported on Windows Phone,” but the reality is that WAML is fully supported as a PCL; some external factors just made it seem shy of impossible to glue our PCL framework to each of the targeted client platforms. In the case of Phone, there are two blockers:

  1. Since the X509Certificate2 class is unsupported on Windows Phone developers can’t make use of the certificate-based method of authenticating against the Azure REST API
  2. Since there’s no ADAL implementation that targets Phone there’s no way to use ADAL’s slick functionality to get my authentication token. The assumption here is that some creative coding is going to have to be done to go deeper into the OAuth stack. Sounds nasty.

Well, that’s never stopped any motivated coder before, right? Armed with some code from Patriek, a problem statement, and a pretty informative blog post written by none other than Vittorio on how he had some fun tying Phone to REST Services using AAD, I set out to start at ground zero to get some code working on my phone that would enable me to see what’s happening in my Azure subscription, across all my sites, services, and data stores. I set out to create a new type of phone app that will give me complete control over everything I’ve got published to my Azure cloud subscription. This post will walk through the code I worked up to get a baseline Windows Phone mobile cloud management app running.

Creating and Configuring the Phone App

First, a little background is required here. As Vittorio’s post makes very clear, the OAuth dance the WP8 code is going to be doing is pretty interesting and challenging, and it results in a great set of functionality, and an even greater starting point if this app gives you ideas on what to make it do next. In fact, if you’re into that whole social coding thing, the code in this post has a GitHub repository right here. The repository contains the full version of the project I created, shown in the screen shot below.


The first step is to pull down the NuGet packages I’ll need to write this application. In this particular case I’m thinking of writing a Windows Phone app that I’ll use to manage my Azure Web Sites, Cloud Services, SQL Databases, Storage Accounts, and everything in between. So I’ll right-click my Phone project and select the correct packages in the next step.


I’ll go ahead and pull down the entire WAML NuGet set so I can manage and view all of my asset types throughout my Azure subscription. By selecting the Microsoft.Azure.Management.Libraries package, all of the SDK Common and WAML platform NuGets will be pulled down for this application to use.


The Common SDK, shown below in the project’s references window, provides the baseline HTTP, Parsing, and Tracing functionality that all the higher-order WAML components need to function. Next to that is the newest member of the Common SDK tools, the Common Windows Phone reference, which is new in the latest release of the Common SDK package. This new addition provides the high-level functionality for taking token credentials from within the Windows Phone client and being able to attach those credentials to the SDK client code so that calls made to the Azure REST API from a phone are authenticated properly.


If you read my last blog post on using WAAD and WAML together to authenticate WAML-based client applications, there are a few important steps in that post about working through the process of adding an application that can authenticate on behalf of your WAAD tenant. Take a look at that post here to get the values for the App Resources dialog in the screen shot below. In a sense, you’ll be setting application settings here that you can get from the Azure portal. You’ll need to create an application, then get that app’s client id, and your subscription and active directory tenant id or domain name, and place those values into the code’s resources file, as shown below.


The Login View

The first view shows a built-in web browser control. This web browser will be used to direct the user to the login page. From there, the browser’s behavior and events will be intercepted. The authentication data the form contains will then be used to manually authenticate the application directly to AAD.

Reminder – as Vittorio’s post pointed out, you might not want to use this exact code in your production app. This is a prototype meant to demonstrate putting it all together in the absence of some important resources. Something better will come along soon; for now, the OAuth dance is sort of up to me.

Note that the web browser control has already been event-bound to a method named webBrowser_Navigating, so each time the browser tries to make a request this handler runs first, allowing my code to intercept the response coming back to the browser.
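
A sketch of that handler is below; App.RedirectUrl, GetAuthorizationCode, and GetToken are placeholders for the values and helpers in the real project.

```csharp
// Sketch only: intercepting the browser flow on Windows Phone. AAD signals
// completion by navigating to the redirect URL configured in the portal.
private void webBrowser_Navigating(object sender, NavigatingEventArgs e)
{
    if (e.Uri.AbsoluteUri.StartsWith(App.RedirectUrl))
    {
        e.Cancel = true; // stop the browser; we take it from here

        // Pull the authorization code off the query string and trade it
        // for an access token at the token endpoint.
        var code = GetAuthorizationCode(e.Uri); // hypothetical helper
        GetToken(code);
    }
}
```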


Once the login page loads up, I’ll presume a login is in order, so I’ll go ahead and navigate to the login page, passing in a few of my own configuration parameters to make sure I’m hitting my own AAD tenant.


Vittorio deep-dives on the workflow of this code a lot better than I’ll try to do here. The basic idea is that the browser flow is being intercepted. Once the code realizes that AAD has authenticated me it hands back a URL beginning with the string I set for the redirectUrl property of this application in the portal. When my code sees my redirectUrl come through, I just run the GetToken() method to perform the authentication. It posts some data over the wire again, and this time, the response activity is handled by the RetrieveTokenEndpointResponse() event handler.


Once this handler wakes up to process the HTTP response I’ve received from the AAD authentication workflow (that we did via the OAuth dance), my access token is pulled out of the response. The subscription id value is then persisted via one of my App’s properties, and the next view in the application is shown.


When the user first opens the app, the obvious first step is visible: we need to show the user the AAD login site and allow them to manually enter their email address and password. In this window I could use a Microsoft (Live/Hotmail/etc.) account or an organizational account. The latter, so long as it’s been marked as an Azure subscription co-admin, would be able to log in and make changes, too.


Once the user logs in they’ll be shown a list of their subscriptions, so let’s take a look at how the code in the subscriptions view is constructed.

Selecting a Subscription

The subscription selection view just shows a list of subscriptions to the user in a long scrolling list. This list is data-bound to the results of a REST API call made via the SubscriptionClient class, which goes out and gets the list of subscriptions a user can view.
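
The call itself is short once a token is in hand. A sketch, assuming the subscriptions client from the WAML NuGet set and the app-level properties used elsewhere in this post:

```csharp
// Sketch only: listing the subscriptions visible to the signed-in user.
// The token-only TokenCloudCredentials constructor is used here because
// no subscription has been chosen yet.
var credentials = new TokenCloudCredentials(App.AccessToken);
var subscriptionClient = new SubscriptionClient(credentials);

var result = await subscriptionClient.Subscriptions.ListAsync();

// Data-bind the subscription list to the view (control name assumed).
SubscriptionList.ItemsSource = result.Subscriptions;
```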

Note – even though I can see my full list of subscriptions, I’ve only installed the application into one AAD tenant in one of my subscriptions. If I click any of the other subscriptions, things won’t work so well. A subscription and a tenant are tied together. Unless your application gets special permission (it does happen from time to time, I hear), your application can only access Azure assets in the same subscription. If you wanted to work across subscriptions you’d have a little more work to do.

The XAML code below shows how the list of subscriptions is data-bound to the view. I may eventually go into this code and add some show/hide logic that would display only those subscriptions I could affect using the application settings I’ve got running in this app. The point is, I can see all of the subscriptions I can manage, and could conceptually manage them with some additional coding.


The code below is what executes to log a user in with their stored access token and subscription id; the TokenCloudCredentials class wraps those credentials up as a single construction parameter accepted by most of the management clients. Since I was able to get a token from the OAuth dance with AAD, and since I know which subscription id I want to affect, I’ve got everything I need to start working against the rest of the Service Management REST API endpoints.
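
In sketch form (the App-level property names are assumptions), the wrap-up is just:

```csharp
// Sketch only: wrapping the stored token and chosen subscription id so any
// WAML management client can use them.
var credentials = new TokenCloudCredentials(
    App.SelectedSubscriptionId, App.AccessToken);

// Every management client takes the same credential object, for example:
var webSiteClient = new WebSiteManagementClient(credentials);
```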


When the subscriptions page loads, it gives me the chance to select the subscription I want to manage. Note: I just need to be cautious and only click the subscription where my AAD tenant is set up. In my case, I’ve got the application installed into my AAD tenant that lives in the subscription named Azure MSDN – Visual Studio Ultimate. If I were to select any of my other subscriptions I’d get an exception.

To have true parity across all subscriptions you’d need an application set up in each of your directories plus some additional configuration, or you’d have to allow other tenants to set up multi-tenancy with your AAD tenant.


Once the subscription list is data-bound the user can select a subscription to work with. Selection of a subscription item in the list fires the following code.


It persists the subscription id of the subscription the user selected, and then navigates the user to the AssetListView page, which will show more details on what’s in a user’s Azure subscription.

Viewing Cloud Assets via Mobile Phone

To provide a simple method of being able to scroll through various asset types rather than emulate the Azure portal, I made use of the LongListSelector and Pivot controls. In each Pivot I display the type of Azure asset being represented by that face of the pivot control. Web Sites, slide, Storage, slide, SQL, slide, Cloud Services… and so on. The idea for the phone prototype was to show how WAML could be used to load ViewModel classes that would then be shown in a series of views to make it easy to get right to a particular asset in your Azure architecture to see details about it in a native Phone interface.


Since the assets list page is primarily just a logically-grouped series of lists for each of the types of assets one can own in their Azure stack, the ViewModel class’s main purpose is to expose everything that will be needed on the view. It exposes Observable Collections of each type of Azure asset the app can display.


What would arguably be better placed in controller functionality ends up in the code-behind of the view, which does most of the data acquisition from the Azure REST APIs via WAML calls. In the code sample below the GetWebSites method is highlighted, but the others – GetSqlDatabases, GetStorageAccounts, and GetHostedServices – all do the same sorts of things. Each method creates instances of the appropriate management client classes, retrieves asset lists, and data-binds the lists so that the assets are all visible in the Windows Phone client.
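
A sketch of one such method is below; the ViewModel property names are assumptions, and the WebSpaces calls reflect how WAML groups Web Sites inside web spaces.

```csharp
// Sketch only: one of the data-acquisition methods in the view's codebehind.
private async Task GetWebSites()
{
    var credentials = new TokenCloudCredentials(
        App.SelectedSubscriptionId, App.AccessToken);

    using (var client = new WebSiteManagementClient(credentials))
    {
        // Sites live inside web spaces, so enumerate the spaces first.
        var spaces = await client.WebSpaces.ListAsync();

        foreach (var space in spaces)
        {
            var sites = await client.WebSpaces.ListWebSitesAsync(
                space.Name, new WebSiteListParameters());

            // Data-bind into an ObservableCollection so the pivot updates.
            foreach (var site in sites)
                ViewModel.WebSites.Add(site);
        }
    }
}
```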


The web sites pivot is shown below. I’m thinking of adding a deeper screen here, one that would show site statistics over the last N days, maybe add an appSetting and connectionString editor page, or something else. Have any ideas?


Here’s a list of my cloud services – 2 empty PaaS instances and 1 newly set-up VM instance.



Though the OAuth dance has a few drawbacks, and the code is a little less elegant than when I could make use of ADAL in my previous example, it works. Once I have the token information I need from the browser, I can run with it by creating an instance of TokenCloudCredentials, passing in the token I received from the OAuth dance to authenticate WAML and, therefore, make calls out to the Azure Service Management API to automate my asset management. Since WAML is a PCL package, and thanks to the introduction of Windows Phone support in the SDK Common NuGet package, the community can now freely experiment with solutions combining Windows Phone and the Management Libraries.

Happy Coding!

Using Azure Active Directory to Authenticate the Management Libraries

Integration with Azure Active Directory was one of the main features we wanted to get into the 1.0 release of the Azure Management Libraries. Since Visual Studio and even the PowerShell Cmdlets (which use WAML, in fact) already have support for AAD, it was really important to all of us on the team that we have a method of authenticating directly via AAD from within C# code using WAML. This makes for a consistent login experience across all three of these (and more) areas. Most importantly, it makes it easy for Azure users to manage all their subscription assets by logging in with the username/password credentials they use in the portal, rather than going through the process of creating and uploading management certificates. This blog post will answer some of the questions and requests we’ve had from the community on how to tie AAD and WAML together to create AAD-authenticated code that can manage your Azure account assets.

This won’t be a deep-dive into how AAD works, but more a short examination on how it and WAML can be used together.

I worked quite a bit with the AAD team, especially Vittorio Bertocci, on this post. Our teams meet regularly so we stay on the same page, and the folks on their team spent some additional cycles with us spelunking, coming up with features, and so on. As we talked about ideas, the AAD team gave me some great resources, like this post, which walks through the process of setting up a client app so that it can be authenticated using AAD. Vittorio’s site has a series of great examples on how to go deeper with AAD. I won’t go too deep into the inner workings of AAD in this post, so check out those great resources if you want more information. Thanks to the AAD team for all your help, patience, and support!

Create a client app for managing Azure assets

The code for this sample application won’t be too complicated. I’m actually going to retrofit a small command-line app I wrote the other day when a peer on another team asked to export the list of Virtual Machine images in the gallery. The code for this itty-bitty application’s beginnings is below; it makes use of the Compute Management Library.
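
In sketch form (GetManagementCertificate is a placeholder for however the certificate is loaded, and subscriptionId is assumed to be defined nearby), the starting point looks like this:

```csharp
// Sketch only: listing the VM image gallery with certificate credentials,
// the app's starting point before any AAD changes.
static void Main(string[] args)
{
    var credentials = new CertificateCloudCredentials(
        subscriptionId, GetManagementCertificate());

    using (var computeClient = new ComputeManagementClient(credentials))
    {
        var images = computeClient.VirtualMachineImages.List();
        foreach (var image in images)
            Console.WriteLine(image.Label);
    }
}
```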


With this code already working using my management certificate, and with it being a relatively simple app, it seemed like a great case for demonstrating how easy it is to swap out CertificateCloudCredentials for TokenCloudCredentials once the AAD infrastructure is set up and ready. Speaking of which, this is a good time to walk through the process of using the Azure portal to set up an AAD application in my directory that will use the Management Libraries to manage my subscription.

Setting up an AAD application using the portal

To authenticate using AAD I first need to create an application in my Azure subscription’s existing Active Directory tenant. To do this I go to my AAD tenant page in the portal and click the Applications tab at the top.


I currently have no applications authenticating against my directory so I’ll click the Add button at the bottom of the portal.


Selecting the second option in this list presents me with a huge list of existing applications with which AAD can integrate. Since I’m interested in authenticating an application I’m writing, I’ll click the first option. 


The application I’m writing is a small console application, not a web app. Hence I’ll select the second option – Native Client Application - in the Add Application dialog, shown below. If I’d wanted to put an ASP.NET Web API up in the cloud that would use AAD on the back end for authentication, I’d select the top option, Web Application and/or Web API. The API my client app will need to access is the actual Azure Service Management API, which AAD has a special provision for in the portal that will be examined in a moment.


Next, I’ll need to provide a redirect URI. Even though the application I’m writing is a native app, I still need to provide this URI to give AAD more details on the specific application that will be authenticating.


Once these settings are made, I can see the application’s client id in the portal, along with the information I provided during the application’s creation process. The MSDN code site article I mentioned above, written by the AAD team, walks through more details of how the client application authentication workflow functions and offers more details, so definitely check out that sample if you need more information on these settings. For now, what’s important to remember from this window are the client id text box and the redirect URI created below. Those two strings make up 2 of the 3 strings I’ll need to authenticate a WAML application with AAD.


The final piece of information is the AAD tenant id, which can be found by clicking the View Endpoints button in the portal when on the page for the directory I wish to use to authenticate users.


The URI provided in each of the text boxes in the resulting dialog contains a GUID. This GUID is the tenant id, so it’ll need to be copied out for use in my code.


Changing the WAML code to perform AAD authentication

Back in Visual Studio, I’m ready to change the code to make use of AAD authentication. The first step is to reference the required NuGet package, the Active Directory Authentication Library (ADAL). This gives my project the ability to prompt the user with a login dialog, into which they can enter their Microsoft account username and password, and it adds all sorts of goodness from the AAD folks that you can make use of in your client applications.


In the code I’ll add a method called GetAuthorizationHeader that takes my tenant id as a parameter. I’ll presume the calling code might want to use the common tenant, but give callers the ability to pass in their own GUID identifying a custom Active Directory tenant. Take note that within this method I’m making use of the application’s settings: the redirect URL and the client id from the portal. As well, I’m passing the base URI of the Azure REST API as the value for the resource parameter to the AuthenticationContext.AcquireToken method. Vittorio has a great blog post introducing ADAL and what you can do with it, so if you’re looking to dive deeper on this topic, head over to his site and check it out. In a more practical implementation I should probably set these values up as appSetting variables, but for this demonstration code simply pasting in the values is sufficient.
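
A sketch of GetAuthorizationHeader follows, assuming clientId and redirectUri hold the values captured from the portal earlier.

```csharp
// Sketch only: acquiring an AAD token via ADAL for use with WAML.
private static string GetAuthorizationHeader(string tenantId = "common")
{
    var context = new AuthenticationContext(
        "https://login.windows.net/" + tenantId);

    // The resource parameter is the base URI of the Service Management API;
    // ADAL pops the login dialog if no cached token is available.
    var result = context.AcquireToken(
        "https://management.core.windows.net/",
        clientId,
        new Uri(redirectUri));

    return result.AccessToken;
}
```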


The final step in the code is to comment out the old authentication mechanism, where I was using the X509Certificate2 class to authenticate with a management certificate. In place of that code, which creates an instance of the CertificateCloudCredentials class, I’ll call the new GetAuthorizationHeader method to get my token, then use that token as a parameter to an instance of the TokenCloudCredentials class.
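
The swap, in sketch form (variable names assumed), amounts to a couple of lines:

```csharp
// Sketch only: certificate-based credentials, now commented out...
// var credentials = new CertificateCloudCredentials(subscriptionId,
//     new X509Certificate2(Convert.FromBase64String(managementCertificate)));

// ...replaced by token-based credentials from AAD.
var token = GetAuthorizationHeader(tenantId);
var credentials = new TokenCloudCredentials(subscriptionId, token);

using (var computeClient = new ComputeManagementClient(credentials))
{
    // Same calls as before; only the credential type changed.
    var images = computeClient.VirtualMachineImages.List();
}
```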


Authorizing access to the Azure Service Management API to my client application

At this point the code almost works. However, when I run it I get an error whose exception message pretty clearly indicates what’s wrong: this application hasn’t been granted access to the Service Management API.


A recently-added (okay, a personally recently-discovered) feature within the portal allows me to specify which applications this client application is allowed to access. Since I’ve not yet created any APIs of my own that use AAD as an authentication mechanism, I only see the two default items in this drop-down menu (the Graph Explorer and the Service Management API). By selecting the Azure Service Management API option, I effectively grant my client application access to any of the REST API URLs the Service Management API exposes. Once I save this setting, the code should run and give me back the list of Virtual Machine images from the gallery.


When I run the code this time, it works as expected. I’m prompted for my credentials, thanks to some of the functionality provided in the ADAL package.


Once I log in using my subscription’s credentials, WAML can authenticate and pull down the list of virtual machines.


From here, I could use the rest of the functionality available in WAML to do whatever I need. Now that I’m able to authenticate using AAD, I won’t need to create, upload, configure, and track my management certificates. Rather, my applications can just make use of Active Directory to log in and manage my subscription’s assets.

I hope you find this to be an exciting new option available in the Azure Management Libraries. I’ve been looking forward to showing this feature off, as I think it shows two awesome areas of the client developer experience – AAD via ADAL and the REST API via WAML – being used together to create some amazing client-side magic against the Azure cloud. Thanks again to the AAD folks for lending a hand from time to time with some of the details and fine tuning!

Managing Web Sites from Web Sites using the Azure Management Libraries for .NET

I received an email from Hanselman this week, a forward of an email he received after posting his [much-appreciated and far too kind] blog post on WAML. The email was from a community member running into a problem when trying to use WAML to create web sites (or manage their existing sites) from code running on the server side in another Azure Web Site. I can imagine lots of user stories in which a Web Site could be used with WAML.

Automation isn’t limited to the desktop. With WAML you can pick and choose which areas you need, install the appropriate NuGets, and get up and running quickly. There are a few caveats, however – mostly deliberate design decisions based on the basic ideas of cryptography and data integrity. I spent a few hours this week talking to my good friends in the Web Sites team, along with my own awesome team in Azure Developer Experience, to work through some certificate-loading problems I was seeing in Web Sites. The ability to use a management certificate is pretty important when programming against WAML (yes, AAD support is coming soon in WAML). I’ve seen a few different forums mention similar issues. Given that WAML makes use of certs, and that using certs on the server side in a Web Site can be a little tricky, I thought a walk-through was in order.

How Meta. A Web Site that Makes Web Sites.

I’ve created a Visual Studio 2013 solution, with an ASP.NET project in the solution, that I’ll be using for this blog post. The code for this site is on GitHub, so go grab that first. The code in the single MVC controller shows you a list of the sites you have in your subscription. It also gives you the ability to create a new site. The results of this look like the code below.


Here’s a snapshot of the code I’m using in an MVC controller to talk to the Azure REST API using WAML.

There are a few areas that you’ll need to configure, but I’ve made all three of them appSettings, so it should be relatively easy to do. The picture below shows all of these variables. Once you edit these and work through the certificate-related setup steps below, you’ll have your very own web site-spawning web site. You probably already have the first of these variables, but if you don’t, what are you waiting for?


Once your Azure subscription ID is pasted in, you’ll need to do a little magic with certificates. Before we get to all the crypto-magic, here’s the method the controller calls that prepares WAML for usage by setting up an X509Certificate.
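A sketch of what that method might look like; the App_Data file path and the appSetting key names here are assumptions for illustration, not the sample's exact values:

```csharp
using System;
using System.Configuration;
using System.Security.Cryptography.X509Certificates;
using System.Web.Hosting;
using Microsoft.WindowsAzure;

public static class CredentialHelper
{
    public static SubscriptionCloudCredentials GetCredentials()
    {
        // Load the physical .pfx file dropped into App_Data
        // (path and setting names are illustrative).
        var pfxPath = HostingEnvironment.MapPath("~/App_Data/mycert.pfx");
        var pfxPassword = ConfigurationManager.AppSettings["CertificatePassword"];
        var subscriptionId = ConfigurationManager.AppSettings["SubscriptionId"];

        var certificate = new X509Certificate2(pfxPath, pfxPassword);
        return new CertificateCloudCredentials(subscriptionId, certificate);
    }
}
```

Loading the certificate from a physical file, rather than from a base-64 string, is the key point for the Web Sites scenario, as the next section explains.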


I’d been using a base-64 encoded string representation of the certificate, but that wouldn’t work on top of Web Sites. Web Sites needs a real, physical certificate file. Which makes sense – you want access to your subscription to be a difficult thing to fake, so a one-time configuration step to secure the communication is worth it. The code below then takes that credential and runs some calls against the WebSiteManagementClient object, which is a client class in the Web Sites Management Package.
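A rough sketch of the kinds of calls involved; treat the parameter shapes as approximate to the preview-era package:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Management.WebSites;
using Microsoft.WindowsAzure.Management.WebSites.Models;

// credentials is the CertificateCloudCredentials built from the .pfx file.
var client = new WebSiteManagementClient(credentials);

// List every web space in the subscription, then the sites in each.
foreach (var webSpace in client.WebSpaces.List())
{
    var sites = client.WebSpaces.ListWebSites(
        webSpace.Name, new WebSiteListParameters());

    foreach (var site in sites)
    {
        Console.WriteLine("{0}: {1}", webSpace.Name, site.Name);
    }
}
```

Creating a new site follows the same pattern through the client's WebSites operations.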


This next part is all about cryptography, certificates, and moving things around properly. It’s not too complicated or deep into the topics – just a few steps you should know in case you need to do this again.

Don’t worry. If it were complicated, you wouldn’t be reading about it here.

Creating a Self-Signed Cert and Using the PFX and CER Files Properly with Web Sites

I’ll run through these steps pretty quickly, with pictures. There are many other great resources online on how to create certificates so I’m not going to go into great detail. This section has three goals:

  1. Create a self-signed certificate
  2. Create a *.CER file that I can use to upload to the Azure portal as a management certificate
  3. Use the *.PFX file I created on the way to creating my *.CER file on my web site

To create the self-signed cert, open up IIS Manager (some would prefer to do this using makecert.exe) and click the Server Certificates feature.


Then, click the Create Self-Signed Certificate action link.


You get to walk through a wizard:


Then the new certificate will appear in the list:


Select it and click the Export action link:


Now that you’ve got the PFX file exported, it’d be a good time to drop that into the web site. Drop the PFX file into the App_Data folder…


Once the .PFX is in the App_Data folder, copy its location into the Web.config or into the portal’s Configure tab.


Double-click the PFX file and run through the subsequent steps needed to import the PFX into the personal certificate store. Once the wizard completes you’ll have the certificate installed, so the final step will be to export it. Open up your user certificates manager. I always find mine using the new Modern Tiles method.


Open up the file in the user certificate manager, and select the new certificate just created. Select the Export context menu.


Select the DER option. This is when you’ll output a CER file that can be used as your management certificate in the next step.


Save the output *.CER file on your desktop. With the PFX file set up in the web site and this file created, we’re almost finished.

Uploading the Management Cert to the Portal

With the CER file ready, all one needs to do is upload it to the Management Portal. So long as the web site running WAML is trying to access resources in the same subscription, everything should work. Go to the management portal, select Settings from the navigation bar, and then select the Management Certificates tab. Click the Upload button to upload the *.CER file only – NOT the PFX, yet!


Once the CER is uploaded it’ll appear in the list of management certificates.


With those configuration changes in place, I can finish the configuration by adding the password for the PFX to the Web.Config file. This part isn’t perfect, but it’s just to get you started with the difficult connecting-of-the-dots that can occur when embarking on a new feature or prototype.


Deploying the Site

The last step, following the configuration and certificates being set up, is to deploy the site. I can do that from right within Visual Studio using the publish web features. Here, I’m just creating a new site.


Once the site deploys and loads up in a browser, you can see what capabilities it’ll offer – the simple creation of other Azure Web Sites.



This article covers how to prepare a web site with the proper certificate setup more than it explains the actual functional code. I’d welcome you to take a look at the repository, submit questions in the comments below, or even fork the repository and come up with a better way or add features – whatever you think of. Have fun, and happy coding!

Managing Azure SQL Databases Using the Management Libraries for .NET

The Azure SDK team has been working hard to finish up more of the Azure Management Libraries so you can use .NET code to manage your Azure resources, and today’s blog will highlight one of the newest additions to the WAML stack. Today I’ll introduce you to the Azure SQL Database Management Library. We released the SQL Management Library this week, and it’s loaded with features for managing your Azure SQL Databases. Like the rest of the management libraries, the SQL library provides most of the functionality you’d previously only been able to get at through the Azure portal, so you’ll be able to write .NET code to do pretty much any level of automation against your SQL databases and servers.

Here’s a list of some of the features supported by the new SQL library:

Let’s dive in to some of these features by exploring their usage in the sample code I’ve written to demonstrate all these cool new WAML features. I’ve put this code up in GitHub so you can fork it, add your own functionality and test cases, and [please, feel free] spruce up the UX if you’re so inclined. The code for this demonstration includes the demonstrations from my previous blog posts on WAML, so if you’ve been wanting to see that code, you can dive in as of this latest installment in the blog series.

The WPF app shown below sums up today’s code demonstration of the SQL management library. Once I select my publish settings file and the subscription in which I want to work, WAML makes a REST call out to Azure to retrieve the list of database servers in my subscription. I data-bind a list view in the WPF app with the list of servers. Next to each server you’ll find buttons providing convenient access to the databases residing on that server, as well as a button giving you access to the server’s firewall rules.


Creating the SQL Management Client

As with the other demonstrations, the first step is to create an instance of the SqlManagementClient class using a subscription ID and an X509 Certificate, both of which are available in my publish settings file. The code below is similar to the manner in which the other management clients (Compute, Infrastructure, Storage, and so on) are created.


SQL Server Operations

The code below shows how, in two lines of code, I can get back the list of servers from my subscription. Once I’ve got the list, I set a property that’s data-bound in the WPF app equal to the list of servers the REST API returned for my subscription.
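In rough code – create the client from the subscription's credentials, then two lines get the servers back. The data-bound property name on the view model is an assumption:

```csharp
using Microsoft.WindowsAzure.Management.Sql;

// credentials is a CertificateCloudCredentials built from the subscription ID
// and the management certificate found in the publish settings file.
var sqlClient = new SqlManagementClient(credentials);

// Ask the REST API for the servers, then data-bind the result.
var response = sqlClient.Servers.List();
this.SqlServers = response.Servers; // bound by the WPF list view
```

The same client instance is reused below for database and firewall-rule operations.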


Database Operations

Once a user clicks the “Databases” button in one of the server lines in the WPF app, a subsequent call is made out to the Azure REST API to pull back the list of databases running on the selected server.


You can do more than just list databases using the SQL management library – you have full control over the creation and deletion of databases, too, using easily-discoverable syntax. Below, you’ll see how easy it is, using nothing more than the IntelliSense features of Visual Studio, to work out how to create a new database.
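Discoverable via IntelliSense, creating a database might look roughly like this; the parameter values shown are illustrative, and the property shapes are from the preview-era package, so treat them as approximate:

```csharp
using Microsoft.WindowsAzure.Management.Sql.Models;

sqlClient.Databases.Create(serverName, new DatabaseCreateParameters
{
    Name = "MyNewDatabase",
    Edition = "Web",                 // "Web" and "Business" were the editions of the day
    MaximumDatabaseSizeInGB = 1,
    CollationName = "SQL_Latin1_General_CP1_CI_AS"
});
```

Deleting a database is symmetrical – a single Databases.Delete call with the server and database names.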


Firewall Rules

You no longer need to load up a server in the portal to manage your firewall rules – you can do that using the SQL management library, too. The code below demonstrates how easy it is to retrieve the list of firewall rules for a particular SQL Server. When the user hits the “Firewall Rules” button in the WPF app, this code runs and loads up the rules in the UX.
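Listing the rules is a single call, and creating one takes a small parameters object with the rule's name and IP range; the names and addresses below are illustrative:

```csharp
using Microsoft.WindowsAzure.Management.Sql.Models;

// Pull back the rules for the selected server.
var rules = sqlClient.FirewallRules.List(serverName);

// Adding a rule is just as terse.
sqlClient.FirewallRules.Create(serverName, new FirewallRuleCreateParameters
{
    Name = "AllowMyOffice",
    StartIPAddress = "203.0.113.1",
    EndIPAddress = "203.0.113.254"
});
```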


In addition to the SQL library, we’ve also released the preview libraries for Media Services and Service Bus. We’re continuing to improve the landscape of WAML every day, and have some convenience features we’re adding into the overall WAML functionality set, too. Keep watching this blog for more announcements and tutorials on the Azure Management Libraries. As always, if you have any suggestions or ideas you’d like to see bubble up in the management libraries, feel free to post a comment below.

Happy Coding!

Using Publish Settings Files to Authenticate the Management Libraries

I am not very good at cryptography, even the most fundamental aspects of it. I’m humble enough to be willing to admit this (or smart enough to put it out there so folks don’t ask me questions about it, take your pick). That said, the idea of working with X509 Certificates in my code is something I obviously don’t relish doing. It’s a necessary thing, however, when working with various authentication paradigms, like the traditional method of attaching certificates to individual requests I’d like to make to the Azure API. Since the first implementation of the credential-passing logic in the Management Libraries uses X509 certificates, I needed to come up with an easy way of testing my code with management certificates. Luckily, the *.publishsettings files one can download from Azure via techniques like the PowerShell cmdlets’ Get-AzurePublishSettingsFile command provide a Base-64 encoded representation of an X509 Certificate. This post will walk through how I use the information in a publish settings file to authenticate my requests with the management libraries.

You won’t want to put your publish settings files onto a web server, store the information in these files in your code, or anything like that. This is just for demonstration purposes, and makes the assumption you’re working on your own machine, controlled by you, while you’re logged into it in a nice and secure fashion. If some nefarious villain gets ahold of your publish settings file they could theoretically authenticate as you. Point is, be careful with these files and with the strings embedded into them. Keep them somewhere safe and secure when you’re not using them.

Once you’ve downloaded a publish settings file to your computer, you can write code that will open the files up and parse them. Don’t worry, this is pretty easy, as the files are just normal XML files. So, you can use XLinq or any other XML-parsing techniques to read the values out of the file. In my example code for this and the other posts I’m writing on the topic, you’ll notice that I’ve provided a menu you can use to open up a publish settings file. A screen shot of this application running is below:

Selecting the publish settings file

When the user clicks this menu, I simply throw open a new Open File Dialog object and allow the user to select a *.publishsettings file from their machine. The screen shot below demonstrates this.

Opening the file

When the user selects the file the code does just what you’d expect – it opens the file up, parses out the XML, and builds an array of subscription models the rest of my code will use when it needs to make calls out to the API via the management libraries. The section of the code below that’s highlighted is doing something pretty simple – just building a list of a model type known as PublishSettingsSubscriptionItem, to which I’ll data-bind a list of child menu items in my form.
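For reference, a hedged sketch of the parsing itself. This assumes the original publish settings schema, where the ManagementCertificate attribute lives on the PublishProfile element (later schema versions moved it onto each Subscription element), and the property names on PublishSettingsSubscriptionItem are assumptions:

```csharp
using System.Linq;
using System.Xml.Linq;

var doc = XDocument.Load(publishSettingsFilePath);
var profile = doc.Descendants("PublishProfile").First();
var certificate = profile.Attribute("ManagementCertificate").Value;

// One model item per subscription found in the file.
var subscriptions = profile
    .Elements("Subscription")
    .Select(s => new PublishSettingsSubscriptionItem
    {
        SubscriptionName = s.Attribute("Name").Value,
        SubscriptionId = s.Attribute("Id").Value,
        ManagementCertificate = certificate
    })
    .ToList();
```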

Loading up the subscriptions

The XAML code that does this data-binding magic (which will soon be available for your viewing pleasure) is shown below.

Data-binding the list of subscriptions

Once this code is in place and I select a publish settings file, the menu will be augmented with the list of subscriptions found in that publish settings file.

The data-bound list

Since I’ve got my XAML wired up in a nice command-like manner, selecting any of those items will fire the SelectSubscriptionCommand method in my form’s view model class. That method in turn sets the current selected subscription so that it can be used later on by management library client classes that need the information. The code below, which is the handler method for that command, does this magic.

Setting the selected subscription

Now, whenever I need to run any management library code that reaches out to the Azure REST API, I can use the properties of my selected subscription to set up the two elements of data each client needs – the subscription ID and the Base-64 encoded management certificate. The code below does exactly this to make a call out to the REST API to get the list of supported regions in the Azure fabric.
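Sketched out, hydrating a credential from those two properties and making a call might look like this; the property names on the selected-subscription model are assumptions:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Management;

// Rebuild the X509 certificate from its base-64 string at run-time.
var credentials = new CertificateCloudCredentials(
    selectedSubscription.SubscriptionId,
    new X509Certificate2(
        Convert.FromBase64String(selectedSubscription.ManagementCertificate)));

using (var managementClient = new ManagementClient(credentials))
{
    // List the regions supported by the Azure fabric.
    foreach (var location in managementClient.Locations.List())
    {
        Console.WriteLine(location.Name);
    }
}
```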

Using the selected subscription

Thanks to my good buddy and evil coding genius Mike Lorbetske for MVVM-ifying this code so the demos can be easily added to support the upcoming blog posts in this series. I was mixing MVVM techniques and nasty, old-fashioned codebehind brutality together, and Mike came in just in time and helped me make this code a little more robust using MEF and some other tricks.

Tracing with the Management Libraries

When we built out the Azure Management Libraries, one of the things we knew we needed to do was to make it easy for developers to trace API conversations however they want or need, to whatever medium they’d like to save the trace logs. The deeper the trace information available during conversations with the Azure REST API, the easier it is to figure out what’s going on. To make tracing easier and more flexible, we added an interface to the SDK common package called ICloudTracingInterceptor. You can implement this interface however you want to capture the trace messages from the API conversations. This post will walk through a simple example of using the tracing interception functionality in the management libraries in a WPF application.

Building a Custom Interceptor

First, a quick inspection of the tracing interceptor interface is in order. The object browser window below shows the interface and all of the methods it offers.

Object Browser

An obvious implementation of the interceptor logic would be to dump trace messages to a Console window or to a debug log. Given that I’m building a desktop application, I’d like to see that trace output dropped into a multi-line textbox. Rather than pass the constructor of my implementation a reference to a WPF TextBox control, I’ll take a reference to an Action, as I may need to be proactive in handling threading issues associated with the API calls running in an asynchronous manner. Better safe than sorry, right? Below is the beginning of the code for my custom tracing interceptor.
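As a hedged sketch, an Action-based implementation might look like the following; the member list here follows what the object browser shows for the era's common package, but verify the exact signatures against your installed version:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using Microsoft.WindowsAzure.Common;

public class ActionTracingInterceptor : ICloudTracingInterceptor
{
    private readonly Action<string> _write;

    // The caller supplies the delegate, so it can marshal to the UI thread.
    public ActionTracingInterceptor(Action<string> write)
    {
        _write = write;
    }

    public void Information(string message)
    {
        _write(message);
    }

    public void Configuration(string source, string name, string value)
    {
        _write(string.Format("{0}: {1}={2}", source, name, value));
    }

    public void Enter(string invocationId, object instance, string method,
        IDictionary<string, object> parameters)
    {
        _write(string.Format("[{0}] Entering {1}", invocationId, method));
    }

    public void SendRequest(string invocationId, HttpRequestMessage request)
    {
        _write(string.Format("[{0}] {1}", invocationId, request));
    }

    public void ReceiveResponse(string invocationId, HttpResponseMessage response)
    {
        _write(string.Format("[{0}] {1}", invocationId, response));
    }

    public void Error(string invocationId, Exception ex)
    {
        _write(string.Format("[{0}] Error: {1}", invocationId, ex.Message));
    }

    public void Exit(string invocationId, object returnValue)
    {
        _write(string.Format("[{0}] Exiting", invocationId));
    }
}
```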

Custom tracing interceptor implementation

With the custom implementation built, I’m ready to use it in an application.

Hooking the Interceptor to the Cloud Context

Also provided within our common package is another helpful object that abstracts the very idea of the cloud’s context and the conversations occurring with the REST API. This abstraction, appropriately named CloudContext, is visible in the object browser below.

Object Browser

The Cloud Context abstraction makes it simple to dial in a custom tracing interceptor. In my WPF application code, shown below, I simply use the Configuration.Tracing.AddTracingInterceptor method to add a new instance of my interceptor to the context.
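Wiring it in is one line against the CloudContext singleton. Here I'm using a hypothetical ActionTracingInterceptor (any ICloudTracingInterceptor implementation works), appending each message to a multi-line textbox via the WPF dispatcher:

```csharp
CloudContext.Configuration.Tracing.AddTracingInterceptor(
    new ActionTracingInterceptor(message =>
        Dispatcher.Invoke(() =>
            TraceTextBox.AppendText(message + Environment.NewLine))));
```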

Hooking in the tracing interceptor

Now, each call I make to the Azure API via the management libraries will be dumped to the multi-line text box in my WPF form. The screen shot below is the WPF application running. In this example, I’ve made a call to the management API’s list locations method, which returns a listing of all of the regions supported by the Azure fabric.

Example code running

Since the tracing interceptor is bound to the cloud context, any calls made through the management libraries will be traced and dropped into the text box (or whatever you specify in your own implementation). We think this adds a nice, one-stop method of tracing your API conversations. Keep watching this space for more blog posts about the management libraries. My next post, which will be released this week, will cover the code in this WPF application responsible for loading up a list of subscriptions from a publish settings file, and how the information in the publish settings file can be used to hydrate an X509 Certificate at run-time (note the “Select Publish Settings File” UX elements in this example).

Getting Started with the Azure Management Libraries for .NET

The first thing I had the opportunity to work on when I joined the Azure team was something that I’m excited to show off today. I demonstrated the early bits of the Azure Management Libraries at the TechEd Australia Developer kick-off session, and now that they’re out I’m really excited to walk you through getting started with their use. This post will sum up what the Azure Management Libraries are and why you should care to take a peek at them, and then I’ll dive into some code to show you how to get started.

What are these libraries you speak of?

With this release, a broad surface area of the Azure cloud infrastructure can be accessed and automated using the same technology that was previously available only from the Azure PowerShell Cmdlets or directly from the REST API. Today’s initial preview includes support for hosted Cloud Services, Virtual Machines, Virtual Networks, Web Sites, Storage Accounts, as well as infrastructure components such as affinity groups.

We’ve spent a lot of time designing natural .NET Framework APIs that map cleanly to their underlying REST endpoints. It was very important to expose these services using a modern .NET approach that developers will find familiar and easy to use:

These packages open up a rich surface area of Azure services, giving you the power to automate, deploy, and test cloud infrastructure with ease. These services support Azure Virtual Machines, Hosted Services, Storage, Virtual Networks, Web Sites and core data center infrastructure management.

Getting Started

As with any good SDK, it helps to know how you could get started using it by taking a look at some code. No code should ever be written to solve a problem that doesn’t exist, so let’s start with a decent, but simple, problem statement:

I have this process I run in Azure as a Worker Role. It runs great, but the process it deals with really only needs to run a few times a week. It’d be great if I could set up a new service, deploy my code to it, then have it run. Once the process finishes, it’d be even better if the service could “phone home” so it could be deleted automatically. I sometimes forget to turn it off when I’m not using it, and that can be expensive. It’d be great if I could automate the creation of what I need, then let it run, then have it self-destruct.

Until this preview release of the Azure Management Libraries (WAML for short hereafter – though this is not an official acronym, I’m just being lazy), this wasn’t very easy. There’ve been some great open-source contributions to the .NET story for managing Azure services and automating them, but nothing comprehensive that delivers C# wrappers for nearly all of the Azure Management REST APIs. If you needed to use these APIs to generate your own “stuff” in Azure, you pretty much had to write your own HTTP/XML code to communicate with the REST API. Not fun. Repetitive. Boring, maybe, after you do a few dozen out of hundreds of API methods.

Getting the Management Libraries

I decided to do this work in a simple WPF application I’ll run on my desktop for the time being. I’ll want to run it as a long-running app or service later, but for now this will work just fine. Since I’ve got an Azure Cloud Service with a Worker Role I’ll want to run in the cloud, I’ve just added all three projects to a single solution, which you’ll see below.

You probably noticed that I’m preparing to add some NuGet packages to the WPF application. That’s because all of the Azure Management Libraries are available as individual NuGet packages. I’m going to select the Microsoft.WindowsAzure.Management.Libraries package, as that one will pull everything in the Management Libraries into my project. If I wanted to manage one aspect of Azure rather than all of it, I’d reference one of the more specific packages, like Microsoft.WindowsAzure.Management.WebSites, which provides management functionality specific only to the Azure Web Sites component.

Once I’ve referenced the NuGet packages, I’m ready to set up client authentication between my WPF application and the Azure REST APIs.


The first implementation we’ve built out for authenticating users who are using WAML and Azure is a familiar one – using X509 Certificates. Integrated sign-in was added recently in SDK 2.2 to Visual Studio and to PowerShell, and we’re working on a solution for this in WAML, too. With this first preview release we’re shipping certificate authentication, but stay tuned, we’re doing our best to add in additional functionality.

Don’t panic. We’ve made this so easy even I can do it.

I’m not going to go deep into a discussion of using certificate-based authentication in this post. In fact, I’m going to be as brute-force as possible just to move into the functional areas of this tutorial. I’ll need two pieces of information to be able to log into the Azure API:

I obtained these values from one of my publish settings files. The XML for this file is below.

With the key and the subscription ID in my code later on, I can call the GetCredentials method below, which returns an instance of the abstract class, SubscriptionCloudCredentials, that we’re using to represent a credential instance in the Management Library code. That way, if I add single sign-on later it’ll be easy for me to replace the certificate authentication with something else. The code for the CertificateAuthenticationHelper class from my sample code is below:
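A sketch of what that helper might contain; the class name matches the sample's, while the parameter names and internals are assumptions for illustration:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.WindowsAzure;

public static class CertificateAuthenticationHelper
{
    public static SubscriptionCloudCredentials GetCredentials(
        string subscriptionId, string base64EncodedCertificate)
    {
        // Hydrate the management certificate from its base-64 string form,
        // then wrap it and the subscription ID in a credential instance.
        return new CertificateCloudCredentials(
            subscriptionId,
            new X509Certificate2(
                Convert.FromBase64String(base64EncodedCertificate)));
    }
}
```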

Now I’ll write a controller class that’ll do the work between my WPF application and the Management Libraries – a convenience layer, in a sense.

Management Convenience Layer

To map out all the various parameters I’ll have in my workflow I’ve created the ManagementControllerParameters class shown below. This class will summarize all of the pieces of data I’ll need to create my services and deploy my code.

Then, I’ll create a class that will provide convenience functionality between the UX code and the Management Library layer. This code will make for cleaner code in the UX layer later on. Note the constructor of the code below. In it, two clients are being created. One, the StorageManagementClient, will provide the ability for me to manage storage accounts. The other, the ComputeManagementClient, provides the ability for me to work with most of the Azure compute landscape – hosted services, locations, virtual machines, and so on.

For the purposes of explaining these steps individually, I've created a partial class named ManagementController that's spread across multiple files. This just breaks up the code into functional units to make it easier to explain in this post, and to provide for you as a public Gist so that you can clone all the files and use them in your own code.

Now, let’s wire up some management clients and do some work.

Create a New Storage Account using the Storage Management Client

The first thing I’ll need in my deployment strategy is a storage account. I’ll be uploading the .cspkg file I packaged up from a Cloud project in Visual Studio into an Azure blob. Before I can do that, I’ll need to create an account into which that package file can be uploaded. The code below will create a new storage account in a specified region.

Once the storage account has finished creating, I'm ready to use it. Given that I'll need a connection string to connect my application (and my soon-to-be-created cloud service) to the storage account, I'll create a method that will reach out to the Azure REST APIs to get the storage account's connection keys. Then, I'll build the connection string and hand it back to the calling code.
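In sketch form, GetKeys hands back the primary and secondary keys, from which the familiar connection string can be assembled; storageClient is the StorageManagementClient created in the controller's constructor:

```csharp
var keys = storageClient.StorageAccounts.GetKeys(storageAccountName);

var connectionString = string.Format(
    "DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}",
    storageAccountName,
    keys.PrimaryKey);
```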

Now that the storage account has been created I'll create my cloud service and publish my package up to Azure.

Create and Deploy a new Cloud Service using the Compute Management Client

The call to create a cloud service is surprisingly simple. All I need to do is to provide the name of the cloud service I intend on creating and the region in which I'd like it to be created.

Finally, all I need to do to deploy the cloud service is to upload the cloud service package file I created in Visual Studio to a blob, then call the REST API. That call will consist of the blob URI of the package I uploaded to my storage account, and the XML data from the cloud project's configuration file. This code will make use of the Azure Storage SDK, which is also available as a NuGet package.
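Sketched out, the upload-then-deploy step might look like this; the container name and deployment label are illustrative, and the parameter shapes are from the preview-era API, so treat them as approximate:

```csharp
using System.IO;
using Microsoft.WindowsAzure.Management.Compute.Models;
using Microsoft.WindowsAzure.Storage;

// Upload the .cspkg to blob storage using the Azure Storage SDK.
var storageAccount = CloudStorageAccount.Parse(connectionString);
var container = storageAccount.CreateCloudBlobClient()
    .GetContainerReference("deployments");
container.CreateIfNotExists();

var blob = container.GetBlockBlobReference("MyService.cspkg");
using (var packageStream = File.OpenRead(packagePath))
{
    blob.UploadFromStream(packageStream);
}

// Point the Compute Management Client at the uploaded package and the
// cloud project's .cscfg contents to create the deployment.
computeClient.Deployments.Create(
    serviceName,
    DeploymentSlot.Production,
    new DeploymentCreateParameters
    {
        Name = "AutomatedDeployment",
        Label = "AutomatedDeployment",
        PackageUri = blob.Uri,
        Configuration = File.ReadAllText(configurationPath),
        StartDeployment = true
    });
```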

Now that all the code's written to create my Azure application, I'll write some code to destroy it once it wraps up all of the work it was designed to do.

Deleting Assets from Azure

Deleting assets using the Azure Management Libraries is as easy as creating the assets. The code below cleans up the storage account I created. Then, it deletes the cloud service deployment and the cloud service altogether.

With all the convenience code written at this point, the user experience code should be relatively painless to write next.

The User Experience

The UX for this application is relatively simplistic. I'm just providing a pair of buttons on a WPF form. One will create the assets I need in Azure and perform the deployment. The other will delete the assets from Azure. XAML code for this UX is below. It isn't much to look at but the idea here is to keep this simple.

The codebehind for the UX is also just as easy. In the Create button-click event, I create a new ManagementController instance, providing it all of the parameters I'll need to create the application's components in the Azure fabric. Then I call all of the methods to create everything.

I also handle the Delete button-click by cleaning up everything I just created.

I could modify this code to use the Azure Storage SDK to watch a storage queue on the client side. When the cloud service is finished doing its job, it could send a message into that queue in the cloud. The message would then be caught by the client, which would in turn call the Cleanup method and delete the entire application.

Endless Automation Possibilities

The Azure Management Libraries provide a great automation layer between your code and Azure. You can use these libraries, which are in their preview release as of this week, to automate your entire Azure creation and destruction processes. In this first preview release, we're providing these management libraries for our compute and storage stacks, as well as for Azure Web Sites. In time, we'll be adding more functionality to the libraries. The goal is to give you automation capabilities for everything in Azure.

We're also excited about your feedback and look forward to suggestions during this preview phase. Please try out the Management Libraries, use them in your own experimentation, and let us know what you're using them to facilitate. If you have ideas or questions about the design, we're open to that too. The code for the libraries, like many other things in the Azure stack, is open source. We encourage you to take a look at the code in our GitHub repository.

This Team is Astounding. I am Not Worthy.

Jeff Wilcox’s team of amazing developers have put in a lot of time on the Management Libraries, and today we’re excited to share them with you via NuGet. Jeff’s build script and NuGet wizardry have been a lot of fun to watch. The pride this team takes in what it does and the awesomeness of what they’ve produced is evident in how easy the Management Libraries are to use. We think you’ll agree, and welcome your feedback and stories of how you’re finding ways to use them.

New Relic and Azure Web Sites

This past week I was able to attend the //build/ conference in San Francisco, and whilst at the conference some teammates, colleagues, and I were invited to hang out with the awesome dudes from New Relic. To correspond with the Web Sites GA announcement this week, New Relic announced their support for Azure Web Sites. I wanted to share my experiences getting New Relic set up with my Orchard CMS blog, as it was surprisingly simple. I had it up and running in under 5 minutes, and promptly tweeted my gratification.

Hanselman visited New Relic a few months ago and blogged about how he instrumented his sites using New Relic in order to save money on compute resources. Now that I’m using their product and really diving in I can’t believe the wealth of information available to me, on an existing site, in seconds.

FTP, Config, Done.

Basically, it’s all FTP and configuration. Seriously. I uploaded a directory, added some configuration settings using the Azure portal, PowerShell Cmdlets, or Node.js CLI tools, and partied. There’s extensive documentation on setting up New Relic with Web Sites on their site, starting with a Quick Install process.

In the spirit of disclosure: when I set up my first MVC site with New Relic I didn’t follow the instructions, and it didn’t work quite right. One of New Relic’s resident ninjas, Nick Floyd, had given Vstrator’s Rob Zelt and me a demo the night before during the Hackathon. So I emailed Nick and was all dude meet me at your booth and he was all dude totally so we like got totally together and he hooked me up with the ka-knowledge and stuff. I’ll ‘splain un momento. The point in my mentioning this? RT#M when you set this up and life will be a lot more pleasant.

I don’t need to go through the whole NuGet-pulling process, since I’ve already got an active site running, specifically using Orchard CMS . Plus, I’d already created a Visual Studio Web Project to follow Nick’s instructions so I had the content items that the New Relic Web Sites NuGet package imported when I installed it.


So, I just FTPed those files up to my blog’s root directory. The screen shot below shows how I’ve got a newrelic folder at the root of my site, with all of New Relic’s dependencies and configuration files.

They’ve made it so easy, I didn’t even have to change any of the configuration before I uploaded it and the stuff just worked.


Earlier, I mentioned having had one small issue as a result of not reading the documentation. In spite of the fact that their docs say, pretty explicitly, to either use the portal or the Powershell/Node.js CLI tools, I’d just added the settings to my Web.config file, as depicted in the screen shot below.


Since the ninjas at New Relic support non-.NET platforms too, they expect those application settings to be set at a deeper level than the *.config file. New Relic needs these settings to be at the environment level. Luckily, the soothsayer PMs on the Azure team predicted this sort of thing would happen, so when you use some other means of configuring your Web Site, Azure persists those settings at that deeper level. So don’t do what I did, okay? Do the right thing.

Just to make sure you see the right way, take a look at the screen shot below, which I lifted from the New Relic documentation tonight. It’s the PowerShell code you’d need to run to automate the configuration of these settings.
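The gist of that PowerShell is something along these lines. Treat this as a sketch only: the site name, paths, and setting names below are placeholders from memory, so confirm the exact keys and values against New Relic’s Quick Install documentation before using them.

```powershell
# Sketch: configure the app settings New Relic's .NET agent expects at the
# environment level, using the Azure PowerShell cmdlets. Setting names and
# paths are placeholders - verify against New Relic's docs.
Set-AzureWebsite -Name "mysite" -AppSettings @{
    "COR_ENABLE_PROFILING" = "1";
    "COR_PROFILER"         = "{YOUR-PROFILER-GUID-HERE}";
    "COR_PROFILER_PATH"    = "d:\Home\site\wwwroot\newrelic\NewRelic.Profiler.dll";
    "NEWRELIC_HOME"        = "d:\Home\site\wwwroot\newrelic"
}
```

Because these go through the cmdlets (or the portal) rather than Web.config, Azure persists them at the environment level, which is exactly what New Relic needs.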


Likewise, you could configure New Relic using the Azure portal.


Bottom line is this:

  • If you just use the Web.config, it won’t work
  • Once you light it up in the portal, it works like a champ

Deep Diving into Diagnostics

Once I spent 2 minutes and got the monitoring activated on my site, it worked just fine. I was able to look right into what Orchard’s doing, all the way back to the database level. Below, you’ll see what the most basic monitoring page looks like when I log into New Relic. I can see a great snapshot of everything right away.


Where I’m spending some time right now is on the Database tab in the New Relic console. I’m walking through the SQL that’s getting executed by Orchard against my SQL database, learning all sorts of interesting stuff about what’s fast, not-as-fast, and so on.


I can’t tell you how impressed I was by the New Relic product when I first saw it, and how stoked I am that it’s officially unveiled on Azure Web Sites. Now you can get deep visibility and metrics information about your web sites, just like what was available for Cloud Services prior to this week’s release.

I’ll have a few more of these blog posts coming out soon, maybe even a Channel 9 screencast to show part of the process of setting up New Relic. Feel free to sound off if there’s anything on which you’d like to see me focus. In the meantime, happy monitoring!

Running SSL with Azure Web Sites Today

If you’re a web developer working with ASP.NET, Node.js, PHP, Python, or you have plans on building your site in C++, Azure Web Sites is the best thing since sliced bread. With support for virtually every method of deployment and for most of the major web development models, you can’t beat it. Until recently, SSL was the only question mark for a lot of web site owners, as WAWS doesn’t yet support SSL out of the box (trust me, it’s coming, I promise). The good news is that there’s a method of achieving SSL-secured sites now. In this blog post I’ll introduce the idea of a workaround my engineering friends on the Web Sites team call the SSL Forwarder, and demonstrate how you can get up and running with an SSL-protected Azure-hosted web site in just a few minutes’ work.


First, one very important point about the SSL Forwarder solution. This solution works, and we have a handful of community members actively using it to provide an SSL front-end for their web sites. So feel comfortable using it, but understand that this isn’t something you’ll have to do forever, as SSL is indeed coming as an in-the-box feature for Web Sites. If you love the idea of Azure Web Sites but the lack of in-the-box SSL support is a deal-breaker for you and your organization, this is a viable option to get you up and running now. However, the SSL Forwarder isn’t an officially supported solution, in spite of being actively used by numerous customers. So if you set this up and you experience anything weird, feel free to contact me directly via the comment form below, on Twitter, or by email (and I’ll give you my email address on Twitter if you need it). All that being said, I’ve heard from quite a few in the community who are using this solution that it has mitigated their concerns, and they appear to be running quite well with it in place.

Architectural Overview

Don’t panic when you see this solution. Do read the introduction; once you grok how it all works, the SSL Forwarding solution is a whole lot less intimidating. I admit to having freaked out with fear when I first saw it. I’m no expert at most of the things involved in this exercise, but the Web Sites team literally put together a “starter project” for me to use, and it took me 1 hour to get it working. If I can do this, you can do this.

The idea of the SSL Forwarder is pretty simple. You set up a Cloud Service using the Azure portal that redirects traffic to your Azure Web Site. You can use all the niceties of Web Sites (like Git deployment, Dropbox integration, and publishing directly to your site using Visual Studio or WebMatrix) to actually build your web site, but the requests actually resolve to your Cloud Service endpoint, which then proxies HTTP traffic into your Web Site.

The diagram to the right shows how this solution works, at a high level. The paragraph below explains it in pretty simple terms. I think you’ll agree that it isn’t that complicated and that the magic that occurs works because of tried-and-true IIS URL Rewrite functionality. In order to obtain the 99.9% uptime as outlined in the Azure SLA, you’ll need to deploy at least 2 instances of the Cloud Service, so the diagram shows 2 instances running. As well, the code provided with this blog post as a starting point is defaulted to start 2 instances. You can back this off or increase it however you want, but the 99.9% uptime is only guaranteed if you deploy the Cloud Service in 2 instances or more (and there’s no SLA in place yet for Web Sites, since it’s still in preview at the time of this blog post’s release, so you can host your Web Site on as many or as few instances as you like).

You map your domain name to your Cloud Service. Traffic resolves to the Cloud Service, and is then reverse-proxied back to your Web Site. The Cloud Service has 1 Web Role in it, and the Web Role consists of a single file, the Web.config file. The Web.config in the Web Role contains some hefty IIS URL Rewrite rules that direct traffic to the Web Site in which your content is hosted. In this way, all traffic – be it HTTP or HTTPS – comes through the Cloud Service and resolves onto the Web Site you want to serve. Since Cloud Services support the use of custom SSL certificates, you can place a certificate into the Cloud Service and serve up content via an HTTPS connection.


To go along with this blog post, there’s a repository containing a Visual Studio 2012 solution you can use to get started. This solution contains three projects:

Create the Cloud Service and Web Site

First, I’ll need to create a Web Site to host the site’s code. Below is a screen shot of me creating a simple web site using the Azure portal.


Obviously, I’ll need to create an Azure Cloud Service, too. In this demo, I’ll be using a new Cloud Service called SSLForwarder, since I’m not too good at coming up with funky names for things that don’t end in a capital R (and when I do, Phil teases me, so I’ll spare him the ammunition). Below is another screen shot of the Azure portal, with the new Cloud Service being created.


If you’re following along at home, leave your browser open when you perform the next step – if you even need to perform it, as it’s an optional one.

Create a Self-signed Certificate

This next step is optional, and only required if you don’t already have an SSL certificate in mind that you’d like to use. I’ll use the IIS Manager to create my own self-signed certificate. In the IIS Manager I’ll click the Server Certificates applet, as shown below.

When I browse this site secured with this certificate, there’ll be an error message in the browser informing me that this cert isn’t supposed to be used by the domain name from where it’s being served. Since you’ll be using a real SSL certificate, you shouldn’t have to worry about that error when you go through this process (and I trust you’ll forgive a later screen shot where the error is visible).


Once that applet loads up in the manager, I’ll click the link in the actions pane labeled Create Self-Signed Certificate .


I’ll name my certificate SSLForwarderTesting, and then it appears in the list of certificates I have installed on my local development machine. I select that certificate from the list and click the link in the Actions pane labeled Export to save the cert somewhere as a file.


Then I find the location where I’ll save the file and provide it with a password (which I’ll need to remember for the next step).


Now that this [optional] step is complete I have a *.PFX file I can use to install my certificate in the Cloud Service.

Install the SSL Certificate into a Cloud Service

To activate SSL on the Cloud Service I’ll need to install an SSL certificate into the service using the Azure portal. Don’t panic, this is easier than it sounds. Promise. Five minutes, tops.

Back in my browser, on the Azure portal page, I’ll click the Cloud Service that’ll be answering HTTP/S requests for my site. The service’s dashboard page will open up.


I’ll click the Certificates tab in the navigation bar.


I’m going to want to upload my certificate, so this next step should be self-explanatory.


The next screen gives me a pretty hard-to-screw-up dialog. Unless I forgot that password.

( cue the sound of hands ruffling through hundreds of post-its)


Once the certificate is uploaded, I’ll click the new cert and copy the thumbprint to my clipboard, maybe paste it into Notepad just for the moment…


Configuring the Cloud Service’s SSL Certificate

With the SSL cert installed and the thumbprint copied, I’ll open up the file in Visual Studio 2012 and set the thumbprint’s configuration. I could also do this using the built-in Azure tools in Visual Studio, but since I’ve got the thumbprint copied this is just as easy to do directly editing the files. Plus, the Web Sites team made it pretty obvious where to put the thumbprint, as you’ll see from the screen shot below.
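If you’re curious what that looks like under the hood, the thumbprint lands in the Certificates section of the Cloud Service’s configuration. Here’s a rough sketch – the role name, certificate name, and thumbprint below are placeholders, not the starter project’s exact markup:

```xml
<!-- Sketch only: role and certificate names here are placeholders -->
<Role name="SSLForwarderWebRole">
  <Instances count="2" />
  <Certificates>
    <Certificate name="SSLForwarderTesting"
                 thumbprint="PASTE-YOUR-THUMBPRINT-HERE"
                 thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>
```

The certificate name should match the cert you uploaded, and the thumbprint is the value you copied from the portal a moment ago.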


Configuring the URL Rewrite Rules in the Web Role

Remember the architectural overview from earlier: the main thing this Cloud Service does is answer HTTP/S requests and then reverse-proxy that traffic back to the Web Site I’m happily hosting in Azure. Setting up this proxy configuration isn’t too bad, especially since I’ve got the code the team handed me. I just look for all the places in the Web.config file from the Web Role project that mention foo.com or foo or foo… well, you get the idea.

Here’s the Web.config file from the Web Role project, open in Visual Studio 2012, before I edit it to match the Web Site and Cloud Service I created to demonstrate this solution. I’ve marked all the spots you’ll need to change in the screen shot.


Here’s the file after being edited. Again, I’ll indicate the places where I made changes.
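The heart of those edits is an IIS URL Rewrite rule that reverse-proxies incoming requests to the Web Site. As a rough sketch (yoursite.azurewebsites.net is a placeholder for your real Web Site, and the starter project’s actual rules are heftier than this):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward every incoming request to the Web Site hosting the content -->
      <rule name="ForwardToWebSite" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://yoursite.azurewebsites.net/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The `{R:1}` back-reference carries the original request path through to the Web Site, so the proxying is transparent to visitors.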


Now that the Cloud Service and the Web.config file of the Web Role project’s been configured to redirect traffic to another destination, the proxy is ready for deployment. The solution’s Cloud project is defaulted to run at 2 instances, so that’s something you’ll want to remember – you’ll be paying for 2 instances of the Cloud Service you’ll be using to forward HTTP/S traffic to your Web Site.

Publish the Cloud Service

Within Visual Studio 2012, I right-click the Cloud project and select the Publish context menu item.


A few dialogs will walk me through the process of publishing the SSLForwarder service into Azure. It may take a few minutes to complete, but once it publishes, the Cloud Service will be running in your subscription and ready to respond to HTTP/S requests.

To verify everything’s working, try hitting the Cloud Service URL to see if it’s answering requests or spewing errors about unreachable hosts – either of which wouldn’t be surprising, since we’ve redirected the Cloud Service’s Web Role to a Web Site. That web site probably isn’t set up yet, so you could see some unpredictable results.

If you actually pre-configured your SSLForwarder Cloud Service to direct traffic to a * you’re already running, you’re pretty much finished, and you’re probably running behind HTTPS without any problems right now. If not, and the idea of publishing Web Sites from Visual Studio is new to you, you’ll have a chance to use that technique here.

Publish an Azure Web Site

I’ll go back into the Azure portal and go specifically to the SSLForwarder Web Site I created earlier on in the post.


Once the site’s dashboard opens up, I’ll find the link labeled download publishing profile . This file will be used by Visual Studio during publishing to make the process very simple.


Publishing and Browsing an SSL-encrypted Web Site

Once the publish settings file has been downloaded, it’s easy to push the site to Web Sites using Visual Studio 2012 or WebMatrix. With the sample project provided I’ll open up the Web Application project I want to publish to Web Sites. Then, I’ll right-click the web site project and select the Publish menu item.


Then, the publish process will make the remainder of the publishing experience pretty simple.

Remember to thank Sayed Hashimi if you see him out and about; he loves to hear thoughts on publishing and uses suggestions to make the experience better for you. He also has a stupendous team of people working with him to execute great publishing experiences, and they love feedback.

The publish process dialogs will walk you through the simple act of publishing your site up to Azure. Once it completes (which usually takes 30-60 seconds for a larger site) the site will open up in a web browser.


Note the URL still shows HTTP, and it also shows the URL of the Azure Web Site you created. You’ll need to manually enter in the URL for the Cloud Service you created.

For me, that’s . So long as the domain name you enter resolves to the Cloud Service, you should be alright. You can also opt for the * approach as the domain name of the site you want to put out – whatever your preference for solving this particular issue.

I’m going to go ahead and change the domain name and the protocol, so that the site being hosted in Web Sites responds to an SSL-encrypted request, then load the site in the browser.

Note – this is when you’ll need to forgive me for showing you something that causes a warning in the browser. It’s that way because I used a self-signed cert in an Azure service, so we should expect to see an error here. It’s right there in the browser’s address bar, where it says “Certificate Error.” If I’d used a real SSL cert, from a real authority, the error wouldn’t be there.



For many months I’ve heard users request SSL on Web Sites. They tell me everything about Web Sites is awesome, then they stare at me, wait about 3 seconds, and usually follow it up with “but you’ve gotta get me SSL man, I’m over here and I gotta have my SSL”. I understand their desire for us to support it, and luckily, the Web Sites team and our engineering organization are willing to share their solutions publicly. This is a great solution, but it won’t work in every situation and isn’t as good as what the Web Sites team has planned for the future. The SSL Forwarder is a good stop-gap, a good temporary solution to a problem we’ve had a lot of requests about.

Hopefully this helps in your decision to give Azure Web Sites a shot. If SSL has been your sole reason for not wanting to give it a try, now you have a great workaround you can use to get started right now.

WebMatrix Templates in the App Gallery

If you’ve not yet used WebMatrix, what are you waiting for?! You’re missing out on a great IDE that helps you get a web site up and running in very little time. Whether you’re coding your site in PHP, Node.js, or ASP.NET, WebMatrix has you covered with all sorts of great features. One of the awesome features of WebMatrix is the number of templates it has baked in. Starter templates for any of the languages it supports are available, as are a number of practical templates for things like stores, personal sites, and so on. As of the latest release of Azure Web Sites, all the awesome WebMatrix templates are now also available in the Web Sites application gallery. Below, you’ll see a screen shot of the web application gallery, with the Boilerplate template selected.


Each of the WebMatrix gallery projects is now duplicated for you as a Web Site application gallery entry. So even if you’re not yet using WebMatrix, you’ll still have the ability to start your site using one of its handy templates.

Impossible, you say?

See below for a screenshot from within the WebMatrix templates dialog, and you’re sure to notice the similarities that exist between the Application Gallery entries and those previously only available from directly within WebMatrix.


As before, WebMatrix is fully supported as the easiest Azure-based web development IDE. After you follow through with the creation of a site from the Application Gallery template list from within the Azure portal, you can open the site live directly within WebMatrix. From directly within the portal’s dashboard of a new Boilerplate site, I can click the WebMatrix icon and the site will be opened up in the IDE.


Now that it’s open, if I make any changes to the site, they’ll be uploaded right back into Azure Web Sites, live for consumption by my site’s visitors. Below, you’ll see the site opened up in WebMatrix.


If you’ve not already signed up, go ahead and sign up for a free trial of Azure, and you’ll get 10 free sites (per region) to use for as long as you want. Grab yourself a free copy of WebMatrix, and you’ll have everything you need to build – and publish – your site directly into the cloud.

Solving Real-world Problems with Azure Web Sites

I’ve been asked a lot of great questions about Azure Web Sites since the feature launched in June. Things like on-premises integration, connecting to Service Bus, and having multiple environments (like staging, production, etc.) are all great questions that arise on a pretty regular cadence. With this post, I’m going to kick off a series on solving real-world problems for web site and PaaS owners that will try to address a lot of these questions and concerns. I’ve got a few blog posts in the hopper that will address some of these questions, rather than just cover how certain things are done. Those posts are great and all (and a lot of fun to write), but they don’t answer some real-world, practical questions I’ve been asked this year. Stay tuned to this area of my site, as I’ll be posting these articles over the next few weeks and probably into the new year. As I post each of these solutions I’ll update this post, so you have a one-stop shop to go to when you need to solve one of these problems.

Posts in this Series

Multiple Environments with Azure Web Sites
In this post I demonstrate how to have production and staging sites set up for your web site, so that you can test your changes in a sandbox site before pushing to your production site and potentially causing damage to it (and your reputation). If you’ve wondered how to gate your deployments using Azure Web Sites, this is a good place to start. You’ll learn how to use Azure Web Sites with a repository and some creative branching strategies to maintain multiple environments for a site.

Managing Multiple Azure Web Site Environments using Visual Studio Publishing Profiles
This post takes the same sort of scenario as presented in the first article. Rather than use as a means of executing a series of gated environment deployments, it focuses on the awesome features within Visual Studio for web publishing. Specifically, you’ll see how to use publishing profiles to deploy to multiple Azure Web Sites, so that a team of peers responsible for releasing a site can do so without ever needing to leave Visual Studio.

This post also takes a look at the idea of release management and how this solution answers the question of doing proper release management with a cloud-hosted web site. If you’ve wondered how your SDLC could fit in the idea of continuously maintaining a series of environments for gating your releases using Visual Studio’s super-simple publishing features, this is a great place to start.

Connecting Azure Web Sites to On-Premises Databases Using Azure Service Bus
This post introduces the idea of creating a hybrid cloud setup using an Azure Web Site and the Azure Service Bus, to demonstrate how a web site hosted in Azure can connect to your on-premises enterprise database. If you’ve been wondering how to save data from your Azure Web Site into your local database but didn’t know how to do it, or if you’re thinking of taking baby steps in your move toward cloud computing, this post could provide some good guidance on how to get started.

Continuous Delivery to Azure Web Sites using TeamCity
My good friend Magnus put together a very extensive and informative blog post on using Azure Web Sites with TeamCity . If you're a CI user, or a TeamCity user, you'll want to check this out, as it is a great recipe for implementing your CI builds against Azure Web Sites. Magnus worked really hard on this blog post and started working on it for his Windows AzureConf talk, and I'm proud to see how it came together for him.

Jim O'Neil's Blog Series on Integrating Azure Web Sites and Notifications
Jim's blog series - part 1, part 2, and part 3 - are great if you're looking into implementing instant messaging with your Azure Web Site. This series is great, and I feel it shows off some amazing potential and adds a whole new dimension to what's possible on the platform. Take a look at these posts, they're quite inspiring.

G. Andrew Duthie’s Series on Back-end Data Services with Windows 8 Applications
The @devhammer himself has created an awesome series on putting your data in the cloud, so your Windows 8 applications have a common place from which to pull the data. In this series, he discusses how to use Azure Web Sites to create an API that can be called by a Windows 8 application. If you’re delving into Windows 8 development and have been wondering how you could use Web Sites to create the APIs you’ll be calling, this series is a must-read.

Maarten Balliauw on Configuring IIS Methods for ASP.NET Web API with Azure Web Sites
My good friend and dedicated MVP Maarten just wrote up a great post on configuring IIS within Azure Web Sites to support additional HTTP methods like HEAD and PATCH using configuration files. If you’re trying to do some deeper Web API functionality and have struggled with getting these types of HTTP methods supported, this could solve your problem.

Branches, Team Foundation Services, and Azure Web Sites
Wouter de Kort covers how to achieve the multiple-environment approach using Team Foundation Services online. If you're using TFS and you want to set up multiple branches that deploy to a variety of web sites, this article will help you get going. This is a great introduction to using TFS for multiple branches with Azure Web Sites. 

nopCommerce and Azure Web Sites

This week we announced support for nopCommerce in the Azure Web Sites application gallery. Using the Azure portal, and without writing a single line of code, you can set up nopCommerce on Web Sites and get your online store up in minutes. You’ll have your very own products database, shopping cart, order history – the works. On the nopCommerce web site you can learn a lot more about the features nopCommerce offers. In this blog post, I’ll show you how to get your own store up and running on Azure Web Sites.

I walked through this process today. As with other entries in the Web Sites application gallery, you really do have to do very little digging to figure out where to go to get started. It’s pretty much “start, new site, from gallery,” as you’ll see from the picture below. It shows you exactly which menu item to click in the Azure portal.


The next step should be pretty self-explanatory. Select nopCommerce from the list of available applications.


The next step will ask you for some database connection information. This is what will be set in your nopCommerce installation once the process is complete. I’m going to create a new database solely for use with my nopCommerce site I’m creating for this demonstration.


The next screen is one that appears in a few other application gallery entries, too. It’s the “where do you want to store your data today?” screen. I’m creating a new SQL Server database in this screen, so I need to provide the database name, specify that a new server should be created, and provide the database username and password. Don’t bother writing this down; there’s a screen for that later in this post.


Once I click OK here, my site is created. First, the portal tells me it’s creating the site:


Once the site is up and running, Azure lets me know:


If I select my nopCommerce site and click the “Browse” button in the Azure portal, the site will open up in a new browser instance and let me specify the database connection string it’ll use.


Now, I’ll go back to the Azure portal’s dashboard for my nopCommerce demo site. In that dashboard page I’ll click the link labeled “View connection strings,” and a dialog will open. In that dialog I’ll see the connection string for my database. I can copy that from the dialog…


… and paste it into the nopCommerce setup window.


Of course, I’ve blocked out my site’s real connection string in this picture, but the idea is – it doesn’t get much easier. Once I click the “Install” button in the nopCommerce setup page, the site and database schema, as well as some sample data points, will be installed automatically and the site configured to access the database. Once the setup process is complete, I’ll be redirected to my very own store site.
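For reference, the connection string you copy from the portal generally follows the Azure SQL shape below. Every value here is a placeholder, not my site’s real string:

```
Server=tcp:yourserver.database.windows.net,1433;Database=nopcommercedb;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;
```

Whatever the portal’s dialog shows is what you paste into the nopCommerce setup window, exactly as-is.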


In the navigation bar I’ll click on the “My Account” link, login, and then, at the very tip-top of my browser I’ll see a link to get to the Administration panel of my new nopCommerce store site.


The administration portal for the nopCommerce product promises to give me just about everything I’d need to sell some stuff, know how my sales are doing, and so on. I can pretty much do whatever I need to do using their rich, extensive administration functionality.


If you’ve been thinking of setting up an online store with a shopping cart, or you’ve been asked to do so and are more interested in getting it up and running quickly than in re-inventing the wheel by writing custom code, check out nopCommerce. Get your free Azure trial – which comes with 10 free web sites – right here, then set up your own nopCommerce site and have your products selling in your store.

Connecting Azure Web Sites to On-Premises Databases Using Azure Service Bus

In this third post in the Solving Real-world Problems with Azure Web Sites blog series, I’ll demonstrate one manner in which a web site can be connected to an on-premises enterprise. A common use-case for a web site is to collect data for storage in a database in an enterprise environment. Likewise, the first thing most customers want to move into the cloud is their web site. Ironically, the idea of moving a whole enterprise architecture into the cloud can appear to be a daunting task. So, if one wants to host their site in the cloud but keep their data in their enterprise, what’s the solution? This post will address that question and show how the Azure Service Bus can be great glue between an Azure Web Site and an on-premises database – between your web site and your enterprise.

We have this application that’d be great as a cloud-based web application, but we’re not ready to move the database into the cloud. We have a few other applications that talk to the database into which our web application will need to save data, but we’re not ready to move everything yet. Is there any way we could get the site running in Azure Web Sites but have it save data back to our enterprise? Or do we have to move everything or nothing works?

I get this question quite frequently when showing off Azure Web Sites. People know the depth of what’s possible with Azure, but they don’t want to have to know everything there is to know about Azure just to have a web site online. More importantly, most of these folks have learned that Azure Web Sites makes it dirt simple to get their ASP.NET, PHP, Node.js, or Python web site hosted into Azure. Azure Web Sites provides a great starting point for most web applications, but the block on adoption comes when the first few options are laid out, similar to these:

This is a common plight and question whenever I attend conferences. So, I chose to take one of these conversations as a starting point. I invented a customer situation, but one that emulates the above problem statement and prioritizations associated with the project we have on the table.

Solving the Problem using Azure Service Bus

The first thing I needed to think about when I was brought this problem was the sample scenario. I needed to come up with something realistic, a problem I had seen customers already experiencing. Here’s a high-level diagram of the idea in place. There’s not much to it, really, just a few simple steps.


In this diagram I point out how my web site will send data over to the Service Bus. Below the Service Bus layer is a console application that subscribes to the Service Bus Topic the web application publishes to; the web application is publishing into the enterprise for persistence. That console EXE then uses Entity Framework to persist the incoming objects – Customer class instances, in fact – into a SQL Server database.
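To make the web-site side of that flow concrete, the publish step boils down to something like the sketch below, using the Service Bus SDK’s TopicClient. The connection string and topic name are placeholders, and Customer is the domain class from the sample’s core project:

```csharp
using Microsoft.ServiceBus.Messaging;

public class CustomerPublisher
{
    private readonly TopicClient _client;

    public CustomerPublisher(string connectionString)
    {
        // "customers" is an assumed topic name, not necessarily the sample's
        _client = TopicClient.CreateFromConnectionString(connectionString, "customers");
    }

    public void Publish(Customer customer)
    {
        // BrokeredMessage serializes the Customer instance for the trip
        // across the Service Bus
        using (var message = new BrokeredMessage(customer))
        {
            _client.Send(message);
        }
    }
}
```

The web site never talks to the enterprise database directly; it only ever hands Customer instances to the topic.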

The subscription’s internal process is also exploded in this diagram. The console EXE waits for any instance of a customer to be thrown in, then wakes up any process that’s supposed to handle the incoming instance of that object.

Running the console EXE subscribes the program to the Service Bus Topic. When customers land on the topic from other applications, the app wakes up and knows what to do to process those customers. In this first case, the handler component basically persists the data to a SQL Server installation in the enterprise.

Code Summary and Walk-through

This example code consists of three projects, and is all available for your perusal as a repository. The first of these projects is a simple MVC web site, the second is a console application, and the final project is a core project that gives the other two a common language via a domain object and a few helper classes. Realistically, the solution could be divided into four projects; the core project could be split in two, one holding the service bus utilities and the other the on-premises data access code. For the purposes of this demonstration, though, the common core project approach is good enough. The diagram below shows how these projects are organized and how they depend on one another.


The overall idea is simple – build a web site that collects customer information in a form hosted in an Azure Web Site, then ship that data off to the on-premises database via the Azure Service Bus for storage. The first thing I created was a domain object to represent the customers the site will need to save. This domain object serves as a contract between both sides of the application, and will be used by an Entity Framework context class structure in the on-premises environment to save data to a SQL Server database.
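The domain object itself appears in the original post as a screen shot; below is a minimal sketch of what such a contract class might look like (the property names are my assumptions, not the post’s actual code). It’s marked serializable so it can travel through the Service Bus as a message body.

```csharp
using System.Runtime.Serialization;

namespace SiteToEnterprise.Core
{
    // Hypothetical shape of the shared Customer domain object.
    [DataContract]
    public class Customer
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }

        [DataMember]
        public string EmailAddress { get; set; }
    }
}
```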

With the domain language represented by the Customer class, I’ll need some Entity Framework classes running in my enterprise to save customer instances to the SQL database. The classes that perform this functionality are below. They’re not too rich in terms of functionality or business logic; they’re in the application’s architecture simply to perform CRUD operations via Entity Framework, given a particular domain object (the Customer class, in this case).
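The EF classes also appear as screen shots in the original post; here’s a hedged sketch of the general shape – a code-first context plus a thin repository (the class names are illustrative):

```csharp
using System.Data.Entity;

namespace SiteToEnterprise.Core.Data
{
    // Code-first context; EF will use the connection string named
    // "CustomerContext" from the *.config file by convention.
    public class CustomerContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    // Thin repository that performs the save on behalf of the subscriber.
    public class CustomerRepository
    {
        public void Save(Customer customer)
        {
            using (var context = new CustomerContext())
            {
                context.Customers.Add(customer);
                context.SaveChanges();
            }
        }
    }
}
```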

This next class is sort of the most complicated spot in the application if you’ve never done much with the Azure Service Bus. The good thing is, if you don’t want to learn how to do a lot with the internals of Service Bus, this class could be reused in your own application code to provide a quick-and-dirty first step towards using Service Bus.

The ServiceBusHelper class below basically provides a utilitarian method of allowing for both the publishing and subscribing features of Service Bus. I’ll use this class on the web site side to publish Customer instances into the Service Bus, and I’ll also use it in my enterprise application code to subscribe to and read messages from the Service Bus whenever they come in. The code in this utility class is far from perfect, but it should give me a good starting point for publishing and subscribing to the Service Bus to connect the dots.
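The actual ServiceBusHelper source appears as an image in the original post; the sketch below captures the publish/subscribe idea using the classic Microsoft.ServiceBus.Messaging SDK of that era (method names, topic handling, and the blocking receive are my assumptions about the shape, not the post’s exact code):

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public class ServiceBusHelper
{
    private readonly string _connectionString;

    public ServiceBusHelper(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Publish side: serialize the object into a BrokeredMessage and send it.
    public void Publish<T>(string topicPath, T message)
    {
        EnsureTopicExists(topicPath);
        var client = TopicClient.CreateFromConnectionString(_connectionString, topicPath);
        client.Send(new BrokeredMessage(message));
    }

    // Subscribe side: block until a message lands on the subscription,
    // then deserialize and complete it so it leaves the queue.
    public T Receive<T>(string topicPath, string subscriptionName)
    {
        EnsureTopicExists(topicPath);
        var manager = NamespaceManager.CreateFromConnectionString(_connectionString);
        if (!manager.SubscriptionExists(topicPath, subscriptionName))
            manager.CreateSubscription(topicPath, subscriptionName);

        var client = SubscriptionClient.CreateFromConnectionString(
            _connectionString, topicPath, subscriptionName);
        BrokeredMessage message = client.Receive(); // blocks until a message arrives or times out
        if (message == null)
            return default(T); // receive timed out with nothing to process

        var body = message.GetBody<T>();
        message.Complete();
        return body;
    }

    private void EnsureTopicExists(string topicPath)
    {
        var manager = NamespaceManager.CreateFromConnectionString(_connectionString);
        if (!manager.TopicExists(topicPath))
            manager.CreateTopic(topicPath);
    }
}
```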

Now that the Service Bus helper is there to deal with both ends of the conversation I can tie the two ends together pretty easily. The web site’s code won’t be too complicated. I’ll create a view that site users can use to input their customer data. Obviously, this is a short form for demonstration purposes, but it could be any shape or size you want (within reason, of course, but you’ve got ample room if you’re just working with serialized objects).

If I’ve got an MVC view, chances are I’ll need an MVC action method to drive that view. The code for the HomeController class is below. Note that the Index action is repeated – one overload displays the form to the user, and the second handles the form’s post. The data is collected via the second Index action method and then passed into the Service Bus.
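The controller code is shown as a screen shot in the post; a rough sketch of the paired Index actions might look like this (the topic name, appSetting key, and view name are assumptions on my part):

```csharp
using System.Configuration;
using System.Web.Mvc;

public class HomeController : Controller
{
    // GET: render the customer-entry form.
    [HttpGet]
    public ActionResult Index()
    {
        return View();
    }

    // POST: hand the model-bound Customer off to the Service Bus.
    [HttpPost]
    public ActionResult Index(Customer customer)
    {
        var helper = new ServiceBusHelper(
            ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"]);
        helper.Publish("customers", customer); // "customers" topic name is hypothetical
        return View("ThankYou");
    }
}
```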

The final piece of code to get this all working is to write the console application that runs in my enterprise environment. The code below is all that’s needed to do this; when the application starts up it subscribes to the Service Bus topic and starts listening for incoming objects. Whenever they come in, the code then makes use of the Entity Framework classes to persist the Customer instance to the SQL Server database.
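The console application’s source is also an image in the original; here is a minimal sketch of the subscribe-and-persist loop, reusing the helper and repository classes described above (the topic and subscription names are assumptions):

```csharp
using System;
using System.Configuration;

class Program
{
    static void Main()
    {
        var helper = new ServiceBusHelper(
            ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"]);

        Console.WriteLine("Listening for customers...");
        while (true)
        {
            // Blocks until a Customer lands on the topic subscription.
            var customer = helper.Receive<Customer>("customers", "enterprise");
            if (customer == null)
                continue; // receive timed out; go back to listening

            new CustomerRepository().Save(customer); // EF persists to SQL Server
            Console.WriteLine("Saved customer {0}", customer.Name);
        }
    }
}
```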

Now that the code’s all written, I’ll walk you through the process of creating your Service Bus topic using the Azure portal. A few configuration changes will need to be made to the web site and the on-premises console application, too, but the hard part is definitely over.

Create the Service Bus Topic

Creating a Service Bus topic using the Azure portal is relatively painless. The first step is to use the New menu in the portal to create the actual Service Bus topic. The screen shot below, from the portal, demonstrates the single step I need to take to create my own namespace in the Azure Service Bus.


Once I click the Create a New Topic button, the Azure portal will run off and create my very own area within the Service Bus. The process won’t take long, but while the Service Bus namespace is being created, the portal will make sure I know it hasn’t forgotten about me.


After a few seconds, the namespace will be visible in the Azure portal. If I select the new namespace and click the button at the bottom of the portal labeled Access Key, a dialog will open that shows me the connection string I’ll need to use to connect to the Service Bus.


I’ll copy that connection string out of the dialog. Then, I’ll paste that connection string into the appropriate place in the Web.config file of the web application. The screen shot below shows the Web.config file from the project, with the appropriate appSettings node highlighted.
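The screen shot isn’t reproduced here, but the highlighted appSettings node would look something like the fragment below. The key name follows the convention the Service Bus SDK of that era used, and the endpoint value is a placeholder, not a real connection string:

```xml
<appSettings>
  <!-- Placeholder values; paste the real connection string copied from the portal dialog -->
  <add key="Microsoft.ServiceBus.ConnectionString"
       value="Endpoint=sb://your-namespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=YOUR_KEY" />
</appSettings>
```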


A similar node also needs to be configured in the console application’s App.config file, as shown below.


In all, there are only two *.config files that need to be edited in the solution to get this working – the console application’s App.config file and the web application’s Web.config file. Both of these files are highlighted in the solution explorer view of the solution included with this blog post.


With the applications configured properly and the service bus namespace created, I can now run the application to see how things work.

Running the Code

Since I’ll be using Entity Framework to scaffold the SQL Server database on the fly, all I’ll need to do to set up my local enterprise environment is to create a new database. The screen shot below shows my new SQL database before running the console application on my machine. Note, there are no tables or objects in the database yet.


The first thing I’ll do to get running is to debug the console application in Visual Studio. I could just hit F5, but I’ll be running the web application in debug mode next. The idea here is to go ahead and fire up the console application so that it can create the database objects and prepare my enterprise for incoming messages.


The console application will open up, but it will display no messages until it begins processing Customer objects that land on the Service Bus. To send it some messages, I’ll now debug the web application while leaving the console application running locally.


When the web site fires up and opens in my web browser, I’ll be presented the simple form used to collect customer data. If I fill out that form and click the Save button, the data will be sent into the Service Bus.


By leaving the console application running as I submit the form, I can see the data coming into my enterprise environment.


Going back into SQL Server Management Studio and refreshing the list of tables, I can see that the Entity Framework migrations ran perfectly and created the table into which the data will be saved. If I select the data out of that table using a SQL query, I can verify that, indeed, the data was persisted to my on-premises database.


At this point, I’ve successfully pushed data from my Azure Web Site up into the Service Bus, and then back down again into my local enterprise database.


One of the big questions I’ve gotten from the community since the introduction of Azure Web Sites to the Azure platform is how to connect these sites to an enterprise environment. Many customers aren’t ready to move their whole enterprise into Azure, but they want to take some steps toward getting their applications into the cloud. This sort of hybrid cloud setup is one that’s perfect for Service Bus. As you’ve seen in this demonstration, the process of connecting an Azure Web Site to your on-premises enterprise isn’t difficult, and it allows you the option of moving individual pieces as you’re ready. Getting started is easy, cheap, and will allow for infinite scaling opportunities. Find out how easy Azure can be for Web Sites, mobile applications, or hybrid situations such as this by getting a free trial account today. I’m sure you’ll see that pretty much anything’s possible with Azure.

Managing Multiple Azure Web Site Environments using Visual Studio Publishing Profiles

This is the second post in the Real World Problems with Azure Web Sites series. The first post summarized how one can manage multiple environments (development, staging, production, etc.) using a Git repository with a branching strategy. Not everyone wants to use Git, and most would prefer to stay in their favorite IDE – Visual Studio 2012 – all day to do pretty much everything. My buddy Sayed Hashimi told me about Visual Studio publishing profiles a few weeks ago, and I’d been wanting to write up something on how they could work with Azure Web Sites. This post follows up on the idea of managing multiple Azure Web Sites, but rather than doing it with Git, I’ll show you how to manage multiple sites with only Visual Studio’s awesome publishing-with-profiles features.

Set Up the Environments

The first step in the process is to have your multiple sites set up so that you have environmental isolation. In this case, I’m being thorough and requiring two gates prior to production release. For this demonstration, all three of these sites are in the free tier.


If this were fully realistic, the production site would probably be at least shared or reserved, so that it could have a domain name mapped to it. That’s the only site that would cost money, so the development and staging sites would have no impact on the cost I’ll incur for this setup.

Once the sites have been created I’ll go into each site’s dashboard to download the site’s publish settings profile. The publish settings files will be used from within Visual Studio to inform the IDE how to perform a web deploy up to my Azure Web Site environment.


Once I’ve downloaded each of these files I’ll have them all lined up in my downloads folder. I’ll be using these files in a moment once I’ve got some code written for my web site.


Now that I’ve got all my environments set up and have the publishing settings downloaded I can get down to business and write a little code.

Setting up the Web Application Project

I know I’ll have some environmental variances in the deployment details of this web application. I’ll want to use different databases for each environment, so I’ll need to have three different connection strings each site will have to be configured to use for data persistence. There’ll be application settings and details and stuff, so the first thing I’ll do in this simple ASP.NET MVC project is to prepare the different publishing profiles and the respective configuration for those environments.

To do this, I’ll just right-click my web project and select the Publish menu item. I’m not going to publish anything just yet, but this is the super-easiest way of getting to the appropriate dialog.


When the publishing dialog opens, I’ll click the Import button to grab the first environment’s publish settings files.


I’ll grab the first publish settings file I find in my downloads folder, for the site’s development environment.


Once I click Open, the wizard will presume I’m done and advance to the next screen. I’ll click the Profile link in the navigation bar at this point one more time, to go back to the first step in the wizard.

If, at any point during this process, you’re asked if you want to save the profile, click Yes.


I’ll repeat the import process for the staging and production files. The idea here is to get all of the publish settings files imported as separate profiles for the same Visual Studio web application project. Once I’ve imported all those files I’ll click the Manage Profiles button. The dialog below should open up, showing me all of the profiles I’ve imported.


This part isn’t a requirement or even a recommendation, but I don’t typically need the FTP profiles, so I’ll go through and delete each *FTP profile that was imported. Again, just a preference – once I’m done, only the Web Deploy profiles are left in my dialog.


I’ll just click Close now that I’ve got the profiles set up. They’ll now be visible under the Properties/PublishProfiles project node in Visual Studio. This folder is where the XML files containing publishing details are stored.


With the profile setup complete, I’m going to go ahead and set up the configuration specifics for each environment. By right-clicking on each *.pubxml file and selecting the Add Config Transform menu item, I get a separate *.config transform file created in the project for each profile.


Each file represents the transformations I’ll want performed as I deploy the web site to the individual environment sites. Once I’ve added a configuration transformation for each profile, there’ll be a few new nodes under the Web.config file, giving me the opportunity to configure specific details for each site.


Now that I’ve got the publish profiles and their respective configuration transformation files set up for each profile, I’ll write some code to make use of an application setting so I can check to make sure the per-profile deployment does what I think it’ll do.

Now, you may be thinking this isn’t very practical – surely I couldn’t allow my developers the ability to deploy to production – and feel compelled to blow off the rest of this post because I’ve completely jumped the shark. Keep reading; I bring it back down to Earth and even talk a little release-management process later on.

Environmental Configuration via Profiles

Now I’ll go into the Web.config file and add an appSetting to the file that will reflect the message I want users to see whenever they browse to the home page. This setting will be specific per environment, so I’ll use the transformation files in a moment to make sure each environment has its very own welcome message.


This is the message that would be displayed to a user if they were to hit the home page of the site. I need to add some code to my controller and view to display this message. It isn’t very exciting code, but I’ve posted it below for reference.

First, the controller code that reads from configuration and injects the message into the view.
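The controller code itself is a screen shot in the post; one plausible sketch, assuming the appSetting is named Message (that key name is hypothetical):

```csharp
using System.Configuration;
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Read the per-environment welcome message from Web.config
        // and hand it to the view for display.
        ViewBag.Message = ConfigurationManager.AppSettings["Message"];
        return View();
    }
}
```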


Then I’ll add some code to the view to display the message in the browser.


When I browse the site I’ll get the obvious result, a simple hello message rendered from the configuration file on my local machine.


I’ll go into the development configuration profile file and make a few changes – I strip out the comments and stuff I don’t need, then I add the message appSetting variable to the file and set the transformation to perform a replace when the publish happens. This basically replaces everything in the Web.config file with everything in the Web.MySite-Dev - Web Deploy.config file that has an xdt:Transform attribute set to Replace.
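Since the transform files themselves appear only as screen shots, here’s a minimal sketch of what the development profile’s Replace transform could look like, using the standard Web.config transformation (xdt) syntax. The Message key name is hypothetical:

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!-- Replace the entire appSettings node during publish -->
  <appSettings xdt:Transform="Replace">
    <add key="Message" value="Welcome to the development environment!" />
  </appSettings>
</configuration>
```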


I do the same thing for the staging profile’s configuration file…


… and then for the production profile’s configuration file.


With the environmentally-specific configuration attributes set up in the profile transformations and the publish profiles set up, everything should work whenever I need to do a deployment to any of the environments. Speaking of which, let’s wrap this up with a few deployments to our new environments!


The final step will be to deploy the code for the site into each environment to make sure the profile configuration is correct. This will be easy, since I’ve already imported all of my environments’ configuration files. I’ll deploy development first by right-clicking the project and again selecting the Publish context menu item. When the publish wizard opens up I need to select the development environment’s profile from the menu.


Once the publish process completes the site will open up in my browser and I can see that the appropriate message is being displayed, indicating the configuration transformation occurred properly based on the publish profile I’d selected to deploy.


Next, I right-click the project and select Publish again, this time selecting the staging environment.


When the publish completes, the staging welcome message is displayed.


If I repeat the same steps for production, the appropriate message is displayed there, too.


In a few short steps, I’m able to set up a series of environments and publish profiles that work together to give me separate deployment environments, with little extra work or overhead. Since the profiles are linked to the configuration transformations explicitly, it all just works when I deploy the site.

Release Management

As promised earlier in that blockquote up there, I want to stay with the “these are real-world scenarios as much as possible, based on my real-world experiences and questions I’ve been asked” mantra, so I feel it’s necessary to get into the idea of release management insofar as it applies here. In the previous example I was using Git branches to gate releases. In this example, I’m not using any centralized build solution; rather, I’m assuming there’s a source control environment in between the team members – developers, testers, release management, and so on – but that the whole team chooses to use the Web Deploy awesomesauce built into Visual Studio.

Think of a company with aggressive timelines that still takes care to gate releases but chooses (for whatever reason) not to set up a centralized build system. This company feels strongly about managing the release process and maintaining separate chains of testing and signoff responsibility as code moves through the environments on the way to a production release, but they love using Visual Studio and Web Deploy to get things into those environments as quickly as possible.

The diagram below demonstrates one potential release cycle that could make use of the publish profile method of gating deployments through a series of environmental gates.


Assume the team has come to a few conclusions and agreements on how their release cycle will execute.

Luckily, this sort of situation is quite possible using publish profiles, with free Azure Web Sites serving as environmental weigh stations on the way to a production site that’s deployed to multiple large reserved instances (for instance).


The convenient partnership between web publishing and Azure Web Sites shouldn’t be regarded as an invitation to cowboy coding. Considered instead as a tool, and coupled with a responsible release cycle and effective deployment gating, it can streamline and simplify the entire SDLC when your business is web sites.

I hope this post has introduced you to a method of controlling your deployment environments, while also allowing you to do the whole thing from within Visual Studio. Later on, I’ll follow up this post with an example of doing this sort of thing using Team Foundation Services.

Hopefully, you have enough ammunition to get started with your very own Azure Web Site account today, for free, and you feel confident you’ll be able to follow your very own release management process, without the process or architecture slowing you down. If you have any questions about this approach or the idea in general, feel free to use the comments form below.

Happy coding!

Multiple Environments with Azure Web Sites

This is the first post in the Real World Problems with Azure Web Sites blog series, and it intends to answer one of the most common questions I receive when I’m doing presentations about Azure Web Sites. The situation is a typical setup, wherein a site owner has multiple environments to which they push their web site. This setup is extremely valuable for staging site releases, for delivering solid web applications, and for doing A-B testing of a site’s changes. It helps to have staging and production environments so you can make sure things are good before your changes go live in production. My good friend and colleague Cory Fowler blogged about continuous deployment with Azure Web Sites, and my other good buddy Magnus Martensson did a great presentation at Windows AzureConf on the topic. I’ve done countless demonstrations of continuous deployment with Web Sites, but one question always comes up, that this post intends to answer.

That’s all well and good and I know I can deploy my site automatically each time I make a change, but that’s not realistic.


It’s like, if I use Azure Web Sites to host my site, it’ll be deployed each time I check in code – even when I didn’t want to deploy the site.


How do I control what gets deployed and control the deployment and maintain quality?

That’s a great question, and it’s one that most have struggled to answer. It’s also a barrier for many who are thinking of using Azure Web Sites but who don’t want to manage their site like they’re running their company out of a garage. This “happy path” deployment mantra isn’t real-world, especially for site owners who want to stage changes, test them out, and be certain their changes won’t cause any problems following a hasty deployment process.

Multiple Sites for Multiple Environments

As with any multiple-environment setup, the first thing I need to do to support having multiple environments is to create multiple sites in Azure Web Sites. Using the portal, this is quite simple. The screen shot below shows you what this would look like. Note, I’ve got a “real” site, that’s my production site area, and I’ve also added a staging site to the account.


In particular, take note of how both of these sites are in the “free” tier. Let’s say you’ve got a production site in the “non-free” zone because you want to map your domain name to it, scale it up or out, or whatever else. I’ll leave my staging site in the free tier and not incur any charges on it.

Why is this important? Because I won’t need to pay anything additional for having multiple sites. Since most users won’t have the staging URL – and it won’t matter what it is, since it’s just for testing purposes – I don’t need to map a domain name to it, scale it, or anything like that. It’s just there for whenever I need to do a deployment for testing or verification purposes. This setup won’t require you to spend any more money.

Using GitHub for Continuous Deployment

In this example, I’ll be using GitHub to manage my source code. You don’t have to use GitHub for this, so if you’re new to Git don’t freak out; you have other options like CodePlex, BitBucket, or TFS. Heck, you could even automate an FTP deployment if you want to.

The first step in setting up GitHub integration with a web site is to load up the site’s dashboard and to click the Quick Link labeled “Set up Git publishing” as is illustrated in the screen shot below.


Once the repository setup completes, the portal will allow me to specify what type of Git repository I want to connect to my site. I’ll select GitHub from the list of options, as you’ll see below.


If this is the first time I’ve tried to connect a repository to a web site, I’ll be asked to allow the partnership to take place.


By clicking the Allow button, I let GitHub know I’m okay with the partnership. The final step in tying a repository to a site is to select the repository I want to be associated with the site, which you’ll see in the screen shot below.


I’ll repeat this process for the staging site, too, and I’ll associate it with the exact same repository. This is important, as I’ll be pushing code to one repository and wanting the deployment to happen according to which site I want to publish.

Two Sites, One Repository? Really?

Sounds weird, right? I’ve got two sites set up now – one for production, the other for staging – but I’ve associated both sites with the same repository. It seems a little weird, since each time I push code to the repository, I’d be deploying both sites automatically. The good part is, there’s an awesome feature in Git that I can use to make sure I’m deploying to the right spot. That feature is called branching, and if you’re acquainted with any modern source control management product, you probably already know about branching. You probably already use branching to isolate deviations in your code base, or to work on features and bugs. With Azure Web Sites’ support for branches, you can use them for environmentally-specific deployment practices too. The best part is, it’s quite easy to set up, and that’s just what I’ll show you next.

Configuring Branch Associations

Before I write any code, I’ll need to set up the branch “stuff” using the portal. To do this, I’ll go into my production site’s dashboard and click the Configure link in the navigation bar. Scrolling down about half way, I can see that the production site’s set up to use the master branch. The master branch is the default branch for any Azure Web Site, but as you’ll see here, the portal gives me the ability to change the branch associated with an individual web site.


Now, I’ll go into my staging site and set the associated branch for that site to staging. This means that each time I check code into the master branch, it’ll be deployed to the production site, and each time I check code into the staging branch, it’ll be deployed to the staging site.


With the setup out of the way I’ll be able to write some code that’ll be deployed automatically when I need it to be deployed, and to the right place.

Code, Commit, Deploy

Now that my sites are all configured and pointing to the right branches, I’ll need to set up a local Git repository, write some code, and check that code into the repository. Once that’s all done I’ll create a second branch called staging that I’ll use to push code to my staging site.

The first step is, obviously, to write some code. I won’t do much complicated stuff for this demo. Instead I’ll just make a simple MVC site with one view. In this view I’ll just put a simple message indicating the site to which I intend to deploy the code.


Now, I’ll open up PowerShell to do my Git stuff. As Phil Haack points out in his epic blog post on the topic, posh-git is a great little tool if you’re a PowerShell fan who also uses Git for source control.


I’ll initialize my Git repository using git init, then tell Git where to push the code using the git remote add origin [URL] command. Next, I’ll use the git add and git commit commands to commit all of my changes to the local repository. All of the files in this changeset will scroll up the screen, and when the commit completes I’ll push the changes up to GitHub’s master branch using the git push origin [branch] command.


Once the commit finishes and the code is pushed up to GitHub, Azure Web Sites will see the commit and perform a deployment. If you’re watching the portal when you do the commit you’ll see the deployment take place [almost magically].


If I click the Browse button at the bottom of the portal the site will open up in a browser and I can verify that the change I just committed was deployed. So far, so good.


Setting Up the Staging Branch

Now that the production site’s been deployed I need to set up the staging environment. To do this, I’ll go back into my favorite Git client, PowerShell, and create a new branch using the git checkout -b [branch] command. This will create a new branch in my local Git repository, then switch to that branch and make it active. If I type git branch in PowerShell, it’ll show me all the branches in my local repository. The green line indicates the branch I’m currently working on.


Now that the branch has been created, I’ll make some small change in the code of the site. Once this change has been made I’ll be pushing it up to GitHub again, but this time I’ll be pushing it into the newly-created staging branch, so the production code in the master branch is safe and sound.


Switching back to PowerShell, I’ll commit the changes to the staging branch in my local repository. Then, I’ll push that code back up to GitHub, this time specifying the staging branch in my git push command.


This time, I’ll watch the staging site in the portal. As soon as the push completes, the portal will reflect that a deployment is taking place.


Once the deployment completes, I can check the changes by clicking the Browse button at the bottom of the portal page. When the staging site opens up, I can see that the changes were pushed to the site successfully.


If I go back to the production site, it still says Production on it. Pushing to the staging site didn’t affect my production site, and vice-versa. I’ve got dual-environment deployments based on source code branches, and I’m able to test things out in one environment before pushing changes to production. Everyone wins!

Local Git Branch Niftiness

One of the neat things about using Git branches (at least it was nifty to me), is that all the code for all the branches is stored in your local repository. Switching between branches automatically results in the source code being restored on your local drive. The whole thing happens automatically. Demonstrating how this works is as easy as switching branches while you have a source code file open in Visual Studio.

So let’s say I still have the staging branch set as my working branch and I’ve got the source code open in Visual Studio. If I go to my PowerShell client again and switch the branch using the git checkout [branch] command, Git changes the branch on the fly for me. In so doing, all the files in the local directory are replaced with the files from the newly-selected branch.


The moment I switch back to Visual Studio, it warns me that the file has changed on disk and asks me if I’d like to reload it.


If I click the Yes button and then look at my file, I’ll see that the file has been changed to the one resident in the new branch.


In this way, Git keeps all the branches of my source code in my local repository, so I can make changes to the branches as needed and then commit those changes back to the local repository. Once I’m ready, I can push those files back up to the origin (in this case, GitHub), and everything’s cool.


Web site environments are a real-world method of controlling, gating, and reverting deployments. Using multiple environments, development shops can make sure their changes took place properly before crashing a production site with poor changes. Azure Web Sites is a real-world hosting platform that can be used to solve real web site challenges. With a little thought and planning, it’s easy to use Azure Web Sites to host multiple versions of a web site, so testing can happen live without affecting production deployments. I hope this post has introduced a good method of achieving separate site environments, and shown yet another way Azure Web Sites can help you get your site up and running, keep it continually deployed, and reduce your concerns about continuously deploying to a production environment using simple tricks like Git branches.

Build a Location API Using Entity Framework Spatial and Web API, on Azure Web Sites

As announced by Scott Guthrie recently, Azure Web Sites now supports the .NET Framework 4.5. Some awesome ASP.NET features are now available to web developers who want to host their applications on Azure, now that the Web Sites offering supports .NET 4.5. One feature I’m especially excited about is Entity Framework Spatial support. Only available in .NET 4.5, EF Spatial gives developers who want to build location-aware applications the ability to easily save and retrieve location data without having to invent crazy solutions in SQL code. I’ve implemented the Haversine formula using a SQL stored procedure in the past, and I can speak from experience when I say that EF Spatial is about 10,000 times easier and more logical. Don’t take my word for it, though. Take a look at the sample code in this blog post, which demonstrates how you can develop a location-aware API using ASP.NET Web API and EF Spatial, and host the whole thing on Azure Web Sites.
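As a taste of what EF Spatial enables, here’s a hedged sketch of a radius query using DbGeography (the class and property names are illustrative, not the actual sample’s). DbGeography lives in System.Data.Spatial in the EF5 / .NET 4.5 era, and its Distance method is translated into a SQL spatial query on the server:

```csharp
using System.Data.Entity;
using System.Data.Spatial; // DbGeography in EF5 / .NET 4.5
using System.Linq;

public class Location
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DbGeography Position { get; set; }
}

public class LocationContext : DbContext
{
    public DbSet<Location> Locations { get; set; }
}

public static class LocationFinder
{
    // Well-known text uses POINT(longitude latitude); 4326 is the WGS 84
    // coordinate system, and Distance() returns meters in that system.
    public static IQueryable<Location> Near(
        LocationContext db, double latitude, double longitude, double meters)
    {
        var here = DbGeography.FromText(
            string.Format("POINT({0} {1})", longitude, latitude), 4326);
        return db.Locations
                 .Where(l => l.Position.Distance(here) <= meters)
                 .OrderBy(l => l.Position.Distance(here));
    }
}
```

Compare that to hand-rolling the Haversine formula in a stored procedure: the entire distance calculation collapses into a single LINQ predicate.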

Creating the Site in Azure

Before diving into code I’ll go out to the Azure portal and create a new web site. For this API example, I create a site with a database, as I’ll want to store the data in an Azure SQL Database. The screen shot below shows the first step of creating a new site. By simply selecting the new web site option, then selecting “with database,” I’m going to be walked through the process of creating both assets in Azure in a moment.


The first thing Azure will need to know is the URL I’ll want associated with my site. The free Azure Web Sites offer defaults to [yoursitename], so this first step allows me to define the URL prefix associated with my site.

Simultaneously, this first step gives me the opportunity to define the name of the connection string I’ll expect to use in my Web.config file later, which will connect the site to the Azure SQL Database I’ll create in a moment.


The last steps in the site creation process will collect the username and SQL Server information from you. In this example, I’m going to create a new database and a new SQL Server in the Azure cloud. However, you can select a pre-existing SQL Server if you’d prefer during your own setup process.

I specifically unchecked the “Configure Advanced Database Settings” checkbox, as there’s not much I’ll need to do to the database in the portal. As you’ll see in a moment, I’ll be doing all my database “stuff” using EF’s Migrations features.


Once I’ve entered the username I’d like to use and the password, and selected (or created) a SQL Server, I click the check button to create the site and the SQL database. In just a few seconds, both are created in Azure, and I can get started with the fun stuff – the c0d3z!

Preparing for Deployment

Just so I have a method of deploying the site once I finish the code, I’ll select the new site from the Azure portal by clicking on its name once the site-creation process completes.


The site’s dashboard will open up in the browser. If I scroll down, the Quick Glance links are visible on the right side of the dashboard page. Clicking the link labeled Download Publish Profile will do just that – download a publish settings file, which contains some XML defining how Visual Studio or WebMatrix 2 (or the Web Deploy command line) should upload the files to the server. Also contained within the publish settings file is the metadata specific to the database I created for this site.


As you’ll see in a moment when I start the deployment process, everything I need to know about deploying a site and a database backing that site is outlined in the publish settings file. When I perform the deployment from within Visual Studio 2012, I’ll be given the option of using Entity Framework Migrations to populate the database live in Azure. Not only will the site files be published, the database will be created, too. All of this is possible via the publish settings file’s metadata.

Building the API in Visual Studio 2012

The code for the location API will be relatively simple to build (thanks to the Entity Framework, ASP.NET, and Visual Studio teams). The first step is to create a new ASP.NET MVC project using Visual Studio 2012, as shown below. If you’d rather grab the code than walk through the coding process, I’ve created a public repository for the Spatial Demo solution, so clone it from there to view the completed source code rather than creating it from scratch.

Note that I’m selecting the .NET Framework 4.5 in this dialog. Prior to 4.5 support in Azure Web Sites, this would always need to be set to 4.0 or my deployment would fail. I would also have had compilation issues for anything relating to Entity Framework Spatial, as those libraries and namespaces are only available under .NET 4.5. Now, I can select the 4.5 Framework, satisfy everyone, and keep on trucking.


In the second step of the new MVC project process I’ll select Web API, since my main focus in this application is to create a location-aware API that can be used by multiple clients.


By default, the project template comes with a sample controller to demonstrate how to create Web API controllers, called ValuesController.cs. Nothing against that file, but I’ll delete it right away, since I’ll be adding my own functionality to this project.

Domain Entities

The first classes I’ll add to this project will represent the entity domains pertinent to the project’s goals. The first of these model classes is the LocationEntity class. This class will be used in my Entity Framework layer to represent individual records in the database that are associated with locations on a map. The LocationEntity class is quite simple, and is shown in the gist below.

Some of the metadata associated with a DbGeography object isn’t easily or predictably serialized, so to minimize variability (okay, I’m a control freak when it comes to serialization) I’ve also created a class to represent a Location object on the wire. This class, the Location class, is visible in the following gist. Take note, though: it’s not much different from the LocationEntity class, aside from one thing. I’m adding explicit Latitude and Longitude properties to this class. DbGeography instances offer a good deal more functionality, but I won’t need it in this particular API example. Since all I need is latitude and longitude on the API side, I’ll just work up some code in the API controller I’ll create later to convert the entity class to the API class.
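The original gists aren’t reproduced here, but based on the description above, the two classes might look roughly like this. Property names beyond Latitude and Longitude are assumptions:

```csharp
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Spatial;

// Entity class persisted by Entity Framework; the DbGeography property
// maps to a SQL Server geography column.
public class LocationEntity
{
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }
    public DbGeography Location { get; set; }
}

// Wire-format class returned by the API; only primitive coordinates
// are exposed, so serialization stays predictable.
public class Location
{
    public int Id { get; set; }
    public string Name { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}
```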

Essentially, I’ve created a data transfer object and a view model object. Aside from the Entity Framework Spatial functionality, nothing here is really new compared to previous API implementations I’ve done, which also required the database entity to be loosely coupled from the class the API or GUI uses to display (or transmit) the data.

Data Context, Configuring Migrations, and Database Seeding

Now that the models are complete I need to work in the Entity Framework “plumbing” that gives the controller access to the database via EF’s magic. The first step in this process is to work up the Data Context class that provides the abstraction layer between the entity models and the database layer. The data context class, shown below, is quite simple, as I’ve really only got a single entity in this example implementation.

Take note of the constructor, which is overridden from the base’s constructor. This requires me to make a change in the Web.config file created by the project template. By default, the Web.config file is generated with a single connection string, the name of which is DefaultConnection. I need to either create a secondary connection string with the right name, change the default one (which I’ve done in this example), or use Visual Studio’s MVC-generation tools to create an EF-infused controller, which will add a new connection string to the Web.config automatically. Since I’m coding up this data context class manually, I just need to go into the Web.config and change the DefaultConnection connection string’s name attribute to match the one I’ve used in this constructor override, SpatialDemoConnectionString. Once that’s done, this EF data context class will use the connection string identified in the configuration file with that name.
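A minimal sketch of such a data context follows. The class name here is illustrative; only the SpatialDemoConnectionString name comes from the walkthrough:

```csharp
using System.Data.Entity;

// Minimal data context: the constructor override points EF at the
// SpatialDemoConnectionString entry in Web.config instead of the
// template's DefaultConnection.
public class SpatialDemoDataContext : DbContext
{
    public SpatialDemoDataContext()
        : base("SpatialDemoConnectionString")
    {
    }

    public DbSet<LocationEntity> Locations { get; set; }
}
```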

During deployment, this becomes a very nifty facet of developing ASP.NET sites that are deployed to Azure Web Sites using the Visual Studio 2012 publishing functionality. We’ll get to that in a moment, though…

EF has this awesome feature called Migrations that gives EF the ability to set up and/or tear down database schema objects, like tables and columns and indexes (oh my!). So the next step for me during this development cycle is to set up the EF Migrations for this project. Rowan Miller does a great job of describing how EF Migrations work in this Web Camps TV episode, and Robert Green’s Visual Studio Toolbox show has a ton of great content on EF, too, so check out those resources for more information on EF Migrations’ awesomeness. The general idea behind Migrations, though, is simple – it’s a way of allowing EF to scaffold database components up and down, so I won’t have to do those things using SQL code.

What’s even better than the fact that EF has Migrations is that I don’t need to memorize how to use it, because the NuGet/PowerShell/Visual Studio gods have made it pretty easy for me. To turn Migrations on for my project, which contains a class deriving from EF’s data context class (the one I just finished creating in the previous step), I simply type the command enable-migrations into the NuGet Package Manager Console window.

Once I enable Migrations, a new class will be added to my project. This class is placed in a new Migrations folder and is usually called Configuration.cs. That file contains a constructor and a method – appropriately named Seed – that I can implement however I want. In this particular use case, I enable automatic migrations and add some seed data to the database.

Enabling automatic migrations means any model changes I make will automatically be reflected in the database later on (again, this is super-nifty once we do the deployment, so stay tuned!).
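Put together, the Configuration class might look something like the sketch below. The seed record and the context class name are assumptions for illustration:

```csharp
using System.Data.Entity.Migrations;
using System.Data.Entity.Spatial;

internal sealed class Configuration : DbMigrationsConfiguration<SpatialDemoDataContext>
{
    public Configuration()
    {
        // Schema changes are applied automatically at run time.
        AutomaticMigrationsEnabled = true;
    }

    protected override void Seed(SpatialDemoDataContext context)
    {
        // AddOrUpdate keeps the seed idempotent across repeated migrations.
        context.Locations.AddOrUpdate(x => x.Name,
            new LocationEntity
            {
                Name = "Example Wing Joint",
                Location = DbGeography.FromText("POINT(-122.3321 47.6062)", 4326)
            });
    }
}
```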

Quick background on the types of locations we’ll be saving: my wife and I recently moved from the Southeast US to the Pacific Northwest. Much to our chagrin, there are far fewer places to pick up great chicken wings than there were in the Southeast. So I decided to use our every-Sunday-during-football snack of chicken wings as the use case for a location-based app. What better example than a list of good chicken wing restaurants sorted by proximity? Anyway, that’s the inspiration for the demo. Dietary recommendation not implied, BTW.

The API Controller Class

With all the EF plumbing and domain models complete, the last step in the API layer is to create the API controller itself. I simply add a new Web API controller to the Controllers folder, and change the code to make use of the plumbing work I’ve completed up to now. The dialog below shows the first step, when I create a new LocationController Web API controller.


This controller has one method, which takes the latitude and longitude from a client. Those values are used in conjunction with EF Spatial’s DbGeography.Distance method to sort the records by proximity, and the first five records are returned. In other words, when a client provides its latitude and longitude coordinates to the API method, the five closest locations come back. The Distance method is used again to determine how far away each location is from the provided coordinates. The results are returned using the API-specific class rather than the EF-specific class (thereby separating the two layers and easing some of the potential serialization issues that could arise), and the whole output is formatted as either XML or JSON and sent down the wire via HTTP.
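A hedged sketch of such a controller method, using the DbGeography.Distance calls described above; class and property names are assumptions consistent with the earlier sketches:

```csharp
using System.Collections.Generic;
using System.Data.Entity.Spatial;
using System.Linq;
using System.Web.Http;

public class LocationController : ApiController
{
    private readonly SpatialDemoDataContext _context = new SpatialDemoDataContext();

    // GET api/location?latitude=47.6&longitude=-122.3
    public IEnumerable<Location> Get(double latitude, double longitude)
    {
        // WKT is longitude-first; 4326 is the WGS 84 SRID.
        var origin = DbGeography.FromText(
            string.Format("POINT({0} {1})", longitude, latitude), 4326);

        // Sort by proximity in the database, then map entities to the
        // wire-format class so only primitives are serialized.
        return _context.Locations
            .OrderBy(x => x.Location.Distance(origin))
            .Take(5)
            .ToList()
            .Select(x => new Location
            {
                Id = x.Id,
                Name = x.Name,
                Latitude = x.Location.Latitude ?? 0,
                Longitude = x.Location.Longitude ?? 0
            });
    }
}
```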

At this point, the API is complete and can be deployed to Azure directly from within Visual Studio 2012 using the great publishing features created by the Visual Studio publishing team (my buddy Sayed Hashimi loves to talk about this stuff, so ping him on Twitter if you have any questions or suggestions on this awesome feature-set).

Calling the Location API using an HTML 5 Client

In order to make this a more comprehensive sample, I’ve added some HTML5 client code and Knockout.js-infused JavaScript code to the Home/Index.cshtml view that gets created by default with the ASP.NET MVC project template. This code makes use of the HTML5 geolocation capabilities to read the user’s current position. The latitude and longitude are then used to call the location API directly, and the results are rendered in the HTML client using a basic table layout.
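The view code isn’t shown inline here, but the client-side flow might be sketched like this; the API route and the viewModel shape are assumptions:

```javascript
// Builds the query-string URL for the location API (route is an assumption).
function buildApiUrl(latitude, longitude) {
    return '/api/location?latitude=' + latitude + '&longitude=' + longitude;
}

// Reads the browser's position via the HTML5 geolocation API, calls the
// location API, and pushes the results into a Knockout observable array.
function loadNearbyLocations(viewModel) {
    navigator.geolocation.getCurrentPosition(function (position) {
        $.getJSON(
            buildApiUrl(position.coords.latitude, position.coords.longitude),
            function (locations) {
                viewModel.locations(locations); // bound to the table layout
            });
    });
}
```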

The final step is to deploy the whole thing up to Azure Web Sites. This is something I wasn’t able to do until last week, so I’m super-stoked to be able to do it now and to share it with you on a demo site, the URL of which I’ll hand out at the end of this post.

One Last NuGet to Include

Entity Framework Spatial has some new data types that add support for things like… well… latitude and longitude, in this particular case. By default, these types aren’t installed in an Azure instance, as they’re part of the database SDK. Most of the time those assemblies aren’t needed on a web server, so by default you won’t have them when you deploy. To work around this problem and make Entity Framework Spatial work on the first try following your deployment to Azure, install the Microsoft.SqlServer.Types NuGet package into your project by typing install-package Microsoft.SqlServer.Types in the Package Manager Console or by finding the package manually in the “Manage NuGet Packages” dialog.

Thanks to Scott Hunter for this extremely valuable piece of information, which I lacked the first time I tried to do this. This solution was so obvious I hid in my car with embarrassment after realizing how simple it was and that I even had to ask. NuGet, again, to the rescue!

Once this package is installed, deploying the project to Azure will trigger automatic retrieval of that package, and the support for the location data types in SQL Server will be added to your site.

Publishing from Visual Studio 2012 is a Breeze

You’ve probably seen a ton of demonstrations on how to do deployment from within Visual Studio 2012, but it never ceases to amaze me just how quick and easy the team has made it to deploy sites – with databases – directly up to Azure in so few, simple steps. To deploy to a site from within Visual Studio 2012, I just right-click the site and select – get this – Publish. The first dialog that opens gives me the option to import a publish settings file, which I downloaded earlier just after having created the site in the Azure portal.


Once the file is imported, I’m shown the details so I have the chance to verify everything is correct, which I’ve never seen it not be, quite frankly. I just click Next here to move on.


This next step is where all the magic happens that I’ve been promising you’d see. This screen, specifically the last checkbox (highlighted for enthusiasm), points to the database I created earlier when I initially created the “site with database” in the Azure portal. If I check that box, then when I deploy the web site, the database schema will be automatically created for me, and the seed data will be inserted and waiting when the first request to the site is made. All that, just by publishing the site!


Can you imagine anything more convenient? I mean seriously. I publish my site and the database is automatically created, seeded, and everything wired up for me using Entity Framework, with a minimal amount of code. Pretty much magic, right?

Have at it!

Now that the .NET 4.5 Framework is supported by Azure Web Sites, you can make use of these and other new features, many of which are discussed or demonstrated on the ASP.NET site’s pages dedicated to ASP.NET 4.5 awesomeness. If you want to get started building your own location APIs on top of Entity Framework Spatial, grab your very own Azure account, which offers all kinds of awesomeness for free. You can take the sample code for this blog post, or copy the gists and tweak them however you want.

Happy Coding!

Gallery Server Pro and Azure Web Sites

The Azure Web Sites team has added another web application to the growing list of options available for pre-configured web sites and applications you can use to get your site up and running in minutes. Today’s addition is Gallery Server Pro, a web-based tool for managing your image or video library. This post will walk you through the process of getting Gallery Server Pro running in Azure Web Sites, so if you have a huge collection of images you’d like to post online, you won’t want to miss this post.

The Gallery

Azure Web Sites makes it easy to get started with a new web site, and you can’t beat the price and the ability to upgrade your horsepower whenever you need (or to downgrade whenever you don’t need it). The web application gallery in the Azure portal adds the sugary sweet ability to get up and running with a completely-built and pre-configured application in seconds. In most cases you won’t even need to write a line of code. Think of all those hours of your life you’ve spent configuring a CMS or blogging tool to get it working on your server – the gallery provides point-and-click creation and deployment of many of these types of tools.

Below, for instance, you can see how easy it is for me to start building a brand new gallery using Gallery Server Pro in Azure. Just click the New button at the bottom of the portal, then select the From Gallery option, and select Gallery Server Pro from the list of applications available.

Once you select Gallery Server Pro and click the Next button, you’ll be asked to give the site a name and to provide the username and password for the administrative user.

Once the application is selected, a new site will be provisioned using the Gallery Server Pro application code. Within a minute, the application is created and your new online gallery will be ready for images.

Using Gallery Server Pro

As far as web gallery applications are concerned, Gallery Server Pro is a breeze to use. The screen shot below shows the application running on Azure Web Sites. The menu system makes the process of creating and editing galleries a snap. Below, you’ll see an empty gallery in the application’s administration console.

In under 5 minutes, I was able to create a brand new online image gallery, into which I uploaded a few sample pictures. Note, I’d never used Gallery Server Pro prior to this, and it was super-simple to get up and running. I didn’t even need to read the help file for the application, the process was so lightweight and easy to use.

Whether you're just posting pictures of your family or you’re an aspiring artist or designer who wants to publish their photos and artwork online for people to see, Gallery Server Pro is a great choice. Their web site offers documentation and an extensive user guide and administration reference, and the product promises to continue adding new features and functionality. With Azure Web Sites now as the easiest hosting option around for creating these types of sites and with deployment never being as easy as it is using the web sites application gallery, now is a good time to try out Gallery Server Pro for your online gallery needs.

Start Simple. Scale Up as Needed

If you haven’t tried Azure yet, you can get started right now, for free, and stay free for a year if all you really need is a gallery site. In 5 minutes you’ll have your own online gallery up and running, without writing a single line of code or performing any complex configuration tasks. When you pair the ease of setup and deployment Azure Web Sites and the application gallery provide with the simplicity Gallery Server Pro provides, you can’t go wrong.

BlogEngine.NET and Azure Web Sites

The Azure Web Sites team has been hard at work looking at various applications and working with vendors and community contributors to add some great applications to the web sites gallery. If you’re a blogger and you’d like to get started for free with a simple, yet extensible blogging tool, you might want to check this out. Starting this week you can install BlogEngine.NET into a free instance of an Azure Web Site. I’ll walk you through the process in this post.

The Gallery

Azure Web Sites makes it easy to get started with a new web site, and you can’t beat the price and the ability to upgrade your horsepower whenever you need (or to downgrade whenever you don’t need it). The web application gallery in the Azure portal adds the sugary sweet ability to get up and running with a completely-built and pre-configured application in seconds. In most cases you won’t even need to write a line of code. Think of all those hours of your life you’ve spent configuring a CMS or blogging tool to get it working on your server – the gallery provides point-and-click creation and deployment of many of these types of tools.

Below, for instance, you can see how easy it is for me to start building a brand new blog using BlogEngine.NET in Azure.

Installing BlogEngine.NET

Once you select BlogEngine.NET and click the next arrow button, the gallery installer will collect some pretty basic information from you and provision the site. The next step in the installation process asks for your site’s name, which will serve as the URL prefix for the new site.

BlogEngine.NET’s default storage mechanism doesn’t require a database (though you can, of course, customize your installation later to use SQL if you’d like), so that’s basically the last step in the process. Two steps, and a brand new BlogEngine.NET blog will be created and deployed into Azure Web Sites. Below you’ll see how the portal reflects the status of the deployment.

In a few moments, the site will be completely deployed, and by clicking on the browse button at the bottom of the portal when you’ve got your freshly-deployed blog selected…

You’ll be impressed at how easy the process is when you’re running BlogEngine.NET on Azure Web Sites. In seconds, your new blog will be ready, deployed, and willing to record your every contemplation.  

Using BlogEngine.NET

BlogEngine.NET is great, the administrative interface a breeze, and the whole experience painless when you install it this way. I’ve not used the product in a number of years. Though I don’t recall the installation and configuration process as a difficult one, it sure as heck wasn’t this easy the last time I installed it on a server.

The first thing you’ll want to do is log in as admin (the default password is also admin), according to the BlogEngine.NET installation documentation, and change your password. The administration dashboard is quite simple, and blogging with the product is a breeze.

As simple as BlogEngine.NET is on the surface, it has a great community and huge list of extensions and resources available. The extensions page in the BlogEngine.NET administration console should give you an idea of what’s available for your use if you’re into the idea of customizing the functionality.

So in seconds – it literally took me less than 1 minute on the first try – you can have a brand new blog running in Azure Web Sites, for free or for however much you need whenever you need it, that’s simple to use and extensible with a great plug-in model.

Start Simple. Go Big When You Need To

If you haven’t tried Azure yet, you can get started right now, for free, and stay free for a year if all you really need is a BlogEngine.NET site. In 5 minutes you’ll have your own site up and running, without writing a single line of code or performing any complex configuration tasks. When you pair the ease of setup and deployment Azure Web Sites and the application gallery provide with the blogging simplicity and elegance BlogEngine.NET provides, you can’t go wrong.

PhluffyFotos on Azure

In keeping with the Azure Evangelism Team’s mission of providing samples that demonstrate how to use various aspects of the Azure platform, I’d like to introduce you to the PhluffyFotos application. The idea behind PhluffyFotos is to offer an image-sharing application that allows users to upload, tag, and share images online. The MyPictures application sample introduced you to the idea of storing images in Azure Blob Storage via Web API, but PhluffyFotos takes the idea of an image-sharing application a few steps further architecturally. Take a look at this sample if you want to see how various Azure components – Web Sites, Cloud Services, and Azure Storage – work together to add background processing to your web application, all hosted in the cloud.


The PhluffyFotos application runs as an Azure Web Site that lets visitors upload images. Those images and their metadata are sent over to a Cloud Service for processing. The Cloud Service picks up information stored in Azure Storage Queues and processes it so it can be stored in Azure Table Storage. The image content itself, as in MyPictures, is stored as binary blobs using Azure Blob Storage. The web site allows for multiple user profiles, which are stored in an Azure SQL Database and accessed using the Universal Profile Providers, available via NuGet. Everything image-centric is actually stored in Azure Storage once it has been processed by the Cloud Service.
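The queue-to-table handoff the worker performs could be sketched roughly like this, using the Azure Storage client library; the queue name, table name, and entity shape are assumptions, not the sample’s own code:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.WindowsAzure.Storage.Table;

// Illustrative worker-role loop: drain the upload queue and write a
// metadata row to Table Storage for each message. Names are assumptions.
public class PhotoProcessor
{
    public void ProcessPending(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient()
                                  .GetQueueReference("uploads");
        CloudTable table = account.CreateCloudTableClient()
                                  .GetTableReference("photometadata");
        table.CreateIfNotExists();

        CloudQueueMessage message;
        while ((message = queue.GetMessage()) != null)
        {
            // The queue message carries the blob name; the metadata
            // becomes a Table Storage row keyed on that name.
            var row = new DynamicTableEntity("photos", message.AsString);
            table.Execute(TableOperation.InsertOrReplace(row));
            queue.DeleteMessage(message);
        }
    }
}
```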

Links and Video

As with all the other Azure Evangelism Team samples and demonstrations, the source code for PhluffyFotos is stored in its very own repository. If you notice any issues with the code, please feel free to use the repository’s Issues feature to let me know what you find. Likewise, if you feel you have a change that’d improve the sample, feel free to make your own fork in which you can change the code, and submit a pull request. The sample is available on the MSDN samples site, too, so you can use that link to submit questions and to download a zip file containing the code for the sample. I’ve also published a Channel9 video that demonstrates, step by step, how you can get the sample running.

I and the rest of the Azure Evangelism Team hope you enjoy this example of what's possible with Azure! Happy Coding!

Running Nancy on Azure Web Sites

I’m a huge fan of finding cleaner ways to build web applications. Simplicity is a really good thing. Nancy is a simple little web framework inspired by Ruby’s Sinatra. Nancy is open source and distributed like any other awesome .NET open source library – via NuGet packages. When a coding challenge landed on my desk this week that I’ll be responsible for prototyping, Nancy seemed like a good option, so I’ve taken the time to tinker with it a little. I’ll spare you the details of the coding assignment for this post. Instead, I’ll focus on the process of getting Nancy working in your own Azure Web Site.

To start with, I’ve created an empty web site using the Azure portal. This site won’t use a database, so I just logged into the portal and selected New –> Web Site –> Quick Create and gave it a pretty logical name.



Once the site was finished cooking, I grabbed the publish profile settings from the site’s dashboard.


Getting Started with Nancy

First, I create an empty ASP.NET web application. To make sure everything’s as clean as it can be, I remove a good portion of the files and folders from the site. Since I’ll be creating only the most basic functionality in this example, I don’t need a lot of extraneous resources and functionality in the site. The project below shows what’s left over once I ransacked the project’s structure. Take note of the Uninstall-Package command I ran in the Package Manager Console. I ran similar commands until I had as bare-bones a project structure as possible. Then I removed some settings from the Web.config until it was quite minimalistic.



To run Nancy on Azure Web Sites, I’ll need to install 2 packages from NuGet. The first of these is the Nancy core package. The second package I’ll need is the one that enables Nancy hosting on ASP.NET.



In my web project I create a new folder called NancyStuff and add a new class called HelloWorldModule.



This class is a very basic Nancy module, and I learned how to create it by perusing the Nancy project’s wiki. I basically want a route that, when called, will just say hello to the user. You don’t get much simpler than this for assigning functionality to a given route. When the user requests the /hello route using an HTTP GET request, the message Hello World is rendered to the response stream. The HelloWorldModule class extends NancyModule. In Nancy, routes are handled by classes which inherit from NancyModule. According to the Nancy documentation, modules are the lynchpin of any given Nancy application.
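Since the screenshot of the class isn’t reproduced here, a module matching that description would look roughly like this, using Nancy’s classic indexer-based routing:

```csharp
using Nancy;

// A minimal Nancy module: GET /hello responds with plain text.
public class HelloWorldModule : NancyModule
{
    public HelloWorldModule()
    {
        // The route handler's return value becomes the response body.
        Get["/hello"] = parameters => "Hello World";
    }
}
```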


At this point, everything should work, so I’ll go ahead and deploy the web site up to Azure. To do this, I select the Publish context menu item from within Visual Studio 2012.



Then, I import the publish settings file using the Import feature on the publishing dialog.



Once I find the .publishsettings file I downloaded from the Azure portal and import it, I’m ready to publish. Clicking the publish button in the dialog at this point will result in the site being deployed up to my Azure Web Site.



The site will open once the deployment has completed and will probably present me with a 404 error, indicating there’s no route configured to answer requests to the root of the site. Changing the URL to hit /hello will result with the module answering the request and doing what I expected it to do:



Enabling Static File Browsing

With this first module functioning properly I want to create a static file from which the Nancy module could be called using jQuery on the client. The idea is, now that I have this Nancy module working, I might want to make calls to it using some AJAX method to display the message it returns to the user. So I add a new static HTML page to my solution.



The problem here is that, since Nancy’s handling requests to my site and using the modules I create to respond to those requests, the ability to browse to static files is… well… sort of turned off. So attempts to hit a seemingly innocent-enough page result in a heinous – yet adorable in a self-loathing sort of way – response.



Thanks to the Nancy documentation, it was pretty easy to find a solution for this behavior. See, Nancy basically intercepts requests to my site, taking precedence over the default behavior of serving up static files when they’re requested. Nancy’s routing engine listens for requests, and if one arrives for which no module has been created and routed, the site has no idea how to handle it.

To solve this problem, I need to create another class that extends the DefaultNancyBootstrapper class. This class, explained pretty thoroughly in the Nancy wiki’s article on managing static content, is what I’ll use to instruct Nancy on how to route to individual static files. For now I only need a class to handle this one particular static file, but setting up a bootstrapper to allow static browsing in a directory is possible, too. Other options exist as well, such as routes that use regular expressions, but that’s something I’ll look at in a later episode of this series. For now, I just want to tell Nancy to serve up the page Default.html whenever a request is made to /Default.html. I’m also enabling static file browsing out of the /scripts folder of the site. The main thing to look at here is the call to the StaticContentsConventions.Add method, into which I pass the name of the file and the route on which it should be served up.
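A sketch of such a bootstrapper, using Nancy’s static content convention builders; the class name is illustrative:

```csharp
using Nancy;
using Nancy.Conventions;

// Teaches Nancy to serve Default.html and anything under /scripts,
// since module routing otherwise swallows static file requests.
public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ConfigureConventions(NancyConventions conventions)
    {
        base.ConfigureConventions(conventions);

        // Serve a single named file on a specific route...
        conventions.StaticContentsConventions.Add(
            StaticContentConventionBuilder.AddFile("/Default.html", "Default.html"));

        // ...and allow browsing of everything in the /scripts folder.
        conventions.StaticContentsConventions.Add(
            StaticContentConventionBuilder.AddDirectory("/scripts"));
    }
}
```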


Now, I’ll add some jQuery code to the static page that calls the HelloWorldModule and displays whatever it responds with in an HTML element on the page.


When the static file loads up in the browser, the jQuery code makes an AJAX request back to the server to the /hello route, and then drops the response right into the page. When I deploy the site again to Azure Web Sites and hit the Default.html file, the behavior is just what I’d expected it would be; the page loads, then the message is obtained via AJAX and displayed.



Hopefully this introduction has demonstrated the support Azure Web Sites has for Nancy. Since Nancy can be hosted under ASP.NET, and since Azure Web Sites support the ASP.NET pipeline, everything works. I’ll continue to tinker with Nancy from here as I work on the coding project for which I chose it to be a puzzle piece. Hopefully during that development work I’ll learn more about Nancy, have the time to demonstrate what I learn, and ease someone’s day when they decide to move their Nancy site over to Azure Web Sites.

Happy coding!

The SiteMonitR Sample

The newest sample from the Azure Evangelism Team (WAET) is a real-time, browser-based web site monitor. The SiteMonitR front-end is blocked out and styled using Twitter Bootstrap, and Knockout.js was used to provide MVVM functionality. A cloud service pings sites on an interval (10 seconds by default, configurable in the worker’s settings) and notifies the web client of the sites’ up-or-down statuses via server-side SignalR conversations. Those conversations are then bubbled up to the browser using client-side SignalR conversations. The client also fires off SignalR calls to the cloud service to manage the storage functionality for the URLs to be monitored. If you’ve been looking for a practical way to use SignalR with Azure, this sample could shed some light on what’s possible.

Architectural Overview

The diagram below walks through the various method calls exposed by the SiteMonitR SignalR Hub. This Hub is accessed by both the HTML5 client application and by the Cloud Service’s Worker Role code. Since SignalR supports both JavaScript and Native .NET client connectivity (as well as a series of other platforms and operating systems), both ends of the application can communicate with one another in an asynchronous fashion.

SiteMonitR Architectural Diagram

Each tier makes a simple request, then some work happens. Once the work is complete, the caller can call events that are handled by the opposite end of the communication. As the Cloud Service observes sites go up and down, it sends a message to the web site via the Hub indicating the site’s status. The moment the messages are received, the Hub turns around and fires events that are handled on the HTML5 layer via the SignalR jQuery plug-in. Given the new signatures and additional methods added in SignalR 0.5.3, the functionality behaves identically on both ends, and the syntax to make it happen is almost identical in native .NET code and in JavaScript. The result is a simple GUI offering a real-time view into any number of web sites' statuses, all right within a web browser. Since the GUI is written using Twitter Bootstrap and HTML5 conventions, it degrades gracefully and performs well on mobile devices.
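To make the flow concrete, a hub sitting between the worker role and the browser might look roughly like the sketch below; the method and event names are my own guesses, not necessarily those in the sample, and the dynamic Clients calls match the SignalR 0.5.x-era API.

```csharp
using SignalR.Hubs;

// Sketch of a SiteMonitR-style hub (names assumed).
public class SiteMonitRHub : Hub
{
    // Called by the cloud service's .NET client when a site's status changes.
    public void SiteStatusChanged(string url, string status)
    {
        // Fires an event every connected browser handles via the jQuery client.
        Clients.siteStatusChanged(url, status);
    }

    // Called by the browser to ask the worker to start watching a new URL.
    public void AddSite(string url)
    {
        // The worker role, also connected as a SignalR client, handles this event.
        Clients.siteAdded(url);
    }
}
```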

Where You Can Get SiteMonitR

As with all the other samples released by the WAET, the SiteMonitR source code is available for download on the MSDN Code Gallery site. You can view or clone the source code in the GitHub repository we set up for the SiteMonitR source. Should you find any changes or improvements you’d like to see, feel free to submit a pull request, too. Finally, if you find anything wrong with the sample, submit an issue via GitHub’s issues tab, and we’ll do what we can to fix the issues reported. The repository contains a Getting Started document that walks you through the whole process – with screen shots – of setting up SiteMonitR live in your very own Azure subscription (if you don’t have one, get a free 90-day trial here).

Demonstration Video

Finally, the video below walks you through the process of downloading, configuring, and deploying SiteMonitR to Azure. In less than 10 minutes you’ll see the entire process, have your very own web site monitoring solution running in the cloud, and you’ll be confident you’ll be the first to know when any of your sites crash since you’ll see their statuses change in real-time. If the video doesn't load properly for you, feel free to head on over to the Channel 9 post containing the video.


A few days after the SiteMonitR sample was released, Matias Wolowski added in some awesome functionality. Specifically, he added the ability for users to add PhantomJS scripts that can be executed dynamically when site statuses are received. Check out his fork of the SiteMonitR repository on GitHub. I'll be reviewing the code changes over the next few days to determine if the changes are low-impact enough that they can be pulled into the main repository, but the changes Matias made are awesome and demonstrate how any of the Azure Evangelism Team's samples can be extended by the community. Great work, Matias!

The CloudMonitR Sample

The next Azure code sample released by the Azure Evangelism Team is the CloudMonitR sample. This code sample demonstrates how Azure Cloud Services can be instrumented and their performance analyzed in real time using a SignalR Hub residing in a web site hosted with Azure Web Sites. Using Twitter Bootstrap and Highcharts JavaScript charting components, the web site provides a real-time view of performance counter data and trace output. This blog post will introduce the CloudMonitR sample and give you some links to obtain it.


Last week I had the pleasure of travelling to Stockholm, Sweden to speak at a great community-run conference, CloudBurst 2012 (as well as a few other events, which will be covered in a future post very soon). I decided to release a new Azure code sample at the conference, and to use the opportunity to walk through the architecture and implementation of the sample with the participants. As promised during that event, this is the blog post discussing the CloudMonitR sample, which you can obtain either as a ZIP file download from the MSDN Code Gallery or directly from its GitHub repository.

Below, you’ll see a screen shot of CloudMonitR in action, charting and tracing away on a running Azure Worker Role.


The architecture of the CloudMonitR sample is similar to a previous sample I recently blogged about, the SiteMonitR sample. Both samples demonstrate how SignalR can be used to connect Azure Cloud Services to web sites (and back again), and both sites use Twitter Bootstrap on the client to make the GUI simple to develop and customizable via CSS.

The point of CloudMonitR, however, is to allow for simplistic performance analysis of single- or multiple-instance Cloud Services. The slide below is from the CloudBurst presentation deck, and shows a very high-level overview of the architecture.


As each instance of the Worker (or Web) Role you wish to analyze comes online, it makes an outbound connection to the SignalR Hub running in an Azure Web Site. Roles communicate with the Hub to send up tracing information and performance counter data to be charted using the Highcharts JavaScript API. Likewise, user interaction initiated on the Azure Web Sites-hosted dashboard to do things like add additional performance counters to observe (or to delete ones no longer needed on the dashboard) is communicated back to the SignalR Hub. Performance counters selected for observation are stored in an Azure Table Storage table, and retrieved as the dashboard is loaded into a browser.

Available via NuGet, Too!

The CloudMonitR solution is also available as a pair of NuGet packages. The first of these packages, the simply-named CloudMonitR package, is the one you’d want to pull down to reference from a Web or Worker Role for which you need the metrics and trace reporting functionality. Referencing this package will give you everything you need to start reporting the performance counter and tracing data from within your Roles.

The CloudMonitR.Web package, on the other hand, won’t bring down a ton of binaries, but will instead provide you with the CSS, HTML, JavaScript, and a few image files required to run the CloudMonitR dashboard in any ASP.NET web site.

The MyPictures Sample

Up next in the new set of samples being produced by the Azure Evangelism Team is the MyPictures sample. This sample uses ASP.NET Web API and jQuery to post images to a Web API site running in Azure Web Sites, where the image files are stored in Azure Blob Storage and their metadata is stored in Azure Table Storage.

If you’re tinkering with Web API and wondering how to use it to do uploads, or if you’ve been looking into how to use Azure Storage and need a good example of how to implement it, this sample is definitely for you. In 15 minutes, you’ll be saving your images to the cloud for retrieval later, all using an HTML 5/jQuery interface that performs the upload and retrieval instantaneously.

As with all the other samples we’ll produce, the MyPictures sample lives in its very own repository, so if you experience any issues or have suggestions you can give us feedback on the sample using GitHub’s issue-reporting features. Likewise, the sample is being made available on the Azure samples site. The video below is also posted on Channel 9, and will walk you through the whole process of setting up the site and deploying it to Azure Web Sites. Get your free Azure account to get started, then grab the code and deploy it using Web Deploy.

The MyTODO Azure Sample

My team has been working diligently for the past few months preparing a series of samples that demonstrate various Azure techniques, technologies, and ideas. We just completed one of these samples – the MyTODO web site project. MyTODO is a simple list-making application written using ASP.NET MVC 4, ASP.NET Web API, jQuery, and jQuery Mobile. The site is quite easy to use in both a desktop web browser and from any mobile device that supports HTML 5 and jQuery Mobile. This blog post will introduce you to the sample and contains a short video that walks you through the process of installing it in your very own Azure Web Site. 

First and foremost, let me guide you to the sample on both the MSDN Samples Gallery site and to the GitHub repository where we’ll be maintaining the MyTODO sample. The Azure Evangelism team uses GitHub repositories for each of the samples we’ll produce. If you notice a bug or something you think would augment the sample, feel free to use the Issues tab in the repository to tell us what you think we should fix or change. If you add to the code in the demo or change something and want your change to find its way into the sample, feel free to submit a pull request, too, as we’re always looking to enhance the quality of our samples with things added by the community. The MyTODO repository is located here, and the MSDN Samples Gallery page is located here, in case you don’t use GitHub.

My teammates Nathan Totten and Cory Fowler are also hard at work coming up with some other samples that will help you learn more about Azure Web Sites, so keep watching their blogs to find out when they publish their samples.

Now, sit back, sip your coffee or your water, and enjoy a short video that walks you through the process of downloading the sample code and getting it deployed to Azure Web Sites. The Getting Started document walks you through this, but we thought a video would be a nice option if you’re in a hurry to get going. Once you create your free Azure account (which will give you 12 months with 10 free web sites), you can download the code or clone the repository, create your own site, and publish the code to your site in under 15 minutes.

Azure Web Sites Log Cleaner

Azure Web Sites customers who run web sites in shared instance mode have a generous amount of storage space to use for multiple web sites. The 1GB limit for free customers is more than enough to get a site up and running with lots of room left over for storage. A site’s log files contribute to the amount of space being used by a site. If a site has a lot of traffic, the log files could grow and eat into the space available for the important assets for a site – the HTML, executables, images, and other relevant assets. That’s the sole purpose of the Azure Web Sites Log Cleaner, a NuGet utility you can grab and use in your site with little impact or work.

Potential Impacts of Not Backing Up Logs

Log analysis is important. It can show all sorts of things, from how much traffic each section of your site receives to whether you’re getting strange requests that indicate some sort of intrusion. If your logs are important to you, there are some important things to remember about Azure Web Sites:


For these reasons, it’s very helpful to set up a utility such as this one to automatically back up and clean out your log files. This utility makes that a snap, and since you can fire it off via the execution of a simple URL, there are countless scheduling options, which I introduce later in this post. For now, let’s take a look at how to get started.

How to Set it Up

Given the convenient features made available via the NuGet team, the installation of this package will make a few changes to your Web.config file. An appSetting node will be added to your file, and an HTTP handler will be registered. The handler, which by default answers requests at the URL /CleanTheLogs.axd, will zip up your log files into a flattened ZIP file, send that ZIP file into Azure Blob Storage in a conveniently-named file for as long as you’ll need it, and delete the originals from your site’s storage space. All you need to do is hit the URL, and magic happens.
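Conceptually, the changes the package makes look something like the fragment below; the exact key and type names belong to the package, so treat these as illustrative placeholders only:

```xml
<configuration>
  <appSettings>
    <!-- placeholders the NuGet package adds for your storage credentials -->
    <add key="LogCleanerStorageAccountName" value="YOUR_ACCOUNT_NAME" />
    <add key="LogCleanerStorageAccountKey" value="YOUR_ACCOUNT_KEY" />
  </appSettings>
  <system.webServer>
    <handlers>
      <!-- the handler that answers requests at /CleanTheLogs.axd -->
      <add name="LogCleaner" verb="GET" path="CleanTheLogs.axd"
           type="AzureWebSitesLogCleaner.LogCleanerHttpHandler" />
    </handlers>
  </system.webServer>
</configuration>
```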

Setup is minimal: create a storage account using the Azure Web Sites portal and put the account name and key into your Web.config file. The NuGet package adds placeholders to the file to point out what you’ll need to add on your own. Once it’s configured, you can set up a scheduled task to hit the URL at intervals, and your logs get backed up while your sites stay tidy.

Placeholders in Web.config

Sounds too easy? If you have any issues or want a little more detailed explanation of exactly what happens, check out the video below. It’ll walk you through the whole process, from creating a site to peeking at the downloaded ZIP file.


There are a multitude of methods you could use to schedule the call to the URL and execute the log cleanup process. You could script it with a PowerShell cmdlet, write some C# code, or use a separate Cron Job service to run the task routinely for you. Nathan Totten wrote a great blog post on how to achieve task scheduling using a Cron Job service. Whatever your choice, one call to a URL does the entire job of saving and clearing your logs.
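As one illustration of how little the scheduling piece involves, a cron-style scheduler pointed at the handler URL only needs a single entry; the host name here is a placeholder for your own site's URL:

```shell
# Back up and clear the logs once an hour.
0 * * * * curl -s http://yoursite.azurewebsites.net/CleanTheLogs.axd > /dev/null
```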

I can’t tell you how often you should back up your logs; each site is different. One thing is guaranteed: if you put a site out there, you’ll get some traffic, even if it’s only robot or search engine traffic, and with traffic comes log data. If you’re getting a lot of traffic, it stands to reason you could reach the 35 MB limit in very little time. With a scheduled job like the one Nathan demonstrates in his post, you could have the log backup URL called as often as every 10 minutes.


You’ve got a lot more room in Azure Storage space than you do in your web site folders, and you can use that space to store your logs permanently without using up your web site space. Hopefully this utility will solve that problem for you. If you have any issues setting it up, submit a comment below or find me on Twitter .


The Azure Web Sites Log Cleaner source code has been published into a public GitHub repository. If you want to see how it works, or you have an interest in changing the functionality in some way, feel free to clone the source code repository.


I came up with a little side project on the plane ride home from Belgium. The world needed a dirt-simple fluent wrapper around Azure Blob Storage, I decided, and this is my first pass at making such a handy resource helper available. I'll get this thing on NuGet in the next few days, but here's a quick run-through of what BlobFu can do for you. Take a closer look at the code, as it's up on GitHub right now.


Azure Blob Storage Fluent Wrapper

A library that makes it easy, via a fluent interface, to interact with Windows Azure Blob Storage. It gives you a very basic start to storing binary blobs in the Azure cloud.

What does BlobFu Do?

Here's the current set of functionality, demonstrated by an NUnit output of the unit tests used to design BlobFu. Note, more tests may be added as the project evolves.

BlobFu Unit Test Run

Using BlobFu Within ASP.NET

Here's the Hello World example to demonstrate one of the best uses for Azure Blob Storage - capturing file uploads. BlobFu makes this pretty simple.

Step 1 - Configure the Azure Blob Storage Connection String

Add an application or web configuration setting with the connection string you'll be using that points to your Azure storage account, as shown below.

Configuring a site or app with the blob connection string

Note: In this example the local development storage account will be used, so make sure you're running your local storage emulator.

running the storage emulator

Step 2 - Create an ASPX Page to Upload Files

Don't forget the enctype attribute. I always forget that, and then the files won't be uploaded. Just sayin'.

HTML form for uploading

Step 3 - Collect the Data

The code below simply collects the file upload and slams it into Azure Blob Storage.

saving blobs to blob storage
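The screen shot of that code-behind may not reproduce here. BlobFu's real fluent API is defined in its repository; purely as an illustration of the shape of the call, the handler might read something like this (the BlobFu method names below are hypothetical):

```csharp
using System;
using System.Configuration;
using System.Web.UI;

public partial class Default : Page
{
    protected void UploadButton_Click(object sender, EventArgs e)
    {
        if (!FileUpload1.HasFile) return;

        // Hypothetical BlobFu-style call; consult the BlobFu source for the real API.
        BlobFuService
            .WithConnectionString(ConfigurationManager.AppSettings["BlobConnectionString"])
            .IntoContainer("uploads")
            .SaveStream(FileUpload1.FileName, FileUpload1.FileContent);
    }
}
```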


Yes, really. Looking at the Azure blob storage account in ClumsyLeaf's CloudXPlorer, you'll see images that are uploaded during testing.

checking the blob account using CloudXPlorer

Have Fun!

Give BlobFu a try. Hopefully it'll ease the process of learning how to get your blobs into Azure. These helper methods can also be used as WebMatrix 2 helpers, so give that a spin (and watch this space for more on this) if you get a moment.

Please let me know of any issues or enhancements you observe (or think about) using the Issues link for this project.

BlobFu is on Nuget

I spent a little more time working on BlobFu. Now, it supports stream and byte-array publication and generic support for uploading and downloading in-memory objects to and from Azure blob storage. Within minutes you could be uploading anything - files or serializable object instances - to your Azure blob storage account. Via NuGet, BlobFu is available for anyone's use. 

To get it, just use Visual Studio or WebMatrix 2 (which also supports NuGet packages) and install the package into your project and start using the BlobFuService class to upload and download files to Azure blob storage. 
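Assuming the package id matches the project name (an assumption on my part), installation from the Package Manager Console would be a one-liner:

```
PM> Install-Package BlobFu
```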

Happy blob storage!

Redirection with Azure Web Sites

Since the release of Azure Web Sites a few months ago, one of the main questions I’m asked pertains to users’ ability to direct traffic to their Azure Web Site using a custom domain name. I’ve found a way to achieve this (sort of). In this blog post you’ll learn how you can use your DNS provider’s administration panel to send traffic to your shared-instance Azure Web Site.

Now, I admit this isn’t a perfect solution. I also apologize to the network guys out there, because this is kindergarten-level trickery. My SEO friends will cringe, too. The problem I’m aiming to solve in this post, though, is in response to a question I had from a few people on Twitter and during demonstrations of the new portal’s web site features.

This is great, but I can’t use my domain names with it. If I can’t do that and I can’t do that while I’m working on a site for a client, do I have to upgrade to a reserved instance right away or is there something I can do while in shared mode to get traffic using a custom domain name?

Ideally, you could set up some sort of redirection (301 or otherwise) to get traffic to your site using your domain management tool of choice, get some traffic, and then make (or obtain) a small investment in upgrading to a reserved instance once things are up and running. Given the free opportunity Microsoft is offering, you’ve got 10 chances to build one site for free for a year. The chances that at least one in ten of your web site ideas will make you enough to pay for a small reserved instance are pretty good, right? If not, or if you’re cool with redirection and just have some silly sites or fun little ideas you want to show off, you can use this domain trick as long as you can stand knowing it’s all really just a clever domain redirection illusion.

I’m a user [and huge fan] of DNSimple for domain management, so I’ll be demonstrating this trick using their administration panel. Your domain management provider probably has something similar to DNSimple’s URL record type (I think it’s a 301 redirect under the hood, but don’t quote me on that). The idea here is this – DNSimple will send traffic that comes in on the custom domain over to the site’s azurewebsites.net address. Let’s set this up and get some traffic coming in!

Create Your Own Azure Web Site

First thing is your web site itself. Go to the Azure web site and set up a free account to give yourself a year of playtime with 10 free sites. Once you’ve logged into the portal, create a new web site – one with a database, one from the application gallery, whatever you choose. This demo will allow for the creation of a simple site, but you’ll then switch over to DNSimple’s administration panel to set some DNS settings. In a few minutes you’ll have a live site, and a live domain name, that directs traffic to your shared-instance Azure Web Site.

Once you’re in the portal just select New, then Web Site, then Quick Create, then give it a URL prefix you’re comfortable using. In this case I own a custom domain, so I’ll use a matching name for the prefix when I create the site in the portal.



Once the site is finished creating it will appear in the list of web sites you have hosted in Azure.



When you click on that site to select it and click the browse button you’ll see the Azure Web Sites domain name, with the custom prefix you provided to create the site.



Setting up a URL Record using

I don’t know if other DNS providers call their redirection records “URL” the way DNSimple does, but if not, your provider probably has something like a 301 redirect. That’s sort of the idea here; we’re just going to redirect traffic permanently to a *.azurewebsites.net domain whenever a request is made to the real domain name. For my domain, no records exist yet, so the advanced editor for this domain has no entries.



Likewise, if I try to browse to the custom domain name, I’ll get an error page. That’s pretty much expected behavior at this point; DNSimple’s DNS servers basically don’t know where to send the request.



Click on the Add Records button, then select the URL option from the context menu. You’ll then see the screen below. You can choose to put in a CNAME prefix here, or just leave it blank. In the case of the screenshot below, requests made to any CNAME of the custom domain will be directed to the site’s azurewebsites.net domain.



Setting the TTL menu to 1 minute will result in the domain name resolving (or redirecting) to your Azure Web Site just a moment or two after you click the Add Record button. Now, when users make a request to your custom domain name, they’ll land on your Azure Web Site. Granted, this is a trick, as it just does a redirection, but if you’ve got a site on Azure Web Sites and you’ve got a custom domain you want to use with that site, and you aren’t ready or can’t yet afford to upgrade to a reserved instance, this could get you through in the meantime. You can get your site up and running, set up the redirection, and start taking orders or showing off your skills on your blog.

Blob Storage of Kinectonitor Images

The Kinectonitor has received a lot of commentary and I’ve received some great ideas and suggestions on how it could be improved. There are a few architectural aspects of it that gave me some heartburn. One of those areas is that I failed to make use of any of Azure’s storage functionality to store the images. This post sums up how Blob Storage was added to the Kinectonitor’s architecture so that images can be stored in the cloud, not on the individual observer sites’ web servers.

So we’re taking all these pictures with our Kinect device. Where are we storing the images? Are they protected by disaster recovery procedures? What if I need to look at historical evidence to see photos of intruders from months or years ago? Where will all of this be stored? Do I have to worry about buying a new hard drive for all those images?

These are legitimate concerns, and they can be solved by storing the photographs taken by the Kinect in the Azure cloud. The video below shows a test harness I use in place of the Kinect device. This tiny application allows me to select an image from my hard drive. The image content is then sent along the same path as the images that would be captured and sent into the cloud by the Kinect. This video demonstrates the code running at debug time. Using ClumsyLeaf’s CloudXplorer client, I then show the files as they’ve been stored in the local development storage account’s Blob storage container.

Now we’ll take a slightly deeper dive into the changes that have been made in this update of the Kinectonitor source code. If you’d like to grab that code, it is available on GitHub.

The Kinectonitor Worker Role

This new project basically serves the purpose of listening for ImageMessage instances. There’s not a whole lot of code in the worker role. We’ll examine its purpose and functionality in more detail in a moment. For now, take a look at the role’s code in the Object Browser to get a quick understanding of what functions the role will provide the overall Kinectonitor architecture.


In the previous release of the code, ImageMessage instances were the only things being used to pass information from the Kinect monitoring client, to the Azure Service Bus, and then back down to the ASP.NET MVC client site. This release of the code simplifies things somewhat, especially around the service bus area. The previous code actually shipped the binary image data into and out of the Azure Service Bus; obviously this sort of arrangement makes for huge message payloads. Before, the communication was required because the images were being stored in the ASP.NET MVC site structure as image files. Now, the image data will be stored in Azure Blob Storage, so all the ASP.NET MVC client sites will need is the URL of the image to be shown to the user in the SignalR-powered HTML observation client.

If you haven’t yet taken in some of the great resources on the Microsoft Azure site, now would be a great time. The introductory how-to on Blob Storage was quite helpful in my understanding of how to do some of the Blob Storage-related functionality. It goes a good deal deeper into the details of how Blob Storage works, so I’ll refer you to that article for a little background.

The worker role does very little handiwork with the Blob Storage. Basically, a container is created in which the images will be saved, and that container’s accessibility is set to public. Obviously the images in the container will be served up in a web browser, so they’ll need to be publicly viewable.


The SaveImageToBlobStorage method, shown below, does the work of building a stream to use to pass the binary data into the Blob Storage account, where it is saved permanently (or until a user deletes it).


Note how the CloudBlob.Uri property exposes the URL where the blob can be accessed. In the case of images, this is quite convenient – all we need to be able to display an image is its URL and we’ve got that as soon as the image is saved to the cloud.
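Since the screen shots don't reproduce here, the sketch below shows what a method like SaveImageToBlobStorage does, using the Microsoft.WindowsAzure.StorageClient API of that era; the container and setting names are my assumptions:

```csharp
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class ImageStore
{
    // Save image bytes to blob storage and return the public URL of the blob.
    public string SaveImageToBlobStorage(byte[] imageBytes, string blobName)
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        var client = account.CreateCloudBlobClient();

        // Create the container and make its blobs publicly viewable,
        // since browsers will request the images directly.
        var container = client.GetContainerReference("kinectimages");
        container.CreateIfNotExist();
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });

        var blob = container.GetBlobReference(blobName);
        using (var stream = new MemoryStream(imageBytes))
        {
            blob.UploadFromStream(stream);
        }

        // The Uri property is all the web client needs to display the image.
        return blob.Uri.AbsoluteUri;
    }
}
```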

Simplifying the Service Bus Usage

As previously mentioned, the image data had been getting sent not only to the cloud, but out of the cloud and then stored in the folder tree of an ASP.NET MVC web site. Not exactly optimal for archival, definitely not for long-term storage. We’ve solved the storage problem by adding in the Blob Storage service, so the next step is to clean up the service bus communication. The sole message type that had been used between all components of the Kinectonitor architecture in the first release was the ImageMessage class, shown below.


Since the ImageMessage class is really only needed when the need exists to pass the binary content of the image, a second class has been added to the messaging architecture. The ImageStoredMessage class, shown below, now serves the purpose of informing the SignalR-driven web observation client that new images have been taken and saved into the cloud.
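The class itself appears in a screen shot in the original post; its shape is roughly the following, with property names being my guesses:

```csharp
using System;

// Rough guess at the new message's shape; actual property names may differ.
public class ImageStoredMessage
{
    public string ImageUrl { get; set; }     // where the stored blob can be viewed
    public DateTime CapturedAt { get; set; } // when the Kinect captured the image
}
```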


With the added event concept of images being stored and the client needing to only know the URL of the last image that’s shown automatically in the browser, the message bus usage is in need of rework. When the worker role spins up, the service bus subscription is established, as was being done directly from within the MVC site previously. The worker role listens for messages that come from the Kinect monitor.


When those messages are received, the worker role saves them to Blob Storage using the PublishBlobStoredImageUrl method that was highlighted earlier.


Finally, one last change that will surely be augmented in a later revision is within the SignalR hub area. Previously, Hubs were wired up through a middle object, a service that wasn’t too thoroughly implemented. That service has been removed, and the Hubs are wired directly to the service bus subscription.


Obviously, this results in all clients seeing all updates. Not very secure, not very serviceable in terms of simplicity and customer separation. The next post in this series will add some additional features around customer segmentation and subscription, as well as potential authentication via the bolted-on Kinect identification code.

JSON-based WCF in Azure

Developers need to grok Azure, especially developers who want to distribute consumption of an application in a web-based API. A great use for Microsoft Azure, obviously, is to use it to host an application’s web service API layer. This post will demonstrate how to host WCF services in an Azure worker role in a manner that will offer REST-like JSON API support.

WCF Can do JSON, and it isn’t Difficult

I promise. Most of the examples you’ll see cover possibly too much of the configuration specifics and focus on how Visual Studio can generate proxies for you. Though that’s a great facet of the IDE, there are some natural limitations to comprehending the moving parts within WCF if you always let the IDE take care of the plumbing. Once you get into implementing JSON services in WCF, the IDE generation can actually make things more difficult. If you'd like to follow along in the Visual Studio solution, you can download it here.

To start with, here’s a snapshot of the Visual Studio solution. Each project should be relatively self-explanatory in purpose. Each will be examined in a moment.


At this point, a closer examination of each project is in order. Don’t worry, there’s not a lot to each project. This won’t take long. You’ll be amazed how easy it could’ve been the whole time, and even more amazed how easy it is to get your JSON APIs into Azure.

Service Contract and Implementation

This example will be a simple and familiar one; the service layer will expose calculator functionality. Hopefully, a later blog post will go into more detail on complex messaging, but for now, a calculator serves the explanation well.

The screen shot below demonstrates the ICalculator service interface. Note it has been decorated with the typical service model attributes, as well as the WebInvoke attribute. Within the WebInvoke attribute’s constructor, JSON is being used as a request and response style.
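In case the screen shot doesn't render here, a minimal version of such a contract looks roughly like this; the Add operation shown is an assumption on my part:

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    [WebInvoke(Method = "GET",
        UriTemplate = "add/{x}/{y}",
        RequestFormat = WebMessageFormat.Json,
        ResponseFormat = WebMessageFormat.Json)]
    int Add(string x, string y); // UriTemplate path parameters must be strings
}
```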


This attribute code simply says “when the application hosts this service, use JSON as the serialization syntax.” It’ll be important that those attributes are on the interface later on in this post. For now, just take a look at the implementation in the screen shot below.


As you’ve seen, the service contract interface is not only where the abstraction is defined, but also where the WCF-specific behavior is declared. The next piece of the puzzle is how the service will be hosted within an Azure worker role process.

Hosting WCF JSON Services in Azure

There are a number of great resources on StackOverflow or CodeProject on the details of hosting a WCF service in Azure, so this article won’t dive too deeply into the intricacies. Rather, this next section will dive right in and get things done.

WCF Services can be hosted in numerous ways. A few options are available via Azure, from hosting in a typical web application environment to being hosted and configured manually in a Worker Role. The code below demonstrates the second option via a worker role class.


The important method, HostService, is collapsed in the screen shot above; it’ll be examined in more detail below. The HostService method does just what its name says – it hosts a WCF service implementation, represented by a decorated WCF interface, in the Azure worker role.


This method just declares a new ServiceHost instance, then sets that instance up on the WebHttp binding so that JSON communication can take place. In particular, the code requires that one configuration step take place. The code that looks at the worker role environment’s endpoints collection is shown below.
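In place of the missing screen shot, here’s a minimal sketch of the pattern being described, assuming a generic HostService helper (the generic shape and naming convention are my assumptions; the post’s actual method may differ): look up the instance endpoint named after the implementation type, build a ServiceHost on a WebHttpBinding, and attach WebHttpBehavior so the WebInvoke/UriTemplate attributes take effect.

```csharp
using System;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Description;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private void HostService<TContract, TImplementation>()
        where TImplementation : TContract
    {
        // Convention: the endpoint in the role configuration is named
        // after the implementation type (e.g. "Calculator").
        var endpoint = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints[typeof(TImplementation).Name];

        var baseAddress = new Uri(
            string.Format("http://{0}", endpoint.IPEndpoint));

        var host = new ServiceHost(typeof(TImplementation), baseAddress);

        // WebHttpBinding + WebHttpBehavior activate the JSON-over-HTTP
        // style declared by the WebInvoke attributes on the contract.
        var serviceEndpoint = host.AddServiceEndpoint(
            typeof(TContract), new WebHttpBinding(), string.Empty);
        serviceEndpoint.Behaviors.Add(new WebHttpBehavior());

        host.Open();
        Trace.WriteLine(string.Format("Service listening at {0}", baseAddress));
    }
}
```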


This stipulates a convention: each service endpoint must be named in the endpoints configuration according to its type’s name. In this example, the Calculator class is the one that’ll actually be placed on the WCF binding. The screen shot below demonstrates how and in which file this can be set using Visual Studio 2010.


At this point, the worker role should be configured and ready to run. If the project is run in debug mode, the compute emulator will open and the trace output will be visible. The screen shot below shows the compute emulator running; the highlighted sections are the custom trace messages the code writes at run-time.


The final step is to write a client to consume the service while it is being hosted in Azure.

Calling via a Unit Test

This service only exposes one small unit of functionality, so it stands to reason a unit test could be a good method of testing the service. Sure, it’s test-after, but that argument is for another time! For now, let’s finish this example up by calling the WCF service being hosted in Azure. Just to prove there are no smoke and mirrors, here’s what the URL from the compute emulator will expose when suffixed with the UriTemplate format from the ICalculator service definition from earlier.


To know about the service, the unit test project will have a reference set to the contracts project. So that the unit test will know how to call the service in the cloud, the unit test project will need a tiny bit of configuration added to its app.config file. The screen shot below demonstrates both of these steps.


I mentioned early on that this post would demonstrate building a proxy without the service references functionality of Visual Studio. The class below demonstrates how this can be achieved. By inheriting from the ClientBase class and implementing the interface exposed by the service, you can create a custom proxy class with a minimal amount of work.
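Here’s a sketch of that proxy in the spirit described, using the hypothetical ICalculator contract from earlier; the endpoint name passed to the base constructor is an assumed app.config endpoint name, and that client endpoint would also need the webHttp behavior configured so JSON is used.

```csharp
using System.ServiceModel;

// A hand-rolled proxy: inherit ClientBase<T> and the contract, then
// delegate each operation to the inner Channel.
public class CalculatorClient : ClientBase<ICalculator>, ICalculator
{
    // "CalculatorClient" is an assumed endpoint name from app.config.
    public CalculatorClient() : base("CalculatorClient") { }

    public int Add(string x, string y)
    {
        return Channel.Add(x, y);
    }
}
```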


Finally, the unit test can be authored.
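A test along these lines might look like the following sketch; it assumes the worker role is running in the compute emulator, the app.config endpoint points at the emulator’s address, and the hypothetical CalculatorClient proxy from above exists.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorServiceTests
{
    [TestMethod]
    public void AddReturnsCorrectSum()
    {
        // Calls the service hosted in the compute emulator over HTTP/JSON.
        using (var client = new CalculatorClient())
        {
            Assert.AreEqual(5, client.Add("2", "3"));
        }
    }
}
```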


When executed, provided the calculator service is being hosted in Azure, the results should be immediate: the test passes, and the calculation yields a correct result.


Looking back at the compute emulator that was running when the unit test client executed, the evidence that the service is handling requests is apparent. The screen shot below highlights a pair of executions from the test run.



This article took a no-nonsense approach to demonstrating how simple it is to host JSON-serialized WCF services in Azure. Azure is an amazing resource for developers who want to expose aspects of their application in open web APIs. Hopefully, this example has demonstrated how easy it is to get started doing just that. Soon, you’ll be RESTing in the Cloud, with your very own WCF JSON APIs.

Download the Visual Studio solution for this article.

The Azure Service Bus Simplifier

One of the things that makes enterprise service development of any type difficult is the requirement to learn each ESB’s programming model. Azure already has a very simple programming model, but for developers getting started with Azure Service Bus programming for the first time who mainly want a simple publish/subscribe-style bus architecture limited to a few types of custom messages, I’ve created a NuGet package called the ServiceBusSimplifier.

If all you need from the Azure Service Bus is a simple implementation of the pub/sub model, there’s no longer a need to learn all the plumbing and deeper detail work. Of course, there’s nothing wrong with learning the plumbing (doing so is encouraged), but for those just getting started with Azure’s topic-based messaging, the ServiceBusSimplifier package might be just the answer. It provides extremely basic access to publishing and subscribing to custom messages through the Azure Service Bus. The video below demonstrates how to use the package, and there are some detailed usage instructions below the video.

Usage and Source Code

If you’re just trying to use the package to simplify your entry into the Azure Service Bus, just pull down the package from NuGet.


If you’d like to peruse the source code, which the remaining sections of this post will dive into one element at a time, or you’d like to add functionality to it, the code is available as a GitHub repository.

Setting up the Connection

To set up the service bus connection, call the Setup method, passing it an instance of the InitializationRequest class, which the package uses to supply authentication and connectivity information to the service bus. The ServiceBus abstraction class offers a fluent interface, so the methods can be chained together if need be.
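Usage might look something like the following sketch; the property names on InitializationRequest are assumptions based on the era’s ACS-style service bus credentials (namespace, issuer name, issuer secret), and the placeholder values would be replaced with your own.

```csharp
// Hypothetical fluent setup; member names are illustrative,
// not copied from the package's actual API surface.
var bus = ServiceBus.Setup(new InitializationRequest
{
    Namespace = "your-servicebus-namespace",
    IssuerName = "owner",
    IssuerSecret = "your-issuer-secret"
});
```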



As mentioned earlier, the ServiceBus abstraction offers very simple usage via self-documenting methods. The code below has been augmented to subscribe to an instance of a custom class.


Note, there are no special requirements or inheritance chain necessary for a class to be passed around within this implementation. The class below is the one being used in this example, and in the GitHub repository.


Finally, here’s the HandleSimpleMessage method from the program class. Note that this could have been passed as an anonymous method rather than a reference to a class member or static member. The video demonstration above shows such a usage, but it’s important to note that a static, instance, or anonymous method can all be passed to the Subscribe method.
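Putting the pieces together, a console subscriber might look like this sketch; SimpleMessage and its Body property are assumed names standing in for the demo message class in the repository.

```csharp
using System;

// A plain class with no special base type, per the post.
public class SimpleMessage
{
    public string Body { get; set; }
}

class Program
{
    static void Main()
    {
        // Chain Setup and Subscribe via the fluent interface; a lambda
        // would work equally well in place of HandleSimpleMessage.
        ServiceBus.Setup(new InitializationRequest { /* credentials */ })
                  .Subscribe<SimpleMessage>(HandleSimpleMessage);

        Console.WriteLine("Listening; press Enter to quit.");
        Console.ReadLine();
    }

    static void HandleSimpleMessage(SimpleMessage message)
    {
        Console.WriteLine("Received: {0}", message.Body);
    }
}
```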



The final piece of this demonstration involves publishing messages into the Azure Service Bus. The code below shows how to publish a message to the bus, using the self-explanatory Publish method.
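In sketch form, publishing is a one-liner against the same fluent abstraction (again assuming the SimpleMessage demo class and a configured ServiceBus instance):

```csharp
// "bus" is the configured ServiceBus instance returned from Setup.
bus.Publish(new SimpleMessage { Body = "Hello from the cloud" });
```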


Hopefully, the ServiceBusSimplifier package will ease your development experience with the Azure Service Bus. Even though the Azure Service Bus is dirt-simple to use, this handy utility library will give your code one-line access to efficient publish/subscribe mechanisms built into the cloud. Happy coding!

The Kinectonitor

Suppose you had some scary-looking hoodlum walking around your house while you were out? You’d want to know about it, wouldn’t you? Take one Kinect, mix in a little Azure Service Bus, sprinkle in some SignalR, add some elbow grease, and you could watch in near-real-time as sinewy folks romp through your living room. Here’s how.

You might not be there (or want to be there) when some maniac breaks in, but it’d be great to have a series of photographs of the dude’s face to aid the authorities in their search for your home stereo equipment. The video below is a demonstration of the code this post will dive into in more detail. I figured it’d give some context to the problem this article will be trying to solve.

I’d really like to have a web page where I could go to see what’s going on in my living room when I’m not there. I know that fancy Kinect I picked up for my kids’ Xbox can do that sort of thing, and I know how to code some .NET. Is it possible to make something at home that’d give me this sort of thing?

Good news! It isn’t that difficult. To start with, take a look at the Kinectonitor Visual Studio solution below.


At a high level, it’ll provide the following functions. The monitor application will watch over a room. When a skeleton is detected in the viewing range, a photograph will be taken using the Kinect camera. The image will be published to the Azure Service Bus, using a topic publish/subscribe approach. An MVC web site will subscribe to the Azure Service Bus topic, and whenever the subscriber receives a ping from the service bus with a new image taken by the Kinect, it will use a SignalR hub to update an HTML client with the new photo. Here’s a high-level architectural diagram of how the whole thing works, end-to-end.

The Kinectonitor Core Project

Within the core project will exist a few common areas of functionality. The idea behind the core project is to provide a domain structure, functional abstraction, and initial implementation of the image publication concept. For all intents and purposes, the Kinect will be publishing messages containing image data to the Azure Service Bus, allowing subscribers (which we’ll get to in a moment) their own autonomy. The ImageMessage class below illustrates the message that’ll be transmitted through the Azure Service Bus.
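The missing screen shot likely showed a small DTO along these lines; the property names here are assumptions based on the surrounding prose, not copied from the repository.

```csharp
using System;

// Hypothetical reconstruction of the message published through the
// Azure Service Bus: raw image bytes plus a capture timestamp.
public class ImageMessage
{
    public byte[] ImageData { get; set; }
    public DateTime CapturedAt { get; set; }
}
```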


A high-level abstraction will be needed to represent consumption of image messages coming from the Azure cloud. The purpose of the IImageMessageProcessor service is to receive messages from the cloud and then notify its own listeners that an image has been received.
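In sketch form, that abstraction could look like the following; the member names are illustrative guesses matching the prose (the post confirms a Process method and an ImageReceived event, but the event-args shape is assumed). ImageMessage is the DTO described just above.

```csharp
using System;

public class ImageMessageEventArgs : EventArgs
{
    public ImageMessage Message { get; set; }
}

// Receives messages from the service bus and raises an event so
// observers (such as the SignalR hub) learn a new image has arrived.
public interface IImageMessageProcessor
{
    event EventHandler<ImageMessageEventArgs> ImageReceived;
    void Process(ImageMessage message);
}
```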


A simple implementation is needed to receive image messages and to notify observers when they’re received. This implementation will allow the SignalR hub, which we’ll look at next, to get updates from the service bus; this abstraction and implementation are the custom glue that binds the service bus subscriber to the web site.


Next up is the MVC web site, in which the SignalR hub is hosted and served up to users in an HTML client.

Wiring Up a SignalR Hub

Just before we dive into the MVC code itself, take a quick look at the solution again and note the ServiceBusSimplifier project. This is a super naïve, demo-class wrapper around the Azure Service Bus that was inspired by the far-more-complete implementation Joe Feser shares on GitHub. I used Joe’s library to get started with the Azure Service Bus and really liked some of his hooks, but his implementation was overkill for my needs, so I borrowed some of his ideas in a tinier implementation. If you’re deep into the Azure Service Bus, though, you should totally look into Joe’s code.


Within the ServiceBusSimplifier project is a class that provides a fluent wrapper abstraction around the most basic Azure Service Bus publish/subscribe concepts. The ServiceBus class (which could probably stand to be renamed) is below, but collapsed; the idea is just to convey how this abstraction is going to simplify things from here on out. I’ll post a link to download the source code for this article in a later section. For now, just understand that the projects will be using this abstraction to streamline development and form a convention around the Azure Service Bus usage.


A few calls are going to be made to the ServiceBus.Setup method, specifically to provide Azure Service Bus authentication details. The classes that represent this sort of thing are below.


Now that we’ve covered the shared code that the MVC site and WPF/Kinect app will use to communicate via the Azure Service Bus, let’s keep rolling and see how the MVC site is connected to the cloud using this library.

In this prototype code, the Global.asax.cs file is edited. A property is added to the web application to expose an instance of the MockImageMessageProcessor (a more complete implementation would probably make use of an IoC container to store an instance of the service) for use later on in the SignalR hub. Once the service instance is created, the Azure Service Bus wrapper is created, and the ImageMessage messages are subscribed to by the site’s MessageProcessor instance’s Process method.


When the application starts up, the instance is created and shared with the web site’s server-side code. The SignalR Hub, then, can make use of that service implementation. The SignalR Hub listens for ImageReceived events coming from the service. Whenever the Hub handles the event, it turns around and notifies the clients connected to it that a new photo has arrived.
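A sketch of the hub wiring, in the style of the early (circa-2012) SignalR API this post would have used, might look like the following. The hub name, client callback name, and the static wire-up approach are all assumptions; early SignalR exposed a dynamic clients collection, so imageReceived here is simply whatever function name the browser-side client registers.

```csharp
using SignalR.Hubs;

public class KinectonitorHub : Hub
{
    // Called once at startup with the processor instance the web
    // application created in Global.asax.cs.
    public static void WireUp(IImageMessageProcessor processor)
    {
        processor.ImageReceived += (sender, e) =>
        {
            // Notify every connected browser that a new photo exists.
            var clients = Hub.GetClients<KinectonitorHub>();
            clients.imageReceived(e.Message);
        };
    }
}
```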


With the Hub created, a simple Index view (and controller action) will provide the user-facing side of the Kinectonitor. The HTML/jQuery code below demonstrates how the client responds when messages arrive. There isn’t much to this part, really. The code just changes the src attribute of an img element in the HTML document, then fades the image in using jQuery sugar.


Now that the web client code has been created, we’ll take a quick look at the Kinect code that captures the images and transmits them to the service bus.

The Kinectonitor Monitor WPF Client

Most of the Kinect-interfacing code comes straight from the samples available with the Kinect SDK download. The main point to examine in the WPF client is how it publishes the image messages into the Azure cloud.

The XAML code for the main form of the WPF app is about as dirt-simple as you could get. It just needs a way to display the image being taken by the Kinect and the skeletal diagram (the code available from the Kinect SDK samples). The XAML for this sample client app is below.


When the WPF client opens, the first step is, of course, to connect to the Kinect device and the Azure Service Bus. The OnLoad event handler below is how this work is done. Note that this code also instantiates a Timer instance. That timer will be used to control the delay between photographs, and will be looked at in a moment.


Whenever image data is collected from the camera it’ll be displayed in the WPF Image control shown earlier. The OnKinectVideoReady handler method below is where the image processing/display takes place. Take note of the highlighted area; this code sets an instance of a BitmapSource object, which will be used to persist the image data to disk later.


Each time the Kinect video image is processed, a new BitmapSource instance is created. Remember the Timer instance from earlier? That timer’s handler method is where the image data is saved to disk and transmitted to the cloud. Note the check being performed on the AreSkeletonsBeingTracked property; that property is the last thing we’ll look at, and it ties the WPF functionality together.


If the Kinectonitor WPF client just continually took snapshots and sent them into the Azure cloud, the eventual price would probably be prohibitive. The idea behind a monitor like this, really, is to capture photos only when people enter at unexpected times. So, the WPF client will watch for skeletons using the built-in Kinect skeleton tracking functionality (and code from the Kinect SDK samples). If a skeleton is being tracked, we know someone’s in the room and that a photo should be taken. Given that the skeleton might be continually tracked for a few seconds (or minutes, or longer), the Kinect will continue to photograph while a skeleton is being tracked. As soon as the tracking stops, a final photo is taken, too. The code that sets the AreSkeletonsBeingTracked property value during the skeleton-ready event handler is below.


Some logic occurs in the setter of the AreSkeletonsBeingTracked property, just to sound the alarms whenever a skeleton is tracked, without having to wait the typical few seconds until the next timer tick.
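A sketch of that setter logic might look like this; TakePhotograph is an assumed helper name standing in for whatever method the WPF client uses to persist and publish the current frame.

```csharp
private bool areSkeletonsBeingTracked;

public bool AreSkeletonsBeingTracked
{
    get { return areSkeletonsBeingTracked; }
    set
    {
        // On the transition from "empty room" to "skeleton tracked",
        // snap a photo immediately instead of waiting for the timer.
        if (!areSkeletonsBeingTracked && value)
        {
            TakePhotograph();
        }

        areSkeletonsBeingTracked = value;
    }
}
```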


That’s it for the code! One more note: it helps if the Kinect is up high or at the front of a room (or both). During development of this article I just placed mine on top of the fridge, next to the kitchen bar where I do a lot of work. It could see the whole room pretty well and picked up skeletons rather quickly. Keep that in mind for your own environment testing phase.



This article brought together a few of the techniques and tools most recently released to .NET developers. With a little creativity and some time, it’s not difficult to use these components to make something pretty neat at home in the physical-computing world. Pairing these disciplines up to create something new (or to do something old in a new way) is great fodder for larger projects using the same technologies later.

If you’d like to view the Kinectonitor GitHub source code, it’s right here.