Rebuilding SiteMonitR with the Azure WebJobs SDK

Some time back, I created the SiteMonitR sample to demonstrate how SignalR could be used to tie together a Web Site and a Cloud Service. Since then Azure has evolved quite rapidly, as have ASP.NET Web API and SignalR. One recent addition to the arsenal of tools available to application developers in Azure is WebJobs. Similar to the traditional Worker Role, a WebJob allows for the continuous or triggered execution of program logic. The main difference between a Worker Role and a WebJob is that the latter runs not in the context of a separate Cloud Service, but as a resource of a Web Site. WebJobs also simplify development of these routine middleware programs, since the only requirement on the developer is to reference the WebJobs NuGet package. Developers can write basic console applications with methods that, when decorated with attributes resident in the WebJobs SDK, will execute at appropriate times or on schedules. You can learn more about the basics of WebJobs via the introductory article, the WebJobs SDK documentation, or from Hanselman’s blog post on the topic.

In this blog post, I’m going to concentrate on how I’ve used WebJobs and some of my other favorite technologies together to re-create the SiteMonitR sample. I’ve forked the original GitHub repository into my own account to provide you with access to the new SiteMonitR code. Once I wrap up this post I’ll also update the MSDN Code Sample for SiteMonitR, so if you prefer a raw download you’ll have that capability. I worked pretty closely with Pranav Rastogi, Mike Stall and the rest of the WebJobs team as I worked through this re-engineering process. They’ve also recorded an episode of Web Camps TV on the topic, so check that out if you’re interested in more details. Finally, Sayed and Mads have developed a prototype of some tooling features that could make developing and deploying WebJobs easier. Take a look at this extension and give us all feedback on it, as we’re trying to conceptualize the best way to surface WebJobs tooling and we’d love to have your input on how to make the whole process easier. 

Application Overview

SiteMonitR is a very simple application written for a very simple situation. I wanted to know the status of my web sites during a period when my previous [non-cloud] hosting provider wasn’t doing so well with keeping my sites live. I wrote this app up and had a monitor dedicated to it, and since then the app has served as a proving ground for each new wave of technology I’d like to learn. This implementation of the application obviously makes use of WebJobs to queue up various points in the workflow of site monitoring and logging. During this workflow the code also updates a SignalR-enabled dashboard to provide constant, real-time visibility into the health of a list of sites. The workflow diagram below represents how messages move throughout SiteMonitR.


The goal for the UX was to keep it elegant and simple. Bootstrap made this easy in the last version of the application, and given the newest release of Bootstrap and the templates Pranav and his team made available to us in Visual Studio 2013, it seemed like a logical choice for this new version of SiteMonitR. I didn’t change much from the last release aside from making it even simpler than before. I’d been listening to the team, and the community, rave about AngularJS, but I hadn’t made the time to learn it, so this seemed like a great opportunity. Reworking this app from Knockout in the previous version showed me how much I enjoy AngularJS. I’ve tried a lot of JavaScript frameworks, and I’m pretty comfortable saying that right now I’m in love with AngularJS. It really is simple, and fun to use.


The entire solution is composed of four projects. Two of these projects are basic console applications I use as program logic for the WebJobs. One is [duh] a Web Application, and the last is a Common project that holds things like helper methods, constants, and configuration code. It’s a pretty basic structure that, when augmented with a few NuGet packages and some elbow grease, makes for a relatively small set of code to have to understand.


With a quick high-level introduction to the application out of the way, I’ll introduce the WebJob methods, and walk through the code and how it works.

Code for Harnessing WebJobs

One of the goals for WebJobs was to provide ASP.NET developers a method of reaching in and doing things with Azure’s storage features without requiring developers to learn much about how Azure storage actually works. The team’s architects thought (rightfully) that by providing convenience attributes covering the abstract use cases behind many common Azure scenarios, more developers would have the basic functionality they need to use storage without actually needing to learn the storage API. Sometimes the requirement of having to learn a new API to use a feature undermines that feature’s usefulness (I know, it sounds crazy, right?).

So, Mike and Pranav and the rest of the team came up with a series of attributes that are explained pretty effectively in their SDK documentation. I’m going to teach via demonstration here, so let’s just dive in and look at the first method. This method, CheckSitesFunction, lives in the executable code for a WebJob that will be executed on a schedule. Whenever the scheduler service wakes up this particular job, the method below will execute with two parameters being passed in. The first parameter references a table of SiteRecord objects; the second is a storage queue into which the code will send messages.


You could’ve probably guessed what I’m going to do next: iterate over all the records in the table, grab the URL of the site that needs to be pinged, then ping it and send the results to a storage queue. The out parameter in this method is actually a queue itself, so the variable resultList below literally represents the list of messages I’m planning on sending into that storage queue.
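Since the original screenshot isn’t reproduced here, a minimal sketch of what this scheduled function might look like follows. It’s patterned on the early WebJobs SDK alpha; the table and queue names, and the PingSite/UpdateDashboard helpers, are assumptions based on the description above, not the actual SiteMonitR source.

```csharp
// Sketch: a scheduled WebJob is just a console app whose function is invoked
// explicitly via JobHost.Call when the scheduler runs the executable.
public class Program
{
    static void Main()
    {
        var host = new JobHost();
        host.Call(typeof(Program).GetMethod("CheckSitesFunction"));
    }

    [NoAutomaticTrigger]
    public static void CheckSitesFunction(
        [Table("siterecords")] IDictionary<Tuple<string, string>, SiteRecord> sites,
        [QueueOutput("siteresults")] out IEnumerable<SiteResult> resultList)
    {
        var results = new List<SiteResult>();
        foreach (var site in sites.Values)
        {
            var result = PingSite(site.Url);  // HTTP GET, capture status + timing
            UpdateDashboard(result);          // push real-time status via the Web API
            results.Add(result);              // each item becomes a queue message
        }
        resultList = results;
    }
}
```

When the method returns, each item assigned to resultList is serialized into its own message on the output queue.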

Now, if you’re obsessive like me you’ll probably have that extra monitor set up just to keep tabs on all your sites, but that’s not the point of this WebJob. As the code executes, I’m also going to call out to the Web API controller in the web site via the UpdateDashboard method. I’ll cover that in more detail later, but that’s mainly to provide the user with real-time visibility into the health of the sites being checked. Realistically, all that really matters is the log data for the site health, which is why I’m sending it to a queue to be processed. I don’t want to slow down the iterative processing by waiting on the logging, so I queue it up and let some other process handle it.


In addition to the scheduled WebJob, there’s another one that runs on events. Specifically, this WebJob wakes up whenever messages land in specific queues it’s observing. The method signatures, with appropriately-decorated attributes specifying which queues to watch and which tables to process, are shown in the code below.


One method in particular, AddSite, runs whenever a user event sends a message into a queue used to receive requests to add sites to the list of sites being watched. The user facilitates this use case via the SiteMonitR dashboard: a message containing a URL is sent to a queue, and this method wakes up and executes. Whenever a user sends a message containing a string URL value for the site they’d like to monitor, the message is saved to the storage table provided in the second parameter. As you can see from the method below, there’s no code that makes explicit use of the storage API or SDK; rather, the table is just an instance of an IDictionary implementer to which I’m adding items.
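A rough sketch of what AddSite could look like is below; the queue and table names are assumptions, and the partition/row key choices are illustrative rather than taken from the SiteMonitR source.

```csharp
// The table binding is just an IDictionary keyed by (PartitionKey, RowKey);
// no storage SDK calls appear anywhere in the method.
public static void AddSite(
    [QueueInput("addsite")] string url,
    [Table("siterecords")] IDictionary<Tuple<string, string>, object> siteTable)
{
    // One partition groups all monitored sites; the row key is the URL
    // itself so each site appears only once.
    siteTable.Add(
        new Tuple<string, string>("sites", url),
        new { Url = url });
}
```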


The SaveSiteLogEntry method below is similar to the AddSite method. It has a pair of parameters. One of these parameters represents the incoming queue watched by this method; the second represents a table into which data will be stored. In this example, however, the first parameter isn’t a primitive type, but rather a custom type I wrote within the SiteMonitR code. This variation shows the richness of the WebJob API; when messages land in this queue they can be deserialized into object instances of type SiteResult that are then handled by this method. This is a lot easier than needing to write my own polling mechanism to sit between my code and the storage queue. The WebJob service takes care of that for me, and all I need to worry about is how I handle incoming messages. That removes a good bit of the ceremony of working with the storage SDK; of course, the trade-off is that I have little to no control over the inner workings of the storage functionality.
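A sketch of the shape described above follows, with assumed queue/table names and SiteResult properties; the key detail is that the host deserializes the queue message into a SiteResult before the method runs.

```csharp
// The WebJobs host converts the JSON queue message into a SiteResult
// instance automatically; the method just writes a log row.
public static void SaveSiteLogEntry(
    [QueueInput("siteresults")] SiteResult result,
    [Table("sitelogs")] IDictionary<Tuple<string, string>, object> logTable)
{
    // Partition the log by site URL; a tick-based row key keeps entries ordered.
    logTable.Add(
        new Tuple<string, string>(result.Url, DateTime.UtcNow.Ticks.ToString()),
        new { result.Url, result.StatusCode, result.CheckedAt });
}
```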

That’s the beauty of it. In a lot of application code, the plumbing doesn’t really matter in the beginning. All that matters is that all the pieces work together.


Finally, there’s one more function that deletes sites. This function, like the others, takes a first parameter decorated with the QueueInput attribute, representing a queue that’s being watched by the program. The final parameters in the method represent two different tables from which data will be deleted. First the site record is deleted, then the logs that have piled up for that site are deleted.
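Such a function might look like the sketch below, assuming (as in the earlier sketches) that sites are keyed by URL and logs are partitioned by URL; these conventions are illustrative, not the actual SiteMonitR code.

```csharp
// One input queue, two table bindings: removing entries from the bound
// dictionaries deletes the corresponding table rows.
public static void DeleteSite(
    [QueueInput("deletesite")] string url,
    [Table("siterecords")] IDictionary<Tuple<string, string>, object> siteTable,
    [Table("sitelogs")] IDictionary<Tuple<string, string>, object> logTable)
{
    // Delete the site record first...
    siteTable.Remove(new Tuple<string, string>("sites", url));

    // ...then every log entry stored under that site's partition.
    var logKeys = logTable.Keys.Where(k => k.Item1 == url).ToList();
    foreach (var key in logKeys)
    {
        logTable.Remove(key);
    }
}
```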


The SiteMonitR Dashboard

The UX of SiteMonitR is built using Web API, SignalR, and AngularJS. This section will walk through this part of the code and provide some visibility into how the real-time updates work, as well as how the CRUD functionality is exposed via a Web API controller. This controller’s add, delete, and list methods are shown below. Note that in this part of the code I’m actually using the Storage SDK via a utility class resident in the Web project.
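The controller’s general shape is sketched below. SiteRecordUtility is a stand-in name for the storage helper class in the Web project, and the action/queue semantics are assumptions drawn from the workflow described earlier.

```csharp
// Sketch of the CRUD surface: reads go through the storage utility,
// writes just drop messages onto the WebJob queues.
public class SitesController : ApiController
{
    private readonly SiteRecordUtility _storage = new SiteRecordUtility();

    // GET api/sites – list all monitored sites
    public IEnumerable<SiteRecord> Get()
    {
        return _storage.GetSites();
    }

    // POST api/sites – enqueue an "add site" request for the WebJob
    public void Post([FromBody] string url)
    {
        _storage.EnqueueAddSite(url);
    }

    // DELETE api/sites – enqueue a "delete site" request for the WebJob
    public void Delete([FromBody] string url)
    {
        _storage.EnqueueDeleteSite(url);
    }
}
```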


Remember the scheduled WebJob from the discussion earlier? During that section I mentioned the Web API side of the equation and how it would be used with SignalR to provide real-time updates to the dashboard from the WebJob code running in a process external to the web site itself. In essence, the WebJob programs simply make HTTP GET/POST calls over to the Web API side to the methods below. Both of these methods are pretty simple; they just hand off the information they obtained during the HTTP call from the WebJob up to the UX by bubbling up events through SignalR.
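Those relay methods could be sketched roughly as follows; the controller, hub, and client-side method names are assumptions standing in for the original screenshot.

```csharp
// Each action just relays the payload it received from a WebJob down to
// browser clients through the SignalR hub context.
public class StatusController : ApiController
{
    [HttpPost]
    public void UpdateSiteStatus(SiteResult result)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<SiteMonitRHub>();
        hub.Clients.All.siteStatusUpdated(result);
    }

    [HttpPost]
    public void UpdateDashboard(string message)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<SiteMonitRHub>();
        hub.Clients.All.dashboardUpdated(message);
    }
}
```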


The SignalR Hub being called actually has no code in it. It’s just a Hub the Web API uses to bubble events up to the UX in the browser.
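An empty hub class is all that’s required for this pattern; the name here is an assumption.

```csharp
// The hub declares no methods of its own; the Web API controllers use its
// context purely to push events from the server down to the browser.
public class SiteMonitRHub : Hub
{
}
```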


The code the WebJobs use to call the Web API makes use of a client I’ll be investigating in more detail in the next few weeks. I’ve been so busy in my own project work I’ve had little time to keep up with some of the awesome updates coming from the Web API team recently. So I was excited to have time to tinker with the Web API Client NuGet’s latest release, which makes it dirt simple to call out to a Web API from client code. In this case, my client code is running in the scheduled WebJob. The utility code the WebJob calls that then calls to the Web API controller is below.
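That utility code might look something like the sketch below, which leans on the Web API Client package’s PostAsJsonAsync extension; the appSetting name and route are assumptions.

```csharp
// The site's base URL comes from configuration (the same appSetting
// configured in the portal later in this post).
public static class DashboardClient
{
    public static async Task UpdateDashboard(SiteResult result)
    {
        using (var client = new HttpClient())
        {
            client.BaseAddress =
                new Uri(ConfigurationManager.AppSettings["SiteMonitRSiteUrl"]);

            // PostAsJsonAsync serializes the SiteResult and POSTs it to the
            // Web API controller that bubbles the event up through SignalR.
            await client.PostAsJsonAsync("api/status/updatesitestatus", result);
        }
    }
}
```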


As I mentioned, I’m using AngularJS in the dashboard’s HTML code. I love how similar AngularJS’s templating is to Handlebars. I didn’t have to learn a whole lot here, aside from how to use the ng-class attribute with potentially multiple values. The data-binding logic on that line defines the color of the box indicating the site’s status. I love this syntax and how easy it is to update UX elements when models change using AngularJS. I don’t think I’ve ever had it so easy, and I’ve really enjoyed the logical nature of AngularJS and how everything just seems to work.


Another thing that’s nice about AngularJS is well explained by Ravi in his blog post on a Better Way of Using ASP.NET SignalR with AngularJS. Basically, since services in AngularJS are singletons and AngularJS has a few great ways of doing injection, one can easily dial in a service that connects to the SignalR Hub. This wrapper service can then fire JavaScript methods that are handled by Angular controllers on the client. This approach felt very much like the DI/IoC patterns I’ve taken for granted in C# but never really used much in JavaScript. Abstracting something like a SignalR connection behind a service is really elegant when performed with AngularJS. The code below shows the service I inject into my Angular controller that does just this: it handles SignalR events and bubbles them up in the JavaScript code on the client.


Speaking of my AngularJS controller, here it is. This first code is how the Angular controller makes HTTP calls to the Web API running on the server side. Pretty simple.


Here’s how the Angular controller handles the events bubbled up from the Angular service that abstracts the SignalR connection back to the server. Whenever events fire from within that service they’re handled by the controller methods, which re-bind the HTML UX and keep the user up-to-date in real-time on what’s happening with their sites.


With that terse examination of the various moving parts of the SiteMonitR application code out of the way, it’s time to learn how it can be deployed.

Deploying SiteMonitR

There are three steps in the deployment process for getting the site and the WebJobs running in Azure, and the whole set of code can be configured in one final step. The first step should be self-explanatory: the site needs to be published to Azure Web Sites. I won’t go through the details of publishing a web site in this post. There are literally hundreds of resources out there on the topic, from Web Camps TV episodes to Sayed’s blog, as well as demo videos on AzureConf, //build/ keynotes, and so on. Suffice it to say that publishing and remote debugging ASP.NET sites in Azure Web Sites is pretty simple. Just right-click, select Publish, and follow the steps.


Once the code is published the two WebJob executables need to be zipped up and deployed. First, I’ll pop out of Visual Studio to the bin/Release folder of my event-driven WebJob. Then I’ll select all the required files, right-click them, and zip them up into a single zip file.


Then in the portal’s widget for the SiteMonitR web site I’ll click the WebJobs tab in the navigation bar to get started creating a new WebJob.


I’ll give the WebJob a name, then select the zip file and specify how the WebJob should run. I’ll select Run Continuously for the event-driven WebJob. It has code in it that blocks while waiting for incoming queue messages, so this selection is the right fit.


Next I’ll zip up the output of the scheduled WebJob’s code.


This time, when I’m uploading the zip file containing the WebJob, I’ll select the Run on a Schedule option from the How to Run drop-down.


Then I’ll set up the schedule for my WebJob. In this example I’m going to be a little obsessive and run the process to check my sites every fifteen minutes. So, every fifteen minutes the scheduled task, which is responsible for checking the sites, will wake up, check all the sites, and enqueue the results of each check so that the results can be logged. If a user were sitting on the dashboard they’d observe this check happen every 15 minutes for each site.


The two WebJobs are listed in the portal once they’re created. Controls for manually executing scheduled WebJobs, as well as for stopping those that are running continuously, appear at the bottom of the portal.



The final step in deployment (like always) is configuration. SiteMonitR was built for Azure, so it can be entirely configured from within the portal. The first step in configuring SiteMonitR is to create a Storage Account for it to use for storing data in tables and for the queues being used to send and receive messages. An existing storage account could be used, since each object created in the storage account is prefixed with the string sitemonitr. That said, some may prefer an isolated storage account. If so, create a storage account in the portal.


Once the storage account has been created, copy the account name and either the primary or secondary key so that you can build a storage account connection string. That connection string needs to be configured, as does the URL of the site (in this case, a live instance of this code), as a connection string and an appSetting, respectively. See below for how to configure these items right from within the Azure portal.


Once these values are configured, the site should be ready to run.

Running SiteMonitR

To test it out, open the site in another browser instance, then go to the WebJobs tab of the site and select the scheduled task. Then, run it from within the portal and you should see the HTML UX reacting as the sites are checked and their status sent back to the dashboard from the WebJob code.


Speaking of dashboards, the WebJobs feature itself has a nice dashboard I can use to check on the status of my WebJobs and their history. I’ll click the Logs link in the WebJobs tab to get to the dashboard.


The WebJobs Dashboard shows all of my jobs running, or the history for those that’ve already executed.



I’m really enjoying my time spent with WebJobs up to this point. At the original time of this post, WebJobs was in its first alpha release stage, so it’s still very much a preview. I’m seeing huge potential for WebJobs for customers who have a small middle tier or a scheduled thing that needs to happen. In those cases a Worker Role could be quite overkill, so WebJobs is a great middle-of-the-road approach to giving web application owners and developers these sorts of scheduled or event-driven programs without standing up a whole separate tier. I’ve really enjoyed playing with WebJobs and learning the SDK, and I look forward to more interesting things coming out in this space.

Writing a Windows Phone 8 Application that uses the Azure Management Libraries for on-the-go Cloud Management

Since the initial release of the Azure Management Libraries (WAML) as Portable Class Libraries (PCLs, or pickles as I like to call them), we’ve had tons of questions from community members like Patriek asking how Windows Phone applications can be written that make use of the cloud management functionality WAML offers. At first glance it would seem that WAML is “unsupported on Windows Phone,” but the reality is that WAML is fully supported by virtue of being a PCL; some external factors simply made it seem just shy of impossible to glue our PCL framework to each of the targeted client platforms. In the case of Phone, there are two blockers:

  1. Since the X509Certificate2 class is unsupported on Windows Phone, developers can’t make use of the certificate-based method of authenticating against the Azure REST API
  2. Since there’s no ADAL implementation that targets Phone, there’s no way to use ADAL’s slick functionality to get an authentication token. The assumption here is that some creative coding is going to have to be done to go deeper into the OAuth stack. Sounds nasty.

Well, that’s never stopped any motivated coder before, right? Armed with some code from Patriek, a problem statement, and a pretty informative blog post written by none other than Vittorio on how he had some fun tying Phone to REST services using AAD, I set out from ground zero to get some code working on my phone that would enable me to see what’s happening in my Azure subscription, across all my sites, services, and data stores. I set out to create a new type of phone app that will give me complete control over everything I’ve got published to my Azure cloud subscription. This post will walk through the code I worked up to get a baseline Windows Phone mobile cloud management app running.

Creating and Configuring the Phone App

First, a little background is required here. As Vittorio’s post makes very clear, the OAuth dance the WP8 code is going to be doing is pretty interesting and challenging, and it results in a great set of functionality, as well as an even greater starting point if this app gives you ideas on what you could make it do next. In fact, if you’re into that whole social coding thing, the code in this post has a GitHub repository right here. The repository contains the full version of the project shown in this screenshot.


The first step is to pull down the NuGet packages I’ll need to write this application. In this particular case I’m thinking of writing a Windows Phone app that I’ll use to manage my Azure Web Sites, Cloud Services, SQL Databases, Storage Accounts, and everything in between. So I’ll right-click my Phone project and select the appropriate packages in the next step.


I’ll go ahead and pull down the entire WAML NuGet set, so I can manage and view all of my asset types throughout my Azure platform subscription. By selecting the Microsoft.Azure.Management.Libraries package, all of the SDK Common and WAML platform NuGets will be pulled down for this application to use.


The Common SDK, shown below in the project’s references window, provides the baseline HTTP, Parsing, and Tracing functionality that all the higher-order WAML components need to function. Next to that is the newest member of the Common SDK tools, the Common Windows Phone reference, which is new in the latest release of the Common SDK package. This new addition provides the high-level functionality for taking token credentials from within the Windows Phone client and being able to attach those credentials to the SDK client code so that calls made to the Azure REST API from a phone are authenticated properly.


If you read my last blog post on using WAAD and WAML together to authenticate WAML-based client applications, you’ll recall there are a few important steps in that post about working through the process of adding an application that can authenticate on behalf of your WAAD tenant. Take a look at that post to get the values for the App Resources dialog in the screenshot below. In essence, you’ll be setting application settings here that you can get from the Azure portal. You’ll need to create an application, then get that app’s client id, your subscription id, and your active directory tenant id or domain name, and place those values into the code’s resources file, as shown below.


The Login View

The first view shows a built-in web browser control. This web browser will be used to direct the user to the login page. From there, the browser’s behavior and events will be intercepted. The authentication data the form contains will then be used to manually authenticate the application directly to AAD.

Reminder – As Vittorio’s post pointed out, you might not want to use this exact code in your production app. This is a prototype to demonstrate putting it all together in the absence of some important resources. Something better will come along soon; for now, the OAuth dance is sort of up to me.

Note the web browser control has already been event-bound to a method named webBrowser_Navigating, so each time the browser tries to make a request this method handler will run first, and allow my code to intercept the response coming back to the browser.


Once the login page loads up, I’ll presume a login is in order, so I’ll go ahead and navigate to the login page, passing in a few of my own configuration parameters to make sure I’m hitting my own AAD tenant.


Vittorio deep-dives on the workflow of this code a lot better than I’ll try to do here. The basic idea is that the browser flow is being intercepted. Once the code realizes that AAD has authenticated me it hands back a URL beginning with the string I set for the redirectUrl property of this application in the portal. When my code sees my redirectUrl come through, I just run the GetToken() method to perform the authentication. It posts some data over the wire again, and this time, the response activity is handled by the RetrieveTokenEndpointResponse() event handler.
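A rough sketch of the interception logic is below. It’s patterned after Vittorio’s post rather than copied from the SiteMonitR companion code; the App-level property names (RedirectUrl, Tenant, ClientId) and the endpoint format are assumptions.

```csharp
// When AAD finishes authenticating, it redirects to our registered
// redirectUrl with ?code=... attached; that's our cue to take over.
private void webBrowser_Navigating(object sender, NavigatingEventArgs e)
{
    if (e.Uri.ToString().StartsWith(App.RedirectUrl))
    {
        e.Cancel = true;  // stop the browser; we handle it from here

        // Pull the authorization code out of the query string.
        var code = e.Uri.Query.TrimStart('?')
            .Split('&')
            .Select(pair => pair.Split('='))
            .First(pair => pair[0] == "code")[1];

        GetToken(code);
    }
}

private void GetToken(string authorizationCode)
{
    // POST the authorization code to the tenant's token endpoint; the
    // response is handled by RetrieveTokenEndpointResponse.
    var request = (HttpWebRequest)WebRequest.Create(
        string.Format("https://login.windows.net/{0}/oauth2/token", App.Tenant));
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";

    request.BeginGetRequestStream(ar =>
    {
        using (var stream = request.EndGetRequestStream(ar))
        {
            var body = string.Format(
                "grant_type=authorization_code&client_id={0}&code={1}&redirect_uri={2}",
                App.ClientId, authorizationCode, Uri.EscapeDataString(App.RedirectUrl));
            var bytes = Encoding.UTF8.GetBytes(body);
            stream.Write(bytes, 0, bytes.Length);
        }
        request.BeginGetResponse(RetrieveTokenEndpointResponse, request);
    }, null);
}
```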


Once this handler wakes up to process the HTTP response received from the AAD authentication workflow (the OAuth dance), my access token is pulled out of the response. That token value is then persisted via one of my App’s properties, and the next view in the application is shown.


When the user first opens the app, the obvious first step is visible. We need to show the user the AAD login site, and allow the user to manually enter their email address and password. In this window, I could use a Microsoft (Live/Hotmail/etc.) account or an organizational account. With the latter, users who have been marked as Azure subscription co-admins would be able to log in and make changes, too.


Once the user logs in they’ll be shown a list of their subscriptions, so let’s take a look at how the code in the subscriptions view is constructed.

Selecting a Subscription

The subscription selection view just shows a list of subscriptions to the user in a long scrolling list. This list is data-bound to the results of a REST API call made via the SubscriptionClient class, which goes out and gets the list of subscriptions a user can view.

Note – even though I can see my full list of subscriptions, I’ve only installed the application into one AAD tenant in one of my subscriptions. If I click any of the other subscriptions, things won’t work so well. A subscription and a tenant are tied together. Unless your application gets special permission (it does happen from time to time, I hear), your application can only access Azure assets in the same subscription. If you wanted to work across subscriptions you’d have a little more work to do.

The XAML code below shows how the list of subscriptions will be data-bound to the view. I may eventually go into this code and add some view/hide logic that would show only those subscriptions I could impact using the application settings I’ve got running in this app. Point is, I can see all of the subscriptions I can manage and conceptually manage them with some additional coding.


The code below is what executes to log a user in with their stored access token and subscription id; the TokenCloudCredentials class wraps both up as a single construction parameter accepted by most of the management clients. Since I was able to get a token from the OAuth dance with AAD, and since I know which subscription ID I want to affect, I’ve got everything I need to start working against the rest of the Service Management REST API endpoints.
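In sketch form, that credential setup could look like this; the App-level property names are assumptions standing in for wherever the token and subscription id were persisted.

```csharp
// The token captured during the OAuth dance plus the selected subscription
// id are all TokenCloudCredentials needs.
var credentials = new TokenCloudCredentials(
    App.SelectedSubscriptionId,   // chosen on the subscriptions view
    App.AccessToken);             // captured from the token endpoint response

// The same credentials instance can be handed to each management client.
var webSiteClient = new WebSiteManagementClient(credentials);
var storageClient = new StorageManagementClient(credentials);
```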


When the subscriptions page loads, it gives me the chance to select the subscription I want to manage. Note: I just need to be cautious and only click the subscription that matches the subscription where my AAD tenant is set up. In my case, I’ve got the application installed into my AAD tenant that lives in the subscription named Azure MSDN – Visual Studio Ultimate. If I were to select any of my other subscriptions I’d get an exception.

To have true parity across all subscriptions you’d need an application set up in each of your directories along with some additional configuration, or you’d have to allow other tenants to participate in multi-tenancy with your AAD tenant.


Once the subscription list is data-bound the user can select a subscription to work with. Selection of a subscription item in the list fires the following code.


It persists the subscription id of the subscription the user selected, and then navigates the user to the AssetListView page, which shows more details on what’s in the user’s Azure subscription.

Viewing Cloud Assets via Mobile Phone

To provide a simple method of being able to scroll through various asset types rather than emulate the Azure portal, I made use of the LongListSelector and Pivot controls. In each Pivot I display the type of Azure asset being represented by that face of the pivot control. Web Sites, slide, Storage, slide, SQL, slide, Cloud Services… and so on. The idea for the phone prototype was to show how WAML could be used to load ViewModel classes that would then be shown in a series of views to make it easy to get right to a particular asset in your Azure architecture to see details about it in a native Phone interface.


Since the assets list page is primarily just a logically-grouped series of lists for each of the types of assets one can own in their Azure stack, the ViewModel class’s main purpose is to expose everything that will be needed on the view. It exposes Observable Collections of each type of Azure asset the app can display.


The code in the code-behind of the view ends up doing most of the data acquisition from the Azure REST APIs via WAML calls (functionality that would arguably be better placed in a controller). In the code sample below the GetWebSites method is highlighted, but the others – GetSqlDatabases, GetStorageAccounts, and GetHostedServices – all do the same sorts of things. Each method creates instances of the appropriate management client classes, retrieves asset lists, and data-binds the lists so that the assets are all visible in the Windows Phone client.
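The GetWebSites pattern could be sketched roughly as follows; the view-model shape is an assumption, while the WebSpaces calls are the standard WAML way to enumerate sites.

```csharp
// Enumerate web spaces, list the sites in each, and data-bind the results.
private async Task GetWebSites(TokenCloudCredentials credentials)
{
    using (var client = new WebSiteManagementClient(credentials))
    {
        var webSpaces = await client.WebSpaces.ListAsync();
        foreach (var webSpace in webSpaces)
        {
            var sites = await client.WebSpaces.ListWebSitesAsync(
                webSpace.Name, new WebSiteListParameters());

            foreach (var site in sites)
            {
                // The ObservableCollection keeps the LongListSelector current.
                ViewModel.WebSites.Add(site);
            }
        }
    }
}
```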


The web sites pivot is shown below. I’m thinking of adding a deeper screen here, one that would show site statistics over the last N days, and maybe an appSetting and connectionString editor page, or something else. Have any ideas?


Here’s a list of my cloud services – 2 empty PaaS instances and 1 newly set-up VM instance.



Though the OAuth dance has a few drawbacks and the code is a little less elegant than when I could make use of ADAL in my previous example, it works. Once I have the token information I need from the browser, I can run with it by creating an instance of TokenCloudCredentials, passing in the token I received from the OAuth dance, to authenticate WAML and therefore make calls out to the Azure Service Management API to automate my asset management. Since WAML is a PCL package, and as a result of the introduction of support for the Windows Phone platform into the SDK Common NuGet package, the community can now freely experiment with solutions combining Windows Phone and the Management Libraries.

Happy Coding!