The posts tagged with 'SignalR' are listed below. You can get to each post by clicking the title in the list.


Web Camps Dallas

I wanted to let you know about an event I’ll be speaking at in early April. Along with my good friend and mentor Scott Hunter, “el master’o’ASP.NET,” I'll be presenting at the Web Camps in Dallas, Texas. I’d love to see you there.

web-camps

Dallas isn’t the only stop for the Web Camps tour. Our other good buddy, Jon Galloway, has worked hard creating this year’s Web Camps content, and has been out on the road promoting all the awesome new stuff we’ve released with the ASP.NET framework and tools. Along with Mr. Hunter, I’ll be presenting demonstrations and content related to all of the awesome topics below:

 

So if you’re an ASP.NET developer, or you’ve been thinking about learning more about the new stuff available in the ASP.NET stack, come on out and join Scott Hunter and me on Friday, April 5 in Dallas, TX. We’ve got a whole series of these events lined up in lots of other places, so check out the other areas where we’ll be heading and join us at one of those great events, too.

 

Jon, the rest of the team, and I welcome you to this great series of events. We hope to see you there!


The SiteMonitR Sample

The newest sample from the Azure Evangelism Team (WAET) is a real-time, browser-based web site monitor. The SiteMonitR front-end is blocked out and styled using Twitter Bootstrap, and Knockout.js was used to provide MVVM functionality. A cloud service pings sites on an interval (10 seconds by default, configurable in the worker’s settings) and notifies the web client of the sites’ up-or-down statuses via server-side SignalR conversations. Those conversations are then bubbled up to the browser using client-side SignalR conversations. The client also fires off SignalR calls to the cloud service to manage the storage functionality for the URLs to be monitored. If you’ve been looking for a practical way to use SignalR with Azure, this sample could shed some light on what’s possible.

Architectural Overview

The diagram below walks through the various method calls exposed by the SiteMonitR SignalR Hub. This Hub is accessed by both the HTML5 client application and by the Cloud Service’s Worker Role code. Since SignalR supports both JavaScript and native .NET client connectivity (as well as a series of other platforms and operating systems), both ends of the application can communicate with one another in an asynchronous fashion.

SiteMonitR Architectural Diagram

Each tier makes a simple request, then some work happens. Once the work is complete, the caller can fire events that are handled by the opposite end of the communication. As the Cloud Service observes sites going up and down, it sends a message to the web site via the Hub indicating each site’s status. The moment the messages are received, the Hub turns around and fires events that are handled on the HTML5 layer via the SignalR jQuery plug-in. With the new signatures and additional methods added in SignalR 0.5.3, not only is the behavior identical on both ends, the syntax used in native .NET code and in JavaScript is almost identical as well. The result is a simple GUI offering a real-time view into the statuses of any number of web sites, all right within a web browser. Since the GUI is written using Twitter Bootstrap and HTML5 conventions, it degrades gracefully and performs well on mobile devices.
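To make that flow a little more concrete, here is a minimal sketch of what a hub like this could look like, written against the 0.5-era SignalR server API. The method and client callback names (SiteStatusChanged, siteStatusChanged, AddSite, siteAdded) are my own placeholders, not the actual SiteMonitR names.

    using SignalR.Hubs;

    // Hypothetical sketch of a monitoring hub; names are illustrative only.
    public class MonitorHub : Hub
    {
        // Called by the worker role's .NET SignalR client when a site's status changes.
        public void SiteStatusChanged(string url, bool isUp)
        {
            // Bubble the server-side conversation up to every connected browser.
            Clients.siteStatusChanged(url, isUp);
        }

        // Called by the browser client to add a URL to the list being monitored.
        public void AddSite(string url)
        {
            // The storage work (persisting the URL) would happen here in the real sample.
            Clients.siteAdded(url);
        }
    }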

Where You Can Get SiteMonitR

As with all the other samples released by the WAET, the SiteMonitR source code is available for download on the MSDN Code Gallery site. You can view or clone the source code in the GitHub.com repository we set up for the SiteMonitR source. Should you find any changes or improvements you’d like to see, feel free to submit a pull request, too. Finally, if you find anything wrong with the sample, submit an issue via GitHub’s issue tab, and we’ll do what we can to fix the issues reported. The GitHub.com repository contains a Getting Started document that walks you through the whole process – with screen shots – of setting up SiteMonitR live in your very own Azure subscription (if you don’t have one, get a free 90-day trial here).

Demonstration Video

Finally, the video below walks you through the process of downloading, configuring, and deploying SiteMonitR to Azure. In less than 10 minutes you’ll see the entire process, have your very own web site monitoring solution running in the cloud, and be confident you’ll be the first to know when any of your sites crash, since you’ll see their statuses change in real time. If the video doesn't load properly for you, feel free to head on over to the Channel 9 post containing the video.

Updates

A few days after the SiteMonitR sample was released, Matias Wolowski added some awesome functionality. Specifically, he added the ability for users to add PhantomJS scripts that can be executed dynamically when site statuses are received. Check out his fork of the SiteMonitR repository on GitHub.com. I'll be reviewing the code changes over the next few days to determine if the changes are low-impact enough that they can be pulled into the main repository, but the changes Matias made are awesome and demonstrate how any of the Azure Evangelism Team's samples can be extended by the community. Great work, Matias!


The CloudMonitR Sample

The next Azure code sample released by the Azure Evangelism Team is the CloudMonitR sample. This code sample demonstrates how Azure Cloud Services can be instrumented and their performance analyzed in real time using a SignalR Hub residing in a web site hosted with Azure Web Sites. Using Twitter Bootstrap and Highcharts JavaScript charting components, the web site provides a real-time view of performance counter data and trace output. This blog post will introduce the CloudMonitR sample and give you some links to obtain it.

Overview

Last week I had the pleasure of travelling to Stockholm, Sweden to speak at a great community-run conference, CloudBurst 2012 (as well as a few other events, which will be covered in a future post very soon). I decided to release a new Azure code sample at the conference, and to use the opportunity to walk through the architecture and implementation of the sample with the participants. As promised during that event, this is the blog post discussing the CloudMonitR sample, which you can obtain either as a ZIP file download from the MSDN Code Gallery or directly from its GitHub.com repository.

Below, you’ll see a screen shot of CloudMonitR in action, charting and tracing away on a running Azure Worker Role.

CloudMonitR dashboard screen shot

The architecture of the CloudMonitR sample is similar to a previous sample I recently blogged about, the SiteMonitR sample. Both samples demonstrate how SignalR can be used to connect Azure Cloud Services to web sites (and back again), and both samples use Twitter Bootstrap on the client to make the GUI simple to develop and customizable via CSS.

The point of CloudMonitR, however, is to allow for simplistic performance analysis of single- or multiple-instance Cloud Services. The slide below is from the CloudBurst presentation deck, and shows a very high-level overview of the architecture.

CloudMonitR

As each instance of the Worker (or Web) Role you wish to analyze comes online, it makes an outbound connection to the SignalR Hub running in an Azure Web Site. Roles communicate with the Hub to send up tracing information and performance counter data to be charted using the Highcharts JavaScript API. Likewise, user interaction initiated on the Azure Web Sites-hosted dashboard, such as adding additional performance counters to observe (or deleting ones no longer needed on the dashboard), is communicated back to the SignalR Hub. Performance counters selected for observation are stored in an Azure Table Storage table, and retrieved as the dashboard is loaded into a browser.
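For a rough idea of what the role side of that conversation could look like, here is a small sketch using the SignalR .NET client. The hub name, method name, URL, and counter choice are illustrative assumptions rather than the names CloudMonitR actually uses, and the exact client API surface (CreateProxy versus CreateHubProxy, for example) varies between SignalR versions of that era.

    using System;
    using System.Diagnostics;
    using System.Threading;
    using SignalR.Client.Hubs;

    class CounterReporter
    {
        static void Main()
        {
            // Connect outbound from the role instance to the hub hosted in the Azure Web Site.
            var connection = new HubConnection("http://cloudmonitr.azurewebsites.net/");
            var hub = connection.CreateProxy("cloudMonitRHub"); // CreateHubProxy in later versions
            connection.Start().Wait();

            // Sample a performance counter and push the reading up to the dashboard.
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            while (true)
            {
                hub.Invoke("ReportCounter", Environment.MachineName, cpu.NextValue());
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }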

Available via NuGet, Too!

The CloudMonitR solution is also available as a pair of NuGet packages. The first of these packages, the simply-named CloudMonitR package, is the one you’d want to pull down to reference from a Web or Worker Role for which you need the metrics and trace reporting functionality. Referencing this package will give you everything you need to start reporting the performance counter and tracing data from within your Roles.

The CloudMonitR.Web package, on the other hand, won’t bring down a ton of binaries, but will instead provide you with the CSS, HTML, JavaScript, and a few image files required to run the CloudMonitR dashboard in any ASP.NET web site.


Codemash SignalR Talk

If you attended my SignalR talk at Codemash, thank you so much for your time. I had a blast at Codemash, and really enjoyed the talk. It was a great audience with some good conversations in the hallway afterward. If you attended the session and want to get your hands on the deck and code, this is the place. 

The deck and code are here!

Safe travels home, everyone! I look forward to next year's Codemash!


Blob Storage of Kinectonitor Images

The Kinectonitor has drawn a lot of commentary, and I’ve received some great ideas and suggestions on how it could be improved. There are a few architectural aspects about it that gave me some heartburn. One of those areas is that I failed to make use of any of Azure’s storage functionality to store the images. This post sums up how Blob Storage was added to the Kinectonitor’s architecture so that images could be stored in the cloud, not on the individual observer site’s web servers.

So we’re taking all these pictures with our Kinect device. Where are we storing the images? Are they protected by disaster recovery procedures? What if I need to look at historical evidence to see photos of intruders from months or years ago? Where will all of this be stored? Do I have to worry about buying a new hard drive for all those images?

These are legitimate concerns, and they can be solved by storing the photographs taken by the Kinect in the Azure cloud. The video below shows a test harness I use in place of the Kinect device. This tiny application allows me to select an image from my hard drive. The image content is then sent along the same path as the images that would be captured and sent into the cloud by the Kinect. This video demonstrates the code running at debug time. Using ClumsyLeaf’s CloudXplorer client I then show the files as they’ve been stored in the local development storage account’s Blob storage container.

Now we’ll take a slightly deeper dive into the changes that have been made in this update of the Kinectonitor source code. If you’d like to grab that code, it is available on GitHub.

The Kinectonitor Worker Role

This new project basically serves the purpose of listening for ImageMessage instances. There’s not a whole lot of code in the worker role. We’ll examine its purpose and functionality in more detail in a moment. For now, take a look at the role’s code in the Object Browser to get a quick understanding of what functions the role will provide the overall Kinectonitor architecture.

image

In the previous release of the code, ImageMessage instances were the only things being used to pass information from the Kinect monitoring client, to the Azure Service Bus, and then back down to the ASP.NET MVC client site. This release of the code simplifies things somewhat, especially around the service bus area. The previous code actually shipped the binary image data into and out of the Azure Service Bus; obviously this sort of arrangement would mean moving a huge amount of data back and forth. Before, that communication was required because the images were being stored in the ASP.NET MVC site structure as image files. Now, the image data will be stored in Azure Blob Storage, so all the ASP.NET MVC client sites will need is the URL of the image to be shown to the user in the SignalR-powered HTML observation client.

If you haven’t yet taken in some of the great resources on the Microsoft Azure site, now would be a great time. The introductory how-to on Blob Storage was quite helpful in my understanding of how to do some of the Blob Storage-related functionality. It goes a good deal deeper into the details of how Blob Storage works, so I’ll refer you to that article for a little background.

The worker role does very little handiwork with Blob Storage. Basically, a container is created in which the images will be saved, and that container’s accessibility is set to public. Obviously the images in the container will be served up in a web browser, so they’ll need to be publicly viewable.

image
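Here's roughly what that container setup might look like, assuming the Azure storage client library of that era (v1.x); the container name and connection-string setting name are my own assumptions, not necessarily what the Kinectonitor source uses.

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.StorageClient;

    public static class ImageContainerSetup
    {
        // Create (if needed) a publicly readable container to hold the Kinect snapshots.
        public static CloudBlobContainer EnsureContainer()
        {
            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
            var container = account.CreateCloudBlobClient().GetContainerReference("kinectimages");
            container.CreateIfNotExist();
            container.SetPermissions(new BlobContainerPermissions
            {
                // Images need to be viewable directly in a browser.
                PublicAccess = BlobContainerPublicAccessType.Blob
            });
            return container;
        }
    }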

The SaveImageToBlobStorage method, shown below, does the work of building a stream to use to pass the binary data into the Blob Storage account, where it is saved permanently (or until a user deletes it).

image
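A minimal version of that method could look something like the sketch below, again assuming the v1.x storage client; the real Kinectonitor code differs in its details, so treat the blob naming and the byte-array upload here as illustrative.

    using System;
    using Microsoft.WindowsAzure.StorageClient;

    public static class BlobImageStore
    {
        // Save the raw image bytes to Blob Storage and hand back the blob's public URL.
        public static Uri SaveImageToBlobStorage(CloudBlobContainer container, byte[] imageBytes)
        {
            var blob = container.GetBlobReference(Guid.NewGuid().ToString("N") + ".jpg");
            blob.UploadByteArray(imageBytes);
            return blob.Uri; // everything a browser needs to display the image
        }
    }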

Note how the CloudBlob.Uri property exposes the URL where the blob can be accessed. In the case of images, this is quite convenient – all we need to be able to display an image is its URL and we’ve got that as soon as the image is saved to the cloud.

Simplifying the Service Bus Usage

As previously mentioned, the image data had been getting sent not only to the cloud, but out of the cloud and then stored in the folder tree of an ASP.NET MVC web site. Not exactly optimal for archival, definitely not for long-term storage. We’ve solved the storage problem by adding in the Blob Storage service, so the next step is to clean up the service bus communication. The sole message type that had been used between all components of the Kinectonitor architecture in the first release was the ImageMessage class, shown below.

image

Since the ImageMessage class is really only needed when the need exists to pass the binary content of the image, a second class has been added to the messaging architecture. The ImageStoredMessage class, shown below, now serves the purpose of informing the SignalR-driven web observation client that new images have been taken and saved into the cloud.

image
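For reference, a class along these lines is all that's needed; the property names here are my guesses at the shape of the message, not a copy of the actual Kinectonitor class.

    using System;

    // Lightweight notification: "an image was captured and is now stored at this URL."
    public class ImageStoredMessage
    {
        public string ImageUrl { get; set; }
        public DateTime CapturedAt { get; set; }
    }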

With the new concept of an image-stored event, and with the client only needing the URL of the latest image to show automatically in the browser, the service bus usage needed some rework. When the worker role spins up, the service bus subscription is established, as was previously being done directly from within the MVC site. The worker role then listens for messages that come from the Kinect monitor.

image

When those messages are received, the worker role saves the image to Blob Storage and then publishes the stored image's URL back out using the PublishBlobStoredImageUrl method.

image
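Taken together, the worker role's handling amounts to something like the sketch below, reusing the helper sketches above. The static Subscribe and Publish calls only loosely stand in for the custom service bus wrapper used throughout the Kinectonitor code, and the ImageBytes property on ImageMessage is a hypothetical name, so treat the signatures here as placeholders rather than the real API.

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Hypothetical sketch of the worker role's message handling.
    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            var container = ImageContainerSetup.EnsureContainer();

            ServiceBus.Subscribe<ImageMessage>(message =>
            {
                // Persist the raw bytes to Blob Storage...
                var uri = BlobImageStore.SaveImageToBlobStorage(container, message.ImageBytes);

                // ...then announce where the stored image lives so the web site can show it.
                ServiceBus.Publish(new ImageStoredMessage
                {
                    ImageUrl = uri.ToString(),
                    CapturedAt = DateTime.UtcNow
                });
            });

            while (true)
            {
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }
    }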

Finally, one last change that will surely be augmented in a later revision is within the SignalR hub area. Previously, Hubs were wired up through a middle object, a service that wasn’t too thoroughly implemented. That service has been removed, and the Hubs are wired directly to the service bus subscription.

image

Obviously, this results in all clients seeing all updates. Not very secure, not very serviceable in terms of simplicity and customer separation. The next post in this series will add some additional features around customer segmentation and subscription, as well as potential authentication via the bolted-on Kinect identification code.


Testing SignalR Connections with NUnit

The SignalR GitHub site has an example wherein a SignalR PersistentConnection instance is used from a non-HTML client. The idea of being able to use SignalR connections in applications other than those that run in a web browser raises some interesting challenges. At the same time, there aren’t too many examples of how to use SignalR connections. This post will demonstrate asynchronously testing a SignalR connection in an end-to-end scenario using NUnit.

Creating Custom SignalR Connections

The first step is to create a custom SignalR connection. For the purposes of this example a simple connection implementation will do just fine. Borrowing from the example on the SignalR GitHub site is the EchoConnection class below.

The implementation of the EchoConnection should be relatively self-explanatory. Whenever the connection receives a message from a client, it turns around and sends that message out to all the connected clients using the Broadcast method.

image
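A sketch of that class looks something like the following; note that the exact OnReceivedAsync override signature changed between early SignalR versions, so treat this as an approximation rather than a drop-in copy of the sample.

    using System.Threading.Tasks;
    using SignalR;
    using SignalR.Hosting;

    // Echoes every message it receives back out to all connected clients.
    public class EchoConnection : PersistentConnection
    {
        protected override Task OnReceivedAsync(IRequest request, string connectionId, string data)
        {
            return Connection.Broadcast(data);
        }
    }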

Unlike SignalR Hubs, which are automatically wired up and routed, SignalR Connection classes aren’t (at least, I don’t know that they are). There’s an additional step with SignalR connections; they must be routed similarly to how MVC routes or WCF Web API routes are set up. The code below is from an MVC 3.0 application’s Global.asax.cs file, but you could accomplish the same sort of thing using a custom module.
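In the SignalR builds of that era, the registration looked roughly like the following; MapConnection is the extension method that wires a PersistentConnection to a URL, and the route name and URL pattern here are just examples.

    using System.Web;
    using System.Web.Routing;
    using SignalR;

    public class Global : HttpApplication
    {
        protected void Application_Start()
        {
            // Expose the EchoConnection over HTTP at ~/echo before MVC registers its routes.
            RouteTable.Routes.MapConnection<EchoConnection>("echo", "echo/{*operation}");
        }
    }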

The point here is to define a route that will provide HTTP access to this connection class. Next, the client code needs to be written. That’ll be accomplished using an NUnit test (it’s really an integration test, but that’s semantic sugar).

Using the SignalR.Client NuGet Package

The example source code contains a simple class library project. This project references the SignalR.Client NuGet package to provide .NET client communication functionality to an application written in managed code. You can use the SignalR.Client project in your desktop applications or in back-end services or Azure Worker Roles. The class library project is shown below, so you can get an idea of the minimal dependencies necessary to support .NET client SignalR support.

SignalR.Client makes use of Newtonsoft.JSON, so that package gets pulled in automatically when you update your NuGet package references. Caveat/Gotcha: I had to explicitly redirect this class library project’s App.config file to the newest version of the Newtonsoft.JSON assembly. You might not need to do that, but just in case, here’s how.

Writing the Test

At this point, the project is properly set up and everything should be ready for setting up the unit (or integration, depending on your stance) test to communicate with the SignalR connection.

It is important to point out that this test makes use of the AutoResetEvent class to wait for the asynchronous process to complete. This is done because SignalR’s communication via either the connection or hub approach implies that the communication is asynchronous.
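The test itself, sketched below against the 0.5-era SignalR.Client API, uses an AutoResetEvent to block until the echoed message comes back. The port and connection URL are assumptions tied to however the sample web project happens to be hosted locally.

    using System;
    using System.Threading;
    using NUnit.Framework;
    using SignalR.Client;

    [TestFixture]
    public class EchoConnectionTests
    {
        [Test]
        public void Sent_message_is_echoed_back_to_the_client()
        {
            var signal = new AutoResetEvent(false);
            string received = null;

            var connection = new Connection("http://localhost:12345/echo");
            connection.Received += data =>
            {
                received = data;   // capture the broadcast payload
                signal.Set();      // release the waiting test thread
            };

            connection.Start().Wait();
            connection.Send("hello").Wait();

            // Fail rather than hang forever if the server never responds.
            Assert.IsTrue(signal.WaitOne(TimeSpan.FromSeconds(10)), "No echo received.");
            Assert.AreEqual("hello", received);
        }
    }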

If this test is run when the web server is turned off, it will obviously fail, as the test output below from NUnit demonstrates.

Failing!!!

Once the web server is started and the test re-run, the client receives a message almost as soon as it finishes sending it, and the test passes.

Passing!!!

Summary

SignalR is an asynchronous messaging library designed to provide push-like functionality and continuous connectivity for web clients. However, it isn’t just for the web; SignalR connections can be used from within .NET code to provide instant communication and constant connectivity between multiple clients.

Happy coding!


The Kinectonitor

Suppose you had some scary-looking hoodlum walking around your house while you were out? You’d want to know about it, wouldn’t you? Take one Kinect, mix in a little Azure Service Bus, sprinkle in some SignalR, and mix it all together with some elbow grease, and you could watch in near-real-time as sinewy folks romp through your living room. Here’s how.

You might not be there (or want to be there) when some maniac breaks in, but it’d be great to have a series of photographs with the dude’s face to aid the authorities in their search for your home stereo equipment. The video below is a demonstration of the code this post will dive into in more detail. I figured it’d give some context to the problem this article will be trying to solve.

I’d really like to have a web page where I could go to see what’s going on in my living room when I’m not there. I know that fancy Kinect I picked up for my kids and their Xbox can do that sort of thing, and I know how to code some .NET. Is it possible to make something at home that’d give me this sort of thing?

Good news! It isn’t that difficult. To start with, take a look at the Kinectonitor Visual Studio solution below.

image

At a high level, the solution provides the following functions. The monitor application will watch over a room. When a skeleton is detected in the viewing range, a photograph will be taken using the Kinect camera. The image will be published to the Azure Service Bus, using a Topic publish/subscribe approach. An MVC web site will subscribe to the Azure Service Bus topic, and whenever the subscriber receives a ping from the service bus with a new image taken by the Kinect, it will use a SignalR hub to update an HTML client with the new photo. Here's a high-level architectural diagram of how the whole thing works, end-to-end.

The Kinectonitor Core Project

Within the core project will exist a few common areas of functionality. The idea behind the core project is to provide a domain structure, functional abstraction, and initial implementation of the image publication concept. For all intents and purposes, the Kinect will be publishing messages containing image data to the Azure Service Bus, and allow subscribers (which we’ll get to in a moment) their own autonomy. The ImageMessage class below illustrates the message that’ll be transmitted through the Azure Service Bus.

image
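In essence, the message only needs to carry the raw image bytes and perhaps a timestamp; a sketch along those lines is below, with property names that are mine rather than necessarily those in the repository.

    using System;

    // Carries a captured Kinect frame through the Azure Service Bus.
    public class ImageMessage
    {
        public byte[] ImageBytes { get; set; }
        public DateTime CapturedAt { get; set; }
    }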

A high-level abstraction will be needed to represent consumption of image messages coming from the Azure cloud. The purpose of the IImageMessageProcessor service is to receive messages from the cloud and to then notify its own listeners that an image has been received.

image

A simple implementation is needed to receive image messages and to notify observers when they’re received. This implementation will allow the SignalR hub, which we’ll look at next, to get updates from the service bus; this abstraction and implementation are the custom glue that binds the service bus subscriber to the web site.

image
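A sketch of the abstraction and a trivially simple implementation might look like the following; the member names are illustrative guesses at the shape of the real interface, not a copy of it.

    using System;

    // Receives image notifications from the cloud and relays them to local listeners (e.g. a SignalR hub).
    public interface IImageMessageProcessor
    {
        event Action<string> ImageReceived;
        void Process(ImageMessage message);
    }

    public class MockImageMessageProcessor : IImageMessageProcessor
    {
        public event Action<string> ImageReceived;

        public void Process(ImageMessage message)
        {
            // In the original design the image bytes were written out as a file and a local URL produced;
            // here we simply raise the event so observers (the hub) can react.
            var handler = ImageReceived;
            if (handler != null) handler("path-or-url-for-" + message.CapturedAt.Ticks);
        }
    }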

Next up is the MVC web site, in which the SignalR hub is hosted and served up to users in an HTML client.

Wiring Up a SignalR Hub

Just before we dive into the MVC code itself, take a quick look at the solution again and note the ServiceBusSimplifier project. This is a super naïve, demo-class wrapper around the Azure Service Bus that was inspired by the far-more-complete implementation Joe Feser shares on GitHub. I used Joe’s library to get started with Azure Service Bus and really liked some of his hooks, but his implementation was overkill for my needs so I borrowed some of his ideas in a tinier implementation. If you’re deep into Azure Service Bus, though, you should totally look into Joe’s code.

image

Within the ServiceBusSimplifier project is a class that provides a fluent wrapper abstraction around the most basic Azure Service Bus publish/subscribe concepts. The ServiceBus class (which could probably stand to be renamed) is below, but collapsed. The point is just to give you an idea of how this abstraction is going to simplify things from here on out. I’ll post a link to download the source code for this article in a later section. For now, just understand that the projects will be using this abstraction to streamline development and form a convention around the Azure Service Bus usage.

image

A few calls are going to be made to the ServiceBus.Setup method, specifically to provide Azure Service Bus authentication details. The classes that represent this sort of thing are below.

image
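Those settings classes are little more than property bags; something like the sketch below (again, the names are hypothetical stand-ins for the real classes) is all the Setup call really needs.

    // Simple settings bag handed to ServiceBus.Setup; names are illustrative only.
    public class ServiceBusSettings
    {
        public string Namespace { get; set; }    // e.g. "kinectonitor"
        public string IssuerName { get; set; }   // typically "owner" for ACS-secured namespaces
        public string IssuerKey { get; set; }
        public string TopicName { get; set; }
    }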

Now that we’ve covered the shared code that the MVC site and WPF/Kinect app will use to communicate via the Azure Service Bus, let’s keep rolling and see how the MVC site is connected to the cloud using this library.

In this prototype code, the Global.asax.cs file is edited. A property is added to the web application to expose an instance of the MockImageMessageProcessor (a more complete implementation would probably make use of an IoC container to store an instance of the service) for use later on in the SignalR Hub. Once the service instance is created, the Azure Service Bus wrapper is created and ImageMessage messages are subscribed to by the site’s MessageProcessor instance’s Process method.

image
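The wiring in Global.asax.cs amounts to something like the sketch below; the fluent ServiceBus calls stand in loosely for the ServiceBusSimplifier wrapper, so treat the method names as placeholders rather than the wrapper's real API.

    using System.Web;
    using System.Web.Mvc;

    public class MvcApplication : HttpApplication
    {
        // Exposed so the SignalR hub can attach to the same processor instance.
        public static IImageMessageProcessor MessageProcessor { get; private set; }

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            // ... standard MVC route/filter registration elided ...

            MessageProcessor = new MockImageMessageProcessor();

            // Subscribe the processor to ImageMessage traffic coming off the Azure Service Bus.
            ServiceBus.Setup(new ServiceBusSettings { /* credentials from config */ })
                      .Subscribe<ImageMessage>(MessageProcessor.Process);
        }
    }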

When the application starts up, the instance is created and shared with the web site’s server-side code. The SignalR Hub, then, can make use of that service implementation. The SignalR Hub listens for ImageReceived events coming from the service. Whenever the Hub handles the event, it turns around and notifies the clients connected to it that a new photo has arrived.

image
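A hub along these lines could be as small as the sketch below. The hub and client-side method names are mine, and the event hookup assumes the shared MessageProcessor instance from the Global.asax sketch above; it is prototype-level wiring, so a production version would manage the subscription lifetime more carefully.

    using SignalR.Hubs;

    // Relays "new image stored" notifications out to every connected browser.
    public class MonitorHub : Hub
    {
        public MonitorHub()
        {
            MvcApplication.MessageProcessor.ImageReceived += url =>
            {
                // Dynamic dispatch: each JavaScript client is expected to define imageReceived(url).
                Clients.imageReceived(url);
            };
        }
    }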

With the Hub created, a simple Index view (and controller action) will provide the user-facing side of the Kinectonitor. The HTML/jQuery code below demonstrates how the client responds when messages arrive. There isn’t much to this part, really. The code just changes the src attribute of an img element in the HTML document, then fades the image in using jQuery sugar.

image

Now that the web client code has been created, we’ll take a quick look at the Kinect code that captures the images and transmits them to the service bus.

The Kinectonitor Monitor WPF Client

Most of the Kinect-interfacing code comes straight from the samples available with the Kinect SDK download. The main point to examine in the WPF client is how it publishes the image messages into the Azure cloud.

The XAML code for the main form of the WPF app is about as dirt-simple as you could get. It just needs a way to display the image being taken by the Kinect and the skeletal diagram (the code available from the Kinect SDK samples). The XAML for this sample client app is below.

image

When the WPF client opens, the first step is, of course, to connect to the Kinect device and the Azure Service Bus. The OnLoad event handler below is how this work is done. Note that this code also instantiates a Timer instance. That timer will be used to control the delay between photographs, and will be looked at in a moment.

image

Whenever image data is collected from the camera it’ll be displayed in the WPF Image control shown earlier. The OnKinectVideoReady handler method below is where the image processing/display takes place. Take note of the highlighted area; this code sets an instance of a BitmapSource object, which will be used to persist the image data to disk later.

image

Each time the Kinect video image is processed, a new BitmapSource instance is created. Remember the Timer instance from earlier in the class? That timer’s handler method is where the image data is saved to disk and transmitted to the cloud. Note the check being performed on the AreSkeletonsBeingTracked property; that property is the last piece we’ll look at, and it ties the WPF functionality together.

image

If the Kinectonitor WPF client just continually took snapshots and sent them into the Azure cloud, the eventual price would probably be prohibitive. The real idea behind a monitor like this, though, is to capture images only when people enter the room at unexpected times. So, the WPF client will watch for skeletons using the built-in Kinect skeleton-tracking functionality (and code from the Kinect SDK samples). If a skeleton is being tracked, we know someone’s in the room and that a photo should be taken. Given that the skeleton might be continually tracked for a few seconds (or minutes, or longer), the Kinect will continue to photograph while a skeleton is being tracked. As soon as the tracking stops, a final photo is taken, too. The code that sets the AreSkeletonsBeingTracked property value during the skeleton-ready event handler is below.

image

Some logic occurs in the setter of the AreSkeletonsBeingTracked property, just to sound the alarms whenever a skeleton is tracked, without having to wait the typical few seconds until the next timer tick.

image
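Stripped of the Kinect SDK specifics, the tracking flag and timer interplay looks roughly like this sketch; the member names follow the prose above, but the bodies are my simplification of the real handlers.

    using System;

    public partial class MainWindow
    {
        private bool areSkeletonsBeingTracked;

        // Set from the skeleton-ready event handler as skeletons appear and disappear.
        public bool AreSkeletonsBeingTracked
        {
            get { return areSkeletonsBeingTracked; }
            set
            {
                if (!areSkeletonsBeingTracked && value)
                    CaptureAndPublishSnapshot();      // someone just appeared; don't wait for the timer
                else if (areSkeletonsBeingTracked && !value)
                    CaptureAndPublishSnapshot();      // final shot as tracking stops

                areSkeletonsBeingTracked = value;
            }
        }

        // Timer tick: while someone is in view, keep photographing and publishing.
        private void OnSnapshotTimerElapsed(object sender, EventArgs e)
        {
            if (AreSkeletonsBeingTracked)
                CaptureAndPublishSnapshot();
        }

        private void CaptureAndPublishSnapshot()
        {
            // Persist the latest BitmapSource frame and push an ImageMessage onto the service bus.
        }
    }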

That’s it for the code! One more note – it helps if the Kinect is up high or in front of a room (or both). During development of this article I just placed mine on top of the fridge, which is next to the kitchen bar where I do a lot of work. It could see the whole room pretty well and picked up skeletons rather quickly. Just a note for your environment testing phase.

Photo of the Kinect perched on top of the refrigerator

Summary

This article brought together a few of the more recent techniques and tools released to .NET developers. With a little creativity and some time, it’s not difficult to use those components to make something pretty neat at home in the physical-computing world. Pairing these disciplines up to create something new (or to do something old in a new way) is great fodder for larger projects using the same technologies later.

If you’d like to view the Kinectonitor GitHub source code, it’s right here.


Doing BDD with SignalR and Jasmine

SignalR is one of the latest (and sexiest) elements in the .NET stack. Expect to hear more about SignalR if you haven’t already, because it delivers on the promise of push technology without the requirement of a fat client. If you’ve not yet read much about SignalR, clone the source code from GitHub or read Scott Hanselman’s post on SignalR for an introduction. The inner workings of SignalR are somewhat out of scope here, so I’ll just assume you’ve at least heard something about SignalR and that you’re interested in it but have a few questions. I mean, if you’re into BDD/TDD, you should definitely be wondering:

So how testable is SignalR? If the magic of it is based in some form of maintained connection, the implication of how one has to code asynchronously against it makes testing p-r-e-t-t-y difficult. Testing in JavaScript is pretty difficult to begin with, so now what?

There’s this amazing TDD/BDD micro-framework called Jasmine that lends itself extremely well to all sorts of different JavaScript problem domains. I learned about Jasmine from some colleagues in the Ruby community, then found some pretty awesome resources on Jasmine, and have been waiting for an opportunity to try it out. This seemed like a perfect opportunity, so I dove in.

The Calculator SignalR Hub

This example will offer a solution to the simple problem of providing basic numeric calculations. Obviously, the first operation most users will need is some addition functionality.

image

The implementation of this method might look a little strange if you’re new to SignalR. It’ll make sense in just a moment when you see the corresponding JavaScript code. Think about it this way: the Clients property of any Hub is a dynamic object that represents a corresponding JavaScript method on each of the clients connected to the Hub.

image
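In C#, the hub can be as small as the sketch below, written against the SignalR of that era (where Clients is a dynamic object); showSum is simply whatever method name the JavaScript client registers.

    using SignalR.Hubs;

    public class CalculatorHub : Hub
    {
        // The client calls Add; the hub asks every connected client to run showSum with the result.
        public void Add(int number1, int number2)
        {
            Clients.showSum(number1 + number2);
        }
    }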

The controller action for the client has no special glue or magic in it. It’s basically a shell to adhere to the requirement of having an action for the views.

image

On the client, there will be a few script references to make sure SignalR works properly.

image

This is the part where the dynamic client thing should make sense. The JavaScript code below demonstrates how to call this dirt-simple SignalR Hub and to generate some action on the client.

image

The call earlier, within the implementation of the Hub, is basically a way of the server telling the client, would you please run this method and pass it this data? It isn’t a really sexy example when you see it execute, of course, but it drives the point home. SignalR makes push ridiculously simple.

image

Bring on the BDD!

As if SignalR isn’t nifty enough, Jasmine gives us the ability to set up specifications and unit tests to make sure things are working properly and to guide development. Now, we get to put the two together to answer the problem of how to test the client. (I’m not knocking other methods here, like WaTiN or other automated testing implementations; this is just another way of skinning the same kitty.)

We’ll need to include the basic script references and CSS file to make Jasmine light up. You can download those from the Jasmine site, or they’re included in the GitHub repository I’ve created for the purpose of my continued tinkering in SignalR.

image

Once the Jasmine files are in place, it doesn’t hurt to create a pulse-check specification to make sure the glue’s dry. The JavaScript object shell below will be a roadmap for how we’ll solve this problem.

image

Once the specifications have been written up, Jasmine’s environment is created and the tests are executed.

image

If you’ve pinned the tail on the donkey to this point, the results should be quite self-explanatory when you view the calculator page again. If you don’t see the tests, check the passed checkbox and the passing test will appear.

image

At this point, to test that the SignalR CalculatorHub is communicating with the client properly, all that’s really needed is a way to call the Add method from within a unit test and to verify that the JavaScript code’s showSum method executed with a parameter of 4.

The first step in getting this working is to modify the test fixture class so it can be supplied any dependencies it might have. In this case, we plan on testing the calculator hub, so it’ll be passed into the fixture in its constructor. Also added is the addCallback field. At this point it’ll be set to false, but the plan is to set this field to the callback method that will be called when the calculation is completed by the hub.

image

Since the constructor’s changed, it’d be wise to change the calling code: our jQuery ready method. This time, we’ll go ahead and create an instance of the SignalR CalculatorHub, and it’ll be provided as a constructor argument.

image

Next, a new specification is added to the init method. This specification is the unit test that will demonstrate the calculator’s being called properly from the client JavaScript.

image

It’d help at this point to take a peek at the Jasmine wiki article on Spies to get an idea of how this sort of verification can be accomplished. Jasmine’s fluent syntax makes it pretty self-explanatory (and including the vsdoc files to provide IntelliSense in Visual Studio brings the Jasmine methods into the editor). Jasmine, like Moq and other mocking frameworks, offers unit tests the capability of checking to see whether methods have been executed properly.

image

If your Jasmine tests are executed at this point, you’ll see some p-r-e-t-t-y interesting results. For about 5 seconds the browser will let you know something’s happening by making the current test’s background color yellow.

image

Then the test will complete execution and fail. Because of those calls made earlier, Jasmine assumes it should wait for an asynchronous response, and if one doesn’t arrive, the tests fail. They fail because the callback methods are never executed, and the tests expect them to be.

image

That addCallback field that was added to the test fixture is how this can be accomplished. The client’s showSum method is the one thing that’s been omitted from the JavaScript code. That’s the method that the SignalR hub will try to call on each client whenever it needs to do so.

image

Within the test fixture resides the onAdd method. It just executes the fixture’s addCallback method and feeds it the results from the SignalR CalculatorHub call.

image

Once that’s all wired up, the test should pass!

image

If you’re obsessive about verifying, just change the expected value to be something wrong and re-run the test. Obviously, if the math fails, so should the test. Otherwise it’d be a pretty useless calculator!

image

Recap

SignalR and Jasmine work quite nicely together. If you apply a little OO elbow grease and take the time to wire things up with callbacks and to verify the execution of those callbacks, SignalR can be tested rather effectively. All it takes is to properly stage, and then verify, that the Hub’s method is called and that, when the Hub fires the client callback, the callback is executed as your code expects.

Thanks for taking the time to stop by. Happy Coding!


Richmond Code Camp

The Richmond Code Camp was great. NDecision attracted a lot of participants and enthusiasm, and motivated me to keep plugging away at it. There was some excellent audience participation, and there were Twitter reviews from which I got much encouragement. Both sessions I presented were full of enthusiasm about the fact that Microsoft is, from all public observation, maturing in its attitude toward and adoption of OSS contribution. That was uplifting, but not my favorite observation from the weekend.

My favorite observation from the weekend is that it seems like the Microsoft community is finally beginning to realize how much work Microsoft is doing for it, and either appreciates, or finally admits in public that it appreciates, the contributions Microsoft has been making. There were a few participants and hallway discussions that were inspiring. I heard a few conversations between OSS enthusiasts who were speaking favorably, rather than with fear and loathing, about a few different .NET projects produced both within Microsoft and independently. That's inspiring, considering that just a few years ago public opinion treated Microsoft's support of OSS as a joke, and I heard more snickers and doubt than I did appreciation. That's motivation for me, for those out there who are ISV'ing themselves, and for those who want to utilize open-source contributions to ease their own development pains.

What's frustrating about that is that projects like SignalR might go unnoticed by participants who lack the time or knowledge to maintain a very active level of involvement. Yes, SignalR is new and I'm probably being impatient in wanting the community to get on it yesterday, as are a lot of other technologies within and outside of the .NET stack that need discovering. I fear that, after years of bickering about how Microsoft fails to support the open-source community, once Microsoft does openly begin supporting the open-source community and the open-source community admits to itself that Microsoft really is supporting it (that was the hard part, IMO), Microsoft will miss a few golden opportunities because it fails to market its own efforts effectively. I think we need to do our best to keep researching these young micro-frameworks and pushing the envelope of what's possible in our development framework. That shared contribution thing... it really works!

I didn't get to spend a lot of time with my out-of-state colleagues this weekend, but it was nice to see everyone for the brief conversations and dinner last night. Looking forward to seeing everyone at the next event!