The posts tagged with 'Kinect' are listed below. You can get to each post by clicking the title in the list.


Blob Storage of Kinectonitor Images

The Kinectonitor has received a lot of commentary, and I’ve gotten some great ideas and suggestions on how it could be improved. There are a few architectural aspects that gave me some heartburn. One of those is that I failed to make use of any of Azure’s storage functionality to store the images. This post sums up how Blob Storage was added to the Kinectonitor’s architecture so that images can be stored in the cloud, not on the individual observer sites’ web servers.

So we’re taking all these pictures with our Kinect device. Where are we storing the images? Are they protected by disaster recovery procedures? What if I need to look at historical evidence to see photos of intruders from months or years ago? Where will all of this be stored? Do I have to worry about buying a new hard drive for all those images?

These are legitimate concerns, and they can all be addressed by storing the photographs taken by the Kinect in the Azure cloud. The video below shows a test harness I use in place of the Kinect device: a tiny application that lets me select an image from my hard drive. The image content is then sent along the same path as the images the Kinect would capture and send into the cloud. The video demonstrates the code running at debug time; using ClumsyLeaf’s CloudXplorer client, I then show the files as they’ve been stored in the local development storage account’s Blob Storage container.

Now we’ll take a slightly deeper dive into the changes made in this update of the Kinectonitor source code. If you’d like to grab that code, it’s available on GitHub.

The Kinectonitor Worker Role

This new project exists mainly to listen for ImageMessage instances; there’s not a whole lot of code in the worker role. We’ll examine its purpose and functionality in more detail in a moment. For now, take a look at the role’s code in the Object Browser to get a quick understanding of what the role provides to the overall Kinectonitor architecture.

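In sketch form, the role’s public surface looks something like this (aside from the two method names discussed in this post, the member names here are assumptions):

```csharp
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

// Rough shape of the role, as the Object Browser would show it.
public class KinectonitorWorkerRole : RoleEntryPoint
{
    public override bool OnStart() { /* container + subscription setup */ return base.OnStart(); }
    public override void Run() { /* keep the role alive while messages arrive */ base.Run(); }

    void EnsureImageContainerExists() { }
    Uri SaveImageToBlobStorage(byte[] imageBytes) { return null; }
    void PublishBlobStoredImageUrl(Uri blobUri) { }
    void OnImageMessageReceived(ImageMessage message) { }
}
```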

In the previous release of the code, ImageMessage instances were the only things being used to pass information from the Kinect monitoring client, to the Azure Service Bus, and then back down to the ASP.NET MVC client site. This release simplifies things somewhat, especially around the service bus. The previous code actually shipped the binary image data into and out of the Azure Service Bus; obviously, that sort of arrangement makes for huge transfer volumes. That communication was required before because the images were being stored as image files within the ASP.NET MVC site structure. Now the image data will be stored in Azure Blob Storage, so all the ASP.NET MVC client sites need is the URL of the image to show the user in the SignalR-powered HTML observation client.

If you haven’t yet taken in some of the great resources on the Microsoft Azure site, now would be a great time. The introductory how-to on Blob Storage was quite helpful in working out the Blob Storage-related functionality here. It goes a good deal deeper into how Blob Storage works, so I’ll refer you to that article for background.

The worker role does very little hands-on work with Blob Storage. Basically, a container is created in which the images will be saved, and that container’s accessibility is set to public. The images in the container will be served up in a web browser, so they need to be publicly viewable.

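Here’s a rough sketch of that container setup, assuming the 1.x-era Microsoft.WindowsAzure.StorageClient library; the container name and configuration-setting name are placeholders:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

CloudBlobContainer container;

// Run once when the role starts: create the container and make its
// blobs publicly readable so a browser can fetch them directly.
void EnsureImageContainerExists()
{
    var account = CloudStorageAccount.Parse(
        RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));

    container = account.CreateCloudBlobClient()
                       .GetContainerReference("kinectimages");
    container.CreateIfNotExist();

    container.SetPermissions(new BlobContainerPermissions
    {
        PublicAccess = BlobContainerPublicAccessType.Blob
    });
}
```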

The SaveImageToBlobStorage method, shown below, builds a stream that passes the binary data into the Blob Storage account, where it is saved permanently (or until a user deletes it).

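In sketch form, the method looks something like this (naming each blob with a GUID is an assumption):

```csharp
using System;
using System.IO;

// Streams the image bytes into the public container; the blob's Uri is
// the URL web clients will eventually be handed.
Uri SaveImageToBlobStorage(byte[] imageBytes)
{
    var blob = container.GetBlobReference(Guid.NewGuid().ToString("N") + ".jpg");

    using (var stream = new MemoryStream(imageBytes))
    {
        blob.UploadFromStream(stream);
    }

    return blob.Uri;
}
```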

Note how the CloudBlob.Uri property exposes the URL where the blob can be accessed. In the case of images this is quite convenient: all we need to display an image is its URL, and we have that as soon as the image is saved to the cloud.

Simplifying the Service Bus Usage

As previously mentioned, the image data had been getting sent not only into the cloud, but back out of it, and then stored in the folder tree of an ASP.NET MVC web site. Not exactly optimal for archival, and definitely not for long-term storage. We’ve solved the storage problem by adding in the Blob Storage service, so the next step is to clean up the service bus communication. The sole message type used between all components of the Kinectonitor architecture in the first release was the ImageMessage class, shown below.

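The class is little more than a wrapper around the image bytes; something like this sketch, where the property name is an assumption:

```csharp
// Carries the raw image bytes through the service bus.
public class ImageMessage
{
    public byte[] ImageBytes { get; set; }
}
```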

Since the ImageMessage class is really only needed when the binary content of the image has to be passed around, a second class has been added to the messaging architecture. The ImageStoredMessage class, shown below, now serves the purpose of informing the SignalR-driven web observation client that new images have been taken and saved into the cloud.

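A sketch of that class; the property name is an assumption:

```csharp
// Carries only the blob URL; no binary payload crosses the bus anymore.
public class ImageStoredMessage
{
    public string ImageUrl { get; set; }
}
```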

Now that image storage is an event of its own, and the client only needs the URL of the latest image to show automatically in the browser, the service bus usage is in need of some rework. When the worker role spins up, the service bus subscription is established, as was previously done directly from within the MVC site. The worker role listens for messages that come from the Kinect monitor.

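Using the ServiceBusSimplifier wrapper covered in the post below, that startup wiring reads roughly like this sketch (the fluent calls and setup fields are assumptions based on that wrapper’s usage):

```csharp
ServiceBus serviceBus;

// Establish the topic subscription as soon as the role spins up.
public override bool OnStart()
{
    EnsureImageContainerExists();

    serviceBus = ServiceBus.Setup(new ServiceBusSetup { /* credentials elided */ })
        .Subscribe<ImageMessage>(OnImageMessageReceived);

    return base.OnStart();
}
```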

When those messages are received, the worker role saves the image to Blob Storage with the SaveImageToBlobStorage method shown earlier, then announces the new blob’s URL using the PublishBlobStoredImageUrl method.

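In sketch form, the two steps look like this (the handler name is an assumption):

```csharp
// Incoming Kinect photo: persist it, then tell the web clients where it lives.
void OnImageMessageReceived(ImageMessage message)
{
    var blobUri = SaveImageToBlobStorage(message.ImageBytes);
    PublishBlobStoredImageUrl(blobUri);
}

// Swaps the heavy binary message for a lightweight URL announcement.
void PublishBlobStoredImageUrl(Uri blobUri)
{
    serviceBus.Publish(new ImageStoredMessage { ImageUrl = blobUri.ToString() });
}
```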

Finally, one last change, which will surely be augmented in a later revision, is within the SignalR hub area. Previously, hubs were wired up through a middle object, a service that wasn’t too thoroughly implemented. That service has been removed, and the hubs are now wired directly to the service bus subscription.

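A sketch of that direct wiring; the hub name, the client callback, and the GlobalHost-style client resolution are all assumptions, since early SignalR builds varied here:

```csharp
using SignalR;
using SignalR.Hubs;

// Hub the browsers connect to; the interesting wiring happens at startup.
public class ImageHub : Hub { }

public static class HubWiring
{
    // Called once at application start: pipe bus messages straight to clients.
    public static void WireUp()
    {
        ServiceBus.Setup(new ServiceBusSetup { /* credentials elided */ })
            .Subscribe<ImageStoredMessage>(msg =>
            {
                // 0.5-era SignalR exposed a dynamic Clients here; later
                // versions require Clients.All instead.
                GlobalHost.ConnectionManager.GetHubContext<ImageHub>()
                    .Clients.imageStored(msg.ImageUrl);
            });
    }
}
```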

Obviously, this results in all clients seeing all updates; that’s not very secure, nor very serviceable in terms of simplicity and customer separation. The next post in this series will add features around customer segmentation and subscription, as well as potential authentication via the bolted-on Kinect identification code.


The Kinectonitor

Suppose some scary-looking hoodlum were walking around your house while you were out. You’d want to know about it, wouldn’t you? Take one Kinect, mix in a little Azure Service Bus, sprinkle in some SignalR, put it all together with some elbow grease, and you can watch in near-real-time as sinewy folks romp through your living room. Here’s how.

You might not be there (or want to be there) when some maniac breaks in, but it’d be great to have a series of photographs with the dude’s face to aid the authorities in their search for your home stereo equipment. The video below is a demonstration of the code this post will dive into in more detail. I figured it’d give some context to the problem this article will be trying to solve.

I’d really like to have a web page where I could go to see what’s going on in my living room when I’m not there. I know that fancy Kinect I picked up for my kids and their Xbox can do that sort of thing, and I know how to write some .NET. Is it possible to make something at home that’d give me this sort of thing?

Good news! It isn’t that difficult. To start with, take a look at the Kinectonitor Visual Studio solution below.

[Screenshot: the Kinectonitor Visual Studio solution]

At a high level, it provides the following functions. The monitor application will watch over a room. When a skeleton is detected in the viewing range, a photograph will be taken using the Kinect camera. The image will be published to the Azure Service Bus, using a topic publish/subscribe approach. An ASP.NET MVC web site will subscribe to the Azure Service Bus topic, and whenever the subscriber receives a ping from the service bus with a new image taken by the Kinect, it will use a SignalR hub to update an HTML client with the new photo. Here’s a high-level architectural diagram of how the whole thing works, end-to-end.

The Kinectonitor Core Project

Within the core project will exist a few common areas of functionality. The idea behind the core project is to provide a domain structure, functional abstraction, and initial implementation of the image publication concept. For all intents and purposes, the Kinect will publish messages containing image data to the Azure Service Bus, allowing subscribers (which we’ll get to in a moment) their own autonomy. The ImageMessage class below illustrates the message that’ll be transmitted through the Azure Service Bus.

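In sketch form (the property name is an assumption):

```csharp
// The payload published by the Kinect monitor: just the photo's bytes.
public class ImageMessage
{
    public byte[] ImageBytes { get; set; }
}
```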

A high-level abstraction will be needed to represent consumption of image messages coming from the Azure cloud. The purpose of the IImageMessageProcessor service is to receive messages from the cloud and to then notify its own listeners that an image has been received.

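A sketch of the interface; the ImageReceived event and Process method are both named later in this post, while the exact signatures are assumptions:

```csharp
using System;

// Receives messages from the cloud and raises an event for local listeners.
public interface IImageMessageProcessor
{
    event Action<ImageMessage> ImageReceived;
    void Process(ImageMessage message);
}
```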

A simple implementation is needed to receive image messages and to notify observers when they’re received. This implementation will allow the SignalR hub, which we’ll look at next, to get updates from the service bus; this abstraction and implementation are the custom glue that binds the service bus subscriber to the web site.

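The MockImageMessageProcessor referenced later in this post presumably looked something like this sketch:

```csharp
using System;

// Simple pass-through implementation: whatever arrives is re-raised to
// whoever is listening (the SignalR hub, in this case).
public class MockImageMessageProcessor : IImageMessageProcessor
{
    public event Action<ImageMessage> ImageReceived;

    public void Process(ImageMessage message)
    {
        var handler = ImageReceived;
        if (handler != null) handler(message);
    }
}
```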

Next up is the MVC web site, in which the SignalR hub is hosted and served up to users in an HTML client.

Wiring Up a SignalR Hub

Just before we dive into the MVC code itself, take a quick look at the solution again and note the ServiceBusSimplifier project. This is a super-naïve, demo-grade wrapper around the Azure Service Bus, inspired by the far more complete implementation Joe Feser shares on GitHub. I used Joe’s library to get started with Azure Service Bus and really liked some of his hooks, but his implementation was overkill for my needs, so I borrowed some of his ideas in a tinier implementation. If you’re deep into Azure Service Bus, though, you should totally look into Joe’s code.

[Screenshot: the ServiceBusSimplifier project in the solution tree]

Within the ServiceBusSimplifier project is a class that provides a fluent wrapper around the most basic Azure Service Bus publish/subscribe concepts. The ServiceBus class (which could probably stand to be renamed) is shown below, collapsed; the point for now is just to convey how this abstraction will simplify things from here on out. I’ll post a link to download the source code for this article in a later section. For now, just understand that the projects will be using this abstraction to streamline development and form a convention around the Azure Service Bus usage.

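Collapsed, the wrapper’s surface is roughly this; Setup is named below, while the Publish and Subscribe signatures are assumptions based on how the wrapper is used:

```csharp
using System;

// Fluent wrapper around Azure Service Bus topics; bodies collapsed.
public class ServiceBus
{
    public static ServiceBus Setup(ServiceBusSetup setup)
    {
        /* stash credentials, create the messaging factory and topic */
        return new ServiceBus();
    }

    public ServiceBus Publish<T>(T message)
    {
        /* serialize the message and send it to the topic */
        return this;
    }

    public ServiceBus Subscribe<T>(Action<T> handler)
    {
        /* attach a subscription and pump matching messages to the handler */
        return this;
    }
}
```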

A few calls will be made to the ServiceBus.Setup method, specifically to provide Azure Service Bus authentication details. The classes that represent those details are below.

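Something like the following sketch; ACS-era issuer credentials were the norm at the time, and the property names are assumptions:

```csharp
// Authentication details handed to ServiceBus.Setup.
public class ServiceBusSetup
{
    public string Namespace { get; set; }
    public string IssuerName { get; set; }
    public string IssuerKey { get; set; }
}
```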

Now that we’ve covered the shared code that the MVC site and WPF/Kinect app will use to communicate via the Azure Service Bus, let’s keep rolling and see how the MVC site is connected to the cloud using this library.

In this prototype code, the Global.asax.cs file is edited. A property is added to the web application to expose an instance of the MockImageMessageProcessor (a more complete implementation would probably use an IoC container to store the service instance) for use later on in the SignalR hub. Once the service instance is created, the Azure Service Bus wrapper is created, and ImageMessage messages are subscribed to by the MessageProcessor instance’s Process method.

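In sketch form, the Global.asax.cs changes look roughly like this (names beyond MockImageMessageProcessor are assumptions):

```csharp
public class MvcApplication : System.Web.HttpApplication
{
    // Exposed so the SignalR hub can find the processor later; a fuller
    // implementation would register this in an IoC container.
    public static IImageMessageProcessor MessageProcessor { get; private set; }

    protected void Application_Start()
    {
        MessageProcessor = new MockImageMessageProcessor();

        ServiceBus.Setup(new ServiceBusSetup { /* credentials elided */ })
            .Subscribe<ImageMessage>(MessageProcessor.Process);

        // ...the usual MVC route and area registration follows...
    }
}
```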

When the application starts up, the instance is created and shared with the web site’s server-side code. The SignalR hub can then make use of that service implementation: it listens for ImageReceived events coming from the service, and whenever it handles one, it turns around and notifies the clients connected to it that a new photo has arrived.

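A sketch of the hub; the hub and callback names are assumptions, and the dynamic Clients call reflects the early SignalR API of the time:

```csharp
using System;
using SignalR.Hubs;

// Subscribing in the constructor is naive about hub lifetimes, but it
// mirrors the prototype wiring described above.
public class KinectonitorHub : Hub
{
    public KinectonitorHub()
    {
        MvcApplication.MessageProcessor.ImageReceived += message =>
        {
            // In this first cut the site stored images in its own folder
            // tree; SaveToImagesFolder is a hypothetical helper for that.
            var imageUrl = SaveToImagesFolder(message.ImageBytes);
            Clients.newPhoto(imageUrl);
        };
    }

    static string SaveToImagesFolder(byte[] imageBytes)
    {
        var fileName = Guid.NewGuid().ToString("N") + ".jpg";
        var path = System.Web.Hosting.HostingEnvironment.MapPath("~/Images/" + fileName);
        System.IO.File.WriteAllBytes(path, imageBytes);
        return "/Images/" + fileName;
    }
}
```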

With the hub created, a simple Index view (and controller action) provides the user-facing side of the Kinectonitor. The HTML/jQuery code below demonstrates how the client responds when messages arrive. There isn’t much to this part, really: the code just changes the src attribute of an img element in the HTML document, then fades the image in with a little jQuery sugar.

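In sketch form, with the hub proxy and callback names assumed to match the hub sketch above (early SignalR assigned server-invoked callbacks directly on the proxy):

```html
<img id="latestPhoto" src="" alt="latest photo" style="display:none" />

<script type="text/javascript">
    $(function () {
        var hub = $.connection.kinectonitorHub;

        // Swap the image source and fade the new photo in.
        hub.newPhoto = function (imageUrl) {
            $('#latestPhoto').hide().attr('src', imageUrl).fadeIn();
        };

        $.connection.hub.start();
    });
</script>
```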

Now that the web client code has been created, we’ll take a quick look at the Kinect code that captures the images and transmits them to the service bus.

The Kinectonitor Monitor WPF Client

Most of the Kinect-interfacing code comes straight from the samples available with the Kinect SDK download. The main point in examining the WPF client is to see how it publishes the image messages into the Azure cloud.

The XAML for the main window of the WPF app is about as dirt-simple as it gets. It just needs a way to display the image being taken by the Kinect and the skeletal diagram (drawn by code available in the Kinect SDK samples). The XAML for this sample client app is below.

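Reconstructed as a sketch, the window is little more than two display surfaces (element names are assumptions):

```xml
<Window x:Class="Kinectonitor.Monitor.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Kinectonitor Monitor" Loaded="OnLoad">
    <StackPanel Orientation="Horizontal">
        <!-- Live video from the Kinect camera -->
        <Image x:Name="VideoImage" Width="320" Height="240" />
        <!-- Skeletal diagram drawn by the SDK sample code -->
        <Canvas x:Name="SkeletonCanvas" Width="320" Height="240" />
    </StackPanel>
</Window>
```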

When the WPF client opens, the first step is, of course, to connect to the Kinect device and the Azure Service Bus. The OnLoad event handler below is where this work is done. Note that this code also instantiates a Timer instance; that timer controls the delay between photographs and will be looked at in a moment.

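A sketch of that handler, assuming the Kinect for Windows SDK v1 API surface; the field names and the ten-second interval are placeholders:

```csharp
KinectSensor sensor;
ServiceBus serviceBus;
Timer snapshotTimer;              // System.Timers.Timer
BitmapSource lastBitmapSource;

void OnLoad(object sender, RoutedEventArgs e)
{
    // Kinect initialization; the SDK sample code this post borrows from
    // may differ slightly from this v1-style setup.
    sensor = KinectSensor.KinectSensors[0];
    sensor.ColorStream.Enable();
    sensor.SkeletonStream.Enable();
    sensor.ColorFrameReady += OnKinectVideoReady;
    sensor.SkeletonFrameReady += OnSkeletonFrameReady;
    sensor.Start();

    // Service bus connection via the wrapper from the core project.
    serviceBus = ServiceBus.Setup(new ServiceBusSetup { /* credentials elided */ });

    // Controls the delay between photographs; ten seconds is a guess.
    snapshotTimer = new Timer(10000);
    snapshotTimer.Elapsed += OnSnapshotTimerElapsed;
    snapshotTimer.Start();
}
```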

Whenever image data is collected from the camera, it’ll be displayed in the WPF Image control shown earlier. The OnKinectVideoReady handler method below is where the image processing and display take place. Take note of the BitmapSource instance being set; it will be used to persist the image data to disk later.

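In sketch form, again assuming the SDK v1 API:

```csharp
void OnKinectVideoReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (var frame = e.OpenColorImageFrame())
    {
        if (frame == null) return;

        var pixels = new byte[frame.PixelDataLength];
        frame.CopyPixelDataTo(pixels);

        // Keep the latest frame around for the snapshot timer; Freeze
        // lets the timer thread read it safely.
        lastBitmapSource = BitmapSource.Create(
            frame.Width, frame.Height, 96, 96,
            PixelFormats.Bgr32, null, pixels, frame.Width * 4);
        lastBitmapSource.Freeze();

        VideoImage.Source = lastBitmapSource;
    }
}
```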

Each time the Kinect video image is processed, a new BitmapSource instance is created. Remember the Timer instance from earlier? Its handler method is where the image data is saved to disk and transmitted to the cloud. Note the check performed on the AreSkeletonsBeingTracked property; that property is the last piece we’ll look at, and it ties the WPF functionality together.

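A sketch of the timer handler and the snapshot helper it calls; the file naming and the local snapshots folder are assumptions:

```csharp
void OnSnapshotTimerElapsed(object sender, ElapsedEventArgs e)
{
    // On the timer, only photograph while someone is actually in the room.
    if (AreSkeletonsBeingTracked) TakeSnapshot();
}

void TakeSnapshot()
{
    if (lastBitmapSource == null) return;

    byte[] imageBytes;
    using (var stream = new MemoryStream())
    {
        var encoder = new JpegBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(lastBitmapSource));
        encoder.Save(stream);
        imageBytes = stream.ToArray();
    }

    // To disk for a local copy (assumes the folder exists), then to the cloud.
    File.WriteAllBytes(
        Path.Combine("snapshots", DateTime.Now.Ticks + ".jpg"), imageBytes);
    serviceBus.Publish(new ImageMessage { ImageBytes = imageBytes });
}
```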

If the Kinectonitor WPF client continually took snapshots and sent them into the Azure cloud, the eventual price would probably be prohibitive. The real idea behind a monitor like this is to capture people who show up at unexpected times. So the WPF client watches for skeletons using the built-in Kinect skeleton-tracking functionality (and code from the Kinect SDK samples). If a skeleton is being tracked, we know someone’s in the room and a photo should be taken. Given that the skeleton might be tracked continually for a few seconds (or minutes, or longer), the Kinect will keep photographing while a skeleton is tracked. As soon as tracking stops, a final photo is taken, too. The code that sets the AreSkeletonsBeingTracked property value during the skeleton-ready event handler is below.

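In sketch form, assuming the SDK v1 skeleton API:

```csharp
void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (var frame = e.OpenSkeletonFrame())
    {
        if (frame == null) return;

        var skeletons = new Skeleton[frame.SkeletonArrayLength];
        frame.CopySkeletonDataTo(skeletons);

        // True while at least one skeleton is actively tracked.
        AreSkeletonsBeingTracked = skeletons.Any(
            s => s.TrackingState == SkeletonTrackingState.Tracked);
    }
}
```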

Some logic occurs in the setter of the AreSkeletonsBeingTracked property, to sound the alarm as soon as a skeleton is tracked, without waiting the typical few seconds for the next timer tick.

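A sketch of that property; the immediate-snapshot logic follows the behavior described above:

```csharp
bool areSkeletonsBeingTracked;

bool AreSkeletonsBeingTracked
{
    get { return areSkeletonsBeingTracked; }
    set
    {
        var wasTracking = areSkeletonsBeingTracked;
        areSkeletonsBeingTracked = value;

        // Photograph immediately when someone first appears, and once more
        // when tracking stops, rather than waiting for the next timer tick.
        if (value != wasTracking) TakeSnapshot();
    }
}
```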

That’s it for the code! One more note: it helps if the Kinect is up high, at the front of the room, or both. During development of this article I just placed mine on top of the fridge, next to the kitchen bar where I do a lot of work. It could see the whole room pretty well and picked up skeletons rather quickly. Just a note for your own environment-testing phase.

[Photo: the Kinect perched on top of the fridge]

Summary

This article brought together a few of the techniques and tools recently released to .NET developers. With a little creativity and some time, it’s not difficult to use those components to make something pretty neat at home in the physical-computing world. Pairing these disciplines up to create something new (or to do something old in a different way) is great fodder for larger projects using the same technologies later.

If you’d like to view the Kinectonitor GitHub source code, it’s right here.


Netduino Controlled by Kinect

Update: I've uploaded the code I wrote for this demonstration project to GitHub, into my KinectControlledNetduino public repository. I also forgot to mention that the Kinect code makes use of the excellent gesturing engine created by David Catuhe, which you can read about on his blog or download from CodePlex.

I'll put the code up here tomorrow once time and energy permit, but for the time being the title says it all. There's a Kinect, it controls a WPF app, that app sends messages to an HTTP server running on a Netduino, which is connected to a servo.

Using hand gestures like "SwipeRight" or "SwipeLeft," a user can literally wave at the Kinect to tell it how to tell the Netduino server how to angle the servo. Pretty neat, and much easier than I'd expected. I'll post the code ASAP, but for now, here's a video demonstrating how it works.