The posts tagged with '.NET' are listed below. You can get to each post by clicking the title in the list.


Build a Location API Using Entity Framework Spatial and Web API, on Azure Web Sites

As announced by Scott Guthrie recently, Azure Web Sites now supports the .NET Framework 4.5. Now that the Web Sites offering supports .NET 4.5, some awesome ASP.NET features are available to web developers who want to host their ASP.NET applications on Azure. One feature I'm especially excited about is Entity Framework Spatial support. Available only in .NET 4.5, EF Spatial gives developers who want to build location-aware applications the ability to easily save and retrieve location data without having to invent crazy solutions using SQL code. I've implemented the Haversine formula using a SQL stored procedure in the past, and I can speak from experience when I say that EF Spatial is about 10,000 times easier and more logical. Don't take my word for it, though. Take a look at the sample code I'll show you in this blog post, which demonstrates how you can develop a location-aware API using ASP.NET Web API and EF Spatial, and host the whole thing on Azure Web Sites.

Creating the Site in Azure

Before diving into code I'll go out to the Azure portal and create a new web site. For this API example, I create a site with a database, as I'll want to store the data in an Azure SQL Database. By simply selecting the new web site option, then selecting "with database," I'm walked through the process of creating both assets in Azure.


The first thing Azure will need to know is the URL I'll want associated with my site. The free Azure Web Sites offering defaults to [yoursitename].azurewebsites.net, so this first step allows me to define the URL prefix associated with my site.

This first step also gives me the opportunity to define the name of the connection string I'll expect to use in my Web.config file later; it will connect the site to the Azure SQL Database I'll create in a moment.


The last steps in the site creation process will collect the username and SQL Server information from you. In this example, I’m going to create a new database and a new SQL Server in the Azure cloud. However, you can select a pre-existing SQL Server if you’d prefer during your own setup process.

I specifically unchecked the “Configure Advanced Database Settings” checkbox, as there’s not much I’ll need to do to the database in the portal. As you’ll see in a moment, I’ll be doing all my database “stuff” using EF’s Migrations features.


Once I've entered the username I'd like to use and the password, and selected (or created) a SQL Server, I click the check button to create the site and the SQL database. In just a few seconds, both are created in Azure, and I can get started with the fun stuff – the c0d3z!

Preparing for Deployment

Just so I have a method of deploying the site once I finish the code, I'll select the new site from the Azure portal by clicking on its name once the site-creation process completes.


The site’s dashboard will open up in the browser. If I scroll down, the Quick Glance links are visible on the right side of the dashboard page. Clicking the link labeled Download Publish Profile will do just that – download a publish settings file, which contains some XML defining how Visual Studio or WebMatrix 2 (or the Web Deploy command line) should upload the files to the server. Also contained within the publish settings file is the metadata specific to the database I created for this site.


As you’ll see in a moment when I start the deployment process, everything I need to know about deploying a site and a database backing that site is outlined in the publish settings file. When I perform the deployment from within Visual Studio 2012, I’ll be given the option of using Entity Framework Migrations to populate the database live in Azure. Not only will the site files be published, the database will be created, too. All of this is possible via the publish settings file’s metadata.

Building the API in Visual Studio 2012

The code for the location API will be relatively simple to build (thanks to the Entity Framework, ASP.NET, and Visual Studio teams). The first step is to create a new ASP.NET MVC project using Visual Studio 2012, as shown below. If you'd rather grab the code than walk through the coding process, I've created a public GitHub.com repository for the Spatial Demo solution, so clone it from there if you'd prefer to view the completed source code rather than create it from scratch.

Note that I'm selecting the .NET Framework 4.5 in this dialog. Prior to 4.5 support in Azure Web Sites, this would always need to be set to 4.0 or my deployment would fail. I would also have had compilation issues for anything relating to Entity Framework Spatial, as those libraries and namespaces are only available under .NET 4.5. Now, I can select the 4.5 Framework, satisfy everyone, and keep on trucking.


In the second step of the new MVC project process I’ll select Web API, since my main focus in this application is to create a location-aware API that can be used by multiple clients.


By default, the project template comes with a sample controller, ValuesController.cs, that demonstrates how to create Web API controllers. Nothing against that file, but I'll delete it right away, since I'll be adding my own functionality to this project.

Domain Entities

The first classes I'll add to this project represent the domain entities pertinent to the project's goals. The first of these model classes is the LocationEntity class. This class will be used in my Entity Framework layer to represent individual records in the database that are associated with locations on a map. The LocationEntity class is quite simple.
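Here's a rough sketch of its shape; the property names are my assumptions, so check the GitHub repository for the real thing:

public class LocationEntity
{
    public int Id { get; set; }
    public string Name { get; set; }

    // DbGeography is the EF Spatial type (System.Data.Spatial in .NET 4.5)
    public DbGeography Location { get; set; }
}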

Some of the metadata associated with a DbGeography object isn't easily or predictably serialized, so to minimize variability (okay, I'm a control freak when it comes to serialization) I've also created a class to represent a Location object on the wire. This class, the Location class, isn't that much different from the LocationEntity class aside from one thing: I'm adding explicit Latitude and Longitude properties to it. DbGeography instances offer a good deal more functionality, but I won't need that in this particular API example. Since all I need on the API side is latitude and longitude, I'll write some code in the API controller to convert the entity class to the API class.
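A sketch of that wire-friendly Location class – again, the exact shape is my assumption, but the point is that it carries only primitives:

public class Location
{
    public int Id { get; set; }
    public string Name { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }

    // handy for the proximity results later: distance from the caller, in meters
    public double Distance { get; set; }
}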

Essentially, I've created a data transfer object and a view-model object. Aside from the Entity Framework Spatial functionality, there's nothing really new here compared to previous API implementations I've done, which also required the database entity to be loosely coupled from the class the API or GUI uses to display (or transmit) the data.

Data Context, Configuring Migrations, and Database Seeding

Now that the models are complete, I need to work in the Entity Framework "plumbing" that gives the controller access to the database via EF's magic. The first step in this process is to write the data context class that provides the abstraction layer between the entity models and the database. The data context class is quite simple, as I've really only got a single entity in this example implementation.
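A sketch of that class (the class and set names are my assumptions; the connection string name passed to the base constructor matters, as explained next):

public class SpatialDemoDataContext : DbContext
{
    public SpatialDemoDataContext()
        : base("SpatialDemoConnectionString")
    {
    }

    public DbSet<LocationEntity> Locations { get; set; }
}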

Take note of the constructor, which is overridden from the base's constructor. This requires me to make a change in the Web.config file created by the project template. By default, the Web.config file is generated with a single connection string, named DefaultConnection. I need to either create a secondary connection string with the right name, change the default one (which I've done in this example), or use Visual Studio's MVC-generation tools to create an EF-infused controller, which will add a new connection string to the Web.config automatically. Since I'm coding this data context class manually, I just need to go into the Web.config and change the DefaultConnection connection string's name attribute to match the one I've used in this constructor override, SpatialDemoConnectionString. Once that's done, this EF data context class will use the connection string identified by that name in the configuration file.

During deployment, this becomes a very nifty facet of developing ASP.NET sites that are deployed to Azure Web Sites using the Visual Studio 2012 publishing functionality. We’ll get to that in a moment, though…

EF has this awesome feature called Migrations that gives EF the ability to set up and/or tear down database schema objects, like tables and columns and indexes (oh my!). So the next step during this development cycle is to set up the EF Migrations for this project. Rowan Miller does a great job of describing how EF Migrations work in this Web Camps TV episode, and Robert Green's Visual Studio Toolbox show has a ton of great content on EF, too, so check out those resources for more information on EF Migrations' awesomeness. The general idea behind Migrations, though, is simple – it's a way of allowing EF to scaffold database components up and down, so I won't have to do those tasks using SQL code.

What's even better than the fact that EF has Migrations is that I don't need to memorize how to do it, because the NuGet/PowerShell/Visual Studio gods have made it pretty easy for me. To turn Migrations on for my project, which contains a class that derives from EF's data context class (the one I just finished creating in the previous step), I simply type the command enable-migrations into the NuGet Package Manager Console window.

Once I enable Migrations, a new class is added to my project in a new Migrations folder, usually called Configuration.cs. That file contains a constructor and a method I can implement however I want, called – appropriately – Seed. In this particular use-case, I enable automatic migrations and add some seed data to the database.

Enabling automatic migrations basically assumes any changes I make will automatically be reflected in the database later on (again, this is super-nifty once we do the deployment, so stay tuned!).

Quick background on the types of locations we'll be saving: my wife and I moved from the Southeast US to the Pacific Northwest recently. Much to our chagrin, there are far fewer places to pick up great chicken wings than there were in the Southeast. So I decided to use our every-Sunday-during-football snack of chicken wings as a good use-case for a location-based app. What better example than a list of good chicken wing restaurants ordered by proximity? Anyway, that's the inspiration for the demo. Dietary recommendation is not implied, BTW.
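Here's a hedged sketch of that Configuration class, seeded with a couple of made-up wing joints (EF's AddOrUpdate keeps the Seed method idempotent across runs):

internal sealed class Configuration : DbMigrationsConfiguration<SpatialDemoDataContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
    }

    protected override void Seed(SpatialDemoDataContext context)
    {
        // WKT points are expressed as POINT(longitude latitude);
        // the names and coordinates below are purely illustrative
        context.Locations.AddOrUpdate(x => x.Name,
            new LocationEntity
            {
                Name = "Sample Wing Joint One",
                Location = DbGeography.FromText("POINT(-122.33 47.61)")
            },
            new LocationEntity
            {
                Name = "Sample Wing Joint Two",
                Location = DbGeography.FromText("POINT(-122.20 47.62)")
            });
    }
}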

The API Controller Class

With all the EF plumbing and domain models complete, the last step in the API layer is to create the API controller itself. I simply add a new Web API controller named LocationController to the Controllers folder, and change the code to make use of the plumbing work I've completed up to now.


This controller has one method that takes the latitude and longitude from a client. Those values are used in conjunction with EF Spatial's DbGeography.Distance method to sort the records by proximity, and then the first five records are returned. In other words, when a client provides its latitude and longitude coordinates to the API method, the five closest locations come back. The Distance method is used again to determine how far away each location is from the provided coordinates. The results are returned using the API-specific class rather than the EF-specific class (thereby separating the two layers and easing some of the potential serialization issues that could arise), and the whole output is formatted as either XML or JSON and sent down the wire via HTTP.
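Here's a sketch of that method; the type and property names follow the sketches above, and the completed code lives in the GitHub repository:

public class LocationController : ApiController
{
    private readonly SpatialDemoDataContext _db = new SpatialDemoDataContext();

    // GET /api/location?latitude=47.6&longitude=-122.3
    public IEnumerable<Location> Get(double latitude, double longitude)
    {
        // WKT points are expressed as POINT(longitude latitude)
        var caller = DbGeography.FromText(
            string.Format("POINT({0} {1})", longitude, latitude));

        return _db.Locations
            .OrderBy(x => x.Location.Distance(caller))  // closest first
            .Take(5)
            .ToList()
            .Select(x => new Location
            {
                Id = x.Id,
                Name = x.Name,
                Latitude = x.Location.Latitude ?? 0,
                Longitude = x.Location.Longitude ?? 0,
                Distance = x.Location.Distance(caller) ?? 0  // meters
            });
    }
}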

At this point, the API is complete and can be deployed to Azure directly from within Visual Studio 2012 using the great publishing features created by the Visual Studio publishing team (my buddy Sayed Hashimi loves to talk about this stuff, so ping him on Twitter if you have any questions or suggestions on this awesome feature-set).

Calling the Location API using an HTML 5 Client

In order to make this a more comprehensive sample, I've added some HTML5 client code and Knockout.js-infused JavaScript to the Home/Index.cshtml view that gets created by default with the ASP.NET MVC project template. This code uses the HTML5 geolocation capabilities to read the user's current position. The latitude and longitude are then used to call the location API directly, and the results are rendered in the HTML client using a basic table layout.

The final step is to deploy the whole thing up to Azure Web Sites. This is something I wasn’t able to do until last week, so I’m super-stoked to be able to do it now and to share it with you on a demo site, the URL of which I’ll hand out at the end of this post.

One Last NuGet to Include

Entity Framework Spatial has some new data types that add support for things like… well… latitude and longitude, in this particular case. By default, these types aren't installed into an Azure instance, as they're part of the database SDK, and most times those assemblies aren't needed on a web server, so you won't have them when you deploy. To work around this problem and to make Entity Framework Spatial work on the first try following your deployment to Azure, install the Microsoft.SqlServer.Types NuGet package into your project by typing install-package Microsoft.SqlServer.Types in the Package Manager Console or by finding the package manually in the "Manage NuGet References" dialog.

Thanks to Scott Hunter for this extremely valuable piece of information, which I lacked the first time I tried to do this. This solution was so obvious I hid in my car with embarrassment after realizing how simple it was and that I even had to ask. NuGet, again, to the rescue!

Once this package is installed, deploying the project to Azure will trigger automatic retrieval of that package, and the support for the location data types in SQL Server will be added to your site.

Publishing from Visual Studio 2012 is a Breeze

You’ve probably seen a ton of demonstrations on how to do deployment from within Visual Studio 2012, but it never ceases to amaze me just how quick and easy the team has made it to deploy sites – with databases – directly up to Azure in so few, simple steps. To deploy to a site from within Visual Studio 2012, I just right-click the site and select – get this – Publish. The first dialog that opens gives me the option to import a publish settings file, which I downloaded earlier just after having created the site in the Azure portal.


Once the file is imported, I’m shown the details so I have the chance to verify everything is correct, which I’ve never seen it not be, quite frankly. I just click Next here to move on.


This next step is where all the magic happens that I've been promising you'd see. This step, specifically the last checkbox in the publish dialog, points to the database I created earlier when I initially created the "site with database" in the Azure portal. If I check that box, then when I deploy the web site, the database schema will be automatically created for me, and the seed data will be inserted and ready when the first request to the site is made. All that, just by publishing the site!


Can you imagine anything more convenient? I mean seriously. I publish my site and the database is automatically created, seeded, and everything wired up for me using Entity Framework, with a minimal amount of code. Pretty much magic, right?

Have at it!

Now that the .NET 4.5 Framework is supported by Azure Web Sites, you can make use of these and other new features, many of which are discussed or demonstrated at www.asp.net's page set aside just for ASP.NET 4.5 awesomeness. If you want to get started building your own location APIs on top of Entity Framework Spatial, grab your very own Azure account here, which offers all kinds of awesomeness for free. You can take the sample code for this blog, or copy the gists and tweak them however you want.

Happy Coding!


Custom Authentication with MVC 3.0

During a friendly code review discussion a week or so ago, I realized I'd forgotten my favorite approach to custom authentication/authorization functionality as an alternative to ASP.NET P&M. Though I definitely prefer P&M to rolling my own from scratch – to the extent that I've gone as far as to use it as a pass-through – there are times when P&M is too much, or too little, so custom schemes must be developed. The thing that surprises me each time I observe a custom scheme is the lack of usage of the IPrincipal/IIdentity interfaces. With MVC 3.0 as my major weapon of choice in web development and my recent adoption of P&M, it became obvious that an opportunity had popped up for mixing my favorite old-school .NET trick into my favorite new-school paradigm.

Here's the thing: if you don't use the IIdentity and IPrincipal interfaces when you create a custom authentication/authorization solution, you've completely missed the boat, and I guarantee you will write 100 times more code – code that will be more brittle – than you would have written had you just used them in the first place.


A quick look at what these two interfaces offer is in order. You'll see that the IIdentity interface is typically considered a requirement for the IPrincipal to exist; obviously this model fits rather well with MVC's favoritism toward IoC/DI practices. The principal wraps around the identity, supplying access to a user's roles. So long as your classes implement the requirements, they can be bound to the HTTP context or the current thread – basically attached to the process – so that all the niceties, like use of the [Authorize] attribute and role-based location security via the web.config file, are possible without tons of rework.

 

The first class needed in any custom implementation is the identity class. The main purpose of this implementation is to represent the user's name, really, as well as how the user was authenticated and whether they are authenticated at all. Should the IsAuthenticated property return false at run-time, code later on assumes the user is anonymous.

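A sketch of such an identity class (the member bodies here are my assumptions):

public class CustomIdentity : IIdentity
{
    private readonly string _name;

    public CustomIdentity(string name)
    {
        _name = name ?? string.Empty;
    }

    public string Name
    {
        get { return _name; }
    }

    public string AuthenticationType
    {
        get { return "Custom"; }
    }

    // an empty name means nobody logged in; the user is anonymous
    public bool IsAuthenticated
    {
        get { return !string.IsNullOrEmpty(_name); }
    }
}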

Next comes the CustomPrincipal class. Since this class implements IPrincipal, it can be bound directly to a thread (or an HTTP context). Since it can be used in that manner, all the functionality and support offered via web.config-based authorization and the [Authorize] attribute will be maintained, and you won't have to write it (or support it, or debug it, and so on). Note the constructor's argument, which illustrates the connection between the two classes.

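And a sketch of the principal, whose constructor takes the custom identity (the role storage here is an assumption):

public class CustomPrincipal : IPrincipal
{
    private readonly CustomIdentity _identity;
    private readonly string[] _roles;

    public CustomPrincipal(CustomIdentity identity, params string[] roles)
    {
        _identity = identity;
        _roles = roles ?? new string[0];
    }

    public IIdentity Identity
    {
        get { return _identity; }
    }

    public bool IsInRole(string role)
    {
        return Array.IndexOf(_roles, role) >= 0;
    }
}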

To wrap these classes up via a native event handler or inherited method, we'll create a BaseController. It could – and probably should – be done via an injected service, obviously, but I'm honoring the KISS principle [you say principle, I say principal] for the purposes of this walk-thru. The base controller is shown below; its main purpose in existing is to authorize the active user account prior to allowing execution of the controller action.

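A sketch of that base controller; it assumes the SimpleSessionPersister utility shown a little later in this post:

public class BaseController : Controller
{
    // runs prior to each action; binds our principal to the request and thread
    protected override void OnAuthorization(AuthorizationContext filterContext)
    {
        var identity = new CustomIdentity(SimpleSessionPersister.Username);
        var principal = new CustomPrincipal(identity);

        filterContext.HttpContext.User = principal;
        System.Threading.Thread.CurrentPrincipal = principal;

        base.OnAuthorization(filterContext);
    }
}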

Caveat: This next part isn’t a recommendation, it’s only used for the purposes of keeping this explanation simple. Forgive me.

Now that we've satisfied the authorization part, we've got to authenticate the user and store their login information somewhere for the life of their… yeah, you guessed it, for the life of their session. We're going to use Session here, okay, but just for the demo. It isn't a recommendation. Please turn off the tape recorder.

The SimpleSessionPersister class below persists the user's name into a session variable [ducks under flying tomato]. The login controller action below the utility class just validates that the user provided something for their username and password and, if so, persists the username to session for use later during the HTTP request – by the BaseController, specifically.

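A sketch of both pieces – the session-backed persister and a login action that uses it (again, the exact shapes are my assumptions):

public static class SimpleSessionPersister
{
    public static string Username
    {
        get { return HttpContext.Current.Session["Username"] as string; }
        set { HttpContext.Current.Session["Username"] = value; }
    }
}

public class LoginController : BaseController
{
    public ActionResult Login()
    {
        return View();
    }

    [HttpPost]
    public ActionResult Login(string username, string password)
    {
        // demo-grade validation: anything non-empty counts as a login
        if (!string.IsNullOrEmpty(username) && !string.IsNullOrEmpty(password))
        {
            SimpleSessionPersister.Username = username;
            return RedirectToAction("Index", "Home");
        }

        return View();
    }
}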

You can relax now, that part’s over.

Now that the user has been authenticated, we can assume that each controller we create (provided it inherits from our custom base) will authorize each individual request per best-practice role-based security. Take the following controller action, which requires the user to be authenticated via the [Authorize] attribute, and the web.config below it, which indicates the login page URL as our Login controller action.

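A sketch of the pair (the Members action name is made up for illustration):

// on any controller that inherits BaseController
[Authorize]
public ActionResult Members()
{
    // only authenticated users ever reach this action
    return View();
}

// and in web.config:
// <authentication mode="Forms">
//   <forms loginUrl="~/Login/Login" />
// </authentication>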

Should a user try to hit the non-anonymous page, they'd be redirected to the login page.


To review, here's what we had to do to accomplish our very own custom-made, end-to-end security paradigm that still allows usage of .NET's built-in role-based security resources:

  1. Implement IIdentity in a custom identity class
  2. Implement IPrincipal in a custom principal class that wraps the identity and exposes roles
  3. Create a BaseController that attaches the principal to the HTTP context during authorization
  4. Persist the username at login time so the base controller can find it on later requests
  5. Decorate protected actions with [Authorize] and point the web.config login URL at the Login action

Happy Coding! If you'd like to take a peek at the code for this demo and debug it for better observation as to how it all works together, you can download the code from my DropBox account.


MyNuGets - An Orchard Module for NuGet Fans

If you enjoy using NuGet to obtain and distribute your open source work and you maintain a blog or a site using Orchard, this post's for you. I've written a widget that will allow you to enter your author name. Then, the widget will hit the NuGet OData API and find any packages published with that author name. From that, the HTML is built up through the Orchard pipeline and what you get is a nice list of NuGet projects. Check it out for yourself on the MyNuGets project page or in the Orchard Gallery.


NHQS NuGet Package

This evening I was finally able to get the NHQS library up on NuGet. As always, you can use either the NuGet package explorer or the command-line tool to grab yourself NHQS. 
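If you prefer the console, and assuming the package id is simply NHQS, it's a one-liner:

PM> Install-Package NHQS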

I'll be maintaining most of the general information about NHQS on the main NHQS page here on my site, which you can see in the navigation bar above. Hope NHQS makes your life with NHibernate a little easier!

Happy coding!


The Robot Factory Kata

On the drive home from my last Behavior Driven Development talk, I began thinking about the idea of Code Katas and how one might be appropriate in my future discussions of Behavior Driven Development. Given that BDD tries to solve things in as simple and direct a path as possible, and given that BDD takes some of the lessons learned via TDD and applies them in slightly more business-centric language, a Kata would demonstrate well the effectiveness of BDD when applied to a problem domain.

So, I took the example problem domain of a robotic assembly line that I've been using since I was training full-time under the guidance of J. Michael Palermo IV and implemented it using SpecFlow and Moq. So far, the Robot Factory Kata video series has two videos.

The first of these demonstrates the project setup and configuration, and starts solving the problem using SpecFlow specifications and NUnit tests. 

The second part demonstrates mocked-method verification, using callback expectations to test the interaction between two objects. Sure, this is getting into integration testing, but the idea is to demonstrate BDD using a legitimate problem domain that's slightly more interesting than the construction of a calculator. Not that there's anything wrong with that, but you know, it helps to have variety.

Hope you enjoy these video demonstrations and that they motivate you to start BDD'ing today. 


Introducing the NHibernate QuickStart (NHQS, Part 1)

If you've worked with me over the past year or you've been to one of my alt group meetings, chances are I've mentioned, demonstrated, or maybe stuffed down your throat, the NHQS thingamabob. With the improvements in NHibernate following .NET 4.0, and the experience I gained in a few projects on the previous release, I've decided to rewrite NHQS for NHibernate 3.x. I like the improvements in NHibernate, and with those and a slightly renewed focus on my goal for NHQS, I felt it might be worthwhile to introduce it on its own into the wild. This post will introduce the world to NHQS and provide a guideline for its usage.

As usual, the project is only as effective as the goal it intends to serve, so let's get that out of the way.

A lot of developers want to use NHibernate, or Fluent NHibernate, or some flavor thereof, but don't want to learn the layers and layers of how-and-why it works just to get up and running. NHQS should mitigate this confusion and ease the transition into using NHibernate. A secondary goal of NHQS is to make it easy for a developer to access any and all persistent storage mechanisms supported by NHibernate in a fluent and domain-centric manner. At the highest level, a developer facilitating object persistence via NHQS should neither need to know nor be exposed to the underlying persistence mechanism.

To accomplish that goal we’re going to walk through the steps of setting up and using NHQS to access multiple databases via the almost-too-simple-to-be-legit fluent/generic interfaces exposed almost solely via extension methods. I’m sure I’ll get burned at the stake for failing to follow someone’s NHibernate Best Practice 101 list, so I’ll apologize up front. With that, here’s the synopsis of the blog series, which I aim to complete by the end of this week.

  1. Introduction (this post)
  2. Session Factory Creation and Containment
  3. Session Fluency Extensions
  4. Transactions and Multiple Database Support
NHQS in 30 Seconds

There are a few steps one must follow to get their persistence mechanism accessible via NHQS. Here they are:

  1. Implement the ISessionFactoryCreator interface via a class of your own (we’ll cover this in a moment)
  2. Add code to the startup of the calling application (or in your TestFixtureSetup method) that calls the Create method of that class to create an NHibernate ISessionFactory object instance
  3. Take the resultant ISessionFactory and store it using the SessionFactoryContainer class
  4. Write DDD-like code to get objects from databases and perform CRUD operations via an arsenal of extension methods attached to areas of NHibernate
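Here's a hedged sketch of those four steps in code. The type and method shapes are assumptions based on the names above (the real API is in the GitHub source), and Person stands in for one of your domain classes:

// step 1: implement ISessionFactoryCreator, here with Fluent NHibernate
public class DemoSessionFactoryCreator : ISessionFactoryCreator
{
    public ISessionFactory Create()
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(c => c.FromConnectionStringWithKey("DemoConnectionString")))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Person>())
            .BuildSessionFactory();
    }
}

[TestFixtureSetUp]
public void Setup()
{
    // steps 2 and 3: create the factory and store it in the container
    SessionFactoryContainer.Add(new DemoSessionFactoryCreator().Create());
}

[Test]
public void CanSaveAPerson()
{
    // step 4: fluent, domain-centric CRUD via the extension methods
    SessionFactoryContainer.For<Person>()
        .OpenSession()
        .Save<Person>(new Person { FirstName = "Test" });
}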

The second post in this series will begin the process of using NHQS in a real-world situation by simplifying the art of creating ISessionFactories and storing them in a container, so check back soon.

If you'd like to skip the blogs and look at the source, feel free – I keep the NHQS source on GitHub.


NHQS Part 3- Session Fluency Extensions

Though this might be the most poorly named aspect of NHQS, it is my favorite part, for it takes a huge part of the complexity associated with NHibernate and makes it simple. Dirt simple. Like, "you're kidding, right? That couldn't possibly work," simple.

In part 2, the session factory containment methodology was discussed and demonstrated. During session factory containment, NHQS does a wee little trick: it associates each of the domain (or, if you prefer, entity) types with the session factory responsible for that domain type's CRUD responsibilities.

What? Huh?

In other words, during containment NHQS learns that when asked for a Person it should hand back session factory A, and when asked for an Order it should hand back session factory B. Whatever domain type is requested, the framework knows which session factory to ask for that domain object; it should be transparent to the developer using NHQS.

Take a look again at the test fixture setup for the NHQS demonstration tests. This setup function was introduced in the previous post, but has been augmented to help explain the topic in this post. The relevant addition is the fluent save at the end.

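A sketch of that setup, using the same assumed shapes as elsewhere in this series:

[TestFixtureSetUp]
public void SetupTestFixture()
{
    SessionFactoryContainer.Add(new PersonSessionFactoryCreator().Create());
    SessionFactoryContainer.Add(new CustomerSessionFactoryCreator().Create());

    // the relevant addition: persist a test record via the fluent extensions
    SessionFactoryContainer.For<Person>()
        .OpenSession()
        .Save<Person>(new Person { FirstName = "Test", LastName = "Person" });
}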

To further explain what's going on, let's walk through that addition, piece by piece, and get a good understanding of the steps taking place at execution time.

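First, the container lookup:

SessionFactoryContainer.For<Person>()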

This first part looks in the session factory container and finds the session factory that has been set up to perform CRUD operations against the SQLCE database that stores Person class instances. The return of the For method is, obviously, the instance of the ISessionFactory interface that was previously stored in the container. Note – this isn't creating a new session factory each time, because, as pointed out in a previous post, that's probably the single most expensive [RE: painful] thing you can do with NHibernate. Since each session factory is created once and maintained within the session factory container, the container just hands it back based on the type of domain object needed by the calling code.

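Next, the chain continues by opening a session:

    .OpenSession()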

Since the For method hands back a session factory instance, this line should be relatively self-explanatory – it creates an NHibernate ISession instance and hands it back to the calling code. Now for the good stuff, the session extension methods.

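And finally, the save itself:

    .Save<Person>(person)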

Hanging off of the session object are a few NHQS extension methods. Located within the root NHQS namespace, these methods do pretty much what their normal method equivalents do – the CRUD operations against the database. The Save method takes an instance of the domain object type represented in the generic argument and saves it to the NHibernate session and eventually, the SQLCE database.

Moving on, we need to have a unit test to make sure we stored some data. As with the Save extension method, NHQS provides a retrieval methodology that uses LINQ Expressions to get to the data in the database.

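A sketch of such a test; the name and shape of the expression-based retrieval extension are my assumptions:

[Test]
public void CanRetrieveSavedPeople()
{
    var people = SessionFactoryContainer.For<Person>()
        .OpenSession()
        .Query<Person>(p => p.LastName == "Person");

    Assert.IsTrue(people.Any());
}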

This test will, obviously, pass, as the setup routine saved some data to the database. We can also ask for the specific data we saved to make sure the test isn’t returning other data that’s been persisted at some other time. The second unit test below demonstrates this in action.

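A second test, asking for the specific record saved during setup (same assumed shapes):

[Test]
public void CanRetrieveTheSpecificPersonSavedDuringSetup()
{
    var person = SessionFactoryContainer.For<Person>()
        .OpenSession()
        .Query<Person>(p => p.FirstName == "Test" && p.LastName == "Person")
        .FirstOrDefault();

    Assert.IsNotNull(person);
}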

Most of the extension methods have overloaded options, too, if you want to continue to chain together operations in a fluent manner. The test below demonstrates an example of this approach in action.

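One possibility, assuming overloads that hand the session back so the calls can keep chaining:

[Test]
public void CanChainOperationsFluently()
{
    SessionFactoryContainer.For<Person>()
        .OpenSession()
        .Save<Person>(new Person { FirstName = "First" })
        .Save<Person>(new Person { FirstName = "Second" });
}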

If you watch the debugging log generated at run-time when this unit test (and the setup) executes, you'll see just how much work NHibernate is doing under the hood. As you'll see, NHQS makes CRUD activities a lot simpler than writing multiple methods of database logic. Obviously, it makes such processes a little more testable, too.


This post sums up probably the biggest benefit NHQS could provide most developers – easy, fluent access to their domain persistence via simple, domain-centric language and chainable methods.

The final post in this series (which may be split into two posts) will demonstrate two more important facets of NHQS – multiple-database access and the big kahuna topic in most database/ORM discussions – transaction management. We’ll peek at these two topics in the next few days and wrap up this NHQS introduction.

I hope you’ve enjoyed the series so far!


NHQS Part 4- Multiple Database Connectivity and Transaction Management

Most of the previous posts have alluded to the multiple-database goal of NHQS, and some of the code has hinted at the feature. This post will take a longer look at how NHQS provides multiple-database support and will wrap up by demonstrating transaction management within NHQS.

Multiple Database Connectivity

As demonstrated in the previous posts' unit test setup methods, NHQS's provision for multiple-database support relies on the idea that multiple session factories are contained automatically by NHQS, and during that containment, associations are made between the session factories and the domain entities they service. Take another look at the test fixture setup, which has been modified in this post to provide the initial persistence of testing data.


In this example, recall from the first post, we'll be working with two different domain-entity projects, each of which is serviced by its own session factory. Since both session factories are contained by NHQS, the domain language specifies where the objects are sent, and the respective session factories do the rest.

To further understand how awesome this idea is, take a peek at the test project's configuration file. Note how the first connection string points to a SQLCE database, and the second points to a SQL Server 2008 database. So long as NHibernate supports the RDBMS you're targeting, NHQS can target it.


The test below completes this section of the multiple-database examination by demonstrating how data from one of the databases tied to the application – the Person SQL CE table – can be pulled and inserted into a table in another database – the Customer SQL Server table.

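A sketch of that test, with the retrieval and save shapes assumed as before:

[Test]
public void CanMovePersonDataIntoTheCustomerTable()
{
    var person = SessionFactoryContainer.For<Person>()
        .OpenSession()
        .Query<Person>(p => p.FirstName == "Test")
        .First();

    // same fluent language, different database and RDBMS entirely
    SessionFactoryContainer.For<Customer>()
        .OpenSession()
        .Save<Customer>(new Customer { Name = person.FirstName + " " + person.LastName });
}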

This unit test demonstrates how you could potentially use domain language and NHQS to perform data migrations – even when the database platform is completely different.

Transaction Management

Fans of the Unit of Work pattern/concept will appreciate this section of the post. Those with extensive NHibernate experience understand the idea of managing transactions explicitly rather than just expecting implicit transactions. If you’ve ever used NHProf to profile and trace your NHibernate queries you’ll know explicit transaction usage is an area it warns about repeatedly.

Basically, the rule of thumb with NHibernate is that one should always perform the work of saving, updating, and deleting related entities to an underlying persistence mechanism in the context of an explicit transaction. Most problems or misunderstandings about when data truly gets saved in NHibernate can be mitigated by using explicit transactions. Most generic approaches run into trouble when dealing with transactions, so NHQS had a goal of allowing the developer to manage their transactions separately from the internal workings of NHQS.

Take a look at the RealWorkWrapper method within NHQS, which demonstrates the way it works. The method peeks at the NHibernate session to see if it is involved in a transaction. If not, it starts its own and does the work. If so, the method just remains within that transaction as it does its own work.

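Here's a sketch of that idea; the real method body in the NHQS source may differ:

static void RealWorkWrapper(ISession session, Action work)
{
    if (session.Transaction != null && session.Transaction.IsActive)
    {
        // the caller owns the transaction; just join in
        work();
    }
    else
    {
        // nobody started one, so wrap the work in our own
        using (ITransaction tx = session.BeginTransaction())
        {
            work();
            tx.Commit();
        }
    }
}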

One of the NHQS CRUD methods, Save, is sketched below. These two code snippets aren't meant to dive deep into how NHQS works (that would defeat the black-box idea in the first place, right?), but rather to demonstrate how NHQS will use the existing transaction when one exists and, if not, create its own. The idea is to keep in line with an NHibernate best-practice recommendation (or with my interpretation of it, which is open to review).

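A sketch of the Save extension, which leans on that wrapper (the exact signature is an assumption):

public static ISession Save<T>(this ISession session, T entity) where T : class
{
    RealWorkWrapper(session, () => session.Save(entity));
    return session;  // hand the session back so calls can chain
}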

Here's the point of that shallow dive: when developers want to do something in NHibernate specifically, and maybe use NHQS here and there, they have the freedom to work in their own way and to mix in NHQS by simply re-using an existing NHibernate session. This way, NHQS can be added or removed in-line with your current NHibernate implementation without having to change a whole lot. Unit-of-work aficionados will appreciate the ability to re-use the transaction for multiple database procedures, so that CRUD operations can be grouped together to ensure validity.

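A sketch of such a test; DoWithTransaction's exact signature is an assumption, and the numbered comments match the steps listed below:

[Test]
public void ExceptionInsideTransactionPreventsCommit()
{
    Assert.AreEqual(0, CountPeople());   // 1. nothing there yet

    try
    {
        SessionFactoryContainer.For<Person>().OpenSession().DoWithTransaction(session =>
        {
            session.Save<Person>(new Person { FirstName = "First" });             // 2.
            session.Save<Person>(new Person { FirstName = "Second" });            // 3.
            Assert.AreEqual(2, session.Query<Person>(p => p.Id > 0).Count());     // 4.
            throw new InvalidOperationException("bail before the commit");        // 5.
        });
    }
    catch (InvalidOperationException) { }

    Assert.AreEqual(0, CountPeople());   // 6. the commit never happened
}

int CountPeople()
{
    return SessionFactoryContainer.For<Person>()
        .OpenSession()
        .Query<Person>(p => p.Id > 0)
        .Count();
}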

Examine this unit test to see how it would work (and why it would be valid):

  1. Verify there are no records in the table
  2. Add one person
  3. Add a second person
  4. Verify that two records are in the table
  5. Throw an exception. Since the steps are being performed in a transaction – via the DoWithTransaction call wrapping the unit of work – throwing the exception means the transaction's Commit method is never called, so…
  6. Verify that no records exist in the database

In Conclusion

I hope you’ve enjoyed this brief encounter with NHQS. As was stated in the original post, it had a few clear goals. I hope NHQS makes your life easier and, if you’ve recently embarked into using NHibernate, that it makes that process a little less painful. If you have any suggestions, or have extensive experience with NHibernate and find yourself strongly objecting to how NHQS does things, please leave a comment. I’m always trying to improve not only NHQS, but my understanding of how NHibernate works.


ASP.NET MVC, JSON, and Prototype

I've tinkered with posting JSON between the browser and the server via a generic handler before. When the new MVC Preview came out with native support for JSON, I knew I needed to do some further tinkering. This post will describe in brief how to perform such a thing: I'll demonstrate a login process that uses JSON communication between the browser and an MVC Controller action method that returns a JsonResult instance. Sounds tricky, but that's where Prototype helps us – it makes the communication process pretty easy. I'll try to be short and sweet here and will keep the code discussion to a bare minimum.

Environment Setup 

First and foremost, you'll need to learn the basics of the ASP.NET MVC approach and find the two downloads over at ScottGu's blog post on the topic. That's about the best place to start. Once you've got everything installed you should create a new MVC project. Add a folder called Login to the Views folder – login is a relatively simple feature that everyone must implement at some point or another. I'll use the Prototype JavaScript framework for this, so make sure you download Prototype and put it into your web project. Finally, add a reference to the script in your Site.Master page, which should be in the Views/Shared folder.

Creating the Login Index View and Controller

Within the new Login folder, create an MVC View Content Page named Index.aspx.  This page will contain some HTML code, as you'll see below. This code contains some form elements and some JavaScript code. The JavaScript code is what will perform the duty of packing up the data collected from the form in the structure of a JavaScript object called Login. Once the object is created, it will be shipped over HTTP to the server.

 

   

       
<!-- a sketch of the original markup: the form fields are packed into a
     Login object and posted as JSON via Prototype's Ajax.Request -->
<form id="loginForm" action="#">
    <div>
        username <input type="text" id="username" />
    </div>
    <div>
        password <input type="password" id="password" />
    </div>
    <div>
        <input type="button" value="login" onclick="login();" />
    </div>
</form>

<script type="text/javascript">
    function login() {
        var login = { Username: $F('username'), Password: $F('password') };
        new Ajax.Request('/Login/Authenticate', {
            method: 'post',
            contentType: 'application/json',
            postBody: Object.toJSON(login),
            onSuccess: function (response) {
                alert(response.responseText == 'true' ? 'Welcome!' : 'Invalid login.');
            }
        });
    }
</script>
   
   

 

Controlling Things

Take note of the URL in the call to the Ajax.Request constructor. The call is made to /Login/Authenticate. Understanding that step will give you a better comprehension of how URLs are converted into useful routes on the server. The first step is to create a Controller class. Since I've created a Views folder named Login, I must create the class that controls all login-related procedures. This class, LoginController, will contain action methods that perform the various actions the user will need during the login routines. The first of these – the Index view – needs to be controlled first.

 

public class LoginController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

 

This one's pretty obvious – the call to View() will just render the Index view I already created, with the login form and JavaScript call. The next one isn't so obvious and requires a quick glance at some helper methods I've written into a JsonHelper class. This class just makes some of the heavy lifting easier in a moment. The ToJson extension method wouldn't have been possible without a post from ScottGu's blog.

 

using System.IO;
using System.Web.Script.Serialization;

public static class JsonHelper
{
    public static string ToJson(this object target)
    {
        JavaScriptSerializer ser = new JavaScriptSerializer();
        return ser.Serialize(target);
    }

    public static T FromJson<T>(string json)
    {
        JavaScriptSerializer ser = new JavaScriptSerializer();
        return ser.Deserialize<T>(json);
    }

    public static T FromJson<T>(Stream stream)
    {
        StreamReader rdr = new StreamReader(stream);
        return FromJson<T>(rdr.ReadToEnd());
    }
}

 

The last step in our controller class is the actual niftiness of the whole post. Within this method, the code takes a peek at the current request – specifically the incoming JSON string contained within the request's InputStream property. This is where I'll use my JsonHelper class, which makes it a snap to deserialize the body of an HTTP request into a .NET object.

 

public JsonResult Authenticate()
{
    Login login = JsonHelper.FromJson<Login>(this.Request.InputStream);
    return new JsonResult
    {
        Data = (login.Username == "username" && login.Password == "password")
    };
}
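The Login type referenced above doesn't appear anywhere else in these snippets; it's just a simple shape matching the JavaScript object sent from the browser:

public class Login
{
    public string Username { get; set; }
    public string Password { get; set; }
}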

 

The Final Touch

Last, I'll need to give the user some way of finding the Login's Index view so I'll add a link to the view in the Site.Master file's navigation section.

 

<li><a href="/Login/Index">Login</a></li>

 

Once you've done all this you should be able to run the code with success. I'll get around to uploading this sample onto the server so you can see it working in the next few days, but I couldn't wait to share this technique. Using Prototype to communicate with an ASP.NET MVC Controller class's JsonResult methods makes for a powerful combination of tools to build truly rich, interactive applications that run in all modern browsers.

Happy Coding! 

 


Dependency Injection Example - Constructor Injection and Service Orientation

With all the talk about DI on weblogs and in technical conversations within my own organization, it's difficult to dismiss it as little more than the latest "new black" pattern. I've given it some considerable thought, and until quite recently didn't really comprehend the overall niftiness of the DI approach. As with anything else it took a moment of "a-HA" to really grasp the power of DI; I was developing a custom CruiseControl.Net build plugin and realized that the plugins are injected dynamically at construction time. If you debug one of these plugins, you'll notice that the plugin constructors don't match a common parameter structure. The one commonality throughout all the plugins I was investigating as examples for my own education seemed to be in the parameter types – they were all interfaces, implementations of which were usually stored in the CCNet server application domain.

Services, basically. My interest in service orientation piqued my interest in DI; the two concepts appeared mutually beneficial. So I opened the laptop and started coding away on my own implementation of a DI framework, with tests (since it's virtually impossible to talk about DI without talking about tests, too). This blog post is an investigation into my own implementation. It doesn't offer up DI as a "holier-than-thou" approach nor as a dismissable coding trend. Rather, it is my first attempt to prove to myself that I am getting this DI stuff and to implement it in my own words. *cracks knuckles*

First Things First - Support Laziness

As with any framework, I anticipate that the easier it is to use, the better chance I'll have of talking someone else into giving this a shot or a glance. So I knew that I wanted my approach to be very simple to use, quick to implement, and hopefully, make the approach of using DI more interesting. My first question to every pattern is "does it make my coding process easier and more flexible?"

My discomfort with the ServiceContainer approach is that someone actually has to "Add" the service implementations to the service container. Hence, they have to know about the container up front and write the registration code themselves.

That requirement, most times, is where I get pretty aggravated when I'm using someone else's framework. It's a requirement I must not impose on my audience, but one I can't get started without. So a compromise is in order: I'll use simple metadata to identify the interfaces the developer intends to be added to the service layer at run-time.

/// <summary>
/// Defines an interface as one that should be created and hosted by the
/// service host during application run-time.
/// </summary>
[AttributeUsage(AttributeTargets.Interface, AllowMultiple = false)]
public class DependencyServiceInterfaceAttribute : Attribute
{
}

Using the DSI attribute I can mark any interface that I intend to be added to a service container instance within the application domain. This makes it really easy to use, as the code below indicates. 

[DependencyServiceInterface]
public interface IMockServiceA
{
    void DoMockServiceWork();
}

[DependencyServiceInterface]
public interface IMockServiceB
{
    void DoMoreWork();
}

public class MockServiceA : IMockServiceA
{
    public void DoMockServiceWork()
    {
        System.Diagnostics.Debug.WriteLine("Doing Mock Service A's Job");
    }
}

public class MockServiceB : IMockServiceB
{
    public void DoMoreWork()
    {
        System.Diagnostics.Debug.WriteLine("Doing Mock Service B's Job");
    }
}

I create new interfaces and mark them with the DSI attribute. Then I implement each interface with a custom class containing some basic functionality. Notice that I don't have to do anything to the classes themselves; the DI layer we'll investigate next will do that work for us.

Comprehensive - Sure, Why Not?!

If you can see where I'm going with this you're most likely scratching your head and saying "no way, not everything…" Looking through every assembly for implementations of interfaces marked with metadata isn't the most performant approach to type-loading, but for now it will serve its purpose. Think of it this way – I'm making exhaustively sure I won't miss any service implementation that might be needed later by a dependent class. If I suspect that any interface I've got implementations of will ever be needed by a dependent class, I can just slap the DSI attribute onto the interface and off we go.

The code that performs this exhaustive search is below. The DependencyServiceContainer does just what you suspected – it looks through everything in the application domain to find any implementations of any interfaces that have been marked with the DSI attribute. Whenever it finds such an implementation, an instance of it is created and added via the base method ServiceContainer.AddService.

public class DependencyServiceContainer : ServiceContainer
{
    public static DependencyServiceContainer Instance
    {
        get { return _instance; }
    }

    static DependencyServiceContainer _instance;

    static DependencyServiceContainer()
    {
        _instance = new DependencyServiceContainer();
    }

    internal DependencyServiceContainer()
    {
        Preload();
    }

    private int _svcCount;

    public int ServiceCount
    {
        get { return _svcCount; }
    }

    // walk every assembly in the application domain looking for implementations
    void Preload()
    {
        foreach (Assembly assm in AppDomain.CurrentDomain.GetAssemblies())
        {
            SearchAssemblyForDSIAttributes(assm);
        }
    }

    void SearchAssemblyForDSIAttributes(Assembly assm)
    {
        foreach (Type tp in assm.GetTypes())
        {
            if (!tp.IsInterface)
            {
                SearchTypeForInterfaces(tp);
            }
        }
    }

    // when a concrete type implements a DSI-marked interface, create an
    // instance and register it with the base ServiceContainer
    void SearchTypeForInterfaces(Type t)
    {
        foreach (Type intrfc in t.GetInterfaces())
        {
            if (IsInterfaceDSI(intrfc))
            {
                AddService(intrfc, Activator.CreateInstance(t));
                _svcCount++;
            }
        }
    }

    internal static bool IsInterfaceDependencyInjectable(Type intrfc)
    {
        return (
            (intrfc.GetCustomAttributes(typeof(DependencyServiceInterfaceAttribute), false).Length > 0)
            &&
            (intrfc.IsInterface)
        );
    }

    bool IsInterfaceDSI(Type intrfc)
    {
        return DependencyServiceContainer.IsInterfaceDependencyInjectable(intrfc);
    }
}

Activation of Dependent Objects - the Point

Now that the DI framework has a service container into which services required by dependent classes have been added the logic to create dependent objects must be created. Basically, we're going to use reflection to inspect dependent classes. During reflection each constructor signature will be observed. When a constructor is found that has parameters of interface types that are all being held within the DependencyServiceContainer, the constructor will be called and the resulting object returned. 

public class DependentClassActivator
{
    static DependentClassActivator _instance;

    static DependentClassActivator()
    {
        _instance = new DependentClassActivator();
    }

    public static DependentClassActivator Instance
    {
        get { return _instance; }
    }

    public T CreateInstance<T>() where T : class
    {
        Type tp = typeof(T);

        // find the first constructor whose parameters can all be
        // satisfied by services held in the DependencyServiceContainer
        foreach (ConstructorInfo ctor in tp.GetConstructors())
        {
            if (Observe(ctor))
            {
                return InvokeConstructor<T>(ctor);
            }
        }

        return null;
    }

    #region Private Helper Methods

    bool Observe(ConstructorInfo ctor)
    {
        foreach (ParameterInfo prm in ctor.GetParameters())
        {
            if (!Observe(prm)) return false;
        }

        return true;
    }

    bool Observe(ParameterInfo prm)
    {
        return (
            DependencyServiceContainer.IsInterfaceDependencyInjectable(prm.ParameterType) &&
            GetTypeFromServiceContainer(prm.ParameterType)
        );
    }

    bool GetTypeFromServiceContainer(Type intrfc)
    {
        return (DependencyServiceContainer.Instance.GetService(intrfc) != null);
    }

    T InvokeConstructor<T>(ConstructorInfo ctor) where T : class
    {
        object[] prms = GetConstructorParametersFromServiceContainer(ctor);
        return ctor.Invoke(prms) as T;
    }

    object[] GetConstructorParametersFromServiceContainer(ConstructorInfo ctor)
    {
        List<object> prms = new List<object>();

        foreach (ParameterInfo prm in ctor.GetParameters())
        {
            prms.Add(DependencyServiceContainer.Instance.GetService(prm.ParameterType));
        }

        return prms.ToArray();
    }

    #endregion
}

Here's the basic run-down. The DependentClassActivator looks at all of a class's constructors – specifically, at each constructor's parameters. When it finds a constructor whose parameters can all be found in the DependencyServiceContainer, it calls that constructor. So you don't have to call your class's constructors directly any longer; instead, you ask the DependentClassActivator to give you an instance. Something like this:

MockServiceA a = DependentClassActivator.Instance.CreateInstance<MockServiceA>();

 

To use a metaphor, it's like walking into a kitchen in which everything you ever need to make anything is already there. You need a spatula, you got it, you need a mixer, you got it. And so on.

How Do You Test It?

Possibly the most important aspect of all this is the ability to test it. In fact, code written with this DI framework is rather simple to test. To explain how you'd use this approach, we'll consider the ever-relevant banking scenario. Below you'll see the test code for all the functionality described earlier: some interfaces that have been marked with the DSI attribute, some implementations to create and use, and a set of tests that tie it all together with a mock banking example.

 

#region Bank Service Interfaces and Implementations

public class Account
{
    private int _accountId;
    private decimal _bal;

    public decimal Balance
    {
        get { return _bal; }
        set { _bal = value; }
    }

    public int AccountId
    {
        get { return _accountId; }
        set { _accountId = value; }
    }
}

[DependencyServiceInterface]
public interface IAccountLookupService
{
    Account FindAccount(int accountId);
}

[DependencyServiceInterface]
public interface IWithdrawalService
{
    bool Withdraw(Account account, decimal amount);
}

[DependencyServiceInterface]
public interface IDepositService
{
    bool Deposit(Account account, decimal amount);
}

public class AccountLookup : IAccountLookupService
{
    #region IAccountLookupService Members

    public Account FindAccount(int accountId)
    {
        if (accountId != 1234) return null;

        Account mockAccount = new Account();
        mockAccount.AccountId = 1234;
        mockAccount.Balance = 100;
        return mockAccount;
    }

    #endregion
}

public class AccountWithdrawer : IWithdrawalService
{
    #region IWithdrawalService Members

    public bool Withdraw(Account account, decimal amount)
    {
        if (amount > account.Balance) return false;
        account.Balance -= amount;
        return true;
    }

    #endregion
}

public class AccountDepositer : IDepositService
{
    #region IDepositService Members

    public bool Deposit(Account account, decimal amount)
    {
        account.Balance += amount;
        return true;
    }

    #endregion
}

public class Bank
{
    IAccountLookupService lookupService;
    IWithdrawalService withdrawalService;
    IDepositService depositService;

    public Bank(IAccountLookupService lookupService,
        IWithdrawalService withdrawalService,
        IDepositService depositService)
    {
        this.lookupService = lookupService;
        this.withdrawalService = withdrawalService;
        this.depositService = depositService;
    }

    public Account GetAccount(int id)
    {
        return this.lookupService.FindAccount(id);
    }

    public bool Withdraw(Account account, decimal amount)
    {
        return this.withdrawalService.Withdraw(account, amount);
    }

    public bool Deposit(Account account, decimal amount)
    {
        return this.depositService.Deposit(account, amount);
    }
}

#endregion

#region Tests

[TestFixture]
public class BankAccountTests
{
    [SetUp]
    public void Setup()
    {
    }

    [TearDown]
    public void TearDown()
    {
    }

    [Test]
    public void CanBankClassBeCreated()
    {
        Bank bank = DependentClassActivator.Instance.CreateInstance<Bank>();
        Assert.IsNotNull(bank);
    }

    [Test]
    public void CanBankHandBackAccount()
    {
        Bank bank = DependentClassActivator.Instance.CreateInstance<Bank>();
        Account account = bank.GetAccount(1234);
        Assert.IsNotNull(account);
        account = bank.GetAccount(1000);
        Assert.IsNull(account);
    }

    [Test]
    public void CanBankAccountWithdrawMoney()
    {
        Bank bank = DependentClassActivator.Instance.CreateInstance<Bank>();
        Account account = bank.GetAccount(1234);
        Assert.IsNotNull(account);
        decimal bal = account.Balance;
        decimal amt = 42;

        bool result = bank.Withdraw(account, amt);
        Assert.IsTrue(result);
        Assert.AreEqual((bal - amt), account.Balance);

        result = bank.Withdraw(account, 99999999);
        Assert.IsFalse(result);
    }

    [Test]
    public void CanBankAccountDepositMoney()
    {
        Bank bank = DependentClassActivator.Instance.CreateInstance<Bank>();
        Account account = bank.GetAccount(1234);
        Assert.IsNotNull(account);
        decimal bal = account.Balance;
        decimal amt = 42;

        bool result = bank.Deposit(account, amt);
        Assert.IsTrue(result);
        Assert.AreEqual((bal + amt), account.Balance);
    }
}

#endregion

I welcome any comments on this approach. Is this DI, or have I completely missed the boat on this whole concept? Hopefully this look at one approach to DI has been as enlightening for you as it has been for me. Happy coding!


NHQS Part 2- SessionFactory Creation and Containment

As explained in various posts all over the internet (like this one at StackOverflow), the ISessionFactory interface is fundamental to how NHibernate works. As the StackOverflow link points out, creating a session factory is possibly the most expensive thing NHibernate does during execution. A frequent misstep of developers new to NHibernate is to write code that frequently creates session factory instances, sometimes prior to each database call for the more heinous abusers. As with any technology, that which isn't used properly will probably produce less-than-favorable outcomes. NHQS simplifies this step for the developer, and it facades the complexity of session factory storage once the application using the session factory has instantiated it.

Example Domains

Throughout this blog series the Visual Studio 2010 solution pictured to the left will be used. The solution consists of the NHQS framework, two domain projects, two data access projects to accommodate those domain projects, and a unit test project. To demonstrate how NHQS can connect not only to multiple database instances but to multiple database platforms agnostically, the domain/data-access examples use two different databases: the People scenario will be backed by a SQL Server Compact Edition database and the Orders scenario will be backed by a SQL Server 2008 database. NHQS has support for many other database platforms, so you should be covered in virtually all RDBMS situations.

Session Factory Creation via a Convention

Within NHQS there exists an interface named ISessionFactoryCreator; obviously, that's the spot at which our investigation will begin. The interface contract is shown below. The idea behind it is quite simple: give a developer an easy way of handing their application a session factory, and let them do pretty much whatever they want to create it.

The contract for creating a session factory
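The contract is presumably minimal; a sketch, assuming a single method that hands back an NHibernate ISessionFactory:

using NHibernate;

public interface ISessionFactoryCreator
{
    ISessionFactory Create();
}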

The implementation of the session factory creator interface isn't too difficult. For the People domain we're using SQLCE, so the code below sets up the session factory instance for that particular data source using the SQLCE persistence configuration and fluent auto-mapping. An in-depth exploration of these topics is beyond the scope of this post; I can assure you there are far better resources out there to explain such techniques. For now, take a look at the implementation.

Creating a SQL Compact Edition-backed NHibernate Session Factory
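A sketch of what such an implementation might look like with Fluent NHibernate; the PeopleSessionFactoryCreator name, the People.sdf file, and the Person entity are assumptions for illustration:

using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

public class PeopleSessionFactoryCreator : ISessionFactoryCreator
{
    public ISessionFactory Create()
    {
        // SQLCE persistence configuration plus fluent auto-mapping of the People entities
        return Fluently.Configure()
            .Database(MsSqlCeConfiguration.Standard
                .ConnectionString("Data Source=People.sdf"))
            .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf<Person>()))
            .BuildSessionFactory();
    }
}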

Now that the session factory for the People data source can be created it’ll need a place to hang out in the application’s domain. The next aspect of NHQS, session factory containment, solves this problem for developers.

Session Factory Containment

One of the goals of NHQS is to provide multiple-database access to a single application or web site. Inspired by the idea posited in the DAAB of "accessing multiple databases by name," this goal was an interesting one to solve. As mentioned previously, the act of creating a session factory is an expensive one and must be done sparingly; session factories are therefore good candidates for singleton treatment, so their storage is quite important.

Once a session factory is created it must be added to the session factory containment class. This isn’t too difficult and can be demonstrated in the unit test setup method below. When the application (in this case a unit test execution) starts up, all of the session factories used by the application should be created and added to the container.

Adding a session factory to the container
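A sketch of that setup, assuming the container exposes a simple Add method (the exact SessionFactoryContainer API may differ):

[TestFixtureSetUp]
public void SetupTestFixture()
{
    // Create the People session factory once and hand it to the container.
    SessionFactoryContainer.Add(new PeopleSessionFactoryCreator().Create());
}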

The SessionFactoryContainer class does a little more than just contain the session factories created by each ISessionFactoryCreator implementation, but we'll cover that in slightly more detail in subsequent posts. For now, consider the multi-database goal alongside the domain-centric access strategy goal. Since NHQS contains the session factories for you, chances are it can do some mild interrogation of the entity domains it wraps.

Consider the code below, which modifies the test setup function slightly to accommodate a second session factory that is also created by an implementation of the ISessionFactoryCreator interface.

Adding two session factories to the container
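Again as a sketch, with a hypothetical OrdersSessionFactoryCreator standing in for the SQL Server 2008-backed Orders domain:

[TestFixtureSetUp]
public void SetupTestFixture()
{
    // One factory per database; both live in the container for the life of the app.
    SessionFactoryContainer.Add(new PeopleSessionFactoryCreator().Create());
    SessionFactoryContainer.Add(new OrdersSessionFactoryCreator().Create());
}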

The next post in this series will begin to explain how, once the session factories have been created and contained, the domain-centric language can provide the CRUD operations necessary to work with the entities comprising these two domains. That’s when the power and simplicity of NHQS becomes obvious, so stay tuned!


Reconfiguring the Graffiti Data Provider with NAnt

I'm getting started with Graffiti CMS right now and so far feel pretty good about all of the excellent features they've thrown in. I prefer the SQL provider to the default of VistaDB, so I had to go to the Graffiti support site to find information on how to convert to the SQL provider. I figure if I need to do it, so will some other people, so to make life easier I've written a NAnt script to perform the conversion, and you'll find it right here.

Happy coding!


ASP.NET MVC Model Binding Example

Scott does an excellent job in his introduction blog post to the new features in the ASP.NET MVC beta release. The Model Binder support is an excellent feature for which I wanted to put forth a simple example. 

In essence, Model Binding allows you to pass an object into your controller methods rather than being required to pass the value of each model property you intend to set within your Controller method. In retrospect, that description sounds pretty harsh and confusing, doesn't it? For now I'll spare you a full introduction to Model Binding; Scott's already done an excellent job of that via these two posts.

For this example I'll continue with Scott's Person class example. I'll have my Controller method, which takes instances of the Person class. The code below contains all of the Controller and Person Model code for this form-posting scenario. Pay special attention to the Save method, as it demonstrates usage of Model Binding; an instance of the Person class is passed into this method via a parameter named person.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Mvc.Ajax;
namespace LearningMvcBeta
{
    namespace LearningMvcBeta.Controllers
    {
        public class PeopleController : Controller
        {
            public ActionResult Create()
            {
                return View();
            }
            public ActionResult Save(Person person)
            {
                return View(person);
            }
        }
    }
    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public Address Address { get; set; }
    }
    public class Address
    {
        public string Street { get; set; }
        public string City { get; set; }
        public string State { get; set; }
    }
}

 

Then (this is the part I think most examples won't be as clear on as I'm trying to be here) I'll create my Create view's HTML code, which will present the user with a form. Take note: the Person parameter from the Save method above is named person, so the names of the HTML text elements have to be in the person.PropertyName format. If I'd named the Person parameter p, I'd name the HTML text elements using the p.PropertyName format.

 

    
<form action="/People/Save" method="post">
    <h3>Your Name</h3>
    <p>
        First Name
        <input type="text" name="person.FirstName" />
    </p>
    <p>
        Last Name
        <input type="text" name="person.LastName" />
    </p>
    <h3>Your Address</h3>
    <p>
        Street
        <input type="text" name="person.Address.Street" />
    </p>
    <p>
        City
        <input type="text" name="person.Address.City" />
    </p>
    <p>
        State
        <input type="text" name="person.Address.State" />
    </p>
    <input type="submit" value="Save" />
</form>

 

The image below is a screenshot I took of my debugger while testing this code with sample data. This is really powerful stuff: the child Address object defined by the instance of this Person class is sent in, as were the First and Last name properties, because the Model Binding support trickles down the object graph!

Using the nomenclature MVC expects in your HTML form elements results in Model Binding automatically figuring out how to map the HTML form values to the parameter's properties just before the resulting object is fed into the controller method. 

This is one of my favorite new features of ASP.NET MVC, for sure. Thanks a lot, ASP.NET MVC overlords!

Update: Sorry if you've tried to find this post and received an Index out of range error. The syntax highlighter I've been using seems to have pooped the bed, so I had to make a few changes to the supporting code and I think all's well now.


Dictionary Extension Method - Append

I've been creating a few extension methods here and there recently and figured I'd share one of the ones I'm using in a lot of places. This one is an extension method named Append that you can use with a generic IDictionary implementor to do quick-and-dirty creation of the object. I've found it really useful when I'm writing tests that use the generic Dictionary class. The first block of code below shows the extension method.

 

public static class DictionaryExtensions
{
  public static IDictionary<TKey, TValue> Append<TKey, TValue>(this IDictionary<TKey, TValue> dictionary,
    TKey key, TValue value)
  {
    dictionary.Add(key, value);
    return dictionary;
  }
}

 

This second block demonstrates one potential use for it. In this case I was creating a generic Dictionary to store name/value pairs of data for an HTTP post. 

 

IDictionary<string, string> prms = new Dictionary<string, string>().Append("status", status);
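Because Append returns the dictionary it was handed, the calls chain; a hypothetical multi-pair setup might look like this:

IDictionary<string, string> prms = new Dictionary<string, string>()
  .Append("status", status)
  .Append("source", "blog"); // "source" is a hypothetical second name/value pair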

 

Happy coding!


Excellent Noob RhinoMocks Post

Buddy, the Beginnermediate developer (I love that tagline, BTW) has written an excellent blog post on Rhino Mocks. For someone like myself, who understands TDD, implements it at a "B-minus" level, and thinks mocks could help but has no idea how to get started, this post is a must-read. I can't wait to download RM now and get started. Excellent work!


Using a ServiceContainer with Delegates - Truly Simple Services

While working with a colleague on a project I got into a debate about event handling versus delegates stuffed into a service container. It sounded weird to me at the time; I've always been a huge fan of event-driven programming, so I tend to lean that way when I need to make my objects responsive to one another. The way my colleague looks at it, "everything's a service, so if you just add the methods you need to run to your service container you can call them when you need them." The idea seemed truly nutty to me, so I just had to code it up. It's important to note here that our system already made use of the ServiceContainer approach. We have a custom service container that our applications use; in this way we can add any service to the container at run-time and provide our whole application (and all of its components) with access to everything via the service container. If you're not familiar with that approach this may seem a little off-kilter.

The first thing I did was to create a custom ServiceContainer implementation. The code for my GenericServiceContainer is below. Note the extra method I've added, which makes use of generic types.

 

public class GenericServiceContainer : System.ComponentModel.Design.ServiceContainer
{
  public virtual T GetService<T>() where T : class
  {
    return this.GetService(typeof(T)) as T;
  }
}

Next, I'll create a few delegate types. This way, any class that has knowledge of these delegate types can request that the GSC send 'em out.

public delegate void MessageDelegate(string message);
public delegate int AdditionDelegate(int x, int y);

 

Though the functionality is abstracted in the form of delegates, I still need to create a class that can "do the work" for the application. To accomplish this I've written a simple WorkerClass, the code for which is below.

public class WorkerClass
{
  public static void ShowMessage(string message)
  {
    Console.WriteLine(message);
  }
  
  public static int Add(int x, int y)
  {
    return (x + y);
  }
}


Finally, some tests will prove the theory.  

[TestFixture]
public class TestServiceContainer
{
  GenericServiceContainer _container;
  
  [SetUp]
  public void Setup()
  {
    _container = new GenericServiceContainer();
    _container.AddService(typeof(MessageDelegate),
      new MessageDelegate(WorkerClass.ShowMessage));
      
    _container.AddService(typeof(AdditionDelegate),
      new AdditionDelegate(WorkerClass.Add));
  }
  
  [TearDown]
  public void TearDown()
  {
    _container.Dispose();
  }
  
  [Test]
  public void TestMessageDelegate()
  {
    MessageDelegate del = _container.GetService<MessageDelegate>();
    Assert.IsNotNull(del);
    del.Invoke("Testing");
  }
  
  [Test]
  public void TestAdditionDelegate()
  {
    AdditionDelegate del = _container.GetService<AdditionDelegate>();
    Assert.IsNotNull(del);
    int result = del.Invoke(2, 2);
    Assert.AreEqual(result, (2 + 2));
  }
}

 

So that's it! With this code I've defined the structure of the methods that will be doing the work and allowed redirection to a worker class to perform it. Now, any class within an application augmented by my custom ServiceContainer has easy access to centralized functionality.

Happy Coding! 


El Cheapo Service Container

A few days ago a colleague of mine blogged about performing DI on the cheap. His post got me thinking about the various IoC containers and DI frameworks that have sprung up (no pun intended; sorry for that heinous attempt at geek humor). As I've been working through the Social Timeline architecture I've concluded how important it will be to snap in timeline data providers with ease. Inspired by Bo's post, I've named the project El Cheapo. Though it doesn't make use of any of the popular IoC/DI frameworks out there, it does borrow some general ideas from their implementations. I was specifically inspired by Nikola Malovic's discussion of Unity, namely the way interfaces are resolved to types.

As you'll see, the only significant difference between a typical service container implementation and this custom version is that this implementation takes into account the concern of having multiple implementations of a given interface. In Social Timeline, for instance, I've listed the various timeline data providers as tabs in the GUI layer; in this way a user can see everything they could represent on the timeline below the tabs. I had intended to use the Unity framework in this implementation but decided it was more lightweight to just build what I needed myself. Below you'll see the code for the El Cheapo service container in its [current] entirety.

 

public class FunctionalityBasket
{
    Dictionary<Type, Dictionary<string, object>> _innerBasket;
    public FunctionalityBasket()
    {
        _innerBasket = new Dictionary<Type, Dictionary<string, object>>();
    }
    public void Register<I>(I implementation)
    {
        Register(implementation, implementation.GetType().Name);
    }
    public void Register<I>(I implementation, string name)
    {
        Type t = typeof(I);
        if (!_innerBasket.ContainsKey(t))
            _innerBasket[t] = new Dictionary<string, object>();
        _innerBasket[t].Add(
            name,
            implementation
            );
    }
    public Dictionary<string, TInterface> GetImplementations<TInterface>()
    {
        Type t = typeof(TInterface);
        Dictionary<string, TInterface> ret = new Dictionary<string, TInterface>();
        Dictionary<string, object>.Enumerator enm = _innerBasket[t].GetEnumerator();
        while (enm.MoveNext())
            ret.Add(enm.Current.Key, (TInterface)enm.Current.Value);
        return ret;
    }
    public TInterface GetImplementation<TInterface>(string name)
    {
        return GetImplementations<TInterface>()[name];
    }
}

 

I feel the best way to exemplify usage of the FunBasket (another bad pun) is via a series of unit tests, which are below.

 

[TestFixture]
public class ElCheapoUnitTests
{
    [TestFixtureSetUp]
    public void SetupTestFixture()
    {
    }
    [TestFixtureTearDown]
    public void TearDownTestFixture()
    {
    }
    [Test]
    public void InterfacesAndImplementationsCanBeAddedToBasket()
    {
        FunctionalityBasket basket = new FunctionalityBasket();
        basket.Register<ILogger>(new NullLogger());
    }
    [Test]
    public void InterfacesAndImplementationsCanBeAddedAndListOfInterfacesRetrieved()
    {
        FunctionalityBasket basket = new FunctionalityBasket();
        basket.Register<ILogger>(new NullLogger(), NullLogger.Name);
        basket.Register<ILogger>(new LameLogger(), LameLogger.Name);
        Dictionary<string, ILogger> loggers = basket.GetImplementations<ILogger>();
        Assert.That(loggers.Count > 0, "Implementations can be added and retrieved by interface type.");
    }
    [Test]
    public void ReturnedImplementationsAreUsefulBasedOnInterface()
    {
        FunctionalityBasket basket = new FunctionalityBasket();
        basket.Register<ILogger>(new NullLogger(), NullLogger.Name);
        basket.Register<ILogger>(new LameLogger(), LameLogger.Name);
        Dictionary<string, ILogger> loggers = basket.GetImplementations<ILogger>();
        Assert.That(loggers.Count > 0, "Implementations can be added and retrieved by interface type.");
        foreach (ILogger logger in loggers.Values)
            logger.Log();
    }
    [Test]
    public void ImplementationsCanBeRetrievedByName()
    {
        FunctionalityBasket basket = new FunctionalityBasket();
        basket.Register<ILogger>(new NullLogger(), NullLogger.Name);
        basket.Register<ILogger>(new LameLogger(), LameLogger.Name);
        Assert.IsNotNull(basket.GetImplementation<ILogger>(NullLogger.Name), "Null Logger was added and is retrievable");
        Assert.IsNotNull(basket.GetImplementation<ILogger>(LameLogger.Name), "Lame Logger was added and is retrievable");
        basket.GetImplementation<ILogger>(NullLogger.Name).Log();
        basket.GetImplementation<ILogger>(LameLogger.Name).Log();
    }
}
// -------------------------------------------------------
// test implementations
// -------------------------------------------------------
interface ILogger
{
    void Log();
}
class NullLogger : ILogger
{
    public static string Name = "NullLogger";
    public void Log()
    {
        Console.WriteLine("This isn't happening");
    }
}
class LameLogger : ILogger
{
    public static string Name = "LameLogger";
    public void Log()
    {
        Console.WriteLine("I'm so lame");
    }
}

 

Happy Coding! And GO Bulldogs and Panthers this weekend!


Testing ASP.NET MVC with QUnit - Part 2

In Part 1 of this series I demonstrated how QUnit can be used to test JsonResult action methods in ASP.NET MVC applications. Part 2 will take the idea a little further by showing an example of how QUnit can be used to inspect potential user-input areas on your MVC forms and to use those values in tests that will verify the requirements have been met.

The scenario will be a search form that will search a database of people. To use the mindset from a post I made earlier this week, the application will need to meet the following requirements:

  1. Searching with a known last name returns matching person records.
  2. Searching with a known first name returns matching person records.
  3. Searching with a value found on no person returns no records.

Not too difficult a set of requirements, but clearly enough stated that tests can be provided. Of course, a stub of functionality will be created to provide the database of people. The code below demonstrates a PersonFactory class that will return instances of the Person class in a generic List.

public class PersonFactory
{
  public static List<Person> GetAll()
  {
    List<Person> people = new List<Person>();
    people.Add(new Person { FirstName = "Al", LastName = "Pacino" });
    people.Add(new Person { FirstName = "Val", LastName = "Kilmer" });
    people.Add(new Person { FirstName = "Robert", LastName = "DeNiro" });
    return people;
  }
}
public class Person
{
  public string FirstName { get; set; }
  public string LastName { get; set; }
}

Next, a controller method is added that will make use of the PersonFactory and return the search results to the client in a JsonResult instance.

public JsonResult FindPerson(string name)
{
  List<Person> people = PersonFactory.GetAll();
  JsonResult result = new JsonResult();
  people = people.FindAll(x => x.FirstName.Contains(name) || x.LastName.Contains(name)).ToList();
  result.Data = people;
  return result;
}

Since the expectation is that the person-searching functionality will be performed from a web page on which a textbox is provided to the user for free-form entry, a mock GUI will be created to drive the test itself. The HTML code below provides a form that the tests will use in a moment.


FindPerson Parameter
<input type="text" id="findPersonName" />

Think of it this way: if a web application needs to pass through a rigid QA process to clear a checkpoint, what better way to make sure the QA process runs as smoothly as possible than by automating the use-cases agreed on by the team in the form of unit tests? The obvious next step is to do the very thing we'd expect a QA person to do: enter some text into the specified text box and perform the search with the expectation that the search returns a result.

$('#findPersonName').val('Pacino')
  $.getJSON('/Home/FindPerson', { name: $('#findPersonName').val() }, function(data)
  {
    module('FindPerson');
    test('FindPerson with known value for last name returns matching person records', function()
    {
      equals((data.length > 0), true, 'At least one person should return from the search');
    });
  });
  $('#findPersonName').val('Robert')
  $.getJSON('/Home/FindPerson', { name: $('#findPersonName').val() }, function(data)
  {
    module('FindPerson');
    test('FindPerson with known value for first name returns matching person records', function()
    {
      equals((data.length > 0), true, 'At least one person should return from the search');
    });
  });

Of course, any good testing process will test the process's ability to fail gracefully as well. To accomplish that, another string will be entered that logically would always fail to return a result. 

$('#findPersonName').val('NOWAY')
  $.getJSON('/Home/FindPerson', { name: $('#findPersonName').val() }, function(data)
  {
    module('FindPerson');
    test('FindPerson with value not found in list returns no records', function()
    {
      equals((data.length == 0), true, 'No results should be returned from the search');
    });
  });

The HTML output would appear like the screenshot below. 

In this example, QUnit was used to automate the modification of form data and to perform unit tests with that user data. Hopefully this gives you some ideas of how you might automate your own client-side GUI experiences. Writing this post raised an interesting question for me: if a software product's QA lifecycle involves end-to-end system and user-acceptance testing, could an approach like this, using QUnit or any other client-automation/scripting tool, reduce testing and acceptance times?


JsonResult Extension Method

Mildly silly though it is, and maybe making too many assumptions, I worked up a little extension method to generically evaluate (and return) the Data property of a JsonResult class. It makes life a little easier when testing JsonResult action methods on controller instances.

image
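A sketch of what that method likely amounts to; the GetData name and the class constraint are my assumptions:

public static class JsonResultExtensions
{
    // Cast the JsonResult's Data property to the requested type; null if the cast fails.
    public static T GetData<T>(this JsonResult result) where T : class
    {
        return result.Data as T;
    }
}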

It helps during testing, for sure. See below for a demonstration of how I'm using this in a pet project.

image
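And a hypothetical usage in a test, assuming a controller action named FindPerson that returns a JsonResult:

[Test]
public void FindPersonReturnsTypedData()
{
    JsonResult result = controller.FindPerson("Pacino"); // hypothetical controller under test
    List<Person> people = result.GetData<List<Person>>();
    Assert.IsNotNull(people);
}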


Taming NHibernate Child Collection Persistence

I’ve been using [hacking at] NHibernate as my main ORM choice for just over a year and I’ve been exceptionally happy with the growing number of projects to augment it’s functionality, especially the Fluent NHibernate and LINQ-to-NHibernate contributions. NH isn’t exactly a simple ORM layer and does have some confusing aspects. One of these aspects – at least for me – has been that I’ve seen inconsistent behavior in the persistence of child collection properties. I think I concluded via a series of unit tests this evening for myself how to gain the typical desired outcome and I felt it was something I should share.

Take, for example, the traditional order-to-order-detail relationship. The screenshot below shows a code example of a POCO domain object organization exemplifying this sort of relationship. The Product domain object has been included for the sake of completeness for this discussion.

image

Following in typical NHibernate/ORM style, I've created a series of repository interfaces and implementations to provide persistence functionality for these objects; though a discussion of those implementations is somewhat beyond the scope of this article, I'm relatively certain that with a bit of ORM and TDD experience the remainder of this text will still make sense. I tend to use base classes when I'm working up my unit tests. The base class below is one I use in this storefront project example; it exposes references to local instances of those repositories. Likewise, it inherits from one of the classes provided by the S#arp architecture, the RepositoryTestsBase class.

image

Next is an examination of the TDD setup for this domain object hierarchy. Not too complicated an idea, this is the simplest example of a storefront possible. Products are created, then a sample order is created. To the sample order two products, with varying quantities, are added to the order.

image

The unit tests are also as simple as the setup. Each unit test makes sure each item was saved to the persistence layer, and each bases its pass/fail criterion solely on the data loaded via the method in the figure above.

image

Seems simple enough but when the tests are executed via TestDriven.NET the tests indicate a failure to save the order details simultaneously when the order is persisted.

image

Now, in the case where you're like me and you want the children saved with the parent, this just won't do. If you're of the mindset that each object should be created on its own in the persistence layer, this might not be so bad. My point is, the code below should save both of the order detail records with the order, each time, no matter what.

image

In this case, my problem was in the way I was mapping the relationship between the Order and OrderDetail classes. I’m using Fluent NHibernate and Auto Mapping, so the changes should be minimal and made within the mapping override for my Order object. The code for my Order mapping override is below.

image 
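In sketch form, assuming Fluent NHibernate's auto-mapping override mechanism and an OrderDetails collection property on Order, the override probably resembles this:

public class OrderMappingOverride : IAutoMappingOverride<Order>
{
    public void Override(AutoMapping<Order> mapping)
    {
        // The original mapping: a one-to-many with no cascade behavior specified.
        mapping.HasMany(x => x.OrderDetails);
    }
}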

Once I add in the single line informing the mapping that it should cascade all changes via the Cascade.All() method call, the test passes and the order detail records are created alongside the Order.
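Sketched against the override above, that single line would be:

mapping.HasMany(x => x.OrderDetails).Cascade.All();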

image

If you’re not comfortable with the cascading delete impact this would have the option for cascading only on saves and updates could be a better choice.

image

Happy coding!


Complex Type Messaging with ActiveMQ and .NET

Mark Bloodworth wrote a few blogs on the art of using ActiveMQ within .NET. Those posts are the origination of my ability to produce this post, so thanks, Mark. I'd highly advise reading Mark's introductory post on the topic first. I'll paraphrase a good deal of Mark's post in this one, so look to him for the details and deeper explanation; he's the man.

Mark’s post shows at a high level how to pass simple string messages via ActiveMQ. This post will answer the question of passing more complicated types via ActiveMQ so that you can use strongly-typed instances as your messages.

Getting Started

It sounds a bit scary, especially if you're not accustomed to using non-MSMQ messaging engines. Don't freak out; ActiveMQ is actually quite easy. I'm a Java troglodyte, so if I can do this, you can do this.

  1. Download ActiveMQ (as of this post the current release was 5.5.0)
    It’ll come as a ZIP file. I just unzipped mine to my root drive, so C:\ActiveMQ
  2. Download Spring.NET
    I placed mine again, in my system root, at C:\SpringNet
  3. In the ActiveMQ\conf you’ll find the file activemq.xml, and in that file you’ll need to make sure you set the file as displayed in the screenshot below. This is explained in the comments of Mark’s post in more detail, but for now, just trust me.

    image

  4. Go to a DOS prompt, cd into the ActiveMQ\bin directory, and type "activemq" (without the quotes). You'll see a window like the one below open, and you'll need to find a line similar in nature to the one highlighted. Again, there's some detail in Mark's post on this that we won't delve into at this point.

    image
  5. That’s it! We’re ready to code! Most of the code is again, inspired – okay, mimics – Mark’s examples. Our point here is to pass complex message types via ActiveMQ, so we’ll do a few things slightly differently. I’ll also take a few naïve steps and use generic implementations for publishing and subscribing to messages.

Ontology

Both sides of the conversation will need to have context. I'll also throw in a strings class to mitigate fat-fingering [and potentially allow configuration of some sort later on]. The message class and utility class are below.

image
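A sketch of plausible ontology classes; the property names come from later in the post, while the queue name and broker URL values here are assumptions:

using System;

[Serializable]
public class SampleRequest
{
    public string Message { get; set; }
    public string ClientMachine { get; set; }
}

public static class Strings
{
    public static readonly string Destination = "sample.requests";  // assumed queue name
    public static readonly string Url = "tcp://localhost:61616";    // ActiveMQ's default endpoint
}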

Subscription

With ActiveMQ running, we're now ready to subscribe to a queue in which messages of the SampleRequest type will be sent. As mentioned earlier, the first thing needed is a listener. The main difference between this example and Mark's post is that this code expects the messages to be Object messages rather than simple text messages, as Mark's example was primarily about passing strings. Below is the generic listener, which basically expects the body of a particular message to be of the type specified in the generic argument.

image
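A sketch of such a listener, assuming the Apache.NMS API; the post's actual code may differ:

using System;
using Apache.NMS;

public class GenericMessageListener<T> where T : class
{
    // Expects each incoming message to be an object message whose body is a T.
    public void OnMessage(IMessage message)
    {
        IObjectMessage objectMessage = message as IObjectMessage;
        if (objectMessage == null) return;

        T body = objectMessage.Body as T;
        if (body != null)
            Console.WriteLine("Received a {0}", typeof(T).Name);
    }
}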

Next is the program that creates an instance of the listener (and the queue, if it doesn't already exist in the ActiveMQ installation). The listener is created and bound to an ActiveMQ connection. Again, there's more detail in other places; for now the idea is that we're going to listen for messages. The program code below begins watching the ActiveMQ queue specified in the Strings.Destination property at the URL specified in the Strings.Url property.

image
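Sketched with Apache.NMS.ActiveMQ types, the subscriber program might look roughly like this:

using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class SubscriberProgram
{
    static void Main()
    {
        IConnectionFactory factory = new ConnectionFactory(Strings.Url);
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            // GetQueue creates the queue if it doesn't already exist.
            IDestination destination = session.GetQueue(Strings.Destination);
            IMessageConsumer consumer = session.CreateConsumer(destination);

            GenericMessageListener<SampleRequest> listener = new GenericMessageListener<SampleRequest>();
            consumer.Listener += listener.OnMessage;

            connection.Start();
            Console.WriteLine("Listening for messages; press Enter to quit.");
            Console.ReadLine();
        }
    }
}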

Publication

With the subscriber listening for messages within ActiveMQ the next application will need to send messages into the queue. As with the subscription side the publication side will rely on a generic approach to publishing messages into ActiveMQ queues.

image
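A sketch of the generic publisher over the same assumed Apache.NMS types:

using Apache.NMS;
using Apache.NMS.ActiveMQ;

public class GenericPublisher<T> where T : class
{
    // Wraps the instance in an object message and sends it to the queue.
    public void Publish(T message)
    {
        IConnectionFactory factory = new ConnectionFactory(Strings.Url);
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            IDestination destination = session.GetQueue(Strings.Destination);
            using (IMessageProducer producer = session.CreateProducer(destination))
            {
                IObjectMessage objectMessage = session.CreateObjectMessage(message);
                producer.Send(objectMessage);
            }
        }
    }
}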

The program code will take input from the user. Then it will use that input as the value of the SampleRequest.Message property and will set the SampleRequest.ClientMachine property to the client computer’s machine name.

image

When both the projects included in the accompanying download are executed the results are instantaneous.

image

Durability

To prove the durability of the ActiveMQ layer during debug mode, stop the subscription application and use the publication client to send a few more messages.

image

Then, stop the publication client. Once the subscription client is re-started, the messages sent into ActiveMQ while the subscription application was shut down are collected and processed. The screenshot below shows how the messages sent while the subscription app wasn't running are processed as soon as it is re-started. Then the publication app sends in two more messages, and the new instance of the subscription app continues processing them as they arrive.

image

Happy coding!


Generics and Reflection via TDD

Pluggable development requires trickery. Sometimes naivety is a requirement in the development of pluggable solutions and, though it's not always the best idea, dynamically-dynamic code is sometimes the only way these problems can be solved. If for no other reason than to have a record of how to solve certain situations when they arise in my own life again, I'm going to try to put together a series of posts on how reflection and generics can be pretty neat together.

image

This first post answers a question from a colleague tonight. He's reading a type via a custom configuration element and then using that type as a generic parameter to a method call.

How do I call a generic method if I don't know the type I'll be providing as the generic argument until run-time?

The question’s one I’ve had to remember how to do a few times, so here’s to hoping it helps someone else. The screen shot below contains a unit test class with a sample generic method. The test points to a type’s generic method using reflection via the MethodInfo.MakeGenericMethod method and then calls the method using the MethodInfo.Invoke method.

Happy coding!


Generics/Reflection Tricks Part 2- NDecision

This post serves a dual purpose: it adds to the list of articles in the series on Generics and Reflection, and it introduces a Fluent business logic processor I've built about 100 times in other incantations, all eventually leading up to this point. In keeping with my current trend of hypercomplicating the sublime by giving it a Fluent interface to make it easy to reuse later on, the result is, as my friend Bill Hargett would probably say, an object-oriented way of representing a procedure. Pretty much. So that's our goal statement.

Represent the relationship between an object and the steps it takes through a system, or the actions a process acts out on that object, as a series of disconnected objects. Expose the ability to use that decisioning power via a Fluent interface. Use business-language-rich unit tests to provide guidance on the manner in which the processes should be executed.

Typical late-night insomnia fodder. Thank goodness for Generics, for they make the whole engine and core architecture of this idea possible in about 200 lines of code. Maybe less.

To get started take a look at the super-simple business entity and exception. You can probably figure out what the business logic looks like. We’ll spell it out in provable statements:

  1. Allow all users 21 or over to drink
  2. Disallow all users under 21 to drink

image
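A minimal sketch of an entity and exception that would support those two statements; all names here are hypothetical:

using System;

public class Patron
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public class UnderageDrinkingException : Exception
{
    public UnderageDrinkingException(Patron patron)
        : base(string.Format("{0} is under 21 and cannot drink.", patron.Name)) { }
}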

Taking that logic and rolling unit tests is pretty easy.

image

Given those tests we have a rather common language from which a Fluent interface can be created: handle a target instance, or list of instances, using associated method calls. Why method calls and not interfaces? Interfaces would force too much structure. You could have one old app with all the methods you need, just not glued together properly. You refactor a little, tie it all together, and use something like NDecision to direct traffic.

image

With the unit tests defined and a little imagination and patience applied, the Fluent interface takes on a structure that makes the business rules quite easy to implement. The test output is below. You'll see both tests pass in spite of the fact that the resulting behavior of the underage run is a thrown exception; it's just one the test is actually expecting.

image

If you’re interested in this, let me know and I’ll get you a copy of the code. It is pretty short and is probably going to be shown in a future post, so stay tuned.


Hashtables in reverse got you bugged?

Well, it was driving me nuts too. Apparently using a foreach loop over a custom Hashtable has a strange "reverse-order" iteration behavior. So I looked at the class library and found, finally, a reason to use the HybridDictionary object: it's more lightweight than a Hashtable, and it iterates as expected (first-to-last in sequential order), something truly important when developing Provider implementations.


Jumping the Gun with C#

A colleague sent me an article today about some of the new language features in C# 3.0. I couldn't resist getting a little unnerved by the discussions that have begun regarding the third release of C#, bearing in mind that the second release has still not received complete adoption. At this point my company has made a few decisions regarding the adoption of 2.0. Mainly, we've decided not to adopt it, or at least not to adopt it yet, and to wait until such time as it is more popular with the IT staffers who control the computers on which our software runs. We don't control the state government offices in which our software packages are installed, and we don't control a lot of other things that our clients have to maintain.

Too many technologists are too hardcore these days about harnessing the new stuff, and I think that Microsoft and other such companies do a horrible job of supporting the developers who use their stuff. I heard a client just this week complain about another software company (link not provided, but let's say they're pretty huge and in a lot of cases the MS of their own industry) who forces the upgrade process down their throats by not providing support for previous releases. I understand this notion from being in the industry as a developer, but at the same time I feel a distinction can be drawn between companies who write software and companies who write software whose sole purpose is to write other software. In situations such as this - Java, .NET, PowerBuilder, all of it - support for linguistic adoption and utilization should never be dropped.

I urge Microsoft to concentrate not on the horizon, but on current adoption. Likewise, I urge the developers who are working with the communities being driven by technology to do their best to remember those of us who are not only using the existing "stuff," but also have clients who have no option but to continue to do so for the foreseeable future. "Because it is cool," "Because it is new," and "Because the old stuff is so... old" are all poor excuses for adopting new technology. If your clients and customers want the latest and greatest, go for it. If they aren't interested, don't force them. And for GOD'S SAKE people, take a slow look at the technologies before you begin to talk about them and adopt them into environments that may suffer should the technologies fail or prove to be less-than-effective options to what you know works.


Automating the build process using Visual Studio.Net 2003, CruiseControl.Net, NAnt, and NDoc

I like the idea of automated builds a lot. Cory Foy taught me a lot about the idea, and I've only recently had time to dedicate to learning more about the whole process of setting up an automated build. Today I decided to fire off NDoc during a CCNet build. It took a while to get it down pat (and required a little help from Cory, to boot). In the spirit of trying to make life as easy as possible, here's the code to do it in as simple and direct a method as I could come up with.

First of all, here's the CruiseControl.Net server configuration file in its entirety.

<cruisecontrol>
    <project name="CCNetSample">
        <tasks>
            <devenv>
                <solutionfile>C:\CCNetSample\CCNetSample.sln</solutionfile>
                <configuration>Debug</configuration>
            </devenv>
            <nant>
                <executable>c:\nant\bin\nant.exe</executable>
                <baseDirectory>C:\CCNetSample\</baseDirectory>
                <buildFile>build.xml</buildFile>
                <targetList>
                    <target>doc</target>
                </targetList>
            </nant>
        </tasks>
    </project>
</cruisecontrol>


Finally, the build.xml file.

<project name="CCNetSample" default="doc" basedir=".">
    <target name="doc">
        <ndoc>
            <assemblies basedir="C:\CCNetSample\CCNetSample.Client\bin\Debug\">
                <include name="CCNetSample.Client.exe" />
                <include name="CCNetSample.Lib.dll" />
            </assemblies>
            <documenters>
                <documenter name="MSDN">
                </documenter>
            </documenters>
        </ndoc>
    </target>
</project>


Hope that helps someone who's ever in the same need.

One hand...

This new server-side.net article, Visual Studio vs. Vista: What's going on here?, raises an interesting question. The quote "Ensuring that VS2005 works well on Windows Vista is a core goal of ours. Visual Studio 2005 SP1 will run on Vista but will likely have a few compatibility issues. We are working with the Vista team to understand those, to provide workarounds where possible and also work on providing you with a set of fixes beyond SP1" makes me wonder: at Microsoft, it seems like one hand has ripped off the other and beaten the owner to a bloody pulp. Guys, get your sh17 straight.


ASP.Net, Ajax and Web Services Using Non-primitive Method Parameters

I've seen quite a few examples on communicating with web services from JavaScript code placed in ASPX pages. This morning I came back to work and decided to rework an existing example of my company's web service API so that it uses this feature set. In the process it occurred to me that our API has certain methods that require the passing of non-primitive data types (for example, a search criteria object a-la CSLA, which is a whole other conversation). I did a little Googling and came up with very little, placing me once again into "try it and see" mode. The good news: there's very little work to be done on your part, as most of this stuff gets automagically serialized by the framework. So do the snoopy dance while you're reading this tutorial, because this is one of those magical times when the framework works wonders for us.

Take a look at this C# code, which embodies a relatively simplistic web service. In particular, take note of the custom BankAccount class, which we'll be using in our example to prove the point.

using System;
using System.Web;
using System.Collections;
using System.Web.Services;
using System.Web.Services.Protocols;
using System.Web.Script.Services;

[ScriptService]
public class BankService : WebService
{
    [WebMethod, ScriptMethod]
    public BankAccount GetAccountById(int id)
    {
        BankAccount b = new BankAccount();
        b.AccountId = id;
        b.Balance = 3443.23D;
        return b;
    }

    [WebMethod, ScriptMethod]
    public bool DeleteBankAccount(BankAccount account)
    {
        if (account.AccountId == 1) return false;
        return true;
    }
}

public class BankAccount
{
    public double Balance;
    public int AccountId;
}

Not too real-world of an example service, of course, but it'll do for now.

In addition, we've got to examine our ASPX code. I'll take a top-down approach to this examination, covering each fragment individually. This first segment shows the beginning of the ASPX client page, where you can see that the ScriptManager is being used to inform our page that we have a ScriptService (our web service from earlier) with which the page will be communicating.

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <asp:ScriptManager ID="ScriptManager1" runat="server">
        <Services>
            <asp:ServiceReference Path="~/BankService.asmx" />
        </Services>
    </asp:ScriptManager>

Notice in particular the ServiceReference element in the code above. Its Path attribute points to the URL of the web service the page will use to "do stuff" on the server. Without it things just won't function (I mention this because I've seen quite a few examples that leave it out entirely). Next we'll have a little more HTML code to place our elements nicely about the page and give the user some clues as to how to use it:

<input type="button" onclick="getAccount();" value="Get Account" /><br />
<input type="button" onclick="deleteAccount();" value="Delete Account" /><br />
<div style="width: 200px; font-family: Arial;">
    <div id="accountLbl" style="float: left;">Account Id:</div>
    <div id="accountId" style="color:red; float:right;"></div>
    <br />
    <div id="balanceLbl" style="float: left;">Balance:</div>
    <div id="balance" style="color:red; float:right;"></div>
    <br />
    <div id="resultLbl" style="float: left;">Result:</div>
    <div id="delResult" style="color:red; float:right;"></div>
    <br />
</div>

Not too much is going on until we see the final piece, the JavaScript that ties it all together. Note in particular the deleteAccount function in the code below. It creates a variable of type BankAccount so that our delete functionality will work properly.

<script language="javascript" type="text/javascript">
    function getAccount()
    {
        BankService.GetAccountById(2, getAccountCallback);
    }
    function getAccountCallback(result)
    {
        $get("accountId").innerHTML = result.AccountId;
        $get("balance").innerHTML = result.Balance;
    }
    function deleteAccount()
    {
        var account = new BankAccount();
        account.AccountId = 1;
        BankService.DeleteBankAccount(account, deleteAccountCallback);
    }
    function deleteAccountCallback(result)
    {
        delResult.innerHTML = result;
    }
</script>
</form>
</body>
</html>

As you can see, the ASP.NET Ajax extensions automatically serialize everything in your web service so that you have access to everything as simply as you would in C#. Just create the object as you normally would, set the instance's properties by name, and send the message!

Happy coding!