Archive for the “.NET Application Architecture” Category

How would you like to achieve detailed exception and trace logging, including method timing and correlation, all within a lightweight in-memory database that you can easily manage and query, as exhibited below?

All of this requires nothing more of you than decorating your methods with a very simple attribute, as highlighted below.

In this post, I’m going to demonstrate how to configure PostSharp, an aspect-oriented framework, along with NLog and SQLite to achieve the benefits highlighted above. Before I get into the details of the configuration and aspect source code, I’ll provide a bit of background on PostSharp.


PostSharp is a powerful framework that supports aspect-oriented programming using .NET attributes. Attributes have been around in the .NET Framework since version 1.0. Even if you haven't made much use of attributes in the past, their increased use in WCF (including WCF RIA Services and Data Services), ASP.NET MVC, the Entity Framework, the Enterprise Library, and most of Microsoft's other application frameworks means you'll surely be encountering them in the very near future. PostSharp allows you to create your own attributes to meet a variety of needs you may have (cross-cutting concerns, in aspect-oriented parlance), such as persistence, security, monitoring, multi-threading, and data binding.

PostSharp has recently moved from being a freely available product to a commercially supported one. PostSharp 1.5 is the last open source version of the product, with PostSharp 2.0 being the first release of the commercially supported product. Don't let the commercial product stigma scare you away; both PostSharp 1.5 and 2.0 are excellent products. If you choose to go with PostSharp 2.0, you can select either the fairly liberal Community Edition or the more powerful yet reasonably priced Professional Edition. For the purposes of this post, I'll be using the Community Edition of PostSharp 2.0 for forward compatibility. The Community Edition includes method, field, and property-level aspects, which is more than enough for the purposes of this post. You will also find examples of PostSharp aspects on their site, in the blogosphere, and in community projects such as PostSharp User Plug-ins.

What makes PostSharp stand out among competing aspect-oriented frameworks is how it creates the aspects. PostSharp uses a mechanism called compile-time IL weaving to apply aspects to your business code. What this essentially means is that, at build time, PostSharp opens up the .NET intermediate language binary where you’ve included an aspect and injects the IL specific to your aspect into the binary. I’ve illustrated below what this looks like when you use .NET Reflector to disassemble an assembly that’s been instrumented by PostSharp. The first image is before a PostSharp attribute is applied to the About() method on the controller. The second image represents what the code looks like after PostSharp compile-time weaving.

Before PostSharp Attribute Applied to About() Method

After PostSharp Attribute Applied to About() Method

What this means is that you get very good performance of aspects but will need to pay a higher price at build/compile time. Ayende provides a good overview of various AOP approaches, including the one that PostSharp uses. Don’t be concerned by his “hard to implement” comment. The hard part was done by the creators of PostSharp, who have made it easy for you.

Implementation of Aspect-Oriented Instrumentation

The remainder of this post will focus on the actual implementation of the solution. Much of the code I have here was cobbled together from a blog post I archived long ago from an unknown author. I'd love to provide attribution but, like many blogs out there, it seems to have disappeared over time. I'll start with the SQLite table structure, which can be found below.
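The log table can be sketched as follows. Since the original DDL is no longer available, the sketch below is reconstructed from the column list in the INSERT statement of the NLog configuration; the column types and the surrogate key are assumptions.

```sql
-- Reconstructed sketch of the logging table; types and LogId are assumed.
CREATE TABLE LOGTABLE (
    LogId        INTEGER PRIMARY KEY AUTOINCREMENT,
    Timestamp    TEXT,
    Loglevel     TEXT,
    ThreadId     TEXT,
    Message      TEXT,
    Context      TEXT,   -- NDC correlation ID
    User         TEXT,
    DurationInMs TEXT,   -- MDC timing entry
    Exception    TEXT    -- MDC exception entry, truncated to 2000 characters
);
```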

The logging configuration file is very similar to my post on logging with SQLite and NLog with minor changes to the SQLite provider version.

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target name="File" xsi:type="File" fileName="C:\Temp\${shortdate}.nlog.txt"/>
    <target name="Database" xsi:type="Database" keepConnection="false" useTransactions="false"
            dbProvider="System.Data.SQLite.SQLiteConnection, System.Data.SQLite, Version=, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=x86"
            connectionString="Data Source=C:\Projects\MyApp_Logging.s3db;Version=3;"
            commandText="INSERT into LOGTABLE(Timestamp, Loglevel, ThreadId, Message, Context, User, DurationInMs, Exception) values(@Timestamp, @Loglevel, @ThreadId, @Message, @Context, @User, @DurationInMs, @Exception)">
      <parameter name="@Timestamp" layout="${longdate}"/>
      <parameter name="@Loglevel" layout="${level:uppercase=true}"/>
      <parameter name="@ThreadId" layout="${threadid}"/>
      <parameter name="@Message" layout="${message}"/>
      <parameter name="@Context" layout="${ndc}"/>
      <parameter name="@User" layout="${aspnet-user-identity}"/>
      <parameter name="@DurationInMs" layout="${mdc:item=DurationInMs}"/>
      <parameter name="@Exception" layout="${mdc:item=exception}"/>
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="Database" />
  </rules>
</nlog>

The most important component of the solution is the source code for the PostSharp aspect. Before letting you loose, I’ve highlighted some of the features of the source code to avoid cluttering it with comments:

• You need to have PostSharp (the DLLs and the necessary build/compilation configuration) set up on your machine for the aspects to work correctly. Specifically, my code works against PostSharp 2.0.
• For those of you not familiar with Log4Net or the original implementations of the NDC (NestedDiagnosticContext) and MDC (MappedDiagnosticContext), the original documentation from the Log4J project provides good background.
• The NDC is used to push GUIDs onto the stack, which can then be used as correlation IDs to trace calls through the stack for methods annotated with the [LogMethodCall] attribute that this code implements.
• The MDC map stores timing information in all cases, and exception information when an exception occurs in one of the calling methods annotated with the [LogMethodCall] attribute.
• To use the attribute, just decorate the method you wish to instrument with the [LogMethodCall] attribute. Then sit back and enjoy detailed instrumentation for free.

using System;
using System.Diagnostics;
using NLog;
using NLog.Targets;
using PostSharp;
using PostSharp.Aspects;

namespace MvcApp.Web.Aspects
{
    [Serializable]
    public class LogMethodCallAttribute : MethodInterceptionAspect
    {
        public override void OnInvoke(MethodInterceptionArgs eventArgs)
        {
            var methodName = eventArgs.Method.Name.Replace("~", String.Empty);
            // Short class name, available for log formatting.
            var className = eventArgs.Method.DeclaringType.ToString();
            className = className.Substring(className.LastIndexOf(".") + 1);
            var log = LogManager.GetCurrentClassLogger();
            var stopWatch = new Stopwatch();

            // Push a GUID onto the NDC to act as a correlation ID for
            // tracing calls through the stack.
            var contextId = Guid.NewGuid().ToString();
            NDC.Push(contextId);

            try
            {
                log.Info("{0}() called", methodName);

                stopWatch.Start();
                eventArgs.Proceed(); // invoke the intercepted method
                stopWatch.Stop();

                MDC.Set("DurationInMs", stopWatch.ElapsedMilliseconds.ToString());
                log.Info("{0}() completed", methodName);
            }
            catch (Exception ex)
            {
                var innermostException = GetInnermostException(ex);
                MDC.Set("exception", innermostException.ToString().Substring(0, Math.Min(innermostException.ToString().Length, 2000)));
                log.Error("{0}() failed with error: {1}", methodName, innermostException.Message);
                throw innermostException;
            }
            finally
            {
                NDC.Pop();
            }
        }

        private static Exception GetInnermostException(Exception ex)
        {
            var exception = ex;
            while (null != exception.InnerException)
            {
                exception = exception.InnerException;
            }
            return exception;
        }
    }
}
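To use the aspect, you just decorate the method to be instrumented. As a minimal sketch (the controller and action here are hypothetical, not from the original post):

```csharp
using System.Web.Mvc;
using MvcApp.Web.Aspects;

namespace MvcApp.Web.Controllers
{
    public class HomeController : Controller
    {
        // Every call to About() now produces entry/exit log records,
        // a duration entry in the MDC, and a correlation ID on the NDC.
        [LogMethodCall]
        public ActionResult About()
        {
            return View();
        }
    }
}
```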

Comments Off on Lightweight, Aspect-Oriented Instrumentation with PostSharp, NLog, and SQLite

Sometimes you know that something works a certain way but you haven't really internalized it, haven't truly grokked it, until you experience it firsthand. Such was my knowledge of the interaction between .NET web services and the XML Serializer a couple of weeks ago.

While troubleshooting calls made from a smart client to back-end web services, I was using Process Monitor to see what was going on under the hood. It didn’t dawn on me at first why I was seeing dynamic generation of temporary C# files and compilation of these files using the C# compiler (csc.exe) as shown in the image below. Since the services in question were provided by a legacy mainframe and were necessarily granular and immutable, the client application was generating a lot of new proxies, meaning a lot of wasted time.

The items being generated and compiled, XML serialization assemblies, are necessary to guarantee speedy serialization and de-serialization of type-specific data. Although you’d ideally like to have these assemblies there all of the time, I’ve found several instances under which these assemblies are not present by default:

• If you compile your assemblies using the compiler (e.g. csc.exe) directly. This is true even if you build in release mode with optimizations enabled.
• If you build in debug mode using Visual Studio, as illustrated by the image below.
• If you’re using a third party product that leverages .NET assemblies for service-based interoperation, you might be subject to dynamic proxy compilation and not even be aware of it.

In the final part of this post, I'll assume that you're interested in knowing the ways you can get around dynamic XmlSerializer generation and compilation. As you've probably deduced from the previous section, building a solution with Visual Studio or directly from the underlying MSBuild tool will take care of calling the XML Serializer Generator (sgen.exe) for you, as illustrated in the following image.

You can also invoke sgen.exe directly. If you do so, be sure to use the /C flag to pass compiler options. Alternatively, if you're using WCF, you can choose the DataContractSerializer, which has been optimized to avoid the overhead of generating and using the extra assembly and shares only the data contract, without sharing the underlying type information.
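As a sketch of the DataContractSerializer approach, a contract opts its members in explicitly and needs no generated XML serialization assembly; the type and member names below are hypothetical:

```csharp
using System.Runtime.Serialization;

// Only members marked [DataMember] travel across the wire; no
// temporary C# file is generated and compiled at runtime.
[DataContract]
public class CustomerDto
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}
```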

Comments Off on Watch Out! .NET Web Services and the XML Serializer

I picked up this gem of a book when it first came out in eBook format during the PDC. I sent it over to my Kindle and got through the entire book during session downtimes. I planned on being the first to post a review of this book on Amazon but I've sat on it too long and will now be the fifth reviewer.

Ultra-Fast ASP.NET

The first four reviewers did a pretty respectable job of providing an overview of Mr. Kiessig's qualifications and the book's content, and all have awarded the book an entirely deserved 5-star rating. Rather than pile on more information about Rick Kiessig or what's in the book, I'm going to tell you why, as a person who has spent a good amount of time looking at .NET application performance, I recommend this book to every person I work with as mandatory reading:

  • Although there are great rules out there for web site optimization and corresponding tools to test these rules (e.g. Yahoo's YSlow), it's great to see the client-side examples from an ASP.NET-specific point of view.
  • It's interesting to see someone who bucks the current trends and provides some real insight on when it's appropriate to use ORMs, saying essentially that objects are good but ORMs might not be the best engine if you're building a Formula 1 race car.
  • Try finding another book that will even touch web gardens, partitioning an application into different AppPools, or using the /3GB switch. Try finding a Microsoft engineer who will talk to you about those items and offer objective guidance.
  • The write-up and source code on asynchronous web pages and background worker threads are worth the price of the book alone.
  • Creative, out-of-the-box ideas: using SQL Server Express for caching, using BI services to support the web tier of the application, etc. Not the kind of advice you find in your typical MSDN article.

It would be interesting to see how ASP.NET MVC and Silverlight play out performance-wise but alas, these technologies are a bit newer and Mr. Kiessig had to get a book to press. I’d gladly pay for the second edition of this book if it includes a couple of additional chapters that address these technologies. Until then, this is by far the most thorough and pragmatic book on ASP.NET performance to be had on the market. It might be simply an eye-opening read or the book that saves your skin one day. Either way, you won’t regret picking this book up.

Comments Off on Book Review: Ultra-Fast ASP.NET

For quick and easy prototypes, you’ve got to admire ASP.NET MVC and WCF RIA Services. These approaches may not be perfect out-of-the-box but they’re structured much better than the old “bind a dataset to a grid and let it fly” approach of 2003. As easy as these approaches are, I’m always looking for ways to make things easier. I get a lot of bang for my buck by using SQLite as an in-memory database whenever I create a new MVC or RIA Services solution. In fact, I create 4 SQLite databases with each new solution: one each for application data, test data, membership/role data, and logging/tracing data. Below I’ve described the techniques I make use of to utilize each of these databases.

System.Data.Sqlite + ORM of Choice
If you've never used SQLite with .NET before, you'll be happy to know that it's as easy as can be. The System.Data.SQLite open source ADO.NET provider gives you everything you need. The provider is a complete ADO.NET implementation, including full support for the ADO.NET Entity Framework and Visual Studio design-time support, all in a 900 KB assembly. Need support for Visual Studio 2010? Ion123 includes a library compatible with 2010 in this post. So whether you use Entity Framework or NHibernate, just drop in the System.Data.SQLite DLL, create a database, wire up your objects to the ORM, and go to town. Data access simply could not be easier.
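To give a feel for how little ceremony is involved, here is a minimal sketch using the provider's standard ADO.NET surface; the table and names are hypothetical, and the `:memory:` data source gives you a true in-memory database that lives only as long as the connection:

```csharp
using System.Data.SQLite;

class SQLiteDemo
{
    static void Main()
    {
        // In-memory database; use a file path like "Data Source=MyApp.s3db;Version=3;" to persist.
        using (var connection = new SQLiteConnection("Data Source=:memory:;Version=3;"))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                // Plain ADO.NET: create a table and insert a row.
                command.CommandText = "CREATE TABLE Widget (Id INTEGER PRIMARY KEY, Name TEXT)";
                command.ExecuteNonQuery();
                command.CommandText = "INSERT INTO Widget (Name) VALUES ('sprocket')";
                command.ExecuteNonQuery();
            }
        }
    }
}
```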

SQLite-Backed Testing
There are lots of good reasons to implement proper interfaces and mock objects or stubs for the purposes of testing. Sometimes it's just easier not to have to deal with it. SQLite-backed testing provides the perfect alternative. You can still create your unit tests, even exercising framework elements and third-party libraries that aren't always the easiest to cover with traditional mocking frameworks. Just plug in a temporary SQLite test database, write your test code just as you'd write your application code, and use one of several mechanisms to purge the data between tests. As usual, Ayende provides the definitive reference on how to do this for NHibernate. I've provided code below from my experiences doing this with file-backed databases for Castle ActiveRecord. Find your way to Google for references on how to accomplish this with the Entity Framework.

using Castle.ActiveRecord;
using Castle.ActiveRecord.Framework;
using Castle.ActiveRecord.Framework.Config;
using Gallio.Framework;
using Gallio.Model;
using MbUnit.Framework;
using MyNameSpace.Models;
using System;

namespace MyNameSpace.Tests
{
    public abstract class AbstractBaseTest
    {
        protected SessionScope scope;

        [FixtureSetUp]
        public void InitializeAR()
        {
            IConfigurationSource source = new XmlConfigurationSource("TestConfig.xml");
            ActiveRecordStarter.Initialize(source, typeof(Object1), typeof(Object2));
        }

        [SetUp]
        public virtual void Setup()
        {
            scope = new SessionScope();
        }

        [TearDown]
        public virtual void TearDown()
        {
            scope.Dispose();
        }

        // Flush pending changes and start a fresh scope mid-test.
        public void Flush()
        {
            scope.Dispose();
            scope = new SessionScope();
        }
    }
}
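The TestConfig.xml referenced above points ActiveRecord at the SQLite test database. A sketch of what it might look like, assuming the standard NHibernate SQLite driver and dialect (key names can vary by ActiveRecord version, e.g. older releases expect a `hibernate.` prefix, and the file path here is hypothetical):

```xml
<activerecord>
  <config>
    <add key="connection.driver_class" value="NHibernate.Driver.SQLite20Driver" />
    <add key="dialect" value="NHibernate.Dialect.SQLiteDialect" />
    <add key="connection.provider" value="NHibernate.Connection.DriverConnectionProvider" />
    <add key="connection.connection_string" value="Data Source=C:\Temp\MyApp_Test.s3db;Version=3;New=True;" />
  </config>
</activerecord>
```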

SQLite as a Membership and Role Provider
Both ASP.NET MVC and WCF RIA Services use SQL Server ASP.NET Membership and Role Providers by default. Take SQL Server out of the equation and swap in the custom SQLite Membership and Role Providers and you can use SQLite for your security data as well. Configuration of the custom provider can all be done right in the web.config file, as illustrated below.

<connectionStrings>
    <add name="MembershipConnection" connectionString="Data Source=C:\Projects\Databases\MyApp_Membership.s3db;Version=3;"/>
</connectionStrings>
<system.web>
    <authentication mode="Forms">
        <forms loginUrl="~/Account/LogOn"/>
    </authentication>
    <membership defaultProvider="SQLiteMembershipProvider" userIsOnlineTimeWindow="15">
        <providers>
            <clear/>
            <add name="SQLiteMembershipProvider" type="MyNameSpace.Web.Helpers.SqliteMembershipProvider" connectionStringName="MembershipConnection" applicationName="MyApplication" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="true" passwordFormat="Hashed" writeExceptionsToEventLog="true"/>
        </providers>
    </membership>
    <roleManager defaultProvider="SQLiteRoleProvider" enabled="true" cacheRolesInCookie="true" cookieName=".ASPROLES" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="All">
        <providers>
            <clear/>
            <add name="SQLiteRoleProvider" type="MyNameSpace.Web.Helpers.SQLiteRoleProvider" connectionStringName="MembershipConnection" applicationName="MyApplication" writeExceptionsToEventLog="true"/>
        </providers>
    </roleManager>
</system.web>

SQLite Logging and Tracing with NLog
I recently covered the integration of NLog with SQLite. A simple configuration file entry and all of your log and trace output can go into a single SQLite database.

Comments Off on SQLite and .NET – Agility Tips and Tricks

One of the things I'm often asked to do for clients is to create an applicability matrix. That is, which technology applies best to which particular challenges in an enterprise? There would seem to be an acute need for this type of clarification in the realm of Microsoft's service technologies. With the recent releases of Windows Process Activation Service (WAS) on Windows Server 2008, WCF 3.5 and 4.0, Windows Server AppFabric, BizTalk 2009 and 2010, and Windows Azure AppFabric, the waters of Microsoft's service and integration technologies are muddy indeed. In this post, I'm going to provide some clarification: explaining what new service and integration offerings are on the way from Microsoft, offering a frame of reference on how I see them applying to enterprise customers, and furnishing references to materials you can use to educate yourself in these technologies.

Let's start off with a quick tour of Microsoft's new service and integration offerings. Specifically, I'm going to cover WCF 4.0, Server AppFabric, and Azure AppFabric. In this overview, I'm going to restrict the discussion to technologies that specifically relate to the challenges of traditional large enterprise application integrations. Interesting aspects of Microsoft's new offerings such as WCF 4.0 RESTful service support (incorporated from the WCF REST Starter Kit) and AppFabric Caching (formerly known as 'Velocity') will not be covered in detail.

Windows Communication Foundation (WCF) 4
This release focuses on ease of use, along with new features such as routing, support for WS-Discovery, and enhancements from the WCF REST Starter Kit.
Key Enterprise Application Features

  • A complete message routing solution that is useful for the following scenarios: redundancy, load balancing, protocol bridging, and versioning
  • Support for the WS-Discovery protocol, which allows the discovery of services on a network. Support is provided via managed mode discovery, which uses a centralized discovery proxy, and via ad hoc mode, in which the location of the service is broadcast.
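As a sketch of ad hoc discovery, WCF 4 lets a self-hosted service announce itself with just a discovery behavior and a UDP endpoint; the service contract here is hypothetical:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Discovery;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public class CalculatorService : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(CalculatorService),
            new Uri("net.tcp://localhost:8000/Calculator")))
        {
            // Make the service discoverable...
            host.Description.Behaviors.Add(new ServiceDiscoveryBehavior());
            // ...and listen for ad hoc UDP probe messages from clients.
            host.AddServiceEndpoint(new UdpDiscoveryEndpoint());
            host.Open();
            Console.WriteLine("Discoverable; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```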

Windows Server AppFabric
The best way to think of Windows Server AppFabric is as a replacement for the COM+ hosting environment. In the same way that WCF unified and replaced web services, remoting, and DCOM, AppFabric is replacing the COM+ hosting environment. Hosting administration, monitoring services, and management tools allow AppFabric to play this role. It also includes workflow persistence and a distributed caching platform.
Key Enterprise Application Features

  • A WAS-based hosting environment, which includes durable workflow hosting. Includes tools for managing, monitoring, and querying in-flight services and workflows.
  • Workflow persistence that allows AppFabric workflows to scale across machines. This includes the ability to monitor long-running workflows.
  • Health monitoring and troubleshooting of running WCF and WF services. High performance instrumentation based on Event Tracing for Windows (ETW) with analytics from a SQL monitoring store leveraging SQL Server Reporting Services (SSRS).
  • Management of services and workflows through the AppFabric dashboard, an extension to the IIS Manager. PowerShell cmdlets enable management of services and workflows from the PowerShell console and enable further automation of AppFabric.

Windows Azure AppFabric
Branded as the Azure cloud-based version of its Windows Server-based counterpart, Azure AppFabric is perhaps better understood as a parallel service in the cloud. It provides features that Server AppFabric doesn't, such as cloud-based relay, a service registry, and a service bus buffer. At the same time, several of Server AppFabric's core features, such as workflow persistence and health monitoring, either aren't built in or don't make sense for the cloud-based version. It remains to be seen whether these two products will ever achieve true parity.
Key Enterprise Application Features

  • Relay service that removes the need for point-to-point bindings, instead routing non-transactional calls through the cloud.
  • Service bus registry that provides an ATOM feed of services listening on a particular namespace.
  • A variety of service bindings that represent a rough subset of the WCF bindings. Includes a WS-compliant binding as well as a TCP binding that operates in several modes, including a hybrid mode that can promote a connection from a cloud-based relay to a more direct connection.
  • Cloud-based service bus buffer queuing service. MSMQ-like and usable by both the client and the server, with the condition that the queues are cloud-based. Allows messages to be stored on the bus for a configurable amount of time, even if the service endpoint is not available.
  • Robust service authentication, based upon claims-based application-to-application authentication.
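The hybrid TCP mode above can be sketched with the Microsoft.ServiceBus SDK; the namespace, contract, and service below are placeholders, and the authentication behavior is omitted for brevity:

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEcho
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        // Placeholder service namespace; registered in the Azure portal.
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "myNamespace", "EchoService");

        var binding = new NetTcpRelayBinding
        {
            // Start relayed through the cloud, then upgrade to a more
            // direct connection between the parties when possible.
            ConnectionMode = TcpRelayConnectionMode.Hybrid
        };

        using (var host = new ServiceHost(typeof(EchoService), address))
        {
            // A TransportClientEndpointBehavior with credentials would be
            // added here to authenticate against the service bus.
            host.AddServiceEndpoint(typeof(IEcho), binding, address);
            host.Open();
            Console.ReadLine();
        }
    }
}
```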

What I’ve found is that knowledge of these new service and integration offerings alone does not get you to the point where you intuit when to apply them to enterprise application integration challenges. Therefore, I have begun to cluster these technologies together and think about what the best use cases are for each of the respective technologies. The image below represents these clusters, along with the archetype use case and particular features of the clusters’ technologies. This clustering represents a fundamental simplification of reality and doesn’t account for many of the shades of gray. Decisions such as whether workflows are best hosted in WF under AppFabric or under BizTalk are best made by application architects, based upon their knowledge of the organizational, business and technical constraints that impact their applications. That said, these clusters represent what I feel to be sound heuristics for Microsoft service and integration decisions over the next several years.

Microsoft Service Integration Technologies

Comments Off on Microsoft Service and Integration Technologies – WCF 4.0, AppFabric, BizTalk 2010