Archive for April, 2010

As many of you likely know, Lutz Roeder turned over control of one of the “must have” .NET developer tools, .NET Reflector, to Red Gate Software. True to their promise, Red Gate has continued to support the free version of Reflector and to make improvements, including the addition of a Visual Studio plugin for jumping into Reflector and support for .NET 4 assemblies in the most recent release of the tool.

In addition to their support of the free tool, RedGate has extended Reflector’s core disassembly capabilities and is now offering a commercial version of the tool, Reflector Pro. RedGate has rolled the tool into their .NET Developer Bundle. So, if you hold existing .NET Developer Bundle licenses, you can pick up Reflector Pro and start using it. So I did.

Reflector Pro integrates right into Visual Studio (including the VS 2010 RC), which is really important given its core competency. What it allows, simply stated, is for you to step through any assembly in Visual Studio as if it were your own. This is a killer feature that you almost never need… until you really need it. Reflector Pro does this by disassembling assemblies of your choosing and then generating the corresponding debug symbols you need to perform a variety of functions:

  • Stepping into third party libraries
  • Setting breakpoints in third party libraries
  • Watching and modifying values in third party libraries

Like the free version of Reflector we’ve all grown to love, the Pro version of the software isn’t needed all the time, but it just works the way you want when you need it. RedGate’s site provides a simple video demo and walkthrough; you shouldn’t need more than that to get going with this tool.

Comments Off on Redgate Reflector Pro

How would you like to achieve detailed exception and trace logging, including method timing and correlation, all within a lightweight in-memory database that you can easily manage and query, as exhibited below?

All of this requires nothing more of you than decorating your methods with a very simple attribute, as highlighted below.
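As a sketch of what that looks like, here is the [LogMethodCall] attribute built later in this post applied to a hypothetical MVC controller action:

```csharp
using System.Web.Mvc;
using MvcApp.Web.Aspects;

public class HomeController : Controller
{
    // The attribute is the only instrumentation code you write; PostSharp
    // weaves the logging, timing, and correlation logic in at compile time.
    [LogMethodCall]
    public ActionResult About()
    {
        return View();
    }
}
```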

In this post, I’m going to demonstrate how to configure PostSharp, an aspect-oriented framework, along with NLog and SQLite to achieve the benefits highlighted above. Before I get into the details of the configuration and aspect source code, I’ll provide a bit of background on PostSharp.


PostSharp is a powerful framework that supports aspect-oriented programming using .NET attributes. Attributes have been around in the .NET Framework since version 1.0. If you haven’t made much use of attributes in the past, their increased usage in WCF (including WCF RIA Services and Data Services), ASP.NET MVC, the Entity Framework, the Enterprise Library, and most of Microsoft’s other application frameworks will surely mean you’ll be encountering them in the very near future. PostSharp allows you to create your own attributes to meet a variety of needs (cross-cutting concerns, in aspect-oriented parlance) such as persistence, security, monitoring, multi-threading, and data binding.

PostSharp has recently moved from being a freely available product to a commercially supported one. PostSharp 1.5 is the last open source version, with PostSharp 2.0 being the first commercially supported release. Don’t let the commercial product stigma scare you away; both PostSharp 1.5 and 2.0 are excellent products. If you choose to go with PostSharp 2.0, you can select either the pretty liberal Community Edition or the more powerful yet reasonably priced Professional Edition. For the purposes of this post, I’ll be using the Community Edition of PostSharp 2.0 for forward compatibility. The Community Edition includes method-, field-, and property-level aspects, which is more than enough for the purposes of this post. You will also find examples of PostSharp aspects on their site, in the blogosphere, and in community projects such as PostSharp User Plug-ins.

What makes PostSharp stand out among competing aspect-oriented frameworks is how it creates the aspects. PostSharp uses a mechanism called compile-time IL weaving to apply aspects to your business code. What this essentially means is that, at build time, PostSharp opens up the .NET intermediate language binary where you’ve included an aspect and injects the IL specific to your aspect into the binary. I’ve illustrated below what this looks like when you use .NET Reflector to disassemble an assembly that’s been instrumented by PostSharp. The first image is before a PostSharp attribute is applied to the About() method on the controller. The second image represents what the code looks like after PostSharp compile-time weaving.

Before PostSharp Attribute Applied to About() Method

After PostSharp Attribute Applied to About() Method

What this means is that your aspects perform very well at run time, but you pay a higher price at build/compile time. Ayende provides a good overview of various AOP approaches, including the one that PostSharp uses. Don’t be concerned by his “hard to implement” comment; the hard part was done by the creators of PostSharp, who have made it easy for you.
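In case the before/after images don’t come through, here is a rough conceptual sketch of the transformation. This is not PostSharp’s actual generated output; the helper and field names are hypothetical, purely to illustrate the idea:

```csharp
// Before weaving: the method exactly as you wrote it.
[LogMethodCall]
public ActionResult About()
{
    return View();
}

// After weaving (conceptually): PostSharp has rewritten the method's IL so
// the call is routed through the aspect. Decompiled, it looks roughly like
// this; "aspectInstance" stands in for a compiler-generated aspect field.
public ActionResult About()
{
    MethodInterceptionArgs args = CreateArgsForThisCall(); // hypothetical helper
    aspectInstance.OnInvoke(args);  // aspect logs, times, and calls Proceed()
    return (ActionResult)args.ReturnValue;
}
```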

Implementation of Aspect-Oriented Instrumentation

The remainder of this post will focus on the actual implementation of the solution. Much of the code here was cobbled together from a blog post I archived long ago from an unknown author. I’d love to provide attribution but, like many blogs out there, it seems to have disappeared over time. I’ll start with the SQLite table structure, which can be found below.
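The original table definition doesn’t seem to have survived, but the columns can be read directly off the NLog commandText in the configuration that follows. A reconstruction (the column types are my best guess for SQLite) might look like this:

```sql
-- Sketch of the LOGTABLE schema implied by the NLog INSERT statement;
-- column names match the commandText, types are a reasonable guess.
CREATE TABLE LOGTABLE (
    Id           INTEGER PRIMARY KEY AUTOINCREMENT,
    Timestamp    TEXT,
    Loglevel     TEXT,
    ThreadId     TEXT,
    Message      TEXT,
    Context      TEXT,   -- NDC correlation GUID
    User         TEXT,
    DurationInMs TEXT,
    Exception    TEXT
);
```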

The logging configuration file is very similar to my post on logging with SQLite and NLog with minor changes to the SQLite provider version.

<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target name="File" xsi:type="File" fileName="C:\Temp\${shortdate}.nlog.txt"/>
    <target name="Database" xsi:type="Database" keepConnection="false" useTransactions="false"
            dbProvider="System.Data.SQLite.SQLiteConnection, System.Data.SQLite, Version=, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=x86"
            connectionString="Data Source=C:\Projects\MyApp_Logging.s3db;Version=3;"
            commandText="INSERT into LOGTABLE(Timestamp, Loglevel, ThreadId, Message, Context, User, DurationInMs, Exception) values(@Timestamp, @Loglevel, @ThreadId, @Message, @Context, @User, @DurationInMs, @Exception)">
      <parameter name="@Timestamp" layout="${longdate}"/>
      <parameter name="@Loglevel" layout="${level:uppercase=true}"/>
      <parameter name="@ThreadId" layout="${threadid}"/>
      <parameter name="@Message" layout="${message}"/>
      <parameter name="@Context" layout="${ndc}"/>
      <parameter name="@User" layout="${aspnet-user-identity}"/>
      <parameter name="@DurationInMs" layout="${mdc:item=DurationInMs}"/>
      <parameter name="@Exception" layout="${mdc:item=exception}"/>
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="Database" />
  </rules>
</nlog>

The most important component of the solution is the source code for the PostSharp aspect. Before letting you loose, I’ve highlighted some of the features of the source code to avoid cluttering it with comments:

• You need to have PostSharp (the DLLs and the necessary build/compilation configuration) set up on your machine for the aspects to work correctly. Specifically, my code works against PostSharp 2.0.
• For those of you not familiar with Log4Net or the original implementations of the NDC (NestedDiagnosticContext) and MDC (MappedDiagnosticContext), the original documentation from the Log4J project provides good background.
• The NDC is used to push GUIDs onto the stack, which can then be used as correlation IDs to trace calls through the stack for methods annotated with the [LogMethodCall] attribute that this code implements.
• The MDC map stores timing information in all cases and exception information when an exception occurs in one of the annotated methods.
• To use the attribute, just decorate the method you wish to instrument with [LogMethodCall]. Then sit back and enjoy detailed instrumentation for free.

using System;
using System.Diagnostics;
using NLog;
using PostSharp.Aspects;

namespace MvcApp.Web.Aspects
{
    [Serializable]
    public class LogMethodCallAttribute : MethodInterceptionAspect
    {
        public override void OnInvoke(MethodInterceptionArgs eventArgs)
        {
            var methodName = eventArgs.Method.Name.Replace("~", String.Empty);
            var className = eventArgs.Method.DeclaringType.ToString();
            className = className.Substring(className.LastIndexOf(".") + 1);
            var log = LogManager.GetCurrentClassLogger();
            var stopWatch = new Stopwatch();

            // Correlation ID: pushed onto the NDC so every log entry in this
            // call chain shares the same Context value.
            var contextId = Guid.NewGuid().ToString();
            NDC.Push(contextId);

            log.Info("{0}() called", methodName);
            stopWatch.Start();

            try
            {
                // Invoke the intercepted method.
                eventArgs.Proceed();

                stopWatch.Stop();
                MDC.Set("DurationInMs", stopWatch.ElapsedMilliseconds.ToString());
                log.Info("{0}() completed", methodName);
            }
            catch (Exception ex)
            {
                var innermostException = GetInnermostException(ex);
                // Cap the logged exception text at 2000 characters.
                MDC.Set("exception", innermostException.ToString().Substring(0, Math.Min(innermostException.ToString().Length, 2000)));
                log.Error("{0}() failed with error: {1}", methodName, innermostException.Message);
                // Surface the innermost exception to the caller.
                throw innermostException;
            }
            finally
            {
                NDC.Pop();
            }
        }

        private static Exception GetInnermostException(Exception ex)
        {
            var exception = ex;
            while (null != exception.InnerException)
                exception = exception.InnerException;

            return exception;
        }
    }
}
Comments Off on Lightweight, Aspect-Oriented Instrumentation with PostSharp, NLog, and SQLite

Sometimes you know that something works a certain way but you haven’t really internalized it, haven’t grokked it, until you experience it firsthand. Such was my knowledge of the interaction between .NET web services and the XML Serializer a couple of weeks ago.

While troubleshooting calls made from a smart client to back-end web services, I was using Process Monitor to see what was going on under the hood. It didn’t dawn on me at first why I was seeing dynamic generation of temporary C# files and compilation of these files using the C# compiler (csc.exe) as shown in the image below. Since the services in question were provided by a legacy mainframe and were necessarily granular and immutable, the client application was generating a lot of new proxies, meaning a lot of wasted time.

The items being generated and compiled, XML serialization assemblies, are necessary to guarantee speedy serialization and deserialization of type-specific data. Although you’d ideally like to have these assemblies present all of the time, I’ve found several instances in which they are not present by default:

• If you compile your assemblies using the compiler (e.g. csc.exe) directly. This is true even if you build in release mode with optimizations enabled.
• If you build in debug mode using Visual Studio, as illustrated by the image below.
• If you’re using a third party product that leverages .NET assemblies for service-based interoperation, you might be subject to dynamic proxy compilation and not even be aware of it.

In the final part of this post, I’ll assume that you’re interested in the ways you can get around dynamic XmlSerializer generation and compilation. As you’ve probably gathered from the previous section, building a solution with Visual Studio or directly with the underlying MSBuild tool will take care of calling the XML Serializer Generator (sgen.exe) for you, as illustrated in the following image.

You can also invoke sgen.exe directly. If you do so, be sure to use the /C flag to pass compiler options. Alternatively, if you’re using WCF, you can choose the DataContractSerializer, which has been optimized to avoid the overhead of generating and using the extra assembly and shares only the data contract without sharing the underlying type data.
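For example, a direct invocation might look like the following (the assembly name is hypothetical; /compiler, short form /C, forwards options to the underlying compiler):

```shell
rem Pre-generate MyApp.Services.XmlSerializers.dll next to the target assembly,
rem forwarding the optimize flag to the compiler via the /compiler (/C) option.
sgen.exe /assembly:MyApp.Services.dll /compiler:/optimize+
```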

Comments Off on Watch Out! .NET Web Services and the XML Serializer

I picked up this gem of a book when it first came out in eBook format during the PDC. I sent it over to my Kindle and got through the entire book during session downtimes. I planned on being the first to post a review of this book on Amazon, but I sat on it too long and will now be the fifth reviewer.

Ultra-Fast ASP.NET

The first four reviewers did a pretty respectable job of providing an overview of Mr. Kiessig’s qualifications and the book’s content, and all awarded the book the entirely deserved 5-star rating. Rather than pile on more information about Rick Kiessig or what’s in the book, I’m going to tell you why, as a person who has spent a good amount of time looking at .NET application performance, I recommend this book to every person I work with as mandatory reading:

  • Although there are great rules out there for web site optimization and corresponding tools to test these rules (e.g. Yahoo’s YSlow), it’s great to see the client-side examples from an ASP.NET-specific point of view.
  • It’s interesting to see someone who bucks the current trends and provides some real insight on when it’s appropriate to use ORMs, saying essentially that objects are good but ORMs might not be the best engine if you’re building a Formula 1 race car.
  • Try finding another book that will even touch web gardens, partitioning an application into different AppPools, or using the /3GB switch. Try finding a Microsoft engineer who will talk to you about those items and offer objective guidance.
  • The write-up and source code on asynchronous web pages and background worker threads – worth the price of the book alone.
  • Creative, out-of-the-box ideas: using SQL Server Express for caching, using BI services to support the web tier of the application, etc. – not the kind of advice you find in your typical MSDN article.

It would be interesting to see how ASP.NET MVC and Silverlight play out performance-wise but alas, these technologies are a bit newer and Mr. Kiessig had to get the book to press. I’d gladly pay for the second edition of this book if it includes a couple of additional chapters that address these technologies. Until then, this is by far the most thorough and pragmatic book on ASP.NET performance to be had on the market. It might be simply an eye-opening read or the book that saves your skin one day. Either way, you won’t regret picking this book up.

Comments Off on Book Review: Ultra-Fast ASP.NET