Archive for August, 2006

In addition to the technical assets that I’ve mentioned in previous blog postings, BSCoE also makes a set of software process assets available. These software process assets are arranged into disciplines and collected under the umbrella of BSCoE’s Software Engineering Process (SEP). Information about the BSCoE SEP is available online to the general public.


The SEP is based roughly upon the Rational Unified Process (RUP) and Microsoft Solutions Framework (MSF). Those looking at the sample assets will notice the similarities with the standard RUP templates. The process component of the SEP is intentionally vague, leaving decisions such as formality versus agility, process activities, and roles to the projects employing the SEP. In particular, projects have several options for SEP customization, including document-driven tailoring (a RUP-style development case), local modifications to the process, or modifications made with the intent of contributing the changes back to BSCoE for inclusion in the master SEP distribution. The SEP is conceptually similar in some ways to Ivar Jacobson’s new Essential Unified Process (EssUP). However, whereas the EssUP’s variability comes through the selection of practices, the SEP’s variability comes through the selection of artifacts.

For those of you thinking of creating your own software processes, I can only recommend the experience. Working with the RUP and the Rational Method Composer, or the MSF and the new Team System templates, is educational but not easy. This is especially true if you intend to employ lightweight, agile processes on your projects. In that case, modifying the base processes from the RUP or MSF is tantamount to carving a chess piece from a sculpture-sized block of marble.

Fortunately, as always, there are many that have gone before us in this endeavor. The sources highlighted below were of great help to us in the creation of our process and are highly recommended whether you require guidance or are just looking to better understand software engineering processes in general:

  • Scott Ambler’s Writings – This guy is truly the best place to start when you’re looking for anything process related. All roads will eventually lead through his work. Particularly interesting are Scott’s Agile Unified Process and his Enterprise Unified Process. The former is a lightweight version of the RUP focusing on test driven development, agile modeling, agile database techniques, and refactoring. The latter is an extension to the RUP covering enterprise disciplines not mentioned in conjunction with many software processes such as portfolio management, enterprise architecture, strategic reuse, and software process improvement. In addition, it adds two new phases to the RUP, production and retirement.
  • Philippe Kruchten’s Books – The two Addison-Wesley Professional books The Rational Unified Process Made Easy and The Rational Unified Process: An Introduction are seminal works on the RUP. Often overlooked is Software Engineering Processes: With the UPEDU, a book that runs through an educational version of the Unified Process with some pretty decent explanations and online examples.
  • Craig Larman’s Books – Craig’s books, although not focused on process engineering or the intricacies of processes, convey an awful lot of information in real-world contexts. Agile and Iterative Development: A Manager’s Guide is one of the best ways for non-techie types to get their arms around lower-ceremony development processes. Applying UML and Patterns focuses on modeling and design patterns but does so in the context of a process, using the Unified Process infused with Agile methods, making it a source of great contextual information.

In addition, we have been picking up reference artifacts along the way to illustrate best practices and real-world examples. One of the artifacts that I fell in love with was the Yummy SAD, available online as HTML and downloadable in document format via HTTP. This is one of the most generic, understandable, and widely applicable examples of a software architecture document (SAD) that I have come across. One of the big selling points of the Yummy SAD is that, aside from being an artifact, it also comes with an approach to architectural decomposition. In particular, it espouses an approach based on architecturally significant use cases and quality attributes for documenting and realizing your architecture. Explaining software architecture in this fashion has helped me clarify the relevance and importance of the SAD on more than one occasion.

Please feel free to share any best practice artifact references or software process tips that you might have accumulated over the years in the comments section below.

Comments Off on Software Engineering Processes and the BSCoE SEP

Just as I was doing a bit of mindmapping of ideas around Internet and societal convergence, my RSS reader buzzed with a new post from Dion Hinchcliffe on Social Computing and Internet Singularity. Dion didn’t go into great detail, referring instead to ideas he had articulated in earlier posts. His posting was, however, enough to prompt me to pull together my thoughts, give them a bit more structure, and then send them into the great wide world to begin a life of their own.

Internet singularity, to reuse the quote that Dion referenced from Microsoft’s Dr. Gary Flake, is “the idea that a deeper and tighter coupling between the online and offline worlds will accelerate science, business, society, and self-actualization.” “Wow, is Flake talking about true, top-of-the-Maslow-pyramid self-actualization?” you might ask yourself. I can’t say for sure, since self-actualization means different things to different people. To me, at least, it appears that the Internet, in its many incarnations, enables the creativity, knowledge, and energy of billions of people to be set free from the shackles of time and space to which they’ve been confined throughout history. This sounds very powerful, except that you now have billions of people running in millions of different directions. I postulate that what’s missing in our collective journey to self-actualization is a next generation coordination mechanism.

Next Generation Coordination Mechanisms

I use the term next generation because coordination mechanisms have been around as long as human societies. Monarchs coordinated the construction of pyramids and cathedrals; governments coordinated the creation of nations and cities; corporations coordinated the design, manufacture, and sale of products. Who will coordinate the creative energies of these billions of people now that they can collaborate across space and time? How will their diverse priorities be aligned? What laws will govern the products produced by citizens of this world’s diverse nations?

I’m not suggesting an Orwellian-type mechanism to control the masses. Instead, I’m stating the rather simple fact that few of the things society values today (with the exception of some varieties of arts & letters and the products of the remaining lone craftsmen, perhaps) are the product of a single individual’s labor. The products we use, the homes we live in, the organizations we work for, and yes, even the children we raise, are all products of teams. Yet individuals rarely function in teams in the Web 2.0 world. In fact, the Web 2.0 world encourages the free-agent, lone-gun mentality. Creating your own movies, podcasts, publications, and software services is the order of the day. “As long as you expose these things to be ‘mashed up’ or they are collected in a common repository”, goes the common thinking, “we are building communities.”

I submit that when we mash things up, we rarely ever get anything more than mash, mush, or some similar m*sh. As a huge fan of Google Maps mashups, I can say that these mashups aren’t getting me any closer to self-actualization. Are the mashups fascinating and fun? Sure. Are they accelerating science, business, society, and self-actualization? Not so much. Acceleration, the type of which I believe Flake speaks, is facilitated by communities of like-minded individuals in pursuit of something greater than the individual. The names in the halls of greatness span the decades: NASA, DARPA, Bell Labs, Xerox PARC, the Manhattan Project. These organizations achieved the ends which I believe Flake professes. Web 2.0’s Ajax-based maps and digital audio / video are, at best, standing on the shoulders of such giants.

So where does this leave Social Computing and Internet Singularity? Surely there are enough serious challenges that would benefit from a networked team of one billion minds – think global warming and alternative energy, stem cell, DNA, AIDS, and cancer research, space travel and so many more things. With corporations and governments cutting funding for long-term scientific research in favor of short-term profits and political partisanship, respectively, the opportunity is there for someone to fill the shoes of the next generation coordinator.

Can such coordination be done via the Web? How would such projects be funded? How would patents and intellectual property be handled? What, aside from money, would motivate people to participate in such projects? All these questions and more have yet to be addressed in a serious and pragmatic fashion. It’s the answers to these questions, and the energy of the visionaries who choose to tackle them, not AJAX and SOA, that will get us all climbing Maslow’s pyramid.

Comments Off on Next Generation Coordination Mechanisms – Harnessing the Power of Many

I’m not well-versed in the nuances of NASCAR racing and don’t understand the spectacle very well, so take what follows with a grain of salt. The whole Java versus .NET thing seems to me like a NASCAR race: one car edging ahead of the other, then giving up ground to the competitor… on and on for countless monotonous laps. I am in the process of re-immersing myself in the newer releases of Java. It has been years since I dealt with Java on a regular basis – the 1.1 through 1.4 days. This week, I had the chance to see a lightweight EJB 3 container in action, working through Oracle’s slick new IDE with integrated JavaServer Faces (JSF) and object-relational mapping (TopLink, in this case) support. Suffice it to say that I was floored by the progress the Java community has made away from the monolithic J2EE / EJB 2.1 model toward the lightweight model espoused by frameworks such as Spring. Just when it appeared that the Java car was pulling ahead, ready to steal the race, along comes the announcement of the Community Technology Preview of ADO.NET vNext. NASCAR fans, we’ve got ourselves a race again. Below I offer a preview of some of the aforementioned technologies:

  • JavaServer Faces (JSF) – Formally, the specification developed under JSR 127 and JSR 252. For those from the .NET world, this is tantamount to custom Web controls. The newer IDEs allow you to drag and drop JSF components onto a palette and wire up event-based handling to build your user interface layer. As with the newer .NET components, AJAX capabilities are being built into many of these components. The Wikipedia writeup for JSF has links to several popular JSF toolkits, many of which offer online demonstrations or reference applications.
  • Object-Relational Mapping Support – Along with EJB 3 came the Java Persistence API, which codifies various persistence frameworks into a single API. This includes support for new Java 1.5 language features such as annotations, which are pure Java metadata descriptors that can be used to interact with the persistence framework (e.g. Hibernate) being used. In addition, XML mapping descriptors continue to be supported. Particularly interesting to me was not the support for ORM itself, which has been an item in the Java developer’s toolbox for years, but the rich integration of the API in some of the IDEs. I’ve included a screenshot from Oracle’s JDeveloper that illustrates the steps in creating entity beans from existing database tables. In particular, note the ability to select the type of collection to be used as well as to annotate the collections as either eager or lazy loaded (see Fowler’s Lazy Load pattern from P of EAA).
  • ADO.NET vNext – This version of ADO.NET, slated to come out after the not-yet-released ADO.NET v3.0, is revolutionary in the minds of those at Microsoft. It’s great to hear the folks at Microsoft talk about the impedance mismatch as if they had just discovered it. “With the new version of ADO.NET and LINQ and the powerful capabilities of the Entity Framework…” Y-A-W-N. C’mon guys, you’re just as far behind in the ORM world as the Java community was in the visual design and UI component world 5 years ago. Time to hit the gas and at least catch up. It will be really interesting to see just how good the IDE support is for their Entity Framework. As you can see from the Oracle screenshot, Microsoft has a lot of catching up to do.
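The eager-versus-lazy choice called out for the mapped collections can be sketched in plain Java, independent of any JPA provider. Below is a minimal, illustrative take on Fowler’s Lazy Load pattern using a virtual proxy; the class and order data are made up for the example:

```java
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.List;

/** Virtual-proxy flavor of Fowler's Lazy Load: the collection
 *  defers the (simulated) database hit until first access. */
class LazyOrderList extends AbstractList<String> {
    private List<String> loaded;              // null until first access

    private List<String> load() {
        if (loaded == null) {                 // simulate the deferred query
            loaded = new ArrayList<>(List.of("order-1", "order-2"));
        }
        return loaded;
    }

    boolean isLoaded() { return loaded != null; }

    @Override public String get(int i) { return load().get(i); }
    @Override public int size()        { return load().size(); }

    public static void main(String[] args) {
        LazyOrderList orders = new LazyOrderList();
        System.out.println("loaded before access: " + orders.isLoaded()); // false
        System.out.println("first order: " + orders.get(0));              // order-1
        System.out.println("loaded after access: " + orders.isLoaded());  // true
    }
}
```

A JPA provider does something conceptually similar behind a lazily fetched collection: a proxy stands in for the data until something actually iterates it, whereas an eager mapping populates the collection when the owning entity is loaded.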

Java Versus .NET - The Race

Comments Off on Java Versus .NET – The Race

I tried answering Tad Anderson’s comment within the bounds of my August 8th posting but eventually decided that this topic really warranted a posting of its own. My thoughts around using Extreme Programming and other Agile approaches to software development are pretty well formed. As an ex-soldier, military analogies seem to work particularly well for me:

  • On the high end, the truly capable XP team is small, lightweight, and meets the requirements Tad set forth in his post. In many ways, effective Agile teams parallel our military’s elite special forces (e.g. SEALs, Green Berets, Delta Force, etc).
  • In the middle, there are well-trained, capable teams that practice UP, ICONIX, and other iterative processes with varying degrees of agility. These teams may infuse certain XP practices into their process as they are needed. The individuals on these teams may have the mettle to be elite special forces. However, due to project requirements, team size, or other factors they are not “active operators”, in military parlance. These folks operate in a fashion analogous to our highly-skilled tactical units, such as the 82nd Airborne, 10th Mountain, and the like.
  • On the lower end (of the agility scale, that is) are teams that mirror our more traditional Army units: infantry, mechanized infantry, armor, artillery, etc. Their capabilities are more geared toward larger, more structured engagements. Backing these capabilities are detailed approaches and tactics, significantly greater support and infrastructure requirements, and longer lead times to get a team on the ground to effectively engage the problem.

Agile Methods and Special Operations Units

So where does this leave us? To me, at least, it’s pretty clear that, on software projects, as in combat, there is no one-size-fits-all approach. I would no more try to tackle building an air traffic control system with XP than I would send a special forces team to face off against a Soviet armored column. Analogously, I would neither call in the 3rd Armored Division to handle a jungle-based guerrilla insurgency nor try to use CMM level 5 processes to build a simple Web-based e-commerce application.

Deciding what type of software development process to use is ultimately a managerial responsibility. From this decision, to paraphrase a saying, will come 90% of your happiness or misery. Making the right decision requires that the manager have an intimate understanding of his team’s capabilities, understand the scope of the project, and correctly assess the level of involvement that can be expected from project stakeholders. This knowledge is by no means easy to come by, which helps explain why there is so often a dissonance between the approach applied and the one required. Just as with a SEAL team, there are limits on the number of individuals who have the raw capabilities to be effective members of an Agile A-Team. If you’ve done an honest assessment, are certain that the Agile approach is right for the project, and have an A-Team, then go for it.

As a footnote, this military analogy brings to light interesting questions of contingent or complementary force parallels in software engineering. That is, Operation Iraqi Freedom was fought with a mix of closely coordinated heavy armor, airborne, and special forces units. I can imagine that within an enterprise, or even within a project, there might be room for similar collaboration between high ceremony teams, medium-weight iterative teams, and light-weight agile teams. I’ve started to see the first signs of this on projects and would be interested to hear your take on this.

Comments Off on Agile Methods and Special Operations Units

When I purchased this book almost 3 weeks ago, I was surprised to find that it had been on the shelves for 3 months already. Books that unify advanced architectural concepts such as Domain-Driven Design and design patterns are few and far between. This is especially true in the .NET world since many of the source materials originated in the Java realm.

Applying Domain Driven Design and Patterns

Nilsson does a rather unique job of pulling together some of the best domain-driven, object-oriented patterns and approaches and explaining them using .NET-specific examples. The pros and cons, as I see them, are taken from my review and reprinted below:


Pros:

  • Combines the ideas of Domain-Driven Design (Evans) with Patterns of Enterprise Application Architecture (Fowler). These books are pretty much mandatory reading prior to diving into this book.
  • Draws upon a myriad of other well-known sources, including materials from Refactoring to Patterns and the GoF, work from Johnson and Lowy, as well as a rare reference to Naked Objects. The more experienced and better read you are, the more this stuff will make sense.
  • Rare .NET coverage of advanced concepts like Plain Old CLR Objects (POCOs), persistence ignorant (PI) objects, O/R mapping with NHibernate, Dependency Injection, Inversion of Control, and Aspect-Oriented Programming.


Cons:

  • While some sections are insightful and full of interesting material, others seem to drone on too long. The work on defining the NUnit tests, in particular, flows like a stream of consciousness and doesn’t add much structured value to understanding DDD, patterns, or TDD for that matter.
  • Embedded comments in the text are adapted from the style used in Framework Design Guidelines. It worked very well for Cwalina / Abrams in their book because it seemed planned in from the outset. Here, comments like “one reviewer commented on the code with the following, more succinct version” read like editorial notes left in rather than collaborative authoring by design.
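Several of the concepts listed among the pros, dependency injection and inversion of control in particular, boil down to a small idiom. Here is a minimal constructor-injection sketch; the book’s examples are in C#, and the class and method names below are hypothetical, not taken from it:

```java
/** Constructor injection: the service depends on an abstraction,
 *  and the caller (or an IoC container) supplies the concrete
 *  implementation from the outside. */
interface OrderRepository {
    String find(int id);
}

class InMemoryOrderRepository implements OrderRepository {
    public String find(int id) { return "order-" + id; }
}

class OrderService {
    private final OrderRepository repository;   // injected dependency

    OrderService(OrderRepository repository) {  // constructor injection
        this.repository = repository;
    }

    String describe(int id) {
        return "Found " + repository.find(id);
    }
}

class DiDemo {
    public static void main(String[] args) {
        // Wiring is done by hand here; an IoC container automates this step.
        OrderService service = new OrderService(new InMemoryOrderRepository());
        System.out.println(service.describe(7)); // prints "Found order-7"
    }
}
```

Because OrderService never instantiates its own repository, a persistence-ignorant test double can be swapped in without touching the service, which is the thread connecting POCOs, PI objects, and DI in the book.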

Comments Off on Applying Domain-Driven Design and Patterns: With Examples in C# and .NET