What I have learned from my TOGAF 9.1 course

Besides being what Fergie used to do in the nineties, TOGAF is a method for enterprise architecture, and a couple of weeks ago I went on a TOGAF 9.1 Foundation and Certified course. My interest was exploratory rather than targeted: I wanted to hear a view on how to do architecture properly without really expecting to execute it myself at any point soon. Here’s a summary of some things I learned from the course.

  • Drawing lots of diagrams of the same thing for different audiences is an inescapable part of being an architect. This is a disappointment to me because I find drawing diagrams quite difficult, and maintaining multiple diagrams representing a changing system is even more difficult. There are two solutions to the problem of maintenance, and I don’t think either is possible to reach perfectly:
    1. Only start drawing multiple representations when the system isn’t going to change at the level at which you want to draw the diagrams. (Although you will still need to make many drafts of your master diagram: the one that represents your viewpoint.)
    2. Get an architecture repository which can auto-generate nice projections of systems for appropriate audiences.
  • This isn’t part of TOGAF, but the course was where I first heard of spider diagrams, which are a way of comparing two or more different solutions, each of which is evaluated in various ways.


Definitely one for the business toolkit rather than the science one. If you can mentally translate them into a simple bar chart, like the one below, then fine.


If not then they can be a bit misleading. For one thing, if there are more than three aspects to compare, the area covered by each option is affected by the order of the spokes. Look at these two solutions:

                   Solution 1   Solution 2
  Cost                 10           10
  Maintainability       1           10
  Usability            10           10
  Coolness              1            1
  Performance          10            1
  Customisability       1            1

Both solutions have an equivalent numerical score, but look very different on a spider graph: solution 2 covers more than a third of the hexagon whilst solution 1’s area is close to zero.
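The geometry behind this is easy to check: a radar polygon is a fan of triangles between adjacent spokes, so its area depends on which scores end up next to each other. A quick sketch using the scores from the table above:

```python
import math

def radar_area(scores):
    """Area of the polygon formed by plotting scores on equally spaced spokes."""
    n = len(scores)
    wedge = 2 * math.pi / n
    # Each pair of adjacent spokes contributes a triangle of area
    # 1/2 * r_i * r_{i+1} * sin(wedge).
    return 0.5 * math.sin(wedge) * sum(
        scores[i] * scores[(i + 1) % n] for i in range(n)
    )

solution_1 = [10, 1, 10, 1, 10, 1]   # Cost, Maintainability, Usability, ...
solution_2 = [10, 10, 10, 1, 1, 1]

full = radar_area([10] * 6)           # the whole hexagon
print(radar_area(solution_1) / full)  # ≈ 0.10 -- solution 1 fills a tenth
print(radar_area(solution_2) / full)  # ≈ 0.37 -- solution 2 fills over a third

# Re-order solution 1's spokes so its high scores are adjacent, and its
# area more than triples -- same numbers, very different picture:
print(radar_area(sorted(solution_1, reverse=True)) / full)  # ≈ 0.37
```

So two solutions with identical totals can fill anywhere from a tenth to over a third of the chart, purely depending on the arbitrary order in which someone lists the criteria.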


The other problem with them is that they don’t allow you to weight the importance of the various aspects very well. However, this weighting is something that tends to happen in each individual stakeholder’s mind, so providing the raw data (in spider or bar chart form) is probably more politically advisable. And, as my father pointed out to me this evening, they do actually allow you to put different units on each spoke, which is much more difficult in a bar chart.

Arguably the point of a diagram is to be interesting and pretty enough for someone to actually look at it, so perhaps it doesn’t matter that spider diagrams can do misleading things with areas as long as they’re sufficiently novel to get a busy important person to look at them long enough to read the figures.

In the same vein, the instructor told us about ‘tube map’ diagrams, which don’t seem to have to mean anything in particular as long as they’ve got circular intersections and thick coloured lines, like the example below.


Besides being founded on the superlative clarity of the tube map, they doubtless draw their effectiveness from the fact that most of their audience are, or have at some point been, commuters or international travellers, and so have developed an unconscious attraction towards anything that might help them travel more efficiently. This fact is ruthlessly exploited by the advertisements on the London Underground, which use tube-map analogies to advertise everything from homeless charities to cold remedies. (Note to any advertisers who happen to be reading this techie blog: it’s best to obey the grammatical rules of tube maps. For example, all lines should be horizontal, vertical, or at 45 degrees and the corners should be arcs of small circles: if you can’t make it with a wooden BRIO train set, it ain’t right. If you break the rules you risk putting off your audience, unless you’re doing something clever by subverting the format, such as showing how a drunken night can go hideously off course by going all swirly towards the end of the line.)

  • The exam is probably a bit too easy to be able to use it to draw any conclusions about the effectiveness of certified architects or TOGAF course instructors. (I haven’t actually taken the exam yet.) There are two stages to it: Foundation and Certified. Both are multiple choice.

Foundation is all about terminology, and requires a bit of rote learning (e.g. the Architecture process consists of ten phases, eight of which are drily called Phases A-H, and you have to remember which letter of the alphabet corresponds to which inputs, activities and outputs).

Certified involves reading various scenarios and choosing one of four options about what the action of a TOGAF enterprise architect would be. This could be a good test of understanding, but the mark scheme is such that if you can eliminate the two least likely answers from each question then you’re certain to pass, and in fact even if you’re only able to eliminate the least likely answer, your expected result is still the pass mark. And, from the fake test that my course provider gave me, it looks as though they always put in howlers of answers which are easy to spot if you’ve learnt your TOGAF terminology.
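For the curious, the arithmetic behind that claim can be checked. I’m assuming the gradient scoring scheme commonly described for the Part 2 exam (best answer 5 points, second-best 3, third 1, distractor 0, eight questions, pass mark 60%); this is a sketch, not official exam documentation:

```python
# Assumed gradient scoring for the Certified (Part 2) exam: best answer 5,
# second-best 3, third 1, distractor 0; eight questions; pass mark 60%.
POINTS = [5, 3, 1, 0]
QUESTIONS = 8
PASS_MARK = 0.6 * QUESTIONS * max(POINTS)   # 24 out of 40

def expected_score(eliminated):
    """Expected total if you eliminate the `eliminated` worst answers on every
    question and then guess uniformly among the remaining options."""
    remaining = POINTS[: len(POINTS) - eliminated]
    return QUESTIONS * sum(remaining) / len(remaining)

print(expected_score(0))  # pure guessing: 18 -- a fail
print(expected_score(1))  # spot the howler: 24 -- exactly the pass mark
print(expected_score(2))  # eliminate two: expected 32, and even the worst
                          # case (all second-best answers) is 8 * 3 = 24,
                          # so a pass is guaranteed
```

In other words, under these assumptions, merely being able to spot the howler on every question already gives an expected score equal to the pass mark.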

  • The opinion that enterprise architecture is expensive and ineffective is not universally held. However it is sufficiently widespread that a significant part of the course was about learning to sell the benefits (higher-quality and cheaper IT) to management.
  • An architecture repository is essential if you want to be able to work out what systems could be affected by changes you are considering making to your enterprise architecture. This makes sense to me, because the alternative – asking people who were there when the system was first implemented – doesn’t seem to work very well.
  • I learnt a process for doing architecture. I don’t think I’ll be executing it rigorously in my current workplace, as that would require cooperation with others and the consequent need to become a TOGAF bore, but I do plan to turn it into a checklist to see whether we’ve done everything that the process suggests should be done.

What’s wrong with SOAP?

I don’t mean to talk about everything that’s wrong with SOAP. I would just like to draw attention to a drawback of its greatest feature – the fact that machines can auto-generate proxies to it.

InfoQ recently drew my attention to a very useful document published by Microsoft called ‘.NET Guide for Business Applications’. It provides guidance on which components to use where. One of the most interesting things in it was the recommendation that Web API should usually be used for REST, while WCF should be used for SOAP. I don’t have a problem with that recommendation, but I did think it was a bit of a shame that a technology where the client could be written almost codelessly (REST implemented in WCF) was being replaced with one which would require client developers to craft HTTP requests and parse responses. So I considered the possibility of writing a component that would do something similar to WCF and take the gruntwork out of writing Web API clients by sharing some code on the client and server side.

But then I thought some more. One of the scenarios for Web API described in the paper is creating an app which targets several mobile devices, all of which share a back end. There might be an iOS app written in Objective-C; an Android app written in Java; an HTML5 site for PCs and people who don’t want to download an app; and maybe even a Windows Phone app written in .NET. Now if I were to write an auto-generating client it would only save time on the least useful platform: Windows Phone. In order to remove the gruntwork completely I would have to write a .NET utility which would output a machine-readable description of the Web API interfaces, and write several non-.NET utilities which would allow iOS, Java and JavaScript apps to generate their own clients from the description. But what is this like? WSDL and SOAP.

Why, apart from the fact that this would be pointlessly replicating a well-established technology, would this be bad?

SOAP is bad over the internet for several reasons – it’s verbose because it bloats messages with XML; it uses HTTP POST for everything so doesn’t allow reads to be cached; and it’s procedure-oriented so it tends to be designed for particular clients wanting to DO something rather than general clients wanting to USE some data.
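The first two points are easy to see side by side. Here is a made-up stock-price read expressed both ways (the `GetPrice` operation, URLs and namespaces are invented for illustration):

```python
# A hypothetical read-only operation, written as a SOAP call and as a
# RESTful GET. Service name, namespaces and paths are all made up.
soap_request = """\
POST /StockService HTTP/1.1
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://example.com/GetPrice"

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Symbol>MSFT</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

rest_request = "GET /stocks/MSFT/price HTTP/1.1\nAccept: application/json"

print(len(soap_request), len(rest_request))
# The REST request is a small fraction of the size, and because it is a
# GET rather than a POST, any intermediary is free to cache the response.
```

The procedure-orientation point doesn’t show up in the bytes, but it shows up in the names: `GetPrice` is a verb aimed at one kind of client, whereas `/stocks/MSFT/price` is a resource that any client can read.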

A few years back in my organisation, a paper was written which advocated the creation of a service-oriented architecture with applications talking to each other using RESTful services with XML payloads. This was just before I joined, so I didn’t have the opportunity to ask why we were preferring REST over SOAP. In a fast internal network, the bloatedness and non-cacheability of SOAP don’t really matter, and we tend to have the ability to adapt services when new clients or new requirements emerge. So why? I think the reason might be that REST forces you to do two very good things:

  • Write documentation for clients. (Because otherwise they won’t be able to do anything.)
  • Think about what you really want to expose in your interface. (Because if you don’t, nothing will be exposed at all.)

And it forces you to do these two things because it doesn’t come with autogenerated, machine readable documentation and autogenerated client proxies. (There are some standards for describing RESTful services but they haven’t achieved anything like the success of WSDL for SOAP.)

What can happen when you write a SOAP interface is that you put functionality which the client doesn’t really need on your endpoint. It’s so easy just to expose your business layer over SOAP that you may well expose the whole lot. But if you expose more than you think your clients need, then the clients may start using the bits that you didn’t think they needed, and then when you come to replace the service with another technology, you find you need to replicate the whole implementation. Especially if you didn’t produce any documentation in the first place to let them know what they were and weren’t supposed to use.

You can, of course, get around these problems when you’re using SOAP by using willpower to write documentation and think about what you want to expose. But if you lack willpower, it’s better to use a technology which forces you to do those things – and you get the side benefits of a streamlined, cacheable and re-usable service.

ThoughtWorks’ Brandon Byars has recommended hand-coding REST clients in his article here, also pointed out to me by InfoQ.