A font-free skin for TinyMCE version 4

Since my last post, I’ve been doing another Coursera course, this time one called Computing for Data Analysis, by Roger Peng of Johns Hopkins University. It was an introduction to the statistical programming language R, and I wanted to do it because I thought it would increase my potential for answering questions on StackOverflow. Sadly, I think there are too many very qualified people watching the topic, and the people asking the questions are too sensible, for a person with shallow knowledge to be able to pick up points. But it was good nonetheless. Two of the most generally useful things I learnt were:

  • R isn’t suitable for crunching Big Data, because all the data points are held in RAM;
  • What ‘lexical scope’ is. (Finally!)

Intriguingly, the final exercise involved analysis of the police records of the 1250 homicides in Baltimore (the home town of Johns Hopkins) between 2007 and 2012. Is it the university’s policy to encourage distance learning?

Anyway, enough about my life. I thought it would be worth a little post to say what I’ve been doing over the weekend, which is creating a new skin for the popular browser-based HTML editor, TinyMCE. At work we’ve been using the default ‘lightgray’ skin, but we’ve come across a problem with some of our users. What should look like this:

actually looks like this:


It’s not hard to see the problem. All the icons have disappeared, which kinda reduces the usability of the editor.

The reason for this problem is that the icons aren’t normal images, they’re actually characters in a special font which has been embedded using the CSS @font-face rule. This is a nice idea for at least two reasons:

  1. you can get lots of images with just one download, and thus load faster,
  2. fonts are designed to scale nicely, so they’re a bit like scalable vector graphics, but are supported on older versions of IE.

But unfortunately we have a few users who are using a locked-down installation of Internet Explorer which is set not to download fonts. And, unlike many of the woes relating to developing for IE, this problem isn’t fixed by later versions. IE 11 also allows font downloads to be blocked.

So all I’ve done is create a new skin which uses CSS sprites rather than fonts. I converted the fonts (there are two, one for normal buttons and one for slightly smaller buttons) to sprites using the excellent IcoMoon site, and then edited the icons.less and icons.ie7.less files in the original lightgray skin to refer to the sprite rather than the font. So where there had been a rule like this before:

.mce-i-paste:before          { content: "\e008"; }

I changed it to this:

.mce-i-paste          { background-position: -384px -64px; }

I had to do this for 53 icons, so I’m now quite a lot better at my 32 times table.
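The arithmetic behind each rule is just multiples of the 32px cell size. Here is a quick sketch in Python; the grid position of the paste icon is my assumption, chosen to match the rule above:

```python
ICON = 32  # each icon occupies a 32x32 cell in the sprite

def sprite_rule(selector, column, row):
    # CSS background-position shifts the sprite left and up, hence the negation
    return "%s { background-position: %dpx %dpx; }" % (
        selector, -column * ICON, -row * ICON)

print(sprite_rule(".mce-i-paste", 12, 2))
# .mce-i-paste { background-position: -384px -64px; }
```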

The results are pretty good – the only difference I can discern between my skin and the original is that there isn’t an embossed effect on the icons in the font-free skin. It might be possible to fix this by putting a disabled HTML element showing the same image underneath each icon and offsetting the lower image by 1px. But I am not bothered enough to do this. Besides, 3D effects in UIs are OUT OF FASHION.

Spot the difference - fonty above, font-free below.


I’ve put the skin on my BitBucket. I need to write a readme telling you how to compile it if you need to (although it will actually work if you just put all the code into your TinyMCE skins directory and add skin: ‘lightgraynofonts’ to your tinyMCE.init call), and I need to work out what legal stuff to put in. TinyMCE is under the GNU Lesser GPL v2.1, so I would guess that my skin is as well. I do mean to submit it to the TinyMCE site once it’s all been tested by a professional tester.


What I have learned from my TOGAF 9.1 course

Besides being what Fergie used to do in the nineties, TOGAF is a method for enterprise architecture, and a couple of weeks ago I went on a TOGAF 9.1 Foundation and Certified course. My interest was exploratory rather than targeted: I wanted to hear a view on how to do architecture properly without really expecting to execute it myself at any point soon. Here’s a summary of some things I learned from the course.

  • Drawing lots of diagrams of the same thing for different audiences is an inescapable part of being an architect. This is a disappointment to me because I find drawing diagrams quite difficult, and maintaining multiple diagrams representing a changing system is even more difficult. There are two solutions to the problem of maintenance, and I don’t think either is possible to reach perfectly:
    1. Only start drawing multiple representations when the system isn’t going to change at the level at which you want to draw the diagrams. (Although you will still need to make many drafts of your master diagram: the one that represents your viewpoint.)
    2. Get an architecture repository which can auto-generate nice projections of systems for appropriate audiences.
  • This isn’t part of TOGAF, but it was on this course that I first heard of spider diagrams, which are a way of comparing two or more different solutions, each of which is evaluated in various ways.


Definitely one for the business toolkit rather than the science one. If you can mentally translate them into a simple bar chart, like the one below, then fine.


If not then they can be a bit misleading. For one thing, if there are more than three aspects to compare, the area covered by each option is affected by the order of the spokes. Look at these two solutions:

                    Solution 1    Solution 2
Cost                        10            10
Maintainability              1            10
Usability                   10            10
Coolness                     1             1
Performance                 10             1
Customisability              1             1

Both solutions have an equivalent numerical score, but look very different on a spider graph: solution 2 covers more than a third of the hexagon whilst solution 1 covers only about a tenth.
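You can check those figures directly: the polygon traced on a spider diagram is a fan of triangles between adjacent spokes, so its area depends on which values end up next to each other. A quick Python verification:

```python
import math

def radar_area(values):
    # polygon traced on a spider diagram with equally spaced spokes:
    # a fan of triangles, each with area (1/2) * r_i * r_(i+1) * sin(2*pi/n)
    n = len(values)
    wedge = 2 * math.pi / n
    return 0.5 * math.sin(wedge) * sum(
        values[i] * values[(i + 1) % n] for i in range(n))

solution1 = [10, 1, 10, 1, 10, 1]   # spokes in the table's order
solution2 = [10, 10, 10, 1, 1, 1]
hexagon = [10] * 6                  # the full chart

print(round(radar_area(solution1) / radar_area(hexagon), 2))  # 0.1
print(round(radar_area(solution2) / radar_area(hexagon), 2))  # 0.37
# Reorder solution 1 so its three 10s are adjacent and the difference vanishes:
print(radar_area(sorted(solution1, reverse=True)) == radar_area(solution2))  # True
```

The same six numbers cover between a tenth and over a third of the chart depending purely on spoke order.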


The other problem with them is that they don’t allow you to weight the importance of the various aspects very well. However, this weighting is something that tends to happen in each individual stakeholder’s mind, so providing the raw data (like this, or in bar-chart form) is probably more politically advisable. And, as my father pointed out to me this evening, they do allow you to put different units on each spoke, which is much more difficult in a bar chart.

Arguably the point of a diagram is to be interesting and pretty enough for someone to actually look at it, so perhaps it doesn’t matter that spider diagrams can do misleading things with areas as long as they’re sufficiently novel to get a busy important person to look at them long enough to read the figures.

In the same vein, the instructor told us about ‘tube map’ diagrams, which don’t seem to have to mean anything in particular as long as they’ve got circular intersections and thick coloured lines, like the example below.


Besides being founded on the superlative clarity of the tube map, they doubtless draw their effectiveness from the fact that most of their audience are, or have at some point been, commuters or international travellers, and so have developed an unconscious attraction towards anything that might help them travel more efficiently. This fact is ruthlessly exploited by the advertisements on the London Underground, which use tube-map analogies to advertise everything from homeless charities to cold remedies. (Note to any advertisers who happen to be reading this techie blog: it’s best to obey the grammatical rules of tube maps. For example, all lines should be horizontal, vertical, or at 45 degrees, and the corners should be arcs of small circles: if you can’t make it with a wooden BRIO train set, it ain’t right. If you break the rules you risk putting off your audience, unless you’re doing something clever by subverting the format, such as showing how a drunken night can go hideously off course by going all swirly towards the end of the line.)

  • The exam is probably a bit too easy to be able to use it to draw any conclusions about the effectiveness of certified architects or TOGAF course instructors. (I haven’t actually taken the exam yet.) There are two stages to it: Foundation and Certified. Both are multiple choice.

Foundation is all about terminology, and requires a bit of rote learning (e.g. the Architecture process consists of ten phases, eight of which are drily called Phases A-H, and you have to remember which letter of the alphabet corresponds to which inputs, activities and outputs).

Certified involves reading various scenarios and choosing one of four options for what the action of a TOGAF enterprise architect would be. This could be a good test of understanding, but the mark scheme is such that if you can eliminate the two least likely answers from each question then you’re certain to pass, and in fact even if you’re only able to eliminate the least likely answer, your expected result is still the pass mark. And, from the mock test that my course provider gave me, it looks as though they always include howlers among the answers, which are easy to spot if you’ve learnt your TOGAF terminology.
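The arithmetic behind that claim is easy to check, assuming the usual ‘gradient’ mark scheme for the Certified paper (best answer 5 points, second 3, third 1, distractor 0, with a 60% pass mark; those figures are my assumption, not quoted from the course notes):

```python
# Assumed TOGAF Part 2 'gradient' scoring: 5/3/1/0 per question, 60% to pass.
SCORES = [5, 3, 1, 0]
PASS_FRACTION = 0.6

def expected_fraction(remaining_scores):
    # guess uniformly at random among the answers you couldn't eliminate
    return sum(remaining_scores) / len(remaining_scores) / max(SCORES)

# Eliminate the two weakest answers: the worst case is the 3-point answer,
# which is exactly the pass mark, so you are certain to pass.
assert min(5, 3) / max(SCORES) == PASS_FRACTION
assert expected_fraction([5, 3]) == 0.8

# Eliminate only the distractor: the expected score is the pass mark itself.
assert expected_fraction([5, 3, 1]) == PASS_FRACTION
```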

  • The opinion that enterprise architecture is expensive and ineffective is not universally held. However it is sufficiently widespread that a significant part of the course was about learning to sell the benefits (higher-quality and cheaper IT) to management.
  • An architecture repository is essential if you want to be able to work out which systems could be affected by changes you are considering making to your enterprise architecture. This makes sense to me, because the alternative – asking people who were there when the system was first implemented – doesn’t seem to work very well.
  • I learnt a process for doing architecture. I don’t think I’ll be executing it rigorously in my current workplace, as that would require cooperation with others and the consequent need to become a TOGAF bore, but I do plan to turn it into a checklist to see whether we’ve done everything that the process suggests should be done.

What’s wrong with SOAP?

I don’t mean to talk about everything that’s wrong with SOAP. I would just like to draw attention to a drawback of its greatest feature – the fact that machines can auto-generate proxies to it.

InfoQ recently drew my attention to a very useful document published by Microsoft called ‘.NET Guide for Business Applications’. It provides guidance on which components to use where. One of the most interesting things in it was the recommendation that Web API should usually be used for REST, while WCF should be used for SOAP. I don’t have a problem with that recommendation, but I did think it was a bit of a shame that a technology where the client could be written almost codelessly (REST implemented in WCF) was being replaced with one which would require client developers to craft HTTP requests and parse responses. So I considered the possibility of writing a component that would do something similar to WCF and take the grunt-work out of writing Web API clients by sharing some code on the client and server side.

But then I thought some more. One of the scenarios for Web API described in the paper is creating an app which targets several mobile devices, all of which share a back end. There might be an iOS app written in Objective-C; an Android app written in Java; an HTML5 site for PCs and people who don’t want to download an app; and maybe even a Windows Phone app written in .NET. Now if I were to write an auto-generating client it would only save time in the least useful platform: Windows Phone. In order to remove the grunt-work completely I would have to write a .NET utility which would output a machine-readable description of the Web API interfaces and write several non .NET utilities which would allow iOS, Java and Javascript apps to generate their own clients from the description. But what is this like? WSDL and SOAP.

Why, apart from the fact that this would be pointlessly replicating a well-established technology, would this be bad?

SOAP is bad over the internet for several reasons – it’s verbose because it bloats messages with XML; it uses HTTP POST for everything so doesn’t allow reads to be cached; and it’s procedure-oriented so it tends to be designed for particular clients wanting to DO something rather than general clients wanting to USE some data.

A few years back in my organisation, a paper was written which advocated the creation of a service-oriented architecture with applications talking to each other using RESTful services with XML payloads. This was just before I joined, so I didn’t have the opportunity to ask why we were preferring REST over SOAP. In a fast internal network, the bloatedness and non-cacheability of SOAP don’t really matter, and we tend to have the ability to adapt services when new clients or new requirements emerge. So why? I think the reason might be that REST forces you to do two very good things:

  • Write documentation for clients. (Because otherwise they won’t be able to do anything.)
  • Think about what you really want to expose in your interface. (Because if you don’t, nothing will be exposed at all.)

And it forces you to do these two things because it doesn’t come with autogenerated, machine readable documentation and autogenerated client proxies. (There are some standards for describing RESTful services but they haven’t achieved anything like the success of WSDL for SOAP.)

What can happen when you write a SOAP interface is that you put functionality which the client doesn’t really need on your endpoint. It’s so easy just to expose your business layer over SOAP that you may well expose the whole lot. But if you expose more than your clients need, then the clients may start using the bits that you didn’t think they needed, and when you come to replace the service with another technology, you find you need to replicate the whole implementation. Especially if you didn’t produce any documentation in the first place to let them know what they were and weren’t supposed to use.

You can, of course, get around these problems when you’re using SOAP by using will power to write documentation and think about what you want to expose. But if you lack will power, it’s better to use a technology which forces you to do those things – and you get the side benefits of a streamlined, cacheable and re-usable service.

ThoughtWorks’ Brandon Byars has recommended hand-coding REST clients in his article here, also pointed out to me by InfoQ.

How to use Visio to draw a box with more than 15 lines

One of the drawbacks of being an architect is that you don’t get to spend much time writing fun code, and you spend a lot of time writing boring documents. Luckily, Microsoft has kindly made using Visio – a product in the extended Office family that is used to draw diagrams (by far the most scrutinised parts of architectural documents) – a bit like programming. I’ll demonstrate by going through the steps needed to make a simple customisation of one of the Visio shapes.

Visio comes with two nice single-column tables, a 5-line one and a 15-line one. Here’s the 5-line one:

five-line box

If you’re using the 5-line box and you want fewer than 5 lines, or you’re using the 15-line one and you want fewer than 15, you can just make your box shorter, and the bottom lines will disappear. So, using the built-in shapes, you can make any table with one column and up to fifteen rows. But if you need more than 15 rows you need to use programmer-like ingenuity. (Of course you could argue that if you’re putting tables that long into your diagram then you’re using the wrong tool and you should be linking to Excel, but I would argue that you were missing the point.)

I’m using Visio 2010 Professional here. Firstly you need to show the Developer toolbar. Go to the File tab in the ribbon, press Options, and on the Options dialogue press ‘Customize Ribbon’.

Visio Options

Now tick ‘Developer’ in the right-hand box and press ‘OK’.

Now open or create a diagram, and put a 15 ruled column on it. Do this by showing the Title Blocks shapes sheet (Shapes pane –> More Shapes –> Visio Extras –> Title Blocks) then selecting the 15 ruled column shape and dragging it to your diagram. (Unlike in Visual Studio, double-clicking the shapes doesn’t do anything.)

Now, right click your 15 ruled column on the diagram and click ‘Show ShapeSheet’. A pane like the following should appear:


Scroll down to ‘Geometry 15’. ‘Geometry 15’ is the second-last horizontal line in the column. The idea is that we’re going to create a similar line but one row down.

Right click in the ShapeSheet pane and click ‘Insert Section’. The following dialogue will appear:


Tick ‘Geometry’ and press OK. You should see a new geometry section called ‘Geometry 16’ appear. I can’t pretend to know what all the values in this box mean, but making them almost the same as the values in Geometry 15 seems to work. So to this end,

  1. Select the cell next to Geometry16.NoSnap and F2 into it. Delete the contents and press Enter.
  2. Select the cell next to Geometry16.NoQuickDrag and F2 into it. Delete the contents and press Enter.
  3. Two rows down, select the cell under the X and next to ‘Move to’ and replace its contents with ‘=0’.
  4. In the cell to the right of this, replace the contents with ‘=Height-MIN(15,Scratch.A1)*Scratch.X1’. (It’s easiest to do this by copying the corresponding cell in Geometry 15 and changing the 14 to a 15.)
  5. Replace the cell in column X next to ‘LineTo’ with ‘=Width’.
  6. Replace the cell to the right of this with ‘=Geometry16.Y1’.
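For what it’s worth, my reading of the Geometry formula we just copied is that it computes the vertical position of each ruled line: line k sits k row-heights below the top of the shape, clamped so it never falls below the bottom. A sketch with hypothetical numbers:

```python
# '=Height - MIN(k, Scratch.A1) * Scratch.X1' as Python: I'm assuming Scratch.A1
# is the row count and Scratch.X1 the row height (my reading, not documented).
def rule_y(k, height, rows, row_height):
    return height - min(k, rows) * row_height

HEIGHT, ROWS, ROW_HEIGHT = 160, 16, 10  # hypothetical shape dimensions
assert rule_y(1, HEIGHT, ROWS, ROW_HEIGHT) == 150  # first rule near the top
assert rule_y(16, HEIGHT, ROWS, ROW_HEIGHT) == 0   # 16th rule is the bottom edge
assert rule_y(99, HEIGHT, ROWS, ROW_HEIGHT) == 0   # clamped: never below the shape
```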

Here’s the final result:


Finally, you have to set a value in the shape to say that the outermost rectangle should be 16 lines high. Scroll down to the ‘Scratch’ section, which should be the section below Geometry 16, and change cell B1 from 15 to 16.


Now you can close the pane, and, if you haven’t already done so, resize your 15 ruled column so that it’s large enough to show 16 lines.

I’m scrabbling round in the dark here: I really don’t know what I’m doing with Visio ShapeSheets. But I’m pleased that it’s a powerful enough product to be able to edit the built-in shapes in such detail. It gives me confidence that I’ll be able to draw what I want, and solve interesting problems while I do it. My job does have some perks!

My recipe for WCF part 3 – Rolling your own integration with an IoC

It’s been a while since I last posted about WCF. In the meantime I’ve been doing a Coursera course on Computational Investing. I must say that I don’t feel much better qualified to start a hedge fund as a result – it all seems a little too simple, and I’m sure that transaction charges would push any trading algorithm I could devise into negative returns. I have, however, adopted the sophisticated investment strategy of buying Royal Mail shares in the IPO. My husband and I decided to hedge against an undervaluation by purchasing approximately the stake we already own (about £1000 between us, based on a market capitalisation of £3.3bn and a UK population of 70 million). We got £750 worth, as did all retail investors except those enthusiastic enough to have applied for over £10K’s worth, who were rewarded with no shares at all.

I’ve made some improvements to the primality-testing algorithm from my last post: following a discussion with a mathematician, I’ve used a discrete Fourier transform method for multiplying polynomials. This has reduced the time needed to test all primes less than 100,000 from about a week to about an hour. It’s still quite slow, though, and really I’ve come to the conclusion that .NET isn’t suitable for doing calculations with large numbers. One of the problems is that BigInteger is a value type, which means that every time you assign it to a variable or pass it to a method a copy is made. At one point I made multiplication run 10 times more slowly by doing something like

BigInteger b = a[0];

i.e. copying a BigInteger out of an array into a local variable. It might be interesting to see if it were possible to wrap fast maths implemented in IronPython or something in a .NET interface, though I’m not sure I can bear the prospect of implementing this particular algorithm again.
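The post doesn’t show the Fourier-transform multiplication itself, so here is a minimal sketch of the idea in Python (a textbook radix-2 FFT, not my production code): evaluate both polynomials at the complex roots of unity, multiply pointwise, and transform back.

```python
import cmath

def fft(coeffs, invert=False):
    # recursive radix-2 Cooley-Tukey transform; len(coeffs) must be a power of 2
    n = len(coeffs)
    if n == 1:
        return list(coeffs)
    even = fft(coeffs[0::2], invert)
    odd = fft(coeffs[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_multiply(p, q):
    # multiply two integer-coefficient polynomials via the convolution theorem
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft([complex(c) for c in p] + [0j] * (n - len(p)))
    fq = fft([complex(c) for c in q] + [0j] * (n - len(q)))
    prod = fft([a * b for a, b in zip(fp, fq)], invert=True)
    return [round((c / n).real) for c in prod[:len(p) + len(q) - 1]]

print(poly_multiply([1, 2], [3, 4]))  # [3, 10, 8], i.e. (1+2x)(3+4x) = 3+10x+8x^2
```

This turns the O(d^2) schoolbook multiplication into O(d log d), which is where the week-to-an-hour speed-up comes from.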

So! Let’s get to the WCF programming point of this post. I’m showing how to get an IoC library to manage the creation of the proxy classes when using WCF. I used StructureMap because it doesn’t come with a pre-packaged library for doing this. There isn’t that much code to write on the client side: it’s only necessary to register the Channel Factories as singletons:

                For<ChannelFactory<IPrimeTester>>().Singleton()
                    .Use(new ChannelFactory<IPrimeTester>(
                               new BasicHttpBinding("defaultBinding"),
                               GetServiceAddress("PrimeTester.svc")));

and then to register the proxy interfaces themselves as being created by these ChannelFactories:

                For<IPrimeTester>().Use(() =>
                    {
                        var channelFactory = ObjectFactory.GetInstance<ChannelFactory<IPrimeTester>>();
                        return channelFactory.CreateChannel();
                    });

The full code of the class which is used to register the client side proxies is below.

using System.ServiceModel;
using CalculatorServiceInterfaces;
using StructureMap;
using StructureMap.Configuration.DSL;

namespace Calculator.StructureMapConfig
{
    public class WcfRegistry : Registry
    {
        public WcfRegistry()
        {
            RegisterChannelFactories();
            RegisterChannels();
        }

        private void RegisterChannels()
        {
            For<IHcfService>().Use(() =>
                {
                    var channelFactory = ObjectFactory.GetInstance<ChannelFactory<IHcfService>>();
                    return channelFactory.CreateChannel();
                });

            For<IPrimeTester>().Use(() =>
                {
                    var channelFactory = ObjectFactory.GetInstance<ChannelFactory<IPrimeTester>>();
                    return channelFactory.CreateChannel();
                });
        }

        private void RegisterChannelFactories()
        {
            For<ChannelFactory<IHcfService>>().Singleton()
                .Use(new ChannelFactory<IHcfService>(new BasicHttpBinding("defaultBinding"),
                    GetServiceAddress("HcfService.svc")));

            For<ChannelFactory<IPrimeTester>>().Singleton()
                .Use(new ChannelFactory<IPrimeTester>(new BasicHttpBinding("defaultBinding"),
                    GetServiceAddress("PrimeTester.svc")));
        }

        private string GetServiceAddress(string serviceName)
        {
            return ObjectFactory.GetInstance<AppSettings>().ServiceBaseUrl + serviceName;
        }
    }
}

On the server side there’s not really anything that I can add to this blog post, which tells you exactly what to do, so I won’t try. I have, however, taken the code in that post and put it into a separate project with an output to a Nuget package, so if you want to use it you won’t have to write it yourself.

If you want to see all the code, fully loaded with Fourier Transforms and StructureMap, take a look here.

My recipe for WCF part 2 – Using Castle Windsor to set up the connections

Download the code for this post here.

In my first post on WCF, I mentioned that I might implement a fast primality test around which I would build WCF services. Well, I tried, but in fact I’ve implemented a test which, whilst being fast from a theoretical computer scientist’s point of view, is really, really slow from anyone else’s. It took a week to test all the integers up to 100,000 on my newish, development-grade office desktop. It’s the Agrawal–Kayal–Saxena primality test, which, when it was published in this paper in 2002, was the first ever general deterministic primality test which was guaranteed to run in polynomial time. (Polynomial time, by the way, seems to mean polynomial in the logarithm of a number when it comes to primality testing or factoring. Go figure.) I was helped in my implementation of this by reading at least half of this friendly paper by Andrew Granville, and my implementation was of a version of the algorithm which incorporated the improvements I read about in this paper by Robert G. Salembier and Paul Southerington.

In theory, I think my implementation could test the primality of N for N up to around 2^(2^20). The limit is imposed by the fact that I’ve used an array indexed by a 64-bit integer to hold the coefficients of polynomials whose degree can be as large as (lg N)^3 (where lg is the base-2 logarithm). In practice, however, it took so long to test 2^31-1 (a Mersenne prime) that I gave up. I may set this test running on a computer which, unlike my laptop, doesn’t automatically shut itself down after a period of what it naively assumes to be inactivity.
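The 2^(2^20) figure follows directly from the array-index limit; a two-line sanity check:

```python
# For N = 2**(2**20), lg N = 2**20, so the polynomial degree can reach (lg N)**3.
lg_n = 2**20
max_degree = lg_n**3
assert max_degree == 2**60      # the coefficient array needs ~2**60 entries...
assert max_degree < 2**63 - 1   # ...which a signed 64-bit index can still address
```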

But that’s enough amateur maths and computer science. Let’s get down to the programming.

The point is that Castle Windsor makes it extremely easy to use WCF, particularly on the client side. All you need to do to create strongly-typed proxy objects on your client side is

  1. Add the WcfFacility to your Windsor Container.
  2. Create a binding in your config file.
  3. Register all your services in the Windsor container, referencing the name of the binding in your config file and the URL of the service.

Here is the source code for setting up the IoC (here done in the Configure class of a Caliburn.Micro Bootstrapper class, because I’ve used Caliburn.Micro as a window driver):

protected override void Configure()
{
    _container = new WindsorContainer();
    _container.AddFacility<WcfFacility>();

    // The base address here is illustrative; the real one came from configuration.
    var baseUrl = "http://localhost/CalculatorServices/";

    _container.Register(
        Component.For<IHcfService>()
            .AsWcfClient(new DefaultClientModel(WcfEndpoint
                .BoundTo(new BasicHttpBinding("defaultBinding"))
                .At(baseUrl + "HcfService.svc"))),
        Component.For<IPrimeTester>()
            .AsWcfClient(new DefaultClientModel(WcfEndpoint
                .BoundTo(new BasicHttpBinding("defaultBinding"))
                .At(baseUrl + "PrimeTester.svc"))));
}
And here is the source code of the config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="defaultBinding"/>
      </basicHttpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
Not a ChannelFactory in sight. Lovely.

On the server side, you need to edit the markup in your .svc files to use the Castle Windsor factory:

<%@ ServiceHost Language="C#" Debug="true" Service="PrimeTester" Factory="Castle.Facilities.WcfIntegration.DefaultServiceHostFactory" %>

And finally you need to register your services, and all their dependencies, in the container. Here’s the global.asax class:

    public class Global : System.Web.HttpApplication
    {
        private WindsorContainer _container;

        protected void Application_Start(object sender, EventArgs e)
        {
            _container = new WindsorContainer();
            _container.AddFacility<WcfFacility>();
            // Register WCF services
            // Register all other classes.
        }
    }
So the take-home message is that I would strongly recommend you use Castle Windsor or another IoC which supports this sort of automatic creation of client proxies and makes a WCF class feel like any other dependency. But don’t worry if you’re using an IoC which doesn’t have this sort of feature, because it’s not that difficult to write it yourself, and I’ll demonstrate this in the next post on this subject.

The source code provides a very basic UI for inputting a number and determining whether it’s prime.


Note that the sample code doesn’t cope very well with failure. (A quick and easy example is to put ‘2’ in to test.) Exception handling across a WCF process boundary sounds like good material for a later post.

My Recipe for WCF

Download the code for this post here.

Sometimes you want to build a distributed application, not because you’re adopting a Service-Oriented Architecture, but simply because for some reason part of your code needs to run on one machine and the rest of it needs to run on another. A classic example (now consigned to history by HTML5) is the old thick-client desktop application which would make service calls to a middle tier. In modern times, you might find yourself writing a native mobile application which needs to connect to an online service. Or, in a cloud scenario, you might want to avoid running your precious business-logic code on a machine that is directly accessible to the outside world, so you put your presentation layer on a machine with a public IP address and make it call a service on another machine which isn’t available publicly.

In this post, I want to promote the idea that just because an application is distributed doesn’t mean you have to write separate client-side and server-side code. With WCF, you can just define a single interface and get the framework to do all the plumbing for you. In the language of Domain-Driven Design, we would say that

Process boundary != Context boundary.

I find this point is worth making explicit because on several occasions I’ve seen implementations that are structured like this:


(At runtime there is some WCF magic which creates a proxy object conforming to the IClientContract interface which calls the ServiceImplementation across process boundaries.)

As an emotionally fragile architect, it’s not going too far to say that I find this architecture upsetting. What I dislike about it is all the duplicated code. I don’t mean the interfaces being separated from the implementations: I thoroughly approve of that practice for the purposes of testability and the wider principle of dependency inversion. I mean that there are far too many interfaces with no material differences.

Take IServiceContract and IClientContract in the first instance. If you’ve used ‘Add Web Reference’ then this interface will have been generated for you; if you’re using ChannelFactory then you’ll have written it yourself. In either case, the interfaces will be interchangeable. So the following refactoring would cut down on the duplication of code:


If you want to do this you have to use ChannelFactory rather than ‘Add Web Reference’, because adding web references auto-generates the client-side interfaces.

This is better, because you’ve removed one interface. But I would recommend going further and actually making your Business Logic layer into your Service Endpoint layer.

Now you have no more classes than you would have written had everything been in the one process. A downside is that your Business Logic layer now depends indirectly on the WCF libraries, and the interfaces have [ServiceContract] attributes. If you can’t live with this then you’ll have to go back to the intermediate refactoring.

So I’ve told you how to deconstruct an over-engineered implementation. Here is my recipe for writing this pattern from scratch:


1 business problem

1 requirement for inter-machine communication

A small handful of .NET hosting environments (pick from a variety of development, system test, UAT, pre-prod and production)


  1. Write your Business Logic Layer and presentation layer as if there were no WCF involved.
  2. Separate out the interfaces of your Business Logic Layer into their own assembly.
  3. Put [ServiceContract] and [OperationContract] attributes on your interfaces.
  4. Create .svc classes inheriting from your Business Logic Layer classes.
  5. Host your .svc files in a Web application, Windows Service or self-host in a console application, although I have to admit that my knowledge of the latter two options is purely theoretical.
  6. In your start-up code for your presentation layer, create a Factory class which will create clients of the interfaces you defined in step 2.

I’ve created a sample application which shows this architecture. It does something very, very slightly elevated from the trivial: it hosts and calls a service that uses Euclid’s algorithm to calculate the highest common factor of two integers. Maybe for the next iteration I’ll add something really advanced, such as a fast primality test.
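Euclid’s algorithm itself is only a few lines in any language; the sample’s real implementation is C#, but here is the idea in Python:

```python
def hcf(a, b):
    # Euclid's algorithm: repeatedly replace the pair with (b, a mod b)
    # until the remainder is zero; the survivor is the highest common factor
    while b:
        a, b = b, a % b
    return a

print(hcf(1071, 462))  # 21
```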

I want to make it clear that you aren’t preventing your service from being a worthy participant in an SOA just because you’ve used the same code on your client side. After all, exactly the same URLs are being used and the same messages are going across the wire. And how do you make sure that this service is suitable for an SOA? Document the interface and put it under change control. That’s it. The difference between a loosely-coupled interface and a tightly-coupled one is governance, not implementation.

In later posts I will do something like the following: