What’s wrong with SOAP?

I don’t mean to talk about everything that’s wrong with SOAP. I would just like to draw attention to a drawback of its greatest feature – the fact that machines can auto-generate proxies to it.

InfoQ recently drew my attention to a very useful document published by Microsoft called ‘.NET Guide for Business Applications’. It provides guidance on which components to use where. One of the most interesting things in it was the recommendation that Web API should usually be used for REST, while WCF should be used for SOAP. I don’t have a problem with that recommendation, but I did think it was a bit of a shame that a technology where the client could be written almost codelessly (REST implemented in WCF) was being replaced with one which would require client developers to craft HTTP requests and parse responses. So I considered the possibility of writing a component that would do something similar to WCF and take the grunt-work out of writing Web API clients by sharing some code on the client and server side.

But then I thought some more. One of the scenarios for Web API described in the paper is creating an app which targets several mobile devices, all of which share a back end. There might be an iOS app written in Objective-C; an Android app written in Java; an HTML5 site for PCs and people who don’t want to download an app; and maybe even a Windows Phone app written in .NET. Now if I were to write an auto-generating client it would only save time on the least useful platform: Windows Phone. In order to remove the grunt-work completely I would have to write a .NET utility which would output a machine-readable description of the Web API interfaces, and write several non-.NET utilities which would allow iOS, Java and JavaScript apps to generate their own clients from the description. But what is this like? WSDL and SOAP.

Why, apart from the fact that this would be pointlessly replicating a well-established technology, would this be bad?

SOAP is bad over the internet for several reasons – it’s verbose because it bloats messages with XML; it uses HTTP POST for everything so doesn’t allow reads to be cached; and it’s procedure-oriented so it tends to be designed for particular clients wanting to DO something rather than general clients wanting to USE some data.

A few years back in my organisation, a paper was written which advocated the creation of a service-oriented architecture, with applications talking to each other using RESTful services with XML payloads. This was just before I joined, so I didn’t have the opportunity to ask why we were preferring REST over SOAP. In a fast internal network, the bloatedness and non-cacheability of SOAP don’t really matter, and we tend to have the ability to adapt services when new clients or new requirements emerge. So why? I think the reason might be that REST forces you to do two very good things:

  • Write documentation for clients. (Because otherwise they won’t be able to do anything.)
  • Think about what you really want to expose in your interface. (Because if you don’t, nothing will be exposed at all.)

And it forces you to do these two things because it doesn’t come with auto-generated, machine-readable documentation and auto-generated client proxies. (There are some standards for describing RESTful services, but they haven’t achieved anything like the success of WSDL for SOAP.)

What can happen when you write a SOAP interface is that you put functionality which the client doesn’t really need on your endpoint. It’s so easy just to expose your business layer over SOAP that you may well expose the whole lot. But if you expose more than you think your clients need, then the clients may start using the bits that you didn’t think they needed, and when you come to replace the service with another technology, you find you need to replicate the whole implementation. Especially if you didn’t produce any documentation in the first place to let them know what they were and weren’t supposed to use.

You can, of course, get around these problems when you’re using SOAP by using willpower to write documentation and think about what you want to expose. But if you lack willpower, it’s better to use a technology which forces you to do those things – and you get the side benefits of a streamlined, cacheable and re-usable service.

ThoughtWorks’ Brandon Byars has recommended hand-coding REST clients in his article here, also pointed out to me by InfoQ.


My recipe for WCF part 3 – Rolling your own integration with an IoC

It’s been a while since I last posted about WCF. In the meantime I’ve been doing a Coursera course on Computational Investment. I must say that I don’t feel much better qualified to start a hedge fund as a result – it all seems a little too simple, and I’m sure that transaction charges would push any trading algorithm I could devise into negative returns. I have, however, adopted the sophisticated investment strategy of buying Royal Mail shares in the IPO. My husband and I decided to hedge against an undervaluation by purchasing approximately the stake we already owned (about £1,000 between us, based on a market capitalisation of £3.3bn and a UK population of 70 million). We got £750, as did all retail investors except those enthusiastic enough to have applied for over £10K’s worth, who were rewarded with no shares at all.

I’ve made some improvements to the algorithm: following a discussion with a mathematician, I’ve used a Discrete Fourier Transform method for multiplying polynomials. This has reduced the time needed to test all primes less than 100,000 from about a week to about an hour. It’s still quite slow, though, and I’ve really come to the conclusion that .NET isn’t suitable for doing calculations with large numbers. One of the problems is that BigInteger is a value type, which means that every time you assign it to a variable or pass it to a method a copy is made. At one point I made multiplication run 10 times more slowly by doing something like

BigInteger b = a[0];

i.e. assigning a BigInteger held in an array to a variable. It might be interesting to see whether it’s possible to wrap fast maths implemented in IronPython or something similar in a .NET interface, though I’m not sure I can bear the prospect of implementing this particular algorithm again.
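For what it’s worth, the Fourier Transform trick mentioned above can be sketched as follows. This is a toy version using double-precision Complex numbers from System.Numerics, not the code from my implementation – coefficients as large as the AKS algorithm produces would need a number-theoretic transform or very careful precision management:

```csharp
using System;
using System.Numerics;

static class PolyMult
{
    // Recursive radix-2 FFT; a.Length must be a power of two.
    static void Fft(Complex[] a, bool invert)
    {
        int n = a.Length;
        if (n == 1) return;
        var even = new Complex[n / 2];
        var odd = new Complex[n / 2];
        for (int i = 0; i < n / 2; i++) { even[i] = a[2 * i]; odd[i] = a[2 * i + 1]; }
        Fft(even, invert);
        Fft(odd, invert);
        double ang = 2 * Math.PI / n * (invert ? -1 : 1);
        for (int k = 0; k < n / 2; k++)
        {
            Complex w = Complex.FromPolarCoordinates(1, ang * k);
            a[k] = even[k] + w * odd[k];
            a[k + n / 2] = even[k] - w * odd[k];
            if (invert) { a[k] /= 2; a[k + n / 2] /= 2; }
        }
    }

    // Multiplies two polynomials (coefficient arrays, lowest degree first)
    // in O(n log n) instead of the schoolbook O(n^2).
    public static long[] Multiply(long[] p, long[] q)
    {
        int n = 1;
        while (n < p.Length + q.Length) n <<= 1;
        var fa = new Complex[n];
        var fb = new Complex[n];
        for (int i = 0; i < p.Length; i++) fa[i] = p[i];
        for (int i = 0; i < q.Length; i++) fb[i] = q[i];
        Fft(fa, false);
        Fft(fb, false);
        for (int i = 0; i < n; i++) fa[i] *= fb[i];  // pointwise multiply
        Fft(fa, true);                               // inverse transform
        var result = new long[p.Length + q.Length - 1];
        for (int i = 0; i < result.Length; i++) result[i] = (long)Math.Round(fa[i].Real);
        return result;
    }
}
```

For example, `PolyMult.Multiply(new long[] { 1, 1 }, new long[] { 1, 1 })` gives `{ 1, 2, 1 }`, i.e. (1 + x)² = 1 + 2x + x².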

So! Let’s get to the WCF programming point of this post. I’m showing how to get an IoC library to manage the creation of the proxy classes when using WCF. I used StructureMap because it doesn’t come with a pre-packaged library for doing this. There isn’t that much code to write on the client side: it’s only necessary to register the Channel Factories as singletons:

                For<ChannelFactory<IPrimeTester>>().Singleton()
                    .Use(new ChannelFactory<IPrimeTester>(
                        new BasicHttpBinding("defaultBinding"),
                        new EndpointAddress(GetServiceAddress("PrimeTester.svc"))));
and then to register the proxy interfaces themselves as being created by these ChannelFactories:

                For<IPrimeTester>().Use(() =>
                {
                    var channelFactory = ObjectFactory.GetInstance<ChannelFactory<IPrimeTester>>();
                    return channelFactory.CreateChannel();
                });

The full code of the class which is used to register the client side proxies is below.

using System.ServiceModel;
using CalculatorServiceInterfaces;
using StructureMap;
using StructureMap.Configuration.DSL;

namespace Calculator.StructureMapConfig
{
    public class WcfRegistry : Registry
    {
        public WcfRegistry()
        {
            RegisterChannelFactories();
            RegisterChannels();
        }

        private void RegisterChannels()
        {
            For<IHcfService>().Use(() =>
            {
                var channelFactory = ObjectFactory.GetInstance<ChannelFactory<IHcfService>>();
                return channelFactory.CreateChannel();
            });

            For<IPrimeTester>().Use(() =>
            {
                var channelFactory = ObjectFactory.GetInstance<ChannelFactory<IPrimeTester>>();
                return channelFactory.CreateChannel();
            });
        }

        private void RegisterChannelFactories()
        {
            // The .svc file names here are illustrative.
            For<ChannelFactory<IHcfService>>().Singleton()
                .Use(new ChannelFactory<IHcfService>(new BasicHttpBinding("defaultBinding"),
                    new EndpointAddress(GetServiceAddress("HcfService.svc"))));

            For<ChannelFactory<IPrimeTester>>().Singleton()
                .Use(new ChannelFactory<IPrimeTester>(new BasicHttpBinding("defaultBinding"),
                    new EndpointAddress(GetServiceAddress("PrimeTester.svc"))));
        }

        private string GetServiceAddress(string serviceName)
        {
            return ObjectFactory.GetInstance<AppSettings>().ServiceBaseUrl + serviceName;
        }
    }
}

On the server side there’s not really anything that I can add to this blog post, which tells you exactly what to do, so I won’t try. I have, however, taken the code in that post and put it into a separate project with an output to a NuGet package, so if you want to use it you won’t have to write it yourself.

If you want to see all the code, fully loaded with Fourier Transforms and StructureMap, take a look here.

My recipe for WCF part 2 – Using Castle Windsor to set up the connections

Download the code for this post here.

In my first post on WCF, I mentioned that I might implement a fast primality test around which I would build WCF services. Well, I tried, but in fact I’ve implemented a test which, whilst being fast from a theoretical computer scientist’s point of view, is really, really slow from anyone else’s. It took a week to test all the integers up to 100,000 on my newish, development-grade office desktop. It’s the Agrawal–Kayal–Saxena primality test, which, when it was published in this paper in 2002, was the first ever general deterministic primality test which was guaranteed to run in polynomial time. (Polynomial time, by the way, seems to mean polynomial in the logarithm of a number when it comes to primality testing or factoring. Go figure.) I was helped in my implementation of this by reading at least half of this friendly paper by Andrew Granville, and my implementation was of a version of the algorithm which incorporated the improvements I read about in this paper by Robert G. Salembier and Paul Southerington.

In theory, I think my implementation would test the primality of N where N is up to around 2^(2^20). The limit is imposed by the fact that I’ve used an array indexed by a 64-bit integer to hold the coefficients of polynomials whose degree can be as large as (lg N)^3 (where lg is the base 2 logarithm of its argument). In practice, however, it took so long to test 2^31-1 (a Mersenne prime) that I gave up. I may set this test running on a computer which, unlike my laptop, doesn’t automatically shut itself down after a period of what it naively assumes to be inactivity.

But that’s enough amateur maths and computer science. Let’s get down to the programming.

The point is that Castle Windsor makes it extremely easy to use WCF, particularly on the client side. All you need to do to create strongly-typed proxy objects on your client side is

  1. Add the WcfFacility to your Windsor Container.
  2. Create a binding in your config file.
  3. Register all your services in the Windsor container, referencing the name of the binding in your config file and the URL of the service.

Here is the source code for setting up the IoC (here done in the Configure class of a Caliburn.Micro Bootstrapper class, because I’ve used Caliburn.Micro as a window driver):

protected override void Configure()
{
    _container = new WindsorContainer();

    // The WcfFacility does the work of creating the client proxies.
    _container.AddFacility<WcfFacility>();

    _container.Register(
        Component.For<IHcfService>()
            .AsWcfClient(new DefaultClientModel(WcfEndpoint
                .BoundTo(new BasicHttpBinding("defaultBinding"))
                .At("http://localhost/Calculator/HcfService.svc"))),   // address illustrative
        Component.For<IPrimeTester>()
            .AsWcfClient(new DefaultClientModel(WcfEndpoint
                .BoundTo(new BasicHttpBinding("defaultBinding"))
                .At("http://localhost/Calculator/PrimeTester.svc")))); // address illustrative

    // (The Caliburn.Micro registrations also go here.)
}
And here is the source code of the config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="defaultBinding"/>
      </basicHttpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
 Not a ChannelFactory in sight. Lovely.

On the server side, you need to edit the markup in your .svc files to use the Castle Windsor factory:

<%@ ServiceHost Language="C#" Debug="true" Service="PrimeTester" Factory="Castle.Facilities.WcfIntegration.DefaultServiceHostFactory" %>

And finally you need to register your services, and all their dependencies, in the container. Here’s the global.asax class:

    public class Global : System.Web.HttpApplication
    {
        private WindsorContainer _container;

        protected void Application_Start(object sender, EventArgs e)
        {
            _container = new WindsorContainer();
            // Register WCF services.
            // Register all other classes.
        }
    }

So the take-home message is that I would strongly recommend you use Castle Windsor or another IoC which supports this sort of automatic creation of client proxies and makes a WCF class feel like any other dependency. But don’t worry if you’re using an IoC which doesn’t have this sort of feature, because it’s not that difficult to write it yourself, and I’ll demonstrate this in the next post on this subject.

The source code provides a very basic UI for inputting a number and determining whether it’s prime.


Note that the sample code doesn’t cope very well with failure. (A quick and easy example is to put ‘2’ in to test.) Exception handling across a WCF process boundary sounds like good material for a later post.

My Recipe for WCF

Download the code for this post here.

Sometimes you want to build a distributed application, not because you’re adopting a Service-Oriented Architecture, but simply because for some reason part of your code needs to run on one machine and the rest of it needs to run on another. A classic example (now consigned to history by HTML5) is the old thick-client desktop application which would make service calls to a middle tier. In modern times, you might find yourself writing a native mobile application which needs to connect to an online service. Or, in a cloud scenario, you might want to avoid running your precious business-logic code on a machine that is directly accessible to the outside world, so you put your presentation layer on a machine with a public IP address and make it call a service on another machine which isn’t available publicly.

In this post, I want to promote the idea that just because an application is distributed doesn’t mean you have to write separate client and server code. With WCF, you can just define a single interface and get the framework to do all the plumbing for you. In the language of Domain-driven Design, we would say that

Process boundary != Context boundary.

I find this point is worth making explicit because on several occasions I’ve seen implementations that are structured like this:


(At runtime there is some WCF magic which creates a proxy object conforming to the IClientContract interface which calls the ServiceImplementation across process boundaries.)

As an emotionally fragile architect, it’s not going too far to say that I find this architecture upsetting. What I dislike about it is all the duplicated code. I don’t mean the interfaces being separated from the implementations: I thoroughly approve of this practice for the purpose of testability and the wider principle of dependency inversion. I mean that there are far too many interfaces with no material differences.

Take IServiceContract and IClientContract in the first instance. If you’ve used ‘Add Web Reference’ then the client-side interface will have been generated for you; if you’re using ChannelFactory then you’ll have written it yourself. In either case, the interfaces will be interchangeable. So the following refactoring would cut down on the duplication of code:


If you want to do this you have to use ChannelFactory rather than ‘Add Web Reference’, because adding web references auto-generates the client-side interfaces.

This is better, because you’ve removed one interface. But I would recommend going further and actually making your Business Logic layer into your Service Endpoint layer.

Now you have no more classes than you would have written had everything been in the one process. A downside is that your Business Logic layer now depends indirectly on the WCF libraries, and the interfaces have [ServiceContract] attributes. If you can’t live with this then you’ll have to go back to the intermediate refactoring.
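As a minimal sketch of what the final refactoring looks like (method and type names are illustrative), the business-logic interface simply carries the WCF attributes, and the business-logic class is the service implementation:

```csharp
using System.ServiceModel;

// The business-logic interface doubles as the service contract.
[ServiceContract]
public interface IHcfService
{
    [OperationContract]
    int HighestCommonFactor(int a, int b);
}

// The business-logic class doubles as the service implementation;
// the .svc file points straight at it (or at a trivial subclass).
public class HcfService : IHcfService
{
    public int HighestCommonFactor(int a, int b)
    {
        // Euclid's algorithm.
        while (b != 0) { var t = b; b = a % b; a = t; }
        return a;
    }
}
```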

So I’ve told you how to deconstruct an over-engineered implementation. Here is my recipe for writing this pattern from scratch:


1 business problem

1 requirement for inter-machine communication

A small handful of .NET hosting environments (pick from a variety of development, system test, UAT, pre-prod and production)


  1. Write your Business Logic Layer and presentation layer as if there were no WCF involved.
  2. Separate out the interfaces of your Business Logic Layer into their own assembly.
  3. Put [ServiceContract] and [OperationContract] attributes on your interfaces.
  4. Create .svc classes inheriting from your Business Logic Layer classes.
  5. Host your .svc files in a Web application or a Windows Service, or self-host in a console application – although I have to admit that my knowledge of the latter two options is purely theoretical.
  6. In your start-up code for your presentation layer, create a Factory class which will create clients of the interfaces you defined in step 2.
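Step 6 can be as small as this (a hypothetical helper – the binding and address are whatever your environment needs):

```csharp
using System.ServiceModel;

public static class ServiceClientFactory
{
    // Creates a strongly-typed proxy for any [ServiceContract] interface T.
    public static T CreateClient<T>(string address)
    {
        var factory = new ChannelFactory<T>(
            new BasicHttpBinding(),
            new EndpointAddress(address));
        return factory.CreateChannel();
    }
}
```

A call such as `ServiceClientFactory.CreateClient<IHcfService>("http://localhost/Calculator/HcfService.svc")` then hands the presentation layer an object it can treat like any other implementation of the interface.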

I’ve created a sample application which shows this architecture. It does something very, very slightly elevated from the trivial – it hosts and calls a service that uses Euclid’s algorithm to calculate the highest common factor of two integers. Maybe for the next iteration I’ll add something really advanced, such as a fast primality test.

I want to make it clear that you aren’t preventing your service from being a worthy participant in an SOA just because you’ve used the same code on your client side. After all, exactly the same URLs are being used and the same messages are going across the wire. And how do you make sure that this service is suitable for an SOA? Document the interface and put it under change control. That’s it. The difference between a loosely-coupled interface and a tightly-coupled one is governance, not implementation.

In later posts I will do something like the following: