
So you enjoy writing boilerplate?

30 Jun 2016 · CPOL · 16 min read
Really? Then why is so much effort wasted writing boilerplate which can be more accurately and efficiently automated?


Introduction

In a web application project that I was recently involved with, I estimated that around 12% of project files and 18% of unit tests were solely occupied with maintaining boilerplate related to the application service layer. This included client and server interfaces and mapping to and from DTOs and data contracts.

What a Sisyphean task! And so I decided to look at how I could improve a typical implementation of a service layer, as an investigation into how to reduce or eliminate boilerplate. Service layers are among the most important layer interfaces; their successful implementation is crucial to performant and maintainable systems, and they are not served well by typical boilerplate implementations.

As well as being such a waste of developers' time, there are many other disadvantages to common boilerplate-based architecture:

  • Typing in boilerplate interfaces manually is prone to human error
  • Manual boilerplate is tedious and can lead to lazy practice, e.g. do you really always implement cross-layer exception handling consistently?
  • Because of the extra effort to work across the service layer, business logic can begin to creep into the client layer, or even the boilerplate itself, and decohere the whole architecture
  • Manual boilerplate creates a thicket of code which is complicated to refactor, precisely at the point where refactoring is critical to the performance and maintainability of the system
  • Manual boilerplate is difficult to transition between different cross-layer transports (compare with this project, where you can move between local, WCF, WebAPI and RabbitMQ implementations - or a mix - simply by updating one line of a .config file)
  • Manual boilerplate makes customising and optimising the entire service layer a big job, e.g. upgrading to a modern serializer
  • Manual boilerplate makes instrumenting the entire service layer a big job, e.g. standard analysis such as counting the frequency or data payloads of service layer calls
  • All that extra boilerplate has to be source-controlled/compiled/unit-tested at every build cycle
  • Debugging in separate processes is difficult compared to configuring development code to call service layer code inline

Background

A quick backgrounder to Service Layers

Service Layers are designed to work in a separate process, usually on a separate dedicated server. The service layer host will normally support one or more Domain Services; these are not normal classes but collections of methods that perform similar functions or are connected with a particular business domain. The domain service classes are normally stateless since this is easier to manage and more scalable, although some transports may offer a session-level option. Domain service methods should be thread-safe for scalability because you do not want multiple clients queueing up to execute one-by-one.

Domain service methods should also shield clients from direct access to databases and/or entity manipulation. One way of doing this is to create a further tier by implementing a dedicated Entity Server; an example that I have built is A Customisable ORM for Multi-tier Applications.

Because domain service methods are so frequently called, they are prime candidates for regular refactoring. One particular point to watch is the management of large amounts of entity data, especially where eager loading is in force (a common consequence of moving from a local-process lazy-loading implementation to a remote service layer). I have seen commercial web apps where some domain service methods transfer as much as 2 MB of data for a single method call, so you don't want to get this wrong.

There are a variety of ways of hosting domain services, including a host for each domain service, a single host for all domain services, or some combination of multiple redundant hosts and it is useful to have flexibility in the service layer configuration. For that reason, it is preferable to call between different domain services using the service layer rather than direct method calls, even if they are currently hosted in a single process - an aspiration which is helped if the service layer can detect inter-domain-service calls in the same host and transparently implement inline method calls.

Using the code

Compile the solution with servicelayer.config default client set to "local" and run the unit tests. Then set the default client to "wcf" and rebuild all (to distribute the config file into the bin directories of all the various components); launch the WCF servers and the unit tests should now run using WCF. Similarly with RabbitMQ, although in this case you will first need to set up RabbitMQ. For WebAPI implementations, you may need to run the hosting services with administrator access.

For both clients and servers, running the programs with administrator permissions will probably be needed to make the instrumentation work; it may be appropriate to start Visual Studio in administrator mode if you plan to launch the various components from there. View instrumentation in Performance Monitor by adding the ServiceLayer counters.

To adapt this project to your own requirements, remove all projects except the ServiceLayer project (or compile it separately and copy the DLL and config files into your own projects). Add your own client and server projects (and unit tests). All client projects (including servers which themselves make service layer calls) need to include the T4 file (ServiceLayer.tt), and both client and server projects need to be in the same solution with references to ServiceLayer.dll. Don't forget to flag the service layer interface files with the ServiceLayerDefinition attribute and to flag each domain service implementation with a ServiceLayerImplementation attribute.
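For illustration, a minimal sketch of how the flagging might look. The two attribute names and the IDomainServiceA interface name come from this project; the method and implementation class shown are invented for the example:

[ServiceLayerDefinition]
public interface IDomainServiceA
{
    // hypothetical method, for illustration only
    int AddOrder(string customerId, decimal amount);
}

[ServiceLayerImplementation]
public class DomainServiceA : IDomainServiceA
{
    public int AddOrder(string customerId, decimal amount)
    {
        // ...business logic...
        return 0;
    }
}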

That's it.

How it works

This boilerplate-killer works by using dynamic compilation and code templates to find service layer calls and implementations marked with custom attributes, and then transparently sets up calls to proxy methods.

The ServiceLayer project creates a DLL with interfaces which are used by both clients and servers. It expects each domain service to expose its own interface to domain service clients and servers, but the domain service interfaces do not need to be known to the ServiceLayer DLL.

Each domain service interface is flagged with a ServiceLayerDefinition attribute and each domain service implementation is flagged with a ServiceLayerImplementation attribute; these attributes can be scanned for automatically by reflection, and the accessors can then be cached once via a static constructor.

Each service layer method is serialized/deserialized across process boundaries into a string containing the method name and an array of parameters - the serializer is injectable.

Dynamic Compilation

Server-side, all assemblies present in the bin folder are scanned by reflection at startup, and any classes with the ServiceLayerImplementation attribute are added to a list of dynamically-compiled proxy methods. The small hit of dynamic compilation is not significant at server start-up, but it enables very performant and flexible implementations. In particular, where multiple domain services are hosted by a single server, they can call each other directly without going through a ServiceLayer proxy.
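As a rough sketch of the start-up scan (the ServiceScanner class below is hypothetical; the real project goes on to generate dynamically-compiled proxies from the types it finds):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

static class ServiceScanner
{
    // Sketch only: find every class in the bin folder flagged with the
    // ServiceLayerImplementation attribute.
    public static Type[] FindImplementations()
    {
        string binFolder = AppDomain.CurrentDomain.BaseDirectory;
        return Directory.GetFiles(binFolder, "*.dll")
                        .Select(Assembly.LoadFrom)
                        .SelectMany(assembly => assembly.GetTypes())
                        .Where(type => type.GetCustomAttributes(typeof(ServiceLayerImplementationAttribute), false).Any())
                        .ToArray();
    }
}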

 

Client-side, a very simple and straightforward implementation would be to require all server calls to explicitly match a servicelayer standard, such as

servicelayer.call("myinterface","mymethod",new dynamic[] {paramA, paramB, paramC});

rather than the more normal

domainService.mymethod(paramA, paramB, paramC);

This is not really acceptable, though, if you want clarity and compile-time support, and therefore some technology is required to convert the latter to the former by automated inline compilation. For a client, "reflection" by scanning the source code of the client project means you do not have to pre-compile code into assemblies and then interrogate the binaries through reflection, which keeps build scripts simpler. There are also good side-effects: there is no overhead on client startup, and the client does not need to know anything about server assemblies or the location of their binaries. The T4 template engine is already bundled into Visual Studio and produces C# source code which can be compiled normally; the output can also be read visually to check correctness during development.

On the debit side, T4 can be temperamental if the syntax is wrong, and the debugger does not cope well with displaying the COM object variables which T4 works on. Be aware that T4 can only find source files relative to your current solution. If you wish to maintain multiple solutions then you need at least two compile steps: one to copy or compile the interface file(s), and a second to compile the client code which uses the result of the first step. The T4 code could initiate internal reflection through externally imported DLLs, although that is not needed here because this is a single solution.

Configuration requirements

For clients (including "clients" within servers that call other servers through the service layer), servicelayer.config specifies where to find each domain service and how to talk to it.

For servers, it is simple and performant to support multiple domain services in a single server, and those domain services will call each other directly rather than going through a service layer proxy.

Fire-and-Forget method calls

Other than possible performance enhancement, the significance of Fire-and-Forget methods is really that any exceptions thrown in the service layer will not be flagged to the client process, and that the order of execution of two asynchronous calls is not guaranteed.

Service Layers can optionally support Fire-and-Forget asynchronous implementations of individual methods, and any method returning void is a possible candidate. An example where Fire-and-Forget implementations are very useful is logging code, which can be a significant performance issue.
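For example, a logging domain service is a natural candidate (the interface below is invented for illustration; the article does not prescribe how a method is marked as Fire-and-Forget):

[ServiceLayerDefinition]
public interface ILoggingService
{
    // A void return makes this a Fire-and-Forget candidate: the client does not
    // wait for a result, execution order is not guaranteed, and any server-side
    // exception is never surfaced to the caller.
    void LogEvent(string source, string message);
}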

RabbitMQ, of course, comes with persistent queues "out of the box". WCF and WebAPI would need to add extra code to persist servicelayer fire-and-forget calls - if you need to be certain that these calls will eventually run.

Solution layout

I have included three domain services in my sample implementation; the purpose is to illustrate one server process with two domain services and a second server process running a third domain service which can be called both by the sample client and by the other two domain services.

A real-world example of the second server might be an email domain service, which might need to run in a process with special permissions, or on a specific server, or might simply be a generic service used by a large number of different software applications in different contexts.

The solution contains the following projects:

  • SampleClient - a sample client, with illustrative calls to the sample domain services
  • Sample Domain Services folder
    • SampleServiceDomainServicesAandB - a class library with two sample domain services (IDomainServiceA and IDomainServiceB)
    • SampleServiceDomainServiceC - a class library with a single sample domain service (IDomainServiceC), which may be called from both client and other server processes.
  • Sample Servers folder
    • ServerRabbit1 - RabbitMQ self-hosting server for SampleServiceDomainServicesAandB
    • ServerRabbit2 - RabbitMQ self-hosting server for SampleServiceDomainServiceC
    • ServerWcf1 - the WCF self-hosting server for SampleServiceDomainServicesAandB
    • ServerWcf2 - the WCF self-hosting server for SampleServiceDomainServiceC
    • ServerWebApi1 - the WebApi self-hosting server for SampleServiceDomainServicesAandB
    • ServerWebApi2 - the WebApi self-hosting server for SampleServiceDomainServiceC
  • ServiceLayer - the boilerplate-less code library
  • UnitTests - tests to check the cross-layer interfaces by calling the sample client

The self-hosting programs are implemented as console programs, although in the real world they should probably be implemented as Windows Services (this is why I have arranged the console programs to use static OnStart and OnStop methods, rather than a disposable class with constructor).

Sync and Async implementations

Implementing async support is not entirely straightforward. Firstly, client "asyncness" is not necessarily correlated with the "asyncness" of server methods - each is entirely independent. It is assumed that server methods will follow the convention of appending "Async" to the method name, so, to distinguish cases where the servicelayer method is called asynchronously from the client, those methods follow the convention of appending "_ClientAsync" to the method name. This gives the somewhat awkward convention that could name a method "DoSomethingAsync_ClientAsync".
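To illustrate the naming conventions, here is an invented interface; only the suffix rules come from the article, and the Task return types for the client-async variants are my assumption:

using System.Threading.Tasks;

[ServiceLayerDefinition]
public interface IDomainServiceExample
{
    // synchronous server method, called synchronously by the client
    int GetOrderCount(string customerId);

    // asynchronous server method (server-side "Async" suffix)
    Task<int> GetOrderCountAsync(string customerId);

    // synchronous server method, exposed for asynchronous calling from the client
    Task<int> GetOrderCount_ClientAsync(string customerId);

    // the awkward combination: async server method called asynchronously by the client
    Task<int> GetOrderCountAsync_ClientAsync(string customerId);
}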

Secondly, should inherently synchronous methods also be exposed with async implementations, and vice versa? Ideally not, but as a service library it is perhaps necessary to offer wider functionality and hope that it is not abused.

Serialization and Deserialization

Newtonsoft and FastJSON implementations are dynamically selectable. I found that Newtonsoft is generally reliable, but that FastJSON could not manage to correctly serialise Exceptions or the contents of lists.

Method calls are first massaged into an array of parameter values (using the very useful params keyword), which can then be serialized. In principle, or so it seems to me, serializers should be able to accept an array of the dynamic parameter values and serialize them so that their actual types are recoverable during deserialization. No such luck with Newtonsoft: each value in the array is flagged as an object type, so we have to add some extra manual intervention to ensure that we rehydrate to the correct types by explicitly defining their required type. To do this, we need to store the parameter and return types for each method call.

So, instead of this simple code:

// client
string ClientCall (params dynamic[] args)
{
    return SerializeArray(args);
}
// server
dynamic[] args = DeserializeArray (mystring);
var result = ServiceLayerDelegates[methodname].DynamicInvoke(args);
return Serializer.SerializeObject(result);

We have to do this manually, serializing each value independently and then concatenating the serialized string values with a delimiter; \0 will do nicely unless you expect to pass strings with \0 inside.

// client
string ClientCall (params dynamic[] args)
{
    // method name first, then each parameter serialized separately, delimited by \0
    StringBuilder sb = new StringBuilder(methodname);
    foreach (var param in args)
        sb.AppendFormat("\0{0}", SerializeObject(param));
    return sb.ToString();
}
// server
string[] json = jsonStrings.Split('\0');    // json[0] is the method name
var info = ServiceLayerCallTable[methodname];
var args = new List<dynamic>();
for (int i = 0; i < info.ArgTypes.Length - 1; i++)
    args.Add(Serializer.DeserializeObject(json[i + 1], info.ArgTypes[i]));    // rehydrate each parameter to its cached type
return Serializer.SerializeObject(info.ServiceCallDelegate.DynamicInvoke(args.ToArray()));

If you know a serializer which would support the simple approach, please let me know.

The ServiceLayer Project

The ServiceLayer project (in the ServiceLayer solution) provides client and server interfaces and implementations. There are four sample server implementations: an inline implementation in the same process, WCF (using NetTcp, as recommended), RabbitMQ and WebAPI. There are also plenty of other implementation choices which should be straightforward to code yourself, such as MSMQ or Azure Queues.

It is assumed that DLLs implementing the service layer domain services exist in all client and server processes; this allows the target to be switched from local to remote servers simply by updating the configuration file and so redirecting the client to the desired target. The configuration file is read at system startup; theoretically you could switch dynamically, but I can't see any real value in this. You could also potentially configure clients so that some domain services are implemented locally, some perhaps by WCF and some perhaps by RabbitMQ - again, I'm not sure why one might want to do this, but the option is there.

Of course, you could choose not to ship domain service DLLs with clients, and this should work fine provided you don't configure the clients for local options. But a few extra DLLs are not normally a big deal in exchange for flexibility and even, possibly, resilience.

 

Using this code doesn't mean that you shouldn't think about the requirements of your own layer implementation: for example, most WCF service layer implementations would use NetTcp and would cache the Channel Factory. Note also that both WCF and RabbitMQ are implemented here with a single shared endpoint or queue (respectively) rather than one for each hosted domain service. And are you expecting synchronous or asynchronous execution of the service layer methods - and are they executed serially or in parallel?
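For reference, caching a WCF channel factory typically looks something like the sketch below; this is generic WCF usage, not this project's exact code, and the endpoint address is illustrative:

using System.ServiceModel;

static class CachedChannel<TService>
{
    // Creating a ChannelFactory is expensive; creating a channel from it is cheap,
    // so the factory is built once and reused for every service layer call.
    private static readonly ChannelFactory<TService> Factory =
        new ChannelFactory<TService>(new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:9000/ServiceLayer"));

    public static TService CreateChannel()
    {
        return Factory.CreateChannel();
    }
}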

 

Server-side exceptions are propagated back to the client by returning an Exception object, which is then thrown by the client (not for one-way calls, of course), so if you should ever write a service layer call that returns an Exception as its return value then you are out of luck!
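In other words, the client-side proxy does something along these lines (a sketch; the helper name is mine, not the project's actual code):

using System;

static class ResultUnwrapper
{
    // A result that deserializes to an Exception is re-thrown locally rather than
    // returned - which is why a service method declared to return an Exception
    // cannot be supported.
    public static object Unwrap(object deserializedResult)
    {
        var serverException = deserializedResult as Exception;
        if (serverException != null)
            throw serverException;
        return deserializedResult;
    }
}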

Server code

The BaseServiceLayerServer contains a static constructor to expose all methods in classes flagged with the ServiceLayerImplementationAttribute for servicelayer execution. Flagging the class with the attribute is just for efficiency purposes, and it avoids forgetting to set method attributes as you add new items to the interface, but you could choose other strategies, such as exposing all public methods rather than all flagged public methods - a trade-off between ease of use (the former: no chance of forgetting to flag each method) and flexibility (more control over exactly which methods are exposed). Note that there is no particular downside to caching too much, such as domain services implemented by other servers, since servers are passive and accept only what their clients request from them.

The static constructor caches each implemented method name and the expected type of each of its parameters.
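A rough sketch of what such a cache entry might contain, mirroring the ServiceLayerCallTable used in the serialization snippet earlier (the exact fields, and the inclusion of the return type at the end of ArgTypes, are my guesses rather than the project's code):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

class ServiceCallInfo
{
    public Delegate ServiceCallDelegate;   // built by the real project's dynamic compilation
    public Type[] ArgTypes;                // parameter types (plus return type as the last entry)
}

static class MethodCache
{
    public static readonly Dictionary<string, ServiceCallInfo> ServiceLayerCallTable =
        new Dictionary<string, ServiceCallInfo>();

    // Sketch: cache the name, parameter types and return type of every public
    // method declared on a domain service implementation.
    public static void CacheMethods(Type implementationType)
    {
        foreach (MethodInfo method in implementationType.GetMethods(
                     BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            ServiceLayerCallTable[method.Name] = new ServiceCallInfo
            {
                ArgTypes = method.GetParameters()
                                 .Select(p => p.ParameterType)
                                 .Concat(new[] { method.ReturnType })
                                 .ToArray(),
                ServiceCallDelegate = null   // the real project plugs in a compiled delegate here
            };
        }
    }
}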

Servers also have control of their own service layer clients; they may not use any service layer calls at all, or they may have some service layer calls which are local and some which are hosted elsewhere.

ServiceLayer Configuration

Servicelayer.config is a file which, by virtue of being in the ServiceLayer project and being set to "Copy Always", will always get copied into the bin directory of all projects referencing the code. It's a bit of an anomaly because it is describing the configuration of constituent domain services rather than of the servicelayer project itself. Feel free to remove it from the ServiceLayer project and copy it instead into all the referencing projects - although you will then need to maintain each individual file separately.

 

The configuration file specifies the default transport layer and serializer for service layer calls, and layer-specific configuration such as WCF urls and RabbitMQ queue names.

It should be pretty easy to tweak the configuration file so that different transports are used to connect to specified domain services, should you need to do this (most probably don't).

ServiceLayer-supported Injectable Transports and Serializers

I have chosen to implement "local", WCF, RabbitMQ and WebApi transports, and Newtonsoft and FastJSON serialiser options. I chose not to implement, for example, Azure messaging, because that is harder for a sample project to demonstrate, but it should be easily implementable.

Each injectable option has its own strengths and may implement certain features natively:

 

Local "Transport"

Incredibly useful for developers, as debugging is very easy. In the local case there is no need to serialise or deserialise, so the stack is simpler.

WCF

WCF has a lot of configuration properties and also supports some servicelayer options out of the box, including FireAndForget ("one way") and async, and you can add a whole lot more (such as throttling).

RabbitMQ

RabbitMQ offers unmatched control of queues, temporary or persistent, and gives you simple control of multi-threading of queue items. RabbitMQ can also be configured to persist completed servicelayer calls to debug queues, for later analysis.
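For example, with the standard RabbitMQ .NET client, making the servicelayer queue durable and its messages persistent takes only a few lines (a generic sketch, not this project's code; the host and queue name are illustrative):

using System.Text;
using RabbitMQ.Client;

static class PersistentPublisher
{
    // Sketch: declare a durable queue and publish a persistent message, so that
    // queued fire-and-forget servicelayer calls survive a broker restart.
    public static void Publish(string serializedCall)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (IConnection connection = factory.CreateConnection())
        using (IModel channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "servicelayer", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            IBasicProperties properties = channel.CreateBasicProperties();
            properties.Persistent = true;   // write the message body to disk

            channel.BasicPublish(exchange: "", routingKey: "servicelayer",
                                 basicProperties: properties,
                                 body: Encoding.UTF8.GetBytes(serializedCall));
        }
    }
}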

WebAPI

WebAPI appears to be the fastest remote servicelayer implementation.

The unit tests have been constructed to check correct management of sync and async method calls, especially exceptions, and have not been designed for performance testing. However you can see comparative results below:

Image 2 (comparative performance results)

And another thing...

Some O-O constructions provide the option of future flexibility without requiring a full implementation in advance. I am thinking of property implementations:

string myProperty {get; set; }
 // can be easily extended to..
 string myProperty { get {return myValue; } set { myValue = value; AndDoOtherStuff(); }}

Well, before you automatically reach for your poor overworked AutoMapper when copying data objects and DTOs between layers, why not investigate the use of C# aliases first?

using BookViewModel = Book;
  // rework later to (IF you need to)
  AutoMapper.Mapper.CreateMap<Book, BookViewModel>();
  var model = AutoMapper.Mapper.Map<BookViewModel>(book);

In many, many cases the mapped classes will be identical or have identical fields with different attribute decorations - no harm in having the attributes "in" both classes, and it may even clarify usage. So you could use an all-purpose class like this to transition from entity, to DTO, to data contract, to viewmodel, all without actually doing any work:

[DataContract]
public class Book
{
    [DisplayName("Book Title")]
    [Required(ErrorMessage = "Title is required")]
    [DataMember]
    public string Title {get; set;}

    [DataMember]
    public string Author {get; set;}
}

As long as your namespaces are distinct, this is very simple to use. If you have any namespace issues, check out this article.

History

Version 1

  • Local implementation
  • WCF implementation
  • RabbitMQ implementation

Version 2

  • new ASYNC capability
  • new WebAPI implementation
  • new Instrumentation classes
  • Improved T4 scripts to support project folders
  • Improved T4 scripts to only generate proxies if not present (fast calling between domain services hosted by the same server)
  • Dynamic code compilation by servers, calling delegates directly (fast) rather than using DynamicInvoke (slower)
  • Simplified server implementations with common code

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Architect, United Kingdom
Check me out at LinkedIn: http://uk.linkedin.com/pub/simon-gulliver/20/303/251
