Wednesday, March 14, 2012

About business logic

An interesting and genuinely non-trivial question in application development, especially important for business applications, is "How do we package and organize the business logic?". By business logic I mean the code that does business-related work, like paying an invoice or processing a purchase order.
I will review potential choices we have and explain the direction chosen for VITA applications.

VITA entities and business logic
If you looked at the VITA sample code, you have seen the implementation of some entities - simple data objects that carry values from database table rows. You might have asked: how do we turn these entities into business objects with custom methods? There does not seem to be a way to do it, simply from the way things are organized. Entities are defined in your code as interfaces; the real objects are created from classes that are dynamically IL-generated at runtime. You cannot add a method implementation to an interface, and you cannot define a method in an inherited class - the base class implementing the entity interface does not exist yet at design time.
The answer is - you can't; you cannot turn entities into "business objects". But this is not necessarily a show stopper - encapsulating logic with data might not be a good idea after all. I will explain why in the following sections.

Classic OOP approach - how things can go wrong
Classic OOP suggests packaging the functionality related to data into the classes that contain the data. This is called "encapsulation" in OOP land. Such packaging makes it possible to implement polymorphic behavior by sub-classing and overriding methods - which is a good thing, no doubt. This all makes sense as long as we stay with textbook examples about animals, mammals, cats and dogs - perfect for illustrating the principles.
This classic OOP approach works extremely well in some real-life scenarios - libraries of UI controls, for example. The base _Control_ class defines a number of properties and behaviors (methods) which are progressively overridden and extended in sub-classes, up to very sophisticated pieces like grids and tree views.
However, this approach does not always work well - complex line-of-business (LOB) applications are one example. Let us illustrate the problem with a fictitious anti-pattern story.
We start a new application from scratch. The initial "data" objects are derived from real-world objects; lists of properties are figured out and implemented - things like Invoice, PurchaseOrder, etc. We create a database schema and write some data access methods and SQL to read/write the objects.
The next step is building some rough UI for editing and viewing the objects. The UI elements need some code support for properly showing the data, so "business logic" appears. We start adding methods and "smart" properties to our classes that used to be simple data containers - but not anymore; they are now "business objects". Everything works well so far, all according to OOP principles - things are encapsulated and isolated, and we have a nice, clean API.
Next, it is time for more complex processing, some of which should be done in "background" batches. We start adding more methods for these processes, right into our business objects. When we start running the batches, we notice that in some cases the UI-related behavior gets activated - like the automatic loading of referenced objects we implemented for the UI. It worked great for the UI, but now it is annoyingly too automatic - a lot of extra queries are executed for data that is never used, and it slows the process down considerably. Quite possibly we can fix it with a few tweaks, and things work well again, both for the UI and the background process.
But then we start adding more processes (billing, invoicing, posting to the general ledger, payments, receipts, refunds) - and things start to get worse. The processes begin to step on each other. The business objects become bloated; we may even have a hard time finding a good name for a method - it is already taken by another process, and "Adjust" for billing is not the same as "Adjust" for receipting. You find that it is often not easy to choose the location of the code among the many business objects involved - the methods do not seem to belong to any particular class; they work with data across entity boundaries.
Another problem starts to surface - the shared business methods become more and more aware of the "context", the process in which they are invoked. They originated as generic shared methods, but now they have to contain more and more conditional logic dependent on external context. It is no longer "operations over encapsulated data" - it is dependency on context. No longer OOP.
But the real trouble unfolds when we start to create "variations" of the objects and business processes. Here is what I mean. The "invoice" in the real world is not a single standard thing; there are many different kinds of invoices. The trouble is not that there are many types - it is that the classification does not fit into the tree-like single-inheritance schema we have in C# and similar OOP languages. It varies by industry - manufacturing, retail, services, utilities - in each case the invoice is quite a different thing. It varies by company type, business type, procurement channel type, etc. An invoice a company receives from its vendor (AP invoice) is recorded differently from an invoice the company sends to a customer (AR invoice). The point is that the variation is multi-dimensional; each particular case is a combination of dimensions, not a specialization of one more generic parent. We may have a very hard time modeling this variability with a plain inheritance tree. The "inherit and override" pattern does not quite work anymore.
(Note: multiple inheritance or interface-based API would not make much difference, take my word for this)  
You might say that this invoice example is an extreme case, relevant only to a full-blown ERP system "built for all" - which is not what our project is. But it illustrates a very common pattern which you may encounter, perhaps to a lesser degree. Even in a more limited case the consequences can be quite troubling for your code - it may get really messy.
The point I'm trying to make is that classic OOP structures do not work well for business applications when used in "business entities". What, then, would be a viable alternative? The alternative is to separate data from code: the data is stored in dumb entities, and the business logic lives in separate "processing" classes that hold no data - except for process parameters.
Data objects and processor objects
The proposed alternative is to keep the "data" in dumb objects free of any logic (entities), and to put the processing code into separate classes that do all the processing. In fact, you may discover that these processing classes fit quite well into the inheritance paradigm of classic OOP - and you will find yourself building trees of processing sub-classes. Variation can be achieved either by sub-classing or by using pluggable sub-processes for specific operations.
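The post's framework is C#, but the split itself is language-agnostic. Here is a minimal Python sketch of the idea - the names (Invoice, PaymentProcessor) are illustrative only, not part of VITA:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    """A "dumb" entity: values only, no business behavior."""
    number: str
    total: float
    paid: float = 0.0

class PaymentProcessor:
    """A processor class: logic only, holds no entity data."""
    def apply_payment(self, invoice: Invoice, amount: float) -> float:
        # Record the payment on the entity and return the remaining balance.
        invoice.paid += amount
        return invoice.total - invoice.paid

inv = Invoice(number="INV-001", total=100.0)
balance = PaymentProcessor().apply_payment(inv, 40.0)  # balance == 60.0
```

Variations (billing vs. receipting, AP vs. AR) would then become sub-classes or pluggable strategies of the processor, leaving the entity untouched.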
What is interesting is that complex data-connected systems seem to drift towards this separation over time, even if they start with classic OOP-style business objects. I have seen this once, in a big ERP system - after facing more and more problems with business objects as the system gained features, the team started implementing new functionality as new processes, without modifying the old business objects holding the data or the old code. Another system I worked on had adopted this style before I joined the team, and I witnessed new features being developed almost exclusively in separate process classes; the business objects holding the data had very limited code supporting UI editing.
Another interesting observation - the MVC architectural pattern suggests this separation as well. The Model is mostly data, the editing functionality is confined to Controller classes, and the UI-support code is mostly in Views.
REST also seems to suggest this style of API design. With a limited set of HTTP verbs, you do not have much room to invent fancy methods on top of entities; instead, you model processes as resources, use the verbs to activate them, and use entities/resources as pure data containers serving as process parameters.
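As a hypothetical illustration of that mapping (the URLs and payload here are made up for the example, not a real VITA API), a "pay the invoice" process could itself be a resource, activated by a verb, with the entity as a pure data parameter:

```text
POST /invoices/123/payments          <- the verb activates the "payment" process
{ "amount": 40.0, "method": "check" }   <- the entity is a dumb data container

GET /invoices/123                    <- reading the entity returns data, not behavior
```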
I have been using this approach for some time already. In the [url:Irony|] project, the parsing automaton is constructed using numerous classes like ParserState, ParserAction, etc. There are two major kinds of activity happening with these objects:

  • Construction of the automaton at startup - an extensive, computationally heavy process implementing some tricky algorithms
  • Use of the automaton during parsing - the LALR parsing algorithm working with the graph of ParserState and related objects.
One choice would have been to put the logic for both activities into the automaton objects. That would mix up the code for two almost unrelated activities, bloat the objects (obscuring their "essence"), and generally break the rule of "separation of concerns". Instead, I chose to place the code into separate processing areas: there are a number of builder classes for building the state graph, and a Parser class that does the parsing. The classes constituting the "parsing automaton" (ParserData) are mostly code-free property containers. The same approach is used inside VITA - there are classes that hold data and others that do stuff.

Business code in VITA applications
Long story short - this is the pattern I am planning to follow in VITA. The entities are code-free collections of values, with some smart auto-loading functionality underneath - but that is hidden in the framework. The application is assembled from entity modules - containers for groups of entities implementing a functional block. All business code should be placed into modules and helper processing classes.
As one early example, take the error logging module in the _Vita.Modules_ assembly. It manages the read/write operations for the error log table internally, and it exports two facilities:

  • ErrorLogService - a "processor" service for logging errors
  • ErrorLogViewer - a service for displaying the error log in the browser

Both facilities are pure "processors" - they work with the _ErrorInfo_ entity, but do not "extend" or encapsulate it in classic OOP style. That said, the particulars of processing class design are not clear yet and are subject to further research. One foggy area for now is how to expose the functionality through REST.
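To make the "pure processor" shape concrete, here is a hedged Python sketch of a service working with an ErrorInfo-style entity without extending it. The class and member names are illustrative assumptions; VITA's actual C# module differs:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ErrorInfo:
    """Entity: just the logged values, no behavior."""
    message: str
    details: str
    created_on: datetime

class ErrorLogService:
    """Processor service: operates on entities it does not own or extend."""
    def __init__(self):
        self._log = []  # stand-in for the error log table

    def log_error(self, exc: Exception) -> ErrorInfo:
        # Build an entity from the exception and persist it.
        info = ErrorInfo(message=str(exc), details=repr(exc),
                         created_on=datetime.now(timezone.utc))
        self._log.append(info)
        return info

service = ErrorLogService()
entry = service.log_error(ValueError("sample failure"))
```

A viewer facility would be a second processor reading the same entities - neither one adds methods to ErrorInfo itself.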

About exceptions

- in general, and VITA exception handling in particular
This is an almost random collection of thoughts on exception handling. Hopefully, reading it will make clear what kinds of problems VITA is trying to address in this area. The other goal is to discuss some of the controversies and misconceptions surrounding exceptions.

Exception handling - a common anti-pattern
I have seen this happen more than once, and if you are a developer with a few projects on your resume, chances are you have seen it too.
The story unfolds like this. A new project is started. Requirements and specs are ready (or not), and development is at full speed. Parts of the future application start to evolve, something actually works, some shiny UI is up, and you can do something with the app. Then the issue of error handling comes up - errors happen, and they need to be handled and logged. If it's not yet done - no problem: a global catch block is added, and other catch blocks are sprinkled throughout the code. The catch blocks log the exception using some logging facility - home-baked, or Log4Net, or maybe you decide to use that huge pile of nonsense called the "Exception Handling Block" in Enterprise Library - whatever; everything seems under control.
And then problems start to surface. It appears that some exceptions are different from others, even when they are of the same type. For example, an _ArgumentException_ may result from a serious bug, but it also surfaces when the user enters an invalid value (or no value at all) in some field of a UI form. In the latter case, there is no need to log a mile-long exception report and spam the developers - just show an error message to the user.
The first, almost instinctive response is usually: OK, let's just make the catch block a little smarter, separate the exception apples from the oranges, and we'll be fine. For some time it works. The code "guesses" the exception kind by looking into the message and other "clues". But then the catch blocks become larger and more "sophisticated", and turn into a complete mess.
Suddenly everything is ruined by the "globalization" requirement - all messages should be translated into different languages. Worse, even standard .NET exceptions will be localized - the app will be running on non-English computers, where the .NET Framework speaks French or German.
It's not over yet. It appears that not all validation checks can be confined to dedicated methods, as we initially thought - a user error can surface from anywhere, even a batch process. Plus, you discover that a submitted UI form may have MULTIPLE user errors - so let's try to pack all the messages into the single exception.Message property using some delimiter like a line break... except some messages contain line breaks. Hmm, should we use XML? Oops... the UI says each message should be shown next to the control with the invalid value, so we need to pack property names in with the messages.
Damn, what do we do about submitted duplicate records?! These are not detected in C# code; they are rejected by the database server, which throws a SqlException - so we need to start parsing that message as well...
And now the final blow - REST. A web service. The UI layer should not talk to the application logic or database directly; it should communicate with a remote RESTful API. It was an initial requirement; we just delayed separating the layers until later, once we had some basic code working. Now we have to come up with ways to transport our "exceptions" and deal with them on the client - and separate them from the newly added web exceptions. And by the way, at the REST layer we should follow the REST rules for handling errors.
Yeah, that's gonna be a loooong night...
Sounds familiar? The details may vary, but the anti-pattern is clear. Exception handling (and user error handling) should be addressed from the very beginning and built into the core system. Otherwise, you get into trouble.

The solution
VITA introduces a conceptual framework for proper exception handling. It works for user errors (including multiple user errors) and database-produced errors, and it works transparently through REST while obeying the REST rules for error handling.
The central concept in this framework is _NonFatalException_ - a base class for all exceptions that do not indicate a bug or system malfunction - most likely a user error. Its subclass, ValidationException, holds a collection of errors, each carrying a message and additional information for pinning the message to a particular object or UI control. The exception is able to travel over the REST connection (as a BadRequest response) and resurrect on the client with all its information intact.
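The concept can be sketched in a few lines. This is a hedged Python approximation of the idea only - the class and member names mirror the description above but are not VITA's actual C# types:

```python
class NonFatalException(Exception):
    """Base for exceptions signalling a user error, not a system failure."""

class ClientError:
    """One user error, pinned to the object/field (UI control) it belongs to."""
    def __init__(self, message, field=None):
        self.message = message
        self.field = field

class ValidationException(NonFatalException):
    """Carries MULTIPLE user errors in one exception."""
    def __init__(self, errors):
        super().__init__("; ".join(e.message for e in errors))
        self.errors = errors

# Typical use: collect ALL problems first, then throw once.
def validate_order(qty, email):
    errors = []
    if qty <= 0:
        errors.append(ClientError("Quantity must be positive", field="qty"))
    if "@" not in email:
        errors.append(ClientError("Invalid email", field="email"))
    if errors:
        raise ValidationException(errors)
```

A global catch block can then branch on the exception type rather than on message text: NonFatalException means "show to the user", anything else means "log and alert the developers". Serializing the errors list as a BadRequest body is what lets the exception cross the REST boundary.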

About WarningException
What has always surprised me is that the .NET base class libraries (BCL) never introduced a special exception class for non-fatal errors - at least I was not aware of any such class. It seems so natural and so common to all applications, but it never surfaced in the base framework. There was an ApplicationException - a base class for all non-BCL exceptions - which proved to be useless and was deprecated. And none of the existing frameworks (logging, exception handling, ORM) appear to address the problem properly.
Recently, my friend and active Irony contributor AlexeyY pointed out to me that there is in fact a WarningException class in the BCL with a purpose somewhat similar to VITA's NonFatalException. I was quite surprised that something like this had escaped me, but a Google search revealed why. Other than the MSDN documentation, it is virtually non-existent: no articles, no third-party samples. Lots of Java links, though - to the namesake Java class. And if a thing is not on Google, it does not exist. Apparently, Microsoft did not promote its use at all.
From the description in MSDN, it looks like the purpose of this exception is to show an error message to the user (instead of the exception dialog). In that way it is somewhat similar to VITA's ValidationException - similar, but not quite the same. Lacking more information, it seems the exception is meant to carry a single(!) message to the UI to show to the user - quite limited indeed.
I contemplated for a while whether VITA should use this exception type in place of NonFatalException, and decided against it.

  • One reason is its obscurity bordering on non-existence - it is dangerous to base a fundamental feature on a class that is not well known.
  • Secondly, there is a chance that existing or future MS or third-party code treats this exception in some special way not expected by VITA's code, which might break the application.
  • Finally, the name - it seems self-contradictory, an oxymoron. A warning is a question to ask before proceeding. An exception, on the other hand, is about stopping and canceling the current work. If we throw an exception, we are canceling the operation; whatever we show after that is not a warning anyway.

That's why I decided to stay with a custom NonFatalException class. 

Controversy: using exceptions for user errors 
There is a big controversy - at least for some developers - about the "proper" use of exceptions, and whether it is OK to use them for things like user errors. The oft-repeated mantra is "use exceptions only for unexpected cases, and a user error is an expected thing". If you are among those troubled by this dilemma - to throw or not to throw - here is my digest of the issue. I hope it will clear the air and let you accept the approach in the VITA framework: it's OK to throw on a user error.

  • First of all - what the heck does "unexpected" even mean?! Let's look at the good old SqlConnection object in the .NET data libraries. If you open a connection to the database, you can reasonably expect that sometimes the database will not be available. Right? It is a kind of "expected" case; at least you, the developer, can foresee it happening. Why then does connection.Open() throw an exception when it fails to connect? Until I get a clear answer, or Microsoft changes the implementation of the SqlConnection class, I assume this objection does not hold any truth at all.
  • The other often-cited objection is: "Exceptions are expensive; use them only when you have no choice. If you use them for user errors, your code will be really SLOOOOOW." Yes, if you use exceptions to return values from functions, you are doing nonsense. That crazy case aside, let's say you are in the server code and find an error in the data submitted by the user. At this moment, all of the previous work of submitting the request and handling it is already a waste! Throwing or not throwing does not change the amount of wasted effort much.
  • Finally, if you are going to handle user errors using return codes instead of exceptions, at least in some parts of your code, then you will have to switch ALL of your code to that error-handling style. Do not think that user errors pop up only in isolated methods that do validation - this is not the case in the real world. But using error codes - oh, not that again. Welcome back to the 70's and all the misery of languages without structured exception handling.

I hope this clears up the issue. VITA uses exceptions for signalling interruption events that are caused not by system failure but by user errors. This allows you to write well-organized code with truly structured exception handling.