Wednesday, September 23, 2009

The Adaptive Object-Model Architectural Style

The purpose of this architectural style/pattern is pretty straightforward, although I did have some problems understanding some of the details. The goal of an architect using this pattern is to allow the system to be extended with new "business rules" without having to change the source code to do so. The way one often models an application is to let concepts be abstract classes and then subclass these for each instantiation of that concept. The example given in the paper is that of a car, which can be subclassed as Prius, etc.
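For illustration, a minimal sketch of that traditional style, using the paper's car example (the attribute is my own invention):

    # Traditional modeling: one compiled subclass per domain concept.
    # Adding a new kind of car means writing and deploying new code.
    from abc import ABC, abstractmethod

    class Car(ABC):
        @abstractmethod
        def fuel_type(self) -> str: ...

    class Prius(Car):
        def fuel_type(self) -> str:
            return "hybrid"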

If I understood it correctly, the adaptive object-model pattern changes this so that instead of modeling the domain as classes, one creates a set of meta-concepts (classes) that the user can compose in different ways to model the business rules (favor composition). In the extreme case these concepts form a complete domain-specific language that could itself be Turing-complete. This way the application moves from being an implementation of the business rules to an interpreter of them. This flexibility comes at a cost: it makes the application harder to understand for developers not used to this way of thinking. On the plus side, however, some of the work of understanding and creating business rules can be offloaded to domain experts, who are often not programmers.
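A minimal sketch of how I picture the meta-level, borrowing the TypeObject and Property names from the pattern literature (the details are my own simplification):

    # Adaptive Object-Model sketch: kinds of entities and their
    # attributes become runtime data instead of compiled classes.

    class TypeObject:
        """Describes a kind of entity, e.g. 'Car', at runtime."""
        def __init__(self, name, property_names):
            self.name = name
            self.property_names = property_names

    class Entity:
        """An instance whose legal properties come from its TypeObject."""
        def __init__(self, type_object):
            self.type_object = type_object
            self.properties = {}

        def set(self, name, value):
            if name not in self.type_object.property_names:
                raise KeyError(f"{self.type_object.name} has no property {name!r}")
            self.properties[name] = value

    # A new "subclass" is now just data -- no recompilation needed.
    car = TypeObject("Car", ["model", "fuel_type"])
    prius = Entity(car)
    prius.set("model", "Prius")
    prius.set("fuel_type", "hybrid")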

In the spirit of sharing: I once developed an application that had some of these characteristics, although it was far simpler than the full architecture presented here. The application was a performance analysis tool that presented various groups of HW/SW counters, and counters derived from them, that had been sampled from a graphics processor and its SW infrastructure. When I started on this project the decision to rewrite the original tool had just been made, so we could start from scratch. The original tool would read a fixed set of counters from a binary file created by our graphics drivers. It would then, based on "business knowledge" of the different counters, create derived counters (e.g. <#something per frame> -> <#something per second>, or <#counter1 + #counter2 / #counter3>). This was not a good design, as it required the application to be changed if a base counter was added, if we wanted a new derived counter, or if counters were replaced, as they would be when new processors came along.

In the new version the application became an interpreter that read an XML file describing an arbitrary number of counters in arbitrary groupings. These counters were tagged with information describing how they should be processed and formatted by the application. This new design was agnostic of what it displayed, which proved very useful, as the HW and driver people could add new counters without involving my team. Furthermore, when maintenance was later taken over by another site on another continent, this flexibility became even more valuable: the team maintaining the application and the teams with counters to show could work independently, with the XML file format as the interface. A sketch of what such a counter file and its interpreter might look like follows below.
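The real file format is long gone, so this is only a sketch of the idea; the XML element names, the formula syntax, and the use of eval() are all invented for illustration:

    # Sketch of an interpreter for a counter description file.
    # The schema shown here is made up; the real format carried more
    # metadata (grouping, formatting hints, etc.).
    import xml.etree.ElementTree as ET

    COUNTERS_XML = """
    <counters>
      <counter name="triangles" unit="per-frame"/>
      <counter name="frame_time_ms" unit="per-frame"/>
      <derived name="triangles_per_second"
               formula="triangles / (frame_time_ms / 1000.0)"/>
    </counters>
    """

    def evaluate(samples):
        """samples: dict mapping base counter name -> sampled value."""
        root = ET.fromstring(COUNTERS_XML)
        values = {c.get("name"): samples[c.get("name")]
                  for c in root.findall("counter")}
        for d in root.findall("derived"):
            # The formula is interpreted at runtime, so a new derived
            # counter needs only an XML edit, not an application change.
            # eval() is fine for a toy; a real tool would want a safe
            # expression parser.
            values[d.get("name")] = eval(d.get("formula"), {}, dict(values))
        return values

    print(evaluate({"triangles": 120000, "frame_time_ms": 16.0}))

The key design point is that the application never knows which counters exist; it only knows how to read descriptions of counters and apply the rules attached to them.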

Finally, as a digression and to try to better understand the pattern myself, I found it interesting to compare it to the evolution of graphics processors. These used to be fixed-function, which required the HW to be changed to support new lighting models (bump mapping, shadows) and other effects. This proved too constraining, and the fixed functionality was gradually loosened until it was finally replaced with programmable stages that allow the user to define their own "rules" (shader programs) for how lighting, etc. should be calculated. The graphics processor then "interprets" these rules, much the same way an adaptive object-model application interprets business rules stored in a database. And as we all know, in the case of graphics processors the rules eventually became so flexible that graphics processors became useful in areas that have nothing to do with graphics.
