In the CS527 class discussion today the topic of functional languages for parallelism came up. The question was posed whether functional languages are inherently easier to parallelize than imperative languages, thanks to constructs such as map/fold and functions without side effects, and, if this is true, whether we should therefore switch to these languages.
I made the argument that we should not be quick to throw away all the investment we have made in imperative languages like C and Java, as well as all the knowledge we have gathered about using these languages over the years. Professor Johnson pointed out that the fact that functional languages have long been proclaimed as the right way to program without ever really breaking through to the masses does not mean that they won't in the future. He gave an example from a talk he recently attended where the speaker showed various devices from 1984 (the year, not the book =) that we today typically think of as recent innovations. Examples included touch-screen watches and iPhone-like devices, the point I believe being that inventions typically exist for a long time before their time finally comes and they are used in innovative ways.
I found the discussion as well as JLSjr and Professor Johnson's points to be really interesting, so I thought about it quite a bit on my way home. It seems to me that what is currently happening is that our imperative languages are soaking up features from functional languages that make parallelism easier instead of being outright replaced by them. Examples include the broad usage of map-reduce frameworks, a feature that has long been supported natively in functional languages, and the planned introduction of lambda expressions, or anonymous functions, in C++, primarily to make it easier to define kernels to execute in parallel.
A third example off the top of my head is the introduction of closure blocks as an extension to C and C-derivatives (C++ and Objective-C) that came with the latest release of Mac OS X, Snow Leopard. The main rationale for these closures, a long-time functional programming mainstay, is to make it easy to specify small blocks of code that can form a task to be executed in parallel by their new Grand Central. Grand Central is the core of the new parallelism support in OS X that is integrated with their kernel and that basically uses a thread pool to execute user-specified tasks taken from FIFO queues. Without closures, creating such tasks would require a fair bit of plumbing (Java's new ParallelArray library defines more than one hundred classes that one must subclass to implement different types of task kernels). With closures, a lot of very common pieces of code can be parallelized by just adding a couple of lines of code and a couple of blocks. If you are interested in seeing how this works, an interesting overview of Grand Central and the usage of closure blocks in C can be found here.
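To make the "plumbing" point concrete, here is a rough Java sketch of my own (the class name and numbers are invented, and it is not taken from ParallelArray or the Grand Central overview) showing what farming a simple loop out to a thread pool looks like without closures - every task kernel has to be wrapped in its own little class:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PlumbingExample {
    public static void main(String[] args) throws InterruptedException {
        final double[] data = new double[1000000];
        final int workers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int t = 0; t < workers; t++) {
            final int chunk = data.length / workers;
            final int start = t * chunk;
            // The anonymous inner class below is the "plumbing" a closure would replace.
            pool.submit(new Runnable() {
                public void run() {
                    for (int i = start; i < start + chunk; i++) {
                        data[i] = Math.sqrt(i);   // the actual work
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

With closures or lambdas, all of that boilerplate collapses to roughly the one line of actual work inside run().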
Perhaps functional programming is arriving, not through some grand sweeping march where everyone repents their sins and switches to the "only computer languages that are beautiful", but by gradually influencing and changing our current bag of tools (as functional languages have done in the past)? Does anyone have any thoughts on this? Am I reading too much into this, or are these constructs perhaps really a first step on the way to switching over completely?
Tuesday, September 29, 2009
Monday, September 28, 2009
Jikes RVM
Turtles all the way down.
Chapter 10 of our mainstay, Beautiful Architecture, describes another, to me somewhat surprising, use of Java. The Jikes RVM is a so-called metacircular virtual machine or runtime. It hosts Java bytecode, but what makes it metacircular is that it is also implemented in Java. My first thought was how this could be, because somewhere something must be running on the actual hardware. Surely it can't be turtles all the way down. I started to dream about fancy bootstrapping tricks, but the answer turned out to be straightforward, which I admit disappointed me a bit. Even though the Jikes RVM is written in Java, a language that is usually compiled to bytecode, in this case they compile it down to machine code.
The authors spend quite a bit of text-space arguing the virtues of metacircularity. I thought that most of these arguments were actually arguments for using Java at all, not for using Java to implement a JVM in particular. The case for metacircularity, from what I could distill, boiled down to two arguments. The first is that the runtime/VM and the code being run will use the same conventions (calling, etc.), which removes some need for conversion. The second is that using the VM's JIT compiler to compile itself (there was some bootstrapping in there after all!) creates a virtuous cycle where good things happen, and I can definitely see the benefits of this. However, writing a compiler is an art of tradeoffs (or so I'm told), and surely some of the tradeoffs must be different for regular JIT compilation and for a compiler being compiled offline? I wonder whether they have situations where they must choose between bloating the compiler, making it less suitable as a JIT, or leaving out some costly optimizations, thereby making it less suitable for offline compilation of itself and the JVM. However, the project seems to be fairly successful, so I guess they managed to navigate these tradeoffs satisfactorily.
If I were to point out any disadvantage with the concept of metacircularity itself, I would argue that every tool has a job and every job has a tool. Not all languages are equally good at implementing all software, so a language that is, for example, highly productive for creating enterprise applications is not necessarily the most effective one for creating a compiler. On the contrary, a different language with different design decisions will often be better suited for this job. Note that I am not considering here whether Java is well suited for creating a VM in general. That is another discussion (though, based on JPC, I would say the answer is probably no in most cases). In this paragraph I only considered whether the target language is inherently the best choice for implementing its own runtime.
Marcus Strautman posed an interesting question of whether a JVM rolling its own threading model is a good idea today. According to the chapter, Jikes RVM did this because of lacking OS multithreading support back in the previous century. Although OS support for multithreading is much better now, I would not necessarily throw away green threads completely, as there are other benefits to them. The number of cores per processor is expected to be the main beneficiary of Moore's law in the years to come (as a side note, Moore's law is falsely, or at least prematurely, proclaimed dead, as the law is about transistors per chip and not about speed). As we must expect core counts to increase in the future, we should at least try to create as many independent threads of execution as we can in software that needs to be scalable. However, native thread switching comes at a cost, which means that our software will be slower today! Green threads are usually cheaper, and in an environment with cheap threads the application programmer can afford to create more of them (programming time not considered). This means that an implementation with green threads on a dual-core processor could, for example, use two to three native threads in a thread-pool fashion to execute them, while on a quad-core it could use four to six threads, and so on, with no change to the application. Furthermore, with more application threads available the runtime is freer to schedule them in clever ways to hide latency by exploiting techniques such as simultaneous multithreading (a.k.a. hyperthreading), etc.
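A quick Java sketch of the underlying idea (class and method names are mine, purely for illustration): the application creates as many small, independent tasks as the problem allows, while the pool of native threads that actually executes them is sized to whatever hardware it happens to run on:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreScaledPool {
    public static void main(String[] args) {
        // Size the pool of native threads to the machine we happen to run on.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // The application is free to create far more logical tasks than that;
        // the runtime decides how to map them onto the native threads.
        for (int i = 0; i < 10000; i++) {
            final int taskId = i;
            pool.submit(new Runnable() {
                public void run() {
                    doIndependentWork(taskId);
                }
            });
        }
        pool.shutdown();
    }

    private static void doIndependentWork(int taskId) {
        // placeholder for an independent unit of work
    }
}
```

Run the same application on a machine with more cores and it simply gets a bigger pool - no change to the application, which is the point above.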
Our Pattern Language
Our Pattern Language is a pattern language developed by several parties, including Professor Johnson's group at UIUC, but hosted at Berkeley. The name is borrowed from Alexander's pattern language and is supposedly a placeholder for another, future name. It is interesting to see how much clearer this new paper is than earlier papers and versions of the OPL wiki, and the pattern language definitely seems to be maturing.
OPL is a hierarchical set of patterns divided into five groups. One of the underpinnings of OPL is the arguable point that our inability to architect parallel software stems from our inability to architect software in general. The first group therefore contains general architecture patterns from which the parallel implementation should flow. The next two groups contain the computational problems in parallel computing and algorithms to implement parallelism. The lower groups contain programming models and primitives that are necessary to implement these algorithms.
Another driving idea behind OPL is that research in parallel computing should be driven by the applications that will actually be written. The language therefore includes the 7 dwarfs of supercomputing and the rest of the 13 more general parallel motifs, recast as Computational patterns. These patterns break a little with the common way of framing patterns, namely as a solution to a recurring problem in a context. Instead they are broad categories of solutions to a large number of possible problems. One way to get around this that I think works quite well is to treat them as pattern languages consisting of a number of more concrete solutions to problems encountered in the pattern's domain. One can then discuss the forces that are relevant in the domain at large before presenting the common patterns and the tradeoffs in selecting one or the other. This is the approach taken in the paper "N-Body Pattern Language" by Danny Dig, Professor Johnson and Professor Snir.
One point about the computational patterns that I found enlightening is that they do not cover all problems solved with parallel programming. Many projects use threading or multiple processes for other reasons, such as using threads to avoid hogging the GUI thread or to wait for and collect hardware interrupts. Other examples mentioned in this course are Professor King's web browser, where parallelism is mainly for security, and Erlang/Guardian, where parallelism was originally mostly for reliability. However, the computational patterns are the problems that need an almost inexhaustible supply of processors and where we can always efficiently use more processors by increasing the problem size. The other problems mentioned above are inherently constrained in how much parallelism one can expose; the computational patterns are not. As such, if we can find just a few killer desktop applications that really need to solve one of these problems, then we have a reason to continue making chips with more processors. Otherwise, as Professor Patterson puts it, we are on the road to commoditization and consumers will (rightly) demand cheaper instead of more powerful hardware. After all, we don't really want to spend 97 out of 100 cores detecting viruses, and there is only so much embarrassingly parallel concurrency available through the user running different programs.
The paper identifies four classes of developers and presents a vision that we will architect software in such a way that each kind of developer only needs to concern themselves with the problems at their layer. One example of this separation of concerns is that an application developer (the largest group of programmers by far) should only need to be "weakly aware of parallel programming". This is a nice vision, and it is the way things currently work in domains such as enterprise systems, where parallelism is ubiquitous but most programmers do not spend time thinking about it. If we could achieve this for all thirteen computational patterns and then find killer applications that need these types of computations, then the problem would be largely solved and most developers would not even need much re-education!
However, I do think this will be harder than it was with web servers. The difference this time around is that the problems have much more complex communication patterns. With web servers each client can be run in parallel in a different session. The sessions are for the most part independent of all other sessions and only communicate with each other through a database management system (which much effort has gone into making parallel). Most of the computational patterns, on the other hand, have more complex and intrusive communication patterns, making it harder to write frameworks that are applicable to many applications. But then again, frameworks don't really need to be that general.
Thursday, September 24, 2009
JPC: An x86 PC Emulator in Pure Java
JPC is an emulator for x86 machine code written in Java. Chapter 9 of Beautiful Architecture describes its rationale and development, as well as the challenges of creating an emulator in Java. The bulk of the chapter is organized around nine tips for creating fast Java code. An emulator in this context is a piece of software that reads instructions in a machine language and then executes these instructions. As such it is basically not much more than a specialized interpreter for machine code. The benefit of emulators as opposed to virtual machines is that they read and interpret machine code instead of running it directly on the host machine. They are therefore more flexible and can support running one type of machine code on a completely different machine. The JPC emulator in this chapter does exactly this, as it interprets x86 code while it itself is running on a JVM. The fact that the JVM often happens to be running on an x86 machine (giving the ironic emulation chain x86 -> bytecode -> x86) is not relevant for JPC, and indeed they do provide examples of JPC emulating x86 on other architectures. The downside of emulators as opposed to VMs, on the other hand, is that they are slower. Instead of just running the program instructions and trapping system calls, they have to read, interpret and run every instruction, which adds overhead.
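At its core such an interpreter is just a fetch-decode-execute loop. The toy Java sketch below (with an invented instruction format - this is not JPC's actual design) shows where the per-instruction overhead comes from:

```java
// Toy fetch-decode-execute loop. The instruction format and opcodes are invented
// for illustration; JPC's real decoder and dispatch are far more involved.
public class ToyEmulator {
    private final int[] memory = new int[65536]; // "guest" memory holding instructions
    private final int[] regs = new int[8];       // guest registers
    private int pc = 0;                          // program counter
    private boolean halted = false;

    public void run() {
        while (!halted) {
            int instruction = memory[pc++];              // fetch
            int opcode = (instruction >>> 24) & 0xff;    // decode
            int dst = (instruction >>> 16) & 0x7;
            int src = instruction & 0xffff;
            switch (opcode) {                            // execute
                case 0x01: regs[dst] = src;              break; // load immediate
                case 0x02: regs[dst] += regs[src & 0x7]; break; // add registers
                case 0xff: halted = true;                break; // halt
                default: throw new IllegalStateException("unknown opcode " + opcode);
            }
        }
    }
}
```

Every guest instruction costs at least one trip through this loop on top of the work the instruction itself does, which is exactly the overhead the chapter's nine tips try to beat down.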
The chapter begins and ends with the authors arguing why one would want to write an emulator in Java, since it does add some overhead that native code doesn't have (even though the overhead shrinks as the technology matures). The two main benefits they describe that are specific to a Java emulator are safety and portability. They argue that since JPC is running inside the fairly secure JVM, such a system is more secure. This is based on the fact that one has several layers of security, and an attacker therefore has to find bugs in more than one place to break all the way into the native environment. The second benefit, portability, stems from the fact that Java Virtual Machines are available for more platforms than x86, which means those platforms can use JPC to run x86 code without having to port the emulator.
When reading this chapter I liked the way they balanced the opposing forces of simplicity and efficiency. Instead of just deciding that it has to go fast, so let us optimize everything, they tried to pinpoint the areas where optimization was actually needed while keeping the rest of the system as simple as possible. They seemed to pay for complexity in some parts with simplicity in others (maybe the average complexity of a system like this is a better measure than that of the most complex function). I also found their sometimes very obscure tricks to be fun to read. One thing I did find strange, however, was their comment on using the heuristic that "the typical machine will not exceed 2 GB of physical RAM". I thought we were past making such assumptions, and indeed this one already seems to be becoming outdated.
The moderator for this paper posed the question of whether JPC could be useful for architectures other than x86. I assume he meant this in the sense of running it on another architecture and not of porting it to emulate another one. I do think this could be very useful. The JVM is very popular on other architectures like ARM. In fact, ARM devices with JVMs are probably far more common than x86 devices, since ARM processors are the best-selling 32-bit processors on the planet (they passed the 10 billion processor mark a year ago and around 1 billion are now shipped every quarter). Just look at the (likely) ARM-powered, Java-capable device in your pocket - the cellphone. That being said, a lot of useful software exists only for the x86 architecture, so an emulator capable of running this software on medium-sized, non-x86 devices such as smartphones, PDAs and some netbooks would be very useful.
Wednesday, September 23, 2009
The Adaptive Object-Model Architectural Style
The purpose of this architectural style/pattern is pretty straightforward, although I did have some problems understanding some of the details. The goal of the architect using this pattern is to allow the system to be extended with new "business rules" without having to change the source code to do so. The way one often models an application is to let concepts be abstract classes and then subclass these for each instantiation of that concept. The example given in the paper is that of a car, which can be subclassed as Prius, etc.
If I understood it correctly, the adaptive object-model pattern changes this so that instead of modeling the domain as classes, one creates a set of meta-concepts (classes) that the user can use to model the business rules by composing them in different ways (favor composition). In the extreme case these concepts would form a complete domain-specific language that could itself be Turing-complete. This way the application moves from being an implementation of business rules to an interpreter of them. This flexibility comes at a cost in that it makes the application harder to understand for developers not used to this way of thinking. However, on the plus side, they can offload some of the work of understanding and creating business rules to domain experts who are often not programmers.
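A minimal Java sketch of the core idea, with names of my own choosing (loosely following the TypeObject and Property patterns the style builds on): entity types and their properties become ordinary objects built at runtime rather than classes fixed at compile time.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The "metamodel": types and properties are data, not compiled classes.
class PropertyType {
    final String name;
    final String valueKind; // e.g. "string", "number", "date"
    PropertyType(String name, String valueKind) { this.name = name; this.valueKind = valueKind; }
}

class EntityType {
    final String name;
    final List<PropertyType> properties = new ArrayList<PropertyType>();
    EntityType(String name) { this.name = name; }
}

// An entity is just a bag of values interpreted against its type.
class Entity {
    final EntityType type;
    final Map<String, Object> values = new HashMap<String, Object>();
    Entity(EntityType type) { this.type = type; }
    void set(String property, Object value) { values.put(property, value); }
    Object get(String property) { return values.get(property); }
}

public class AomSketch {
    public static void main(String[] args) {
        // "Car" and its properties could just as well come from a database or config file.
        EntityType car = new EntityType("Car");
        car.properties.add(new PropertyType("model", "string"));
        car.properties.add(new PropertyType("price", "number"));

        Entity prius = new Entity(car);
        prius.set("model", "Prius");
        prius.set("price", 22000);
        System.out.println(prius.get("model"));
    }
}
```

Because "Car" and its properties are just data, they can live in a database or configuration file and be changed by a domain expert without recompiling anything.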
In the spirit of sharing, I once developed an application that had some of these characteristics, although it was far simpler than the full architecture presented here. The application was a performance analysis tool that presented various groups of HW/SW counters and derived counters that had been sampled from a graphics processor and its SW infrastructure. When I started on this project the decision to rewrite the original tool had just been made, so we could start from scratch. The original tool would read a fixed set of counters from a binary file created by our graphics drivers. It would then, based on "business knowledge" of the different counters, create derived counters (for example <#something per frame> -> <#something per second>, or <#counter1 + #counter2 / #counter3>). This was not a good idea, as it required the application to be changed if a base counter was added, if we wanted a new derived counter, or if counters were replaced, as they would be when new processors came along. In the new version the application became an interpreter that read an XML file describing an arbitrary number of counters in arbitrary groupings. These counters were tagged with information describing how they should be processed and formatted by the application. This new design was agnostic of what it displayed, which proved very useful as the HW and driver people could add new counters without involving my team. Furthermore, when the maintenance was later taken over by another location on another continent, this flexibility became even more valuable, as the team maintaining the application and the teams with counters to show could work independently with the XML file format as the interface.
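The essence of that design, in a hypothetical Java sketch (my own simplification - the real tool read a richer XML format): a derived counter is just data naming the base counters and the operation, so adding one never touches the code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplification of the counter tool: derived counters are data,
// not code, so new ones can be added to the data file without recompiling.
public class CounterSketch {
    static class DerivedCounter {
        final String name, left, op, right;   // e.g. loaded from a <derived .../> element
        DerivedCounter(String name, String left, String op, String right) {
            this.name = name; this.left = left; this.op = op; this.right = right;
        }
        double evaluate(Map<String, Double> samples) {
            double a = samples.get(left), b = samples.get(right);
            return op.equals("/") ? a / b : a + b;   // tiny "interpreter"
        }
    }

    public static void main(String[] args) {
        Map<String, Double> samples = new HashMap<String, Double>();
        samples.put("vertices", 3000000.0);
        samples.put("frames", 60.0);

        DerivedCounter perFrame =
            new DerivedCounter("vertices_per_frame", "vertices", "/", "frames");
        System.out.println(perFrame.name + " = " + perFrame.evaluate(samples));
    }
}
```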
Finally, as a digression and to try to better understand the pattern myself, I found it interesting to compare it to the evolution of graphics processors. These used to be fixed function, which required the HW to be changed to support new lighting models (bump mapping, shadows) and other effects. This proved to be too constraining and the fixed functionality was gradually loosened until it was finally replaced with programmable stages that allowed the user to define their own "rules" (shader programs) for how lighting, etc. should be calculated. The graphics processor would then "interpret" these rules the same way as AOMA software interprets business rules in a database. And as we all know, in the case of graphics processors, the rules eventually became so flexible that graphics processors became useful in areas that had nothing to do with graphics.
Tuesday, September 22, 2009
Guardian: A fault-tolerant operating system environment
This week's chapter from Beautiful Architecture for CS527 was on the Guardian operating system and the T/16 machine. Both the operating system and the machine were engineered for reliability, trading most other quality attributes for it. The core idea was that everything should be duplicated in case one part goes down. The T/16 machines had at least two processors, two busses, (often) two disks, etc.
Each process would be duplicated on two processors. On one processor it would be active, and on the other it would be passive, waiting for the first one to die or give up control. In the reliability world there are basically three ways to recover in the face of failure: job replication, checkpointing, or attempting to repair the state of the execution. The last one is the least general (but the most used) and must be custom-fit to each problem, for example by using exceptions. For the Guardian operating system they chose application-controlled checkpointing to allow for recovery. As such, each program would be responsible for checkpointing its state at various intervals, and if a processor went down the other one would start from the last checkpoint. The biggest risk with this approach is if an application fails to checkpoint after an externally visible operation (giving the ATM customer money...). If this were to happen, the operation would be performed again by the other processor. And what if a processor fails between a request for an IO operation and the point where the data is checkpointed?
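Conceptually, the primary/backup scheme looks something like the Java sketch below (an invented illustration, not Guardian's actual mechanism): the active copy pushes its state to the passive copy at points it chooses, and the passive copy resumes from the last state it received if the active one dies. The comment in the middle marks exactly the ATM-style risk mentioned above.

```java
import java.io.Serializable;

// Invented illustration of application-controlled checkpointing; Guardian's real
// mechanism shipped checkpoints between processors over its own message system.
interface CheckpointChannel {
    void send(Serializable state);   // primary -> backup
    Serializable lastReceived();     // what the backup currently holds
}

class AccountTask implements Serializable {
    long balance;

    void runAsPrimary(CheckpointChannel channel) {
        balance -= 100;              // do some work
        channel.send(this);          // checkpoint BEFORE the externally visible step
        dispenseCash(100);           // if we crash after this but before the next
                                     // checkpoint, the backup may repeat it!
    }

    void runAsBackup(CheckpointChannel channel) {
        AccountTask last = (AccountTask) channel.lastReceived();
        this.balance = last.balance; // resume from the last checkpoint
        // ...continue processing from here if the primary dies
    }

    private void dispenseCash(long amount) { /* externally visible side effect */ }
}
```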
When reading the chapter I was thinking that the architecture presented was a long string of peculiarities and ad hoc add-ons that had become necessary over the decades, not a beautiful system with high conceptual integrity. However, while writing this post it occurs to me that its beauty lies in the way every aspect of it and every decision made reinforces its reliability. It is obvious that the architects really had duplicity in their blood, as the author points out.
The author cites improved commodity hardware and the burden of legacy code as two reasons why the system became obsolete in the nineties. This sounds reasonable to me, and the fact that the system was popular for 15-20 years is no mean feat for special-purpose HW.
Monday, September 21, 2009
Big Ball of Mud
A big ball of mud is a system characterized by a lack of structure and conceptual integrity and containing many ad hoc fixes. The paper argues that this is a pattern and not an anti-pattern, as it is likely the most dominant architecture of today's systems. It is argued that a big ball of mud is not necessarily bad. Examples of when it may be warranted are when a system is not complex enough to warrant more architecture, when time to market is very tight, and in the early phases of the development process before the natural architecture has been uncovered.
I liked the many insightful quotes in this article about both architecture and organizations, and many of them felt uncomfortably familiar. One of these quotes was that "Sometimes freedom from choice ... is what we really want", and I'd like to reference Barry Schwartz's great Google tech talk "The Paradox of Choice", where he talks about why more choice can be bad.
The Throwaway Code pattern talks about throwaway prototypes and why they tend to stick around. The authors suggest writing such code in another language as a remedy. Another trick to ensure clients/managers don't ask for a prototype to be shipped, which I learned from a real-time systems instructor, is to make sure every prototype crashes at the very end of the demo. This trick admittedly requires a lot of courage and is a bit sneaky, but it is nonetheless interesting and clearly communicates that something is not production code to those who are not initiated into the details behind the user interface.
The final thing that came to mind when reading this article was that I thought it argued a bit too strongly for reconstruction, although it did point out that this is not always the best choice. Admittedly, reconstruction is sometimes the only way forward, but the general tendency among programmers seems to lean towards reconstruction more often than I think is warranted. A new implementation will introduce new bugs, so in most cases I think refactoring is a better choice, as it builds on 'working' code.
Thursday, September 17, 2009
Layers
Layers is arguably the most common architectural pattern, and most of the software I have been exposed to has been organized more or less using it. Broadly speaking it is a pretty obvious concept, but the pattern does contain pretty interesting discussions of tradeoffs. It highlights the most central set of conflicting forces in considering this pattern, which is the tension between clear separation into multiple layers (modularity) and efficiency. Efficiency concerns come from the fact that multiple layers and enforced interfaces usually carry an overhead. As one version of the controller part of the MVC song goes: "I wish I had a dime for every single time, I passed on a String". In some types of software, such as drivers and system software, this might be an important tradeoff.
One such case I have experienced was implementing an old industry-standard API on a modern and novel piece of hardware. In this case the software layers had to do a lot of costly reshuffling and tricks in order to conform to the specified API interface. As such, the required layering came with a big performance hit. However, in most types of software, such as most application software, this is not a big problem and one is free to select the best abstractions without worrying too much about their efficiency.
The chapter also mentioned defining interface objects for layers. I guess that by this they mean facade classes, such as the one used as the interface to the model in the Making Memories system. Furthermore, I found the comment on keeping the lower layers as slim as possible to be very true. It also follows the Unix adage of providing features and not policy. Another interesting point was their observation that one should avoid defining components first and then putting them into layers. They argue that this will likely lead to a system where the layers don't really capture the inherent ordering principles of the abstractions. The layer organization will therefore not be intuitive and is unlikely to be respected by future maintainers, who will take shortcuts and violate the system's conceptual integrity.
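A facade acting as a layer's interface object might look like this minimal Java sketch (the class names are mine): the layer above only ever talks to the facade, so everything behind it can change freely.

```java
// Minimal sketch of a facade acting as a layer's single interface object.
// The presentation layer calls OrderFacade; the classes behind it are free to change.
public class OrderFacade {
    private final InventoryService inventory = new InventoryService();
    private final PaymentService payments = new PaymentService();

    public boolean placeOrder(String itemId, int quantity, String cardNumber) {
        if (!inventory.reserve(itemId, quantity)) {
            return false;            // the facade hides the individual services' protocols
        }
        return payments.charge(cardNumber, inventory.priceOf(itemId) * quantity);
    }
}

// Lower-layer classes, invisible to callers of the facade.
class InventoryService {
    boolean reserve(String itemId, int quantity) { return true; }
    long priceOf(String itemId) { return 100; }
}

class PaymentService {
    boolean charge(String cardNumber, long amount) { return true; }
}
```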
Xen
Xen is a virtualization platform that allows one PC or server to run multiple operating systems simultaneously. This allows users to do things like running different operating systems on the same machine or providing one operating system for each application. The latter is good because it gives each application a clean execution environment. This ensures that different applications don't interact in mysterious ways and that one application doesn't accidentally or intentionally bring the whole system down. It also allows for a more robust system overall, as Xen is smaller than a full operating system and therefore less software (and fewer bugs) has to run at the highest privilege level.
The chapter on Xen in the book Beautiful Architecture presents it as an architecture built on distrust. Since each user is encapsulated in their own OS, they have fewer opportunities to interfere with other users or the execution environment (Xen). In addition, the client is somewhat more secure since they run their own environment, although as An-Hoe Shih pointed out there are still security risks involved, as the virtualization platform itself might contain malicious code. However, this requires that the service provider is malicious, and the client is still more secure against other clients.
What interests me most about Xen is the way the architecture divides concerns into different processes for security. Each operating system runs in its own domain, with dom0 being the supervisor. In addition, it even allows different device drivers to be farmed out to completely separate driver domains. The rationale for this is safety, and indeed processes aren't only for speed, but the side effect is a very scalable system as cores per chip are likely to move into the tens and hundreds.
It also makes me wonder whether we are finally getting ready to welcome microkernel operating systems. In the nineties many argued their case, but they never quite made it into the most popular systems (Mac OS X being the exception). The reason for this was probably that they never reached the speed of well-designed monolithic or hybrid kernels. But if we are ready to accept multi-process web browsers and whole virtualization-platform indirections for safety and encapsulation, then surely microkernels can't be that bad anymore. Besides, they too would have a good scaling story on future chips.
Tuesday, September 15, 2009
Pipes and Filters
One of this week's readings was a pattern from the POSA book on pipes and filters. The chapter didn't really contain much new information for me, but I had not thought of it in terms of active/passive filters before. Other than this I thought the chapter mixed the general idea with Unix concepts too much, which could make it a bit confusing for readers not familiar with Unix. Perhaps it would have been better to first discuss the generalized pattern and then have an example describing its use in Unix?
Pipes and filters, often called pipelines in the parallel world, is a very common pattern that is used in many places for different reasons. In Unix shells its primary usage is to provide a facility for communication between small applications that do one thing and do it well. The intent there is chiefly flexibility, modularity and reusability - the applications can be combined with other applications to solve problems the original authors may not have thought of.
In the parallel world, pipelines are used as a means to expose parallelism in a problem. A task is broken into different stages, and if many tasks have to be processed, or if one stage can start processing a task before the previous stage is finished, then we can do this concurrently. A third place where pipes and filters, or the more general form often called dataflow networks, are extremely common is in media processing systems. In a previous blog post I mentioned GStreamer, which is a popular pipes-and-filters framework for all sorts of media processing in the Linux world. Other media APIs that employ pipelines are OpenGL and OpenAL.
Typical systems implementing the latter two demonstrate yet another benefit of pipelines: if we have broken a task into specialized pipeline stages, then we can process some stages more efficiently using specialized HW. In the mobile world there are typically many different specialized processors on a SoC (System on Chip), such as graphics, audio, video, wireless and Bluetooth, and the most common way of using them is through pipelining. As the number of transistors on a chip, at least for the time being, still increases exponentially (even though the clock speed doesn't), I suspect that more heterogeneity will start to become the norm on PC microprocessors as well. As such, we in the computer world will also have the specialized machinery that Eric G. mentions.
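As a minimal illustration of the parallel use of the pattern (a Java sketch with made-up stage names, not taken from the POSA chapter), each filter runs on its own thread and hands work to the next one through a bounded queue acting as the pipe:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Two-stage pipeline: "decode" and "render" run concurrently on different items.
public class PipelineSketch {
    public static void main(String[] args) {
        final BlockingQueue<String> decoded = new ArrayBlockingQueue<String>(16);

        Thread decoder = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int frame = 0; frame < 100; frame++) {
                        decoded.put("frame-" + frame);   // stage 1: produce into the pipe
                    }
                    decoded.put("EOF");
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        });

        Thread renderer = new Thread(new Runnable() {
            public void run() {
                try {
                    String item;
                    while (!(item = decoded.take()).equals("EOF")) {
                        System.out.println("rendering " + item);  // stage 2: consume
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        });

        decoder.start();
        renderer.start();
    }
}
```

While the renderer works on one frame, the decoder is already producing the next one, which is where the concurrency comes from.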
Sunday, September 13, 2009
Beautiful Architecture - Data grows up
Chapter six of the Beautiful Architecture book presents the architecture of the APIs that Facebook offers to external third-party applications. It is written by an engineering manager at Facebook, but one could almost suspect it was written by a product manager, as it reads like a pitch for a service. It is structured almost like a tutorial, presenting the different services, the ideas behind them and examples of their use.
The main idea is that data is key and that other applications besides Facebook can benefit from the social relationship data contained in its databases. These applications need not have social interaction management as their core service, but can be anything that can benefit from this type of information to enhance how they present their own data to their users. The chapter's argument is that Facebook already has this data, so why should every other application need to build it from scratch? Some time ago I heard about the vision of Facebook as the core platform of the social internet in the same way that Windows has been the central platform of the personal computer. The APIs presented in this chapter tie directly into this vision, providing services as well as a full platform for augmenting application domain-specific data with Facebook's.
The chapter is divided into four main sections. The first two present the data retrieval services that Facebook provides, namely their data retrieval web services and their data query language FQL. The former allows the user to retrieve small sets of data, while the latter allows the user to retrieve larger data structures in one call. The rationale for the second is to reduce the latency incurred by remote calls across the Internet by allowing such calls to be consolidated, as well as to move the burden of query logic to the FB servers. In a recent Google tech talk that I attended, one of the speakers gave a very interesting presentation about latency, pointing it out as the enemy of user experience. Their empirical studies showed that 158 ms is the upper limit to what we perceive as instantaneous. Furthermore, the round-trip latency of one HTTP call across the continent can easily consume one fifth or more of that budget. This is a fundamental property due to the "limited" speed of light and latency in routing mechanisms, and it means that the budget of instantaneous doesn't allow for many such calls!
Their web service APIs (RPC and FQL) are rather obvious ways of exposing their data, although the discussion of the security and authentication challenges was fairly interesting. What I found much more groundbreaking were the services described in the later sections. These services, mainly the Facebook Markup Language (FBML), FB JavaScript (FBJS) and FB cookies, turn the previous model on its head. Instead of Facebook providing a set of services that allows third-party applications to query social data, the applications themselves become services providing information to the Facebook platform. Facebook adopts the role of the web browser from the applications' point of view, and they in turn provide FB with markup and data from their own databases. FB then processes this markup, inserting data from its own databases as needed, and forwards it to the end user's browser as HTML, etc. In addition, FB allows the applications to provide limited declarative scripts that FB turns into JavaScript.
The whole concept of Facebook as a browser is quite interesting, although I find it humorous how it follows the adage of "solving every problem by adding yet another level of indirection". In addition, they have reached the conclusion that an imperative, Turing-complete language such as JavaScript poses too much of a security risk to allow into their system outside the (somewhat) protected environment of the web browser. Instead they provide declarative languages (FBML and FBJS) which they then translate into the standard formats accepted by the browsers. Like all declarative languages, these require the developer to say what she wants instead of how she wants it done. This makes it easier for the compilers inside the FB platform to guarantee that the code is safe by design, as they have more semantic information to work with.
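As a toy illustration of the "Facebook as a browser" idea (this is my own sketch, not how FB actually implements it, and the fb:name tag and helper names here are just for illustration), the platform essentially has to rewrite application-provided markup, filling in social data before the result is sent on to the real browser. Something in this spirit:

    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Toy stand-in for the FBML processing step: expand a hypothetical
    // <fb:name uid="..."/> tag into plain text using data the platform holds.
    public class MarkupExpander {
        private static final Pattern NAME_TAG =
                Pattern.compile("<fb:name uid=\"(\\d+)\"\\s*/>");

        public static String expand(String appMarkup, Map<Long, String> userNames) {
            Matcher m = NAME_TAG.matcher(appMarkup);
            StringBuffer html = new StringBuffer();
            while (m.find()) {
                long uid = Long.parseLong(m.group(1));
                String name = userNames.get(uid);
                // Replace the declarative tag with the resolved value.
                m.appendReplacement(html, Matcher.quoteReplacement(
                        name != null ? name : "unknown user"));
            }
            m.appendTail(html);
            return html.toString();
        }
    }

The real platform of course does vastly more (access control, caching, script rewriting), but the inversion is the same: the application supplies declarative markup, and the platform resolves it against its own data before the browser ever sees it.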
Friday, September 11, 2009
Excerpts from the works of Christopher Alexander
Yesterday I read some excerpts from Christopher Alexander's "The Timeless Way of Building" and "A Pattern Language". I first heard of Christopher Alexander during my undergrad and have been meaning to read some of his works, so I am glad I finally got an "excuse" to do it.
Alexander's writings were pretty much what I expected based on his reputation, with zen-like prose and insightful observations. In the flower and the seed he talks about the quality of living things - the quality without a name. To an engineer the concept seems very vague, yet at the same time it feels so familiar. The QWAN cannot be made, but must flow out of our work on its own. Since the QWAN can't be made, we cannot bring it into life through some monumental act. We can only generate things with life through an incremental process where each part is shaped individually to be in perfect harmony with its surroundings. The way I read it is that we must therefore not try to make things with QWAN, but rather shape our process so as to let QWAN emerge on its own.
It seems clear indeed how the founders of XP were influenced by Alexander to distrust big design up front and instead favor small iterations where the system and its architecture are allowed to emerge on their own.
With "Our Patterns Language" Alexander tries to document the patterns that already exist in our traditional towns and buildings. These are not the patterns created from the minds of a few architects, but the ones that have emerged on their own wherever buildings have been built by their users in harmony with their surroundings and their use. Alexander's patterns are not strict replicas, but recurring structures that are similar, but yet slightly different in each manifestation. He describes how each house in the Alps is similar yet different, each one being perfectly adapted to its particular location and use. In this I believe some of the QWAN lies. Beauty lies not in perfect geometric shapes or in sameness, but in the small variations on a common theme that would only make a structure beautiful in its particular location with its particular use.
For me a pattern has always been a solution to a problem that many people have faced before and solved with the same general idea. A pattern does not need to follow the common rules of thumb of a field; it must only have been useful in solving a problem for people in the past. Indeed, I remember thinking when reading design patterns that some of them breached certain design principles as I understood them. But that is OK, because they are good solutions to the problems they address, even if probably not the best ones.
Alexander thinks such patterns are important because they are the natural rules for creating beautiful structures that have emerged on their own. They are not only "... a pattern which one might or might not use ...", but ".. desirable pattern[s] ..." that one must create "... in order to maintain a stable and healthy world." As such, Alexander values the patterns that naturally arise when something is built by the people who are most familiar with the structure's use and location and with how similar structures have been created. These builders do not sketch out every detail, but brick by brick build a structure from the mental images in their minds. It is these shared images in "a farmer's mind" that are what Alexander calls patterns.
In the last excerpt a few of Alexander's patterns are presented. I must admit that I had a hard time reading them while focusing on their application to CS, as my mind constantly wandered to thoughts of the places I have lived and worked and how they fit in. Even so, I want to attempt to apply one of them to our field.
Site repair states that one should always build a structure on the parts of the land that are in the worst condition. The good parts are already healthy and do not need our help; it is to the rest that we should apply ourselves to improve our surroundings. In terms of software engineering I cannot help thinking about the concept of technical debt. In a system there is often one component that is in a worse state than the others. Perhaps it had to be rushed in order to reach an important deadline. Or perhaps it has gradually deteriorated to a point where people dread touching it and apply great ingenuity to avoid changing its internals. Because why risk breaking something that, after all, does work? And why not implement that new feature, which may belong in this component, in another one that is cleaner and that won't break down when changed? If we were to apply site repair to this problem, we would fix the parts of our system that are in the worst shape. The good parts are already healthy and do not need our help. It is in the unhealthy part, and everyone will have an idea of where this is, that we can increase the healthiness of our system.
The intimacy gradient is also quite easy to apply in terms of layers, but I am curious to hear suggestions for analogies to light on two sides of every room. I suspect that there aren't really any, which is fine. We can still learn from Alexander's thoughts and his description of how he uncovered the pattern, not by analytical studies, but by casually observing it wherever he went.
Wednesday, September 9, 2009
Beautiful Architecture - Resource-Oriented Architectures
Chapter 5 of the book Beautiful Architecture dealt with an architectural "style" I was not familiar with, which the authors call "resource-oriented architecture". The chapter describes the currently popular SOA approaches (which I am not very familiar with either) to the architecture of business systems, and how these fail to solve the problem of reducing complexity for the developers. The chapter also claims that SOA approaches to a degree fail to reach their goal of allowing heavy reuse of business services in different contexts. The authors then present an alternative modeled on the world wide web, which is what they call resource-oriented architecture or ROA.
My understanding of ROA, which will likely reveal my ignorance as I have only had limited exposure to the business system development world, is that it moves the primary focus from the services (behavior/verb/action) to the data (content/noun/thing). In a sense I see this as weakly analogous to the move from structured programming to object-oriented programming. The former emphasized functions that perform some action, while the latter emphasizes the things that are being modeled (I believe the original goal of Simula was to model things in the real world in order to simulate them more intuitively). Stretching this analogy too far, a SOA is similar to providing function pointers that implement services, while a ROA would give you a pointer to an object's interface which you could then use (and reuse) to query/set different properties independent of the underlying implementation.
Of course ROA resource links, like objects, provide services allowing manipulation of and access to different representations of the data (GET/POST/PUT/DELETE - what is wrong with good ol' CRUD?), but the first-class concept is the static dimension (the content pointed to by the URLs) as opposed to the time dimension (the action to perform on that content). As E. W. Dijkstra observed in his letter arguing that goto statements are harmful: "My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed" (yes, I'm a bit of a sucker for good quotes). I think this is an insightful and rather pragmatic observation and one we should make use of.
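To illustrate the noun-versus-verb shift with a deliberately simplified sketch of my own (the interface and class names below are made up for illustration, not taken from the chapter):

    // Verb-first (SOA-ish): the service exposes actions, and the data is an argument.
    interface OrderService {
        void shipOrder(long orderId);
        String getOrderStatus(long orderId);
    }

    // Noun-first (ROA-ish): the client holds a reference to the resource itself and
    // manipulates its representations through a small, uniform set of operations.
    interface Resource {
        String get();            // fetch a representation of the content
        void put(String body);   // replace the content with a new representation
        void delete();           // remove the resource
    }

    class OrderClient {
        // The link is the first-class thing that gets passed around and reused;
        // how the resource is implemented behind it is irrelevant to the client.
        void example(Resource order) {
            String representation = order.get();
            order.put(representation.replace("status=NEW", "status=SHIPPED"));
        }
    }

The uniform interface is of course exactly what HTTP already gives us; the point of the sketch is only that the thing you hold on to is the resource, not a catalogue of verbs.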
The chapter also discusses more practical considerations, such as security being more easily handled by design in a ROA system since it passes links around instead of data. A client can therefore request and get a link to a resource. This link can then be used several times by the client, and could potentially even be shared with other clients, without worrying too much about authentication, authorization and encryption. The access control is instead performed when one of the clients tries to use the link, which in many ways is how it works on the internet. Another benefit they discussed was that ROA makes it easier to handle caching, which seemed to be grounded in the observation that it is easier to cache a resource than a "service".
Finally, Will Leinweber put forth the question of whether always putting data up front is a good idea. It was an interesting question and I thought about it a bit. My instinct is not to believe in silver bullets, but for an information handling system I would be inclined to say that it is a good idea, as it is easier to conceptualize and it aligns the architecture with the goal being solved (i.e. accessing information).
Tuesday, September 8, 2009
ArchJava
The paper "ArchJava: Connecting Software Architecture to Implementation" addresses the problem of architecture and implementation happening independently of each other. A problem with this duplicity is that the implementation usually ends up diverging from the architecture with backdoor communication paths and "hacks" that are not captured in the architectural diagrams. In fact in many cases I believe such backdoors are intentionally kept out of diagrams and descriptions to prevent them from messing up the otherwise beautiful illusion. To fix this the developers either have to update the architecture diagrams (which would probably complicate them) or change the implementation to conform to the architecture description.
The paper presents ArchJava, a set of extensions to the Java programming language that captures the architecture through components and communication paths. As our previous readings have tried to tell us (BA Ch. 1, 4+1, Boxology), these are far from all the aspects of architecture, but they are probably among the most important ones. We could see this quite clearly in the wreck that was the messy metropolis (BA Ch. 2). Components in ArchJava are a special type of class that can contain subcomponents, thus forming a component hierarchy. Communication paths are "ports" that can be connected to other components. These ports consist of required and provided methods that form a contract to which all components we want to connect must conform. In fact they rather remind me of two-way Qt/Boost signal-slots, which allow developers to connect conforming signals and slots from any two classes together. Signals/slots in these C++ libraries are really useful, but have been around for a long time. Furthermore, ArchJava disallows all non-port method calls to components that are not a sub-component of the caller and guarantees that this restriction is enforced.
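From memory of the paper, the component/port syntax looks roughly like the sketch below; I am reconstructing the details, so treat the exact keywords and signatures as approximate rather than authoritative:

    // Rough ArchJava-style sketch (reconstructed from memory; details may be off).
    public component class Scanner {
        public port out {
            provides String nextToken();
        }
        // ... implementation of nextToken() ...
    }

    public component class Parser {
        public port in {
            requires String nextToken();
        }
        // parsing code may only talk to other components through its declared ports
    }

    public component class Compiler {
        private final Scanner scanner = new Scanner();
        private final Parser parser = new Parser();

        // Communication paths are declared explicitly; the compiler rejects any
        // cross-component call that does not go through a connected port.
        connect scanner.out, parser.in;
    }

The interesting part is that the connect declarations are checked code, not a diagram that can quietly drift away from reality.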
An important feature of ArchJava is that it is implemented directly in the Java language as a set of extensions. The developers are thus relieved from having to maintain two duplicate architectural descriptions: the explicit description on paper (or in a separate ADL) and the implicit one in the source code. Beyond the reduced maintenance there is the practical consideration that everything that is duplicated will diverge and cause confusion, misunderstandings and sometimes havoc (I've always wanted to use that word in an academic setting!).
In addition to avoiding duplication, having the architectural description explicit in the source code allows the ArchJava compiler to guarantee that there are no hidden back doors (with the rather big exception of shared data). This guarantee is rather useful, as one can be sure that the architecture one discusses is the one that actually exists, even though it is probably not the same as the one one started out with. I think it could also be useful in allowing developers to refactor the architecture with more confidence, in the same way that unit tests and interface contracts do.
The authors of the paper spent considerable paper-space on a case study of a small-to-medium-sized application. Their intent was to demonstrate how ArchJava could be added to an existing application to make the architecture explicit, but I got the feeling most of the paragraphs were about refactorings based more on the knowledge of experienced Java developers than on ArchJava itself. That being said, they did manage to convince me of the benefit of being sure the architecture you think you have is the one you actually have, especially as the size of the project increases into the hundreds of thousands of lines.
Referring to the questions from Professor Johnson, like Jason Danielson (I really liked the video blog by the way), I do not think ArchJava would have helped the team behind Making Memories much in the initial development. The system was created by a team of seemingly skilled developers who appeared to be highly motivated and set on making a great system with a great architecture. In a sense they had solved this problem through process, skill, gelling and caring, and that is far more potent than any set of language extensions.
However, the part of the Making Memories saga that we were told about was only the initial development, which is a fraction of the whole story. The system they made will need to be maintained for many more years, and it is my experience that architectural rot has a higher tendency to set in during this phase. The discipline that ArchJava enforces would probably be far more useful ten years down the line, when all the original developers and all the excitement of creating a great new system are long gone.
Wednesday, September 2, 2009
Beautiful Architecture - Making Memories
The system in this story, a photo storage, manipulation and processing system, was a bit strange for me to read about, since so many of their architectural choices and considerations were similar to the ones we made in an open-source application I co-wrote years ago during my undergrad. (The application was called Stopmotion and can now be found in both the Debian and Mandriva official repositories.)
However, this is not all that strange, since their design is really, as far as I can see, just a variant of the well-known MVC architecture pattern. Their domain is the MVC model, their forms are controllers, and their properties are the subjects in the observer relationship between the model and the view/presentation layer, which allows the model to indirectly and dynamically inform the presentation layer of changes and thus keep it in sync. In fact I find it strange that they can describe all of this without even mentioning that pattern. Also, when reading the article I do wonder whether this system is a bit over-engineered and perhaps even an example of the second-system effect (I wonder about the same thing with Stopmotion, by the way). The kiosks seemed fairly basic, with the GUI operating in a screen-by-screen workflow, which limits the number of interacting components on the screen at one time. However, I do believe it is better to err on the side of a bit too much infrastructure than the opposite, and based on this story they do seem to have been successful at keeping entropy in check so that they could work efficiently. This impression is strengthened by the author's closing comment about remembering the different classes and their interactions fondly, which is a feeling I share from my experience with Stopmotion.
One thing I would like to point out, though, is the usefulness of domain/application facades in implementing things like undo (or logging, etc.). These classes work as gateways to the domain/model, which makes them ideal places to log commands or take snapshots of the model state so that these can later be run backwards/restored in order to undo an action. In our application we used a variant of the Command pattern to implement this functionality.
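For what it's worth, the core of what we did looked conceptually like the following sketch (the real Stopmotion code is C++ and more involved; the class names here are made up for illustration):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Each user action is reified as a command that knows how to undo itself.
    interface Command {
        void execute();
        void undo();
    }

    // The facade is the single gateway to the model, so it is the natural place
    // to record every command as it passes through.
    class DomainFacade {
        private final Deque<Command> history = new ArrayDeque<Command>();

        void perform(Command c) {
            c.execute();
            history.push(c);
        }

        void undoLast() {
            if (!history.isEmpty()) {
                history.pop().undo();
            }
        }
    }

Because every mutation funnels through the facade, undo support falls out of the design almost for free.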
Tuesday, September 1, 2009
A Field Guide to Boxology
The paper on boxology was to me the most interesting paper/chapter of the first two weeks of this course. The paper attempts to establish a taxonomy and framework for describing different architectural styles and to place various well-known styles in a two-dimensional topology. The paper appears to be from before architectural styles started to be documented as patterns (i.e. from before the POSA book) and treats styles in a somewhat more formal manner than patterns do. However, since the paper is about creating a framework for classifying different architectures, it is equally applicable as a classification of architectural patterns. In addition, the paper's treatment of architectural styles is observational as opposed to a priori or analytical, and as such it is very similar to the pattern approach.
After the introduction, the main part of the paper has one section describing their framework for classification, followed by two example architectural styles (dataflow networks/pipes and filters, and message passing processors) that are elaborated to demonstrate and attempt to validate the classification.
To me the section on their classification strategy was by far the most interesting. They define components and connectors as the primary classification criteria, in addition to the secondary criteria of control, data organization and the interaction between these. One of the things I like about this section is the treatment of connectors as an aspect that is as important as components, since there are many more ways to connect things than simple static function calls (observers, function pointers, virtual functions, message passing, etc.). I also found their examples of different characteristics that can be associated with connectors, such as format conversion (Adapters...) and even additional functionality such as logging and performance monitoring, to be interesting.
The architectural styles that I am most familiar with from their classification in the appendix ("Table 1") are layered architectures, dataflow networks (mostly through pipelines), the different call-and-return styles and, to some degree, message passing processors. The architectures I am least familiar with are the data-centered repository architectures, and I don't actually know how a blackboard works, so I guess I have some reading to do.
I have encountered pipelines mainly through my work on graphics drivers. A graphics driver, along with the GPU it drives, is primarily a pipeline for producing images. In this particular instance of a pipeline the user provides stream data in the form of vertices and attributes, as well as auxiliary data such as textures and matrices. In addition, the user can typically configure the operation of different pipeline stages (lighting, texture stages) and/or, in more modern systems, even provide whole stages to be executed on the stream data in the form of shader programs. Such a system can consist of many stages, such as draw-call setup, vertex processing and fragment processing, and each stage can typically operate concurrently on different sets of data, as one would expect from a pipeline. Some stages are performed on a CPU while others are more typically offloaded to specialized hardware such as a GPU.
An example of a directed acyclic data network would be the gstreamer architecture for media processing. This pipe-and-filter architecture allows the user to define different filters to process or transform data, as well as sources of data and sinks (for example a file, or a surface to be blitted to a computer screen). These components can then be stitched together into a DAG that can be used to process a flow of data (video/audio/images). This makes for an incredibly powerful architecture (although at the time I was trying to use it several years ago gstreamer was quite immature and caused me to lose some of my hair) where a relatively small set of filters can be composed into many different interesting components of a larger application.
One architecture that is interesting to try to place in the authors' table is model-view-controller. This architecture (often, but certainly not exclusively, found as a "sub-architecture" inside the server of a client/server architecture) can in some sense be seen as a variation of the main program/subroutines specialization of call-and-return. Flow of control, for the most part, moves hierarchically from the presentation layer downwards, but there is also a feedback loop that is dynamically bound at runtime, in the form of an observer that synchronizes different parts of the presentation layer with the current model state. However, a typical MVC can also be seen as a data-centered repository style architecture, as the model will usually represent a set of data to be manipulated and presented.
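To show the feedback loop I mean (again a minimal sketch of my own, not from the paper), the downward call from controller to model is an ordinary static call, while the path back up to the view is the dynamically bound observer connection:

    import java.util.ArrayList;
    import java.util.List;

    interface ModelObserver {
        void modelChanged(int newValue);
    }

    // The model knows nothing about concrete views; it only notifies observers.
    class CounterModel {
        private final List<ModelObserver> observers = new ArrayList<ModelObserver>();
        private int value;

        void addObserver(ModelObserver o) { observers.add(o); }

        void increment() {                       // called top-down by the controller
            value++;
            for (ModelObserver o : observers) {  // feedback loop back to the views
                o.modelChanged(value);
            }
        }
    }

    class CounterView implements ModelObserver {
        public void modelChanged(int newValue) {
            System.out.println("view now shows: " + newValue);
        }
    }

    class CounterController {
        private final CounterModel model;
        CounterController(CounterModel model) { this.model = model; }
        void onIncrementClicked() { model.increment(); }  // hierarchical, downward call
    }

Depending on whether you emphasize the call hierarchy or the shared model data, the same structure lands in either the call-and-return or the data-centered column of the table, which I think says something about how fuzzy these classifications inevitably are.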