The paper "Refactoring for Reentrancy" presents a refactoring algorithm for converting global/static state into thread-local storage (TLS) variables in order to separate the execution of individual threads from each other. Reentrancy is a property of code that is usually taken to mean that the code has no side effects that interfere with other code running in parallel.
To begin with, their vocabulary confused me, as they talked about programs needing to be reentrant and free of global state to keep them from interfering with other programs run in parallel. The most common usage of the term program implies a separate process, which has its own set of global variables, so there would be no problem. However, I quickly figured out that their use of the term program was broader and included libraries (which can be called from multiple threads) as well as a single thread of execution inside an application. With that definition their discussion makes much more sense.
As I mentioned, the goal of their algorithm is to replace static data with thread-local storage. Conceptually this is quite straightforward, but it introduces the need for getters and setters, as well as issues related to the initialization of that data. The latter they handle with lazy initialization, which seems to work well in most cases as long as the static fields are not initialized from mutable variables.
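To illustrate the kind of transformation involved, here is a minimal sketch of my own (the class and field names are invented, not taken from the paper): a shared static field becomes a java.lang.ThreadLocal accessed through getter/setter methods, with lazy per-thread initialization.

```java
// Before the refactoring (sketch): a static field shared by all threads.
class Counter {
    static int count = 0;

    static void increment() {
        count++;               // racy when called from multiple threads
    }
}

// After the refactoring (sketch): the field becomes a ThreadLocal,
// accessed through a getter/setter pair with lazy, per-thread initialization.
class ReentrantCounter {
    private static final ThreadLocal<Integer> count =
            ThreadLocal.withInitial(() -> 0);  // initialized on first use in each thread

    static int getCount() {
        return count.get();
    }

    static void setCount(int value) {
        count.set(value);
    }

    static void increment() {
        setCount(getCount() + 1);   // each thread updates only its own copy
    }
}
```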
The point of reentrancy is, of course, to be able to execute code in parallel without race conditions. In its most common usage the term refers to functions that have no side effects on global memory, and it is stricter than thread safety. Thread safety allows side effects as long as they do not contain races; for non-reentrant functions this can be guaranteed through the careful use of locking. However, thread safety does not mean that functions will not interfere with each other, and the use of locks often serializes the code, removing the benefit of running it in parallel. Reentrancy in a function or a library is therefore a very attractive property: it means the code can safely be used from different threads without introducing synchronization overhead. In effect, reentrant code is embarrassingly parallel.
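To make the distinction concrete, here is a small example of my own (not from the paper): the first method is thread safe but not reentrant, since it works on shared state and serializes all callers on a lock; the second is reentrant, since it touches only its parameters and local state.

```java
class Totals {
    private static long sharedTotal = 0;

    // Thread safe but not reentrant: correct under concurrency because of the
    // lock, but all callers serialize on it and still mutate shared state.
    static synchronized void addToShared(long value) {
        sharedTotal += value;
    }

    static synchronized long getSharedTotal() {
        return sharedTotal;
    }

    // Reentrant: uses only its parameters and local state, so concurrent
    // callers cannot interfere with each other and no locking is needed.
    static long add(long runningTotal, long value) {
        return runningTotal + value;
    }
}
```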
Of course, it is not always possible to turn global variables into TLS variables. In many cases program correctness depends on threads being able to communicate through global data, and removing that data would make the program incorrect. In addition, TLS data is not free: accessing it is more expensive than accessing global data, which introduces overhead. Still, I like their approach of first making the program correct by removing race conditions on global data, and then having the programmer optimize the code by moving TLS data to the heap or the stack, potentially also re-introducing shared data (global or on the heap), but this time with proper synchronization.
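As a sketch of that second, programmer-driven step (again with invented names): a value the refactoring placed in a ThreadLocal but that is only used inside one method can simply become a local variable on the stack, while state that genuinely must be shared can be re-introduced behind explicit synchronization.

```java
import java.util.concurrent.atomic.AtomicLong;

class AfterTuning {
    // A buffer that the refactoring would have made thread-local, but that is
    // only ever used inside format(), can live on the stack instead.
    static String format(int value) {
        StringBuilder buffer = new StringBuilder();   // local: no TLS lookup, no sharing
        return buffer.append("value=").append(value).toString();
    }

    // State that really must be shared between threads is re-introduced as
    // global data, but this time with proper synchronization.
    private static final AtomicLong requestCount = new AtomicLong();

    static void recordRequest() {
        requestCount.incrementAndGet();
    }

    static long requestsSoFar() {
        return requestCount.get();
    }
}
```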
Finally, their algorithm does not let them turn static fields that were initialized using mutable data from another object into TLS data, because moving the initialization to the first use of the variable means that the other variable used to initialize it could have changed by then. I felt they could have solved this case by introducing a new static final variable initialized to the load-time value of the mutable variable, and then using it to initialize the TLS variable lazily on first use. That way every thread's TLS variable would be initialized, and all of them to the same, semantically correct, value; a sketch of what I mean follows below. Can anyone see a hole in this technique?
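Here is roughly what I have in mind, as a sketch with invented names: the mutable value is snapshotted into a static final field at class-load time, and the ThreadLocal's lazy initializer reads only that snapshot, so every thread's copy gets the same, load-time value.

```java
class Config {
    static int mutableDefault = 42;   // mutable data in another class; may change at run time
}

class Worker {
    // Original field (before the refactoring), initialized at class-load time:
    //   static int limit = Config.mutableDefault;

    // Step 1: snapshot the mutable value into a static final at class-load
    // time, i.e. at the moment the original field would have been initialized.
    private static final int LIMIT_SNAPSHOT = Config.mutableDefault;

    // Step 2: initialize each thread's copy lazily from the immutable
    // snapshot, so every thread sees the same, load-time value.
    private static final ThreadLocal<Integer> limit =
            ThreadLocal.withInitial(() -> LIMIT_SNAPSHOT);

    static int getLimit() {
        return limit.get();
    }
}
```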