Tuesday, September 29, 2009

Random thoughts on parallelism, functional languages and new C/C++ constructs

In today's CS527 class discussion we talked about parallelism, and the topic of functional languages for parallelism came up. The question was posed whether functional languages are inherently easier to parallelize than imperative languages, thanks to constructs such as map/fold and functions without side effects, and, if so, whether we should therefore switch to these languages.

I made the argument that we should not be quick to throw away all the investment we have made in imperative languages like C and Java, as well as all the knowledge we have gathered about using these languages over the years. Professor Johnson pointed out that even though functional languages have long been proclaimed as the right way to program without ever really breaking through to the masses, that does not mean they won't in the future. He gave an example from a talk he recently attended, where the speaker showed various devices from 1984 (the year, not the book =) that we today typically think of as recent innovations. Examples included touch-screen watches and iPhone-like devices; the point, I believe, being that inventions typically exist for a long time before their time finally comes and they are used in innovative ways.

I found the discussion, as well as JLSjr's and Professor Johnson's points, really interesting, so I thought about it quite a bit on my way home. It seems to me that what is currently happening is that our imperative languages are soaking up the features of functional languages that make parallelism easier, rather than being outright replaced by them. Examples include the broad usage of map-reduce frameworks, a pattern that functional languages have long supported natively, and the planned introduction of lambda expressions (anonymous functions) in the next C++ standard, primarily to make it easier to define kernels to execute in parallel.
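To make the lambda point concrete, here is a minimal sketch using the anonymous-function syntax planned for C++0x (the vector contents and the squaring kernel are made-up examples, and std::transform here is the ordinary sequential algorithm, standing in for whatever parallel algorithm or framework would ultimately consume the kernel):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<double> v = {1.0, 2.0, 3.0, 4.0};

        // The lambda is the "kernel": a small anonymous function written
        // inline at the call site instead of as a separate functor class.
        std::transform(v.begin(), v.end(), v.begin(),
                       [](double x) { return x * x; });

        for (double x : v)
            std::cout << x << "\n";
        return 0;
    }

The point is that the task-specific code shrinks to the one line inside the lambda; everything else is boilerplate that stays the same no matter what kernel you hand over.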

A third example, off the top of my head, is the introduction of closure blocks as an extension to C and C-derivatives (C++ and Objective-C) that shipped with the latest release of Mac OS X, Snow Leopard. The main rationale for these closures, a long-time functional programming mainstay, is to make it easy to specify small blocks of code that can form a task to be executed in parallel by the new Grand Central Dispatch. Grand Central Dispatch is the core of the new parallelism support in OS X; it is integrated with the kernel and essentially uses a thread pool to execute user-specified tasks taken from FIFO queues. Without closures, creating such tasks would require a fair bit of plumbing (Java's new ParallelArray library defines more than one hundred classes that one must subclass to implement different types of task kernels). With closures, a lot of very common pieces of code can be parallelized by just adding a couple of lines of code and a couple of blocks. If you are interested in seeing how this works, an interesting overview of Grand Central Dispatch and the usage of closure blocks in C can be found here.
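For a taste of what this looks like, here is a minimal sketch of a Grand Central Dispatch task written in C with the blocks extension. The array, its size, and the squaring operation are invented placeholders; dispatch_get_global_queue and dispatch_apply are part of the libdispatch API that ships with Snow Leopard:

    #include <dispatch/dispatch.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const size_t n = 8;
        double *data = malloc(n * sizeof(double));
        for (size_t i = 0; i < n; i++)
            data[i] = (double)(i + 1);

        /* A concurrent queue backed by Grand Central's thread pool. */
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* dispatch_apply submits one block invocation per index to the
           queue and waits for all of them to finish; the block literal
           (^) is the task kernel. */
        dispatch_apply(n, queue, ^(size_t i) {
            data[i] = data[i] * data[i];
        });

        for (size_t i = 0; i < n; i++)
            printf("%g\n", data[i]);
        free(data);
        return 0;
    }

The plumbing really is just the queue lookup and the call to dispatch_apply; everything specific to the task lives in the few lines inside the block itself.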

Perhaps functional programming is arriving not through some grand sweeping march where everyone repents their sins and switches to the "only computer languages that are beautiful", but by gradually influencing and changing our current bag of tools (as it has done in the past)? Does anyone have any thoughts on this? Am I reading too much into it, or are these constructs really a first step on the way to switching over completely?
