Ehmmm, yeah, right. Thanks for making me feel stupid. Now I still don't know whether it's a library that I need to include to get the application to run, a standalone Flex viewer/debugger or a collection of examples. By clicking "More information" and reading further, I didn't actually get more information. Only after a PgDn did I finally encounter:

"Cairngorm is the lightweight micro-architecture for Rich Internet Applications built in Flex or AIR. A collaboration of recognized design patterns, Cairngorm exemplifies and encourages best-practices for RIA development advocated by Adobe Consulting, encourages best-practice leverage of the underlying Flex framework, while making it easier for medium to large teams of software engineers deliver medium to large scale, mission-critical Rich Internet Applications."
"The Cairngorm microarchitecture is intended as a framework for Enterprise RIA developers."

Ah, framework. That's a word I know. So it's a library.
I have this kind of experience more often than I would like: I read an introduction to something and afterwards I still have no idea what the thing is. Is it just me, or do you also feel that introductions should start with the simple facts and purpose, and perhaps expand towards the developers' all-encompassing vision towards the end, rather than starting out that way and leaving you at a plane of abstraction that makes you miss the actual simplicity of the thing?
The piece is excellent and details matter, but I'll attempt a summary anyway:
if you use a regular HashMap in a multithreaded environment, it seems the worst that can happen is that you incur some additional cache misses due to race conditions. In a lot of situations that is perfectly acceptable, and since wrapping the HashMap with Collections.synchronizedMap() incurs quite a performance penalty (and at the time, that was basically the only alternative), you are tempted to just leave the HashMap in. Don't. The 'put' operation may trigger a resize and redistribution of the internal data structure of the HashMap, which can thoroughly hose it if it is concurrently accessed during the restructuring, to the extent that your program goes into an infinite loop.
These days, there is no performance reason to decide in favor of the regular HashMap: java.util.concurrent.ConcurrentHashMap has excellent performance, even in single-threaded applications. Still, I think I've made the mistake of using a regular HashMap somewhere in the past. The application has never malfunctioned (as far as I know), so it may well be that the chances of this bug occurring are so small that they are negligible for all practical purposes. Nevertheless, unless you want to do the math, replacing it with a ConcurrentHashMap is a safe bet.
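To make the "safe bet" concrete, here is a minimal sketch (the class and key names are my own invention, not from any particular application) of a ConcurrentHashMap being written to from several threads at once, which is exactly the situation in which a plain HashMap's resize can corrupt the table:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheExample {
    // ConcurrentHashMap is safe for concurrent reads and writes without
    // external synchronization; a plain HashMap is not.
    private static final Map<String, Integer> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        // Four threads putting concurrently: with a plain HashMap this could
        // corrupt the internal table during a resize; here it is safe.
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            final int offset = t * 1000;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    cache.put("key" + (offset + i), offset + i);
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        System.out.println(cache.size()); // 4000 entries, none lost
    }
}
```

Since ConcurrentHashMap implements java.util.Map, swapping it in is usually a one-line change at the construction site.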
As an example, suppose you need to extract some information from a string and decide the best way is to use a regular expression. You know that the problem isn't trivial and that you will probably need a few attempts to get it right. Still, your initial inclination is to just put the regex in there and go through the motions of compiling the code, performing the steps required to invoke it (click a button, enter some text, etc.), finding out that the regex doesn't work, modifying it and going through the motions again and again and again, probably while making some other modifications as well. Because the cycles take relatively long, progress is slow and you start cursing every typo. If you recognize this, then I think you can become faster and happier at your problem solving by reading on.
The advantage of the 'intuitive' approach is that, if the solution is right, you have immediately come closer to solving the larger problem. However, if the solution fails, the iterations needed to be informed of your failure and to verify subsequent modifications may soon take more time than developing a separate solution to the subproblem would have taken. For some reason, we always seem to underestimate the amount of work it takes to get it right, and consequently we bet on the small chance that we will have the correct solution the first time around.
Some of you may now be thinking that the obvious solution is to write a proper unit test and run that unit test after every modification, until the code passes the test. I agree that goes a long way, but usually the test is part of a larger collection of classes, and running all those tests takes time, especially if it bootstraps an entire Spring-Hibernate application or the like. In such a case, it still takes more time than is needed.
My solution is not to be afraid to experiment. It seems to cost too much time to create a new script or class, separate from the larger project, provide the correct libraries, run this small project multiple times to get feedback on your solution and finally copy-paste the solution back into the project you are working on. However, my experience is that it is well worth the time, because it prevents the endless cycles of building and deploying the entire application just to get it to provide feedback. I hardly ever write a regular expression or SQL query without first testing it separately from the application. These examples are at the smallest level, but the same holds for somewhat larger design issues and even for issues people call 'software architecture'.
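A scratchpad for the regex example above might look like the following sketch (the pattern and sample strings are hypothetical, just to illustrate the workflow): a throwaway class with a main method that you run in seconds per attempt, instead of redeploying the whole application each time:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexScratchpad {
    public static void main(String[] args) {
        // Hypothetical subproblem: extract the order number from a status line.
        // Iterate on the pattern here until it handles all the sample inputs,
        // then copy-paste the finished pattern back into the real project.
        Pattern p = Pattern.compile("order #(\\d+)");
        String[] samples = {
            "Shipped order #12345 on Monday",
            "order #7 confirmed",
            "no order here"
        };
        for (String s : samples) {
            Matcher m = p.matcher(s);
            System.out.println(m.find() ? m.group(1) : "no match");
        }
    }
}
```

Adding a tricky sample string costs one line, so it is cheap to cover the edge cases that would otherwise only surface after yet another build-deploy cycle.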
Another thing about these small experiments that I find a major advantage is that they give you the freedom to explore avenues that you wouldn't dare put into the actual project. You can't just start switching libraries or refactoring relatively large parts of the code: changing too much is risky and frowned upon, even if you can easily revert your changes (assuming you are using a versioning system; if you aren't, stop reading, install Subversion and commit your code before continuing with this article).
A third advantage of experimenting is that it encourages you to rewrite and polish the code that you are writing. You can go through more iterations in experiments 'outside' of the project, because you have a clear overview of the code involved. Modifications of code inside a larger project are less inviting to the proper refactoring needed to do 'the right thing'.
Painters, writers, craftsmen, even philosophers: when famous ones are asked for the secret of their success, they always advise exercise and experimentation. I hope I have explained that it also simply makes sense. If you don't want to take it from them, take it from reason.
I think that the actual reason for the limited adoption of FP languages is exactly that: the way the advantages are formulated and exemplified are completely obvious and compelling to their proponents, but those same explanations and examples utterly fail to convince most software engineers, because they simply do not appeal to the problems they are confronted with in their daily line of work.
I think this hypothesis is best exemplified by the seminal paper Why Functional Programming Matters. This paper sets out to

"[..] demonstrate to the 'real world' that functional programming is vitally important"

and then goes on with explanations like

[listing of advantages]
"Such a catalogue of 'advantages' is all very well, but one must not be surprised if outsiders don't take it too seriously."

"We must find something to put in its place - something which not only explains the power of functional programming, but also gives a clear indication of what the functional programmer should strive towards."
and applications to things like the Newton-Raphson approximation and tree pruning in artificial intelligence.

"The definition of reduce can be derived just by parameterising the definition of sum, giving"
(reduce f x) nil = x
(reduce f x) (cons a l) = f a ((reduce f x) l)
[I'm inserting an intentional dramatic silence here, in which you can see my facial expression turn to complete astonishment]
Now how in heaven's name can one expect to convince the average software engineer of the tenets of functional programming with these kinds of examples? If it does one thing, it is scaring people away by seeming unnecessarily difficult and academic.
Every introduction to a programming language shows you the recursive method to calculate Fibonacci numbers. It's abstract and many people do not relate to it very well, but it's only a single example. The documentation for FP languages, however, seems to consist solely of these kinds of highly mathematically inspired examples. No 'Address' class to be found there. Hasn't anyone written a functional equivalent of the Pet Store application to demonstrate the power of FP for the regular work that most of us do?
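To be fair, the quoted definition of reduce is less exotic than it looks. A transcription into plain Java (my own sketch, not from the paper) shows it is just a recursive fold over a list, which is arguably the kind of translation that the documentation should have offered in the first place:

```java
import java.util.List;
import java.util.function.BiFunction;

public class Reduce {
    // A direct Java rendering of the quoted definition:
    //   (reduce f x) nil        = x
    //   (reduce f x) (cons a l) = f a ((reduce f x) l)
    static <A, B> B reduce(BiFunction<A, B, B> f, B x, List<A> list) {
        if (list.isEmpty()) {
            return x;                                    // the 'nil' case
        }
        A head = list.get(0);                            // 'a' in 'cons a l'
        List<A> tail = list.subList(1, list.size());     // 'l' in 'cons a l'
        return f.apply(head, reduce(f, x, tail));        // f a (reduce f x l)
    }

    public static void main(String[] args) {
        // 'sum' is then just reduce with addition and zero:
        int sum = reduce(Integer::sum, 0, List.of(1, 2, 3, 4));
        System.out.println(sum); // 10
    }
}
```

Seen this way, the parameterisation the paper describes is nothing more than pulling the operator and the starting value out of a hard-coded summing loop.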
People that want to improve the world often overlook one fundamental problem: you cannot improve the world just by being right. You need to convince others of that fact if you want to exert influence. If you cannot convince them, find out why you cannot convince them. I think there is a bright future ahead for functional programming, as soon as someone stands up to convince the masses.
As not everyone reads the comments, I would like to highlight a few points made there:
- The article I quote is 25 years old and was directed to people in academia. It would not be fair to conclude that all FP documentation is that hard to understand for someone without an academic background.
- There are deliberate attempts at displaying the 'real life' value of FP, for instance in Real World Haskell and Practical Common Lisp.
- Several people assert that FP is catching on, but it just isn't very visible (yet). This may be because those that use it consider it a competitive advantage.