Advertising nonsense: silver molecules

By Confusion on Saturday 28 March 2009 10:19 - Comments (30)
Category: -, Views: 6.439

"Nieuw Nivea For Men Silver Protect met zilvermoleculen" schreeuwt de radio. Ik hoop dat iedereen op dat moment denkt: .... pardon? Wat in vredesnaam zijn 'zilvermoleculen'?! Ik kijk op de website: misschien dat daar iets zinnigs staat. Maar nee, daar wordt het alleen maar erger:
Silver Protect is de eerste en enige deodorant op basis van uiterst effectieve en wetenschappelijk bewezen zilvertechnologie. De unieke formule bevat actieve zilvermoleculen [..]
Wat een verzameling ongelovelijke flauwekul!
  • Silver molecules do not exist. There are silver atoms and there are silver particles. Nobody who actually knows anything about silver and works with silver speaks of 'silver molecules', not least because metallic silver consists of a lattice of atoms rather than of discrete molecules.
  • What on earth is 'silver technology'? If that term is commonly used for anything at all, it is for methods of mining or processing silver. The only person who uses such a term in the context of a personal care product is a marketer who doesn't know the first thing about the subject.
  • 'Scientifically proven ...technology'? You can demonstrate the correctness of hypotheses and theories using scientific methods. A technology is the application of such knowledge to achieve a particular goal, such as putting silver atoms into a deodorant. There is nothing left to prove about that, unless they mean they have proven that there actually are silver atoms in the deodorant. Well, applause...

Go forth and talk to strangers

By Confusion on Monday 23 March 2009 20:42 - Comments (3)
Category: Philosophy, Views: 2.739

I sometimes wonder why I keep taking the trouble of rummaging through the thousands of bits of information that RSS feeds offer me on a daily basis. As always, I don't have to wonder for long, for behold, I stumble upon another wonderfully educational insight:
When I was growing up, children were commonly taught: "don't talk to strangers." Strangers might be bad, we were told, so it's prudent to steer clear of them.

As it turns out, this is profoundly bad advice. Most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.

The advice in each of these paragraphs may seem to contradict each other, but they don't. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it's not a random choice. It's more likely, although still unlikely, that the stranger is up to no good.
Bruce Schneier goes on to explain the relevance of this insight for the trustworthiness of collaborative spam filtering, Tor and Wikipedia.

The ability to predict a decision does not disprove free will

By Confusion on Sunday 22 March 2009 17:56 - Comments (7)
Categories: Philosophy, Science, Views: 3.091

Every once in a while an experiment is published that seems to disprove the existence of free will. For a recent case, see this /. article. The experiments are usually refinements of Benjamin Libet's famous experiment, in which he showed that you can predict that someone is going to respond to a certain event before the subject himself is consciously aware that he is going to respond. This fact, that your brain has already fired the signals to perform a certain action while you have not yet consciously registered that you are going to perform it, seems to leave very little room for free will. After all, doesn't free will require you to consciously deliberate the action that will be taken?

The answer is simple: no. It is perfectly possible for free will to exist without requiring conscious deliberation. To understand how that works, you need to rethink what 'consciousness' is. Consciousness is often portrayed as the faculty that allows us to actively participate in our thought processes. We reflect on various ideas and possibilities and finally reach a conclusion, which we can then use to undertake a certain action. However, and this is the essential point: this is not our usual mode of thinking. Most of the time, we respond to our environment in an immediate and involuntary way. When you are going to get a cup of coffee, you don't consciously deliberate "Hmmm, it seems I have decided to get a cup of coffee. Are there any objections to this? It seems not. Well then, let's start to contract the relevant muscles to stand up and start walking, etc. ...". In fact, you just do it. Only after you have gotten up and started walking towards the kitchen may you consciously register "wait, there is still a used mug here; why don't I bring it along to the kitchen?". The earlier decision, taken by unconscious (but not necessarily irrational) processes, registered itself in your consciousness a few moments after it was first reached and the action was initiated. Conscious deliberation then still allows you to intervene.

As a result, neither Benjamin Libet's original experiments nor their later refinements have anything to say about the existence of free will. At most, they have something to say about the time it takes the brain to reach a decision, compared to the time it takes the brain to become aware of its own decision.

Writing good code requires you to perform experiments

By Confusion on Thursday 5 March 2009 07:52 - Comments (22)
Categories: Java, Software engineering, Views: 19.078

How often do you have to solve a small problem that is part of a larger project and decide to take the time to perform some separate experiments to solve it, before adding the partial solution to the whole? In the past, I hardly ever did that, and every time I encounter such a situation, I still have to resist the temptation to take the route that seems the shortest, but that has long proven to be the longest road to the solution.

As an example, suppose you need to extract some information from a string and decide the best way is to use a regular expression. You know the problem isn't trivial and that you will probably need a few attempts to get it right. Still, your initial inclination is to just put the regex in there and go through the motions of compiling the code, performing the steps required to invoke it (click a button, enter some text, etc.), finding out that the regex doesn't work, modifying it and going through the motions again and again and again, probably while making some other modifications as well. Because the cycles take relatively long, progress is slow and you start cursing every typo. If you recognize this, then I think you can become faster and happier at your problem solving by reading on.

The advantage of the 'intuitive' approach is that, if the solution is right, you have immediately come closer to solving the larger problem. However, if the solution fails, the iterations needed to be informed of your failure and to verify subsequent modifications may soon take more time than developing a separate solution to the subproblem would have taken. For some reason, we always seem to underestimate the amount of work it takes to get it right, and consequently we bet on the small chance that we will have the correct solution the first time around.

Some of you may now be thinking that the obvious solution is to write a proper unit test and run it after every modification, until the code passes the test. I agree that goes a long way, but usually the test depends on a larger number of classes, and running all those tests takes time, especially if it bootstraps an entire Spring-Hibernate application or something of the like. In such a case, it still takes more time than is needed.
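For illustration, here is a minimal sketch of the kind of focused test I mean, assuming JUnit 4 (current at the time of writing) and a pattern I made up for the occasion; it exercises only the regex and bootstraps nothing else:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.junit.Test;

// Focused test for a hypothetical pattern that extracts the digits from "order #12345".
public class OrderIdPatternTest {

    private static final Pattern ORDER_ID = Pattern.compile("order #(\\d+)");

    @Test
    public void extractsTheOrderId() {
        Matcher m = ORDER_ID.matcher("Please ship order #12345 today");
        assertTrue(m.find());
        assertEquals("12345", m.group(1));
    }
}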

My solution is not to be afraid to experiment. It seems to cost too much time to create a new script or class, separate from the larger project, provide the correct libraries, run this small project multiple times to get feedback on your solution and finally copy-paste the solution back into the project you are working on. However, my experience is that it is well worth the time, because it prevents the endless cycles of building and deploying the entire application and getting it to provide feedback. I hardly ever write a regular expression or SQL query without first testing it separately from the application. These examples are at the smallest level, but the same holds for somewhat larger design issues and even for issues people call 'software architecture'.
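As a sketch of what such a throwaway experiment can look like (the postal-code pattern and the sample inputs are invented for illustration): a plain class with a main method, kept outside the project, that runs the regex against a handful of inputs and prints the results.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Throwaway experiment: try a postal-code regex against a few sample inputs.
public class RegexExperiment {

    public static void main(String[] args) {
        // Hypothetical goal: pull a Dutch postal code like "1017 GC" out of an address line.
        Pattern postalCode = Pattern.compile("\\b(\\d{4})\\s?([A-Z]{2})\\b");

        String[] samples = {
            "Kerkstraat 12, 1017 GC Amsterdam",
            "Postbus 90801, 2509 LV Den Haag",
            "no postal code here"
        };

        for (String sample : samples) {
            Matcher m = postalCode.matcher(sample);
            System.out.println(sample + " -> "
                    + (m.find() ? m.group(1) + " " + m.group(2) : "no match"));
        }
    }
}

Once the pattern behaves as expected on these samples, it can be pasted into the real code and the experiment thrown away, or kept as the basis of a unit test.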

Another major advantage of these small experiments is that they give you the freedom to explore avenues that you wouldn't dare try inside the actual project. There you can't just start switching libraries or refactoring relatively large parts of the code: changing too much is risky and frowned upon, even if you can easily revert your changes (assuming you are using a versioning system; if you aren't, stop reading, install Subversion and commit your code before continuing this article).

A third advantage of experimenting is that it encourages you to rewrite and polish the code you are writing. You can go through more iterations in experiments 'outside' the project, because you have a clear overview of the code involved. Modifications of code inside a larger project are less inviting to the proper refactoring needed to do 'the right thing'.

Painters, writers, craftsmen, even philosophers: when famous ones are asked for the secret of their success, they always advise practice and experimentation. I hope I have explained that it also simply makes sense. If you don't want to take it from them, take it from the rationale above.