- Act 1. Someone makes an outrageous claim about a programming language.
- Act 2. Someone points that claim out to others.
- Act 3. Someone compares a snippet of code in the original article to his version in his preferred language and asserts that language is superior.
- Act 4. Someone, in this case me, points out that the exact same solution is possible in the original language. 
 I don't claim he's serious. In fact, I think he just ends his post on a tongue-in-cheek, semi-hopeful note.
He singled out the least interesting part of the blog post as the title of his submission, IMHO.
Which explains the title of this post: if you don't understand a language well enough to know whether a certain solution is possible, then you probably shouldn't be commenting on it.
 Act 4 has many variations, but usually people produce a different solution in the original language that approaches the solution in the alternative language.
- You need to source sqllib/db2profile in every shell where you are going to run commands from sqllib/bin:
confusion@ulm:~$ . sqllib/db2profile
- Running the db2val validation program multiple times in a row can yield different results. For instance, on the first attempt, it told me it couldn't find the sqllib/logs directory. However, that one was already present (perhaps created by db2val?) and when I ran the validation program again, it noticed that.
- If db2val fails to start your instance with
SQL1220N The database manager shared memory set cannot be allocated.
then you probably need to increase the maximum amount of shared memory the kernel may allocate:
sysctl -w kernel.shmmax=268435456
The default is 32M and this increases it to 256M, which turned out to be enough. For 64-bit systems, they advise pushing it to 1GB.
Edit: As moto-moi points out in the comments, this is a temporary change that will disappear with a reboot. To make it permanent, follow his instructions.
- If running a command fails with
SQL10007N Message "-1390" could not be retrieved. Reason code: "3".
then you probably forgot the first step I described here.
- If you try to make a connection from your favorite programming language and you receive
ROOT CAPABILITY REQUIRED
then you probably did a non-root install. Unfortunately, DB2 doesn't have any DB-level users: all user management, including authentication, is delegated to the OS. On a *nix system, the routine checking the password usually requires root privileges. The problem is that the file sqllib/security/db2ckpw needs to be owned by root and needs to have its setuid bit set:
chown root db2ckpw
chmod u+s db2ckpw
I first found the file sqllib/security32/db2ckpw, but that doesn't seem to be used on Linux. Might be the Windows version? Afterwards, perform a
db2 force applications all
db2stop
db2start
The first command breaks all connections; otherwise, the db2stop probably won't work.
- Keep an eye on sqllib/db2dump/db2diag.log: that's where interesting logging about DB2's functioning ends up. The db2diag command can be used to extract information from that file, and to tail it if you aren't doing so already.
- A warning I encountered in the db2diag log was
The user per process file descriptor limit of 1024 is less than the maxfilop setting of 30720
This can be solved by issuing
ulimit -n 32768
However, that usually won't work, as the default limits prevent you from going above 1024. To overcome that limit, add the following line to /etc/security/limits.conf
your_db2_user hard nofile 32768
After that, you need to open a new shell for that user (if you are running an X environment: restart X); once you issue the ulimit command in the new shell, its maximum number of file descriptors will be set to 32768. Now all you need to do is restart DB2.
final long YEAR_IN_MS = 1000 * 60 * 60 * 24 * 365;
System.out.println(YEAR_IN_MS);
What will be printed?
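Spoiler: the right-hand side is evaluated entirely in 32-bit int arithmetic and overflows before it is widened to long, so the printed value is not the number of milliseconds in a year. A minimal sketch of the bug and the usual fix (making one operand a long literal):

```java
public class YearInMs {
    public static void main(String[] args) {
        // All operands are ints, so the multiplication wraps around
        // at 32 bits before the result is widened to long.
        final long BROKEN = 1000 * 60 * 60 * 24 * 365;
        // The long literal forces the whole chain into 64-bit arithmetic.
        final long FIXED = 1000L * 60 * 60 * 24 * 365;
        System.out.println(BROKEN); // 1471228928
        System.out.println(FIXED);  // 31536000000
    }
}
```

31,536,000,000 doesn't fit in an int, so it wraps modulo 2^32 down to 1,471,228,928 before the assignment ever happens.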
Ehmmm, yeah, right. Thanks for making me feel stupid. Now I still don't know whether it's a library that I need to include to get the application to run, a standalone Flex viewer/debugger or a collection of examples. By clicking "More information" and reading further, I didn't actually get more information. Only after a PgDn I finally encountered

> Cairngorm is the lightweight micro-architecture for Rich Internet Applications built in Flex or AIR. A collaboration of recognized design patterns, Cairngorm exemplifies and encourages best-practices for RIA development advocated by Adobe Consulting, encourages best-practice leverage of the underlying Flex framework, while making it easier for medium to large teams of software engineers deliver medium to large scale, mission-critical Rich Internet Applications.
> The Cairngorm microarchitecture is intended as a framework for Enterprise RIA developers.

ah, framework. That's a word I know. So it's a library.
I have this kind of experience more often than I'd like: I read an introduction to something and afterwards I still have no idea what the thing is. Is it just me, or do you also feel that introductions should start with the simple facts and the purpose, and perhaps expand towards the developers' all-encompassing vision at the end, rather than starting there and leaving you at a plane of abstraction that makes you miss the actual simplicity of the thing?
The piece is excellent and details matter, but I'll attempt a summary anyway:
if you use a regular HashMap in a multithreaded environment, it seems the worst that can happen is that you incur some additional cache misses due to race conditions. In a lot of situations that is perfectly acceptable, and since wrapping the HashMap with Collections.synchronizedMap() incurs quite a performance penalty (and at the time, that was basically the only alternative), you are tempted to just leave the HashMap in. Don't. A 'put' operation may trigger a resize and redistribution of the HashMap's internal data structure, which can thoroughly hose the map if it is concurrently accessed during the restructuring, to the extent that your program goes into an infinite loop.
These days, there isn't any performance reason to decide in favor of the regular HashMap: java.util.concurrent.ConcurrentHashMap has excellent performance, even in singlethreaded applications. Still, I think I've made the mistake of using a regular HashMap somewhere in the past. The application has never malfunctioned (as far as I know), so it may well be that the chances of this bug occurring are so small that they are negligible for all practical purposes. Nevertheless, unless you want to do the math, replacing it with a ConcurrentHashMap is a safe bet.
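As a minimal sketch of the safe variant: several threads doing puts into a ConcurrentHashMap concurrently, a workload under which a plain HashMap could corrupt its internal table during a resize. (The thread count and key ranges here are arbitrary illustration values.)

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPuts {
    public static void main(String[] args) throws InterruptedException {
        // ConcurrentHashMap tolerates concurrent puts; a plain HashMap
        // could end up corrupted (or looping forever) mid-resize.
        final Map<Integer, Integer> map = new ConcurrentHashMap<>();

        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            final int offset = t * 10_000; // disjoint key range per thread
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    map.put(offset + i, i);
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }

        System.out.println(map.size()); // 40000
    }
}
```

Because each thread writes a disjoint key range, the final size is deterministic; the interesting part is that the concurrent resizes along the way are safe.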