Joel Spolsky has a damn interesting article on smelly code, Hungarian Notation, and misunderstandings. It’s fairly heavy going, delving into the rights and wrongs of writing code whose correctness is instantly recognisable. Perhaps the most interesting stuff is around the origins of Hungarian Notation [1] and the idea that the current, supremely annoying approach to Hungarian Notation came about because of a misunderstanding.
Apparently the original ‘inventor’ of HN had intended that the prefix for a variable should describe the behaviour or purpose of that variable, rather than simply its type. So, for example, the prefix cb might mean a count of bytes (e.g. a buffer size), or dw might mean a difference in widths (with d perhaps always denoting some sort of difference variable), regardless of whether those variables were integers, long ints, or whatever. Of course this makes a bucketload of sense to me. I could never figure out how prefixing variables with int or str was meant to make my life easier. It’s just extra typing for stuff that would be caught at compile time in any case.
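To make the contrast concrete, here’s a minimal C sketch of the two styles. The names and the function are mine, purely for illustration, not from Joel’s article:

```c
#include <string.h>

/* Apps Hungarian (the original intent): the prefix encodes purpose.
   sz = zero-terminated string, cb = count of bytes. Reading "cb" tells
   you what the number means, whatever its declared type happens to be. */
int CbBufferNeeded(const char *szInput)
{
    int cb = (int)strlen(szInput) + 1;  /* +1 byte for the terminator */
    return cb;
}

/* Systems Hungarian (the misunderstanding): the prefix merely repeats
   the declared type, which the compiler already knows. */
unsigned long dwFlags = 0;  /* "dw" = double word: tells you nothing new */
```

In the first style, `cb` would stay `cb` even if the variable were changed from `int` to `long`; in the second, every prefix is stale the moment a type changes.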
As Joel points out:
This was a subtle but complete misunderstanding of Simonyi’s intention and practice, and it just goes to show you that if you write convoluted, dense academic prose nobody will understand it and your ideas will be misinterpreted and then the misinterpreted ideas will be ridiculed even when they weren’t your ideas. So in Systems Hungarian you got a lot of dwFoo meaning “double word foo,” and doggone it, the fact that a variable is a double word tells you darn near nothing useful at all. So it’s no wonder people rebelled against Systems Hungarian.
Footnote 1: For the non-coders, Hungarian Notation is the practice of prefixing variable names with a few letters to denote the type or behaviour of that variable. In theory it saves having to scan back through the code to work out how and when a variable should be used.