In some programming languages the type of a variable is explicitly stated in its name (e.g. name$ and count% in Basic are a string and an integer). Others, such as Perl, will happily let you concatenate 1 and 6 and then take the square root of the result, still giving the right answer. Yet others, such as Pascal, have strong typing, while languages such as C do comparatively little to stop a string being used where a number is expected. Unintentional errors of this sort generally show up as pseudo-random numbers. The problem is made worse by the hundreds of different types typically found in complex object-oriented software.
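As a rough illustration (not taken from the original article), the C fragment below compiles with at most a warning, yet at run time the mismatched printf format reinterprets a string's address as an integer, so the program prints an apparently random number. The variable name is purely illustrative.

#include <stdio.h>

int main(void)
{
    char *price = "16";             /* a string, not a number */

    /* %d expects an int but receives a char*: the compiler will at most
       warn, and the result is formally undefined behaviour -- in practice
       an arbitrary-looking number is printed. */
    printf("total: %d\n", price);
    return 0;
}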
Hungarian notation attempts to remedy this by prefixing the variable's name with an abbreviation of its type. Examples from Microsoft (http://support.microsoft.com/default.aspx?scid=kb;EN-US;q110264) include dlgFileOpen, keyCaps and rptQtr1Earnings. The use of camel case makes the individual words within the name easy to pick out.
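A minimal sketch of how such prefixes look in C follows; the names and prefixes here (sz for a zero-terminated string, cch for a count of characters, dbl for a double) are illustrative and are not taken from the article or any particular API.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char   szQuarter[] = "Q1";              /* sz  = zero-terminated string */
    double dblEarnings = 1234.56;           /* dbl = double-precision float */
    size_t cchQuarter  = strlen(szQuarter); /* cch = count of characters    */

    /* The prefix makes the expected type visible at the point of use. */
    printf("%s (%zu characters): %.2f\n", szQuarter, cchQuarter, dblEarnings);
    return 0;
}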
Hungarian notation was invented by Charles Simonyi, a GUI programmer whom Microsoft hired from Xerox and who was born in Hungary. The name was probably a pun on Polish notation, a parenthesis-free way of writing expressions whose reverse form underlies stack-based evaluation in some computer languages.
Hungarian Notation: The Good, the Bad and the Ugly (http://ootips.org/hungarian-notation) sets out arguments both for and against using Hungarian notation.