Year 2000 problem

The Year 2000 problem (also known as the Y2K problem and the Millennium Bug) was a flaw in computer program design that caused some date-related processing to operate incorrectly for dates and times on or after January 1, 2000. It gave rise to widespread fear that critical industries (electricity, finance, and so on) and government functions would stop working at 12:00 AM on January 1, 2000, and at other critical dates that were billed as "event horizons." This fear both fueled and was fueled by enormous press coverage and speculation, as well as copious corporate and government reports.

Y2K (or Y2k) was the common slang for the year 2000 problem. It also went by Millennium Bug (though, strictly speaking, 2000 is the last year of the 20th century and does not start a new millennium).

It was thought that computer programs could stop working or produce erroneous results because they stored years with only two digits, so that the year 2000 would be represented as '00' and would not follow 1999 (i.e., '99') in numerical sequence. This would cause date comparisons to produce incorrect results. It was also thought that embedded systems using similar date logic might fail, causing utilities and other crucial infrastructure to break down.
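As a minimal illustration (a hypothetical sketch in Python, not code from any actual affected system), comparing years stored as two-digit fields places "00" before "99", so a date in 2000 looks earlier than one in 1999:

    # Hypothetical sketch: years stored as two characters compare incorrectly
    # once the century rolls over, because "00" sorts before "99".
    def is_past_due(due_yy: str, current_yy: str) -> bool:
        # Naive comparison on two-digit year fields, as many old programs did.
        return due_yy < current_yy

    print(is_past_due("00", "99"))  # True:  a year-2000 due date wrongly looks past due in 1999
    print("2000" < "1999")          # False: four-digit years compare correctly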

In the years prior to 2000, some corporations and governments, when they tested to determine the extent of the potential impact, reported that some of their critical systems really would need significant repairs or risk serious breakdowns. Throughout 1997 and 1998, there were news reports about major corporations and industries that had made only uncertain estimates of their preparedness. The vagueness of these reports, the apparent uncertainty about what sort of breakdowns were possible, and the fact that hundreds of billions of dollars were reportedly spent on remediation efforts were a major part of the reason for the public fear. Governments set up special committees to monitor remedial work and contingency planning, particularly for crucial infrastructure such as telecommunications and utilities, to ensure that the most critical services had fixed their own problems and were prepared for problems with others. By early to mid-1999, when the same corporations, industry organizations, and governments were claiming to be largely prepared, the public relations damage had been done. Only the safe passing of the main "event horizon" itself, January 1, 2000, fully quelled public fears.

The programming problem

The underlying programming problem was quite real. In the 1960s, computer memory and storage were scarce and expensive, and most data processing was done on punch cards which represented text data in 80-column records. Programming languages of the time, such as COBOL and RPG, processed numbers in their ASCII or EBCDIC character representations. They occasionally used an extra bit called a "zone punch" to save one character for a minus sign on a negative number, or compressed two digits into one byte in a form called binary-coded decimal, but otherwise processed numbers as straight text. Over time the punch cards were converted to magnetic tape, then to disk files, and later to simple databases like ISAM, but the structure of the programs usually changed very little. Popular software like dBase continued the practice of storing dates as text well into the 1980s and 1990s.
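As a small illustrative sketch (not drawn from any particular system), packed binary-coded decimal stores two decimal digits per byte, which is one reason a two-digit year was measurably cheaper to store than a four-digit one:

    # Illustrative sketch of packed binary-coded decimal (BCD):
    # two decimal digits per byte, one digit in each 4-bit nibble.
    def pack_bcd(digits: str) -> bytes:
        assert digits.isdigit() and len(digits) % 2 == 0
        return bytes((int(hi) << 4) | int(lo)
                     for hi, lo in zip(digits[::2], digits[1::2]))

    print(pack_bcd("99").hex())    # '99'   -> 1 byte  (0x99)
    print(pack_bcd("1999").hex())  # '1999' -> 2 bytes (0x19 0x99)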

Saving two characters for every date field was a significant saving at the time. Most programmers did not expect their programs to remain in use for many decades, so they did not consider two-digit years a significant problem. This attitude was not universal, however. The problem was first recognised in 1958 by Bob Bemer as a result of work on genealogical software. He spent the next twenty years trying to make programmers, IBM, the US government and ISO care about the problem, with little result. His recommendations included using the COBOL PICTURE clause to specify four-digit years for dates, something programmers could have done at any time from the release of the first COBOL compiler in 1961 onwards. However, complacency and lack of foresight prevented this advice from being followed. Despite magazine articles on the subject from 1970 onwards, most programmers only began to recognize Y2K as a looming problem in the 1990s, and even then inertia and complacency caused it to be mostly ignored until the last few years of the decade.

Storing a combined date and time in a fixed binary field is often considered a solution, but the possibility of software misinterpreting dates remains, because such date and time representations must be relative to a defined origin. Roll-over of such systems is still a problem, but it happens at varying dates and can fail in various ways. For example:

  • The typical Unix timestamp stores a date and time as a signed 32-bit integer representing the number of seconds since January 1, 1970, and will roll over in 2038. See Unix epoch.
  • The popular spreadsheet Microsoft Excel stores a date as a number of days since an origin (often erroneously called a Julian date). Such a day count stored in a 16-bit integer will overflow after approximately 179 years (i.e. 65,536 days). Unfortunately, some releases of the program count days from 1900 and others from 1904.
  • When a program written in the Perl programming language looks up the current year, it receives the year minus 1900. Many careless programmers incorrectly treated this value as the last two digits of the year. This mostly harmless bug caused many dynamically generated webpages to display January 1, 2000, as "1/1/19100" (a small sketch of this mistake, and of the 2038 roll-over, follows this list).
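A minimal sketch of two of these pitfalls, written in Python purely for illustration (in Perl it is localtime() that returns the year minus 1900):

    # Illustrative sketch: the "19100" string-building mistake and the
    # signed 32-bit roll-over of Unix timestamps described above.
    import datetime

    # Perl-style year handling: the library hands back (full year - 1900);
    # naively prepending "19" prints the year 2000 as "19100".
    year_offset = 2000 - 1900
    print("1/1/19" + str(year_offset))       # 1/1/19100  (wrong)
    print("1/1/" + str(1900 + year_offset))  # 1/1/2000   (correct)

    # The last moment representable in a signed 32-bit seconds-since-1970 counter:
    epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
    print(epoch + datetime.timedelta(seconds=2**31 - 1))
    # 2038-01-19 03:14:07+00:00 -- the "Year 2038" roll-over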

Even before January 1, 2000 arrived, there were also some worries, albeit lesser ones than those generated by Y2K, about September 9, 1999. This date could be written in the numeric format 9/9/99, which resembles the value 9999 used as an end-of-file code in old programming languages, and it was feared that some programs might unexpectedly terminate on that date. Another related problem was that the year 2000 was a leap year, even though years ending in "00" are normally not leap years. (A year is a leap year if it is divisible by 4, unless it is divisible by 100 but not by 400.) Like Y2K itself, both fears proved unfounded.
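For reference, the full rule in code (a short sketch; the year 2000 is a leap year precisely because it is divisible by 400):

    # Gregorian leap-year rule: divisible by 4, except century years,
    # which are leap years only when also divisible by 400.
    def is_leap_year(year: int) -> bool:
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    print(is_leap_year(2000))  # True  (divisible by 400)
    print(is_leap_year(1900))  # False (century year, not divisible by 400)
    print(is_leap_year(1996))  # True  (ordinary divisible-by-4 year)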

Public reaction to the problem

Some industries started experiencing related problems early in the 1990s as software began to process future dates past 1999. For example, in 1993, some people with financial loans due in 2000 received (incorrect) notices that they were 93 years past due. As the decade progressed, more and more companies experienced problems and lost money due to erroneous date data; meat-processing companies, for example, incorrectly destroyed large amounts of good meat because a computerized inventory system identified it as expired. There were many such minor "horror stories," and they received much play in the press as 2000 approached.

As the decade progressed, identifying and correcting or replacing affected computer systems and computerized devices became the major focus of information technology departments in most large companies and organizations. Millions of lines of programming code were reviewed and fixed during this period, and many corporations replaced major software systems with completely new ones that did not have the date-processing problems. It was frequently reported that corporations had already experienced at least minor Y2K problems, and some major ones, due to date look-ahead functions in code and embedded systems, but the full cost and seriousness of these problems was never made clear.

Y2K was the big media hype story of 1999. Public apprehension was tremendous, and in some quarters reached panic proportions. Some individuals stockpiled canned or dried food in anticipation of food shortages. A few commentators predicted a full-scale apocalypse; three of the best known were computer consultant Edward Yourdon, religious commentator Gary North, and economist Edward Yardeni. But when January 1, 2000 finally came, hardly any major problems were reported, though a large number had been expected. Ironically, many people were upset that there appeared to have been so much hype over nothing, precisely because the vast majority of problems had been fixed correctly. Some more sophisticated critics have suggested that much of the preventive effort was unnecessary: it would have been cheaper to skip the examination of non-critical systems and simply fix the few that failed after the event. Such conclusions are easy to draw with the benefit of hindsight, but the overhaul of many systems involved replacement with new, improved functionality, so in many cases the expenditure proved useful regardless.

Some items of interest:

  • The United States established the Year 2000 Information and Readiness Disclosure Act, which limited the liability of businesses that had properly disclosed their Y2K readiness.
  • Insurance companies sold insurance policies covering failure of businesses due to Y2K problems.
  • Attorneys organized and mobilized for Y2K class action lawsuits (which were not pursued).
  • No major failures of infrastructure were reported in the United States or even in many places where they had been widely expected, such as Russia.
  • The Y2K problem was framed primarily in terms of the Western (Gregorian) calendar; countries that officially use other calendars, such as Saudi Arabia, were nevertheless exposed to the extent that their computer systems also process Gregorian dates.
  • One theory has it that the Federal Reserve increased the money supply in 1999 to compensate for anticipated hoarding of cash by a frightened populace. The populace, however, was not frightened, and the flood of new money fueled a stock market high tide that went out in the spring of 2000.
  • Many organisations finally realised the critical importance of their IT infrastructure to their business, and put in place plans to keep it running and restore capability in case of disaster. Such planning may well have helped the relatively speedy return to functioning of New York's critical financial IT systems after the September 11, 2001 terrorist attack.
  • Speculatively, the concentration of Y2K spending on information infrastructure caused a slowdown in information technology spending in 2000 and 2001, and may eventually lead to higher productivity in future years.
  • The Long Now Foundation, which (in their words) "seeks to promote 'slower/better' thinking and to foster creativity in the framework of the next 10,000 years", has a policy of anticipating the Year 10,000 problem by writing all years with five digits. For example, they list "01996" as their year of founding.
  • One of the founders of the Long Now Foundation, Danny Hillis, was one of the few commentators who publicly predicted that Y2K bugs would cause no significant problems. (See "Why Do We Buy the Myth of Y2K?", Newsweek, May 31, 1999.)




 