Thread: Y2K Revisited
Old 19th August 2017, 09:11 AM
northodox
AFA Member
Join Date: Oct 2010
Location: Wollongong
Posts: 12
Default Re: Y2K Revisited

Having been in the IT industry since 1983, I can attest to the potential risks the Y2K "bug" could have wrought.
Old business systems, written in COBOL, PL/1 or FORTRAN, might store numbers in Binary Coded Decimal (BCD). That's where 4 bits are used to store a decimal digit. You can store sixteen values in 4 bits (0-15) but BCD stores only 0 through to 9 as follows:

0 = 0000
1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
6 = 0110
7 = 0111
8 = 1000
9 = 1001

And the values 1010 through to 1111 never appear.
BTW, 4 bits is referred to as a nibble.

So in those days, a date such as 311299 (format DDMMYY) might be stored as:
0011 0001 0001 0010 1001 1001

Calculating the difference between dates (where the bug could really bite) needed a mini-algorithm over those date elements (allowing for leap years as well).
Obviously, finding a difference between 060100 (6 Jan 2000) and 311299 would fail. The program would treat year 00 as 1900 and the difference would be a large negative number. If it didn't error out.

As these systems were originally written in the '80s, who would have thought the code would still be in use in 2000? "Anyway, I'll be retired by then", I heard occasionally.

There were two common solutions:
1) change YY to YYYY. That is, the code needs changing to handle the four-digit year format and the dates need to be stored in databases in the extended format. This is the best solution - although more effort - as it will last well into the future. It's amazing how many "ancient" COBOL and FORTRAN programs are still ticking over in the business world.
2) a simple code change: wherever a 2 digit year is used, add logic so that any year greater than 50 is treated as 1900 plus the year and any year less than 51 is treated as 2000 plus the year. This is a simpler solution but introduces a Y2051 bug. Plus, that cutover date will differ for different purposes. For a birth date, in Y2000, the cutover might need to be 10; for a loan start date, the cutover might be 60. The inconsistency makes for messiness.

Which solution is used depends on the criticality of the program and the likely future of the software. These days, we laugh at software lasting longer than 5 years, but that wasn't always the case. Software has become commoditised along with so many other products.

So, Y2K could have caused serious issues but it is hard to imagine utilities failing to function.
Nowadays, dates are commonly stored in enterprise systems as full binary, using 32 or 64 bits - some even 128. Some systems store dates as the number of seconds since 1 January 1970. That makes it easy to find the difference between two dates (down to the nearest second!) but those systems storing dates like this in a signed 32 bit field will experience a Y2038 bug, as that's when the seconds since 1-Jan-1970 reach 2^31 (on 19 January 2038). Old systems using two digit years with a cutover of 40 or more will outlast them!