There is a real danger of Y2K errors compromising nuclear safety. However, this danger does not lie in the weapons themselves. Nuclear missiles and warheads will not spontaneously launch or explode because of Y2K malfunctions. Launch control officers in submarines and ICBM launch control centers must physically turn keys, which are electromechanical rather than digital. Moreover, for the officers’ actions to be translated into a real launch, a correct “unlock” code must be entered into the warhead and missile systems. The chance that a Y2K error would randomly transmit the correct unlock code to a warhead is infinitesimal.
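How infinitesimal? The actual format and length of unlock codes are classified, but a back-of-envelope sketch shows the scale of the odds. Assuming, purely for illustration, a 12-digit numeric code:

```python
# Back-of-envelope odds that a random transmission error reproduces a
# valid unlock code. The 12-digit numeric format is a hypothetical
# assumption here; real code formats are classified.

CODE_DIGITS = 12
p_random_match = 1 / 10**CODE_DIGITS    # one chance in a trillion
print(f"P(random match) = {p_random_match:.0e}")   # -> 1e-12
```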
The threat of Y2K-induced nuclear war lies in two areas connected to daily nuclear operations: 1) Command and Control (C2) systems, which are primarily telecommunications systems that depend on automated routers and switches; and 2) early warning information systems, which include not only the satellites and radars for detecting an enemy launch but also the thousands of software programs and millions of lines of code for filtering, analyzing, correlating, and fusing the continuous stream of data so that humans can understand it. These “information technology” (IT) systems depend on giant databases that sort and store the incoming information by date. The software that breaks down and summarizes the data for human consumption also executes mathematical operations on date-dependent information. These systems are highly interconnected in a complex network, which could allow a Y2K error to spread unpredictably across operations.
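To see why this kind of date handling is fragile, consider a minimal sketch of the classic two-digit-year fault. This is a hypothetical illustration in Python, not code drawn from any actual warning system; the function name and the two-digit storage convention are assumptions for demonstration only.

```python
# Hypothetical illustration of the Y2K arithmetic fault: many legacy
# systems stored years as two digits to save memory.

def elapsed_years(start_yy: int, end_yy: int) -> int:
    """Naive elapsed-time calculation over two-digit years."""
    return end_yy - start_yy

# A record timestamped in 1999 ("99") compared against one in 2000 ("00"):
print(elapsed_years(99, 0))   # -99, not 1: any sorting, aging, or
                              # correlation logic built on this value
                              # silently misorders or discards data
```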
Commanders depend on this early warning information because Russia and the US are still on hair-trigger alert. US analysts at North American Aerospace Defense Command (NORAD) have three minutes to study the data and make a judgment on its meaning and validity, after which NORAD, Strategic Command (STRATCOM) in Nebraska, and top authorities in the National Military Command Center (NMCC) in the Pentagon have only 10 minutes to convene a large teleconference and make a decision on retaliation. These extremely short decision times stem from the policy of “launch on warning,” which demands that Russia and the US each be able to “win” a nuclear war. “Winning” means avoiding the preemption of one’s own forces by an enemy surprise attack while preempting as many enemy missile sites as possible in an offensive strike. Simply put, ICBMs have a 25-30 minute flight time between the two countries, and launch on warning mandates that US missiles get off the ground before Russian warheads arrive. The same applies to Russian doctrine.
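The arithmetic behind this pressure is stark. A rough sketch using the figures cited above (illustrative only; the real procedural sequence is more complicated and largely classified):

```python
# Rough decision-window arithmetic for launch on warning, using the
# figures cited above. Purely illustrative.

flight_time = 25   # low end of the 25-30 minute ICBM flight time
assessment  = 3    # NORAD judgment on meaning and validity
conference  = 10   # teleconference and retaliation decision

remaining = flight_time - assessment - conference
print(f"{remaining} minutes remain to transmit orders and launch")
# -> 12 minutes remain to transmit orders and launch
```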
Without reliable communications during a missile alert, C2 would quickly disintegrate, and the possibility of launch orders being issued on the basis of mistaken calculations would increase significantly through a combination of human and machine errors. If Y2K errors were to produce ambiguous or incomplete data, or complete blackouts of crucial surveillance sensors (possibly through indirect events such as power outages), the potential exists for escalatory actions by lower-level commanders who might interpret these events as evidence of an ongoing surprise strike by the opponent.
Even without Y2K, there is a disturbing history of computer-related failures in US-Russian operations. In 1980, a flawed 64-cent chip embedded deep in telephone switching hardware at NORAD suddenly started sending messages to other command posts that a Soviet attack was under way, triggering two raised alert levels within a three-day period. Nor was this incident an isolated case. According to nuclear expert Bruce Blair of the Brookings Institution, official correspondence between US commanders in later years refers obliquely to multiple computer-based mishaps, such as false reports from an infrared satellite that “could have resulted in unacceptable posturing of SAC forces.” And in a series of reports on the computer modernization programs at NORAD over the last 18 years, the General Accounting Office has described an operating environment plagued by flawed and lost data, bad screen displays for human operators, and sub-optimal system performance. The Russians had a similar near-accident in 1983, when satellite software mistook sun glare off clouds for five US Minuteman III ICBMs streaking toward the Soviet Union. Five minutes into the alert, a lower-ranking officer decided to tell his superiors that the data was false because he had a “gut feeling” that the US would not start a nuclear war in this fashion.
Because of this history of computer errors, redundancy in sensors and data processing nodes is essential to avoid accidents. Unfortunately, experts do not have high confidence in the ability of Russian radars to back up Russian satellites, and vice versa. Russia has only three operational satellites out of a necessary constellation of 7-9, and some of the satellites in orbit have drifted off-station and are useless for early warning purposes. The satellites can spot an American ICBM launch within a minute or two (which is good news!), but they cannot spot Trident submarine launches closer to Russian territory. Only the ground-based radar arrays can spot Trident launches, which gives Russian leaders very little time (possibly as little as 5 minutes) to analyze the data, make a decision, and issue launch orders. To make matters worse, Russia’s ground-based radar network is outdated, and two very large gaps in coverage would allow US Trident submarines to attack with impunity.
Russia does not have a well-funded, well-staffed Y2K repair program in place. Russian military officials have found date dependencies and vulnerabilities in early warning and C2 systems, but they have not yet begun repairs. This leaves little time to test Y2K fixes even if the systems do get “renovated” before 2000.
In contrast to Russia, many critical US systems have been “renovated,” and the Pentagon is now completing the testing phase of the Y2K remediation process. But a US Air Force official recently admitted in a Senate hearing that these tests included only “the thin line, the minimal number of [computer] systems required to execute the mission.” Commercial providers of telecommunications routers and switches were not incorporated into the test plans. (Even in the event of a nuclear crisis, Strategic Command may need the Baby Bells and other commercial telephone companies!) Nor were private suppliers of electricity included. Also, remediation of all of the major communications software and hardware for the US nuclear submarine force is behind schedule. The submarine systems not covered in the February tests include onshore antennas, signal processing software, automated message distribution software, and embedded systems for the encryption and decryption of secret messages. This leaves open the possibility that when the new millennium arrives, computers left out of the integrated test schedules will “infect” the tested systems or cause other disruptions to normal operations.
Dedicated testing programs can only reveal the presence of errors, never their complete absence. Moreover, computer failures rarely repeat themselves in exactly the same form; as a result, none of the documented US and Russian near-accidents could have been predicted beforehand by knowledgeable experts. The only guaranteed way to avoid accidental nuclear war is to end Russian and American dependence on the extravagantly complex computer systems that provide early warning information to commanders. And that can only be done by instituting mutually verifiable de-alerting procedures, replacing the current “warfighting” nuclear stance with a doctrine that reflects true post-Cold War international realities.