Did NASA have a risk management plan?
On the morning of January 28, 1986, the world watched as NASA launched the Space Shuttle Challenger. Seventy-three seconds later, the shuttle broke apart, killing all seven crew members on board. Since the incident, many parties have taken a keen interest in examining NASA's risk management to understand exactly what went wrong, and it came as a surprise to most people to learn that NASA never had a risk management plan. It is alleged that NASA managers had known that "the design of the Solid Rocket Boosters (SRB) contained a potentially catastrophic flaw since 1977, but failed to address it properly" (Vaughan, 1997). Almost ten years later, the flaw still had not been dealt with, endangering the lives of every crew sent up on the shuttle. No plans had been put in place to deal with the faulty SRBs, and no precautions had been taken against the risks the boosters posed to the shuttle.
To what extent was the plan followed?
Given that NASA never really had a disaster management plan, to what extent is this allegation evident? Sources from within the shuttle program claim that on the fateful day of the explosion, NASA had "launch fever": the engineers knew the shuttle was not ready for launch, yet their advice not to launch was disregarded. They pointed out that they still needed more time to upgrade the O-rings in the solid rocket boosters and verify their functionality (Rowland, 1986). Apparently, these concerns from the engineers never made it up the chain of command. It was these lapses in judgment, and the lack of a proper communication channel between the engineers and their superiors, that contributed to the explosion.
Could NASA have identified all the risks using its existing risk identification process?
Various teams that assessed the explosion attribute the disaster to the faulty O-rings, which had never been upgraded in the nine years since they were built. The engineers admitted that the O-rings were designed to accommodate temperatures down to only 40°F and were never qualified for the 18°F they were exposed to on the launch pad. Since the O-rings were the single greatest risk to the shuttle, and the engineers knew about them prior to the launch, NASA was well placed to prevent the explosion had it dealt with the issue with the level of dedication it deserved. Most stakeholders, however, point out that the risk identification process in place at the time was not effective enough to uncover all the potential risks, hence the push to review the process (Fisher & Kingma, 2001).
Could anything have been done to prevent the disaster?
From a risk identification, assessment, and mitigation perspective, the disaster could have been averted by simply upgrading the O-rings. Another factor that came into play was the communication breakdown. If there had been a better channel of communication, the issues raised by the engineers would have been properly taken into consideration (Janis, 2008). But there is one underlying factor behind the explosion that critics fought to bring to light: the Launch Services Agreement (LSA) that NASA entered into with Hughes Communications Galaxy, Inc. in December 1985, roughly a month before the failed launch. The LSA required NASA to use its "best efforts" to launch ten of Hughes' satellites into space within a period of eight years, by September 1994. Critics argued that NASA was under pressure to deliver its end of the bargain within the stipulated period, hence the push for the January 1986 launch despite the concerns raised by the engineers.
References
Vaughan, D. (1997). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press.
Rowland, R. C. (1986). The relationship between the public and the technical spheres of argument: A case study of the Challenger Seven disaster. Communication Studies, 37(3), 136-146.
Fisher, C. W., & Kingma, B. R. (2001). Criticality of data quality as exemplified in two disasters. Information & Management, 39(2), 109-116.
Janis, I. L. (2008). Groupthink. IEEE Engineering Management Review, 36(1), 36.