Testing Concurrency

Whatever their preferences for rapid development, Schriever and his team insisted upon flight tests to detect technical problems. ICBMs were extremely complex, and some failures during initial testing were inevitable. When testing began in late 1956 and continued through 1957, it uncovered many problems.

Missile testing differed a great deal from aircraft testing, primarily because each unpiloted missile flew only once. For aircraft, the air force used the Unsatisfactory Report System, whereby test pilots, crew members, and maintenance personnel reported problems, which were then relayed to Wright Field engineers for analysis and resolution. The problem with missiles was the lack of pilots, crew members, and maintenance personnel during development testing. Instead, manufacturers worked with the air force to run tests and analyze results.23

Because each ICBM disintegrated upon completion of its test flight, flight tests needed to be minimized and preflight ground testing maximized. The high cost of ICBM flight tests made simulation a cost-effective option, along with the use of “captive tests,” where engineers tied the rocket onto the launch pad before it was fired. R-W engineers estimated that for ICBMs to achieve a 50% success rate in wartime, they should achieve 90% flight success in ideal testing conditions. With the limited number of flight tests, this could not be statistically proven. Instead, R-W thoroughly checked and tested all components and subsystems prior to missile assembly, reserving flight tests for observing interactions between subsystems and studying overall performance. Initial flight tests started with only the airframe, propulsion, and autopilot. Upon successful test completion, engineers then added more subsystems for each test until the entire missile had been examined.24
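Why a 90% reliability target "could not be statistically proven" with only a handful of flights can be seen from a standard zero-failure binomial argument. The sketch below is a modern illustration, not R-W's actual method; the 90% figure comes from the text, while the 95% confidence level and the formula are illustrative assumptions.

```python
import math

def flights_needed(reliability: float, confidence: float) -> int:
    """Consecutive failure-free flights needed to demonstrate
    `reliability` at the given confidence (zero-failure test).
    From R**n <= 1 - C, solve n >= ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Demonstrating 90% flight reliability at 95% confidence would take
# 29 consecutive successes -- far more than any early test series flew.
print(flights_needed(0.90, 0.95))  # -> 29
```

With the Atlas A series managing only three successes in eight flights, statistical demonstration of the target was out of reach, which is why R-W shifted the burden of proof to exhaustive ground testing of components and subsystems.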

By 1955, each of the military services recognized that rocket reliability was a problem, with ARDC sponsoring a special symposium on the subject.25 Statistics showed that two-thirds of missile failures were due to electronic components such as vacuum tubes, wires, and relays. Electromagnetic interference and radio signals caused a significant number of failures, and about 20% of the problems were mechanical, dominated by hydraulic leaks.26

Atlas’s test program proved no different. The first two Atlas A tests in mid-1957 ended with engine failures, but the third succeeded, leading eventually to a record of three successes and five failures for the Atlas A test series. Similar statistics marked the Atlas B and C series tests between July 1958 and August 1959. For Atlas D, the first missiles in the operational configuration, reliability improved to 68%. Of the thirteen failures in the Atlas D series, four were caused by personnel errors, five were random part failures, two were due to engine problems, and two were design flaws.27

Solving missile reliability problems proved to be difficult. Two 1960 accidents dramatized the problems. In March an Atlas exploded, destroying its test facilities at Vandenberg Air Force Base on the California coast. Then, in December, the first Titan I test vehicle blew up along with its test facilities at Vandenberg. Both explosions occurred during liquid propellant loading, a fact that further spurred development of the solid-propellant Minuteman missile. With missile reliability hovering in the 50% range for Atlas and around 66% for Titan, concerns increased both inside and outside the air force.28

While the air force officially told Congress that missile reliability approached 80%, knowledgeable insiders knew otherwise. One of Schriever’s deputies, Col. Ray Soper, called the 80% figure “optimistically inaccurate” and estimated the true reliability at 56% in April 1960.29 That same month, Brig. Gen. Charles Terhune, who had been Schriever’s technical director through the 1950s, entertained serious doubts:

The fact remains that the equipment has not been exercised, that the reliability is not as high as it should be, and that in all good conscience I doubt seriously if we can sit still and let this equipment represent a true deterrence for the American public when we know that it has shortcomings. In the aircraft program these shortcomings are gradually recognized through many flights and much training and are eliminated, if for no other reason, by the motivation of the crews to keep alive, but no such reason or motivation exists in the missile area. In fact, there is even a tendency to leave it alone so it won’t blow up.30

ICBM reliability problems drew air force and congressional investigations. An air force board with representatives from ARDC, AMC, and Strategic Air Command reported in November 1960, blaming inadequate testing and training as well as insufficient configuration and quality control. It recommended additional testing and process upgrades through an improvement program. After the dramatic Titan explosion the next month, the secretary of defense requested an investigation by the Weapon Systems Evaluation Group within the Office of the Secretary of Defense. A parallel study by the director of Defense Research and Engineering criticized rushed testing schedules. In the spring of 1961, the Senate Preparedness Investigating Subcommittee held hearings on the issue. The members concluded that testing schedules were too optimistic. With technical troubles continuing, its own officers concerned, and Congress applying pressure, Schriever’s group had to make ICBMs operationally reliable. To do so, the air force and R-W created new organizational processes to find problems and ensure high quality.31

Solving ICBM technical problems required rigorous processes of testing, inspection, and quality control. These required tighter management and improved engineering control. One factor that inadvertently helped was a temporary slowdown in funding between July 1956 and October 1957. Imposed by the Eisenhower administration as an economy measure, the funding reduction slowed development from “earliest possible” deployment (as had been originally planned) to “earliest practicable” deployment. As noted by one historian, this forced a delay in management decisions regarding key technical questions related to missile hardware configurations, basing, and deployment. This, in turn, allowed more time to define the final products.32

Reliability problems were the most immediate concern, and AMC officers began by collecting failure statistics: in late 1955 they required Atlas contractor General Dynamics to gather logistics data, including component failure statistics. In 1957 AMC extended this practice to other contractors, and it later placed these data in a new, centralized Electrical Data Processing Center.33 R-W scientists and engineers statistically rationed a certain amount of “unreliability” to each vehicle element, backing the allocations with empirical data. They then apportioned the required reliability levels as component specifications.34
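The logic of apportioning "unreliability" follows from the fact that a missile fails if any series element fails, so the system reliability is the product of the element reliabilities. The sketch below illustrates the simplest case, equal apportionment across elements; R-W's actual allocations were weighted by empirical failure data, and the numbers here are purely illustrative.

```python
def apportion_equal(system_reliability: float, n_elements: int) -> float:
    """Equal apportionment: each of n series elements must meet
    system_reliability ** (1/n) so the product hits the target."""
    return system_reliability ** (1.0 / n_elements)

# A 90%-reliable vehicle built from five equally weighted elements
# requires each element to be about 97.9% reliable.
per_element = apportion_equal(0.90, 5)
print(round(per_element, 3))  # -> 0.979
```

The product check, `per_element ** 5`, recovers the 0.90 system target, which is why even a modest system requirement translated into very demanding component specifications.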

Starting on Atlas D, scientists and engineers at Space Technology Laboratories (STL), the successor to R-W’s GMRD, began the Search for Critical Weaknesses Program, in which environmental tests were run to stress components “until failure was attained.” The scientists and engineers ran a series of captive tests, holding down the missile while firing the engines. All components underwent a series of tests to check environmental tolerance (tolerance for temperature, humidity, etc.), vibration tolerance, component functions, and interactions among assembled components. These required the development of new equipment such as vacuum chambers and vibration tables. By 1959, the Atlas program also included tests to verify operational procedures and training. STL personnel created a failure reporting system to classify failures and analyze them using the central database.35
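The classify-and-tally workflow of such a failure reporting system can be sketched in modern terms. The record fields and category names below are hypothetical, not STL's actual schema; the counts simply echo the Atlas D failure breakdown cited earlier in this section.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FailureReport:
    """One entry in a hypothetical centralized failure database."""
    missile: str
    subsystem: str
    cause: str

# Illustrative reports mirroring the thirteen Atlas D failures
reports = (
    [FailureReport("Atlas D", "ground ops", "personnel error")] * 4
    + [FailureReport("Atlas D", "electronics", "random part failure")] * 5
    + [FailureReport("Atlas D", "propulsion", "engine problem")] * 2
    + [FailureReport("Atlas D", "airframe", "design flaw")] * 2
)

# Tallying by cause surfaces the dominant failure modes
by_cause = Counter(r.cause for r in reports)
print(by_cause.most_common())
```

Aggregating reports this way is what let STL move from individual anomalies to the "common weaknesses of components" the next paragraph describes.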

Environmental testing, such as acoustic vibration and thermal vacuum tests, detected component problems. The failure reporting system also helped

[Image not available.]

Atlas D launch, October 1960. Atlas reliability began to improve with the D series. Courtesy John Lonnquest.

identify common weaknesses of components. Other new processes, such as the Search for Critical Weaknesses Program, looked for problems with components and for troublesome interactions. These processes identified the symptoms but did not directly address the causes of problems. For example, some component failures were caused by a mismatch between the vehicle flown and the design drawings. Solving problems, as opposed to simply identifying them, required the implementation of additional social and technical processes. Engineers and managers created the new social processes required on the Minuteman project, and from there they spread far beyond the air force.