The Secret of Apollo

Committees, Hierarchies, and Configuration Management

Between 1962 and 1965, NASA’s organization changed from a series of engi­neering committees to a mixture of committees overlaid with a managerial hierarchy. Similarly, between 1950 and 1964, JPL’s committee structure gave way to hierarchical project management. Schriever’s entrepreneurial Western Development Division also used ad hoc committees from 1953 to 1955, sepa­rated from the rest of the air force hierarchy. From 1956 onward, the air force hierarchy increasingly asserted control. These shifts signified changes in the balance of power between the hierarchical models of organization used by military and industrial managers, and the informal committees used by engi­neers and scientists.

The novel technologies of the 1950s required the services of scientists and engineers, who through their monopoly on technical capabilities influenced events. Schriever’s alliance with scientists in the 1940s and 1950s brought sci­entists to the forefront of the air force’s development efforts. In NASA’s first few years, engineers at the field centers effectively controlled NASA and its programs. In both cases, scientists and engineers extensively used committee structures to organize activities. These working groups generated and used the information necessary to create the new technologies. Knowing that Con­gress would pay the bills, scientists and engineers essentially ignored costs. Indeed, if they could have made correct cost predictions, these would to some extent have invalidated their claim to be creating radically new technologies. Schriever shared the scientists’ ‘‘visionary’’ bias. He argued for radical new weapons and the methods to rapidly create them by reminding everyone of the Soviet threat.

By the 1960s, the need for radical weaponry declined. When in 1961 re­connaissance satellites showed the missile gap to be illusory, arch-manager Robert McNamara, the new secretary of defense, immediately imposed hierar­chical structures and centralized information systems to assert control. Simi­larly, the air force asserted control over Schriever’s organization when re­liability problems led to embarrassing questions from Congress. Ironically, the methods used by the air force and the Department of Defense to con­trol Schriever were the methods that Schriever’s group had created to control ballistic missiles. NASA’s turn came after 1963, as Congress clamped down in response to NASA’s wildly inaccurate cost estimates. Air force R&D veterans Mueller and Phillips imposed hierarchy and information systems over NASA’s engineering committees.

NASA’s early history showed that committees could successfully develop reliable technologies, but only when given a blank check and top priority. On the manned space flight programs, NASA’s engineers and contract person­nel had ample motivation. With clear goals and a national mandate, formal control mechanisms were unnecessary. Informal methods worked well both inside and outside NASA, as NASA engineers exerted firm control of con­tractors through informal but extensive contractor penetration. As long as Congress was willing to foot the bill for huge overruns, NASA’s committees sufficed. When motivation was overwhelmingly positive, goals were clear, and funding was generous, coordination worked.

The history of ELDO illustrates how critical motivation and authority are to an organization’s success. ELDO’s primary function was to coordinate several national programs through committees whose only authority was their ability to persuade others. Unfortunately for ELDO, the national governments and industrial companies were at least as concerned with protecting their technologies from their national and industrial partners as they were with cooperation. By 1966, both the national organizations and ELDO began to recognize problems with this situation, and they created an Industrial Integrating Group to disseminate information. Without authority, neither the integrating group nor ELDO could bridge the communication gaps between contractors, leading to a series of interface failures and ultimately to ELDO’s demise. Without motivation, authority, or unitary purpose, ELDO failed.

The trick to designing new technologies within a predictable budget was to unite the creative skills of scientific and engineering working groups with the cost-consciousness of managerial hierarchies. JPL and the air force devel­oped the first link: configuration control. In both organizations, configuration control developed as a contractual association between the government and industry. Industrial contractors already used the design freeze as the break­point between design and manufacturing. Configuration control in the air force linked the design frozen by the engineers to the missile as actually built. When managers found that a number of missile failures resulted from mis­matches between the engineering design and the ‘‘as launched’’ missile, air force managers implemented a system of paperwork to link design drawings to specific hardware components.

At JPL, configuration control developed because JPL designed the Sergeant missile, while industrial contractor Sperry was to build it. Deputy Program Director Jack James realized Sperry needed design information as soon as possible, so he required JPL engineers to document and deliver information in several stages. At each stage, James and others integrated the various engineers’ information into a single package. James then controlled design changes by requiring engineers to communicate with him before making changes. This gave James the opportunity to rule on the necessity of the change and to ensure communication with other designers to coordinate any other implications of the change. This ‘‘progressive design freeze’’ worked so well that James imported it into his next project, Mariner. James and other managers expanded the concept on Mariner and its successors to include cost and schedule change estimates with every technical modification, thus tying costs and schedules to technical designs.

The air force also realized that tying cost and schedule estimates to engi­neering changes was a way to control engineers. Air force managers and engi­neers from The Aerospace Corporation expanded the concept to include the development of specifications. Soon the air force made specifications contrac­tually binding and tied specification changes to cost and schedule estimates, just as it did for design drawings and hardware. This system of change con­trol for a hardware configuration, tied to cost and schedule estimates, became known as configuration management. Minuteman program director Phillips recognized configuration management as an important tool, and he imposed it on Apollo to coordinate and control not only contractors but NASA field centers.

Configuration management satisfied the needs of managers, systems engineers, financial experts, and legal personnel. Systems engineers used configuration management to coordinate the designs of the subsystem engineers. Managers found configuration management an ideal lever to control scientists, engineers, and contractors because these groups could no longer make changes without passing through a formal change control board (CCB). Financial and legal experts benefited from configuration management because it tied cost and schedule changes to contractual documentation. One business school professor believed that configuration management was influential as a systematic way to resolve group conflicts in NASA projects.2

The CCB was the link between the formal hierarchy and the informal working groups. At the board, the project manager and project controller evaluated changes from the standpoint of cost and schedules. Disciplinary representatives evaluated change requests to see if they affected other design areas, while systems engineers determined if the proposed change caused higher-level interactions among components. The CCB ensured frequent communication between the groups. Through the linkage of hierarchies and working groups, and the processes that tied paperwork to hardware, configuration management was the heart of systems management and the key to managerial and military control over scientists and engineers.
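
The review cycle just described lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of a change control board in Python: a change request carries its estimated cost and schedule impact, and the board approves it only if project reserves can absorb the change and no discipline reports an interface conflict. All names, fields, and numbers are invented for illustration; this is not the historical Apollo or Minuteman paperwork.

```python
# Minimal, hypothetical sketch of a change control board (CCB) review cycle.
from dataclasses import dataclass, field


@dataclass
class ChangeRequest:
    title: str
    affected_subsystems: list        # e.g. ["propulsion", "electrical"]
    cost_delta_dollars: float        # estimated cost impact of the change
    schedule_delta_days: int         # estimated schedule impact
    rationale: str


@dataclass
class ConfigurationControlBoard:
    budget_reserve: float            # funds the project manager can still commit
    schedule_reserve_days: int
    decisions: list = field(default_factory=list)

    def evaluate(self, cr: ChangeRequest, interface_conflicts: list) -> bool:
        """Approve a change only if cost, schedule, and interface reviews all pass."""
        cost_ok = cr.cost_delta_dollars <= self.budget_reserve
        schedule_ok = cr.schedule_delta_days <= self.schedule_reserve_days
        # Discipline representatives and systems engineers supply any conflicts
        # the change would cause in other design areas.
        interfaces_ok = not interface_conflicts
        approved = cost_ok and schedule_ok and interfaces_ok
        self.decisions.append((cr.title, approved))
        if approved:
            self.budget_reserve -= cr.cost_delta_dollars
            self.schedule_reserve_days -= cr.schedule_delta_days
        return approved


# Example: a valve redesign is approved because reserves can absorb it and no
# other subsystem reports an interface conflict.
board = ConfigurationControlBoard(budget_reserve=250_000, schedule_reserve_days=30)
change = ChangeRequest(
    title="Replace propellant valve actuator",
    affected_subsystems=["propulsion", "electrical"],
    cost_delta_dollars=40_000,
    schedule_delta_days=10,
    rationale="Qualification test failure at low temperature",
)
print(board.evaluate(change, interface_conflicts=[]))   # True
```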

Technical Challenges in Missile and Space Projects

Missiles were developed from simple rocketry experimentation between World Wars I and II. Experimenters such as Robert Goddard and Frank Malina in the United States, von Braun in Germany, Robert Esnault-Pelterie in France, and Valentin Glushko in the Soviet Union found rocketry experimentation a dangerous business. All of them had their share of spectacular mishaps and explosions before achieving occasional success.5

The most obvious reason for the difficulty of rocketry was the extreme volatility of the fluid or solid propellants. Aside from the dangers of handling exotic and explosive materials such as liquid oxygen and hydrogen, alcohols, and kerosenes, the combustion of these materials had to be powerful and controlled. This meant that engineers had to channel the explosive power so that the heat and force neither burst nor melted the combustion chamber or nozzle. Rocket engineers learned to cool the walls of the combustion chamber and nozzle by maintaining a flow of the volatile liquids near the chamber and nozzle walls to carry off excess heat. They also enforced strict cleanliness in manufacturing, because impurities or particles could and did lodge in valves and pumps, with catastrophic results. Enforcement of rigid cleanliness standards and methods was one of many social solutions to the technical problems of rocketry.6

Engineers controlled the explosive force of the combustion through care­fully designed liquid feed systems to smoothly deliver fuel. Instabilities in the fuel flow caused irregularities in the combustion, which often careened out of control, leading to explosions. Hydrodynamic instability could also ensue if the geometry of the combustion chamber or nozzle was inappropriate. Engi­neers learned through experimentation the proper sizes, shapes, and relation­ships of the nozzle throat, nozzle taper, and combustion chamber geometry. Because of the nonlinearity of hydrodynamic interactions, which implied that mathematical analyses were of little help, experimentation rather than theory determined the problems and solutions. For the Saturn rocket engines, von Braun’s engineers went so far as to explode small bombs in the rocket ex­haust to create hydrodynamic instabilities, to make sure that the engine de­sign could recover from them.7 For solid fuels, the shape of the solid deter­mined the shape of the combustion chamber. Years of experimentation at JPL eventually led to a star configuration for solid fuels that provided steady fuel combustion and a clear path for exiting hot gases. Once engineers determined the proper engine geometry, rigid control of manufacturing became utterly critical. The smallest imperfection could and did lead to catastrophic failure. Again, social control in the form of inspections and testing was essential to ensuring manufacturing quality.

Rocket engines create severe structural vibrations. Aircraft designers rec­ognized that propellers caused severe vibrations, but only at specific frequen­cies related to the propeller rotation rate. Jet engines posed similar prob­lems, but at higher frequencies corresponding to the more rapid rotation of turbojet rotors. Rocket engines were much more problematic because their vibrations were large and occurred at a wide range of nearly random frequen­cies. The loss of fuel also changed a rocket’s resonant frequencies, at which the structure bent most readily. This caused breakage of structural joints and the mechanical connections of electrical equipment, making it difficult to fly sensitive electrical equipment such as vacuum tubes, radio receivers, and guidance systems. Vibrations also occurred because of fuel sloshing in the emptying tanks and fuel lines. These ‘‘pogo’’ problems could be tested only in flight.

Vibration problems could not generally be solved through isolated tech­nical fixes. Because vibration affected electrical equipment and mechanical connections throughout the entire vehicle, this problem often became one of the first so-called system issues — it transcended the realm of the structural engineer, the propulsion expert, or the electrical engineer alone. In the 1950s, vibration problems led to the development of the new discipline of reliability and to the enhancement of the older discipline of quality assurance, both of which crossed the traditional boundaries between engineering disciplines.8

Reliability and quality control required the creation or enhancement of so­cial and technical methods. First, engineers placed stronger emphasis on the selection and testing of electronic components. Parts to be used in missiles had to pass more stringent tests than those used elsewhere, including vibra­tion tests using the new vibration, or ‘‘shake,’’ tables. Second, technicians as­sembled and fastened electronic and mechanical components to electronic boards and other components using rigorous soldering and fastening meth­ods. This required specialized training and certification of manufacturing workers. Third, to ensure that manufacturing personnel followed these pro­cedures, quality assurance personnel witnessed and documented all manufac­turing actions. Military authorities gave quality assurance personnel indepen­dent reporting and communication channels to avoid possible pressures from contractors or government officials. Fourth, all components used in missiles and spacecraft had to be qualified for the space environment through a series of vibration, vacuum, and thermal tests. The quality of the materials used in flight components, and the processes used to create them, had to be tightly controlled as well. This entailed extensive documentation and verification of materials as well as of processes used by the component manufacturers. Orga­nizations traced every part from manufacturing through flight.9
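
These practices amount to keeping an auditable record for every flight part: which environmental tests it passed and who witnessed each manufacturing step, from fabrication through flight. The sketch below shows one hypothetical way such a traceability record might be represented; the vibration, vacuum, and thermal test list follows the text, but the data layout and names are illustrative assumptions, not a historical format.

```python
# Hypothetical traceability record for a flight component.
from dataclasses import dataclass, field
from typing import Optional

REQUIRED_QUAL_TESTS = ("vibration", "vacuum", "thermal")


@dataclass
class FlightComponent:
    part_number: str
    serial_number: str
    qualification: dict = field(default_factory=dict)   # test name -> passed?
    history: list = field(default_factory=list)         # (step, operator, witness)

    def record_step(self, step: str, operator: str, witness: Optional[str]) -> None:
        # Quality assurance personnel witness and document every manufacturing action.
        self.history.append((step, operator, witness))

    def is_flight_ready(self) -> bool:
        quals_passed = all(self.qualification.get(t) for t in REQUIRED_QUAL_TESTS)
        all_steps_witnessed = all(w is not None for _, _, w in self.history)
        return quals_passed and all_steps_witnessed


part = FlightComponent("VALVE-17", "SN-0042")
part.record_step("solder harness connections", operator="technician A", witness="QA inspector 3")
part.qualification.update(vibration=True, vacuum=True, thermal=True)
print(part.is_flight_ready())   # True only if all tests passed and every step was witnessed
```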

Only when engineers solved the vibration and environmental problems could they be certain the rocket’s electronic equipment would send the signals necessary to determine how it was performing. Unlike aircraft, rockets were automated. Although automatic machinery had grown in importance since the eighteenth century, rockets took automation to another level. Pilots could fly aircraft because the dynamics of an aircraft moving through the air were slow enough that pilots could react sufficiently fast to correct deviations from the desired path and orientation of the aircraft. The same does not hold true for rockets. Combustion instabilities inside rocket engines occur in tens of milliseconds, and explosions within 100 to 500 milliseconds thereafter, leaving no time for pilot reaction. In addition, early rockets had far too little thrust to carry something as heavy as a human.

Because rockets and satellites were fully automated, and also because they went on a one-way trip, determining if a rocket worked correctly was (and is) problematic. Engineers developed sophisticated signaling equipment to send performance data to the ground. Assuming that this telemetry equip­ment survived the launch and vibration of the rocket, it sent sensor data to a ground receiving station that recorded it for later analysis. Collecting and processing these data was one of the first applications of analog and digital computing. Engineers used the data to determine if subsystems worked cor­rectly, or more importantly, to determine what went wrong if they did not. The military’s system for problem reporting depended upon pilots, but con­tractors and engineers would handle problem reporting for the new technolo­gies — a significant social change. Whereas in the former system, the military tested and flew aircraft prototypes, for the new technologies contractors flew prototypes coming off an assembly line of missiles and the military merely witnessed the tests.10

Extensive use of radio signals caused more problems. Engineers used radio signals to send telemetry to ground stations and to send guidance and destruct signals from ground stations to rockets. They carefully designed the electronics and wiring so that electromagnetic waves from one wire did not interfere with other wires or radio signals. As engineers integrated numerous electronic packages, the interference of these signals occasionally caused failures. The analysis of ‘‘electromagnetic interference’’ became another systems specialty.11

Automation also included the advanced planning and programming of rocket operations known as sequencing. Rocket and satellite engineers de­veloped automatic electrical or mechanical means to open and close propul­sion valves as well as fire pyrotechnics to separate stages, release the vehicle from the ground equipment, and otherwise change rocket functions. These ‘‘sequencers’’ were usually specially designed mechanical or electromechani­cal devices, but they soon became candidates for the application of digital computers. A surprising number of rocket and satellite failures resulted from improper sequencing or sequencer failures. For example, rocket stage sepa­ration required precise synchronization of the electrical signals that fired the pyrotechnic charges with the signals that governed the fuel valves and pumps controlling propellant flow. Because engineers sometimes used engine turbo­pumps to generate electrical power, failure to synchronize the signals for sepa­ration and engine firing could lead to a loss of sequencer electrical power. This in turn could lead to a collision between the lower and upper stages, to an engine explosion or failure to ignite, or to no separation. The solution to se­quencing problems involved close communication among a variety of design and operations groups to ensure that the intricate sequence of mechanical and electrical operations took place in the proper order.12
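
A sequencer of this kind can be thought of as an ordered table of timed commands plus consistency checks, for example that stage separation and upper-stage ignition stay closely synchronized. The sketch below is a deliberately simplified, hypothetical illustration; the events and timings are invented, not drawn from any historical vehicle.

```python
# Hypothetical launch sequencer: an ordered table of timed commands plus a
# check that separation and upper-stage ignition remain synchronized.
SEQUENCE = [
    (0.0,   "release hold-down clamps"),
    (120.0, "first-stage engine cutoff"),
    (120.5, "fire stage-separation pyrotechnics"),
    (121.0, "open upper-stage propellant valves"),
    (121.5, "ignite upper-stage engine"),
]


def validate(sequence, max_sep_to_ignition=2.0):
    """Reject sequences that are out of order or that leave too long a gap
    between separation and upper-stage ignition (a loss-of-power risk when
    the engine turbopumps also generate the sequencer's electrical power)."""
    times = [t for t, _ in sequence]
    if times != sorted(times):
        return False
    sep = next(t for t, name in sequence if "separation" in name)
    ign = next(t for t, name in sequence if "ignite" in name)
    return 0.0 < ign - sep <= max_sep_to_ignition


print(validate(SEQUENCE))   # True for the illustrative timeline above
```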

Because satellites traveled into space by riding on rockets, they shared some of the same problems as rockets, as well as having a few unique features. Satellites had to survive launch vehicle vibrations, so satellite designers applied strict selection and inspection of components, rigorous soldering methods, and extensive testing. Because of the great distances involved, particularly for planetary probes, satellites required very high performance radio equipment for telemetry and for commands sent from the ground.13

Thermal control posed unique problems for spacecraft, in part because of the temperature extremes in space, and in part because heat is difficult to dissipate in a vacuum. On Earth, designers explicitly or implicitly use air currents to cool hot components. Without air, spacecraft thermal design re­quired conduction of heat through metals to large surfaces where the heat could radiate into space. Engineers soon designed large vacuum chambers to test thermal designs, which became another systems specialty.
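
The sizing problem this creates can be made concrete with the Stefan-Boltzmann law: the power a surface can radiate grows with its emissivity, its area, and the fourth power of its temperature. The short calculation below is a rough, assumed-numbers illustration only; a real thermal design would also account for absorbed solar and albedo heating and for the conduction paths from components to the radiating surface.

```python
# Rough radiator sizing from the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# All input numbers are assumed for illustration.
SIGMA = 5.670e-8          # W m^-2 K^-4, Stefan-Boltzmann constant
heat_load_w = 200.0       # electronics dissipation to reject (assumed)
emissivity = 0.85         # radiator surface emissivity (assumed)
temp_k = 300.0            # allowable radiator temperature (assumed)

area_m2 = heat_load_w / (emissivity * SIGMA * temp_k ** 4)
print(f"required radiating area: {area_m2:.2f} m^2")   # about 0.51 m^2
```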

Unlike the space thermal environment, which could be reproduced in a vacuum chamber, weightlessness could not be simulated by Earth-based equipment. The primary effect of zero gravity was to force strict standards of cleanliness in spacecraft manufacturing. On Earth, dust, fluids, and other contaminants eventually settle to the bottom of the spacecraft or into corners where air currents slow. In space, fluids and particles float freely and can dam­age electrical components. Early spacecraft did not usually have this problem because many of them were spin stabilized, meaning that engineers designed them to spin like a gyroscope to hold a fixed orientation. The spin caused particles to adhere to the outside wall of the interior of the spacecraft, just as they would on the ground where the spacecraft would have been spin tested.

Later spacecraft like JPL’s Ranger series used three-axis stabilization whereby the spacecraft did not spin. These spacecraft, which used small rocket engines known as thrusters to hold a fixed orientation, were the first to en­counter problems with floating debris. For example, the most likely cause of the Ranger 3 failure was a floating metal particle that shorted out two adjacent wires. To protect against such events, engineers developed conformal coating to insulate exposed pins and connectors. Designers also separated electrically hot pins and wires so that floating particles could not connect them. Engi­neers also reduced the number of particles by developing clean rooms where technicians assembled and tested spacecraft.

Many problems occurred when engineers or technicians integrated com­ponents or subsystems, so engineers came to pay particular attention to these interconnections, which they called interfaces. Interfaces are the boundaries between components, whether mechanical, electrical, human, or “logical,” as in the case of connections between software components. Problems between components at interfaces are often trivial, such as mismatched connectors or differing electrical impedance, resistance, or voltages. Mismatches between humans and machines are sometimes obvious, such as a door too high for a human to reach, or an emergency latch that takes too long to operate. Others are subtle, such as a display that has too many data or a console with distract­ing lights. Finally, operational sequences are interfaces of a sort. Machines can be (and often are) so complicated to operate that they are effectively unusable. Spacecraft, whether manned or unmanned, are complex machines that can be operated only by people with extensive training or by the engineers who built them. Greater complexity increases the potential for operator error. It is probably more accurate to classify operator errors as errors in design of the human-machine interface.14
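
The ‘‘trivial’’ electrical mismatches listed above are exactly what engineers later recorded for each side of a connection in interface documentation, which is why they can in principle be caught by simple comparison before hardware ever meets. The sketch below is a hypothetical illustration of that idea, not the format of any historical interface control document; the connector names and values are invented.

```python
# Hypothetical interface record: each side of a connection declares its
# connector type and electrical characteristics, and a simple comparison
# flags the "trivial" mismatches that nevertheless cause flight failures.
def interface_mismatches(side_a: dict, side_b: dict,
                         keys=("connector", "voltage_v", "impedance_ohm")):
    return [k for k in keys if side_a.get(k) != side_b.get(k)]


launch_vehicle = {"connector": "37-pin circular", "voltage_v": 28, "impedance_ohm": 50}
spacecraft     = {"connector": "37-pin circular", "voltage_v": 24, "impedance_ohm": 50}

print(interface_mismatches(launch_vehicle, spacecraft))   # ['voltage_v']
```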

Many technical failures can be attributed to interface problems. Simple problems are as likely to occur as complex ones. The first time the Ger­mans and Italians connected their portions of the Europa rocket, the diame­ters of the connecting rings did not match. Between the British first stage and the French second stage, electrical sequencing at separation caused com­plex interactions between the electrical systems on each stage, leading ulti­mately to failure. Other interface problems were subtle. Such was the failure of Ranger 6 as it neared the Moon, ultimately traced to flash combustion of propellant outside of the first stage of the launch vehicle, which shorted out some poorly encased electrical pins on a connector between the launch ve­hicle and the ground equipment. Because the electrical circuits connected the spacecraft to the offending stage, this interface design flaw led to a spacecraft failure three days later.15

Some farsighted managers and engineers recognized that interfaces repre­sented the connection not simply between hardware but also between indi­viduals and organizations. Differences in organizational cultures, national characteristics, and social groups became critical when these groups had to work together to produce an integrated product. As the number of organiza­tions grew, so too did the problems of communication. Project managers and engineers struggled to develop better communication methods.

As might be expected, international projects had the most difficult prob­lems with interfaces. The most severe example was ELDO’s Europa I and Europa II projects. With different countries developing each of three stages, a test vehicle, and the ground and telemetry equipment, ELDO had to deal with seven national governments, military and civilian organizations, and national jealousies on all sides. Within one year after its official inception, both ELDO and the national governments realized that something had to be done about the ‘‘interface problem.’’ An Industrial Integrating Group formed for the pur­pose could not overcome the inherent communication problems, and every one of ELDO’s flights that involved multiple stages failed. All but one failed because of interface difficulties.16

By the early 1960s, systems engineers developed interface control docu­ments to record and define interfaces between components. On the manned space projects, special committees with members from each contributing or­ganization worked out interfaces between the spacecraft, the rocket stages, the launch complex, and mission operations. After the fledgling European Space Research Organisation began to work with American engineers and managers from Goddard Space Flight Center, the first letter from the American project manager to his European counterpart was a request to immediately begin work on the interface between the European spacecraft and the American launch vehicle.17

Systems management became the standard for missile and space systems because it addressed many of the major technical issues of rockets and space­craft. The complexity of these systems meant that coordination and commu­nication required greater emphasis in missile and space systems than they did in many other contemporary technologies. Proper communication helped to create better designs. However, these still had to be translated into techni­cal artifacts, inspected and documented through rigid quality inspections and testing during manufacturing. Finally, the integrated system had to be tested on the ground and, if possible, in flight as well. The high cost and “nonreturn” of each missile and spacecraft meant that virtually every possible means of ground verification paid off, helping to avoid costly and difficult-to-analyze flight failures. All in all, the extremes of the space environment, automation, and the volatility of rocket fuels led to new social methods that emphasized considerable up-front planning, documentation, inspections, and testing. To be implemented properly, these social solutions had to satisfy the needs of the social groups that would have to implement them.

JPL’s Journey from Missiles to Space

Pride in accomplishment is not a self-sufficient safeguard when
undertaking large scale projects of international significance.

— Kelley Board, after Ranger 5 failure

The Jet Propulsion Laboratory (JPL), located in Pasadena, California, and managed by the California Institute of Technology, began as a graduate stu­dent rocket project in the late 1930s and developed into the world’s leading institution for planetary space flight. Between 1949 and 1960, JPL transformed itself twice: first, from a small research organization to a large engineering de­velopment institution, and second, from an organization devoted to military rocketry to one focusing on scientific spacecraft.1

JPL’s academic researchers did not initially recognize the many differences between a hand-crafted research vehicle and a mass-produced, easily oper­ated weapon, or highly reliable planetary probes. The switch from research to development required strict attention to thousands of details. Properly build­ing and integrating thousands of components was not an academic problem but an organizational issue. JPL’s engineering researchers learned to become design engineers, and in so doing some of them became systems engineers.

Learning systems engineering on tactical ballistic missiles, JPL managers and engineers modified missile practices to design and operate spacecraft. The most significant missile practices that carried over to spacecraft were organizational: component testing and reliability as well as procedures for change control. A few JPL managers learned these lessons quickly. However, it took a number of embarrassing failures for JPL’s academically oriented engineers and managers to accept the structured methods of systems management.

JPL independently recreated processes that the air force developed on its ballistic missile programs: systems engineering, project management, and configuration control. The history of the two organizations shows that the processes were the result not of individual idiosyncrasies but of larger technical and social forces.

Organizing ELDO for Failure

The failure of F11 in November 1971 brought home to the member states — and this was indeed the only positive point it achieved — the necessity for a complete overhaul of the pro­gramme management methods.

— General Robert Aubiniere, 1974

World War II left Europe devastated and exhausted, while the United States emerged as the world’s most powerful nation, both militarily and economi­cally. Western Europeans feared the Soviet Union’s military power and totali­tarian government, but they worried almost as much about America’s im­mense economic strength. Some asserted that American dominance flowed from the large size of American domestic markets or the competitive nature of American capitalism, while others believed that technological expertise was the primary force creating ‘‘gaps’’ between the United States and Europe. By the late 1960s, the “technology gap’’ was a hot topic for politicians and econo­mists on both sides of the Atlantic.

Investigations showed that European technology and expertise did not radically differ from that of the United States. However, a number of studies showed that Americans managed and marketed technologies more efficiently and rapidly than Europeans. Significant differences between the United States and Western Europe existed in the availability of college-level management education and in the percentage of research and development expenditures. In each of these areas, Americans invested more, in both absolute and per capita terms. Some analysts believed the technology gap to be illusory but a management gap to be real.

To close the gaps, Europeans, actively aided by the United States, took a number of measures to increase the size of their markets, to develop advanced science and technology, and to improve European management. The Common Market was the best-known example of market integration. Science and technology initiatives included the Conseil Européen pour la Recherche Nucléaire (CERN [European Council for Nuclear Research]) for high-energy physics research, EURATOM for nuclear power technologies and resources, the European Space Research Organisation (ESRO) to develop scientific satellites, and the European Space Vehicle Launcher Development Organisation (ELDO) to create a European space launch vehicle.

Because of the military and economic significance of space launchers, the national governments of the ‘‘big four’’ Western European states — the United Kingdom, France, West Germany, and Italy—all supported the European launcher effort. Seeking contracts, the European aircraft industry also actively promoted the venture. Paradoxically, these strong national interests rendered ELDO ineffective. Each country and company sought its own economic ad­vantages through ELDO, while withholding as much information as possible. This attitude led to a weak organization that ultimately failed. When the Euro­peans decided to start again in the early 1970s, ELDO’s failure was the spur to do better, a prime example of how not to organize technology development.

Social Gains through Systems Management

Systems management became the standard for space and missile technolo­gies because it promoted the goals of the groups involved. Scientists received credit for conceiving novel technologies. Military officers gained control over radical new weapons and their associated organizations. Engineers earned re­spect by creating reliable technologies. Managers gained by controlling orga­nizations within a predictable (but hopefully large) budget.

In the 1950s scientists formed an alliance with the military to rapidly cre­ate novel weapons. Officers desired quick weapon deployment, while scien­tists provided the novelty. Within systems management, the role of scientists became standardized. They were to perform systems analyses at the begin­ning of programs to determine technological feasibility and whether to de­velop a particular mission or weapon. Scientists also used their quantitative skills to assess the reliability of the new technologies as engineers developed them, acting as credible second sources of information. Managers and mili­tary officers frequently used scientists in this capacity, as Ramo-Wooldridge and Aerospace did for the air force against the contractors.

Military officers also gained influence through systems management. Here it is important to distinguish between technical officers and operational offi­cers. The latter led troops into battle and throughout the long history of the military held the reins of power. Technical officers became more important in the 1950s and 1960s. Before the 1950s, air force technical officers lamented their poor career possibilities. With the creation of Air Research and Develop­ment Command (ARDC) in 1950 and Air Force Systems Command (AFSC) in 1961, technical officers won for themselves career paths separate from those of their operational brethren. Systems management became the formal set of procedures that allowed them to maintain a military career in technical R&D. Technical officers gained a stable career path and significant power.

In the late 1950s and early 1960s, NASA was run by engineers and for engi­neers. Although engineers no longer had free rein after that time, NASA re­mained an engineering organization. So too were ARDC, AFSC, and their laboratories and contractors. Aerospace engineering, and particularly the space program, were the glamour jobs of engineering during this period, with interesting tasks, substantial authority and funding, and excellent opportuni­ties for promotion into managerial or technical positions. Systems manage­ment ensured a large role for engineers throughout the design process and kept alive the engineering working groups that allowed engineers to main­tain substantial authority. Engineers developed the testing techniques that ensured that their products operated, and hence they guaranteed their own credibility and success. These testing techniques too became part of systems management. Engineers benefited from the creation of systems engineering, which gave the chief systems engineer nearly as much authority as the project manager, who more often than not was also an engineer by training and ex­perience.

Management credibility and authority stem from control of a large orga­nization with funding to match. The power of the purse was the manager’s primary weapon, but in the 1950s managers had not yet learned how to use it to control the scientists and engineers. As long as scientists and engineers cre­ated novel technologies and funding was plentiful, they could and did claim that they could not predict costs or schedules. Until the technologies reached testing, where failures appeared, managers could not successfully challenge that claim. However, technical failure gave managers the wedge they needed to gain control.

As tasks and projects repeated, managers used past history to predict costs and hold scientists and engineers to estimates based on prior history. Stan­dardizing systems management made costs and schedules more predictable and allowed managers to distinguish between ‘‘normal’’ cost and schedule patterns and ‘‘abnormal’’ patterns that signaled technical or organizational problems. Project managers used this information to control projects and pre­dict outcomes without being completely dependent on technical experts.

Executive managers wanted to know about the current status of projects and about possible new projects. To determine if new projects should be funded, executive managers created ‘‘breakpoints’’ at which they could inter­vene to continue, modify, or cancel a project. Phased planning implemented these breakpoints and ensured that only limited resources would go toward new projects before executive managers had their say.

From a social viewpoint, each of the four professional groups gained im­portant career niches from the institution of systems management. This helps to explain why it has proven to be a stable method in the aerospace indus­try. However, this leaves unanswered whether systems management actually made costs predictable or novel products dependable. In the end, none of the social factors would matter if the end products ultimately failed.

Systems Management and Its Promoters

Four social groups developed and spread systems management: military offi­cers, scientists, engineers, and managers. All the groups promoted aspects of systems management that were congenial to their objectives and fought those that were not. For example, the military’s conception of ‘‘concurrency’’ ran counter in a number of ways to the managerial idea of ‘‘phased plan­ning,’’ while the scientific conception of ‘‘systems analysis’’ differed from the engineering notion of ‘‘systems engineering.’’ Academic working groups pro­moted by scientists and engineers conflicted with hierarchical structures found in the military and industry, and the working groups’ informal meth­ods frustrated attempts at hierarchical control through formal processes. The winners of these bureaucratic fights imposed new structures and processes that promoted their conceptions and power within and across organizations.

In the early 1950s, the prestige of scientists and the exigencies of the Cold War gave scientists and military officers the advantage in bureaucratic com­petition. Military leaders successfully harnessed scientific expertise through their lavish support of scientists, including the development of new labora­tories and research institutions. Scientists in turn provided the military with technical and political support to develop new weapons.18 The alliance of these two groups led to the dominance of the policy of concurrency in the 1950s.19

To the air force, concurrency meant conducting research and development in parallel with the manufacturing, testing, and production of a weapon. More generally, it referred to any parallel process or approach. Concurrency met the needs of military officers because of their tendency to emphasize external threats, which in turn required them to respond to those threats. Put differ­ently, for military officers to acquire significant power in a civilian society, the society must believe in a credible threat that must be countered by mili­tary force. If the threat is credible, then military leaders must quickly develop countermeasures. If they do not, outsiders could conclude that a threat does not exist and could reduce the military’s resources. For the armed forces, ex­ternal threats, rapid technological development, and their own power and re­sources went hand in hand.

Scientists also liked concurrency, because they specialized in the rapid cre­ation of novel ‘‘wonder weapons’’ such as radar and nuclear weapons. Even when scientists had little to do with major technological advances, as in the case of jet and rocket propulsion, society often deemed the engineers ‘‘rocket scientists.’’ Scientists did little to discourage this misconception. They gained prestige from technical expertise and acquired power when others deemed technical expertise critical. Scientists predicted and fostered novelty because discovery of new natural laws and behaviors was their business. Novelty re­quired scientific expertise, whereas ‘‘mundane’’ developments could be left to engineers.

While the Cold War was tangibly hot in the late 1940s and 1950s, American leaders supported the search for wonder weapons to counter the Communist threat. Although very expensive, nuclear weapons were far less expensive than maintaining millions of troops in Europe, and they typified American prefer­ences for technological solutions.20 Military officers allied with scientists used this climate to rapidly drive technological development.

By 1959, however, Congress began to question the military’s methods be­cause these weapons cost far more than predicted and did not seem to work.21 Embarrassing rocket explosions and air-defense system failures spurred criti­cal scrutiny. Although Sputnik and the Cuban Missile Crisis dampened criti­cism somewhat, military officers had a difficult time explaining the apparent ineffectiveness of the new systems. Missiles that failed more than half the time were neither efficient military deterrents nor effective deterrents of congres­sional investigations. The military needed better cost control and technical reliability in its missile programs. Military officers and scientists were not par­ticularly adept in these matters. However, managers and engineers were.

Engineers can be divided into two types: researchers and designers. Engi­neering researchers are similar to scientists, except that their quest involves technological novelty instead of ‘‘natural’’ novelty. They work in academia, government, and industrial laboratories and have norms involving the pub­lication of papers, the development of new technologies and processes, and the diffusion of knowledge. By contrast, engineering designers spend most of their time designing, building, and testing artifacts. Depending upon the product, the success criteria involve cost, reliability, and performance. Design engineers have little time for publication and claim expertise through product success.

Even more than design engineers, managers pay explicit heed to cost con­siderations. They are experts in the effective use of human and material resources to accomplish organizational objectives. Managers measure their power from the size and funding of their organizations, so they have conflict­ing desires to use resources efficiently, which decreases organizational size, and to make their organizations grow so as to acquire more power. Ideally, managers efficiently achieve objectives, then gain more power by acquiring other organizations or tasks. Managers, like engineers, lose credibility if their end products fail.

As ballistic missiles and air-defense systems failed in the late 1950s, mili­tary officers and aerospace industry leaders had to heed congressional calls for greater reliability and more predictable cost. In consequence, managerial and engineering design considerations came to have relatively more weight in technology development than military and scientific considerations. Man­agers responded by applying extensive cost-accounting practices, while engi­neers performed more rigorous testing and analysis. The result was not a ‘‘low cost’’ design but a more reliable product whose cost was high but pre­dictable. Engineers gained credibility through successful missile performance, and managers gained credibility through successful prediction of cost. Be­cause of the high priority given to and the visibility of space programs, con­gressional leaders in the 1960s did not mind high costs, but they would not tolerate unpredictable costs or spectacular failures.

Systems management was the result of these conflicting interests and ob­jectives. It was (and is) a melange of techniques representing the interests of each contributing group. We can define systems management as a set of orga­nizational structures and processes to rapidly produce a novel but dependable technological artifact within a predictable budget. In this definition, each group appears. Military officers demanded rapid progress. Scientists desired novelty. Engineers wanted a dependable product. Managers sought predictable costs. Only through successful collaboration could these goals be attained. To suc­ceed in the Cold War missile and space race, systems management would also have to encompass techniques that could meet the extreme requirements of rocketry and space flight.

From Student Rocketry to Weapons Research

In 1936, Caltech graduate student Frank Malina learned of Austrian engineer Eugen Sanger’s proposed rocket plane. This stimulated Malina’s interest in rocketry, and aeronautics professor Theodore von Karman agreed to serve as his thesis adviser. Learning little from a visit to secretive rocket pioneer Robert Goddard, Malina and his assistants began rocket motor tests in an isolated area near Pasadena. After several failures, they succeeded in running a forty-four-second test firing. In May 1938, a new heat-resistant design operated for more than a minute.2

In 1938, Army Air Corps commander H. H. ‘‘Hap’’ Arnold made a sur­prise visit to Caltech and took interest in the project. He asked the National Academy of Sciences to fund research on rocket-assisted aircraft takeoff. Von Karman got the job, with Malina doing most of the work. Money soon began to flow, and by July 1940 the group moved permanently from the Caltech cam­pus to the test site.3

Malina’s growing team used theoretical analysis and practical experimen­tation to create a series of technical breakthroughs that became the founda­tion of solid-propellant rocketry. The army and navy wanted rockets to assist aircraft takeoff from short airfields and aircraft carriers, leading Malina’s team to consider mass production of the rockets. Malina unsuccessfully tried to interest aircraft companies. Failing in this, he, von Karman, and others started the Aerojet Company, which by 1943 had large navy orders. Although JPL de­veloped the initial designs, it never had to deal with manufacturing problems, passing these to Aerojet.4

After the military discovered German preparations to launch V-2 rockets, some army officers paid greater attention to rocketry. The Army Air Forces was not interested because long-range rockets did not promise an immediate payoff and because it had a vested interest in manned bombers. By contrast, Army Ordnance officers saw rockets as long-range artillery and hence as a means to extend the range of their artillery and political aspirations. They urged Caltech to propose a comprehensive program, which led to the official founding of JPL in June 1944 with an Army Ordnance contract for $1.6 million. Despite Caltech leaders’ initial view that JPL would aid the army only during wartime, they quickly became addicted to the contract’s overhead money. JPL became a permanent operation.5

JPL proposed to build a series of progressively larger and more sophisticated rockets, named in rank order Private, Corporal, Sergeant, and Colonel. Private, developed in 1944 and early 1945, proved successful when designed as a simple rocket but inaccurate when modified to include wings. Private’s performance proved not only that JPL could design a simple rocket without attitude control or guidance but also that long-range rockets were impractical until JPL developed an automatic control system. The Corporal series began with an unguided sounding rocket known as the WAC Corporal, intended to achieve the highest possible altitude. It reached altitudes of forty miles in October 1945 and was the immediate progenitor of Aerojet’s Aerobee sounding rocket, used for years after as a scientific research vehicle. Relations between JPL and Aerojet were good, as JPL researchers passed research innovations to Aerojet, which developed them for production. With financial interests in Aerojet, JPL researchers benefited handily.6

The organization of JPL’s early rocketry was simple. It began as a stu­dent research project, with Malina, John Parsons, and Edward Forman con­structing test stands, motors, and fuels. The group added a Research Analysis section, which performed parametric analyses of aircraft takeoff with rocket assistance and developed design objectives. Homer Stewart and Hsue-shen Tsien did many of these tasks, which Stewart later recalled as being the sys­tems engineering for the group.7

As the program grew, Malina directed JPL while Army Ordnance handled the coordination among JPL, White Sands Missile Range, the Signal Corps, and the Ballistic Research Laboratory of Aberdeen Proving Ground. The lat­ter two organizations assisted with flight test data acquisition. Malina di­vided JPL’s twenty-two personnel into seven small groups: Booster, Missile, Launcher and Nose, Missile Firing, External Ballistics, Photo and Material, and Transportation and Labor. The army’s contingent totaled thirteen. Prior to each test round, Malina held a conference where each group discussed prior results and checked weather and preparations. Douglas Aircraft manufactured the rocket, but the team often performed last-minute modifications at White Sands.8

WAC Corporal paved the way for JPL’s first true surface-to-surface missile, the larger and more complex liquid-fueled Corporal E. JPL engineers devel­oped a comprehensive test program to ensure that the components and the integrated vehicle functioned correctly. They developed static structural tests, hydraulic tests for all fluid flow components, and rocket motor tests. Engi­neers also created a full-scale model used to check pressure and temperature characteristics under firing conditions on a static test stand at Muroc, Cali­fornia. The test stand held the vehicle on the ground as the engine fired, while electrical instrumentation measured structural loads, pressures, and tempera­tures. Douglas Aircraft manufactured the flight test vehicles, which the army transported to its new assembly and launch facilities at White Sands, where engineers performed final leak and electrical tests. Technicians then moved the rocket seven miles to the launch site, where the crew simulated a firing for training purposes and as a final telemetry check. They then fueled and launched the vehicle.9

JPL engineers fired the first Corporal E in May 1947. The first round was a success, but round two produced insufficient thrust. Round three failed when the rocket motor throat burned out and the control system failed. Engineers went back to the drawing boards. Only in June 1949 did the next Corporal E fly, with a new design using axial-flow motors.10

After the Soviets exploded their first atomic bomb in August 1949, Army Ordnance officers asked JPL Director Louis Dunn11 and Electronics Depart­ment head William Pickering whether Corporal could be converted into an operational missile. Dunn stated that JPL could handle this conversion if it developed a guidance and control system from existing technologies. In March 1950, Army Ordnance decided to make Corporal into a weapon.

When the Korean War broke out in the summer of 1950, the Truman ad­ministration gave Chrysler executive K. T. Keller the charter to develop mis­siles as quickly as possible. Rejecting a Manhattan Project-style program, Keller decided instead to exploit existing missile programs that held prom­ise. Corporal was the army’s best-developed missile, so Army Ordnance com­mitted it to rapid development. With this decision, JPL embarked upon a ven­ture that changed it from a research institution into the equivalent of an army arsenal.12

The American Challenge

The European fear of gaps between themselves and the superpowers derived from changed political, military, and economic realities after World War II. Germany was devastated, occupied, and dismembered. Italy was torn between its Fascist past, the resistance movement led by the Communists, and the Catholic Church. France had been defeated by Nazi Germany, then riven by hostilities between the Vichy regime, Charles de Gaulle’s Free French, and the Communist Party. Britain was victorious, but the war depleted its treasury and exhausted its people. By contrast, the Soviet Union’s armies advanced into and remained in Central Europe. The United States emerged from the war with sole possession of the atomic bomb, a booming economy, and growing resources. The Soviet Union and the United States became the superpowers, relegating Western Europe to second-tier status.

Many American diplomats believed a strong, united Europe was the best means to defend against Soviet military expansion or internal chaos that might lead to Communist takeover. To strengthen the German economy without antagonizing France, American diplomats in 1947 offered the Mar­shall Plan to European countries on the condition that they work together to allocate funds. This led to the creation of the Organization for European Economic Cooperation, which later became the Organization for Economic Cooperation and Development (OECD). The Communist coup in Czechoslo­vakia and the Berlin blockade inaugurated negotiations that led to military cooperation with the North Atlantic Treaty Organization in 1949.1

European leaders also sought to cooperate with each other, apart from the United States. France decided to control German ambitions by forming a strong alliance with its historic enemy. The Low Countries, which needed a strong European economy with which to trade, and the Italians, who needed an outlet for unemployed workers and access to technology and natural re­sources, combined with France and West Germany to form the European Coal and Steel Community in 1950. The same countries agreed to the Common Market in 1957, which lowered mutual tariff barriers and created the large market they believed critical for economic growth.

Nuclear technology development also benefited from European integra­tion. Physicists Pierre Auger of France and Edoardo Amaldi of Italy led efforts to create a European laboratory for high-energy physics research to com­pete with U. S. physics researchers. European research leaders agreed to create CERN in February 1952 to develop a large particle accelerator and support­ing facilities. The need to distribute and control uranium for nuclear reactors led to the creation of EURATOM in the 1957 Treaty of Rome that created the Common Market. Despite these European efforts to enlarge their market and pool resources for nuclear technology, American developments in electronics and computers, and the American and Soviet development of rocketry and missiles, appeared to keep the superpowers several steps ahead.2

In 1967, French journalist Jean-Jacques Servan-Schreiber wrote a book that served as a manifesto to European governments and industry: The American Challenge. He described the penetration of American industry into Europe and argued that the cause was not ‘‘a question of money.’’ Rather, the United States had developed better and more widespread education, leading to more flexible policies and management. As he put it, ‘‘Europe’s lag seems to concern methods of organization above all. The Americans know how to work in our countries better than we do ourselves. This is not a matter of ‘brain power’ in the traditional sense of the term, but of organization, education, and training.’’3 Significantly, he illustrated American dominance by using the examples of the computer and aerospace industries.

Servan-Schreiber had influential readers on both sides of the Atlantic. U. S. Secretary of Defense Robert McNamara thought ‘‘the technological gap was misnamed,’’ believing it to be a managerial gap. Europeans needed to develop their educational systems. McNamara noted, ‘‘Modern managerial education — the level of competence, say, of the Harvard Business School—is practically unknown in industrialized Europe.’’4 West Germany’s defense minister, Franz Josef Strauss, also agreed with the French journalist, believing the technology gap was due to advances in space technology, computers, and aircraft con­struction, three areas he thought decisive for the future. Because large corpo­rations performed the majority of research and because of the large domestic market, American companies had the advantage of scale over their European competitors. Strauss’s solution was to create an integrated European com­munity with common laws and regulations and to pool European resources. European countries needed to launch large, multinational high-technology projects to provide opportunities for European corporations to work on big research and development endeavors.5

Both Servan-Schreiber and McNamara believed that their societies needed more and better management. According to McNamara, ‘‘Some critics today worry that our democratic, free societies are becoming overmanaged. I would argue that the opposite is true. As paradoxical as it may sound, the real threat to democracy comes not from overmanagement, but from undermanage­ment. To undermanage reality is not to keep it free. It is simply to let some force other than reason shape reality.’’6

Servan-Schreiber’s views were similar: ‘‘Only a deliberate policy of reinforc­ing our strong points — what demagogues condemn under the vague term of ‘monopolies’ — will allow us to escape relative underdevelopment.’’ But he did make one allowance: ‘‘This strategy will rightly seem debatable to those who mistrust the influence and political power of big business. This fear is justi­fied. But the remedy lies in the power of government, not in the weakening of industry.’’7 Given the economic and military importance of technology devel­opment, Servan-Schreiber and others accepted the risks of government and business power.

Academic analysts of the technology gap used more sophisticated means to reach similar conclusions. Analyses by the OECD generally recommended increases in the number and scale of integrated European high-technology projects. The technology gap ‘‘is not so much the result of differences in technological prowess, except in some special research-intensive sectors,’’ said economist Antonie Knoppers, ‘‘as of differences in management and marketing approaches and—possibly above all—in attitudes.’’8 He believed the disparities were in ‘‘middle or lower levels’’ of management. Economist Daniel Spencer believed that a ‘‘more fruitful way of assessing the technological gap’’ was ‘‘to define it as a management gap’’ because American managers were ‘‘alert to opportunities created by the research of a military or nuclear or space type.’’9

Officials debated the existence of the gap, its causes, and its solutions. British leaders worried about an American ‘‘technological empire’’ and a ‘‘brain drain’’ of technical experts to the United States. The French feared the loss of economic and cultural independence. German leaders worried that the gap exposed flaws in German education and management. American politicians minimized the significance or existence of a technological gap, shifting the argument to differences in culture and management. In 1967, President Lyndon B. Johnson sent Science Advisor Donald Hornig to Europe with a team of experts to study the problem and asked the National Aeronautics and Space Administration (NASA) to explore new cooperative ventures with the Europeans to ease their fears. All agreed that European countries needed educational reforms to bridge the various gaps. This would take a while to accomplish, so in the meantime, the most obvious idea was to mimic American management methods on large-scale technology programs.10

In the early 1960s, space launchers beckoned as a particularly fruitful field for integrated efforts. Developing space launchers would eliminate European dependence on the United States to launch spacecraft and also aid national efforts to develop ballistic missiles. A multinational launcher program would teach European companies to manage large programs and spur Europe’s educational system to become more responsive to advanced technology and management. Many advantages would accrue if Europeans could overcome their differences.

The Technical Gains of Systems Management

Technical failures of aerospace projects are hard to hide. Rockets and missiles explode. Satellites stop sending signals back to Earth. Pilots and astronauts die. To the extent that systems management helped prevent these events, it must be deemed a technical success. Systems management methods such as quality assurance, configuration control, and systems integration testing were among the primary factors in the improved dependability of ballistic missiles and spacecraft. Missile reliability in air force and JPL missile programs in­creased from the 50% range up to the 80% to 95% range, where it remains to this day. JPL’s spacecraft programs suffered numerous failures from 1958 to 1963, but after that JPL’s record dramatically improved, with a nearly perfect record of success for the next three decades. The manned programs suffered a number of testing failures at the start but had an enviable flight record with astronauts, with the one glaring exception of the Apollo 204 fire. A strong cor­relation exists between systems management and reliability improvements.

The nature of reliability argues for the positive influence of systems meth­ods. For aerospace projects to succeed, there must be high-quality compo­nents, proper integration of these components, and designed-in backups in case failures occur. Only the last of these is a technology issue in the design sense. The selection and proper integration of components has more to do with rigorous compliance with design and manufacturing standards than it does with new technology. High component quality comes through unflag­ging attention to manufacturing processes, backed by testing and selection of the best parts. In a nutshell, it is easy to solder a joint or crimp a connec­tor pin but extremely difficult to ensure that workers perform thousands or millions of solders and crimps correctly. Even a worker with the best skills and motivation will make occasional mistakes. In systems management, so­cial processes to rigorously inspect and verify all manufacturing operations ensure high quality across the thousands of workers involved in the process.

Similarly, ensuring proper integration is a matter of making sure that each and every joint is properly soldered, every pin and connector properly crimped, every structure properly handled at all times, and all of these opera­tions rigorously tested. On top of this, ‘‘systems testing’’ checks for design flaws and unexpected interactions among components. In all of these issues, procedures and processes—not new technology—are the keys to success. Sys­tems management provided these rigorous processes and tests.

Once organizations dealt with component problems, they ran into the next most likely cause of failure: interface problems caused by mismatches between designs. By the mid-1960s, both the air force and NASA obsessively concen­trated on interface problems, which resulted ultimately from poor communi­cation, poor organization, or both. Engineers and managers recognized that differences in organizational cultures and methods made communication be­tween organizations more difficult than communication within an organiza­tion. Miscommunication led to incompatibilities between components and subsystems — incompatibilities often found when components were first con­nected and tested. More technology was not the solution. Instead, engineers needed improved communication through social processes.

Engineers enforced better communication by creating standard documents and processes. They required that one organization be responsible for ana­lyzing both sides of an interface and that the specifications and analyses be documented in a formal Interface Control Document. Many interface prob­lems were subtler than simple mismatches between physical or electrical com­ponents.

For example, engineers at Marshall Space Flight Center found that a ‘‘non­liftoff’’ of a Mercury-Redstone test vehicle occurred because the Mercury cap­sule had a different weight than the Redstone’s normal warhead, changing the time it took for the launch vehicle to separate from the launch tower. Because the combined launch complex-launch vehicle electronics required that the ve­hicle lift off at a certain rate, the changed rate led to a shutdown of the launch vehicle as emergency electronics kicked in to abort the launch.

Problems such as these were solvable not through technology but through better engineering communication and better design analysis. Once engineers understood all of the factors, the design solution was usually simple. The problem was making sure the right people had the right information and that someone had responsibility for investigating the entire situation. As ELDO’s history shows, getting an organization to pay for a change in an interface was often more difficult than formulating a technical solution. Authority and communication matter most in interface problems and solutions. Better orga­nization and better systems, not better technology, made for reliability in large aerospace projects by standardizing the processes and providing pro­cedures to cross-check and verify each item, from solder joints to astronaut flight procedures. These methods essentially provided insurance for technical success.

How much did this insurance cost? Did systems management lower costs or speed development compared to earlier processes and methods? Concurrency in the 1950s was widely believed to shorten development times, but at enor­mous cost. The secretary of the air force admitted that the air force could af­ford only one or two such programs. Schriever contended that concurrency saved money because it shortened development time. Because R&D costs are spent mostly on engineering labor, Schriever argued, shortening development time would reduce labor hours and hence cost. Most other experts then and later disagreed with him. Political scientist Michael Brown contends that con­currency actually led to further schedule slips because problems in one part of the system led to redesigns of other parts, often several times over.3

On any given design, having systems management undoubtedly costs more than not having systems management, just as buying insurance costs more than not buying insurance. The real question is whether systems management reduced the number of failures enough to counterbalance the replacement cost. For example, a 50% rate of reliability for a missile system such as Atlas in the late 1950s meant that every other missile failed. With this failure rate, the air force and its contractors could afford to spend up to the cost of an entire second missile in improvements to management processes, if these processes could guarantee success. In other words, at a 50% reliability rate and a cost of $10 million per missile, each successful launch costs $20 million. Thus, if process improvements can guarantee success, then spending $10 million or less per missile in management process improvements is cost-effective.
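Stated in general terms, and assuming for simplicity that the only cost of a failure is the loss of the vehicle itself, the arithmetic reduces to a single expression:

cost per successful launch = unit cost / reliability.

At $10 million per missile and 50% reliability, this gives $10 million / 0.5 = $20 million per success, the figure used above.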

In fact, the early Atlas, Titan, and Corporal projects achieved roughly 40-60% reliability. Reliability improvement programs — that is, systems management processes — improved reliability into the 60-80% range during the 1950s and early 1960s and into the 85-95% range thereafter.4 The reliability improvement meant that roughly nine out of ten launches succeeded, instead of one out of two. Therefore, systems management could easily have added more than 50% to each missile’s cost and still been cost-effective. NASA’s efforts to ‘‘man-rate’’ Atlas and Titan could have added 100% to their costs and still been cost-effective, because success had to be guaranteed. In fact, considering the potential loss of not only the launchers but also the manned capsules and astronauts, NASA could likely spend 200-500% on launcher improvements and still be cost-effective, considering the low reliability of these vehicles at that time. Pending detailed cost analysis, systems management was probably cost-effective if costs were measured for each successful launch.
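The break-even point implied by these figures follows from the same simplified assumption that the only benefit counted is the avoided cost of replacement vehicles. If a process change multiplies unit cost by (1 + m) while raising reliability from r0 to r1, it still lowers the cost of each successful launch whenever

(1 + m) × (unit cost) / r1 ≤ (unit cost) / r0, or equivalently m ≤ r1/r0 - 1.

Going from 50% to 90% reliability gives m ≤ 0.9/0.5 - 1 = 0.8, so process improvements could have added up to 80% to each missile’s cost and still reduced the cost of every successful launch, consistent with the ‘‘more than 50%’’ estimate above.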

Another way to assess systems management is to compare missile and space programs that implemented systems management methods with programs that did not. ELDO provides the most extreme example of little or no systems management. None of its rockets ever succeeded, despite piecemeal intro­duction of some systems management methods. Comparison of JPL’s Ranger program with the contemporary Mariner program provides another example, because the Mariner design was a modification of the Ranger spacecraft. With less systems management, Ranger’s first six flights failed, whereas Mariner achieved a 50% success rate, with later Mariner spacecraft performing al­most perfectly. After strengthening systems management, Ranger’s record was three successes out of four launches.5 Assuming Ranger and Mariner costs per spacecraft were roughly equal, Mariner cost less per successful flight than early Ranger.

Aside from pure cost considerations, failures hurt an organization’s credi­bility. In the rush to beat the Soviets, early space programs lived the old adage ‘‘There is never time to do it right, but there is always time to do it over.’’ They had many failures, but in the early days executive managers were not terribly concerned. By the early 1960s, however, failures mattered; they led to con­gressional investigations and ruined careers. Systems management responded to the need for better reliability by trying to make sure that engineers ‘‘did it right’’ so they would not have to ‘‘do it over.’’

It is no coincidence that engineers developed systems management for mis­siles and spacecraft that generally cannot be recovered. When each flight test and each failure means the irretrievable loss of the entire vehicle, thorough planning is much more cost-effective than it is for other technologies that can be tested and returned to the designers. This helps to explain why the bu­reaucratic methods of systems management work well for space systems but seem too expensive for many Earth-bound technologies. For most technolo­gies, building a few prototypes and performing detailed tests with them be­fore manufacturing is feasible and sensible. Lack of coordination and plan­ning (each of which costs a great deal) can be overcome through prototype testing and redesign of the prototype. This option is not available for most space systems, because they never return.

The evidence suggests that systems management was successful in improv­ing reliability sufficiently to cover the cost per successful vehicle. Although systems management methods were not the only factor involved in these im­provements, from the standpoint of reliability, they were critical. Process im­provements, not technology improvements, ensured the proper connection and integration of thousands of components. Systems management increased vehicle costs on a per vehicle basis compared to previous methods but re­duced costs when reliability is factored in.

Conclusion

Social and technical concerns drove the development of systems manage­ment. The dangers of the Cold War fed American fears of Communist domi­nation, leading to the American response to ensure technological superiority in the face of the quantitative superiority of Soviet and Chinese military forces. Military officers and scientists responded to the initial call by creating nuclear weapons and ballistic missiles as rapidly as possible.

Technical issues then reared their ugly heads, as the early missile systems exploded and failed frequently. Investigation of the technical issues led to the creation of stringent organizational methods such as system integration and testing, change control, quality inspections and documentation, and configu­ration management. Engineers led the development of these new technical coordination methods, while managers intervened to require cost and sched­ule information along with technical data with each engineering change.

The result of these changes was systems management, a mix of techniques that balanced the needs and issues of scientists, engineers, military officers, and industrial managers. While meeting these social needs, systems manage­ment also addressed the extreme environments, danger, and automation of missile and space flight technologies. By meeting these social and technical needs, systems management would become the standard for large-scale tech­nical development in the aerospace industry and beyond.
