CONCLUDING OBSERVATIONS

We traced the evolution of flight and ground test instrumentation and data from Stage 1 to Stage 3. With the XB-70 project the transition to automated flight test instrumentation, recording, and data analysis was essentially complete. Although there is much overlap among airframe flight testing, engine flight testing, engine test cells, and wind tunnels, each test medium imposes its own peculiar instrumentation demands; thus there are variations among them in the instrumentation and recording devices used at a given stage.

Instrumentation advances between 1940 and 1969, including computerization, generally increased the quantity of data collected and analyzed – the number of variables, parameters, and readings – but did not increase the accuracy of measurements (see Table 2). The number of measurements grew roughly in step with increases in computational capability.
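As a rough illustration of that growth, the short sketch below compares the nominal maximum data processing rates listed in Table 2. The figures are taken directly from the table; the comparison is only indicative, and the labels are mine.

```python
# Nominal maximum data processing rates from Table 2 (measurements per hour).
# Telemetry ("real time") and pilot readings ("n/a") are omitted because they
# have no comparable hourly figure.
processing_rates = {
    "photopanel": 200,
    "oscillograph, semi-automatic": 600,
    "oscillograph, automatic": 3_600,
    "airborne tape": 400_000,
}

baseline = processing_rates["photopanel"]
for recorder, rate in processing_rates.items():
    # Express each rate as a multiple of the photopanel baseline.
    print(f"{recorder:30s} {rate:>9,}/hour  (~{rate / baseline:,.0f}x photopanel)")
```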

The primary motivations for undertaking the huge costs of automated data collection and processing were:

• the need to monitor more channels of data as aircraft themselves became more complex and computer-controlled;

• the advantage of being able to monitor data in real time and use it to modify test protocols mid-flight.

The latter can yield enormous cost savings and is ultimately the economic justification for costly automated data systems with telemetry. For example, when Grumman used a computerized telemetry system backed by a CDC 6400 in the development of the F-14 Tomcat, it saved 67% of the time required by prior Grumman flight test programs and performed 47% fewer test flights.59 In less demanding test situations where such savings are not expected, oscillographs and other Stage 2 techniques continue to be used even today.

The First and Second Laws of Scientific Data continue to govern aircraft and engine testing. Thousands of data channels did not obviate the need to model data via structurally escalating assumptions. Only by adding model structures to the data do we come to see clearly the actual performance of our aircraft and engines. The intelligibility of experimental data largely depends upon correcting for systematic errors, deriving the measures one really wants via modeling, and separating real effects from artifacts. We thereby see what otherwise would be lost in the noise of instrumentation and raw data.
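A minimal sketch of that data-reduction pattern, using entirely synthetic numbers rather than any of the flight data discussed above: a known calibration offset is removed, a simple linear model is fitted to the corrected readings, and the derived quantity (here a lift-curve slope) is recovered even though individual readings are noisy. The channel names, values, and model form are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated raw channel data (hypothetical values, for illustration only) ---
# Assumed relationship for the simulation: lift coefficient varies linearly
# with angle of attack, CL = slope * alpha + intercept.
true_slope, true_intercept = 0.095, 0.10       # per degree, dimensionless
alpha_deg = np.linspace(2.0, 10.0, 200)        # angle-of-attack channel
systematic_bias = 0.05                         # assumed known calibration offset on the CL channel
noise = rng.normal(scale=0.04, size=alpha_deg.size)
raw_cl = true_slope * alpha_deg + true_intercept + systematic_bias + noise

# 1. Correct for systematic error using the (assumed known) calibration offset.
corrected_cl = raw_cl - systematic_bias

# 2. Derive the measure actually wanted -- the lift-curve slope -- by fitting
#    a model (here a straight line) to the corrected data.
slope, intercept = np.polyfit(alpha_deg, corrected_cl, deg=1)

# 3. The fitted model separates the real effect from point-to-point noise:
#    individual readings scatter by about 0.04, yet the slope is recovered closely.
print(f"fitted lift-curve slope: {slope:.4f} per degree (true value {true_slope})")
print(f"fitted intercept:        {intercept:.3f} (true value {true_intercept})")
```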

Experimental data are not some epistemic “given.” Flight test, test cells, and wind tunnels are quintessentially experimental yet rarely involve hypothesis testing or theory confirmation. Thus they provide marvelous insight into the heart of experiment undistorted by standard yet questionable philosophical views about testing, confirmation, or observation.

Table 2. Flight test instrumentation and recording characteristics¹

Pilot reading cockpit gauges
Frequency of measurements per channel: < 0.01/sec
Maximum number of channels: 2-3
Maximum data processing rate²: n/a
Accuracy (overall, after calibration corrections): ±5% or better (best if quantities are stable)
Unserviceability incidences (by source): frequent (preempted by piloting duties)

Photopanel
Frequency of measurements per channel: 0-2/sec
Maximum number of channels: 10s
Maximum data processing rate²: 200/hour
Accuracy (overall, after calibration corrections): ±5% or better
Unserviceability incidences (by source): 8% instrument

Oscillograph
Frequency of measurements per channel: 0-1K/sec
Maximum number of channels: 100s
Maximum data processing rate²: semi-automatic 600/hour; automatic 3,600/hour
Accuracy (overall, after calibration corrections): 1% nominal³ (0.1-10%)
Unserviceability incidences (by source): 2% galvanometer; 16% transducer

Telemetry
Frequency of measurements per channel: 0-50K/sec (meter analog readouts; number noticed probably < 2/sec)
Maximum number of channels: 90
Maximum data processing rate²: real time
Accuracy (overall, after calibration corrections): ±5%
Unserviceability incidences (by source): probably greater than airborne tape (see below) due to transmission losses

Airborne tape
Frequency of measurements per channel: 0-50K/sec
Maximum number of channels: 1000s
Maximum data processing rate²: 400,000/hour
Accuracy (overall, after calibration corrections): 1% nominal⁴ (0.1-10%); frequency modulation 3-10%; pulse-duration modulation 1-2%; pulse-code modulation 1%
Unserviceability incidences (by source): 2% galvanometer; 16% transducer; 47% subcarrier oscillators

NOTES:

1 Most data from Kerr 1961.

2 Varies with type of data analysis. See Bethwaite 1963, p. 237, for estimates.

3 Accuracy varies with the quantities measured: noise and vibrations, 5-10%; most flight test channels, 1-2%; a few selected channels, achievable only by using digital recording, 0.1-0.5%. It is very difficult to achieve 0.1% accuracy in flight test. Ground facilities such as wind tunnels could achieve accuracies of 0.001%, at perhaps 8 measurements per second, during this period.

4 Higher-frequency measurements (e.g., vibrations) tend to have higher errors and require FM recording. For most other signals, PCM is more accurate.

ACKNOWLEDGMENTS

Precursor portions were presented in a Year of Data talk at the University of Maryland, College Park, in September 1992, and to Andrew Pickering’s University of Illinois Sociology of Science lecture series in spring 1995. The assistance of Dr. Jewel Barlow, Director of the Glenn L. Martin Wind Tunnel at the University of Maryland, is much appreciated. Comments on the draft by Dibner workshop participants, a UMCP audience, and especially Peter Galison were quite helpful.

The following people assisted in the collection of photographs and information: Cheryl A. Gumm, Don Thompson, and Jim Young, USAF Flight Test Center, Edwards AFB; Don Haley, NASA Ames Dryden Flight Research Facility; Tom Crouch and Brian Nicklas, National Air and Space Museum; and Richard P. Hallion, Office of the Chief Historian, USAF. Part of the research presented here was supported by an NSF SSTS Award.

Sources for previously unpublished pictures are identified as follows:

Suppe Collection: photographs in my personal collection.

Young Collection: photographs in Jim Young’s collection, USAF Flight Test Center.

GLMWT: photographs from the Archives of the Glenn L. Martin Wind Tunnel, University of Maryland; used with the kind permission of Dr. Jewel Barlow, Director.

NASA/NACA pictures, which are in the public domain, are identified by NASA photo number or source. Other pictures are reprinted from identified published sources and are used with the permission of the publishers.