SOCIOLOGY ON THE FLIGHTDECK

The NTSB’s final judgment of probable cause made explicit reference to the fact that the captain failed to reject the takeoff “when his attention was called to anomalous engine instrument readings.” Though it did not formalize the point in the probable cause assessment, the investigative team did comment elsewhere in the report that the Safety Board strongly believed in training for command decision, resource management, role performance, and assertiveness. As the NTSB pointed out, it had already, in June of 1979 (A-79-47), recommended flightdeck resource management training, touting the merits of participative management and of assertiveness training for other cockpit crewmembers. Here a new analytical framework entered, in which causal agency fell not to individual but to group (social) psychology. That framework (dubbed Cockpit Resource Management, or CRM) was fairly recent and came in the wake of what became a set of canonical accidents. The NTSB-interpreted record of Air Florida flight 90 became a book in that canon.

For United Airlines, the transformation in its view of CRM came following the December 28, 1978 loss of its flight UA 173. Departing Denver with 46,700 pounds of fuel, 31,900 of which were predicted necessary for the leg to Portland, the DC-8 came in for final approach. When the gear was lowered, those in the body of the plane heard a loud noise and felt a sharp jolt. The captain felt that the gear had descended too rapidly and noted that the gear lights did not illuminate. Asking his second officer to “give us a current card on weight, figure about another fifteen minutes,” he received a query in reply: “fifteen minutes?” To this the captain responded, “Yeah, give us three or four thousand lbs. on top of zero fuel weight.” Second officer: “not enough. Fifteen minutes is gonna really run us low on fuel here,” then later: “we got about three on the fuel and that’s it.” When the first officer urged, “We’re going to lose an engine,” the captain asked “why?” To which the first officer answered, “Fuel!” Within eight minutes the plane was down in the woods outside the city, with a loss of ten lives.19 The canonical interpretation read the accident in terms of a failure of communication: Why, United Airlines personnel wanted to know, was the captain not listening to his officers?

According to United Airlines’ CRM curriculum of the mid-1990s, the conversion of Delta Airlines to CRM came seven years after the United 173 crash, in the aftermath of its own disastrous flight 191. Approaching Dallas-Fort Worth airport on August 2, 1985, Delta’s L-1011 hit a microburst, descended into the ground, and disintegrated. The question raised by investigators was why the otherwise prudent captain had entered an area of known lightning (that is to say, a thunderstorm) close to the ground and in a shaft of pounding rain. “Probable cause” included the decision to enter the cumulonimbus area, a lack of training in escape from windshear, and a lack of timely windshear warning. Unlike the cases of United 173 and Air Florida 90, no one suggested here that the Delta captain was not listening to his flightcrew. Instead, “given the fact that the captain was described as one who willingly accepted suggestions from flightcrew members,” the Board did not infer that they were intimidated by him. But because neither first nor second officer dissented from the continued approach, the NTSB held the flightcrew responsible for the decision to continue. “Suggestions were not forthcoming,” concluded the investigation, on the basis of which the NTSB argued that air carriers should provide formal cockpit resource management and assertiveness training for their crews.20

When, in the mid-1980s, the airlines began to develop their CRM courses, they invariably turned back to the by-then widely discussed proceedings of a meeting held under NASA’s auspices in San Francisco over 26-28 June 1979. In various ways, that conference set out the outline for hundreds of courses, books, and pamphlets designed to characterize and cure the “dangerous” failures of communication on the flightdeck. Most prominent among the speakers was Robert Helmreich, a social psychologist from the University of Texas at Austin, who came to the problem through his work on Navy and NASA crew training efforts for the space program. Psychology (Helmreich declared at the San Francisco meeting) had so far failed those in the cockpit. On one side, he noted, there was personality psychology, which had concentrated solely on the exclusion of unacceptable candidates, failing utterly to capture the positive individual qualities needed for successful flight. On the other side, Helmreich contended, social psychologists had so far ignored personality and focused on rigorous laboratory experiments only loosely tied to real-life situations. Needed was an approach that joined personality to social interaction. To this end he advocated the representation of an individual’s traits by a point on a two-dimensional graph, with instrumentality on one axis and expressivity on the other. At the far end of instrumentality lay the absolutely focused, goal-oriented pilot; at the extreme end of expressivity lay the pilot most adept at establishing “warmer” and more effective personal relationships. In a crisis (argued the authors of United’s CRM course), being at the high end of both was crucial, and likely to conflict with the “macho pilot,” who is high in instrumentality and low in expressivity.21

In various forms, this two-dimensional representation of expressivity and instrumentality crops up in every presentation of CRM that I have seen. Perhaps the most sophisticated reading of the problem came in another plenary session of the 1979 meeting, in the presentation by Lee Bolman from the Harvard Graduate School of Education. Bolman’s idea was to pursue the mutual relations of three different “theories.” First, there was the principals’ short-term “theory of the situation,” which captured their momentary understanding of what was happening: here, the pilots’ own view of the local condition of their flight. Second, Bolman considered the individual’s longer-term “theory of practice,” that collection of skills and procedures accumulated over a period of years. Finally, at the most general level, there was a meta-theory, the “theory-in-use,” which contained the general rules by which information was selected and by which causal relationships could be anticipated. In short, the meta-theory provided “core values,” “beliefs,” “skills,” and “expected outcomes.” Deduced from observation, the “theory-in-use” was the predictively successful account of what the subject would actually do in specific situations. But Bolman noted that this “theory-in-use” only partially overlapped with the views that the subject might explicitly claim to hold (the “espoused theory”). Espoused knowledge was important, Bolman argued, principally insofar as it highlighted errors or gaps in the “theory-in-use”:

Knowledge is “intellectual” when it exists in the espoused theory but not in the theory-in-use: the individual can think about it and talk about it, but cannot do it. Knowledge is “tacit” when it exists in the theory-in-use but not the espoused theory; the person can do it, but cannot explain how it is done. Knowledge is “integrated” when there is synchrony between espoused theory and theory-in-use: the person can both think it and do it.22

Bottom line: Bolman took the highest-level theory (“theory-in-use”) to be extremely hard to revise, as it involved fundamental features of self-image and lifelong habits. The lowest-level theory (“theory of the situation”) might be revised given specific technical inputs (one gauge corrected by the reading of two others), but frequently it would be revised only through an alteration in the “theory of practice.” It was therefore at the level of the “theory of practice” that training was most needed: situations were too diverse, and patterns of learning too ingrained, to be subject to easy alteration. At this level of practice could be found the learnable skills of advocacy, inquiry, management, and role modification. And these, Bolman and the airlines hoped, would contribute to a quicker revision of a faulty “theory of the situation” when one arose. CRM promised to be that panacea.

Textbooks and airlines leaped at the new vocabulary of CRM. Stanley Trollip and Richard Jensen’s widely distributed Human Factors for General Aviation (1991) graphed “relationship orientation” on the y-axis against “task orientation” on the abscissa. High task orientation with low relationship orientation yields the dreadful amalgam: a style that would be “overbearing, autocratic, dictatorial, tyrannical, ruthless, and intimidating.”

According to Trollip and Jensen, who took United 173, Delta 191, and Air Florida 90 as principal examples, the co-pilot of Air Florida 90 was earnestly inquiring about take-off procedures when he asked about the slushy runway departure, and was (according to the authors) being mocked by Captain Wheaton in his response “unless you got something special you’d like to do,” a mockery that continued in the silences with which the captain greeted every subsequent intervention by the co-pilot.23 Such a gloss assumed that copilot Pettit understood that the EPR was faulty, and it defined the catastrophe as a failure of his advocacy and of the captain’s inquiry. Once again agency and cause were condensed, this time to a social failure rather than, or in addition to, an individual one. Now this CRM reading may be a way of glossing the evidence, but it is certainly not the only way; Pettit may have noted the discrepancy between the EPR and N1, for example, noted too that both engines were reading identically, and over those few seconds not known what to make of this circumstance. I want here not to correct the NTSB report, but to underline the fragility of these interpretive moments. Play the tape again:

F. O. Pettit (CAM-2): “That’s not right… well …”

Captain Wheaton (CAM-1): “Yes it is, there’s eighty”

Pettit (CAM-2): “Naw, I don’t think that’s right …. Ah, maybe it is.”

Wheaton (CAM-1): “One hundred twenty”

Pettit (CAM-2): “I don’t know.”

Now it might be that in these hesitant, contradictory remarks Pettit is best understood to be advocating a rejected takeoff. But it seems at least worth considering that when Pettit said, “I don’t know,” he meant, in fact, that he did not know.

United Airlines put it only slightly differently than Trollip and Jensen when the company used its instructional materials to tell new captains to analyze themselves and others on the Grid, a matrix putting “concern for people” against “concern for performance” (Figure 4).

Figure 4. United CRM Grid (each axis running from Low to High). Source: United Airlines training manual, “Introduction to Command/Leadership/Resource Management,” MN-94, 10/95, p. 9. The reproduced page, headed “The Grid Approach To Job Performance,” explains: “A study of how the Grid framework applies to the cockpit can aid individuals in exploring alternative possibilities of behaviour which may have been unclear. Understanding these concepts can enable a person to sort out unsound or ineffective behavior and replace it with more effective behaviors. The Grid below can be used as a frame of reference to study how each crewmember approaches a job.”

Each of several decision-making elements then gets graphed onto the Grid: inquiry, advocacy, conflict resolution, and critique. Inquiry, for example, comes out this way in the (1,9) quadrant: “I look for facts, decisions, and beliefs that suggest all is well; I am not inclined to challenge other crewmembers,” and in the (9,1) quadrant as “I investigate my own and others’ facts, decisions, and beliefs in depth in order to be on top of any situation and to reassure myself that others are not making mistakes.”24 United’s gloss on Flight 90’s demise is not much different from that of Trollip and Jensen: the first officer made various non-assertive comments “but he never used the term, ‘Abort!’ The Captain failed to respond to the inquiry and advocacy of the First Officer.”25

Not surprisingly, the 747 pilot I quoted before, Robert Buck, registered, in print, a strenuous disagreement. After lampooning the psychologists who were intruding on his cockpit, Buck dismissed the CRM claim that the accident was a failure of assertiveness. “Almost any pilot listening to the tape would say that was not the case but rather that the crew members were trying to analyze what was going on. To further substantiate this is the fact the copilot was well-known to be an assertive individual who would have said loud and clear if he’d thought they should abort.”26 With snow falling, a following plane on their tail, ATC telling them to hurry, and the raging controversy over V1 still in the air, Buck was not at all surprised that neither pilot aborted the launch.

Again and again we have within the investigation a localized cause in unstable suspension over a sea of diffuse necessary causes.27 We find agency personalized even where the ability to act lies far outside any individual’s control. And finally, we find a strict and yet unstable commitment to protocol even when, in other circumstances, maintenance of that protocol would be equally condemned. In flight 90 the final condemnation fell squarely on the shoulders of the captain. According to the NTSB, Wheaton’s multiple errors of failing to deice properly, failing to abort, and failing to engage full power immediately doomed him and scores of others.

I now want to turn to a very different accident, one in which the captain’s handling of a crippled airliner left him not condemned but celebrated by the NTSB. As we will see even then, the instabilities of localized cause, protocol, and the human/technological boundary pull the narrative into a singular point in space, time, and action, but always against the contrary