NASA Aviation Safety Reporting System at 40 and the Continued Challenge of Timeliness for Safety Reporting

On 16 April 2016 the National Aeronautics and Space Administration (NASA) Aviation Safety Reporting System (ASRS) celebrated 40 years of operation.  Its origins highlight one major challenge of safety reporting, learning and action: timeliness.


The Purpose and Administration of ASRS

The ASRS collects, analyses, and responds to voluntarily submitted aviation safety reports.  ASRS data is used to:

  • Identify deficiencies and discrepancies in the National Aviation System (NAS) so that these can be remedied by appropriate authorities.
  • Support policy formulation and planning for, and improvements to, the NAS.
  • Strengthen the foundation of aviation human factors safety research. This is particularly important since it is generally conceded that over two-thirds of all aviation accidents and incidents have their roots in human performance errors.

ASRS was set up under a Memorandum of Agreement between the Federal Aviation Administration (FAA) and NASA in August 1975.  The FAA funds the programme and provides for its immunity provisions, while NASA sets programme policy and administers its operations.

Similar programmes now exist elsewhere, such as CHIRP (the Confidential Human Factors Incident Reporting Programme) in the UK (which we have previously discussed).  With the lack of a mandatory occurrence scheme in the US, such as that required by Regulation (EU) 376/2014 Reporting, Analysis and Follow-up of Occurrences in Civil Aviation (and the earlier UK Mandatory Occurrence Reporting [MOR] scheme, which also commenced in 1976), ASRS also fulfils some of that role in the US.

The UK CAA had issued an Aeronautical Information Circular on 2 October 1972 proposing expanded reporting requirements.  Its predecessor, the Air Registration Board (ARB), had introduced defect reporting requirements in 1964 and voluntary reporting of other occurrences had been encouraged in part through the UK Flight Safety Committee (UKFSC).  The CAA proposed expanding this to include mandatory ‘incident’ (i.e. occurrence) reporting.

The Origin of ASRS

The origins of the scheme are particularly interesting.

On 1 December 1974, TWA Flight 514, a Boeing 727-231 (N54328), was inbound through poor weather to Washington Dulles in Virginia.  The flight was originally destined for Washington National Airport but was diverted to Dulles due to high crosswinds.

As NASA relate:

The flight crew misunderstood an ATC clearance and descended to 1,800 feet before reaching the approach segment to which that minimum altitude applied.

The aircraft collided with Mount Weather, near Berryville, Virginia (close to a major US government bunker), killing all 92 aboard.

Accident Site (Credit: Unknown)

The NTSB investigation determined the crew’s decision to descend was “a result of inadequacies and lack of clarity” in air traffic control procedures and a misunderstanding between pilots and controllers regarding each other’s responsibilities during terminal operations in instrument meteorological conditions (IMC).

The accident is discussed in the FAA Lessons Learned database.  NASA go on:

A disturbing finding emerged from the ensuing NTSB accident investigation. Six weeks prior to the TWA accident, a United Airlines flight crew had experienced an identical clearance misunderstanding and narrowly missed hitting the same Virginia mountaintop. The United crew discovered their close call after landing and reported the incident to their company. A cautionary notice was issued to all United pilots. Tragically, there existed no method of sharing the United pilots’ knowledge with TWA and other airlines. Following the TWA accident, it was determined that safety information must be shared with the entire aviation community. Thus was born the idea of a national aviation incident reporting program that would be non-punitive, voluntary, and confidential.

The Challenge of Timeliness 

While the inability to learn from the experience of others is disturbing, the most challenging aspect of this story is that the previous occurrence had been just six weeks before.

It is also a reminder that no matter how good our safety reporting culture or how effective our safety management systems are at classifying, trending and analysing safety data (internal and external), it is action that counts.

We have previously commented on the importance of timely reactions here: 7th Anniversary of Newfoundland S-92A Accident.  UPDATE 4 August 2016: See also: Aborted Take Off with Brakes Partially On Results in Runway Excursion.

Forums exist for operators to discuss experience from incidents.  The UK Flight Safety Committee (UKFSC), formed on 29 July 1959, has grown from 9 members to around 100 today, from the UK and overseas.  Airlines share occurrence details under the Chatham House Rule.  No doubt this approach will need to become more widespread.  HeliOffshore is developing an Information Sharing programme for the global offshore helicopter industry.  We have previously discussed how such approaches will become even more important as accidents become rarer.


For more details on ASRS see this paper from 1998 by Dr Charles Billings, who was part of the original NASA Ames ASRS team: Incident Reporting Systems in Medicine and Experience With the Aviation Safety Reporting System.  Billings is not in favour of mandatory reporting systems, something that is clearly problematic in a blame culture.  He does make another point:

Underreporting is a recognized problem. But I’m not at all sure that that is the critical problem.

He goes on to highlight a number of (patient safety) problem areas:

The information that these events occur is already present… What is added by more formal, elaborate (and expensive) incident reporting?

A central question facing us is not really how many there are, but how many is enough. That is to say, there are already many signals that point to a variety of failures. Part of the consensus that needs to be formed for successful incident reporting is consensus about what is a sufficiently strong signal to warrant action?

One past case study involved two incidents (the first in 1972) that highlighted the potential to take off in a Boeing 747 with the leading edge flaps retracted, both occurring before a similar fatal accident.

UPDATE 23 April 2016: The just-issued accident report into the loss of MD-83 EC-LTV in Mali in 2014 comments:

…”horizontal experience feedback” initiatives are being set up between operators through symposia and associations, in order to share the feedback and contact the entire aviation community. At this stage, and taking into account the facts of this investigation, this type of system does not yet seem to be effective or accessible enough to all of the operators in order to warn their crews. The fact that the MD80 is an aeroplane operated by a diminishing number of operators has not facilitated the development of such horizontal feedback on the specific features of this aeroplane.

UPDATE 15 October 2016: Suzette Woodward discusses the book Team of Teams, by General Stanley McChrystal, comparing and contrasting intelligence-led special forces operations with managing safety in healthcare (two not obviously similar domains!).

Team of Teams by General Stanley McChrystal (book cover)

She was struck by a section on intelligence.  McChrystal comments: “Like ripe fruit left in the sun intelligence spoils quickly”.  Woodward comments:

…think of this in terms of data related to Safety. If an incident reporting system was about fixing things quickly then by the time incident reports reach an analyst most of the information is worthless.

McChrystal goes on:

To many, the intel teams were simply a black box that gobbled up their hard-won data and spat out belated and disappointing analyses.

Woodward comments:

We have all heard that one in relation to incident reporting. Our current catch all approach actually creates this problem. Of course with a mass of data every single day anything learnt is going to be belated and disappointing. What this sadly means is that it is often ignored and frankly because of this, time would be better spent doing other things. But that’s a dilemma- we can’t ignore the data but how can all the data be tackled in a timely manner.

McChrystal goes on:

On the intel side, analysts were frustrated by the poor quality of materials and the delays in receiving them and without exposure to the gritty details of raids they had little sense of what the operators needed.

Woodward comments:

This reminded me so much of the way in which we inappropriately compartmentalise safety into neat boxes with people working independently in an interdependent environment. Safety people need to be exposed to day to day experiences and at the same time appreciated and valued for what they bring.

This I would suggest is symptomatic of a larger problem; the way the whole organisation works and dare I say it the overarching system that is there to support them.

It always boils down to people and relationships in the end.

UPDATE 3 December 2016: How Airlines Decide What Counts as a Near Miss quotes the Risk Analysis paper Airline Safety Improvement Through Experience with Near-Misses: A Cautionary Tale.  The researchers claim:

…that airlines learn mostly from incidents that conjure the memory of a prior accident. And that could lead pilots and controllers and mechanics to slip into a frame of mind where they routinize close-calls and last-minute adjustments, a natural human tendency toward “the normalization of deviance,” the researchers wrote.

“It’s the ones that don’t scare you that we want the most attention on,” says Robin L. Dillon–Merrill, a professor at Georgetown University and one of the paper’s three co-authors. The researchers write in the study: the “prior near-misses, where risks were taken without negative consequence, deter any search for new routines” and “often reinforce dangerous behavior.”

Shawn Pruchnicki, a former pilot and faculty member at the Ohio State University Center for Aviation Studies, said:

“…everyone assumes more data is better, but more isn’t better”.  Obsession with data, he says, is part of an obsession with rules, and long prescriptive rules are confining. An aborted takeoff, such as the one in April in Atlanta, may not be the culmination of mistakes, but a symbol of a resilient and flexible system. “It’s all about understanding how the system responds to unfavorable events, how we respond, not the nitty gritty details.”

UPDATE 15 January 2017: Power of Prediction: Foresight and Flocking Birds looks at how a double engine loss due to striking Canada Geese had been predicted 8 years before the US Airways Flight 1549 ditching in the Hudson.

UPDATE 28 October 2020: An Uncoordinated Fall from an A320 at Helsinki: How Just Reporting is Not Enough


Aerossurance is pleased to sponsor the 2017 European Society of Air Safety Investigators (ESASI) 8th Regional Seminar in Ljubljana, Slovenia on 19 and 20 April 2017.  Registration is just €100 per delegate. To register for the seminar please follow this link.  ESASI is the European chapter of the International Society of Air Safety Investigators (ISASI) with a particular focus on current European issues in the investigation and prevention of accidents and incidents.


Aerossurance has extensive air safety management and accident analysis experience.  For practical aviation advice you can trust, contact us at: enquiries@aerossurance.com