Error avoidance culture

The experienced crew of a Boeing 737 operated by a large US airline is departing for the second time that day on its flight between Miami and Atlanta. The crew is running late and under time pressure; so much for the starting situation.

Since the auxiliary power unit, and with it the generator needed for starting the engines, was defective, the captain starts engine number 1 at the gate using external power and then has the aircraft pushed back. He taxis on one engine to a holding bay in order to start the second engine with the help of engine 1.

This procedure (a cross-bleed start) is the standard approach when the auxiliary power unit has failed.

After the captain had brought the aircraft to a stop in the assigned holding bay, he reached with his right hand for the stop switch of engine 1 and shut it down. The moment he moved the switch, it was clear to him that this was not what he wanted to do at all, and he immediately put the switch back to “on”.

But it was too late: the turbine spooled down, the electrical system switched to emergency battery power, and the cabin went dark. Both navigation systems also failed and had to go through a time-consuming restart.

The cockpit crew now had to request a tug once again to be towed back to the gate. There they received external power again to start engine 1.

The captain informed the passengers and the crew that, due to a technical problem, the aircraft had to return to the gate briefly and that there would be a further delay.

→ Embarrassing, isn’t it? So much for the first part of this incident.

Now transfer this situation to a comparable case, for example in your own production plant.

An experienced machine foreman has made a careless operating error while starting up a production line. Production, which is already working under time pressure anyway, now loses even more time, the delivery to the customer is delayed, and a contractual penalty falls due.

What usually happens in companies now?

→ Variant 1 of the expected scenario:

The foreman invents a story about why he had to stop the production line again and asks those in the know to keep quiet, so that the boss does not berate the entire production department yet again.

Since the colleagues sympathise with the foreman, and tomorrow one of them may well depend on his support to cover up a mistake of their own, they play along.

To those “upstairs”, a technical problem is reported, one that will of course be repaired immediately and with great vigour.

→ Variant 2 of the scenario:

The boss learns of the foreman’s error by chance or because someone tells on him, and he reacts harshly, since the regular customer is now seriously annoyed about the late delivery. The foreman can forget about his next pay rise for the time being, and the department has to work overtime! The same could happen in a hospital or in any other industry.

→ We now return to the second part of the story, in the cockpit of the Boeing 737 on the flight from Miami to Atlanta.

At the gate, the captain and the first officer inform the cabin chief about their mishap and explain that the aircraft itself is completely in order.

The captain asks the first officer to swap roles for the next flight, since he considers it better to break the routine in which his error occurred. The first officer gladly takes over starting the engines and taxiing to the runway, while the captain flies the leg himself. Originally it had been planned the other way round.

Both deal with the situation openly, trustingly and without shame.

The rest of the day, with two further flights, passes without any particular incidents.

At the end of the working day, the captain asks his first officer to write the report about the mishap together with him. The captain attributes his behaviour to a thoughtless routine hand movement after stopping the aircraft with only one engine running: the hand routinely reaches out to shut down the remaining engine, just as it does countless times a day on arrival at the gate. Since he had stopped the aircraft in the holding bay and engine 2 had not yet been started, he fell victim to this “memory effect” of his brain and muscles, as he put it.

He identified two causes:

On the one hand, the time pressure and the resulting stress; on the other, the out-of-the-ordinary start manoeuvre caused by the defective auxiliary power unit. In future, he will think more consciously before reaching for a switch in comparable situations.

He sent the report to his airline and the crew went out to have their usual after-work beer.

A short time later, before a routine simulator check, the captain told his examiner about his mishap and asked him to pay particular attention to his routine hand movements and to give him tips afterwards on how to improve them. The examiner replied: “Gladly, and thank you for the information.” He passed the simulator check, which takes place every three months, safely and confidently.

Two weeks later – the captain had flown many more hours in the meantime – his boss contacted him and asked for a brief conversation.

The captain and his superior responsible for flight operations met for lunch without any apprehension, and the boss thanked him once again for the detailed report about his mishap.

He asked the captain to give a presentation about it at the next NASA flight safety conference.

NASA is responsible for recording all accidents and incidents in the flight operations of US airlines. It had informed the captain’s airline that this type of “error arising from action routines under pressure” was occurring more frequently. The topic was therefore to be discussed at the next conference in order to improve the procedures around such routines; hence the request for presentations from the pilots affected.

“I’d be glad to”, the captain replied. Over the meal, the two talked a little about their children and families and then went back to work in a good mood.

A short time later, the captain was offered a move to the long-haul Airbus A340. He gladly accepted the transfer offer.

How do I know this story?

As a neutral authority, NASA has been publishing these reports on behalf of the US Federal Aviation Administration (FAA) for more than 40 years. With more than 30,000 reports from pilots, cabin crews, maintenance technicians and air traffic controllers collected within the framework of the Aviation Safety Reporting System (ASRS), NASA has played a significant role in improving safety in commercial aviation.

ASRS enjoys an excellent and trustworthy reputation among pilots, crews, air traffic controllers and aircraft technicians.

This system is now being copied almost worldwide, even in countries such as China and Russia.

The basis for this is the non-punitive error reporting and management system that forms part of Crew Resource Management (CRM).

It has not only contributed to a reduction of more than 90% in fatal aircraft accidents since 1990, but has also brought significantly greater satisfaction and efficiency to the teams in the aircraft, in the technical departments and in air traffic control.

Are you interested in learning more about Crew Resource Management (CRM)?

Send us an e-mail at office@lpm.academy or call us at +49 1622518570.