Does a New Breed of Pilot Error … Fighting Automation to the Ground … Need a New Kind of Pilot?

DC DISPATCH w/Sara Corcoran

DC DISPATCH--Statistically speaking, traveling by plane is the safest mode of transportation. However, when there’s a system challenge in flight, a pilot’s ability to quickly identify and respond to the issue can often be the difference between life and death.

Mid-air accidents are more often than not the result of pilot error, and the global aviation community is expected to reaffirm the safety of the 737 MAX 8.

A similar pattern of malfunctioning controls commanding unsolicited nose-down pitches in flight occurred with Airbus back in 2008. Qantas Flight 72 serves as a good case study in how pilots should respond when faced with what has proved a catastrophic situation for others.

In 2008, Qantas Flight 72, an Airbus A330, was en route from Singapore to Perth when it took a sudden nose dive over the Indian Ocean. At that time, the A330 was a twin-engine, new-generation “fly-by-wire” aircraft, in which conventional manual flight controls were replaced with an automated electronic interface. That system extrapolates desired outcomes from the pilot's inputs, makes adjustments to the ailerons, elevator, rudder, engines, and flaps according to that extrapolation, and locks in a sequence. If a pilot then inputs a command that falls outside the system's performance envelope, the human input is treated as an error and overridden. Both Boeing and Airbus have adopted forms of fly-by-wire control in manufacturing their aircraft. Ironically, and tragically, these multiple, redundant, interconnected systems were created for the very purpose of maintaining stable flight.
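For readers curious what "treated as an error and overridden" looks like in practice, here is a minimal, purely illustrative sketch in Python. The function name and the pitch limit are invented for illustration; this is not Airbus's or Boeing's actual flight-control logic, only the general shape of envelope protection:

```python
# Illustrative sketch only -- not real flight-control code.
# The limit below is a hypothetical value chosen for demonstration.
PITCH_LIMIT_DEG = 15.0  # assumed allowable pitch-command range, in degrees

def apply_envelope_protection(pilot_pitch_cmd_deg: float) -> float:
    """Return the pitch command the system will actually execute.

    If the pilot's input falls outside the performance envelope,
    the fly-by-wire logic treats it as an error and clamps it.
    """
    if pilot_pitch_cmd_deg > PITCH_LIMIT_DEG:
        return PITCH_LIMIT_DEG       # input overridden: too far nose-up
    if pilot_pitch_cmd_deg < -PITCH_LIMIT_DEG:
        return -PITCH_LIMIT_DEG      # input overridden: too far nose-down
    return pilot_pitch_cmd_deg       # input accepted as commanded

print(apply_envelope_protection(8.0))   # within envelope -> 8.0
print(apply_envelope_protection(25.0))  # outside envelope -> clamped to 15.0
```

The point of the sketch is the design choice itself: the computer, not the pilot, gets the final word on what the control surfaces do.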

In command of Flight 72 was Captain Kevin Sullivan, a former Top Gun fighter pilot with the US Navy. Flight 72 was cruising at an altitude of 37,000 feet, a hundred nautical miles from the coast, when the autopilot mysteriously disconnected. There is a secondary autopilot system, and Sullivan was able to activate it, but he was bewildered by what happened next: the aircraft suddenly gave off both overspeed and stall warnings, two contradictory readings. Normally, an aircraft stalls when it is not traveling fast enough to maintain lift. The malfunction was obvious: the airspeed indicator was no longer reliable. Captain Sullivan disengaged the autopilot, which was the correct thing to do.

A few minutes later, the aircraft's nose pitched down. Sullivan manually pulled it back up with the side stick, and the plane leveled off. But then the pattern repeated, this time with a steeper nose-down pitch--a sharper negative Angle of Attack (AOA), aviation-speak for the angle at which the wing meets the oncoming airflow--and all the passengers who weren't wearing seatbelts were pulled out of their seats and pinned to the ceiling of the main cabin by the g-force.

Sullivan once again manually leveled the plane--and the passengers pinned to the ceiling fell to the floor.  Facing a cascade of warning announcements, multiple malfunctioning systems, many injured passengers, and uncertain that the plane would survive the next unsolicited pitch, he requested an emergency landing at Learmonth, Australia. 

With the brakes and spoilers not working and the alarms for both overspeed and stall ringing through the cockpit over a cabin of battered passengers, this was the sort of doomsday scenario that pilots are expected to be prepared for--and American pilots are, by virtue of the rigorous training required for them to earn their wings.

Weakening certification standards overseas, as evidenced by the roughly two hundred flying hours logged by the Ethiopian Airlines first officer, are a serious problem. Modern pilots act in a supervisory capacity at the expense of hands-on flying. Unfortunately, many are not equipped to manage a crisis independently of the automation.

After Sullivan’s successful landing at Learmonth, with the plane in one piece and some passengers battered but all alive, he examined the flight printout and was astonished at what he discerned: like H.A.L., it was as if the A330 had a mind of its own. The automated system had been sending rogue commands to the flight controls, and the nose would automatically pitch down in absolute obedience to the inaccurate readings.

It was critical to determine what inputs were driving that dangerous series of maneuvers. 

A few months further into the investigation of Flight 72, investigators identified the same malfunctioning sequence in three other flights, also A330s, reported off the coast of Australia within the same year. The finding was terrifying, and many rushed to blame the A330, with campaigns calling for the global grounding of the model. Airbus resisted, pointing to the fact that, unlike the Boeing MAX 8s, its malfunctions had caused no fatalities.

The investigation continued. Digging deeper into the code running the systems, investigators ultimately identified the mislabeled inputs that led to the bad AOA readings: they had been mixed up with the altitude data. When the automated protections kicked in, the nose pitched down a total of 10 degrees: a 6-degree correction from one system and 4 degrees from another.
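That chain of failure can be sketched in a few lines of illustrative Python. The decoder, the labels, and the stall threshold below are invented stand-ins; only the 6- and 4-degree correction figures come from the account above:

```python
# Illustrative sketch only: how a mislabeled data word can masquerade as a
# wildly wrong Angle of Attack reading. All names and thresholds are invented.

def decode_aoa(word_value: float, label: str) -> float:
    """Pretend decoder: trusts whatever label is on the data word.

    If altitude data (tens of thousands of feet) arrives labeled as AOA
    (normally a few degrees), the decoded 'AOA' is absurdly large.
    A robust implementation would sanity-check the value's range;
    this one, like the faulty unit, does not.
    """
    return word_value

aoa = decode_aoa(37_000.0, label="AOA")  # altitude value mislabeled as AOA

# Hypothetical protection logic: each system commands its own correction.
corrections = []
if aoa > 30.0:                 # "stall imminent" per the bogus reading
    corrections.append(-6.0)   # first protection system: 6 degrees nose-down
    corrections.append(-4.0)   # second protection system: 4 more degrees

print(sum(corrections))        # -10.0 degrees of unsolicited nose-down pitch
```

The lesson the investigators drew holds even in this toy version: garbage data that passes unvalidated into automated protections becomes a garbage maneuver, executed with perfect obedience.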

Systems and networking issues will continue to affect Boeing, Embraer, and Airbus aircraft alike. Pilots are expected to know what to do on the day when multiple systems malfunction and loud, erroneous electronic warnings flood the cockpit. In that event, as Sullivan knew, the only solution is to disengage the autopilot, shut down the malfunctioning system, and prepare for a manual emergency landing, accepting the risk that the nose might continue to pitch down during the attempt.

The autopilot’s ability to maintain precise control over flight has reduced the pilot's role from an interactive one to a supervisory one. It was thought that this digital overhaul of flight would be the panacea for human error and dramatically decrease the number of accidents, and indeed it has. As Digital Age consumers of aviation services, we want to know that the aircraft we fly have multiple backup systems designed to maintain stable flight. But there will always be a residual error rate, however small it becomes as technology evolves, and we must accept and expect in-flight issues driven by software and system errors. This is the new frontier of aviation accidents, and commercial carriers should mitigate the risk by preparing their pilots for digital meltdowns.

Many pilots could be faced with the improbable situation in which the plane steers itself toward the ground because it is processing or interpreting data incorrectly. Redundant systems certainly minimize this risk, but it can still happen. The main determining factor in such a situation is the competence and experience of those at the controls.

Flying is still the safest mode of transportation, and the 737 MAX 8 should be back up where it belongs in under four months.

 

(Sara Corcoran writes DC Dispatch for CityWatch. She is the Publisher of the California and National Courts Monitor and contributes to Daily Kos, The Frontier Post in Pakistan and other important news publications.)

-cw