
Podcast of JetBlue 292 comms with MTC


Mitch Cronin


You know what, Mitch... the guy who recorded this is right: the Captain has to make up his/her own mind and take everybody's advice with a grain of salt, including the so-called experts'. The ground people providing this flight with maintenance and other information let this crew down badly by "normalizing" the situation. I would love to know what changed the Captain's mind and led him to do the fly-by at LA.

I wonder, too, if the internal and NTSB (and probably FAA) investigations will drill down deeply enough to examine systemic/organizational issues. In these days of de-regulation, high fuel prices and huge demands on management to keep share prices up (if not the wolves from the door), as much if not more attention is paid to the bottom line as to the "principles of aviation". The management of risk has to be done with informed, intelligent and non-pressured thinking, and with the full realization that a history of "nothing happening" (which can stand for good, solid work being done, or for the normalization of deviance) is no indication whatsoever of future prospects. Constant re-examination, and "checking six" just to make sure, is the only way to ensure a safe operation, and even then one can get bitten hard. It needs as much daily information as possible and a way of conveying and absorbing that information quickly. It needs a "culture of intolerance" when it comes to compromise and lack of knowledge.

If Sambucca's reading this, that's what I meant in my earlier comments about bragging about safety. It would just never occur to a flight safety person to "brag" about a past record or to use past history to predict future performance (sound familiar, investors?).

Only daily examination and checking, checking, checking can keep an organization out of trouble, and that process requires a proactive, financially supported and management-reinforced safety culture. It demands trained and experienced flight safety resources who know that their input is sought and taken seriously, even when it may take away from "the bottom line". A safety culture is one where everybody, from the CEO right down to the front-line people, is continuously asking, "Are we safe?" or "Is what I'm doing safe?", not "How much will this cost?". Besides being the right thing to do, it's just good business. It keeps an organization's name out of the media, it makes questions far easier to answer if an incident happens, and it makes the Westray Bill (Bill C-45) something to live with rather than something to fear.

Finally, a vibrant safety culture that demands the highest standards (and supports those standards by resourcing and paying for them) keeps these kinds of unfortunate recordings of one's own airline's conversations, between crews and employees trying to solve a serious maintenance/safety issue, off the front pages and off the internet. There is no faster way to call an organization into question and create investor and customer doubt than these kinds of broadcasts splashed all over the internet; but there it is, for everyone to hear and judge. It may in some perverse way increase safety, but it's certainly a backwards way of achieving it!

I can't imagine what's going on in JetBlue's offices today as a result of this huge amount of public attention, especially after the snag was signed off, and more broadly at a time when they're doing reasonably well financially. And that Captain, from all I saw, is in my view, the best example around of an airline pilot who earns that $nnn,000 an hour (reference my remark about earning 150,000 bucks a minute, but you'll never know which one).


"And that Captain, from all I saw, is in my view, the best example around of an airline pilot who earns that $nnn,000 an hour"

Yessir, every cent!

I'd like to know what was done after that same fault occurred the day before... and what justification there was for releasing the aircraft.

...very sad to hear that "uhh ya, we're confident it's just an indication fault". I'd also like to know what brought them to that "confidence".

Sounds like all in all it could turn out to be another good case study for human factors training.


Sounds like all in all it could turn out to be another good case study for human factors training.

I really think so, Mitch. Everybody who was offering their expertise and advice was doing their level best to help the crew. Yet the wrong conclusions were arrived at, and the wrong advice was given. So I agree with you... it's a human factors issue, but I also think there's an organizational issue here. This is about training, experience, expertise and plain, effective communication. You'll notice that the Captain, several times, had to make very sure that the ground people understood what he was saying and what was at stake, yet those entreaties seemed ineffective in the end. I did not get the feeling he had a lot of confidence in what was being offered to him.

Whether this is also a story about reduced resources with inappropriate backgrounds/experience is a question which the investigation will resolve...perhaps. "Doing more with less" is a mantra these days.


Don, Mitch... the thing that seems to be lost in the current race to the bottom is that most pilots have likely had situations that make them question a situation from many sides. That is what experience is about: having likely been in a similar situation, or in one that has embarrassed you enough, that you will not accept the assurances of relatively uninvolved and relatively uninterested parties at face value. If you work for an airline like Air Canada, as a pilot, you are probably in the only workforce at the airline that has taken a pay cut to become an employee. Yet the experience and value you bring to the airline is immeasurable when the rubber hits the runway, as it did in this case. There is no way to practice for this particular scenario; it is a culmination and consolidation of years of experience. The outcome was tremendous, but that came not from a simulation but from years of previous experience. What is the lesson for management? Can they learn it?


Mitch:

Thanks for posting the link to the audio file. Your comments with respect to the previous day's events are worthy of note, but as you and Don have said, it seems that human factors analysis may be the best way to come to a real conclusion about how that nosewheel ended up pointing off towards the rhubarb patch.

While I am not a line maintenance guy by any stretch, I have seen a couple of similar but less serious events over the years. From my experience, the nature of the Airbus FBW aircraft may have something to do with it. It is common to see the line maintenance folks clear a fault on the ground through some type of reset, or reboot if you will. In the vast majority of cases, the faults are spurious, and are the result of wandering electrons, so a reset is an appropriate action, just like rebooting your PC. But what about the fault which can't be duplicated on the ground, and only reappears when the aircraft goes airborne? How does one really know that their reset was the best way to handle the fault? I don't know the answer; maybe the experts at Airbus have an idea.

Unfortunately, this "just reset it" methodology may be creeping into the way line maintenance deals with more serious write-ups as well, such as JetBlue's shock absorber fault.
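Just to put that question in concrete terms, here is the kind of check I'd want made before a reset gets signed off. It's purely an illustrative sketch: the log layout, the fault-code string and the "repeats within the last few legs" threshold are all invented for the example, not anything from Airbus or JetBlue documentation.

```python
# Purely illustrative sketch: look back through a (hypothetical) fault-history
# log before deciding that a simple reset is a defensible fix.
# The CSV layout (leg_id, fault_code), the fault code string and the
# thresholds below are assumptions made for this example only.
import csv
from collections import Counter

RECENT_ENTRIES = 10   # how many recent log entries to consider (assumed)
REPEAT_LIMIT = 2      # repeats that should trigger a physical look (assumed)

def needs_inspection(history_file: str, fault_code: str) -> bool:
    """True if the fault has repeated recently and should not just be reset."""
    with open(history_file, newline="") as f:
        rows = list(csv.DictReader(f))            # expects columns: leg_id, fault_code
    recent = rows[-RECENT_ENTRIES:]
    repeats = Counter(r["fault_code"] for r in recent)[fault_code]
    return repeats >= REPEAT_LIMIT

if __name__ == "__main__":
    if needs_inspection("fault_history.csv", "NW-SHOCK-ABSORBER"):
        print("Repeat fault in recent legs: inspect before release, don't just reset.")
    else:
        print("No recent repeat logged: a reset may be reasonable, but record it.")
```

The point isn't the code, it's the discipline: a repeat of the same write-up on consecutive legs should change the decision from "reset and release" to "go look at the airplane".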

To your points, Don, you are so right on all counts. I am troubled by the thought that MOC can and would give confident advice without a thorough analysis of troubleshooting data, not to mention a physical look at the machine. The very idea leaves me wanting. Advising crews as to the status of a fault (real versus faulty indication) just doesn't seem right in my opinion. Similarly, no one on the ground should be suggesting to a crew that they carry on to destination, troubleshoot a problem outside of the normal checklists, or search for and pull a breaker (other than those approved by the manufacturer in the QRH). I fear that doing so could eventually lead to a tragedy. If you believe some of the scuttlebutt about the Helios crash, it may already have. I have no doubt that the folks on the ground at JetBlue had good intentions, and I'm sure the same goes for Helios. And yet the result of their good intentions was inappropriate advice that may have made things worse.

Between the ECAM, the QRH and the FCOM, Airbus gives us a comprehensive set of tools for dealing with technical faults when we're airborne (and so does Boeing for that matter). In my opinion, we should avoid soliciting advice, unless we're totally at a loss to understand some problem. Otherwise, troubleshooting should be done on the ground with the parking brake set.

Jeff


ikfu;

As the race to the bottom proceeds through employee layoffs, departmental change, leadership change, and people simply leaving because they are fed up working 25 hours a day for fewer rewards and benefits, the losses of history, experience, continuity and job "context" (the network, the friends, the connections) in all departments of a major organization, especially one which must manage technological risk, are all "human" and organizational factors which must be recognized and managed effectively to retain margins of safety while balancing sustainable economic performance. It is a herculean task, but in my view you are absolutely correct. This is an aviation fact which must always remain in clear sight.

While one must be very careful not to overplay such factors and over-react (this is a cyclical business, after all, and airlines have dealt with these factors before), airlines have never before experienced fundamental economic, structural, regulatory and political shakeouts like the ones they face today. The pressure to increase profit and share price, or just to retard the retreat, is enormous, and in such circumstances it is a well-understood phenomenon that that kind of pressure transmits through an organization from the leadership to the line. "The Dollars and Sense of Risk Management and Airline Safety" and "Orchestrating the Human Symphony in Flight Operations" are Flight Safety Foundation publications that have been around a while and are even more relevant today than when first published (you have to sign in, but it's free and well worth it). Under such unavoidable stresses, "minding the store" is more critical than ever, because although the rules of the game may change, the principles of aviation do not. One always gets what one pays for, sooner or later.

The article in the Wall Street Journal is well worth reading (WSJ, Sept 19th, 2005). I can post it here if desired, but it's longish.


Hi Jeff;

Although an FDA (flight data analysis) program is primarily a flight safety tool, one of its tremendous benefits is being able to do precisely this kind of troubleshooting. While it isn't a panacea for all technical issues, it's an enormous help in examining intermittent, or "only when airborne", problems. The payback for the initial investment is enormous, and the main problem, really, is achieving buy-in from others. While the Airbus has an extensive self-trouble-shooting system (AIMS, BITE equipment), an FDA program that captures thousands of parameters can greatly assist in these areas, including our favourite topic these days, fuel consumption.
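For what it's worth, here is a rough illustration of what I mean, and nothing more than a sketch: the parameter names (an air/ground flag and a nose-wheel fault flag) and the exported-file layout are invented for the example, since every FDA program has its own parameter dictionary and analysis tools.

```python
# Rough sketch only: scan an exported flight-data (FDA) file to see whether a
# fault flag was ever active on the ground, or only while airborne.
# Parameter names (air_ground, nws_fault) and the CSV layout are invented.
import csv

def fault_airborne_only(export_file: str) -> bool:
    """True if the fault flag was recorded airborne but never on the ground."""
    seen_air = seen_ground = False
    with open(export_file, newline="") as f:
        for row in csv.DictReader(f):             # expects columns: time, air_ground, nws_fault
            if row["nws_fault"] == "1":
                if row["air_ground"] == "AIR":
                    seen_air = True
                else:
                    seen_ground = True
    return seen_air and not seen_ground

if __name__ == "__main__":
    if fault_airborne_only("fda_export.csv"):
        print("Fault appears only airborne: ground resets and BITE checks may never reproduce it.")
    else:
        print("Fault was recorded on the ground too (or not at all).")
```

That kind of quick look won't tell you what is wrong, but it can tell you whether "it checked out fine on the ground" actually means anything.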




