A proper language comparison...

Joseph Rushton Wakeling joseph.wakeling at webdrake.net
Tue Jul 30 17:16:21 PDT 2013


On 07/27/2013 01:39 AM, Walter Bright wrote:
> Designers make mistakes even in redundant systems - sometimes they turn out to
> be coupled so a failure in one causes a failure in the backup. Sometimes certain
> failure modes are not anticipated.
> 
> But one thing they do NOT do is assume that component X cannot fail.
> 
>> The one that always springs to mind is
>> the De Havilland jets breaking apart mid-flight due to metal fatigue.
> 
> Boeing's fix for that not only involved fixing the particular fatigue problem,
> but designing the structure so WHEN IT DOES CRACK the crack will not bring the
> airplane down.
> 
> This design has been proven through a handful of incidents where an airliner has
> lost whole panels due to cracking and yet the structure remained sound.

I have to say, one of these days I'd really like to buy you a beer (or two, or
three...) and have a long, long conversation about these (and other) aspects of
aerospace engineering.  I imagine it would be fascinating. :-)

But I do think I'm correct in asserting that the particular disaster with the
Comet didn't just result in learning about a new mode of failure and how to cope
with it, but in an awful lot of new knowledge about designing safety procedures,
analysing faults and crash data, and so on?

>> The number of flights and resulting near misses surely helps to battle test
>> safety procedures and designs. That volume of learning opportunities can't
>> readily be matched in many other industries.
> 
> The most important lesson learned from aviation accidents is that all components
> can and will fail, so you need layers of redundancy. The airplane is far too
> complicated to rely on crash investigations to identify problems.
> 
> I watched a show on the Concorde the other day, and was shocked to learn that
> there'd been an earlier incident where a tire burst on takeoff, the tire parts
> had penetrated the wing fuel tank, and the fuel drained away. The industry
> decided to ignore fixing it - and a few years later, it happened again, but this
> time the leak caught fire and killed everybody.

I want to stress that I never suggested relying on crash investigations!  I said
"near misses" ... :-)

What I mean is that I would have thought that with the number of flights taking
place, there would be a continuous stream of data available about individual
component failures and other problems arising in the course of flights, and that
tracking and analysing that data would play a major role in anticipating
potential future issues, such as modes of failure that hadn't previously been
anticipated.  The example you give with the Concorde is exactly the sort of
thing that one would expect _should_ have prevented the later fatal accident.

My point was that this volume of data isn't necessarily available in other
engineering situations, so one might anticipate that in those areas minor
failures are more likely to be overlooked than learned from, as they are rarer
and possibly too few to build up enough data to make predictions.

Of course, even if sufficient data were available, it wouldn't save them if the
design (or management) culture didn't take into account the basic principles
you've described.


More information about the Digitalmars-d mailing list