Originally Posted by
Steve Demuth
Martin's way of saying this is a bit more provocative than how I would put it, but he's essentially right. People who design complex, potentially dangerous systems understand and assume in their engineering that components will fail from time to time, and that operators will make mistakes or fail to perform to requirements from time to time. So in addition to the effort we put into making the technology as reliable as we can, and into training people as well as possible, we design every process and its associated toolset so that the failure of a component or actor won't result in catastrophic harm. That's how you minimize adverse outcomes - injuries, production failures, etc.
Which makes much of the discussion in this thread rather foolish. There are no silver bullets. If your goal is to minimize injuries, you don't choose between safe technique and safe tools; you focus on safe technique using safe tools.
But while Martin is fair in some sense in saying that even highly safety-conscious and successful industries, like commercial aviation, "don't trust" technology, it's a rather misleading oversimplification. We certainly rely on technology, albeit with a healthy understanding that we have to accommodate situations where it might fail. When our surgeons perform a robotic, laparoscopic surgery, they most definitely are trusting that the technology will work, and simultaneously trusting that we've got adequate fail-safe tools and procedures in the event it fails. That is, they do not perform every move in the surgery with a calculation of what they are going to do if the device doesn't behave as expected. They trust - assume - that it will. But they and their team also know what to do if today is the 1 in 100,000, or whatever the rate is, when it doesn't.