We read a lot in the news about self-driving cars and airplanes on autopilot. Reports suggest automation is the way of the future and that poor human decision-making will be eliminated.
Let’s just say I’m a bit skeptical of those claims. It may become possible in a few more years, but there’s always the cost factor. A good case in point is the trouble with the Boeing 737 Max. The 737 Max was, in many respects, a new airplane, and designing a totally new airplane would have meant a totally new aircraft certification process for the manufacturer, which is extremely time-consuming and costly. For the airlines, a new aircraft would have meant new type ratings (the license to fly a specific model) for all their pilots. Furthermore, the selling point of the 737 line is that any pilot rated on a 737 can fly everything from the oldest model to the newest without a new type rating.
What Boeing did, and the FAA allowed, was essentially a software fix that made the new aircraft’s controls mimic the behavior of older 737s. The short version is that the software engineering didn’t quite work out, and it will probably be years before all the legal and regulatory dust settles.
The same will ultimately be true of self-driving cars and airplanes on autopilot. Decisions will be made for cost reasons that, unfortunately, may have negative safety implications.
The other reason I believe we will never have totally automated vehicles, whether in the air, at sea or on land, is that you cannot account for every possible scenario, and that is exactly what conventional automation programming requires you to do. Artificial intelligence would have to become sophisticated and powerful enough to analyze unforeseen circumstances, make decisions when problems arise and, in effect, fully mimic human thought processes.
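To illustrate the point about enumerating scenarios, here is a minimal, purely hypothetical sketch (the function and scenario names are my own, not from any real avionics system) of how rule-based automation works: every response must be written in ahead of time, and anything the programmers didn’t anticipate falls through to a catch-all.

```python
# Hypothetical, illustrative sketch of rule-based automation.
# Each entry is a scenario someone thought of in advance; the
# system has no way to improvise a response to anything else.
def emergency_response(event: str) -> str:
    handlers = {
        "single_engine_failure": "divert to nearest suitable airport",
        "cabin_depressurization": "descend to 10,000 feet",
        "landing_gear_fault": "run gear-extension checklist",
    }
    # Unforeseen events fall through to a generic catch-all --
    # the code cannot invent a Hudson-style ditching on its own.
    return handlers.get(event, "unhandled scenario: alert human pilot")
```

A scenario on the list gets its scripted response; a dual engine failure from a bird strike, if nobody programmed for it, lands in the catch-all and the humans are back in the loop.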
The case for why we have pilots in airplanes is made in recent news reports. If you’ve read my prior columns, I’ve occasionally mentioned Captain Sullenberger and the “Miracle on the Hudson” where he and his flight crew safely ditched an aircraft when both engines failed after being struck by geese on takeoff. The salient point is that Captain Sullenberger and his crew had seconds to make the best decision they could to save the lives of their passengers.
When everyone essentially walks away after a forced landing on water – that’s the right decision.
A similar accident just happened in Russia. A Ural Airlines airliner carrying 226 passengers struck a flock of birds on takeoff from a Moscow airport, causing both engines to malfunction. The pilot, Damir Yusupov, successfully landed the aircraft gear up in a cornfield beyond the airport.
Any time everyone walks away from a gear-up landing in a cornfield after losing both engines on takeoff – that’s another right decision.
Moscow news reporters are calling this pilot a hero, as well they should.
Here’s the challenge with automation and even artificial intelligence: sometimes, the “by-the-book” decision is not the right decision. In the case of the Hudson River ditching, plenty of people have second-guessed Captain Sullenberger, saying he should have gone for an airport landing instead of ditching in the Hudson. Maybe so, but the point remains that everyone walked away. Based on the results, the decision Captain Sullenberger made was good enough.
The same thing is possibly true in Moscow – everyone walked away from the landing and the decisions made were good enough.
Humans sometimes operate on intuition, or “gut feelings,” if you will. There may be no apparent logic behind those gut decisions, but often the right call is made. As a career IT guy, I can’t fathom a way to program for “hunches” or “gut feelings” – yet I’ve seen them work, and used them myself, on programming problems.
Maybe in some science fiction-like future, artificial intelligence will advance to the point that we can create artificial hunches or educated guesses. Another factor that is difficult to program for is the human will to live. When humans are under life-threatening pressure, we can get very unpredictable and creative in trying to escape whatever threat we’re facing. Sometimes it works and sometimes it doesn’t, but it’s there. So, how do you program for the will to live?
These concerns are why, unless we are incredibly foolish, we will always have humans in control. Technology and devices like autopilots are great tools, but they are exactly that – tools. QCBN
By Lance Leighnor
Lance Leighnor has four decades of experience in general aviation aircraft, and active management of rental aircraft since 2011. Lance is the managing member of Leighnor Aircraft. He can be reached by phone at 928-499-3080, by email at lance@LeighnorAircraft.com or via the Leighnor Aircraft website at LeighnorAircraft.com.