Politicians and safety organizations are leaving no stone unturned as they investigate the tragic crash of Germanwings Flight 9525, but as mounting evidence suggests co-pilot Andreas Lubitz deliberately flew the plane into a mountainside, the usual talk of preventing such accidents has fallen short. It seems there is a point past which problems cannot be solved.

A basic level of public trust is essential to any undertaking that involves a community of people, and Lubitz violated that trust in grotesque and horrifying fashion. Is it crass to speak of mechanical failsafes for that trust, namely, automating planes so that no human could torpedo them into the ground?

The Pentagon is already leading the way on autonomous flight technology, equipping its F-16s to take off, land, and perform mid-air maneuvers on their own. The purpose is to automate training runs for F-16 pilots, but the technology has obvious applications elsewhere.

The reinforced cockpit door that Lubitz locked was itself intended as one such failsafe, designed to keep hijackers out in the event of an attempted takeover. Who could have suspected that a pilot, after years of training, would use that lock to bar his fellow pilot from the cockpit while he deliberately crashed the jet? That kind of suspicion seems deeply cynical.

Nonetheless, automation is on the rise in areas previously regarded as beyond the reach of machines. Programming techniques called "deep learning" already allow computers to grasp difficult subjects like tax law that once required a professional to understand (and now require only a TurboTax download). Next up, according to Andrew McAfee of the MIT Sloan School of Management, are lawyers, writers, and psychiatrists:

"The median American worker doesn’t do manual labor anymore. The average American worker is not a ditch digger. But they’re also not doing incredibly high-end particle physics or data science. They are what you’d call the somewhat routine knowledge worker. That is right in the sweet spot of where technology is making its greatest inroads."

But are we prepared to take humans out of the equation entirely, fully trusting machines to fly our planes and drive our cars? If automation is motivated by a corrosion of trust in human capabilities, would such a transition be worth it?

Read more at The New Yorker.