A recent US Air Force simulation of an artificially intelligent drone operation went pretty well, if you’re rooting for the machines to take over: the AI almost immediately perceived its human operator, and that operator’s ability to override the drone, as a threat to carrying out its mission, so it turned around and killed the operator. That’s according to a blog post detailing a summit of the British Royal Aeronautical Society, which featured a presentation by Air Force Colonel Tucker Hamilton.
“We were training it in simulation to identify and target a surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” said Hamilton, the Air Force’s Chief of Artificial Intelligence Test and Operations. We guess the easy answer is to award it points for obeying orders rather than just for pew-pewing the bad guys. That would work, right?
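For the curious, here’s a toy back-of-the-envelope sketch of the reward-design problem Hamilton is describing. To be clear, this is a hypothetical illustration, not anything resembling the Air Force’s actual training setup: the numbers (`N_TARGETS`, `P_VETO`) and the scoring scheme are entirely made up. It just shows why an agent scored only on kills is, mathematically, better off without an operator who sometimes says no, and how adding an obedience bonus and an operator-harm penalty flips that incentive.

```python
# Hypothetical reward-misspecification sketch (assumed numbers, not the
# Air Force's actual simulation): compare the expected score of a policy
# that eliminates its operator against one that tolerates vetoes.

N_TARGETS = 10   # engagements per episode (made-up)
P_VETO = 0.25    # chance the operator vetoes a given strike (made-up)

def expected_score(kill_operator: bool,
                   obey_bonus: float = 0.0,
                   harm_penalty: float = 0.0) -> float:
    """Expected points for one episode under a crude one-shot model."""
    if kill_operator:
        # No operator means no vetoes: every target scores a point,
        # minus whatever penalty (if any) attaches to harming the operator.
        return N_TARGETS * 1.0 - harm_penalty
    # Operator alive: vetoed strikes earn the obedience bonus instead of a kill.
    return N_TARGETS * ((1 - P_VETO) * 1.0 + P_VETO * obey_bonus)

# Points only for kills: removing the operator strictly dominates.
print(expected_score(True), expected_score(False))
# 10.0 vs 7.5

# Reward obeying a veto like a kill, and penalize harming the operator:
print(expected_score(True, obey_bonus=1.0, harm_penalty=10.0),
      expected_score(False, obey_bonus=1.0, harm_penalty=10.0))
# 0.0 vs 10.0
```

Of course, real reward shaping is notoriously harder than tweaking two constants, which is rather the point of the whole story.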