By Jim Garamone — (DOD News) — Washington, D.C. — August 26, 2016.
Maybe the idea behind the “Terminator” movie franchise isn’t so far-fetched.
In the “Terminator” films and TV shows, a worldwide computer defense network becomes self-aware, decides humans are the enemy, and attacks.
Scientists around the world are currently working on artificial intelligence, autonomous vehicles, uses for big data and other innovations and technologies that pose ethical questions.
(Photo: a mobile detection assessment response system on patrol.)
DoD is examining those questions, said Air Force Gen. Paul J. Selva, the vice chairman of the Joint Chiefs of Staff. He spoke about some of these ideas yesterday with Kathleen Hicks, the senior vice president of the Center for Strategic and International Studies.
The idea of computers driving cars, landing airplanes, delivering packages or exploring planets is already here. Singapore is testing driverless taxis, and Uber plans to begin testing self-driving cars in Pittsburgh shortly.
There are a number of autonomous vehicles on Mars.
The U.S. military has a fleet of remotely piloted vehicles that operate worldwide, and oceanographers have been using remotely piloted submersibles for years.
Autonomous Weapons Systems
But the idea of autonomous weapons systems poses some real ethical challenges, Selva said. DoD is working with experts on ethics -- both from inside and outside the department -- on the issues posed, he said. They are looking at the pitfalls of what happens when technology is brought into the execution of warfare.
“I am not bashful about what we do,” Selva said. “My job as a military leader is to visit unspeakable violence on an enemy. In the end, when you send me or any soldier, sailor, airman or Marine from the United States … out to defend the interests of our nation, our job is to defeat the enemy.”
How service members accomplish the mission is governed by laws and conventions, he said. “One of the places where we spend a great deal of time is determining whether or not the tools we are developing absolve humans of the decision to inflict violence on the enemy. That is a fairly bright line that we are not willing to cross.”
A true autonomous weapons system would be programmed to perform a mission, with the decision to use deadly force left to the on-board computer within its programmed parameters. That is unacceptable to the United States military, Selva said.
“We have insisted that as we look at innovations over the next couple of decades that one of the benchmarks in that process is that we not build a system that absolves our human leaders of the decision to execute a military operation, and that we not create weapons that are wholly and utterly autonomous of human interaction,” he said.
But the U.S. decision does not mean an enemy would follow suit.
In the world of autonomy, a completely robotic system that can make a decision on causing harm is already possible, he said. “It’s not terribly refined, it’s not terribly good, but it is here,” the general said. “As we develop systems that include things like artificial intelligence and autonomy, we have to be very careful that we don’t design them in a way where those systems actually absolve humans of that decision.”
The discussion needs to occur, the general said, and the United States must be prepared for nations or nonstate actors to violate any convention that the world draws up with respect to autonomous weapons.
“Until we understand what we want the limits to be,” Selva said, “we don’t have a baseline to use to determine if someone is moving down the path of violating a convention that could create something like a Terminator that adds an incredible amount of complexity and with no conscience to what happens on the battlefield.”
(Follow Jim Garamone on Twitter: @GaramoneDoDNews)