Product Lifecycle Report

New UAV Tech Poses Ethical Questions

Half a decade ago, the United States Air Force developed a 30-year roadmap for the development of unmanned aerial vehicles (UAVs), a plan that lays out the future of aircraft from current capabilities to systems that operate fully autonomously.

The autonomous part probably doesn’t surprise anyone familiar with current UAV technology. It’s an obvious evolution, and one that is fast approaching. In fact, the Air Force figures fully autonomous capability will arrive within the next 11 years.

There are already several UAVs in the Air Force inventory with the ability to return to the location of launch when contact is lost with ground controllers. Additionally, the Global Hawk, the Air Force’s premier intelligence-gathering UAV, is programmed before a mission to start, taxi, take off, fly its profile, land, and taxi back to parking. And the Navy has a UAV that can launch from and recover aboard an aircraft carrier.

Whether it’s search and rescue, electronic warfare, air refueling, or reconnaissance, UAVs will play a significant role in military operations going forward.

But we’re still a long way from full autonomy: a UAV not only flying a programmed flight profile, but also making “decisions” when faced with such issues as traffic conflicts, poor weather, enemy threats, and target selection.

Current systems like the Predator and Reaper are unmanned attack aerial vehicles, but they require a two-person team to work together to fly and engage targets with a system in place to ensure that actual weapons release does not occur without human consent.

Yet the Air Force may ultimately replace Reapers and Predators with UAVs that can fly to a specific area and engage targets without human intervention. In fact, its UAV roadmap calls for the development of systems capable of making targeting (friend/foe) and firing (kill) decisions.

The question still to be answered: is that the road we as a nation want to follow?

Even if the system proves to be foolproof—we do after all have current capabilities such as facial recognition software that is pretty remarkable—will we want to eliminate a human from making the final engagement decision?

One might argue that cruise missiles are essentially autonomous once launched. However, they are pre-programmed to hit at a specific set of coordinates, and that decision is controlled by a human, as is the launch.

Let’s assume that before launching one (or more) of these UAVs on a mission, it will be programmed in such a manner that it can only engage a target when 100 percent of its decision-making criteria line up. Surely, that makes it okay… but it’s still a machine making the decision.

The Air Force isn’t the only military organization working on offensive UAVs, and the ethical questions surrounding them aren’t lost on the Department of Defense, which published a 2012 document addressing the need for protocols that prohibit eliminating humans from the decision loop. The document clearly states the need for human interaction, but it also hints that advancing technology may limit human input to pre-mission planning and programming.

And this isn’t the only issue the Air Force is facing with future UAV technology.

With the war in Iraq over and the one in Afghanistan ending, the Air Force has already reduced funding for UAVs in future budgets. Rather than continuing to build its fleet of Reapers (Predator production has already ended), the service is slowing things down, probably realizing that while UAVs were very effective in the low-threat environments of Iraq and Afghanistan, the current systems would not fare so well in less permissive airspace.

Additionally, across-the-board budget cuts are limiting or eliminating several future programs.

The future of U.S. Air Force UAVs is definitely not set in stone, and the service must deeply ponder the ethics of offensive autonomous UAVs before it commits resources and manpower to its roadmap.

Photo by Ethan Miller/Getty Images