Allen, now president of the Brookings Institution in Washington, D.C., engaged with Edward Felten, Princeton’s Robert E. Kahn Professor of Computer Science and Public Affairs, in a wide-ranging conversation about emerging uses of artificial intelligence on the battlefield.
“This is a capability that has the capacity for great good, but I’m equally inclined to worry that it could easily be a technology that we still don’t understand being applied with great destructiveness,” Allen said.
The conversation, held at Princeton’s Maeder Hall before an audience of students — including numerous ROTC participants — faculty members, and others, was part of the G.S. Beckwith Gilbert ’63 Lectures series and was co-sponsored by the Center for Information Technology Policy.
In response to Felten’s opening question, Allen said warfare has always required a mix of human and technical elements, and the military has often been an early adopter of technology. Embracing artificial intelligence, he said, offers the military several evident advantages, ranging from far more efficient and effective logistics, procurement, and maintenance to better collection of information and target identification.
Key questions revolve around the role of human decision-making in the application of deadly force, including target selection and mission initiation and abort decisions, said Allen, who led NATO forces in Afghanistan and served as special presidential envoy to the Global Coalition to combat the Islamic State terrorist organization before retiring in 2015.
Allen said long-standing ethical principles governing the use of force and military action have served the United States well. In general, he outlined three elements of such decision-making: necessity, distinction, and proportionality.
Allen identified two ways of thinking about human control: being “on the loop,” meaning a human monitors a system to make sure it performs as planned, or “in the loop,” meaning a human actively makes go/no-go decisions.
Allen and Felten, who served in the Obama White House as deputy U.S. chief technology officer, agreed that the Department of Defense’s 2012 policy lays the groundwork for thinking about these issues. “It is a fascinating document in its prescience about how the use of artificial intelligence in combat would be viewed by a responsible country driven by ethics,” Allen said.
In response to a question from an audience member, Allen stressed that military leaders must be taught to resist complacency in embracing advice from automated systems, applying rational skepticism to machine-driven inputs.