Last month, the U.S. Army put out a call to private companies for ideas about how to improve its planned semi-autonomous, AI-driven targeting system for tanks. In its request, the Army asked for help enabling the Advanced Targeting and Lethality Automated System (ATLAS) to “acquire, identify, and engage targets at least 3X faster than the current manual process.” But that language apparently alarmed people worried about the rise of AI-powered killing machines. And with good reason.
In response, the U.S. Army added a disclaimer to the call for white papers, a change first spotted by the news website Defense One. Without modifying any of the original wording, the Army simply appended a note explaining that Defense Department policy hasn’t changed. Fully autonomous American killing machines still aren’t allowed to go around murdering people willy-nilly. There are rules—or policies, at least. And its robots will follow those policies.
Yes, the Defense Department is still building murderous robots. But those murderous robots must adhere to the department’s “ethical standards.”