The purpose of the guidelines is to ensure that technical contractors adhere to the DoD’s existing ethical principles for AI, Goodman said. The DoD announced those principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the U.S. military. The board was chaired by Eric Schmidt, former CEO of Google, until September 2020, and its current members include Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab.

Yet some critics question whether the work promises any meaningful reform.

During the study, the board consulted a range of experts, including vocal critics of the military’s use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.

Whittaker, now faculty director at the AI Now Institute at New York University, was not available for comment. But according to Courtney Holsworth, a spokesperson for the institute, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. “She was never meaningfully consulted,” Holsworth said. “Claiming that she was could be read as a form of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to claim that a given outcome has broad buy-in from relevant stakeholders.”

Without broad buy-in, can the DoD’s guidelines still help build trust? “There are going to be some people who will never be satisfied with any set of ethics guidelines the DoD produces, because they find the idea paradoxical,” Goodman said. “It’s important to be realistic about what guidelines can and can’t do.”

For example, the guidelines say nothing about the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman noted that the regulations governing such technology are decided higher up the chain of command. The purpose of the guidelines is to make it easier to build AI that meets those regulations, and part of that process is to make explicit any concerns third-party developers may have. “A valid application of these guidelines is the decision not to pursue a particular system,” said Jared Dunnmon, who co-authored them at DIU. “You can decide it’s not a good idea.”


