Defence aims at AI ethics
Australia’s Defence department is exploring the use of ethical AI systems.
Defence has released a new technical report on ethical artificial intelligence systems.
It says that AI technology could be used to boost Defence capability and reduce risk in military operations.
However, it also warns that significant steps are needed to ensure the technology does not result in “adverse outcomes”.
“Defence’s challenge is that failure to adopt the emerging technologies in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harms,” the report says.
So, Defence says, now is the time to explore some of the dozens of topics related to military AI, such as transparency, safety, accountability, human factors, supply chain, and misuse and risks.
Chief defence scientist Professor Tanya Monro says AI could remove humans from high-threat environments and deliver deeper, faster situational awareness.
“Upfront engagement on AI technologies, and consideration of ethical aspects needs to occur in parallel with technology development,” she said.
Air Vice-Marshal Cath Roberts, head of air force capability, says she expects AI and human-machine teaming will be “pivotal” for air and space power in the future. But ethical and legal issues must be resolved while the technology is being developed, she said.
“This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control,” she said.
“Practical application of these tools into projects such as the Loyal Wingman will assist Defence to explore autonomy, AI, and teaming concepts in an iterative, learning and collaborative way.”
The full report is available online.