Question to Gilles Babinet
Machine learning and AI technologies have led to many useful innovations, such as autonomous cars. But they have also given rise to Fully Autonomous Weapons (FAWs): weapons, such as drones, that can act and kill without being commanded by a human being. As the Buddhist monk Matthieu Ricard explains, AI computers “don’t feel gratitude. They don’t feel hatred. They just do what they’re programmed for, even if they’re quite incredibly smart.” But when they are programmed to learn by themselves, isn’t there a major ethical risk, in the military field for instance? Should FAWs be banned, as biological and chemical weapons are?
References: Robots Will Never Replace Humanity, Matthieu Ricard Explains
This is already starting to happen: a group of 116 experts in the field of AI and machine learning has signed an open letter asking the UN to ban autonomous weapons: https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war