Question to Gilles Babinet

Machine learning and AI technologies have led to many useful innovations, such as autonomous cars. But they have also given rise to Fully Autonomous Weapons (FAWs): weapons, such as drones, that can act and kill without being commanded by a human being. As the Buddhist monk Matthieu Ricard explains, AI computers “don’t feel gratitude. They don’t feel hatred. They just do what they’re programmed for, even if they’re quite incredibly smart.” But when they are programmed to learn by themselves, isn’t there a major ethical risk, in the military field for instance? Should FAWs be banned, as biochemical weapons are?

Reference: “Robots Will Never Replace Humanity, Matthieu Ricard Explains”