It’s plausible that in the near future we will have artificial intelligence that, for given, limited situations (for example: making a medical treatment decision, a resource allocation decision, or an “acquire military target” decision), is able to make a decision as well as or better than a human can 99.99% of the time – unclouded by bias, with vastly more information available to it.

On the other hand, human decision-making is riddled with biases and inconsistencies, and can be heavily impaired by something as simple as fatigue. So why not trust the machine? What are the most prominent ethical challenges in machine learning?




Good question – a little provocative, but that makes it interesting. However, it might be improved by adding the responsibility aspect that runs through other questions: Why not trust the machine? How is responsibility shared when a machine makes a mistake? etc.
