According to The Simple Economics of Machine Intelligence, machines should be used mainly for prediction. This is a heavy task resting on the machine's shoulders.
Who will bear the responsibility for mistakes? What kind of responsibility would this be? Who will be indemnified?
In the example given in the article (medicine and the prediction of sickness), the machine will be responsible for the diagnosis while people will still decide on the best course of treatment. The "prediction" itself is thus left uncontrolled.
In the example given in The Evolution of Machine Learning, in the third phase the monitoring is still undertaken by humans. Does this mean humans will still have to exercise continuous monitoring of the machine's predictions? And in that case, will they bear responsibility for its mistakes?
This interview (https://www.franceculture.fr/sciences/pourquoi-stephen-hawking-et-bill-gates-ont-peur-de-lintelligence-artificielle) cites another example: what would happen in the case of an autonomous car crash?
This is an interesting question. Prediction, insofar as it is uncertain, allows for mistakes, whether it is made by the machine (under human control) or by a human.
It is the responsibility of humans to control the AI and to be able to judge the machine's predictions. In The Simple Economics of Machine Intelligence, we learn that the value of human judgment will increase, which is directly linked to your question.
For example, the machine's sickness prediction amounts to a diagnosis. Maybe we should rely on it for non-lethal diseases and require a double human check for serious ones?