In the process of developing an algorithm, from data collection to the interpretation of the algorithm's output, errors can occur; moreover, algorithmic decision-making can render biased judgements that lead to continued discrimination against certain groups. The work of Oscar Gandy showed that such algorithms can easily produce discrimination by design, a problem that seriously threatens the basic principles of a democracy. Because they learn from past patterns, algorithmic systems can therefore unintentionally perpetuate historical biases, threatening key principles of society such as equality and fairness.
Another example involves algorithms used to evaluate candidates in the hiring process. Errors can also occur when an algorithm is used to make decisions, and an algorithmic error could exclude people from the workforce. This can affect a person's life in the long run, and it would be against the democratic principle of fairness.
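As an illustration of how such bias in hiring outcomes might be detected, here is a minimal sketch (not any specific system discussed here, and using entirely hypothetical data) that compares selection rates across groups using the "four-fifths rule" commonly applied in employment-discrimination analysis:

```python
# Minimal sketch: checking a hiring algorithm's decisions for
# disparate impact via the "four-fifths rule". All data below is
# hypothetical and for illustration only.

def selection_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 hire decisions."""
    return {group: sum(d) / len(d) for group, d in decisions.items()}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    A value below 0.8 is a common red flag for adverse impact."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from an automated screening tool
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 selected -> rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 selected -> rate 0.25
}

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 = 0.33
if ratio < 0.8:
    print("warning: possible adverse impact against a group")
```

A check like this only flags unequal outcomes; it cannot by itself explain whether the disparity stems from the model, the training data, or legitimate job-related factors.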
Once these risks have been identified, and once it is clear that algorithmic output is not always error-free, how can policy makers prevent the use of algorithms from negatively affecting democratic principles such as fairness and equality?
I definitely see your point. Are you saying that, in order to overcome this problem, we should leave the final decision in the hands of a human being?
That is indeed one alternative to relying entirely on algorithmic decisions. However, it has been argued that "algorithmic recommender systems constitute a very powerful form of choice architecture, shaping user perceptions and behavior in subtle but effective ways through the use of 'hypernudge' techniques, undermining an individual's capacity to exercise independent discretion and judgment" (Karen Yeung, "Algorithmic Regulation: A Critical Interrogation"). In the end, the human's decision would not be independent, and, according to the author, the same problems would arise as before.
Thanks Sabina, your question is much clearer now!
Just a thought about it: isn't the solution to always keep a "human in the loop" to check the algorithm's results?
I don't have the answer personally, as I think ensuring these basic principles is indeed difficult. But thanks for your question!