In the process of developing an algorithm, from data collection to the interpretation of the algorithm's output, errors can occur. Moreover, algorithmic decision-making can render biased judgements that lead to continued discrimination against certain groups. Oscar Gandy's work showed that such algorithms can easily lead to "discrimination by design", a problem that seriously threatens the basic principles of a democracy. Algorithmic outputs can thus unintentionally perpetuate historical biases embedded in past patterns, threatening key societal principles such as equality and fairness.

Another example concerns algorithms used to screen candidates during hiring. Errors can also occur when an algorithm is used for such decisions, potentially excluding people from the workforce because of an algorithmic error. This affects a person's life in the long run and goes against the democratic principle of fairness.
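To make the mechanism concrete, here is a minimal sketch, with entirely hypothetical data and a deliberately naive scoring rule, of how a system trained on biased historical hiring records can reproduce that bias in new decisions:

```python
# Toy sketch: hypothetical historical hiring records, where qualified
# applicants from group "B" were mostly rejected in the past.
# Records are (group, qualified, hired).
history = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

def hire_rate(records, group):
    """Fraction of qualified applicants from `group` who were hired."""
    qualified = [r for r in records if r[0] == group and r[1]]
    return sum(r[2] for r in qualified) / len(qualified)

def model(group, qualified, threshold=0.5):
    """Naive 'model' that scores applicants by their group's historical
    hire rate -- so past discrimination becomes future rejection."""
    return qualified and hire_rate(history, group) >= threshold

print(hire_rate(history, "A"))            # 1.0: qualified A applicants always hired
print(round(hire_rate(history, "B"), 2))  # 0.33: qualified B applicants mostly rejected
print(model("A", True), model("B", True)) # True False: the historical bias persists
```

Even though the two applicants are equally qualified, the learned pattern rejects the group that was discriminated against in the training data, which is exactly the feedback loop described above.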


Once these risks have been identified, and once it is clear that algorithmic output is not always error-free, how can policy makers ensure that the use of algorithms does not negatively impact democratic principles such as fairness and equality?
