Myth #42: Algorithms are always neutral.
Matthias Spielkamp

Myth: Because an algorithm is nothing but a set of instructions applied to data – usually in the form of numbers – it can contain no bias or prejudice that would influence the outcome it produces.

 

Busted: Algorithms are designed by humans; so are algorithmic decision-making systems – from network management that favours certain forms of content over others (net neutrality) to “AI” that is supposed to automatically distinguish hate speech, disinformation or terrorist propaganda from journalism, parody and other forms of legitimate content. (#18; #43) All of these systems use value judgments to arrive at their results: to perform its task, an algorithm needs to “know” which data packets to treat differently from others, or which criteria define a certain mode of expression. Leaving aside the question of whether algorithms will ever be able to assess speech correctly (they will not), it is obvious that such definitions are always developed by human beings with certain intentions. We would not call these definitions neutral, and neither is the algorithm that acts on them.
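
To see how such value judgments end up in code, consider a minimal sketch in Python. The blocklist and the threshold are invented for this illustration – no real moderation system works from so crude a list – but even the simplest filter runs entirely on criteria that a human chose.

# A purely illustrative content filter. The blocklist and the threshold
# are invented for this sketch; both are value judgments made by people,
# not properties of the data the filter processes.
BLOCKLIST = {"slur1", "slur2"}   # which words count as abusive? a human decided
THRESHOLD = 1                    # how many hits are tolerable? a human decided

def flag_for_review(text):
    """Return True if the text trips the hand-picked criteria."""
    hits = sum(word.strip(".,!?").lower() in BLOCKLIST for word in text.split())
    return hits >= THRESHOLD

# The same instructions are applied to every input, yet the outcome still
# reflects whoever wrote the list and picked the threshold.
print(flag_for_review("this sentence contains slur1"))  # True
print(flag_for_review("a harmless sentence"))           # False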

A benevolent reading of the myth is that algorithms subject all inputs to the same set of instructions and therefore act indiscriminately, and that they do not have any intentions of their own. This is true, but obviously beside the point.

With regard to so-called machine-learning techniques, another aspect comes into play. When algorithms are trained on large data sets (“training data”) in order to identify patterns and “learn” from them, those data sets usually contain the biases inherent in human society. If a self-learning system draws conclusions on the basis of biased data, these conclusions will generally be biased, too, and therefore not neutral.
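
A toy example may make this concrete. The data set below is invented: it records past hiring decisions in which applicants from group “B” were rejected regardless of qualification. A pattern-matching system that “learns” from these records will faithfully reproduce the discrimination they contain.

# Invented training data: (group, qualified, hired).
# The "hired" labels reflect biased human decisions of the past.
training_data = [
    ("A", True,  True),  ("A", False, True),  ("A", True,  True),
    ("B", True,  False), ("B", True,  False), ("B", False, False),
]

# A deliberately simple "learner": estimate the hiring rate per group.
def hire_rate(group):
    outcomes = [hired for g, _, hired in training_data if g == group]
    return sum(outcomes) / len(outcomes)

model = {group: hire_rate(group) for group in ("A", "B")}
print(model)  # {'A': 1.0, 'B': 0.0}

# A system that scores new applicants with this model reproduces the
# historical bias with perfect consistency – consistently, not neutrally.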

 

Truth: Algorithms are either directly designed by humans or, if self-learning, develop their logic on the basis of human-controlled and -designed processes. They are neither “objective” nor “neutral” but the outcomes of human deliberation and power struggles.

 


Source: Aylin Caliskan Islam, Joanna J. Bryson, Arvind Narayanan: Semantics derived automatically from language corpora necessarily contain human biases, Computing Research Repository (2017), https://arxiv.org/abs/1608.07187; Alex Salkever, Vivek Wadhwa: A.I. Bias Isn’t the Problem. Our Society Is (2019), https://fortune.com/2019/04/14/ai-artificial-intelligence-bias.