Literature Review

The following review is conducted around the subject of algorithmic inequality in technologies, with machine learning as one of the focal technologies. Bias is treated as the key factor in the ways algorithmic technologies become unequal. The general trend of the literature follows a pattern: key examples of inequality in the technology (tech) industry occur, become publicly known through mass media and digital journalism, and are then explored by academics in research and theory. Because human cognitive bias is itself such a large academic field of study, key psychological theories will be highlighted to focus the definition and perspective used within the key themes of the dissertation. Due to this focus, some foundational sources on cognitive bias and consciousness, such as Freud and Jung (Freud and Rieff, 2008; Jung and Hull, 2014), will not be reviewed in depth owing to their complexity, but will be referenced for the deep foundations they contribute to the field. The psychological reasoning behind modern technologies, although an adjacent field, will not be included in this dissertation. Its addition would defocus the dissertation's aim of comprehending the first-level inequality of bias in algorithms, as opposed to why users allow inequality to persist (Gertz, 2018). This review mirrors the structure of the dissertation itself: first highlighting introductory literature, then proposing psychological literature, then presenting sources of inequality in algorithms, and concluding with literature aiming to solve the proposed issue.

For summative introductory terms, O’Neil (2017) is highly cited throughout the field of algorithmic inequality. Her writing proposes examples of the many ways in which algorithms produce bias in technologies across multiple categories such as age, gender and race. The author provides a user perspective on these examples but does not necessarily offer insight from within the industry. Wachter-Boettcher (2017) highlights some of these examples and builds a larger perspective on biased algorithms within the tech industry. After these examples were highlighted by the previous authors, standardisation bodies such as the IEEE (Standards.ieee.org, 2019) proposed advisory regulations and methods to combat algorithmic instances of inequality. These standards are drawn up by highly respected members of industry and academia working on sustainable design. Authors such as O’Neil and Wachter-Boettcher regard the IEEE-advised regulations as well intentioned, but acknowledge them as short-term solutions to the problem.

For methods of visualising bias in the human species, Darwin (1906) is still renowned as offering the clearest perspective on our progression as a species. His theory of evolution, whereby survival of the fittest is proposed, provides reasoning as to why humans are biased. Brainerd, Stein and Reyna (1998) build on Darwin’s theories to explain conscious and unconscious cognition with reference to evolution. Their research provides a modern platform built from the earlier writings of Freud and Jung (Freud and Rieff, 2008; Jung and Hull, 2014). Although not heavily cited within academia, Suresh and Guttag (2019) give five clear categorisations of bias types relevant to algorithmic equality. Their aim in doing so is to provide clarity to the field, enabling causality to be identified clearly within the industry.

Psychological theory, alongside research from O’Neil (2017) and Wachter-Boettcher (2017), builds a clear picture of causality, highlighting the areas that have led to the scenarios in question and allowing more affirmative solutions to be proposed. Editorial research carried out by Motherboard (Haskins, 2019) delves into the specific details of those scenarios. Articles such as these provide further evidence and are a commonly acknowledged source within academia as a means of publicly highlighting findings from a user’s perspective.

Summative findings researched by Arnold (2018) highlight how the future of algorithmic equality may lie at the end of one of two paths: a change in human cognitive method, or the implementation of bias into technology. Arnold’s further research with constellations has yet to be accepted by others, so it is withheld from this dissertation. Recent studies by Biswas and Murray (2017) and Wright and Goodwin (2002) provide the research for both of these paths, but they have yet to be directly compared against one another, so a summary of future direction can only be hypothesised until further research has taken place.