Addressing Inequality within Algorithms and Machine Learning

“If we want to build a society that’s fairer, more just, and more inclusive than in the past, then blindly accepting past data as neutral - as an accurate, or desirable, model upon which to build the future - won’t cut it.”

- Sara Wachter-Boettcher, Principle of Rare Union (Wachter-Boettcher, 2017, p. 146)

Wachter-Boettcher (2017) goes on to describe how the tech industry should take responsibility for, and provide transparency about, what our data does and where it comes from, and how information about the assumptions coded into products should be made publicly available. She ends by stating, “Otherwise, we’ll only encounter more examples built on biased machine learning in the future.” This is reasonable to propose, but in practice it may be much harder to persuade individuals, groups or systems to take responsibility for their stereotypes, let alone stand by the justifications for those stereotypes. Lozano (2015) stresses the need for more sustainable mental models and behaviour within corporate structures. He proposes an in-depth analysis of how sectors such as the tech industry can restructure their businesses to promote conformity with sustainability goals. Lozano points out that external factors, or “drivers”, such as public and user opinion, should be taken into consideration when modelling corporate mentality. In terms of algorithms and machine learning, the author suggests that cognitive bias should be a major consideration when designing methods for integrating human behaviour into IT systems such as machine learning models. Arnold (2018) adds to this proposal, stating that “Sustainability challenges cannot be solved by monocausal concepts and thinking, there is the need to stress uncertainty, unforeseen dynamics as well as multi-level effects.” She adds, “Information technology is not only about software, data and document management and human training, concerning human interface. There must be learning sequences for direct analysis, interpretation and adaptation of human interaction in production planning processes.”

Arnold (2018) proposes two possible solutions to this issue: either humans directly adapt their own behavioural strategies to compensate for cognitive bias when interfacing with algorithmic technologies, or the algorithmic systems that interface with humans are built to accommodate and compensate for those biases. The latter solution has already undergone testing. Biswas and Murray (2017) tested two robots designed to communicate verbally with humans, one programmed to exhibit its own biases and one programmed without any. The authors found that robots elicit much better interaction from humans if they display biases of their own, such as “forgetting participant’s names, denying its own faults for failures, unable to understand what a participant is saying, etc”. For the former solution of human behavioural change, Wright and Goodwin’s (2002) research proposes adopting a “think harder” mentality to decision making, whereby individuals construct explicit, conscious decision trees to evaluate where bias may influence a decision. As a direct result, the number of outcomes that rely on biased assumptions is minimised.
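
To make the “think harder” approach concrete, a minimal sketch is given below. The hiring scenario, branch names, probabilities and payoffs are all invented for illustration and do not come from Wright and Goodwin (2002); the point is simply that writing each branch of a decision down as data turns every assumption into something that can be inspected and challenged for bias, rather than left as an unstated intuition.

```python
# A minimal sketch of the "think harder" approach described above: the decision
# is written out as an explicit tree of outcomes, so that every probability and
# payoff estimate is visible and can be questioned for bias. The scenario and
# all numbers are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    probability: float  # explicit, reviewable estimate
    payoff: float       # value of this outcome to the decision maker

def expected_value(branch):
    """Expected value of one decision branch (its outcome probabilities must sum to 1)."""
    assert abs(sum(o.probability for o in branch) - 1.0) < 1e-9
    return sum(o.probability * o.payoff for o in branch)

# Two ways of shortlisting candidates. Because the probabilities are written
# down, a reviewer can ask "what evidence supports this number?" instead of
# letting an unexamined gut feeling drive the choice.
shortlist_by_intuition = [
    Outcome("good hire", 0.60, 100),
    Outcome("poor hire", 0.40, -80),
]
shortlist_by_structured_review = [
    Outcome("good hire", 0.75, 100),
    Outcome("poor hire", 0.25, -80),
]

for name, branch in [("intuition", shortlist_by_intuition),
                     ("structured review", shortlist_by_structured_review)]:
    print(f"{name}: expected value = {expected_value(branch):.1f}")
```

The value of such an exercise lies less in the arithmetic than in the fact that each assumption now exists as an inspectable artefact that others can review and revise.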

The evidence from Biswas and Murray (2017) and Wright and Goodwin (2002) suggests that both of the solutions proposed by Arnold (2018) are viable long-term routes to algorithmic equality. However, before further testing and implementation of these solutions can be carried out, short-term, actionable measures should be put in place to minimise negative escalation.

“people must be trained to think more carefully about the data they’re working with, and the historical context of the data. Only then will they ask the right questions - like, “Is our data representative of a range of skin tones?” and “Does our product fail more often for certain kinds of images?” - and, critically, figure out how to adjust the system as a result.”

- Sara Wachter-Boettcher, Principle of Rare Union (Wachter-Boettcher, 2017, p. 136)

Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, adds to this opinion by offering insight into why biased algorithms have been able to persist in the way they have, stating, “Algorithms will capitalise on convenience and profit, thereby discriminating [against] certain populations, but also eroding the experience of everyone else. The goal of algorithms is to fit some of our preferences, but not necessarily all of them”. Here Knijnenburg highlights aggregation bias, whereby algorithms are designed with a “one-size-fits-all” methodology, allowing tech companies to cut both the production time and the cost of their systems, but to the detriment of minority users (Rainie and Anderson, 2017). O'Neil (2018) discusses this issue, comparing it to a “new industrial revolution”. Here she references the rapid advancements in technology in the early 1900s that led to the exploitation of workers and of society as a whole. It was not until the media exposed these exploitative practices to the wider population that changes in legislation made by government bodies brought fairness and transparency to workplaces such as mines and factories. O'Neil proposes that for algorithms to move towards equality, similar action will have to take place, whereby greater powers regulate the behaviour of corporations and industry. She highlights that while such measures “no doubt raised the costs of doing business”, they “also benefited society as a whole”, suggesting that corporations such as Facebook and Google may likewise have to compromise with wider society if they wish to continue operating as they do today.
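
The question Wachter-Boettcher poses (“Does our product fail more often for certain kinds of images?”) maps directly onto a disaggregated evaluation, a minimal sketch of which is given below. The group labels, predictions and figures are hypothetical placeholders; the point is that a single aggregate error rate can hide a much higher failure rate for a minority group, which is precisely the “one-size-fits-all” aggregation bias Knijnenburg describes.

```python
# A minimal sketch of disaggregated evaluation: rather than reporting one
# overall error rate, the model's failures are broken down per subgroup so a
# "one-size-fits-all" system that fails more often for a minority group
# becomes visible. All data below are hypothetical placeholders.

from collections import defaultdict

def error_rate_by_group(predictions, labels, groups):
    """Return the model's error rate for each subgroup."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        counts[group] += 1
        errors[group] += int(pred != label)
    return {g: errors[g] / counts[g] for g in counts}

# Hypothetical evaluation set: the overall error rate of 25% hides the fact
# that the model fails four times as often for group "B" (50%) as for
# group "A" (12.5%).
predictions = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1]
labels      = [1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
groups      = ["A"] * 8 + ["B"] * 4

overall = sum(p != l for p, l in zip(predictions, labels)) / len(labels)
print(f"overall error rate: {overall:.2f}")
for group, rate in sorted(error_rate_by_group(predictions, labels, groups).items()):
    print(f"group {group}: error rate {rate:.2f}")
```

A check of this kind is also where the question about representativeness belongs: if one group barely appears in the evaluation data, its error rate cannot be trusted in the first place, and the system cannot be adjusted with any confidence.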

The kinds of socially oriented interventions that O’Neil describes from the industrial revolution are already taking place. Institutions such as FAT/ML (Fairness, Accountability, and Transparency in Machine Learning) (Fatml.org, 2019) are providing content for mass-media websites such as Motherboard, which highlight published examples such as “Amazon Pulled the Plug on an AI Recruitment Tool That Was Biased Against Women” (Cole, 2018), spreading awareness of current issues and putting pressure on companies and governments to act on them. Influential figures in the development of algorithmic technologies, such as Elon Musk, founder of SpaceX and Tesla Motors, and Steve Wozniak, co-founder of Apple, have recently signed an open letter titled “Research Priorities for Robust and Beneficial Artificial Intelligence”, endorsing research priorities aimed at keeping AI robust and beneficial to society, in the hope of regulating the technology before it becomes too large to control (Future of Life Institute, 2018).

To conclude, biased methods, data and mental models are corrupting the algorithmic technologies that we use on an increasingly day-to-day basis. Steps towards equality are being made by researchers testing alternative methods of handling data, as well as examining the ways in which humans themselves process interaction and perceive groups within society. With these steps, the tech industry can achieve inclusive design not just among its users, but within its own corporate structures.