
Sample Topic: Algorithmic Bias

Writer: Allison Cui

Instagram: @allison.cui

Email: allisonycui@gmail.com


What is Algorithmic Bias?

With the rise of machine learning and artificial intelligence, decision-making processes increasingly rely on computers to analyze information about populations. Because datasets are now widely available and long sequences of tasks can be executed conveniently by software, work that once depended on human judgment and organizations is shifting to these systems. Algorithms facilitate decision-making in this way, and as a result, machine learning and AI have become key tools in fields such as data analysis and political science over the past decade; their impact on democracy and society has been profound, though not always for the better. The issue of algorithmic bias arises when the data behind these systems turn out to be flawed, unrepresentative, or incomplete for the populations they describe.


Mechanisms of Algorithms

Algorithms are essentially the core of machine learning: they are what fundamentally drive intelligent machines to make decisions by executing a series of steps and instructions, much like following a flowchart. A computer program is told what to do step by step, and algorithms are the basic technique for getting that job done. Machine learning, in turn, is a class of methods for automatically creating models from data. The algorithms involved act as the engines; they are what turn a dataset into a model. They can be used for automated reasoning, data processing, and calculation. Examples of machine learning algorithms include linear regression, logistic regression, learning vector quantization, and many more.
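
To make the dataset-to-model step concrete, here is a minimal sketch, not taken from any of the sources cited below, that fits an ordinary least-squares linear regression; the toy numbers and the use of numpy are assumptions chosen purely for illustration. The fitted coefficient vector is the "model" that the algorithm produces from the data.

import numpy as np

# Toy dataset (made up for illustration): five observations of one input
# feature and one target value.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Add a column of ones so the model can learn an intercept term.
X_design = np.hstack([np.ones((X.shape[0], 1)), X])

# The "algorithm" here is the closed-form least-squares solution; the
# resulting coefficient vector is the "model".
coefficients, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# Use the model to predict the target for a new observation, x = 6.
prediction = np.array([1.0, 6.0]) @ coefficients
print("intercept and slope:", coefficients)
print("prediction for x = 6:", prediction)

The same pattern, data in and fitted parameters out, underlies the other algorithms named above, even when the fitting procedure is iterative rather than closed-form.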


The History of Bias in Data

Algorithmic biases are rooted in American history. Post-emancipation writers contributed greatly to the creation of the racial knowledge that would shape the future of race relations. The historical issue of Africans “imposing their evil past on the nation” spiraled into a contemporary crisis over what place African Americans would fill and how they would enter the modern world as citizens after emancipation. Out of this, black criminality emerged as a fundamental measure of black inferiority. Writers of the time were often viewed with suspicion because their anecdotal evidence was plainly biased, drawing accusations of partiality. This fed a growing desire for “objective” and “scientific” evidence that could reinforce claims of black inferiority while overcoming the credibility gap, and people began to rely on statistics as a way of understanding African Americans’ supposed racial capacity. Statistical data on the growth of the black prison population in the 1890 census were interpreted as definitive proof of an inherently criminal nature, which in turn justified a number of discriminatory laws that targeted and unfairly punished black Americans. This vicious cycle perpetuated the linking of crime to blackness and the stigma attached to it.


The Significance of Algorithmic Bias

Ideas of racial inferiority and crime have become fastened to minority groups throughout history, and even with today’s technology it is extremely difficult to separate bias from truth. In fact, many algorithms exacerbate the problem by embedding inequalities in measures that people treat as accurate and fair. This matters because of the role algorithms play in decision-making for public policy and governance, and the ramifications fall hardest on traditionally marginalized groups. Trained on biased datasets, algorithms end up reinforcing inequalities in hard numbers and statistics, the very things that are supposed to provide an objective perspective.
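
As a deliberately simplified, hypothetical illustration of that last point, the sketch below trains a naive "model" on synthetic historical decisions in which one group was approved far more often than another; every number is made up, and the example stands in for no particular real system. The learned approval rates simply reproduce the historical disparity as a hard rule.

import random

random.seed(0)

# Synthetic "historical decisions": group A was approved about 70% of the
# time, group B only about 30%, regardless of any underlying qualification.
history = [("A", 1 if random.random() < 0.7 else 0) for _ in range(1000)] + \
          [("B", 1 if random.random() < 0.3 else 0) for _ in range(1000)]

# A naive "model" that simply learns each group's historical approval rate.
def learned_rate(group):
    decisions = [label for g, label in history if g == group]
    return sum(decisions) / len(decisions)

model = {g: learned_rate(g) for g in ("A", "B")}

# New applicants are approved whenever the learned rate exceeds 0.5, so the
# historical disparity comes out the other end as a seemingly objective rule.
for group, rate in model.items():
    print(group, round(rate, 2), "approved" if rate > 0.5 else "denied")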


Unexplored Concepts and Limitations

Recent developments in technology, such as artificial neural networks, continue to make identifying and removing bias in algorithms more difficult, as creators struggle to grasp the logic of their own networks. Flawed human judgments may be embedded too deeply in companies’ AI systems to fully trace and eliminate. Yet even if the technical issues are alleviated, treating fairness as a purely technical problem solvable by adding more data poses an ethical dilemma of its own. A number of algorithmic fairness solutions attempt to fit the minority group to the majority, thereby ignoring heterogeneity. As a result, computation alone cannot solve the bias problem, but it can be incorporated into a broader approach to addressing algorithmic bias.
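
The point about heterogeneity can be made concrete with another hedged, synthetic sketch: one global model is fitted to pooled data in which a small group follows a different relationship than the large one. The group sizes, relationships, and use of numpy are assumptions chosen purely for illustration; the aggregate fit looks reasonable while the error for the smaller group is far worse.

import numpy as np

rng = np.random.default_rng(0)

# Majority group (900 synthetic points): target roughly equals 2 * x.
x_major = rng.uniform(0, 10, 900)
y_major = 2.0 * x_major + rng.normal(0, 0.5, 900)

# Minority group (100 synthetic points): a different relationship,
# target roughly equals -1 * x + 20.
x_minor = rng.uniform(0, 10, 100)
y_minor = -1.0 * x_minor + 20.0 + rng.normal(0, 0.5, 100)

# One global least-squares line fitted to the pooled data.
x_all = np.concatenate([x_major, x_minor])
y_all = np.concatenate([y_major, y_minor])
design = np.column_stack([np.ones_like(x_all), x_all])
coef, *_ = np.linalg.lstsq(design, y_all, rcond=None)

def mean_abs_error(x, y):
    predictions = coef[0] + coef[1] * x
    return float(np.mean(np.abs(predictions - y)))

print("majority-group error:", round(mean_abs_error(x_major, y_major), 2))
print("minority-group error:", round(mean_abs_error(x_minor, y_minor), 2))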


Mitigation Proposals

The first and foremost way to combat algorithmic bias is to acknowledge that a given dataset may contain bias. Once the flaws in relying on algorithms to make inferences about a group of people are recognized, the technical work must be more deliberate about eliminating models that may exacerbate explicit discrimination. One proposal is a bias impact statement for algorithm operators, which would help surface and filter out potential biases factored into an algorithm’s decisions. Operators would brainstorm an initial set of assumptions about the algorithm’s purpose and then apply the bias impact statement, where appropriate, to assess both the algorithm’s process and its output. Another method is to use a tool such as IBM’s AI OpenScale to check that fair outcomes are being produced: the platform provides insights on fairness, accuracy, and explainability, automatically detecting unfair or inaccurate results and flagging the factors that may have affected a decision.

The non-technical side of mitigation is just as influential. Companies are demonstrating the importance of understanding and raising awareness of this area, and some are looking to establish new AI ethics positions and AI trainers to teach AI ethics to others. The regular auditing of algorithms is another safeguard: it checks for bias by reviewing the data behind both input and output decisions, giving a third-party view of the algorithm’s behavior.
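
As a minimal sketch of what one step of such an audit might look like, the following compares selection rates across groups in a logged set of input and output decisions. The tiny decision log and the 0.8 cutoff (the familiar "four-fifths" rule of thumb) are assumptions for illustration, not features of any specific auditing tool mentioned above.

# Hypothetical log of inputs (group membership) and output decisions.
decision_log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

# Share of logged decisions in each group that were approvals.
def selection_rate(group):
    rows = [d for d in decision_log if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

rates = {g: selection_rate(g) for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())

print("selection rates by group:", rates)
print("flag for human review" if ratio < 0.8 else "within the audit threshold")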


Closing Remarks

The presentation of data as objective, color-blind, and incontrovertible remains a crucial issue: laws, punishments, and new forms of everyday racial surveillance are introduced into society while computers and algorithms fail to recognize the bias behind them, unfairly suppressing minority and marginalized groups. Although a completely neutral system is unrealistic, transparency and adequate documentation at every stage of the process go a long way toward a strong accountability framework.


Bibliography

Brown, Annie. “Biased Algorithms Learn From Biased Data: 3 Kinds Biases Found In AI Datasets.” Forbes, Forbes Magazine, 8 Feb. 2020, www.forbes.com/sites/cognitiveworld/2020/02/07/biased-algorithms/#afe316e76fc5.

Heller, Martin. “Machine Learning Algorithms Explained.” InfoWorld, 9 May 2019, www.infoworld.com/article/3394399/machine-learning-algorithms-explained.html.

“The Danger of Biased Algorithmic Systems and How to Solve It.” MS&E 238 Blog, Stanford University, mse238blog.stanford.edu/2018/07/adstan18/the-danger-of-biased-algorithmic-systems-and-how-to-solve-it/.

Turner-Lee, Nicol, et al. “Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms.” Brookings, 25 Oct. 2019, www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.