The Franke Program in Science and the Humanities and John Templeton Foundation

Post-Talk Blog Post for Vishnoi Event: Bias in Algorithms

December 10, 2021

Today, society consumes tremendous amounts of information through online channels, from ride-sharing apps to online shopping. These platforms are governed by decision-making algorithms that continuously learn about human behavior and preferences from the data we generate. That data is crucial for artificial intelligence algorithms to make future decisions.

To feed machine learning models with data, subsamples are drawn from a pool of information chosen by humans. Models are then trained on this data to make predictions or decisions about new cases that arrive after the model has been trained and deployed: courts granting bail, banks granting loans, and corporations hiring employees. In the process, algorithms can become brittle and encode human biases. Because the training data reflects biased human decisions, the algorithms end up reproducing those biases.

Because these biases emerge so readily, we need to ensure that the data we feed to a model is adjusted appropriately, using participatory design techniques. With such joint models, we can design algorithms that are fair under certain constraints. Professor Vishnoi describes a very general framework in which users can encode their fairness constraints: identify the bias, fix the data, develop fair algorithms, and study the impact.
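To make the idea of a fairness constraint concrete, here is a minimal sketch (not Professor Vishnoi's actual framework, and with hypothetical data) of one common constraint, demographic parity: the rate of positive decisions should be similar across groups.

```python
def selection_rate(decisions, groups, group):
    """Fraction of positive decisions (1s) received by one group."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups.
    A gap near 0 means the constraint is approximately satisfied."""
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved) and applicant groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
# Group "a" is approved 75% of the time, group "b" only 25%,
# so the gap is 0.5 and an auditor would flag these decisions.
```

An algorithm designer could then require the gap to stay below some threshold while training the model.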

Different algorithms influence us differently: the recommendation systems used on social media platforms shape our decision-making far more than the algorithms behind, say, a car rental service. Explicit bias changes how our algorithms must be designed, which means we must choose the right metric when measuring the fairness of the decisions an algorithm makes. While explicit biases can be mitigated to some extent with modest auditing of the algorithm, implicit biases remain in the data and are more difficult to correct, which creates a need for better data preprocessing techniques.
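One classic preprocessing idea in the fairness literature is reweighing: give each training example a weight so that group membership and outcome look statistically independent to the learner. The sketch below uses hypothetical data and is only an illustration of the technique, not a method from the talk.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Weight each example by P(group) * P(label) / P(group, label),
    so a weighted learner sees group and label as independent."""
    n = len(labels)
    p_group = Counter(groups)           # counts per group
    p_label = Counter(labels)           # counts per label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label)
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical data: group "a" gets the positive label twice as often.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing_weights(groups, labels)
# Over-represented pairs like ("a", 1) receive weights below 1,
# under-represented pairs like ("a", 0) receive weights above 1.
```

Training on the weighted data then reduces the correlation between group and outcome without altering any individual record.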

Human decisions seep into the seemingly objective world of algorithms and make them biased. Because bias can mean very different things in different contexts, it is sometimes impossible to satisfy every fairness constraint at once, in algorithmic settings just as in real ones. Understanding algorithmic bias is essential not only for designing fair or unbiased artificial intelligence algorithms, but also for redesigning the policies that govern our work.

–Zahra Kanji