The Bias We Don’t Even See… Algorithmic Bias

Emily Jensen
3 min read · Mar 14, 2021

The topic this week is whether we should essentially have to approve the algorithms used in the everyday apps and platforms we rely on, to prove they cause no harm in the content they put out there for us to read and possibly be harmed by. After doing this week’s listening and readings, it shocked me how this bias in algorithms happens every day and we barely notice it, while our phones and AI make potentially life-altering decisions for us. The life-altering decisions I mean are job opportunities, schooling or sporting opportunities, and so much more that a basic algorithm on our phone, tablet, or computer can decide for us based on what we search, like, and associate with. The fact that the thing we hold closest to us makes our life decisions is crazy when we as humans should be making them ourselves.

In her TED Talk, Joy Buolamwini digs into facial recognition on phones and how it is really just a computer camera recognizing you with or without glasses, yet failing once something like a mask covers the face, and that failure shows the bias in the algorithm. Of course it doesn’t stop there: racial bias comes into play when these algorithms fail to recognize certain individuals at all. At the end of her talk, Buolamwini says, “So I invite you to join me in creating a world where technology works for all of us, not just some of us, a world where we value inclusion and center social change” (Buolamwini, 2016). Listening to that talk made me take a step back and analyze this issue of letting our technology define us and the decisions we make, and of allowing these disparities and biases to happen right in front of us, most of the time without our knowledge. This is where I agree that algorithms should be tested to prove they cause no harm, because we don’t want racism or anything like it built into the technology we use. I am afraid, though, that if the process develops too slowly, some bias will still slip through the cracks and the problem won’t be solved as completely as it should be.

In her TED Talk, Cathy O’Neil states, “Algorithms don’t make things fair if you just blithely, blindly apply algorithms; Because we all have bias, it means they could be codifying sexism or any other kind of bigotry” (O’Neil, 2017). I found those two sentences interesting because she is basically saying what we’re all thinking: we know some things in life just aren’t fair, and we see bias in almost everyday life without really noticing it until it becomes a big issue. To me, the bigger issue is how easily these biases can be embedded and coded into things we don’t even realize, which is scary in a way. I think this further supports the case for legislating algorithms to prove they cause no harm, because if they aren’t regulated, they can very well be causing harm, and nobody wants that.

Overall, I do believe algorithms should be tested, even though I’m unsure how efficiently that process will resolve the issue. Still, this is a problem that should be solved; algorithmic or not, bias should be dealt with no matter what. I just hope this helps you and other readers see how much these biases shape our lives, and that this recognition leads to change, both algorithmically and legally.

