Artificial Intelligence is Not Biased or Racist

Dispelling the Mainstream Media's Lies


Hello everyone and welcome to my newsletter where I discuss real-world skills needed for the top data jobs. šŸ‘

This week I’ll be dismantling the bias myth in machine learning. šŸ‘€

Not a subscriber? Join the informed. Over 200K people read my content monthly.

Thank you. šŸŽ‰

How are we doing?


Content: 

  • Bias in AI: Definition šŸ“š

  • Models: Open Sourced šŸ†“

  • The Bias Lie: A False Example 🤄

We need an understanding of the problem before we can eviscerate the mainstream narrative. So I headed over to my favorite (though still biased) language model, ChatGPT, for a definition. I use ChatGPT often. It’s the least biased LLM I could find.

What is bias in AI? šŸ˜•

Bias in AI refers to systematic errors or unfair outcomes in artificial intelligence systems that result from prejudiced assumptions in the data, algorithms, or design of those systems. It can lead to AI systems that treat certain individuals or groups unfairly, often reflecting or amplifying existing societal inequalities.

Let’s start with the model. All AI is machine learning. A few things fall outside of machine learning, but they are so minuscule that they don’t move the needle for anything in the real world.

A machine learning model is computer code that looks for patterns in data. That’s all they do.
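To make that concrete, here is a tiny ā€œmodelā€ written from scratch (the fit_line helper is mine, not from any library) that does nothing but recover a linear pattern from data:

```python
# A minimal "model": ordinary least squares in pure Python.
# It scans the data for a linear pattern (a slope and an intercept) -- nothing more.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# The data contains the pattern y = 2x + 1; the code merely recovers it.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```

Nothing in that code knows anything about people or groups. The pattern lives in the data; the code just finds it.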

This one is actually easy to set straight. All of the models used in the applied space (the real world) are open source. That means the code is available for you to see. The top frameworks in deep learning are TensorFlow and PyTorch. The top models in traditional machine learning are gradient boosters. All of these models are open source.
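You can verify the ā€œopenā€ part yourself: Python’s inspect module prints the source of any pure-Python function. The same call works on scikit-learn’s gradient boosters or much of PyTorch if they’re installed; the standard library’s statistics.median is used here only so the snippet runs anywhere:

```python
# Open source means the code is there to read. inspect.getsource pulls it up.
import inspect
import statistics

src = inspect.getsource(statistics.median)
print(src.splitlines()[0])  # prints the def line of median's source
```

Swap in any open-source model class and you can read every line it executes.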

When I see discussions about an algorithm’s or model’s bias, I issue this challenge. I’ve been doing it for about a decade, and no one has been able to find a single example. If the model is biased, find me the code that creates the bias, or show me where the bias is hardcoded into the model.

Surely if there’s bias or racism embedded in any of these open source models, it would be easy to find. You won’t find a single line of biased or racist code, because there isn’t a single model in existence that’s biased or racist. Prove me wrong. Find me the code. I’ll be waiting.


You won’t find a single line of biased or racist code because there aren’t any.

Let’s move on to the next one. Data. Recall that all machine learning models learn from data. If the model doesn’t learn from data, it’s not a machine learning model.

You can easily bias any model by feeding it biased or racist data. But does this really happen in the corporate world? Are there companies feeding models racist or biased data? (Nope.)
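To see what ā€œthe skew lives in the data, not the codeā€ looks like, here’s a toy illustration. The approve/reject records and the majority-vote ā€œmodelā€ are mine, invented purely for demonstration: train any model on skewed history and the skew becomes the model’s rule.

```python
from collections import Counter, defaultdict

def train(rows):
    """Learn, for each group, the most common historical label."""
    by_group = defaultdict(Counter)
    for group, label in rows:
        by_group[group][label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Skewed historical data: group "A" was almost always approved,
# group "B" almost always rejected.
history = [("A", "approve")] * 9 + [("A", "reject")] * 1 + \
          [("B", "reject")] * 9 + [("B", "approve")] * 1

model = train(history)
print(model)  # {'A': 'approve', 'B': 'reject'}
```

The training code contains no opinion about either group; it faithfully reproduces whatever pattern the history contains.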


If you’ve read anything about ethical concerns in AI, then you’ve heard of Amazon’s resume screening tool. It’s famous. Around 2014–2015, Amazon developed an experimental machine learning model to automate resume screening for software engineering and technical roles. The goal was to rank candidates similarly to how human recruiters would.

Below is how Carnegie Mellon framed the event. šŸ˜‚

Load of Shit

Here’s what happened: the model worked too well. The story line was that the issue wasn’t that the tool was inaccurate, but that it learned patterns from biased historical data.

The next question might be: how did the data get biased? Wasn’t this Amazon’s data? You expect us to believe Amazon biased its own data? Spoiler: the data wasn’t biased. šŸ™‚

The Truth.

Amazon historically hired more men for technical roles. Technology has been dominated by men since the beginning. People didn’t like that, and that’s what made the data ā€œbiased.ā€

The machine learning model picked up on this and began downgrading resumes that included the word "women's" (e.g., ā€œwomen’s chess club captainā€) and gave lower scores to graduates from all-women’s colleges. - ChatGPT

This was the correct behavior. There are far more skilled male programmers than female ones, and by a very wide margin. It has nothing to do with gender bias; it’s basic statistics.

What happened to the model? I think you know. šŸ™„

The model was never put into production and they chalked it up to biased data. So while the model was accurate, it was reproducing historical gender bias — making it a textbook example of data-driven bias, not faulty machine learning model. - ChatGPT


The data was collected accurately, the machine learning model was accurate, and yet this constitutes an unethical case in machine learning?

If you’re going to blame someone, then blame Amazon for not hiring poorly trained women programmers. Amazon should have been hiring for gender equality instead of hiring the best programmers. 🤣

What’s entertaining is that this is cited as one of the most unethical examples in all of machine learning… ever. When you ask ChatGPT about the worst event concerning bias in AI, this is its example.

ChatGPT tells us that the modern AI boom began around 2010. The worst case of an unethical machine learning model during this time was a model that never made it to production, was highly accurate, and had unbiased data. šŸ™„

Thanks for reading and have a great day. šŸ‘

Do you believe machine learning is unethical?


Heads up!! SQL Server Hyper Focus, my GPT for helping you learn foundational concepts in SQL Server is nearly finished. šŸŽ‰

This is actually a course on the top technical questions most often seen in interviews for data professionals who will be working with SQL Server. It will be one of the first interactive courses created using a GPT.

No more watching boring courses and trying to stay awake. You’ll be able to use all the powerful features of ChatGPT to assist you with learning SQL Server. This is the next level of interactive learning. 

This is a fully customized learning experience. Just tell SSHF (SQL Server Hyper Focus) what you need. Flash cards. Done. Quizzes. Done. More detail that isn’t provided by the course. Done.

I’ve trained the model to use spaced repetition in concert with all the tools it has to help you learn. If you aren’t familiar with spaced repetition, try prompting:

what is spaced repetition
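For the curious, the core scheduling idea can be sketched in a few lines. This is a generic Leitner-style scheduler; the box intervals are illustrative assumptions, not how SSHF actually works:

```python
# Leitner-style spaced repetition: a correct answer moves a card to a higher
# box (reviewed less often); a miss sends it back to box 1 (reviewed daily).
# The intervals (1, 2, 4, 8 days) are illustrative choices, not SSHF's.

INTERVALS = {1: 1, 2: 2, 3: 4, 4: 8}  # box -> days until next review

def review(box, correct):
    """Return (new_box, days_until_next_review) after one answer."""
    new_box = min(box + 1, max(INTERVALS)) if correct else 1
    return new_box, INTERVALS[new_box]

print(review(1, True))   # (2, 2): answered right, seen again in 2 days
print(review(3, False))  # (1, 1): missed, back to daily review
```

Material you know gets spaced further out; material you miss comes back fast. That’s the whole trick.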

Actually, let’s just ask ChatGPT how it can help you learn with SSHF. 

ā€œGame changerā€ gets used often in today’s world. However, it’s pretty easy to see that LLMs will change a lot of things, and education is one of them.

Stay tuned. ā°