Byte Back Against Bias

Our World, Our Concern

A student-led campaign for AI ethics & fairness

AI isn't good or bad — it's what we build into it.

Artificial intelligence is one of the most powerful technologies of our time. But bias can creep in through training data, design decisions, and human assumptions — leading to unfair outcomes for real people.

Training Data

Historical biases embedded in datasets shape how AI sees the world.

Design Choices

Every algorithm reflects the priorities and blind spots of its creators.

Human Assumptions

Unconscious biases shape what questions we ask and what we overlook.

Empowering students to think critically about AI.

01

Raise Awareness

Help students and communities understand how AI bias works and why it matters in everyday life.

02

Promote Critical Thinking

Encourage people to question AI systems, ask how they were built, and consider who they serve.

03

Encourage Fairness

Advocate for accountability, transparency, and equity in how AI technologies are developed and deployed.

04

Empower Action

Give students the knowledge and tools to engage responsibly with AI — without fear-mongering or villainizing technology.

Bias Bytes

Training Data

How Biased Data Creates Biased AI

AI systems learn from data — but what happens when that data reflects historical prejudices? Explore how biased training datasets perpetuate discrimination.

Artificial intelligence doesn't form opinions on its own. It learns patterns from the data it's given. If that data contains historical biases — and most real-world data does — the AI will learn and amplify those biases.

Consider facial recognition technology. Audits of commercial systems have found error rates as high as 34% for darker-skinned women, compared with under 1% for lighter-skinned men. Why? Because the training datasets were overwhelmingly composed of lighter-skinned faces.

This isn't a flaw in the technology itself — it's a flaw in how we prepare the data. When datasets underrepresent certain groups, the AI becomes less accurate for those groups, leading to real-world consequences in policing, hiring, healthcare, and more.
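One concrete way to catch this kind of disparity is to measure accuracy separately for each group rather than reporting a single overall number. Here is a minimal sketch in Python; the groups, predictions, and labels are invented purely for illustration:

```python
# Minimal per-group accuracy audit. All data below is invented
# for illustration -- it does not come from any real system.

def group_accuracy(records):
    """Compute accuracy separately for each demographic group.

    records: list of (group, predicted_label, true_label) tuples.
    Returns a dict mapping group name to its accuracy.
    """
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Toy records: the model is perfect on group_a but only 50%
# accurate on group_b -- a gap a single overall number would hide.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

print(group_accuracy(records))  # {'group_a': 1.0, 'group_b': 0.5}
```

Overall accuracy here is 75%, which sounds respectable; only the per-group breakdown reveals that one group bears all the errors.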

What can we do?

  • Demand diverse and representative datasets
  • Audit AI systems regularly for performance disparities
  • Support open research into dataset bias
  • Ask critical questions about where data comes from

The first step toward fairer AI is understanding how bias enters the system. By recognizing the role of training data, we can start building technology that works equitably for everyone.

Fairness

What Does "Fair" Even Mean in AI?

Fairness in AI isn't as straightforward as it sounds. Different definitions of fairness can actually conflict with each other. Let's break it down.

When we say we want AI to be "fair," it sounds simple. But in practice, defining fairness is one of the hardest challenges in AI ethics. There are multiple mathematical definitions of fairness, and they often cannot all be satisfied simultaneously.

For example: should a risk assessment algorithm treat similar individuals similarly, regardless of group membership (individual fairness)? Or should it ensure comparable outcomes across different groups (group fairness)? These two goals can directly conflict.

In criminal justice, risk assessment algorithms used in bail and sentencing decisions have been shown to disproportionately flag Black defendants as higher risk. The creators argued the system was "fair" because its risk scores were equally accurate across races. But critics pointed out that it had very different false positive rates: defendants who never reoffended were far more likely to be wrongly labeled high risk if they were Black.
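This tension can be made concrete with a toy calculation. In the sketch below, two groups get identical accuracy from the same model, yet their false positive rates differ sharply; every number is invented for illustration:

```python
# Toy demonstration that "equal accuracy" and "equal false positive
# rates" are different fairness criteria. All data is invented.

def rates(records):
    """Return (accuracy, false_positive_rate) for a list of
    (predicted_high_risk, actually_reoffended) pairs, using 1/0 labels."""
    correct = sum(p == a for p, a in records)
    negatives = [(p, a) for p, a in records if a == 0]
    false_pos = sum(p == 1 for p, a in negatives)
    fpr = false_pos / len(negatives) if negatives else 0.0
    return correct / len(records), fpr

group_a = [(1, 1), (1, 1), (0, 0), (1, 0)]  # one harmless person flagged
group_b = [(1, 1), (0, 1), (0, 0), (0, 0)]  # no harmless person flagged

print(rates(group_a))  # (0.75, 0.5) -- same accuracy, high FPR
print(rates(group_b))  # (0.75, 0.0) -- same accuracy, zero FPR
```

Both groups see 75% accuracy, so the system looks "equally good" by one definition, while group_a's innocent members are flagged half the time and group_b's never are. Which number counts as fairness is a value judgment, not a math problem.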

Key takeaways

  • There is no single, universal definition of AI fairness
  • Context matters — what's fair in healthcare may differ from what's fair in lending
  • Transparency about which definition is used helps accountability
  • Community input should guide fairness decisions

Understanding that fairness is complex — not a checkbox — is crucial to building AI systems that truly serve everyone.

Real World

AI in Hiring: Who Gets Left Out?

Companies increasingly use AI to screen job applicants. But these tools can systematically disadvantage women, minorities, and people with disabilities.

Imagine applying for your dream job, only to be rejected by an algorithm before a human ever sees your resume. This is the reality for millions of job seekers today.

A well-known case involved a major tech company that developed an AI recruiting tool. The system was trained on resumes submitted over the previous 10 years — most of which came from men. As a result, the AI learned to penalize resumes that included the word "women's" (as in "women's chess club captain") and downgraded graduates of all-women's colleges.
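How does a model "learn to penalize" a word? A crude word-scoring sketch shows the mechanism: if historical hiring data skews male, words that appear mostly on rejected resumes pick up negative weight, whatever their actual relevance. The training resumes and outcomes below are invented for illustration, not taken from any real system:

```python
# Sketch of how a naive word-based screener absorbs historical bias.
# The "training" resumes and hire/reject labels are invented.

from collections import defaultdict

def learn_word_scores(training):
    """Score each word by how often it appears in hired vs rejected
    resumes -- a crude stand-in for a real learned model."""
    scores = defaultdict(float)
    for text, hired in training:
        for word in set(text.lower().split()):
            scores[word] += 1.0 if hired else -1.0
    return scores

# Skewed history: identical activities, but the resumes mentioning
# "women's" happen to come from the rejected pile.
training = [
    ("chess club captain", True),
    ("robotics team lead", True),
    ("women's chess club captain", False),
    ("women's robotics team lead", False),
]

scores = learn_word_scores(training)
print(scores["women's"])  # -2.0: the word itself became a penalty
print(scores["chess"])    #  0.0: neutral, appears in both outcomes
```

The word "women's" carries no information about job performance, yet the model treats it as a strong negative signal because of how the historical data was distributed. Real systems are far more complex, but the failure mode is the same.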

AI hiring tools can also discriminate against people with disabilities. Video interview analysis tools that measure facial expressions, tone of voice, and word choice can systematically disadvantage people with speech differences, hearing impairments, or neurodivergent communication styles.

How to push for change

  • Ask employers if they use AI in hiring and what safeguards exist
  • Support legislation requiring AI hiring audit transparency
  • Advocate for human review at every stage of the hiring process
  • Share knowledge about AI hiring bias with your peers

Everyone deserves a fair shot at opportunity. When AI gatekeeps without accountability, we all lose.

Take Action

5 Ways Students Can Promote Ethical AI

You don't need to be a programmer to make a difference. Here are practical steps every student can take to advocate for responsible AI development.

You don't need a computer science degree to care about AI ethics. As the generation that will live most deeply with AI's consequences, students have both the right and the responsibility to shape how these technologies develop.

1. Educate yourself and others

Start conversations about AI bias with friends, family, and classmates. Share articles, host discussions, and follow organizations working on AI ethics.

2. Question the tools you use

When you use AI-powered apps, search engines, or recommendation systems, ask: Who built this? What data does it use? Who might it disadvantage?

3. Support diverse voices in tech

Advocate for inclusion in STEM education. Diverse teams build more equitable technology because they bring a wider range of perspectives and experiences.

4. Engage with policy

Follow AI regulation efforts in your community and country. Write to representatives, sign petitions, and participate in public consultations on technology policy.

5. Lead by example

If you create technology — whether it's a school project or a startup — build ethics into your process from day one. Consider who might be affected and how.

The future of AI is being written right now. Make sure your voice is part of the story.