Episode 74: How to Avoid Bias in Your Machine Learning Models with Clare Corthell
April 9, 2018 | Jon Prial
Bias exists everywhere. It factors into everything that we do and into virtually every decision that we make. An interesting but problematic side effect of this is that bias can also easily slip into our machine learning models. In this episode, Jon Prial talks with Clare Corthell, a well-known and respected data scientist and engineer, and the founder of Luminant Data, about the issues that can arise when bias enters your models and how to avoid it in the first place. Plus, keep listening to find out how to access our first episode of Extra Impact, where Jon and Clare go deeper into this topic, talking about bias in AI-powered services like Airbnb and the controversial stop-and-frisk policing program.
You’ll hear about:
- The nature of bias in machine learning models and what causes it
- Cynthia Dwork’s work on transparency
- How companies should be thinking about data
- Implementing fairness into AI and machine learning models
- Developing an AI code of ethics
Who Is Clare Corthell?
Clare Corthell is a data scientist and engineer with experience leading teams, building custom products for diverse business needs, developing machine learning and natural language processing systems, and defining corporate data strategy. Over the last few years, she has built and managed Luminant Data, a consultancy focused on building more transparent predictive systems. Her clients include startups, software companies, media businesses, R&D firms, and international nonprofits. Trained as a product designer, she focuses on building intelligent products with iterative, testable prototypes on the path to production. In addition, Clare is the author of The Open Source Data Science Masters, leads initiatives to combat algorithmic harm and improve transparency, and built a ridesharing NGO in Nairobi.