Algorithmic bias in healthcare persists today. As new technologies and digital innovations enter the care continuum, leaders must acknowledge that many of the foundations these tools are built upon contain damaging bias that negatively impacts patient care.
AI must be created with caution and reviewed to avoid racism, sexism, and other forms of bias. We'll share examples from various industries and focus on a programmatic, mindful approach to carrying AI forward that is inclusive by design.
In this episode, we speak with Dr. Tania Martin-Mercado, a clinical researcher who advises Microsoft, about her work on implicit bias and racial bias in clinical algorithms. We discuss some of the ways bias has been integrated into clinical decision support tools and to what effect. We also explore which technology tools can be used to detect bias and, most importantly, how organizations can build teams that are most effective at detecting bias and improving AI tools.