Reading Notes
Tell your troubles to the chatbot
- Woebot is a chatbot that delivers a form of therapy
- Will Woebot drive people who are already isolated further apart?
- Is technology the solution to a problem that is caused by technology?
- Requires you to spend more time on your phone
- A lot of the tech is designed without risks in mind
- Therapy-chatbot data could be used to blackmail people; Woebot depends on trust
- Woebot runs on Facebook Messenger, which raises data-privacy concerns
- Need to think about things from the perspective of the most vulnerable
- AI can stand for artificial intelligence
AI in Medicine needs to be carefully deployed to counter bias - and not entrench it
- “There’s also a risk: these powerful new tools can perpetuate long-standing racial inequities in how care is offered”
- The data used to train models reflects a history of providing different care to white patients
- “People of color are often underrepresented in those training data sets”
- E.g.: one algorithm relied on health care spending to predict health needs, but Black patients historically spent less on care, so under this algorithm they had to be much sicker than white patients to be recommended for extra care (see the sketch at the end of this article's notes)
- The challenge of rooting out racial bias
- A team at Duke overlooked a delay in sepsis diagnosis in Hispanic kids vs. white kids, which could be fatal
- Regulators are taking notice
- FDA policies on racial bias in AI are described as uneven, “with only a fraction of algorithms even including racial information in public applications”
- A requirement to share how data was used to build algorithms would serve as a “nutrition label” for more transparency
- The Office for Civil Rights at the US Department of Health and Human Services proposed a regulation forbidding clinicians from discriminating through clinical algorithms
- Industry is both welcoming and wary of new regulation
- Hospitals and academics don’t want regulation to scare physicians away from using AI
- There is no financial incentive to make sure AI has no bias
- You have to look in the mirror
- Scared about where else this is happening
- Involve anthropologists, sociologists, community members
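A minimal sketch of how the spending-as-a-proxy bias described above can arise, with invented numbers (not from the article): two groups have identical health needs, but one historically spends less on care, so ranking patients by predicted spending flags that group only when its patients are much sicker.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True health need (e.g., number of chronic conditions) -- identical
# distribution across both groups by construction.
need = rng.poisson(lam=2.0, size=n)
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B

# Historical spending: group B spends ~40% less at the same level of need
# (a hypothetical access gap), plus noise.
spend = need * np.where(group == 1, 600.0, 1000.0) + rng.normal(0, 200, n)

# Stand-in for a model trained to predict spending: rank patients by
# spending itself and flag the top 20% for extra care.
cutoff = np.percentile(spend, 80)
flagged = spend >= cutoff

for g, name in [(0, "group A"), (1, "group B")]:
    sicker = need[flagged & (group == g)].mean()
    print(f"{name}: mean conditions among flagged patients = {sicker:.2f}")

# Group B must carry more conditions to clear the same spending cutoff --
# the "had to be much sicker" effect the notes describe.
```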
Artificial Intelligence and algorithmic bias: implications for health systems
- AI and algorithmic bias in health systems: challenges
- Lack of a clear standard of fairness
- Fairness is hard to define in a general way because algorithms are trained on data from a world that is itself biased (see the first sketch at the end of these notes)
- Lack of contextual specificity
- Data is not uniformly available for all socioeconomic groups
- The black-box nature of deep learning
- Actions to counter the risk of algorithmic bias in health systems
- Establish context in which algorithms will be developed and deployed
- Consider different needs of different groups
- Establish processes to counter the risks of bias in algorithm development
- “Create “human-in-the-loop” systems, where algorithmic outputs are passed to a human decision maker with necessary caveats and the human is the ultimate decision maker” (see the second sketch at the end of these notes)
- Balanced development of the discipline of health data science
- “The choice of data, algorithm, performance measures and analysis of algorithmic outputs to optimize performance and minimize bias requires considerable judgment.”
- “Data science teams should be as diverse as the populations that the AI algorithms they develop will affect.”
- “Diversity of discipline and patient representation is also important”
- Transparency and explainability in algorithm development
- “Use clinical expertise to propose relevant counterfactuals for the context in which the algorithm is being developed” (see the third sketch at the end of these notes)
- The role of the public sector in AI and in countering the risk of algorithmic bias in health systems
- Establish standards of fairness
- Regulate algorithms
- Introduce mechanisms to address emerging issues
- Encourage conducive partnerships between public and private sectors
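On the “lack of a clear standard of fairness”: a tiny illustration, with invented counts, of how two common fairness criteria can disagree about the same screening model, so “fair” has to be defined per context.

```python
# Confusion-matrix counts per group for one hypothetical screening model.
groups = {
    "A": dict(tp=80, fp=20, fn=20, tn=80),  # 100 of 200 truly need care
    "B": dict(tp=40, fp=60, fn=40, tn=60),  # 80 of 200 truly need care
}

for name, c in groups.items():
    total = sum(c.values())
    selection_rate = (c["tp"] + c["fp"]) / total  # demographic parity
    tpr = c["tp"] / (c["tp"] + c["fn"])           # equal opportunity
    print(f"group {name}: selection rate = {selection_rate:.2f}, "
          f"true-positive rate = {tpr:.2f}")

# Both groups are selected at the same 0.50 rate, so demographic parity
# holds -- yet group A's true-positive rate is 0.80 while group B's is
# 0.50, so equal opportunity fails. Which criterion counts as "fair"
# depends on the clinical context.
```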
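A second sketch, of the quoted “human-in-the-loop” recommendation. Everything here (names, thresholds, caveat wording) is invented to show the shape of the pattern: the model proposes and surfaces caveats, and a clinician makes and owns the final decision.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmOutput:
    patient_id: str
    risk_score: float  # model estimate in [0, 1]
    caveats: list      # limitations surfaced to the human reviewer

def screen(patient_id: str, risk_score: float) -> AlgorithmOutput:
    caveats = ["training data underrepresents some patient groups"]
    if risk_score > 0.9:
        caveats.append("score near ceiling; calibration is weakest here")
    return AlgorithmOutput(patient_id, risk_score, caveats)

def final_decision(out: AlgorithmOutput, clinician_approves: bool) -> dict:
    # The record keeps both the model's suggestion and the human's call,
    # and marks the human as the ultimate decision maker.
    return {
        "patient": out.patient_id,
        "model_suggests_extra_care": out.risk_score >= 0.5,
        "caveats": out.caveats,
        "extra_care_ordered": clinician_approves,
        "decided_by": "clinician",
    }

suggestion = screen("p-001", risk_score=0.72)
print(final_decision(suggestion, clinician_approves=False))
```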
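And a third sketch, of the quoted counterfactual idea: hold a patient's clinical features fixed, flip only the sensitive attribute, and check whether the model's output moves. The toy model and feature names are invented; it deliberately contains the kind of dependence the check is meant to catch.

```python
def risk_model(features: dict) -> float:
    # Stand-in for a trained model; it (wrongly) lets the sensitive
    # attribute shift the score, so the check has something to find.
    score = 0.02 * features["age"] + 0.1 * features["num_conditions"]
    if features["group"] == "B":
        score -= 0.05
    return score

patient = {"age": 60, "num_conditions": 3, "group": "A"}
counterfactual = {**patient, "group": "B"}  # only the group flips

delta = risk_model(patient) - risk_model(counterfactual)
print(f"score change when only group flips: {delta:.3f}")
# A nonzero delta means the output depends on the sensitive attribute
# itself, not just on clinically relevant features.
```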