Move Forward With Ethical AI in Health
Marzyeh Ghassemi

In a striking example from radiology, Ghassemi showed how AI can still figure out a patient's self-reported race where a human doctor would be unable to make that prediction. "It's not the obvious spurious correlations that you might imagine you could remove from medical imaging data," she said of AI's ability to classify the images according to race. "It's not body mass index, breast density, bone density; it's not disease distribution. This is information that's incredibly deeply embedded in data, and you can't just remove it simply." In fact, you can filter such an image in a variety of ways until it no longer really looks like a chest X-ray, and machine learning models can still tell the self-reported race of the patient.

Outlining the five stages of a pipeline, Ghassemi identified problem selection, data collection, outcome definition, algorithm development, and post-deployment considerations. "The question is, how does this do for all people?" she said, stressing that evaluating on just one sub-section of a population is not enough to produce real transparency about applicable problems and concerns. Looking at this entire life cycle, she said, will help stakeholders move forward with ethical AI in health and address the deeply embedded biases that can otherwise undermine the fairness we want in healthcare systems. Solving these problems, she said, will require diverse data and diverse teams.
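Ghassemi's question "how does this do for all people?" points at a concrete evaluation practice: reporting a model's performance per subgroup rather than only in aggregate. The sketch below illustrates that idea in plain Python; the group labels and numbers are synthetic and purely illustrative, not data from the talk.

```python
# Hypothetical sketch: disaggregated evaluation. A model can look fine
# overall while failing badly for one subgroup, which aggregate accuracy
# alone would hide.

def per_group_accuracy(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy within each subgroup."""
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    by_group = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        by_group[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return overall, by_group

# Toy data: the model is perfect on group A and wrong on every B example.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

overall, by_group = per_group_accuracy(y_true, y_pred, groups)
print(overall)        # 0.5 — looks like coin-flip performance overall
print(by_group["A"])  # 1.0
print(by_group["B"])  # 0.0 — the subgroup breakdown reveals the failure
```

The same pattern extends to any metric (sensitivity, calibration error) and any grouping variable; the design point is simply to make the per-group numbers a first-class output of evaluation.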