‘It’s not going to work’: Keeping race out of machine learning isn’t enough to avoid bias

As more machine learning tools reach patients, developers are starting to get smart about the potential for bias to seep in. But a growing body of research shows that even carefully trained models, including ones built to ignore race, can breed inequity in care.

Researchers at the Massachusetts Institute of Technology and IBM Research recently showed that algorithms based on clinical notes, the free-form text providers jot down during patient visits, could predict a patient's self-identified race even when the data had been stripped of explicit mentions of race. It's a clear sign of a bigger problem: Race is so deeply embedded in clinical information that straightforward approaches like race redaction won't be enough to ensure algorithms aren't biased.
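To make the experimental setup concrete, here is a minimal sketch of the kind of test the researchers describe: strip explicit race mentions from note text, then check whether a simple classifier can still recover self-identified race. Everything below is an assumption for illustration, not the study's actual pipeline; the redaction regex, toy notes, and model choice are placeholders.

```python
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical redaction rule: strip explicit race/ethnicity terms.
RACE_TERMS = re.compile(
    r"\b(white|black|african[- ]american|asian|hispanic|latino|latina|caucasian)\b",
    re.IGNORECASE,
)

def redact(note: str) -> str:
    """Replace explicit race/ethnicity mentions with a neutral token."""
    return RACE_TERMS.sub("[REDACTED]", note)

# Toy stand-ins for a de-identified corpus of clinical notes, each paired
# with the patient's self-identified race. A real experiment would use
# thousands of notes; these exist only so the sketch runs end to end.
notes = [
    "55yo Black male, HTN follow-up, meds refilled, BP 142/90.",
    "62yo White female, chest pain on exertion, referred to cardiology.",
    "48yo Black female, T2DM, A1c 8.1, counseled on diet.",
    "70yo White male, osteoarthritis of the knee, PT ordered.",
    "39yo Black male, asthma exacerbation, albuterol prescribed.",
    "58yo White female, annual physical, labs within normal limits.",
]
labels = ["Black", "White", "Black", "White", "Black", "White"]

# Train a bag-of-words classifier on the *redacted* notes. On real data,
# cross-validated accuracy well above chance would mean race is still
# encoded in the text despite the redaction.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, [redact(n) for n in notes], labels, cv=3)
print(f"Cross-validated accuracy on redacted notes: {scores.mean():.2f}")
```

On this toy data the score will hover near chance; the study's point is that on real clinical notes, classifiers recover race well above chance even after redaction, because race correlates with vocabulary, diagnoses, and care patterns that no keyword filter can remove.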
