An Algorithm That Grants Freedom, or Takes It Away
This NYT article on how algorithms are used to make predictions across the justice system is insightful but hardly surprising.
> In Philadelphia, an algorithm created by a professor at the University of Pennsylvania has helped dictate the experience of probationers for at least five years. The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.
https://www.nytimes.com/2020/02/06/technology/predictive-algorithms-crime.html
The problem here isn’t that algorithms are being used in these ways. In fact, it’s likely better to remove the frailties and individual biases so often at play in these life-changing decisions. However, there are two interrelated issues here:
1. Lack of transparency breeds conspiracy. It’s easy to claim that a heartless algorithm is wrong to make these major decisions, but the bigger issue is that we don’t know how, by whom, and with what training data these systems were developed.
2. Biases will be built in. We like to think algorithms don’t have biases, but they were trained on human data, and humans have biases. There are systematic ways to work those biases out of machine learning models (a sketch follows this list), but that’s a non-trivial effort. And per #1, unless we have transparency, we cannot make these systems better.
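To make the second point concrete, here is a minimal sketch of what one of those "systematic ways" can look like: audit a risk score for a demographic parity gap, then apply Kamiran & Calders-style reweighing so that group membership and the outcome label look statistically independent in the training data. Everything below (the group sizes, base rates, score, and threshold) is hypothetical, invented purely for illustration; it does not describe any real system.

```python
# A toy audit-and-mitigate loop: measure demographic parity on a synthetic
# risk-scoring dataset, then reweight examples (Kamiran & Calders-style
# reweighing) so group membership and the label look independent.
# All numbers here are hypothetical and exist only for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)              # protected attribute: 0 or 1
# Hypothetical historical labels that bake in a human bias against group 1.
label = rng.random(n) < np.where(group == 1, 0.35, 0.25)
score = label * 0.5 + rng.random(n) * 0.5       # toy "risk score"
flagged = score > 0.6                           # toy high-risk decision

def parity_gap(decision, group):
    """Absolute difference in positive-decision rates between the groups."""
    return abs(decision[group == 1].mean() - decision[group == 0].mean())

print(f"parity gap in the raw decisions: {parity_gap(flagged, group):.3f}")

def reweighing_weights(group, label):
    """Weight each (group, label) cell by P(group)*P(label) / P(group, label),
    making group and label statistically independent under the weights."""
    weights = np.empty(len(group))
    for g in (0, 1):
        for y in (False, True):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            weights[cell] = expected / cell.mean()
    return weights

w = reweighing_weights(group, label)
# After reweighing, the weighted base rates match across groups, so a model
# trained with these as sample weights can't learn the group/label shortcut.
for g in (0, 1):
    rate = np.average(label[group == g], weights=w[group == g])
    print(f"weighted base rate, group {g}: {rate:.3f}")
```

The catch, per #1, is that an outside researcher can only run this kind of audit if they can see the system’s inputs, outputs, and training data in the first place.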
Governments should open these algorithms up (with proper privacy protections) to researchers who can discover and fix the biases. Only then will we trust them.