Policing in the digital age

A number of factors have recently pushed the public debate over racial profiling and systemic racism in policing to the forefront across the world. The killing of George Floyd and the furore that followed was a large contributor to this push. Joe Biden and Kamala Harris have promised to eliminate rampant systemic racism from American policing. While racial profiling itself is neither new nor specific to the US, there is a troubling dimension to the situation that threatens to entrench systemic racism in policing practices for the long term.
As technology has advanced, facial recognition and predictive algorithms have become much-coveted tools for policing worldwide. In the US in particular, police departments are increasingly adopting digital technology in the hope of making law enforcement both more accurate and more efficient. A noble initiative, to be sure, and on the face of it a well-reasoned one. After all, the cliché is that data never lies.
Not so, however. Predictably, there are many problems, the biggest of which is that this new brand of predictive policing is based on machine learning. Ample evidence over the years has shown that machine learning models can imbibe whatever biases are present in the data they are trained on. The problem has only compounded: data produced by a flawed first generation of predictive tools has been used as the basis for subsequent generations of such tools. Because they are built on pre-recorded data, the machines reproduce older biases in data collection. The problem is particularly acute in the US. Consider figures from the US Department of Justice itself showing that a black person is twice as likely to be arrested as a white person, and five times more likely to be pulled over by the police without cause. Now consider the result of feeding data shaped by this bias into a machine learning programme that then produces an algorithm for predictive policing. The data bias has clear potential to aggravate the situation.
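To make that feedback loop concrete, here is a deliberately simplified, purely illustrative simulation, not modelled on any real police system: two neighbourhoods with identical underlying offence rates, but a historical record that over-counts one of them. A naive predictor that sends patrols wherever past arrests were recorded never discovers that the true rates are equal; the skew in the original data is carried forward, which is how one biased generation of tools seeds the next. All names and numbers here are hypothetical.

```python
import random

random.seed(0)

TRUE_OFFENCE_RATE = 0.05                 # identical in both neighbourhoods
recorded_arrests = {"A": 60, "B": 40}    # historical record over-counts A
TOTAL_PATROLS = 1000

for generation in range(5):
    total = sum(recorded_arrests.values())
    # "Predictive" step: allocate patrols in proportion to recorded arrests.
    patrols = {n: round(TOTAL_PATROLS * c / total)
               for n, c in recorded_arrests.items()}
    # Arrests can only be recorded where officers are actually sent.
    for n, p in patrols.items():
        recorded_arrests[n] += sum(
            random.random() < TRUE_OFFENCE_RATE for _ in range(p)
        )
    share_a = recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"generation {generation}: share of recorded arrests in A = {share_a:.2f}")
```

Even though the true offence rates are equal, the recorded share never drifts back towards 50 per cent: the model keeps confirming the pattern it inherited.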
Another aspect of this is the likelihood that such software can misidentify you, because it is trained to work best on a particular set of features. In 2018, an MIT study of three commercially available gender recognition systems showed error rates of up to 34 per cent for women of colour, nearly 49 times the error rate for white men, who were, predictably, the group the software had been trained on.
Last week, the UN also expressed deep concern over the growing use of big data in policing and its potential for disproportionate negative effects on minorities. Its concerns are largely based on the example of how such systems increased discrimination when used in the profiling tools many private companies rely on in their hiring processes.
Of course, the problem does not end with policing and hiring. Big data and predictive algorithms are becoming more widespread by the moment; already, everything from loan applications to targeted social media apps depends on them. As regards policing itself, it is alarming to note that the Black Lives Matter movement has not slowed the use of biased predictive algorithms in American policing; if anything, it has stimulated their growth. As discussions over defunding and further regulating the police take centre stage, so does the pressure on police departments to plug the gaps with measures like predictive policing.
All is not lost, however. Unlike human bias, machine bias can be course-corrected by teaching algorithms to identify, and then tackle, bias in the data they are presented with. Several organisations across the world are already working to identify biases in existing algorithms. There are also larger regulatory efforts by governments, such as the General Data Protection Regulation in the EU, which, among other things, restricts organisations from using data that encodes specific biases to make automated decisions.
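As a sketch of what tackling bias in the data can look like in practice, one widely discussed preprocessing technique, reweighing in the spirit of Kamiran and Calders, assigns each training record a weight so that the sensitive attribute and the outcome label become statistically independent before a model is trained. The toy records below are invented for illustration and do not come from any real dataset; this is only one of several approaches the organisations mentioned above might use.

```python
from collections import Counter

records = [
    # (sensitive_group, label); label 1 = flagged as "high risk" in past data
    ("black", 1), ("black", 1), ("black", 1), ("black", 0),
    ("white", 1), ("white", 0), ("white", 0), ("white", 0),
]

n = len(records)
group_counts = Counter(g for g, _ in records)
label_counts = Counter(y for _, y in records)
pair_counts = Counter(records)

# Weight = expected count if group and label were independent / observed count.
weights = {
    (g, y): (group_counts[g] * label_counts[y] / n) / pair_counts[(g, y)]
    for g, y in pair_counts
}

for record in records:
    print(record, round(weights[record], 2))
```

In the weighted data, the historically over-flagged group no longer carries disproportionate weight on the "high risk" label, so a model trained on it cannot simply learn group membership as a proxy for risk.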
This is a matter of interest to India as well. Over the last decade, Indian police forces have steadily expanded their use of predictive policing systems to identify criminal hotspots; Delhi and Uttar Pradesh are among the jurisdictions that have deployed such systems. Very little research has been done on how predictive policing works in India, and it would be optimistic to the point of foolishness to expect that similar problems of bias will not creep into India's systems. Ultimately, there is no denying that predictive policing has the potential to be a significant tool for keeping society safe, but the premature use of such technology, without ironing out the kinks, can have dire consequences.