by Varsha Rao, CCG at NLU Delhi.
The current discourse surrounding the use of facial recognition technology in surveillance operations, prompted by the recent ban in San Francisco, is populated by cost-benefit analyses — the cost to privacy and freedom of assembly versus the benefit of nabbing criminals and identifying missing children.
The problem with such an approach is that it pits individual rights and freedoms against the State's duties without allowing space to address shortcomings. It creates a shallow profile of anyone opposing or favouring the technology. If you value your privacy and oppose real-time surveillance, you must be soft on crime. If you are willing to implement large-scale surveillance to track the thousands of children who go missing every year, you must be indifferent to the prejudices faced by minority communities.
The country is at a point where facial recognition technology is in widespread use by law enforcement officials (such as the Punjab Artificial Intelligence System and Chennai's FaceTagr) and private companies (such as Paytm) without any legislation to regulate its implementation. On one hand, that is an alarming reality to face. On the other, since nothing has been set in stone yet, there is a sizeable opportunity to develop regulations that wholeheartedly attempt to address the spectrum of citizen concerns.

The Obvious Red Flags in India’s Social Fabric

According to a deputy police commissioner in Chennai, to avoid misuse of their facial recognition technology, police personnel have been instructed to refrain from using the application unless they find a person suspicious. In a country steeped in caste-based and communal prejudices, we cannot brush aside the extent to which the concept of a "suspicious person" can be corrupted at the level of the individual police officer. There is ample evidence from the United States to warn us of biases that manifest in the form of tragic police killings.
India continues to live under the shadow of the Criminal Tribes Act of 1871 — the predecessor of the Habitual Offenders Act, 1952 — which linked unfounded allegations of hereditary criminality to certain marginalised communities. Furthermore, a socioeconomic study of prisoners on death row in India (published in 2016) yielded distressing insights into the criminal justice system: roughly 35% of death row inmates belong to OBC communities and 25% to SC/ST communities. Religious minorities were found to comprise 21% of death row prisoners. This is not to say that direct discrimination is necessarily at play on the part of the police force and the criminal justice system, but it is enough to hint at subconscious prejudices and microaggressions permeating India's justice delivery mechanisms.

Lessons from the DNA Technology Bill

Legal expert Usha Ramanathan pointed out in her dissent note on the DNA Technology (Use and Application) Regulation Bill, 2018 that the pro forma used by Government agencies, such as the Centre for DNA Fingerprinting and Diagnostics (CDFD), inquires about the caste of the person whose DNA is being collected. Instances such as this play right into our suspicions about prejudiced profiling.
Important observations can also be drawn from the American Civil Liberties Union's (ACLU) test of Amazon's facial recognition tool "Rekognition" and its aftermath. The test revealed that the software had incorrectly singled out 28 members of the U.S. Congress as people who had been arrested for a crime. The false matches disproportionately comprised people of colour: nearly 40% of Rekognition's false matches, even though people of colour make up only about 20% of Congress.
In response, Amazon argued that the 80% confidence threshold used by the ACLU is appropriate for general use cases (such as identifying celebrities on social media) but not for public safety, and that the lower threshold is what produced the false positives. At Amazon's recommended confidence threshold of 99%, its own tests yielded a misidentification rate of zero. What is interesting to note is that a 2017 Amazon blog post demonstrated the use of Rekognition to identify persons of interest for law enforcement using a confidence threshold of 85% (set by the variable "faceMatchThreshold" in the code). And after the findings of the ACLU study, Amazon shifted its recommended confidence threshold from 95% or higher to 99%.
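For readers who want to see what that dial actually looks like, the snippet below is a minimal sketch of a face search against Rekognition using AWS's boto3 SDK for Python. The search_faces_by_image call and its FaceMatchThreshold parameter are real, but the collection name and image file are hypothetical placeholders; this is an illustration, not the code from Amazon's blog post.

```python
import boto3

# Minimal sketch: search a face collection with Amazon Rekognition.
# The collection ID and image path are illustrative placeholders.
client = boto3.client("rekognition", region_name="us-east-1")

with open("suspect.jpg", "rb") as f:
    image_bytes = f.read()

response = client.search_faces_by_image(
    CollectionId="persons-of-interest",   # hypothetical face collection
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=99,  # 85 in the 2017 demo; 99 after the ACLU study
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], round(match["Similarity"], 2))
```

Note that nothing in the API stops a department from dialling the threshold back down to 80: the safeguard is a single argument in a function call, which is precisely why written guidance alone inspires little confidence.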
When the creators of the software themselves cannot keep their numbers straight, how much confidence can we realistically repose in our law enforcement officials?
The false positives generated by DNA evidence can shed some light on the potential consequences of using new technology such as facial recognition. DNA profiling is a probabilistic and statistical exercise: first, the likelihood that the collected DNA belongs to the suspect is analysed, followed by the likelihood that the sample could belong to someone else in a given population. Unfortunately, when DNA profiling is sold as the be-all and end-all of nabbing criminals, these statistical nuances are left out of the argument. Unless police personnel and judges are trained to fully comprehend the evidence placed before them, incorporating science into the criminal justice system loses much of its value, and any attempt at doing away with wrongful convictions is neutralised.
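To see how the statistics can mislead in practice, consider a back-of-the-envelope calculation. The match probability and population size below are purely illustrative assumptions, not figures from any actual case:

```python
# Purely illustrative numbers: a "1 in a million" random match probability
# sounds conclusive, yet in a large population the match alone proves little.
random_match_prob = 1e-6      # assumed chance an unrelated person matches
population = 10_000_000       # assumed pool of plausible alternative sources

# Expected number of innocent people in the pool who would also match.
expected_innocent_matches = random_match_prob * population  # = 10.0

# Absent other evidence, the true source is just one of roughly eleven
# people (the actual source plus ~10 coincidental matches) in the pool.
p_source_given_match = 1 / (1 + expected_innocent_matches)
print(f"P(true source | match) ≈ {p_source_given_match:.0%}")  # ≈ 9%
```

A figure that sounds like one-in-a-million certainty thus translates, once the base rate is accounted for, into something closer to a one-in-eleven chance. Conflating the two is the classic prosecutor's fallacy, and the same arithmetic applies to face matches drawn from large watchlists.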

The Way Forward

The point of highlighting the potential and actual misuse of facial recognition technology is not to demonise tech companies, hold them accountable for the failures of the State, or ignore the benefits of such technology in cases of missing children and human trafficking. The point, instead, is to create room for improvement and minimise the infringements of emerging technology on fundamental human rights. Developers should not herald the benefits while wilfully ignoring feedback and playing down drawbacks.
If they continue to do so, then they are truly missing the point.
Instead, tech companies need to insist on the establishment of a regulatory framework to govern the use of facial recognition technology on public and commercial premises. If not for the sake of human rights and freedoms, then at least to avoid outright bans on the technology and ensure stability in future policies.
Since India is home to the world's largest biometric database, the much-touted Aadhaar, the country is in a unique position to assume a leadership role in regulating facial recognition technology. The Government cannot pass the buck to tech companies, as it has attempted to do with the problem of "fake news".
Imposing standards of oversight, limiting function creep, exploring issues of privacy and consent, protecting databases from cybersecurity breaches and ensuring the redressal of complaints — all of these fall within the ambit of government control. Stakeholder consultations with tech companies and civil society will ensure a richness of debate and, hopefully, incorporate the voices of the marginalised, who have the most to lose in a surveillance-heavy environment.
The path that India chooses to follow on facial recognition technology must lead firmly in the opposite direction from that of the Chinese government, which has allegedly been deploying such technology to keep tabs on the oppressed Muslim Uighur population. Hopefully, the next Government in power will have the political will to pick up the slack in harmonising effective protections for rights and freedoms with the benefits of emerging technology. But for now, if you happen to spot a camera trained on you in a marketplace such as Chennai's T. Nagar, don't forget to smile and wave.
*
Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.
Cross-posted here with permission from the CCG, NLU Delhi Blog.