In 2009, I became deeply concerned about the concept of Unique Identity, for various reasons, and connected with many like-minded, highly educated people who shared those concerns.
On 18th May 2010, I started this blog to capture anything and everything I came across on the topic. This blog, with its million hits, is a testament to my concerns about the loss of privacy, the fear of the ID being misused, and the criminal activity it could lead to.
In 2017, the Supreme Court of India gave its verdict after one of the longest hearings on any issue. I did my bit and appealed to the Supreme Court judges too, through an online petition.
In 2019, the Aadhaar legislation was revised and passed by both houses of the Parliament of India, making it legal. I am no legal eagle, so my opinion carries no weight except with people opposed to the very concept.
In 2019, this blog simply captures a daily list of articles published on anything to do with Aadhaar, as obtained from daily Google searches, and nothing more. I cannot burn the midnight oil any longer.
"In Matters of Conscience, the Law of Majority has no place"- Mahatma Gandhi
Ram Krishnaswamy
Sydney, Australia.

Aadhaar

The UIDAI has taken two successive governments in India and the entire world for a ride. It identifies nothing. It is not unique. The entire UID data has never been verified and audited. The UID cannot be used for governance, financial databases or anything. Its use is the biggest threat to national security since independence. - Anupam Saraph, 2018

When I opposed Aadhaar in 2010, I was called a BJP stooge. In 2016, I am still opposing Aadhaar for the same reasons, and I am told I am a Congress diehard. No one wants to see why I oppose Aadhaar, as it is too difficult. Plus, Aadhaar is free, so why not get one? - Ram Krishnaswamy

First they ignore you, then they laugh at you, then they fight you, then you win. - Mahatma Gandhi

In matters of conscience, the law of the majority has no place. - Mahatma Gandhi

“The invasion of privacy is of no consequence because privacy is not a fundamental right and has no meaning under Article 21. The right to privacy is not guaranteed under the Constitution, because privacy is not a fundamental right.” (Article 21 of the Indian Constitution refers to the right to life and liberty.) - Attorney General Mukul Rohatgi

“There is merit in the complaints. You are unwittingly allowing snooping, harassment and commercial exploitation. The information about an individual obtained by the UIDAI while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a court for the purpose of criminal investigation.” - A three-judge bench headed by Justice J. Chelameswar, in an interim order

Legal scholar Usha Ramanathan describes UID as an inverse of sunshine laws like the Right to Information. While the RTI makes the state transparent to the citizen, the UID does the inverse: it makes the citizen transparent to the state, she says.

Good idea gone bad
I have written earlier that UID/Aadhaar was a poorly designed, unreliable and expensive solution to the really good idea of providing national identification for over a billion Indians. My petition contends that UID in its current form violates the right to privacy of a citizen, guaranteed under Article 21 of the Constitution. This is because sensitive biometric and demographic information of citizens is held by enrolment agencies, registrars and sub-registrars who have no legal liability for any misuse of this data. This petition has opened up the larger discussion on privacy rights for Indians. The current interpretation of Article 21 by the Supreme Court was made decades ago, before the advent of the internet and today's technology, and before all the new privacy challenges that have arisen as a consequence.

Rajeev Chandrasekhar, MP Rajya Sabha

“What is Aadhaar? There is enormous confusion. That Aadhaar will identify people who are entitled for subsidy. No. Aadhaar doesn’t determine who is eligible and who isn’t,” Jairam Ramesh

But Aadhaar has been mythologised during the previous government by its creators into some technology super force that will transform governance in a miraculous manner. I even read an article recently that compared Aadhaar to some revolution and quoted a 1930s historian, Will Durant. - Rajeev Chandrasekhar, Rajya Sabha MP

“I know you will say that it is not mandatory. But, it is compulsorily mandatorily voluntary.” - Jairam Ramesh, Rajya Sabha, April 2017

August 24, 2017: The nine-judge Constitution Bench rules that the right to privacy is “intrinsic to life and liberty” and is inherently protected under the various fundamental freedoms enshrined under Part III of the Indian Constitution.

"Never doubt that a small group of thoughtful, committed citizens can change the World; indeed it's the only thing that ever has"

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” -Edward Snowden

In the Supreme Court, Meenakshi Arora, one of the senior counsel in the case, compared it to living under a general, perpetual, nation-wide criminal warrant.

I had never thought of it that way, but living in the Aadhaar universe is like living in a prison. All of us are treated like criminals, with barely any rights or recourse, and gatekeepers have absolute power over you and your life.

Announcing the launch of the #BreakAadhaarChains campaign, culminating with events in multiple cities on 12th Jan. This is the last opportunity to make your voice heard before the Supreme Court hearings start on 17th Jan 2018. In collaboration with @no2uid and @rozi_roti.

UIDAI's security seems to be founded on four time-tested pillars of security idiocy:

1) Denial

2) Issue fiats and point fingers

3) Shoot the messenger

4) Bury head in sand.

God Save India

Friday, June 28, 2019

14182 - Facial Recognition: The Reality

  • 27 June 2019
  • Sally Ward-Foxton
AI is making automated facial recognition for mass surveillance a reality - but at what cost?


The UK is one of the most surveilled countries in the world, with closed-circuit television (CCTV) cameras everywhere, from shops to buses to private homes. In the last couple of years, AI techniques such as neural networks have propelled large-scale automated facial recognition. Combined with the UK's huge network of existing CCTV cameras, this could hold vast potential for security applications.



A promotional picture for NEC’s NeoFace facial recognition technology. (Source: NEC)



However, there are worrying implications for privacy. Biometric facial data taken from photos or CCTV footage is particularly troubling because it can be taken without the person's consent and without them knowing about it. This data may be collected indiscriminately, whether the subject is matched with a watch list or not, and it allows people to be located and their movements to be tracked. It's easy to imagine photos of crowds with every face linked to the person's identity, then potentially connected to all kinds of other data about them.

Law Enforcement

If we are going to trade our privacy for increased security, the benefits must outweigh the costs. Is facial recognition technology actually effective in catching criminals?

Police forces around the UK have performed trials of live facial recognition technology in public places such as shopping malls, sports stadiums, and busy streets. The Metropolitan Police have undertaken 10 trials of the technology so far, using dedicated camera equipment along with NEC's NeoFace algorithm, which measures the structure of each face, including the distances between the eyes, nose, mouth, and jaw. According to NEC, this algorithm can tolerate poor-quality images such as compressed surveillance video, with the ability to match images with resolutions as low as 24 pixels between the eyes.
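NEC's NeoFace is proprietary, but the description above maps onto a familiar pattern: derive a scale-invariant feature vector from facial landmark geometry, then compare it against each watch-list entry with a distance threshold. The sketch below is a minimal illustration of that general pattern only; the landmark set, the inter-eye normalisation, and the threshold are illustrative assumptions, not NEC's actual method.

```python
import numpy as np

# Minimal sketch of landmark-distance face matching (illustrative only).
LANDMARKS = ["left_eye", "right_eye", "nose_tip", "mouth_center", "chin"]

def structure_vector(landmarks):
    """Scale-invariant feature vector of pairwise landmark distances."""
    pts = np.array([landmarks[name] for name in LANDMARKS], dtype=float)
    # All pairwise distances between the landmark points.
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    features = dists[np.triu_indices(len(LANDMARKS), k=1)]
    # Normalise by the inter-eye distance so the vector does not depend on
    # image resolution (cf. the "24 pixels between the eyes" figure).
    return features / dists[0, 1]

def watchlist_matches(probe, watchlist, threshold=0.05):
    """Indices of watch-list entries whose facial structure is close to the probe."""
    v = structure_vector(probe)
    return [i for i, entry in enumerate(watchlist)
            if np.linalg.norm(structure_vector(entry) - v) < threshold]
```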

Despite winning four consecutive NIST FIVE (U.S. National Institute of Standards and Technology, Face in Video Evaluation) awards for performance, NeoFace didn’t do terribly well in the field. In the eight deployments between 2016 and 2018, 96% of the Met’s facial recognition matches misidentified innocent members of the public as being on the watch list.
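It helps to be precise about what that 96% figure measures: it is the share of alerts that were wrong, not the error rate per face scanned. Because genuine watch-list faces are extremely rare in a scanned crowd, even a fairly accurate algorithm can produce mostly false alerts (a base-rate effect). A tiny worked example, with made-up counts chosen only to match the quoted percentage:

```python
# Made-up counts chosen only to match the 96% figure quoted above;
# the Met's actual per-deployment numbers are not given in this article.
alerts = 100                 # faces the system flagged as watch-list matches
false_alerts = 96            # alerts that misidentified innocent people
precision = (alerts - false_alerts) / alerts
print(f"{precision:.0%} of alerts were correct")  # -> 4%
```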

A major criticism of the trials has been that the public were not sufficiently alerted to them and were not given the opportunity to opt out. The BBC filmed one passerby in North London covering his face — the police photographed him anyway and gave him a £90 fine for disorderly conduct.



A South Wales Police van with facial recognition cameras. Text on the side of the van reads “Facial recognition fitted” in English and Welsh, but vans used by the Metropolitan Police were less clearly marked. (Source: Liberty)



South Wales Police’s more extensive trials of the technology were marginally more successful: only 91% of matches were misidentifications of innocent members of the public. Images of all passersby, matched or not, were stored by the police for 31 days.

This trial prompted one member of the public to take legal action against South Wales Police, having been photographed while out shopping and while at a peaceful anti-arms protest. He claims that automated facial recognition technology violated his right to privacy, interfered with his right to protest, and breached data protection laws (judgment is due this fall).

Ethical Concerns

A recent panel discussion at the AI conference CogX in London tackled the ethics of automated facial recognition technology. Martha Spurrier, director of human rights organization Liberty, highlighted research findings that surveillance distorts even non-criminal behavior.

“This technology affects all of us. It affects how we all interact with public spaces. It affects whether we decide to go to a protest, whether we decide to go and pray somewhere, or whether we hang out with people who might have been in trouble with the police,” she said.

Well-known issues with bias in facial recognition, whereby women and people of color are more likely to be misidentified by algorithms, are also a cause for concern, Spurrier said.

“[We know that] law enforcement marginalizes and demonizes particular communities. And this technology, whether [or not] it's biased in and of itself, will be used by people where bias is already entrenched. We have, unfortunately, no answer to dealing with police racism. So, we shouldn't equip [the police] with even more invasive tools to be able to enact that racism on an even greater scale,” she said.

Griff Ferris, legal and policy officer for civil liberties and privacy organization Big Brother Watch, said the public should be concerned about trials of live facial recognition like the ones carried out by UK police.

“It is mass surveillance. It biometrically scans every single face within a public space, checking against the watch list… It treats your face like a fingerprint, or DNA swab,” Ferris said, asking the audience how many would be willing to give fingerprints or have DNA swabbed when entering the auditorium (no hands went up).

“That's how it treats a public space, or a concert, or a protest. That's what it's doing and it's doing it without your knowledge and without your consent,” he added.

Proprietary Tech

Shaun Moore, CEO of facial recognition company Trueface, was the sole representative of a facial recognition company on the CogX panel. Trueface started with a smart doorbell in 2013, but now provides facial recognition to banks for secure ATM transactions, to the hospitality industry for VIP recognition, and to law enforcement.

“The [U.S.] government does not need facial recognition to surveil you. They have social media. They have your phone records. They have your credit card statements. So, you really need to think about this more holistically than just surveillance,” said Moore.

While he conceded that testing the technology on people who didn’t consent was “inappropriate,” he said there was no way around it if we want the technology to work.

“That is the only way to collect real information, and without collecting real information you will not know where the issues lie, so it's a bit of a [quandary],” he said. “But without real deployment of this technology with people who don't know that it's watching them, you will not get the information you need to see if it's accurate enough to identify people that you are looking for.”



Trueface’s facial recognition and spoof detection technology was showcased at ISC West in Las Vegas earlier this year. (Source: Trueface)



Moore said Trueface prides itself on enforcing responsible use of its technology, primarily by screening customers carefully and supplying to opt-in applications.

“We sell directly to our customers into their local infrastructure, so it's not a cloud API, you can't just go onto our site and buy our technology,” he said. “We spend at least six months with our customers to know exactly how they're using it, and we work in… primarily opt-in applications.”

Trueface’s technology protects those who haven’t opted in by blurring all faces in real time, Moore said. Imagery of faces is retained only for the moment the system performs its matching, with non-matching faces completely erased from the system. He also pointed out that facial recognition does not mean storing images of faces: the Trueface system makes a mathematical representation of each face in a proprietary way, so if someone were to hack into the database, it wouldn’t be possible to reverse engineer those numbers to reconstruct a particular face.
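Trueface's encoding is proprietary, but the "mathematical representation instead of images" idea the quote describes is commonly implemented as a face embedding: each face crop is mapped to a fixed-length numeric vector, vectors are compared with a similarity threshold, and the image itself need not be retained. The sketch below illustrates that generic approach under those assumptions; embed() is a hypothetical stand-in for a trained model, and the 0.6 threshold is an arbitrary placeholder.

```python
import numpy as np

def embed(face_image):
    """Hypothetical stand-in for a trained face-embedding network that
    maps a face crop to a fixed-length numeric vector."""
    raise NotImplementedError("requires a trained embedding model")

def is_same_person(face_a, face_b, threshold=0.6):
    """Compare two faces by cosine similarity of their embeddings.
    Only the vectors need to be stored; the images can be discarded."""
    a, b = embed(face_a), embed(face_b)
    cosine = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return cosine >= threshold
```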

Meanwhile, other private sector companies are divided on whether the technology in its current form should be used for surveillance applications. Amazon has provided its Rekognition facial recognition tool to law enforcement across the United States, despite an open letter from leading AI researchers calling for them to stop on the grounds that it is discriminatory. Microsoft has said that it doesn’t want its facial recognition technology to be used by law enforcement, with Microsoft President Brad Smith calling for regulation of the technology.

What Does the Law Say?

“What’s really clear is that [UK] policy in this area is a hot mess,” said Hetan Shah, executive director of the Royal Statistical Society. “It is really complicated, and that’s why we don’t have easy answers in this space.”

UK law stipulates the conditions for police collection of fingerprints and DNA, and what happens to that data after it is taken. It does not cover any other form of biometrics, such as facial recognition images and data. While this type of data may be loosely covered under the Data Protection Act or the European General Data Protection Regulation (GDPR), biometric data other than DNA and fingerprints is not named specifically in either law. A High Court ruling in 2012 found that it was unlawful for the UK's Home Office to store facial images of innocent people, but the Home Office deployed only a workaround, offering to delete the images of “unconvicted persons” who applied to have them removed.



Hetan Shah addresses a panel on the ethics of facial recognition technology at CogX 2019. (Source: EE Times)



The Ada Lovelace Institute, of which Shah is deputy chair, was created to investigate ethical issues related to AI technology. The Institute has commissioned a study of public perception of facial recognition technology used in public places.

“One thing we’re talking to stakeholders about is calling for a moratorium on this technology, which we think is in the private sector’s interest as much as citizens’ interest, because if we get the regulation wrong we’ll quickly be in the wrong place,” Shah said, citing a precedent set by the moratorium on genetic profiling in the insurance industry.

“In the same way that we've seen the science of genetically modified foods get set back 10 years because the public in Europe wouldn't support it, we need to find a path which gets social license to operate,” he continued. “The best way to do that is to pause on technology, and to have this discussion in many forums around the country and to think through them overall.”

Worst-Case Scenario

The situation overseas varies widely. San Francisco is one of the first cities to explicitly restrict use of the technology, following a grassroots campaign supported by the American Civil Liberties Union of Northern California. At the other extreme, China is frequently cited as a worst-case scenario. While China is keen to show off sunglasses equipped with facial recognition cameras on its police officers, there has been an international outcry about the use of facial recognition to track its Uighur Muslim community. China reportedly uses facial recognition technology on an enormous country-wide network of CCTV cameras to classify the ethnicity of its citizens, with alarms sounding if too many Uighurs appear in one area, for example. This level of social control, while alarming to Western ideals, is perfectly possible with today's technologies.

While this is far from the situation on British streets, without any form of regulation, it’s easy to imagine the technology invading people’s right to privacy, or at worst, being deliberately misused.

The low accuracy and high rates of misidentification demonstrated in UK trials are particularly concerning, especially for women and people of color, who are disproportionately misidentified by today’s facial recognition algorithms. The argument “if you have nothing to hide, you have nothing to fear,” may no longer hold true.