In 2009, I became extremely concerned with the concept of Unique Identity for various reasons, and connected with many like-minded, highly educated people who shared those concerns.
On 18th May 2010, I started this Blog to capture anything and everything I came across on the topic. This blog, with its million hits, is a testament to my concerns about loss of privacy, fear of the ID being misused, and the possible criminal activity it could lead to.
In 2017 the Supreme Court of India gave its verdict after one of the longest hearings on any issue. I did my bit and appealed to the Supreme Court Judges too, through an online petition.
In 2019 the Aadhaar Legislation was revised and passed by both houses of the Parliament of India, making it legal. I am no Legal Eagle, so my opinion carries no weight except with people opposed to the very concept.
Since 2019, this Blog simply captures a daily list of articles published on anything to do with Aadhaar, as obtained from daily Google searches, and nothing more. I cannot burn the midnight oil any longer.
"In Matters of Conscience, the Law of Majority has no place"- Mahatma Gandhi
Ram Krishnaswamy
Sydney, Australia.

Aadhaar

The UIDAI has taken two successive governments in India and the entire world for a ride. It identifies nothing. It is not unique. The entire UID data has never been verified and audited. The UID cannot be used for governance, financial databases or anything. Its use is the biggest threat to national security since independence. – Anupam Saraph, 2018

When I opposed Aadhaar in 2010, I was called a BJP stooge. In 2016 I am still opposing Aadhaar for the same reasons, and I am told I am a Congress die-hard. No one wants to see why I oppose Aadhaar, as it is too difficult. Plus, Aadhaar is FREE, so why not get one? – Ram Krishnaswamy

First they ignore you, then they laugh at you, then they fight you, then you win. – Mahatma Gandhi

In matters of conscience, the law of the majority has no place. – Mahatma Gandhi

“The invasion of privacy is of no consequence because privacy is not a fundamental right and has no meaning under Article 21. The right to privacy is not guaranteed under the Constitution, because privacy is not a fundamental right.” (Article 21 of the Indian Constitution refers to the right to life and liberty.) – Attorney General Mukul Rohatgi

“There is merit in the complaints. You are unwittingly allowing snooping, harassment and commercial exploitation. The information about an individual obtained by the UIDAI while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a court for the purpose of criminal investigation.” – A three-judge bench headed by Justice J. Chelameswar, in an interim order.

Legal scholar Usha Ramanathan describes UID as an inverse of sunshine laws like the Right to Information. While the RTI makes the state transparent to the citizen, the UID does the inverse: it makes the citizen transparent to the state, she says.

Good idea gone bad
I have written earlier that UID/Aadhaar was a poorly designed, unreliable and expensive solution to the really good idea of providing national identification for over a billion Indians. My petition contends that UID in its current form violates the right to privacy of a citizen, guaranteed under Article 21 of the Constitution. This is because sensitive biometric and demographic information of citizens are with enrolment agencies, registrars and sub-registrars who have no legal liability for any misuse of this data. This petition has opened up the larger discussion on privacy rights for Indians. The current Article 21 interpretation by the Supreme Court was done decades ago, before the advent of internet and today’s technology and all the new privacy challenges that have arisen as a consequence.

Rajeev Chandrasekhar, MP Rajya Sabha

“What is Aadhaar? There is enormous confusion. That Aadhaar will identify people who are entitled for subsidy. No. Aadhaar doesn’t determine who is eligible and who isn’t,” Jairam Ramesh

But Aadhaar has been mythologised during the previous government by its creators into some technology super force that will transform governance in a miraculous manner. I even read an article recently that compared Aadhaar to some revolution and quoted a 1930s historian, Will Durant. – Rajeev Chandrasekhar, Rajya Sabha MP

“I know you will say that it is not mandatory. But, it is compulsorily mandatorily voluntary.” – Jairam Ramesh, Rajya Sabha, April 2017.

August 24, 2017: The nine-judge Constitution Bench rules that the right to privacy is “intrinsic to life and liberty” and is inherently protected under the various fundamental freedoms enshrined under Part III of the Indian Constitution.

"Never doubt that a small group of thoughtful, committed citizens can change the World; indeed it's the only thing that ever has" – attributed to Margaret Mead

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” -Edward Snowden

In the Supreme Court, Meenakshi Arora, one of the senior counsel in the case, compared it to living under a general, perpetual, nation-wide criminal warrant.

I had never thought of it that way, but living in the Aadhaar universe is like living in a prison. All of us are treated like criminals, with barely any rights or recourse, and gatekeepers have absolute power over you and your life.

Announcing the launch of the #BreakAadhaarChains campaign, culminating with events in multiple cities on 12th Jan. This is the last opportunity to make your voice heard before the Supreme Court hearings start on 17th Jan 2018. In collaboration with @no2uid and @rozi_roti.

UIDAI's security seems to be founded on four time-tested pillars of security idiocy:

1) Denial

2) Issue fiats and point fingers

3) Shoot messenger

4) Bury head in sand.

God Save India

Wednesday, May 14, 2014

5528 - Facial recognition: is the technology taking away your identity? - The Guardian



Facial recognition technology is being used by companies such as Tesco, Google and Facebook, and it has huge potential for security. Concerned? It may be too late to opt out…

Luke Dormehl 
The Observer, Sunday 4 May 2014


Facial recognition data points: 'While facial recognition algorithms may be neutral themselves, the databases they are tied to are anything but.'

This summer, Facebook will present a paper at a computer vision conference revealing how it has created a tool almost as accurate as the human brain when it comes to saying whether two photographs show the same person – regardless of changes in lighting and camera angles. A human being will get the answer correct 97.53% of the time; Facebook's new technology scores an impressive 97.25%. "We closely approach human performance," says Yaniv Taigman, a member of its AI team.
Since the ability to recognise faces has long been a benchmark for artificial intelligence, developments such as Facebook's "DeepFace" technology (yes, that's what it calls it) raise big questions about the power of today's facial recognition tools and what these mean for the future.
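The verification task Facebook's numbers describe, deciding whether two photographs show the same person, is typically done by mapping each face to a numeric embedding and comparing the two vectors against a similarity threshold. A minimal sketch of that comparison step; the embeddings and the threshold below are toy values for illustration, not DeepFace's:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.8):
    """Declare a match when the two face embeddings are close enough."""
    return cosine_similarity(emb1, emb2) >= threshold

# Toy embeddings: two photos of the same person vs. a stranger.
alice_photo1 = [0.9, 0.1, 0.3]
alice_photo2 = [0.85, 0.15, 0.28]
stranger = [0.1, 0.9, 0.2]

print(same_person(alice_photo1, alice_photo2))  # True
print(same_person(alice_photo1, stranger))      # False
```

The threshold is the whole game: set it low and strangers match; set it high and the same face under different lighting fails, which is why the last fractions of a percent of accuracy are so hard-won.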

Facebook is not the only tech company interested in facial recognition. A patent published by Apple in March shows how the Cupertino company has investigated the possibility of using facial recognition as a security measure for unlocking its devices – identifying yourself to your iPhone could one day be as easy as snapping a quick selfie.

Google has also invested heavily in the field. Much of Google's interest in facial recognition revolves around the possibilities offered by image search, with the search leviathan hoping to find more intelligent ways to sort through the billions of photos that exist online. Since Google, like Facebook, wants to understand its users, it makes perfect sense that the idea of piecing together your life history through public images would be of interest, although users who uploaded images without realising they could be mined in this manner might be less impressed when they end up with social media profiles they never asked for.

Google's deepest dive into facial recognition is its Google Glass headsets. Thanks to the camera built into each device, the headsets would seem to be tailor-made for recognising the people around you. That's exactly what third-party developers thought as well, since almost as soon as the technology was announced, apps such as NameTag began springing up. NameTag's idea was simple: that whenever you start a new conversation with a stranger, your Google Glass headset takes a photo of them and then uses this to check the person's online profile. Whether they share your interest in Werner Herzog films, or happen to be a convicted sex offender, nothing will escape your gaze. "With NameTag, your photo shares you," the app's site reads. "Don't be a stranger."

While tools such as NameTag appeared to be the kind of "killer app" that might make Google Glass, in the end Google agreed not to distribute facial recognition apps on the platform, although some have suggested that is no more than a "symbolic" ban that will erode over time. That is to say, Google may prevent users from installing facial recognition apps per se on Glass but it could well be possible to upload images to sites, such as Facebook, that feature facial recognition. Moreover, there is nothing to prevent a rival headset allowing facial recognition apps – and would Google be able to stop itself from following suit?

Not everyone is happy about this. US senator Al Franken has spoken out against apps that use facial recognition to identify strangers, going so far as to publish an open letter to NameTag's creators. "Unlike other biometric identifiers such as iris scans and fingerprints, facial recognition is designed to operate at a distance, without the knowledge or consent of the person being identified," he wrote. "Individuals cannot reasonably prevent themselves from being identified by cameras that could be anywhere – on a lamp post, attached to an unmanned aerial vehicle or, now, integrated into the eyewear of a stranger."
To proponents of facial recognition, of course, this is precisely the point. Like the club doorman who knows you by name and can spot you in a busy crowd, facial recognition can make everything that bit more personal. In Steven Spielberg's 2002 sci-fi film Minority Report, ads are made more personal by using facial recognition technology. As Tom Cruise's character walks down the street, he is bombarded with customised adverts for everything from new cars to alcoholic drinks. In 2014, a number of companies are already bringing these ideas to (digital) life. Late last year, Tesco announced plans to install video screens at its checkouts around the country. These screens will use inbuilt cameras equipped with facial recognition algorithms to ascertain the age and gender of individual shoppers.

Personal targeted advertising in Spielberg's Minority Report, starring Tom Cruise.

A Californian startup called Emotient meanwhile focuses on the area of facial expression analysis. Incorporated into next-generation TVs by way of a webcam, this technology could potentially be used to monitor viewer engagement levels with whatever entertainment is placed in front of them. The answer to questions such as "how many times did your face register interest during a programme?" can then be fed back to television companies to help them make creative decisions concerning programming.

"It is time for a step-change in advertising," says Lord Sugar's son, Simon, chief executive of Amscreen, which developed the OptimEyes technology behind Tesco's facial recognition screens. "Brands deserve to know not just an estimation of how many eyeballs are viewing their adverts, but who they are, too. Through our Face Detection technology, we want to optimise our advertisers' campaigns, reduce wastage and in turn deliver the type of insight that only online has previously been able to achieve."

Putting aside the question of whether or not brands do "deserve" to know anything and everything about their customers, companies such as OptimEyes and Emotient are far from the creepiest application of facial recognition. In the US, the startup SceneTap (previously known as BarTabbers) has installed cameras in more than 400 bars; they use facial recognition to help bar-hoppers decide which locations to visit on a night out. SceneTap offers real-time information on everything from gender ratios to the average age of patrons. A patent filed by the company even suggests plans to link identified people with their social networking profiles to determine "relationship status, intelligence, education and income".

Although the use of facial recognition tools is still relatively new in the consumer sector, that is where much of the visible innovation will take place over the coming years. "The stakes are lower, so companies are free to take more risks," says Kelly Gates, professor in communication and science studies at UC San Diego and author of Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance. "As a result, there are a lot of experiments in the commercial domain. So what if you identify the wrong person by accident when you're targeting an ad? It's not that big a deal. It happens all the time in other forms of advertising."

Mohammed Atta (right) in the airport surveillance tape from Portland, Maine, 11 September 2001. Photograph: Reuters


There are, naturally, problems, and most relate to privacy concerns. Although privacy is an issue with every form of data mining, at least online the majority of information absorbed by companies is anonymised. Facial recognition, of course, is precisely the opposite. And since facial recognition takes place in public spaces, it is not even necessary for the person surveilled actively to "opt in" to the service.
This, in turn, links to the subject of security, which for many companies and organisations is the ultimate application for facial recognition. Hitherto, most facial recognition research has been funded by governments interested in its potential for streamlining surveillance. That emphasis has only increased over the past decade, provoked by events such as the 9/11 attacks and the 7/7 London bombings in 2005.
One of the most poignant images that came out of 11 September was a grainy frame of surveillance tape footage showing hijacker Mohamed Atta as he passed through an airport metal detector in Portland, Maine. Unlike the horrifying images of the collapse of the Twin Towers, this quieter picture was dramatic because of what it implied: that if only the right technology had been available, that day's tragedy could have been averted.
The idea that data mining algorithms have any place in helping us stop the next 9/11 or 7/7 has been criticised in some quarters. But there is no doubt that facial recognition plays an ever more important part in control and surveillance – both in England and overseas. On 5 April 2011, 41-year-old John Gass received a letter from the Massachusetts Registry of Motor Vehicles informing him he should stop driving, effective immediately. A conscientious driver who had not received so much as a traffic violation in years, Gass was baffled. After several frantic phone calls, followed up by a hearing with registry officials, Gass learned his image had been flagged by a facial recognition algorithm, designed to scan through a database of millions of drivers' licences looking for potential criminal false identities. The algorithm had determined that he looked sufficiently like another Massachusetts driver that foul play was likely involved, so he received the automated letter. The RMV was unsympathetic, claiming it was the accused individual's "burden" to clear their name in the event of any mistakes, arguing that the pros of protecting the public outweighed the inconvenience to the wrongly targeted few.
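Gass's case illustrates the base-rate problem in one-to-many matching: even a tiny per-comparison false-match rate, multiplied across millions of licence photos, all but guarantees that innocent drivers get flagged. The figures below are illustrative assumptions, not the Massachusetts RMV's actual numbers:

```python
# Base-rate sketch: why scanning a huge database flags innocent people.
database_size = 4_500_000   # hypothetical count of licence photos scanned
false_match_rate = 1e-5     # hypothetical probability one comparison wrongly matches

# Expected number of innocent records wrongly matched in a single full scan.
expected_false_matches = database_size * false_match_rate

# Probability that a scan flags at least one innocent person.
p_at_least_one = 1 - (1 - false_match_rate) ** database_size

print(f"expected wrong flags per scan: {expected_false_matches:.0f}")
print(f"chance a scan flags someone innocent: {p_at_least_one:.4f}")
```

Under these assumptions a single sweep produces dozens of false matches, which is why placing the "burden" of correction on the flagged individual, as the RMV did, scales so badly.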
"The dream is for governments to be able to set up networked cameras in public locations, capable of constantly searching through the faces of people who are photographed," says Xiaoou Tang, professor in the department of information engineering at the Chinese University of Hong Kong and one of the world's leading experts in facial recognition. "Once this is done, the images can then be matched to a database looking for suspects or potential terrorists, so that [pre-emptive] arrests can be made."
Perhaps the most notable thing about our faith in facial recognition is what it says regarding belief in the inherent neutrality (or even objectivity) of such systems. "One of the things that troubles me is the idea that machines don't have bias," says Gates. Of course, in a real sense, they might not. Unless a programmer is personally prejudiced and decides deliberately to code that bias into whatever system he or she is working on, it is unlikely that a facial recognition algorithm will exhibit prejudice against certain groups for the reasons that a human might.
But that doesn't mean that prejudice can't occur. It could be, for example, that facial recognition tools show a higher rate of recognition for men than for women and for individuals of non-white origin than for whites. (This has been shown to be true in the past.) A facial recognition system might not target a black male for reasons of overt prejudice in the way that a racist person might, but the fact that it could be more likely to do this than it is to target a white female means that the biased end result is no different.
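The point about uneven recognition rates can be made with simple arithmetic: a system's headline error rate is an average over groups, so it can look acceptable overall while one group bears a much higher error rate. The group sizes and rates below are hypothetical:

```python
# Hypothetical per-group false-match rates: an "acceptable overall" average
# can conceal a five-fold disparity between groups.
groups = {
    "group_A": {"count": 900, "false_match_rate": 0.01},
    "group_B": {"count": 100, "false_match_rate": 0.05},
}

total = sum(g["count"] for g in groups.values())
overall_rate = sum(
    g["count"] * g["false_match_rate"] for g in groups.values()
) / total

print(f"overall false-match rate: {overall_rate:.3f}")  # averages away the disparity
for name, g in groups.items():
    print(f"{name}: {g['false_match_rate']:.2%} of scans wrongly matched")
```

No one coded prejudice into this sketch; the disparity comes entirely from the per-group rates, which in real systems are shaped by the training data and the databases the system is tied to.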
And while facial recognition algorithms may be neutral themselves, the databases they are tied to are anything but. Whether a database concerns criminal suspects or first-class travellers, they are still designed to sort us into categorisable groups.
"These databases are what define our social mobility and our ability to move through the world," says Gates. "Individual identification is always tied to social classification. It's always there for some specific purpose, and that's usually to determine someone's level of access or privilege. The ethical questions in facial recognition relate to those social hierarchies and how they're established."
"I think it worries people because there's something very permanent about it," says Xiaoou Tang. "Even when you're talking about using your face or your fingerprints to unlock a phone, this is a password we can never change. We only have one, and once it's set up it's going to be your password for life."
This isn't to suggest that facial recognition doesn't have its positives. As computer vision continues to get better over the coming months and years, we'll reap the benefits as computer users. The idea that we can take the giant, anonymous world we live in and transform it into a place as knowable as a small town is, at root, a utopian/naive one. "Ultimately we need to ask ourselves whether a world of ubiquitous automated identification is really one that we want to build," says Gates.
It's important to understand the scale of change that is under way, because it is going to dictate what happens. Knowing about facial recognition, and how it is used by both governments and companies, is key to helping us face the future. No pun intended.
Luke Dormehl is the author of The Formula: How Algorithms Solve All Our Problems (And Create More), published by WH Allen £20