A human face has an astonishing variety of features that help us recognise, read, and understand others through a constant flow of intentional and unintentional signals. This had been one of the unique faculties separating man from machine, until now. The dystopia of mass surveillance in George Orwell's 1984,[i] where Big Brother is always watching and there is no place to hide, is now closer than ever. With the advent of Facial Recognition Technology (hereinafter referred to as "FRT"), which uses your face as your biometric identifier, questions of machine-human interaction, mass surveillance, and censorship have come to the fore once again.
The use of FRT has faced strong resistance globally. Several cities in the US, such as San Francisco, Somerville, Oakland, San Diego, Boston, and Portland, have either banned the technology outright or barred its use by government bodies, particularly the police.[ii] In India, the absence of a law governing the use of such technologies has created a public outcry, with civil society organisations demanding a ban on the use of FRT. Currently, 16 FRT systems are in use across India by various Central and State governments for surveillance, security, and identity verification, and 17 more are being installed by various government ministries.[iii]
FRT is a popular biometric benchmark because it is simple to set up and operate and requires no physical contact with the end-user. Furthermore, the verification and identification processes of face detection and face matching are quick. It is employed in three major industries: security (by both law enforcement agencies and private organisations), health, and banking & retail. In security, it is used in investigations in the interest of public safety. In the health industry, it is used to track a patient's drug consumption and uncover hereditary diseases to aid pain-management methods. Finally, in banking & retail, facial biometrics are used in KYC (Know Your Customer) platforms to build user-friendly mobile applications and to make purchases in retail stores.
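The two operating modes underlying these uses, verification (a 1:1 check, as in a KYC login) and identification (a 1:N search, as in a security watchlist), can be sketched in a few lines. This is a minimal toy illustration with hand-made three-dimensional "embeddings" and an invented threshold; real systems derive high-dimensional faceprints from a neural network.

```python
import math

# Hypothetical enrolled faceprints, reduced to 3 dimensions for illustration.
enrolled = {
    "alice": [0.10, 0.80, 0.30],
    "bob":   [0.90, 0.20, 0.50],
}

def distance(a, b):
    # Euclidean distance between two face embeddings.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, claimed_id, threshold=0.3):
    # 1:1 verification: does the probe match the claimed identity?
    return distance(probe, enrolled[claimed_id]) <= threshold

def identify(probe, threshold=0.3):
    # 1:N identification: closest enrolled identity within the threshold, else None.
    best_id = min(enrolled, key=lambda i: distance(probe, enrolled[i]))
    return best_id if distance(probe, enrolled[best_id]) <= threshold else None

probe = [0.12, 0.78, 0.33]     # a new capture resembling "alice"
print(verify(probe, "alice"))  # True
print(identify(probe))         # alice
```

Both modes reduce to the same distance-and-threshold comparison; the 1:N search is simply the 1:1 check repeated over every stored record, which is why identification at scale raises the surveillance concerns discussed below.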
Although many governments and private actors have rapidly adopted FRT systems in recent years, India has no clear rules or regulations to oversee the use of this highly intrusive technology.[iv] This poses a significant threat to the fundamental rights to privacy and to freedom of speech and expression. The law neither specifies the scope of, or limitations on, the use of such technology, nor adheres to the minimal privacy-infringement threshold established in the landmark privacy case of Justice K.S. Puttaswamy v. Union of India.[v]
In this article, we shall discuss why activists across the globe are opting to ban FRT rather than tailor a regulatory framework for it. We shall also examine whether a regulatory law or an outright ban is the better way to limit the deployment of this opaque technology.
Analysing the root cause behind the demand for a ban
FRTs have faced severe global backlash. The demand is that FRT should not be used, developed, produced, or sold for mass surveillance purposes by the police or other government agencies, and that exports of such systems should be prohibited. Some of the reasons for this disapproval are as follows:
a. Surveillance without knowledge or consent:
The first challenge FRT poses is that the citizens of a nation can be continuously observed without their knowledge, consent, or participation. The installation of cameras and the constant surveillance that follows is likely to cause more harm than benefit: surveillance during a demonstration, for example, can hamper one's right to protest.
b. Unreasonable differentiation:
The technology exacerbates systemic racism by disproportionately affecting people of colour, who are already discriminated against and have their human rights violated by law enforcement officers. Facial recognition systems are also more likely to misidentify black persons, particularly black women.[vi] Misuse of these systems may therefore lead to the wrongful targeting of racial minorities.
c. Dubious Precision & Accuracy:
Because FRT generates a faceprint and cannot fully account for changes in illumination, pose, ageing, or expression, it is bound to make mistakes and give incorrect responses. It raises the risk of a "false positive," in which a person is identified as someone they are not, and of a "false negative,"[vii] in which the system fails to recognise a person as themselves. Such misidentifications can lead to legal action being taken against an innocent person.
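The false positive / false negative trade-off comes down to where the match threshold is set. The sketch below uses invented similarity scores to show that no single threshold avoids both errors once a genuine score, degraded by poor lighting, say, falls close to an impostor's score.

```python
# Hypothetical similarity scores (0 = no match, 1 = identical match).
# These numbers are invented for illustration, not from any real system.
genuine_score = 0.62   # same person, but poor lighting lowers the score
impostor_score = 0.58  # different person who happens to look similar

def matches(score, threshold):
    # The system declares a match when the score clears the threshold.
    return score >= threshold

# A strict threshold rejects the genuine user: a "false negative".
print(matches(genuine_score, threshold=0.70))   # False
# A lenient threshold accepts the impostor: a "false positive".
print(matches(impostor_score, threshold=0.50))  # True
```

Any threshold between the two scores separates them correctly, but in practice the score distributions of genuine users and impostors overlap, so tuning the threshold only trades one error rate against the other.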
d. Function Creep:
Function creep in FRT occurs when the technology is used for a purpose other than the one it was designed for, such as when information collected for security, surveillance, or investigation is later put to a different use. If the function is quietly broadened on the back end, how is one to know what the technology is being used for, or how it is regulated? For example, the sanctioned use of these systems to monitor the movement of a criminal suspect may be extended to surveilling the leaders and members of opposition political parties.
e. Mass Surveillance:
FRT could lead to an over-policing problem. It can also be used for the mass surveillance of people, with dissenters, political or public rallies, and any gathering subjected to severe scrutiny. This may have a chilling effect on an individual's freedom of speech, expression, protest, and movement. For instance, tribal community movements may be monitored, hampering their fundamental rights.
f. Private Companies’ Response:
Companies including Amazon, IBM, and Microsoft[viii] have opted not to offer these technologies to law enforcement bodies, specifically the police. They began with a one-year moratorium to reassess the situation and then declared an indefinite moratorium on the sale of FRT to federal police. Their position was that, given the lack of regulation of government use of this technology and the absence of ethical safeguards built into it, FRT may be used in invasive, inaccurate, and uncontrolled ways. Placed in the hands of law enforcement agencies without regulation or oversight, FRT becomes a form of surveillance that fosters tyrannical societies.[ix] It will automate racial profiling and exacerbate existing inequities in our criminal justice system.
The adoption of new technology has long been beset by challenges and questions. The concerns outlined above, however, rest on the possible exploitation of a young technology that still has great potential to grow and improve. Moreover, in India, most of these issues arise from the lack of regulation dealing with data protection and cybersecurity: there is no law specifying the use, limitations, exercise, and control of these systems.
Furthermore, at the current stage, the final results of facial recognition systems must be checked through human intervention. Such active engagement with the technology can prevent the potential consequences of false results and discriminatory treatment.
Without a legal framework in place, FRT systems are being built and deployed across India, raising many issues: the use of the technology for policing, the remedies available in case of wrongful legal action, and the limits on its investigative use. These concerns, too, could be tackled through tailor-made legislation.
Possible Counter-Reactions from Banning
The argument above paints a dark picture of FRT and its harms; however, the mere possibility that a technology can be abused, misused, or exploited does not mean it should be banned and condemned.[x] It is essential to understand the possible uses of FRT in India before deciding its fate in our jurisdiction.
Major concerns also include the repercussions of a ban: would it be a pause button for reassessing the risks or a permanent moratorium on the technology's development, and would such a ban be a step backwards for public safety, since it might create a policy vacuum?
Firstly, it is essential to understand that FRT is a pre-existing and fully functional technology, developed by multinational companies worldwide. Famous examples include Google's FaceNet, Facebook's DeepFace, Megvii's Face++, and Amazon's Rekognition.[xi] Beyond these companies, it is also sold by smaller firms and freelancers at a very nominal cost, and small-scale database designs and FRT code are available on open-source platforms such as GitHub.[xii] The technology can therefore already be used by any individual with basic technical knowledge and an internet connection. With a ban, however, its use by anyone, including the government, individuals, companies, and third-party contractors, would become illegal and a crime. This would create an enormous administrative burden, since law enforcement agencies would have to ensure that it is not being used unlawfully, including with the help of a global player whose host country permits the technology.
Secondly, since the Puttaswamy judgement,[xiii] the right to privacy has been recognised as a fundamental right under Article 21[xiv] of the Constitution of India. The Court held that any action interfering with the right to privacy must be sanctioned by law, proportionate to the need for such interference, necessary, and in pursuit of a legitimate aim. Even under these requirements, Aadhaar's facial recognition database, held by the Unique Identification Authority of India (UIDAI), was considered within the Centre's power to reasonably restrict privacy in order to maintain a national record.[xv] India thus already possesses a database that has been deployed before; it is one step away from using facial templates. In the absence of regulation, such technology could be used by any law enforcement agency through a High Court order.[xvi]
Moreover, since there is no legal procedure laying down the steps for using FRT or the limitations attached to it, such systems may currently be deployed without any permission or restriction. This calls for a legal structure that makes the process stricter, more stringent, and a measure of last resort, while specifying the technology's limitations along with due human intervention and a redressal mechanism.
Thirdly, data theft and privacy intrusion are among the major concerns in the debate over a central database of such records. Yet a localised database poses an even higher risk of misuse, as it can be accessed more easily. Regulation of FRT therefore also requires parallel development in Artificial Intelligence (hereinafter referred to as "AI"), machine learning, and cybersecurity. To begin with, centralised offline storage of the database, together with the constant evolution of AI techniques such as DeepFace, deep learning, and skin-print analysis,[xvii] is essential to make FRT efficient and accurate.
A fine balance has to be drawn between the work done by high-level AI, ethical principles, and the evolving conversation on privacy. In an AI-based governance model, operational demands rely on meaningful research and development of the technology in use. The law must grow to accommodate upcoming nuances so that such AI is employed efficiently and under control.[xviii] Hardwired legal principles, along with best practices, will further clarify the meaning of transparency and fairness in this new world of data.
Lastly, to answer the questions we posed initially: yes, if a ban has to be implemented urgently to avoid exploitation in the current absence of law, it must be a pause button, during which we assess the risks and develop a legal framework around the technology. A permanent ban, by contrast, would be a step backwards for public safety, because FRT is an easier and quicker method of identifying individuals, and because a ban would push activity underground, creating greater unregulated access to FRT and posing a more significant threat to privacy.[xix] Finally, a policy vacuum does exist, not only in India but globally. We therefore need a tailor-made regulation that focuses on human rights concerns.
With the Big Brother-like role of the Internet in our lives growing, the world has become closer; individuals communicate faster and better. This has aggravated pre-existing problems, demanding newer and more efficient solutions to keep pace with ever-evolving technology. One such technology is FRT, which is on the rise in India with no safeguards or remedies to protect citizens from the dangers of exclusion, profiling, and surveillance. Without immediate action, such mass surveillance systems will erode democratic liberties and jeopardise rights.
Banning, however, will do no better. It will instead create a hidden, unregulated, unmonitored pathway for criminals to misuse pre-existing databases, while individuals retain uncontrollable access to open-source or low-cost technology with which to interfere with law and order. So the million-dollar question is: do we choose regulation, or do we allow the technology to unfold in its manipulative ways?
The new-age philosophy of tyranny rests on the innumerable challenges that the future of technology may hold. The gripping effects of technology make this a relevant concern now more than ever. It challenges us to ask: are we in the twilight age of technology, or is there more development to come? If the answer is the latter, then we can regulate the use, tailor the limitations, avert the manipulations, keep the technology out of criminal hands, and prevent the harms.
The article can be cited as:
Tannvi, Challenges in the implementation of Facial Recognition Technology, Metacept-Communicating the Law, accessible at: https://metacept.com/challenges-in-the-implementation-of-facial-recognition-technology/
[i] George Orwell, Nineteen Eighty-Four (Houghton Mifflin Harcourt, 1983).
[ii] Kate Conger, Richard Fausset & Serge F. Kovaleski, "San Francisco Bans Facial Recognition Technology", The New York Times, May 14, 2019, https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html (last accessed May 31, 2021).
[iii] "Facial Recognition Systems in India", Panoptic Tracker (Internet Freedom Foundation), https://panoptic.in (last accessed May 31, 2021).
[iv] Prabhjote Gill, "India is ramping up the use of facial recognition to track down individuals without any laws to keep track of how this technology is being used", Business Insider, February 10, 2021, https://www.businessinsider.in/tech/news/what-is-facial-recognition-technology-and-how-india-is-using-it-to-track-down-protestors-and-individuals/articleshow/80782606.cms (last accessed May 31, 2021).
[v] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
[vi] Alex Najibi, "Racial Discrimination in Face Recognition Technology", Harvard University Science in the News (Special Edition: Science Policy & Social Justice), October 24, 2020, https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/ (last accessed May 31, 2021).
[vii] Soibam Rocky Singh, "Facial recognition technology: law yet to catch up", The Hindu (Delhi), December 31, 2020, https://www.thehindu.com/news/cities/Delhi/facial-recognition-technology-law-yet-to-catch-up/article33458380.ece (last accessed May 31, 2021).
[viii] Soibam Rocky Singh, "Facial recognition technology: law yet to catch up", The Hindu (Delhi), December 31, 2020, https://www.thehindu.com/news/cities/Delhi/facial-recognition-technology-law-yet-to-catch-up/article33458380.ece (last accessed May 31, 2021).
[ix] Larry Magid, "IBM, Microsoft And Amazon Not Letting Police Use Their Facial Recognition Technology", Forbes, June 12, 2020, https://www.forbes.com/sites/larrymagid/2020/06/12/ibm-microsoft-and-amazon-not-letting-police-use-their-facial-recognition-technology/ (last accessed May 31, 2021).
[x] David Gargaro, "The pros and cons of facial recognition technology", IT Pro, March 26, 2021, https://www.itpro.com/security/privacy/356882/the-pros-and-cons-of-facial-recognition-technology (last accessed May 31, 2021).
[xi] "Facial recognition: top 7 trends", Thales Group (Identity & Biometric Solutions), May 22, 2021, https://www.thalesgroup.com/en/markets/digital-identity-and-security/government/biometrics/facial-recognition (last accessed May 31, 2021).
[xii] "face_recognition", GitHub, https://github.com/ageitgey/face_recognition (last accessed May 31, 2021).
[xiii] Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1.
[xiv] India Const. art. 21.
[xv] "Face recognition feature set to ensure stronger Aadhaar security; here's more detail", The Times of India, October 15, 2019, https://timesofindia.indiatimes.com/business/faqs/aadhar-faqs/face-recognition-feature-set-to-ensure-stronger-aadhaar-security/articleshow/62518399.cms (last accessed May 31, 2021).
[xvi] Saurabh Trivedi, "Delhi Police using facial recognition system to identify protesters", The Hindu (Delhi), January 21, 2020, https://www.thehindu.com/news/cities/Delhi/delhi-police-using-facial-recognition-system-to-identify-protesters/article30437756.ece (last accessed May 31, 2021).
[xvii] J. Dhamija, T. Choudhury, P. Kumar & Y.S. Rathore, "An Advancement towards Efficient Face Recognition Using Live Video Feed: 'For the Future'", in 2017 3rd International Conference on Computational Intelligence and Networks (CINE), IEEE, 2017, pp. 53-56.
[xviii] Y. Zeng, E. Lu, Y. Sun & R. Tian, "Responsible facial recognition and beyond", arXiv preprint arXiv:1909.12935 (2019).
[xix] C.S. Milligan, "Facial recognition technology, video surveillance, and privacy", 9 S. Cal. Interdisc. L.J. 295 (1999).