By AI Trends Staff
The United Nations Office of the High Commissioner for Human Rights this week called for a moratorium on the sale and use of AI technology that poses human rights risks, including the use of facial recognition software, until adequate safeguards are in place.
Michelle Bachelet, UN High Commissioner for Human Rights
"Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights," stated Michelle Bachelet, the UN High Commissioner for Human Rights, in a press release.
Bachelet's warnings accompany a report released by the UN Human Rights Office analyzing how AI systems affect people's right to privacy, as well as rights to health, education, freedom of movement, and more. The full report, entitled "The right to privacy in the digital age," can be found here.
"Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states," Bachelet stated. "AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online."
Digital rights advocacy groups welcomed the recommendations from the international body. Evan Greer, the director of the nonprofit advocacy group Fight for the Future, stated that the report further proves the "existential threat" posed by this emerging technology, according to an account from ABC News.
"This report echoes the growing consensus among technology and human rights experts around the world: artificial intelligence powered surveillance systems like facial recognition pose an existential threat to the future [of] human liberty," Greer stated. "Like nuclear or biological weapons, technology like this has such an enormous potential for harm that it cannot be effectively regulated, it must be banned."
While the report did not cite specific software, it called for countries to ban any AI applications that "cannot be operated in compliance with international human rights law." More specifically, the report called for a moratorium on the use of remote biometric recognition technologies in public spaces, at least until authorities can demonstrate compliance with privacy and data protection standards and the absence of discrimination and accuracy problems.
The report was also critical of the lack of transparency around the implementation of many AI systems, noting that their reliance on large datasets can result in people's data being collected and analyzed in opaque ways and can lead to faulty or discriminatory decisions, according to the ABC account. The long-term storage of data, and how it could be used in the future, is also unknown and a cause for concern, according to the report.
"Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face," Bachelet stated. "We cannot afford to continue playing catch-up regarding AI, allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact." Bachelet called for immediate action to put "human rights guardrails on the use of AI."
Report Announced in Geneva
Peggy Hicks, Director of Thematic Engagement, UN rights office
Journalists were present at the announcement of the report in Geneva. "This is not about not having AI," stated Peggy Hicks, director of thematic engagement for the UN rights office, in an account in Time. "It's about recognizing that if AI is going to be used in these human rights, very critical, function areas, that it's got to be done the right way. And we simply haven't yet put in place a framework that ensures that happens."
The report also expresses caution about tools that try to deduce people's emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.
"The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty, and to a fair trial," the report states.
The report's recommendations are consistent with concerns raised by many political leaders in Western democracies; European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people's safety or rights.
Western countries have been at the forefront of expressing concerns about the discriminatory use of AI. "If you think about the ways that AI could be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary," stated US Commerce Secretary Gina Raimondo during a virtual conference in June, quoted in the Time account. "We have to make sure we don't let that happen."
At the same conference, Margrethe Vestager, the European Commission's executive vice president for the digital age, suggested some AI uses should be off-limits completely in "democracies like ours." She cited social scoring, which can close off someone's privileges in society, and the "broad, blanket use of remote biometric identification in public space."
Consistency in Cautions Issued Around the World
The report did not single out any countries by name, but AI technologies in some places around the world have caused alarm over human rights in recent years, according to an account in The Washington Post.
The government of China, for example, has been criticized for conducting mass surveillance that uses AI technology in the Xinjiang region, where the Chinese Communist Party has sought to assimilate the mainly Muslim Uyghur ethnic minority group.
The Chinese tech giant Huawei tested AI systems, using facial recognition technology, that would send automated "Uyghur alarms" to police once a camera detected a member of the minority group, The Washington Post reported last year. Huawei responded that the language used to describe the capability had been "completely unacceptable," yet the company had advertised ethnicity-tracking capabilities.
Bachelet of the UN was critical of technology that can enable authorities to systematically identify and track individuals in public spaces, affecting the rights to freedom of expression, peaceful assembly, and movement.
In Myanmar this year, Human Rights Watch criticized the Myanmar military junta's use of a public camera system, provided by Huawei, that used facial and license plate recognition to alert the government to individuals on a "wanted list."
In the US, facial recognition has attracted some local regulation. The city of Portland, Ore., last September passed a broad ban on facial recognition technology, including uses by local police. Amnesty International this spring launched the "Ban the Scan" initiative to prohibit the use of facial recognition by New York City government agencies.
Read the source articles and information in a press release from the UN Human Rights Office; read the report, entitled "The right to privacy in the digital age," here; and see the accounts from ABC News, in Time and in The Washington Post.