OHCHR / ARTIFICIAL INTELLIGENCE AND PRIVACY


STORY: OHCHR / ARTIFICIAL INTELLIGENCE AND PRIVACY
TRT: 04:15
SOURCE: UNTV CH
RESTRICTIONS: NONE
LANGUAGE: ENGLISH / NATS

DATELINE: 15 SEPTEMBER 2021, GENEVA, SWITZERLAND

SHOTLIST:

1. Wide shot, alley of flags, Palais des Nations, Geneva
2. Wide shot, Room 27, press conference
3. SOUNDBITE (English) Peggy Hicks, Director of Thematic Engagement for the Office of the UN High Commissioner for Human Rights (OHCHR):
“The report then goes on to look at how these issues play out in practice by examining how AI is having an impact on human rights in four key sectors where it's being deployed, namely, first, in the law enforcement, national security and criminal justice and border management area, where we see it being used for profiling and suspect identification; biometric technologies such as facial recognition and emotional recognition being used, including remotely and in real time, to identify people, with documented cases of erroneous identification and disproportionate impact on certain groups of minorities.”
4. Wide shot, press conference room
5. SOUNDBITE (English) Peggy Hicks, Director of Thematic Engagement for the Office of the UN High Commissioner for Human Rights (OHCHR):
“The crux of the argument made in the report is simple. Artificial intelligence poses enormous risks for human rights, and despite those implications, it has been designed and deployed across systems critical to our most basic freedoms without proper regulation or oversight. This is not about the risks of AI for human rights in the future; it is about the reality we see today. Without immediate and far-reaching shifts in how we address things like employment and development, the existing harms will multiply at scale and with speed. And the worst part of it is we won't even know the extent of the problem because there is so little transparency around artificial intelligence and its use.”
6. Wide shot, participant presser
7. SOUNDBITE (English) Peggy Hicks, Director of Thematic Engagement for the Office of the UN High Commissioner for Human Rights (OHCHR):
“We call for a moratorium on the sale and use of AI systems that carry a high risk relating to the enjoyment of human rights unless and until adequate safeguards to protect human rights are in place. The High Commissioner will also recommend specifically a moratorium on the use of remote biometric recognition technology in public spaces, given the serious threats to public freedoms associated with such surveillance.”
8. Wide shot, podium and screen
9. SOUNDBITE (English) Peggy Hicks, Director of Thematic Engagement for the Office of the UN High Commissioner for Human Rights (OHCHR):
“Companies have a central role here and they need to step up their human rights due diligence regarding technologies they develop and dramatically increase the transparency regarding the use and sale of AI and to take action to ensure greater diversity within their own workforces working on AI, among a number of other steps.”
10. Wide shot, press conference room
11. SOUNDBITE (English) Peggy Hicks, Director of Thematic Engagement for the Office of the UN High Commissioner for Human Rights (OHCHR):
“As our work makes abundantly clear, AI is already a part of our lives, and there is no time to lose in the fight to ensure that it is designed and deployed in a manner that makes our societies better and more rights respecting, rather than being a tool that enables discrimination, invades our privacy, and undermines our rights.”
12. Wide shot, participants
13. SOUNDBITE (English) Peggy Hicks, Director of Thematic Engagement for the Office of the UN High Commissioner for Human Rights (OHCHR):
“On the first one, about facial recognition and whether there's evidence about its discriminatory impact and inability to, you know, work as effectively with regards to women and to people of color. I mean, I think the evidence is in. And, yes, there absolutely is a problem. The companies themselves, I think, have recognized that. The question is really what's been done about it since we recognized that. And how are we ensuring that this technology, which is not scientifically accurate in that regard, is being adjusted and not being deployed in circumstances where that flaw will result in human rights consequences?”
14. Close up, participant screen
15. SOUNDBITE (English) Peggy Hicks, Director of Thematic Engagement for the Office of the UN High Commissioner for Human Rights (OHCHR):
“And we want to support innovation. There are ways in which innovation with new technologies like this can actually be incredibly beneficial to human rights. We want to emphasize that this is not about, you know, not having AI. It's about recognizing that if AI is going to be used in these human rights, very critical function areas, it's got to be done in the right way. And we simply haven't yet put in place a framework that ensures that happens.”
16. Close up, computer screen, TV screen
17. Wide shot, Press conference room and participants

STORYLINE:

UN High Commissioner for Human Rights Michelle Bachelet stressed the urgent need for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place. She also called for AI applications that cannot be used in compliance with international human rights law to be banned.

Bachelet’s call came as her Office published a report analyzing how AI – including profiling, automated decision-making and other machine-learning technologies – affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

The report highlights the aspects of artificial intelligence that contribute to interference with the right to privacy and other human rights: the reliance of many AI tools on large data sets, including personal data; embedded biases that lead to discrimination; and the use of AI to make inferences about people.

Speaking to reporters today (15 Sep) in Geneva, Peggy Hicks, Director of Thematic Engagement at the UN Human Rights Office (OHCHR), said, “The report then goes on to look at how these issues play out in practice by examining how AI is having an impact on human rights in four key sectors where it's being deployed, namely, first, in the law enforcement, national security and criminal justice and border management area, where we see it being used for profiling and suspect identification; biometric technologies such as facial recognition and emotional recognition being used, including remotely and in real time, to identify people, with documented cases of erroneous identification and disproportionate impact on certain groups of minorities.”

The report looks at how States and businesses alike have often rushed to incorporate AI applications, failing to carry out due diligence. There have already been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition.

“The crux of the argument made in the report is simple. Artificial intelligence poses enormous risks for human rights, and despite those implications, it has been designed and deployed across systems critical to our most basic freedoms without proper regulation or oversight. This is not about the risks of AI for human rights in the future; it is about the reality we see today. Without immediate and far-reaching shifts in how we address things like employment and development, the existing harms will multiply at scale and with speed. And the worst part of it is we won't even know the extent of the problem because there is so little transparency around artificial intelligence and its use,” Peggy Hicks said.

Peggy Hicks outlined key actions that Governments need to take: “We call for a moratorium on the sale and use of AI systems that carry a high risk relating to the enjoyment of human rights unless and until adequate safeguards to protect human rights are in place. The High Commissioner will also recommend specifically a moratorium on the use of remote biometric recognition technology in public spaces, given the serious threats to public freedoms associated with such surveillance.”

The report also highlights the responsibilities of businesses with regard to artificial intelligence.

“Companies have a central role here and they need to step up their human rights due diligence regarding technologies they develop and dramatically increase the transparency regarding the use and sale of AI and to take action to ensure greater diversity within their own workforces working on AI, among a number of other steps,” Peggy Hicks highlighted.

“As our work makes abundantly clear, AI is already a part of our lives, and there is no time to lose in the fight to ensure that it is designed and deployed in a manner that makes our societies better and more rights respecting, rather than being a tool that enables discrimination, invades our privacy, and undermines our rights,” Peggy Hicks said.

Peggy Hicks stressed that there are clearly serious issues with facial recognition technologies, which are increasingly used to identify people in real time and from a distance, potentially allowing unlimited tracking of individuals.

“On the first one, about facial recognition and whether there's evidence about its discriminatory impact and inability to, you know, work as effectively with regards to women and to people of color. I mean, I think the evidence is in. And, yes, there absolutely is a problem. The companies themselves, I think, have recognized that. The question is really what's been done about it since we recognized that. And how are we ensuring that this technology, which is not scientifically accurate in that regard, is being adjusted and not being deployed in circumstances where that flaw will result in human rights consequences?”

The report aims to inform the discussion about human rights and artificial intelligence and is not an argument against the innovation new technologies can bring, Peggy Hicks said:

“We want to support innovation. There are ways in which innovation with new technologies like this can actually be incredibly beneficial to human rights. We want to emphasize that this is not about, you know, not having AI. It's about recognizing that if AI is going to be used in these human rights, very critical function areas, it's got to be done in the right way. And we simply haven't yet put in place a framework that ensures that happens.”

Creator: UNTV CH
Alternate Title: unifeed210915f
Asset ID: 2654242