Coded Bias on Netflix. Image source: Netflix

European Union (EU) lawmakers have laid out their first-ever legal framework on artificial intelligence (AI), which addresses the risks of AI and facial recognition and positions Europe to play a leading role globally.

The lawmakers today proposed new rules and actions to turn Europe into the global hub for trustworthy AI.

The proposals aim to guarantee the safety and fundamental rights of people and businesses while strengthening AI uptake, investment, and innovation across the EU.

The proposed rules include prohibitions on a small number of use-cases considered too dangerous to people’s safety or fundamental rights, such as China-style AI-enabled mass surveillance.

AI systems considered a clear threat to the safety, livelihoods, and rights of people would be banned.

All remote biometric identification systems are regarded as high risk and must be subject to strict requirements.

The live use of such systems in publicly accessible spaces for law enforcement purposes is prohibited in principle.

Furthermore, EU lawmakers require that users of AI systems such as chatbots be made aware that they are interacting with a machine.

Users can then make an informed decision to continue or step back.

While the EU may well be the first to regulate AI, others will likely follow suit.

“On Artificial Intelligence, trust is a must, not a nice to have,” Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.

“Future-proof and innovation-friendly, our rules will intervene, where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

South Africa, AI, and Facial Recognition

South Africa still faces challenges in providing access to affordable broadband for all citizens, especially those living in rural areas.

The delay in releasing the high-demand spectrum has negatively impacted the country’s ability to roll out data-rich services and 5G.

5G is necessary to implement 4IR technology such as blockchain, artificial intelligence, augmented reality, and the Internet of Things.

But this hasn’t stopped ISPs and security firms in South Africa from deploying invasive AI and facial recognition solutions.

For more, read: Smart CCTV Networks Are Driving an AI-Powered Apartheid in South Africa

In one of the world’s most racially divided countries, a company called Vumacam is building a nationwide surveillance network that scrutinises people’s movements for “unusual behaviour.”

“Beggars” and “vagrants” are not welcome in Parkhurst, South Africa, a mostly white middle-class suburb of about 5,000 on the outskirts of Johannesburg’s inner city.

Criminals are on the prowl, residents warn, threatening the neighbourhood’s security.

To combat crime, the locals came up with a solution: place CCTV surveillance cameras everywhere.

Vumacam also deploys invasive cameras in Johannesburg’s southern suburbs.

Also watch: Coded Bias on Netflix

This Terrifying Netflix Doc Reveals the Flaws in Facial Recognition Technology

When MIT Media Lab researcher Joy Buolamwini discovers that facial recognition technology does not see dark-skinned faces accurately, she embarks on a journey to push for the first-ever U.S. legislation against bias in algorithms that impact us all.

Have you ever wondered how your phone’s facial recognition software really works? Well, allow this all-new documentary to blow your mind.

Coded Bias originally premiered back in 2020, but it just became available to stream on Netflix.


Coded Bias is presented by Buolamwini, who has found several flaws in facial recognition technology.

It all started when she realized that her phone’s software worked better when she wore a white mask.

The documentary investigates the algorithms behind facial recognition and examines how the system’s bias affects people of color.

“It turned out these algorithms performed better with a light male face as the benchmark,” she says in the doc.


“It did better on the male faces than the female faces, and lighter face better than darker faces.”
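The audit method behind findings like these is disaggregated evaluation: measuring a system’s accuracy separately for each demographic subgroup rather than reporting a single overall score, which can hide large gaps between groups. Here is a minimal sketch in Python; the group labels and numbers are illustrative examples, not figures from the documentary or Buolamwini’s research:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each subgroup.

    Each record is a dict with a 'group' key (e.g. ('lighter', 'male')),
    a 'predicted' label, and an 'actual' label.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["predicted"] == r["actual"]:
            correct[r["group"]] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy audit data illustrating the kind of skew described in the documentary:
# 99/100 correct for lighter-skinned male faces, 65/100 for darker-skinned
# female faces.
records = (
    [{"group": ("lighter", "male"), "predicted": "male", "actual": "male"}] * 99
    + [{"group": ("lighter", "male"), "predicted": "female", "actual": "male"}] * 1
    + [{"group": ("darker", "female"), "predicted": "female", "actual": "female"}] * 65
    + [{"group": ("darker", "female"), "predicted": "male", "actual": "female"}] * 35
)
print(accuracy_by_group(records))
```

Note that the pooled accuracy over all 200 faces here would be 82%, a number that reveals nothing about the 34-point gap between the two groups; that is why disaggregated reporting matters for bias audits.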

