How Europe is leading the world in the push to regulate AI

London, June 15: Authorities worldwide are racing to rein in artificial intelligence, including in the European Union, where groundbreaking legislation is set to pass a key hurdle Wednesday.

European Parliament lawmakers are due to vote on the proposal, including controversial amendments on facial recognition, as it heads toward passage.

A yearslong effort by Brussels to draw up guardrails for AI has taken on more urgency as rapid advances in chatbots like ChatGPT show the benefits the emerging technology can bring and the new perils it poses.

Here’s a look at the EU’s Artificial Intelligence Act:

The measure, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable.

Riskier applications, such as those used for hiring or technology targeted at children, will face tougher requirements, including being more transparent and using accurate data.

Violations will draw fines of up to 30 million euros ($33 million) or 6% of a company’s annual global revenue, which in the case of tech companies like Google and Microsoft could amount to billions.

It will be up to the EU’s 27 member states to enforce the rules. One of the EU’s main goals is to guard against any AI threats to health and safety and protect fundamental rights and values.

That means some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behaviour.

Also forbidden is AI that exploits vulnerable people, including children, or uses subliminal manipulation that can result in harm, for example, an interactive talking toy that encourages dangerous behaviour.

Predictive policing tools, which crunch data to forecast who will commit crimes, are also out.

Lawmakers beefed up the original proposal from the European Commission, the EU’s executive branch, by widening the ban on remote facial recognition and biometric identification in public. The technology scans passers-by and uses AI to match their faces or other physical traits to a database.

But it faces a last-minute challenge after a centre-right party added an amendment allowing law enforcement exceptions such as finding missing children, identifying suspects involved in serious crimes or preventing terrorist threats.

“We don’t want mass surveillance, we don’t want social scoring, we don’t want predictive policing in the European Union, full stop. That’s what China does, not us,” Dragos Tudorache, a Romanian member of the European Parliament who is co-leading its work on the AI Act, said Tuesday.

AI systems used in categories like employment and education, which would affect the course of a person’s life, face tough requirements such as being transparent with users and taking steps to assess and reduce risks of bias from algorithms.

Most AI systems, such as video games or spam filters, fall into the low- or no-risk category, the commission says.

The original measure barely mentioned chatbots, mainly by requiring them to be labelled so users know they’re interacting with a machine. Negotiators later added provisions to cover general purpose AI like ChatGPT after it exploded in popularity, subjecting that technology to some of the same requirements as high-risk systems.

One key addition is a requirement to thoroughly document any copyright material used to teach AI systems how to generate text, images, video and music that resemble human work. (AP)
