E.U. proposes rules for high-risk artificial intelligence uses

The proposals also include a prohibition in principle on “remote biometric identification,” such as the use of live facial recognition on crowds of people in public places.
European Executive Vice-President Margrethe Vestager speaks at a media conference on the EU approach to Artificial Intelligence following a weekly meeting of the EU Commission in Brussels, Belgium, April 21, 2021. Olivier Hoslet / Reuters

European Union officials unveiled proposals Wednesday for reining in high-risk uses of artificial intelligence such as live facial scanning that could threaten people’s safety or rights.

The draft regulations from the E.U.’s executive commission include rules on the use of the rapidly expanding technology in activities such as choosing school, job or loan applicants. They also would ban artificial intelligence outright in a few situations, such as “social scoring” and systems used to manipulate human behavior.

The proposals are the 27-nation bloc’s latest move to maintain its role as the world’s standard-bearer for technology regulation. E.U. officials say they are taking a “risk-based approach” as they try to balance the need to protect rights such as data privacy against the need to encourage innovation.

“With these landmark rules, the E.U. is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the E.U. remains competitive along the way.”

The proposals also include a prohibition in principle on “remote biometric identification,” such as the use of live facial recognition on crowds of people in public places, with exceptions only for narrowly defined law enforcement purposes such as searching for a missing child or a wanted person.

The draft regulations say chatbots and deepfakes should be labeled so people know they are interacting with a machine.