
How tech companies are shaping the rules governing AI

(credit: Mina De La O | Getty Images)

In early April, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU’s 500 million citizens trustworthy. The bloc’s commissioner for digital economy and society, Bulgaria’s Mariya Gabriel, called them “a solid foundation based on EU values.”

One of the 52 experts who worked on the guidelines argues that foundation is flawed—thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI.

Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says.

