Europe Regulates High-Risk AI

The European Commission has unveiled its plan to strictly regulate artificial intelligence (AI), distinguishing itself from more freewheeling approaches to the technology in the United States and China.

The commission will create new rules – including a ban on “black boxes” that humans cannot interpret – to govern high-risk uses of AI, such as medical devices and self-driving cars. Although the regulations would be broader and stricter than any previous EU rules, European Commission President Ursula von der Leyen said at a press conference announcing the plan that its goal was to promote “trust, not fear.” The plan also includes measures to update the European Union’s AI strategy and pour billions into R&D over the next decade.

These proposals are not final: Over the next 12 weeks, experts, interest groups, and the public can weigh in on the plan before concrete rules are introduced. Any final regulation will also require the approval of the European Parliament, which is likely to happen this year.

Europe is taking a more cautious approach to AI than the United States and China, where policymakers have been reluctant to place limits on the technology as they compete in AI. But EU officials hope the rules will help Europe compete by winning consumer trust, thereby driving wider adoption of AI.

“The EU is trying to leverage its biggest strength, which is a solid and comprehensive regulatory framework,” said Andrea Renda, a member of the commission’s independent AI advisory group and an AI policy researcher at the Centre for European Policy Studies. Eleonore Pauwels, an AI researcher at the Global Center on Cooperative Security, says the rules are a good idea. She argues that societies can be harmed if policymakers do not offer alternatives to the “surveillance capitalism” of the United States and the “digital dictatorships” being built in China.

The commission wants binding rules for “high-risk” uses of AI in sectors such as health care, transportation, and criminal justice. The criteria for assessing risk include whether a person could be injured – by a self-driving car or a medical treatment, for example – and whether a person has little say in the effects of a machine’s decision, as when AI is used in hiring or surveillance.

For high-risk uses, the commission wants to prohibit inscrutable “black boxes” in favor of human oversight. The rules would also govern the large data sets used to train AI systems, ensuring that the data are legally obtained, traceable to their source, and broad enough to train the system well. An AI system must be technically robust and accurate to be trustworthy, commission digital chief Margrethe Vestager said at the press conference.

The rules would also establish who is responsible for an AI system’s actions – such as the company that uses it or the company that designed it. High-risk applications would have to be shown to comply with the rules before being deployed in the European Union.

The commission also plans to offer a “trustworthy AI” certification to encourage voluntary compliance beyond high-risk uses. Certified programs later found to have violated the rules could face fines.

The commission also said it would “launch a broader European debate” on facial recognition systems, which can identify people in a crowd without their consent. Although EU countries such as Germany have announced plans to deploy such systems, officials say they often violate EU privacy laws, including special rules for police work.

Pauwels, a former commission official, says the AI industry has so far demonstrated a “pervasive lack of normative vision.” But Vestager notes that 350 businesses have expressed a willingness to abide by the ethical principles drawn up by the commission’s AI advisory group.

The new AI strategy isn’t only about regulation. The commission will draw up an “action plan” for integrating AI into public services such as transport and health care, and will update its 2018 AI development strategy, which put €1.5 billion into research. The commission is also calling for more R&D investment, including AI “excellence and testing centers” and a new public-private partnership for industrial AI that could invest billions.
