On February 10, the “AI Action Summit” opens in Paris. It is expected to gather at least a hundred heads of state and government, including representatives of the United States and China, together with members of civil society and executives of high-tech firms specializing in AI. Among the notable attendees reported by Le Monde are Elon Musk, OpenAI’s Sam Altman, and Google’s Sundar Pichai.

On February 2, the first provisions of the European Union’s AI regulation, the world’s first comprehensive AI legislation, generally known as the EU AI Act, came into force. The rules will be rolled out gradually, amid debate and possible objections over how implementation should proceed.
At the summit, regulators are likely to raise the question of how the regulation affects the businesses of AI developers and users. The final version of the AI regulation was passed by the European Parliament on March 13, 2024, with 523 votes in favor, 46 against, and 49 abstentions. The text had already been agreed with the European Commission and EU member states in December 2023. According to a European Parliament communiqué, the law aims to protect “fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI,” while encouraging innovation and strengthening Europe’s leadership in the field.
The AI Act is a regulation, not a directive, meaning it is a legal act with direct effect throughout the European Union and does not require transposition into national law. This matters for any business developing or embedding AI. The agency Eesti Firma offers related services, including legal and corporate consulting in the markets for crypto assets, artificial intelligence, and other digital assets.
AI Act: Main Bans and Exemptions
Some AI applications are simply forbidden under the AI Act. Among the most significant prohibitions is a ban on mass real-time biometric surveillance. Other AI applications fall into one of several risk categories, ranging from low to high, each carrying different safety and transparency requirements.
As of February 2, EU rules ban AI systems that present “unacceptable risks” to human safety and rights. Exceptions are made on national security grounds.
Concretely, programs that evaluate people’s social behavior and assign them “scores” are banned. Such social scoring systems, used in China to reward or penalize citizens depending on their behavior, are now prohibited in the EU.
AI-driven emotion recognition is likewise banned in workplaces and educational institutions. AI systems that use deceptive techniques to manipulate behavior, particularly that of children, are prohibited. Talking toys that might encourage children to engage in unsafe behavior, for example, are not allowed.
Also prohibited are systems that exploit vulnerable individuals, such as robocalls that could be used to defraud the elderly.
Facial recognition is otherwise largely banned in public spaces, though law enforcement agencies may use it when investigating serious crimes, notably human trafficking and terrorism. That exemption leaves something to be desired, according to Caterina Rodelli, EU policy analyst at the global human rights organization Access Now, who told Euronews: “If there are exceptions to a ban, then it is no longer a ban.”
Risk Assessment Under the EU AI Regulation
Companies developing or using AI must assess the risk levels of their systems and take the measures necessary to ensure compliance with the law. The European Commission insists the regulation is intended not to hamper innovation but to make AI safe and transparent. Accordingly, AI providers must ensure that everyone involved in developing and operating such systems possesses sufficient “AI competencies.”
Failure to comply with the AI Act can be punished with fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher, depending on the violation and the scale of the company’s operations. This exceeds the maximum penalties under the EU’s General Data Protection Regulation, which caps fines at €20 million or 4% of global annual turnover.
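To make the “whichever is higher” rule concrete, here is a minimal sketch comparing the two penalty ceilings; the turnover figure is a hypothetical example, not taken from the article or the law:

```python
# Minimal illustration of the "whichever is higher" penalty caps described above.
# The turnover figure below is hypothetical.

def max_fine(turnover_eur: float, flat_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum possible fine: the higher of a flat cap
    and a percentage of global annual turnover."""
    return max(flat_cap_eur, turnover_eur * turnover_pct)

turnover = 2_000_000_000  # hypothetical: €2 billion in global annual turnover

ai_act_cap = max_fine(turnover, 35_000_000, 0.07)  # AI Act: €35M or 7%
gdpr_cap = max_fine(turnover, 20_000_000, 0.04)    # GDPR: €20M or 4%

print(f"AI Act maximum fine: €{ai_act_cap:,.0f}")  # €140,000,000
print(f"GDPR maximum fine: €{gdpr_cap:,.0f}")      # €80,000,000
```

For large companies the percentage cap dominates: at €2 billion in turnover, the AI Act ceiling would be €140 million against €80 million under the GDPR, while for small firms the flat caps of €35 million and €20 million apply.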
Criticism and Warnings
Tasos Stampelos, head of public policy and EU government engagement at Mozilla, previously told CNBC that while the law is “not perfect,” it is “very necessary.”
“It is important to understand that the AI Act is mainly a product safety legislation,” Stampelos said during a conference hosted by CNBC. “As happened with the safety regulations, the adoption of a law does not mean that the process is over. Several subsequent measures and clarifications will follow.”
“Compliance with the law currently will depend on the standards, guidelines, secondary legislation and derivative tools in general that are in the future going to define what compliance actually means,” he added.
Criticism also comes from the industry association Bitkom: the new law is vaguely worded, it argues, and lawmakers have imposed strict requirements with tight deadlines without having “done their homework.” Companies are left on their own where legal risks are concerned, said Susanne Dehmel, a member of Bitkom’s executive board. While the U.S. invests hundreds of billions in AI and China works on powerful language models, Dehmel is convinced that “Germany and the EU are putting up obstacles for developers.”
European Commission President Ursula von der Leyen and European Central Bank President Christine Lagarde heightened the sense of crisis with a joint blog post warning that Europe’s AI competitiveness was in peril. “While the world is experiencing an AI revolution, the EU risks being left behind,” they wrote.
Different Approaches: The EU vs. the U.S. Under Donald Trump
Meanwhile, in contrast to European regulation, the U.S. government has chosen a very different tack since the start of Donald Trump’s second term in January 2025. His administration has loosened the already light reins, rescinding earlier executive orders and touting billions in investments from the tech sector.
The White House announced the launch of the “Stargate” infrastructure project, with total investment of up to $500 billion over four years. The money will go into data centers and other components of AI infrastructure. Donald Trump underscored that “all of this is happening here, in America” and promised to present a detailed “AI Action Plan” within 180 days.
Analysts note that the primary goal of the new American strategy is to strengthen U.S. dominance in AI. Experts believe that the differing approaches of the EU and the U.S. will have long-term consequences for the global AI race. Today, the U.S. leads in research and investment, driven by tech giants such as Google, Meta, Apple, and OpenAI. China holds the second position, rapidly expanding its AI capabilities. While the U.S. focuses on the development of the IT sector and national interests, Europe sticks to strict regulations and consumer protection.
Thus, the future of global AI governance remains uncertain. While the EU seeks to make AI safe through detailed risk regulation and transparency requirements, the U.S. is pursuing a strategy of minimizing barriers for business. Observers emphasize that the balance struck between safety and innovation will determine AI’s role in the economy and society in the coming years.