
European Parliament agrees on restrictions on artificial intelligence

22 Jan 2024
#New technologies
Author
Head of Trademark Department / Trademark Attorney Reg. № 1258 / Patent Attorney of the Russian Federation / Eurasian Patent Attorney Reg. № 63

The end of 2023 was marked by a landmark event for the field of artificial intelligence: in December, the European Parliament reached agreement on a law defining the boundaries of AI use. To date, this is the most far-reaching attempt to bring this area under regulatory control.

It is worth noting that the first version of the bill on AI regulation was developed in the European Parliament back in 2021, summing up three years of cooperation between EU legislators and the world's leading experts in the field. That version was presented as a global model for regulating the technology: it was assumed that the general provisions of the law on artificial intelligence would cover not only existing technologies but also any that might appear in the future. Soon afterwards, however, ChatGPT arrived on the world stage and was hailed as a global sensation (for more details, see the article “ChatGPT Artificial Intelligence – a Tool of the Future or a Threat to Copyright”). This technology revealed the bill's “blind spots” and highlighted new challenges. “We will always lag behind the speed of technology development,” one of the members of the European Parliament who participated in preparing the bill remarked in this regard. As a result, it took about two years to finalize the law in light of the new realities.

However, the difficulty of regulating AI lies not only in the speed of technological development but also in the fear of leading states and associations of losing the global race for technological breakthroughs. The dilemma is that, on the one hand, national legislative restrictions can slow the development of AI technologies, which are expected to radically change, no less, the existing system of economic and social relations. On the other hand, the use of AI technologies has already revealed many significant risks: copyright infringement, deepfakes, misuse of personal data, and disruption of the educational and scientific spheres, among others. It is also predicted that the development of the technology could lead to mass unemployment in certain fields and to unfair competition, not only economic but also electoral, including influence on the outcome of political elections and on voter behavior.

Having joined the technological race, the developers of AI systems have faced problems of their own. These primarily affected IT giants such as Microsoft and Google, which possess the financial and technical resources, as well as access to the enormous volumes of data (big data), needed to develop AI technologies. The problems touched not only the moral and ethical side but also the legal sphere, resulting in an avalanche of lawsuits seeking multi-million-dollar damages. This has forced a rethinking of the value of AI technologies in terms of the balance between their advantages and their existing and potential risks.

At the same time, despite statements by IT corporations about scaling back and suspending the release of new technologies, and about voluntary commitments to focus on developing responsible and safe AI, many states have begun taking measures to control the use of AI technologies. These have included both advisory and restrictive measures (Japan, USA, China) and direct targeted bans (for example, the ban on the use of ChatGPT in Italy).

In the legal sphere, the key problem is the gap between the current regulation of intellectual property relations and an objective reality shaped by these technological breakthroughs. Problems in intellectual property law have already affected both its general provisions and the special rules for key groups of objects. In particular, issues related to the patenting of “synthetic” inventions and other industrial property created with the help of AI systems are gaining relevance (for more details, see the article “On the problems of patent protection of inventions created by artificial intelligence”).

The problem in the field of copyright is even more acute, especially with regard to how rights holders can protect “input data”, the materials that technology companies use to train neural networks and generate “synthetic” content (for more details, see the article “On the problem of training generative neural networks on objects protected by copyright”). Social media owners (Reddit and Twitter), news organizations, publishers, writers, artists and other rights holders are bringing both individual and class action lawsuits against companies that develop AI systems.

At the end of December 2023, the American media giant The New York Times filed a lawsuit against the technology companies OpenAI and Microsoft for copyright infringement. According to the plaintiff, the defendants unlawfully used millions of published articles, the copyright in which belongs to the plaintiff, to train chatbots. In addition to seeking actual damages, The New York Times wants to prevent chatbots from further using its content, citing unfair competition in the news industry and the resulting loss of profits due to reduced traffic. OpenAI, in turn, considers the claim groundless and regards its use of publicly available materials as fair use.

Some of the problems associated with the use of “input data” are being resolved through partnership agreements. Technology companies, including OpenAI, are calling on rights holders to collaborate on mutually beneficial opportunities built on the transformative potential of AI technologies and the new revenue models associated with them. It is obvious, however, that in the absence of basic legislative regulation this approach is a half-measure: as a rule, the competitive advantage lies with the technology companies, and negotiating the terms of a deal can drag on for a long time and ultimately fail to produce the desired result. The New York Times and OpenAI, for example, initially tried to agree on cooperation, but the negotiations were cut short by the filing of the lawsuit. Moreover, given the uncertainty of the situation, the “fair use” doctrine applied in the United States does not, in the absence of regulatory guidelines, provide reliable guarantees of protection for any of the interested parties.

Against this background, the European Union's adoption of the law on artificial intelligence appears to be a landmark event, steering the development of revolutionary technologies in a safe and sustainable direction. According to the European Parliament's press office, the law simultaneously protects against high-risk uses of AI, aims to stimulate innovation and make Europe a leader in the field, and sets obligations for AI systems depending on their potential risks and level of impact.

In particular, the law establishes a number of prohibitions on the development and use of AI technologies, including:

  • a ban on the use of biometric systems to categorize people based on political, religious or philosophical beliefs, sexual orientation or race;
  • a ban on the untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases (except for security purposes, such as preventing terrorist threats);
  • a ban on AI systems that manipulate human behavior, exploit people's vulnerabilities related to age, disability, or social or economic status, or recognize emotions in the workplace and in educational institutions (except for medical or safety reasons, e.g. monitoring pilot fatigue).

At the same time, companies developing AI that poses a high potential risk to people and society (for example, technology capable of causing harm in hiring or education) will have to carry out preliminary conformity assessments of their AI models, assess and mitigate systemic risks, conduct adversarial testing, and report to the regulator.

General-purpose AI systems (including ChatGPT) and the models they are based on are subject to transparency requirements, which include:

  • compiling technical documentation;
  • compliance with EU copyright law;
  • publishing detailed summaries of the content (“input data”) used to train the AI system.

In addition, the law provides for:

  • certain measures to support small and medium-sized businesses involved in the development of AI systems;
  • penalties for failure to comply with regulatory requirements, the amount of which will be determined based on, among other things, the nature of the violation and the size of the offending company.

European parliamentarians are calling the AI law a deal on comprehensive rules for trustworthy AI that will have a significant impact on the digital future of the European Union. Experts in the field note, however, that beyond formal approval, the new law still has a long and thorny road ahead.
