The European Parliament has adopted a new law, the AI Act, which sets out requirements and rules that developers of AI systems must follow. It is intended to ensure that AI used in the EU is safe, transparent, environmentally sound and ethical.
For example, the law prohibits AI systems that pose an unacceptable level of risk to human safety. These include systems that manipulate people, whether through social engineering or other techniques, as well as systems used for social scoring.
In addition, AI is prohibited for the following uses:
- Real-time remote biometric identification in public places. An exception is made for the police, and only with a court's authorization.
- Categorizing people based on gender, race, religion or other sensitive criteria.
- Predictive policing systems that work on the basis of profiling.
- Emotion recognition by law enforcement and border services, and in educational institutions and workplaces.
- Scraping biometric data from social networks or CCTV footage to build databases.
The law also introduces the concept of "high-risk" AI systems, which can potentially harm people; social networks' recommendation algorithms fall into this category. Their developers will have to ensure high-quality training data and register their systems in a European database.
Developers and vendors of chatbots must notify users that they are interacting with an AI rather than a human operator. Generative neural networks are subject to a content-labeling requirement: it must be explicitly indicated when an image or text was generated by AI.