EU got itself an AI Act; US – only a set of safeguards


The European Artificial Intelligence Act (AI Act) entered into force on August 1st, 2024. Its declared purpose: to ensure AI developed and used in the EU is trustworthy, with safeguards to protect people’s fundamental rights, and to establish a harmonized internal market for AI in the EU.

The legislation categorizes AI-related risks from minimal to unacceptable. Under the EU AI Act, AI systems that pose a clear threat to fundamental rights will be banned. Examples include systems or apps that manipulate human behavior to circumvent users’ free will, systems that enable ‘social scoring’ by governments or companies, applications of predictive policing, and certain uses of biometric systems (e.g. emotion recognition in the workplace, some systems for categorizing people, or real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, with limited exceptions).

The AI Act also introduces rules for so-called general-purpose AI models: highly capable AI models designed to perform a wide variety of tasks, such as generating human-like text.

EU countries have a year from today to designate competent national authorities to oversee the application of the law. The EU Commission’s AI Office will be the key implementation body for this legislation at EU level and will enforce the rules for general-purpose AI models.

Three advisory bodies will support the implementation of the rules: the European Artificial Intelligence Board (which ensures uniform application of the AI Act across EU states and serves as the main body for cooperation between the Commission and the Member States), a scientific panel of independent experts (providing technical advice and input on enforcement, with the ability to alert the AI Office to risks associated with general-purpose AI models) and an advisory forum (representing a diverse set of stakeholders).

Implementing AI Act’s regulations is mandatory and companies risk fines of up to 7% of the global annual turnover for violations of banned AI applications, up to 3% for violations of other obligations and up to 1.5% for supplying incorrect information.

Most rules in the AI Act start to apply on August 2nd, 2026. The prohibitions on AI systems deemed to present an unacceptable risk will already apply after six months, while the rules for general-purpose AI models will apply after 12 months.

The transitional period will be governed by an AI Pact initiated by the EU Commission, which invites AI developers to voluntarily adopt key obligations of the AI Act ahead of the legal deadlines. The Commission is also working on guidelines that define and detail how the AI Act should be implemented, and is facilitating co-regulatory instruments such as standards and codes of practice. It has opened a call for expressions of interest to participate in drawing up the first general-purpose AI Code of Practice, as well as a multi-stakeholder consultation giving all stakeholders the opportunity to have their say on the first Code of Practice under the AI Act.

Meanwhile, in the US

The creation of the AI Safety Institute Consortium (AISIC) was announced in February. Operating under the Department of Commerce’s National Institute of Standards and Technology (NIST), the consortium brings together a broad roster of companies.

US companies are expected to abide by a set of safeguards, which include testing AI systems for security flaws and sharing the results of those tests with the U.S. government, developing mechanisms that let users know when content is AI-generated, and developing standards and tools to ensure AI systems are safe. The safeguards are voluntary and not enforceable, however, meaning companies face no consequences for failing to follow them.
