Article • 24/01/2024

What does the New ‘EU AI Act’ Mean for AI Companies?

In contrast to the rapid pace of recent advances in artificial intelligence, AI legislation has evolved deliberately rather than by revolutionary leaps, culminating on December 8, 2023, in a groundbreaking consensus between the European Parliament and the Council on the European Union’s Artificial Intelligence Act (‘EU AI Act’).

This landmark achievement makes Europe the first continent to establish explicit rules for AI use. The result of extensive negotiations aimed at shaping the EU’s digital future, the act places a premium on safety, fundamental rights, and the promotion of innovation and investment in AI technologies.

Risk-Based Strategy

The final draft approved in December adopts a risk-centric approach, categorizing AI systems into four groups based on their intended applications: unacceptable risk, high risk, limited risk, and minimal/no risk.
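
As a rough sketch of this tiering (the tier labels and example mappings below are our own illustrations drawn from the use cases discussed in this article, not the act’s legal definitions), the scheme can be pictured as a simple classification:

```python
from enum import Enum

class RiskTier(Enum):
    """Informal labels for the act's four tiers; not legal definitions."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "free use; voluntary codes of conduct encouraged"

# Hypothetical example mappings based on use cases mentioned in this article.
EXAMPLE_USES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "recommender system": RiskTier.MINIMAL,
}

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.name} ({tier.value})")
```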

Banning of Unacceptable-Risk AI Systems:

This legislation bars the introduction, deployment, or use of AI systems that employ subliminal techniques beyond an individual’s consciousness to distort behavior in ways that cause, or may cause, physical or psychological harm. The law also forbids AI systems that exploit vulnerabilities linked to age or to physical or mental disability, where doing so harms individuals within those groups.

Public authorities are strictly prohibited from utilizing AI to evaluate or classify individuals based on social behavior or personal characteristics (‘social scoring’). While biometric identification systems are generally prohibited, exceptions for law enforcement purposes require stringent conditions and are permitted only in specific situations, such as the search for missing children.

Comprehensive Compliance for High-Risk AI:

High-risk AI systems used in critical sectors like infrastructure management, education, employment, law enforcement, border control, and justice administration must adhere to rigorous obligations involving risk mitigation, data governance, documentation, transparency, and cybersecurity. These systems are required to maintain detailed usage records, including the period of each use, reference databases checked, matched input data, and identification of involved personnel.
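
One way to picture this record-keeping obligation is as a structured log entry per use. The field names below are our own sketch of the elements the act lists, not a prescribed schema, and all values are invented:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageRecord:
    """One log entry per use of a high-risk AI system.
    Field names are illustrative; the act specifies the elements
    to record, not a format."""
    start_time: datetime        # period of each use
    end_time: datetime
    reference_database: str     # database checked against the input
    matched_input_data: str     # input data that led to a match
    verifying_person: str       # personnel involved in verifying results

record = UsageRecord(
    start_time=datetime(2024, 1, 24, 9, 0),
    end_time=datetime(2024, 1, 24, 9, 4),
    reference_database="internal-watchlist-v3",   # hypothetical name
    matched_input_data="applicant-4821",          # hypothetical identifier
    verifying_person="compliance.officer@example.com",
)
print(record)
```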

AI companies must provide users with comprehensive information about the system’s ownership, contact details, characteristics, limitations, performance metrics, and potential risks. This includes specifications for input data, changes to the system, human oversight measures, and expected lifetime with maintenance details. The development of such AI systems, especially those using model training, demands strict adherence to guidelines for quality datasets, considering design choices, biases, and specific user characteristics. 

This demand for greater transparency and human oversight aims to enable users to understand and utilize outputs appropriately, with technical solutions required to address vulnerabilities like data poisoning and adversarial examples.
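
The act does not prescribe particular techniques here, so the following is only one illustrative safeguard against data poisoning: fingerprinting an audited training set so that silent tampering is detected before the data is reused. All file and variable names are invented for the example:

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a training-data file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical workflow: record digests when the dataset is audited...
dataset = Path("train.csv")
dataset.write_text("id,label\n1,0\n2,1\n")
audited_digests = {dataset.name: fingerprint(dataset)}

# ...then re-verify before every training run; a mismatch flags
# silent modification, one common data-poisoning vector.
assert fingerprint(dataset) == audited_digests[dataset.name]
```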

Minimal Transparency for Limited-Risk AI and Flexible Use for Minimal/No-Risk AI:

Limited-risk AI systems face only light transparency obligations: users must be informed when they are interacting with an AI system, and synthetic content must be marked in a machine-readable format. AI systems falling outside these risk classes, such as recommender systems, can be used freely, though adherence to voluntary codes of conduct is encouraged.
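
The act requires machine readability but leaves the marking mechanism open. As a minimal sketch, assuming a JSON wrapper and invented field names, generated text could carry an embedded provenance marker like this:

```python
import json
from datetime import datetime, timezone

def mark_synthetic(content: str, model_name: str) -> str:
    """Attach a machine-readable provenance marker to generated text.
    The marker fields are illustrative; no particular format is mandated."""
    marker = {
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps({"provenance": marker, "content": content})

print(mark_synthetic("A sunny day in Brussels...", "example-model-v1"))
```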

What does this mean for General-Purpose AI Models?

For general-purpose AI models, such as those behind ChatGPT, the EU Parliament and Council reached a compromise in the form of a tiered approach that distinguishes between horizontal obligations applicable to all models and additional obligations for those posing systemic risks. Transparency requirements, adherence to copyright law, and detailed summaries of training content apply universally.

Systemic-risk models face more stringent obligations, including evaluations, risk assessments, adversarial testing, reporting incidents to the Commission, ensuring cybersecurity, and reporting on energy efficiency. 

Some negative news coverage highlighted the underwhelming nature of the agreement, as more and more ground was conceded over the course of the negotiations. This did not escape the media, which increasingly asked whether attempting to please all parties to the trilogue would ultimately please none. More quietly, yet revealingly, the media also kept asking whether EU countries and companies have the logistical infrastructure in place to implement and enforce the act’s provisions.

Critics have raised pivotal questions about the act’s implications for companies in the entertainment industry and, more broadly, across other sectors. Concerns center on how the act’s provisions could compromise the operational efficiency of these models and drive up operating costs to the point of rendering them financially inaccessible. The bulk of the news stories published on the subject, however, answered these questions with overwhelmingly positive coverage: sentiment was strongly favorable toward both the tech companies most commonly mentioned in coverage of the EU AI Act and the EU bodies involved in its ratification.

At its core, this regulatory breakthrough propels terms like ‘responsible AI’ and ‘trustworthy AI’ to the forefront, signaling a paradigm shift in AI governance. The legislation applies to all companies using or offering AI models within the EU, while also addressing concerns raised a year earlier by the White House’s Blueprint for an AI Bill of Rights, including ensuring safety and protection against discrimination.

The impending enactment of the EU AI Act positions the EU as a leader in responsible AI development, ensuring that governance aligns with innovation. With a focus on AI systems being “safe, transparent, traceable, non-discriminatory, and environmentally friendly,” the EU’s efficacy in this regard will likely be scrutinized in comparison to approaches adopted by other leading AI nations such as the UK and the US, as well as international initiatives shaping AI standards at the G7, G20, OECD, Council of Europe, and the UN.

Disclaimer: The information above is provided for general informational purposes only and should not be construed as legal advice or opinion.
