European Commission validates withdrawal of SEP and AI liability
It highlights the differences between the tech lobby and consumer advocates in the AI sector
The European Commission (EC) has confirmed the official withdrawal of legislative draft proposals that would have enhanced the European Union’s (EU) regulatory oversight of standard-essential patent (SEP) licensing and the civil liability of artificial intelligence (AI) products and services.
In September 2022, the EC published a draft proposal on an AI liability directive to adapt non-contractual civil liability rules. It was intended to prevent the fragmentation of rules adopted by individual EU member states to address harmful acts and other wrongs committed by AI.
In April 2023, the Commission published a regulatory proposal to facilitate licensing of patents covering inventions incorporated into technological standards. It would have required SEP owners to register their patents with the EU Intellectual Property Office (EUIPO), which would scrutinize the patents and set criteria for fair, reasonable and non-discriminatory (FRAND) licensing obligations.
Industry analysts concerned about the regulation’s impact on the telecommunications market, however, have welcomed the EC’s decision to withdraw the proposed SEP regulation.
A spokesperson for the Council for Innovation Promotion said that the regulation “would have enabled large companies within industries to collectively determine royalty rates.”
Gene Quinn, CEO and president of IPWatchdog, remarked that the SEP oversight framework would “render meaningless FRAND licensing promises in favor of authoritarian decrees.”
The EC cited the lack of any foreseeable agreement among member states as the main reason for withdrawing the proposed SEP regulation.
Drawing on findings from the EC’s February 2020 white paper on AI, the liability directive provided for court-ordered evidentiary disclosures from operators of high-risk AI systems, a presumption of a causal link between non-compliance and damage caused by AI outputs or failures, and a monitoring program to advise the EC on whether certain AI incidents warranted strict liability.
However, since the AI liability directive was introduced in late 2022, no significant progress was made on its adoption by EU member states. Originally part of a larger regulatory plan alongside the Artificial Intelligence Act, which became effective in August 2024, the directive was to complement the Act’s framework for assessing risk levels of specific AI deployments by providing EU consumers a cause of action for civil liability, an enforcement mechanism against non-compliance with the Act.
Several EU lawmakers resisted plans to withdraw the AI liability directive. Members of the European Parliament’s Committee on the Internal Market and Consumer Protection (IMCO) voted to continue working on AI liability rules after the Commission’s withdrawal.
Henna Virkkunen, the EU Commission’s Executive Vice-President for Tech Sovereignty, Security, and Democracy, asserted, “The AI liability directive would have led to fragmented rules across EU member states.”
In April, several organizations representing the interests of civil society, including the European Consumer Organisation, the European Center for Not-for-Profit Law and Mozilla Foundation, sent an open letter to Virkkunen, urging the Commission to draft new civil liability rules for AI providers.
Recently, Virkkunen defended the EC’s withdrawal of the AI directive before the EU Parliament’s Committee on Legal Affairs, stating that the rules would not be drafted until the AI Act was fully implemented across the EU. She reiterated her commitment to drafting the rules supporting a true single market for AI across Europe.
Meanwhile, the Big Tech lobby opposed civil liability rules in the AI space, and some critics of the AI directive argued that the proposed framework contained liability gaps.
A 2023 journal article noted that, under the proposed framework, it would be difficult for civil liability to attach to black-box medical AI systems, which provide diagnoses and recommendations through opaque decision-making processes, particularly when physicians could not assess the AI outputs or the decisions were not subject to independent physician review.
The confirmation of the AI liability directive’s withdrawal came days before certain oversight provisions of the AI Act became effective. Those provisions require EU member states to monitor compliance with the Act by domestic businesses.
Foreign companies operating in the EU’s AI market will also be governed by the AI Code of Practice established under the AI Act, which mandates certain transparency and compliance standards for general-purpose AI models.
Recently, Google and X announced they would sign the EU’s general-purpose AI rules, with X committing to the Code’s chapter on safety and security.