
"European Artificial Intelligence Act": Regulation must be balanced and innovation-friendly

Digital & Innovation

Luxembourg, 14 February 2022.

In its statements of 2020 and 2021, the European Commission set out its intention to assert the EU's excellence in artificial intelligence (AI) and digital technologies and to strengthen the EU's competitiveness vis-à-vis global players. Alongside an action programme to stimulate innovation and investment in AI systems, the Commission has also decided to define a legal framework that aims to build confidence in a rapidly developing strategic technology and to prevent possible abuses that could affect the security and fundamental rights of citizens. On 21 April 2021, the European Commission therefore presented a proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence. Since then, the Council of the EU on the one hand and the European Parliament on the other have been analysing the text under the ordinary legislative procedure.

In FEDIL's view, it is of course necessary to regulate the use and placing on the market of AI systems at EU level to ensure a safe, reliable and transparent environment, and in particular to avoid fragmentation of the digital single market. However, FEDIL believes that this regulation and the resulting obligations should be proportionate to the risks incurred, realistic in relation to the objectives pursued, and clear and objective for the actors involved. The new rules must not become a regulatory straitjacket that hinders innovation and investment, leads to more costly solutions and ultimately defeats the EU's ambition to make significant technological advances in this area.

The Federation of Industrialists wishes to draw attention to the European regulators' proposal because it and its members recognise the enormous potential of AI-based technologies for industry, especially when it comes to making the green and digital transitions a success. In a wide range of areas, these new technologies – Big Data, Machine Learning and Artificial Intelligence – can provide innovative solutions for creating customer value, automating and optimising manufacturing processes, improving productivity, reducing costs, and more, and will ultimately have a beneficial impact on raw material requirements and on reducing carbon emissions. In general, industry has a very positive attitude towards AI, which is already widely applied in predictive maintenance of manufacturing facilities, smart energy management, robotisation of repetitive low value-added tasks, quality control, smart grids and smart material modelling. This innovative momentum must not be allowed to fade.

The approach adopted by the European Commission is based on a classification of AI systems according to whether they present a minimal, limited, high or unacceptable risk, the last category being prohibited outright. Specific obligations have been defined for each risk level. FEDIL believes that the definition of high-risk applications is too broad: industrial applications that do not in fact present a high risk should not fall within the scope of the full requirements for high-risk AI. This would be disproportionate and would discourage small and medium-sized enterprises and start-ups in particular from developing innovative industrial AI applications. FEDIL therefore calls for clear and objective criteria that correspond to the real risks.

Provided that the identification of high-risk AI systems is relevant and proportionate, FEDIL endorses a swift conformity assessment before such a system is put into use and placed on the internal market.

As regards the specific requirements with which high-risk AI systems should comply, it must be noted that the current proposal is not suited to all use cases. For example, clarifications are needed in the areas of data governance, record-keeping and human oversight. Instead of subjecting certain data to prescribed quality criteria and practices, FEDIL recommends defining the desired outcome of the use of a high-risk AI system and leaving it to industry to design its systems to achieve that outcome, notably through standards.

As regards the different obligations imposed on the actors in the supply chain of a high-risk AI system, FEDIL believes that some adaptations are necessary. While the supplier has to bear most of the obligations laid down in the Regulation when it wants to place a high-risk AI system on the market, the user will, for example, be subject to the same obligations when it puts a high-risk AI system into service under its own name or brand. Apart from the fact that they do not necessarily have the technical knowledge to ensure that the system complies with the necessary requirements, such users, often SMEs, may no longer be willing to invest in AI systems developed by third parties.

In order to support digital dynamism in Europe and Luxembourg, it is important to avoid creating barriers to the emergence of new use cases and discouraging companies from innovating in the field of artificial intelligence. Any unnecessary administrative burden that hinders the necessary investments amounts to lost revenue. European companies, especially SMEs and start-ups, should therefore not be subjected to disproportionate regulatory constraints.

In view of current and future technological and energy challenges, it is essential to harness the potential of artificial intelligence and put cutting-edge AI solutions at the service of industry and business.
