Navigating the AI Act: Implications for Critical Infrastructure
The increasing integration of Artificial Intelligence (AI) in critical infrastructure markets, including the industrial, communications, transportation, education, and financial sectors, necessitates a clear understanding of AI's capabilities, limitations, applications, and associated risks. The degree of regulatory burden imposed on AI systems is significantly influenced by these factors. The European Parliament's passage of the Artificial Intelligence Act ("AI Act") on March 13, 2024, marks a pivotal moment in AI regulation, with potential ramifications for US companies operating in EU markets.
EU AI Act Background
The AI Act establishes a risk management-based regulatory framework designed to protect EU citizens from potential adverse effects of AI. This framework categorizes AI systems into four risk levels: "unacceptable risk," "high risk," "limited risk," and "minimal risk." While many believe that most AI applications will fall into the minimal or limited risk categories, a considerable number of potential market applications could be classified as high-risk, leading to extensive regulatory oversight.
High-Risk Classification
Given the fundamental importance of critical infrastructure systems to society, AI applications within these systems are subject to heightened scrutiny. Article 6 of the AI Act defines high-risk AI systems, focusing on those that serve a safety purpose, either as standalone products or as safety components of other products. It also designates types of goods and services regulated under the EU's harmonization legislation.
Generally, high-risk AI systems encompass applications in:
- Education
- Banking and finance
- Public and private services
- Critical infrastructure
- Goods subject to EU product safety laws (e.g., vehicles, toys, medical devices, aviation, elevators, watercraft, and farming equipment).
Critical infrastructure is generally understood as the essential support systems, functions, and services that are vital to maintaining a civil society. The United States defines critical infrastructure and key resources (CIKR) across 16 key economic and governmental sectors. Similarly, the EU employs an expansive functional sector list covering energy, transport, banking, financial market infrastructure, health, drinking water, waste water, digital infrastructure, public administration, space, and food, as defined in Directive (EU) 2022/2557 and Commission Delegated Regulation (EU) 2023/2450 (collectively, "EU Critical Infrastructure Laws").
Article 3(62) of the AI Act adopts the same critical infrastructure definition as the EU Critical Infrastructure Laws. From there, however, the AI Act fragments its treatment of critical infrastructure for purposes of AI risk classification.
Annex III(2) of the AI Act identifies as high-risk AI systems: "AI systems intended to be used as safety components in the management and operation of critical digital infrastructure (emphasis added), road traffic, or in the supply of water, gas, heating or electricity."
This definition narrows the scope to a subset of the EU's critical infrastructure sectors and introduces the term "critical digital infrastructure," which is not explicitly defined. Recital 55 of the AI Act clarifies that the term refers to management and operational systems within the critical infrastructure sectors and subsectors outlined in the Annex to Directive (EU) 2022/2557. Thus, any AI system that performs an operational or management control function is likely to be classified as high-risk, regardless of whether it has an intended safety function. Similarly, any safety system used in critical infrastructure would be a covered high-risk system, as it would inherently be part of an operational system.
Critical infrastructure is further addressed in Annex III(5) of the AI Act, which includes systems that control "[a]ccess to and enjoyment of essential private services and essential public services and benefits." This encompasses account management, billing, credit, and provisioning systems used within critical infrastructure service sectors.
Furthermore, other sectors falling within the EU Critical Infrastructure Laws' definition may be assigned a high-risk classification under Article 6(1) by reference to the Union harmonization legislation listed in Annex I, which reaches AI systems that are themselves regulated products as well as their safety components. Consequently, a wide range of systems used in the operation, management, safety, provision, and access to critical infrastructure functions may be implicated, potentially sweeping in whole categories of related systems and products.
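Pulling these threads together, the high-risk triggers for a critical infrastructure AI system can be sketched as a simple decision procedure. The following Python sketch is an editorial simplification of how Annex III(2), Annex III(5), and Article 6(1)/Annex I interact; the sector names and fields are illustrative assumptions, not the Act's own taxonomy, and actual classification requires legal analysis.

```python
# Editorial sketch of how a critical infrastructure AI use might map to the
# AI Act's high-risk triggers. Sector names and fields are illustrative
# assumptions, not the Act's taxonomy.

ANNEX_III_2_SECTORS = {
    "critical digital infrastructure", "road traffic",
    "water supply", "gas supply", "heating", "electricity",
}

def high_risk_triggers(sector: str,
                       is_safety_component: bool,
                       performs_operational_control: bool,
                       governs_access_to_essential_services: bool,
                       annex_i_regulated_product: bool) -> list:
    """Return the provisions plausibly triggering high-risk status."""
    triggers = []
    # Annex III(2): safety components and, per Recital 55, management and
    # operational control systems in the listed sectors.
    if sector in ANNEX_III_2_SECTORS and (
            is_safety_component or performs_operational_control):
        triggers.append("Annex III(2)")
    # Annex III(5): systems controlling access to essential services
    # (e.g., account management, billing, credit, provisioning).
    if governs_access_to_essential_services:
        triggers.append("Annex III(5)")
    # Article 6(1): products (or safety components of products) covered by
    # the Union harmonization legislation listed in Annex I.
    if annex_i_regulated_product:
        triggers.append("Article 6(1) via Annex I")
    return triggers

# A grid-switching controller: operational control in the electricity sector.
print(high_risk_triggers("electricity", False, True, False, False))
# ['Annex III(2)']
```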
Defining AI: Smart vs. Artificially Intelligent
While understanding the high-risk AI system classification is crucial for critical infrastructure operators, defining what constitutes AI is equally important. The AI Act’s definition of an "AI system," combined with other ambiguities, could inadvertently include legacy smart systems within its scope.
Critical infrastructure systems are intricate and rely on advanced technologies for operation, control, and monitoring. These technologies often incorporate real-time or near-real-time sensing, network connectivity, and predictive models to make decisions and execute system functions. Although these systems are "smart," it is debatable whether they should be classified as AI systems.
The AI Act defines an "AI system" in Article 3(1) as: "…a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
This definition, derived from the OECD's definition (first adopted in 2019 and revised in 2023), reflects the ongoing difficulty in achieving consensus on what constitutes AI. The definition includes "adaptiveness" as a potential trait of an AI system but does not make it a mandatory requirement. The definition essentially boils down to a five-part test, rendered schematically in the sketch after the list below:
- A machine-based system
- Designed to operate with some degree of autonomy
- That pursues any kind of objective
- Based upon inputs received
- That can influence any environment
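As a purely schematic aid (not anything the AI Act prescribes), the five-part test can be expressed as a screening checklist. The field names below are hypothetical, and the point of the sketch is how little the test excludes.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical screening fields for the five-part reading of Article 3(1)."""
    machine_based: bool            # a machine (software/hardware) system
    some_autonomy: bool            # operates with some independence from humans
    has_objectives: bool           # pursues explicit or implicit objectives
    infers_from_inputs: bool       # derives outputs from the input it receives
    influences_environment: bool   # outputs affect a physical or virtual environment

def may_be_ai_system(p: SystemProfile) -> bool:
    """True when all five prongs are met. Illustrative only: the legal
    definition turns on interpretation, not a Boolean checklist."""
    return all([p.machine_based, p.some_autonomy, p.has_objectives,
                p.infers_from_inputs, p.influences_environment])

# A conventional SCADA alarm manager arguably satisfies every prong,
# which is precisely the over-breadth problem discussed below.
scada = SystemProfile(True, True, True, True, True)
print(may_be_ai_system(scada))  # True
```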
This test could potentially encompass systems that are not conventionally considered AI. The most distinguishing "AI-like" trait in the definition is "autonomy," but its meaning in relation to machines remains unclear.
The Oxford English Dictionary defines "autonomy" as "the fact or quality of being unrelated to anything else, self-containedness; independence from external influence or control, self-sufficiency." Another definition emphasizes irreducibility: a system having its own laws or methods. Regardless, the ordinary meaning of "autonomy" does little to clarify what the term means in relation to a machine and, as a consequence, contributes little to ferreting out what an AI system is.
The key takeaway from the definition is the notion of independence from external influence or control. Many conventional software-controlled systems use statistical algorithms and methods, such as regression analysis, Monte Carlo simulation, and factor analysis, to interpret data inputs, draw inferences, and take actions without human intervention. This type of software is prevalent in various applications, from factory production line robots to comprehensive system management platforms that make near real-time decisions in areas of energy production and transmission switching, financial trading, and logistics.
However, a self-executing system controlled or constrained by Boolean logic, even if using statistically derived functional inputs, would likely not be considered autonomous because it is the product of outside control or influence.
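To make the contrast concrete, consider a minimal sketch, assuming a hypothetical circuit breaker controller. The first version is constrained entirely by fixed Boolean rules, even though its setpoint was derived statistically offline; the second learns its own decision boundary from data, mapping more naturally onto the Act's notion of a system that "infers … how to generate outputs." All thresholds, data, and names are illustrative assumptions.

```python
# (1) Rule-constrained controller: every runtime path is enumerated by its
# designers. A regression run offline may have produced SETPOINT_C, but the
# system exercises no independent judgment, so it is arguably not autonomous.
SETPOINT_C = 72.4  # hypothetical limit derived from historical regression

def rule_based_breaker(load_pct: float, temp_c: float) -> str:
    if load_pct > 95.0 or temp_c > SETPOINT_C:
        return "TRIP"
    return "HOLD"

# (2) Inference-driven controller: the decision boundary is learned from
# operational history rather than enumerated by designers.
from sklearn.linear_model import LogisticRegression

history = [[88.0, 70.1], [97.5, 75.2], [60.0, 65.0], [99.0, 80.3]]  # load, temp
operator_tripped = [0, 1, 0, 1]  # 1 = an operator tripped the breaker

model = LogisticRegression().fit(history, operator_tripped)

def learned_breaker(load_pct: float, temp_c: float) -> str:
    return "TRIP" if model.predict([[load_pct, temp_c]])[0] else "HOLD"

print(rule_based_breaker(96.0, 70.0), learned_breaker(96.0, 70.0))
```

On this reading, only the second controller clearly exhibits the independence from designer control that "autonomy" gestures at.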
Delineating Systemic Risk: The 10^25 FLOP Threshold
The AI Act introduces a quantitative threshold under Chapter V for determining when general-purpose AI models are deemed to present systemic risk. Article 3(63) defines a "general-purpose AI model" as: "… an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market."
Article 3(65) defines “systemic risk” as: “a risk that is specific to the high-impact capabilities of general purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”
Under Article 51(1)(a) and (b), a general-purpose AI model presents systemic risk if it has high-impact capabilities evaluated on the basis of appropriate technical tools, methodologies, and benchmarks, or if the Commission so designates it having regard to the criteria in Annex XIII. Article 51(2) further establishes a presumption: a general-purpose model trained with cumulative compute greater than 10^25 floating-point operations (FLOPs) is presumed to have high-impact capabilities under Article 51(1)(a). This threshold is likely aimed at newer large language models (LLMs) like GPT-4.
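For a sense of scale, a common heuristic from the scaling-law literature (not from the AI Act itself) estimates training compute as roughly 6 FLOPs per parameter per training token. The sketch below applies that heuristic to hypothetical model sizes; the figures are illustrative assumptions, not measurements of any actual model.

```python
# Hedged back-of-the-envelope check against the Article 51(2) presumption.
# Uses the common ~6 * N * D training-compute heuristic; all model figures
# below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate: ~6 floating-point operations per parameter per token."""
    return 6.0 * n_params * n_tokens

examples = [
    ("mid-size model (7B params, 2T tokens)", 7e9, 2e12),
    ("hypothetical frontier run (1.5T params, 10T tokens)", 1.5e12, 1e13),
]
for name, params, tokens in examples:
    flops = estimated_training_flops(params, tokens)
    presumed = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> presumed systemic risk: {presumed}")
```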
Modern AI systems are vastly more complex than their predecessors but still rely on long-understood principles, leveraging computational scale. The use of LLMs in critical infrastructure and other identified high-risk areas will undoubtedly be subject to significant scrutiny and regulatory oversight.
The High-Risk Safe Harbor
To mitigate definitional uncertainty, the AI Act provides a safe harbor in Article 6(3), excluding AI systems from high-risk classification if the system "… does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making."
This aligns with the Boolean-based distinction mentioned earlier. Article 6(3) further provides a derogation (exclusion) test that is satisfied if a system meets any one of the following conditions:
- The AI system is intended to perform a narrow procedural task.
- The AI system is intended to improve the result of a previously completed human activity.
- The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review.
- The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
While this provision does not settle what is or is not an AI system, it offers a basis for excluding systems from high-risk classification. It is a crucial provision for critical infrastructure system providers who choose to demur to high-risk AI system treatment and can firmly establish that the Article 6(3) safe harbor is met.
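As a thought experiment, the Article 6(3) screen can be reduced to a short function: a significant-risk gate followed by the condition list above. This is a minimal sketch for reasoning about the text, assuming simplified Boolean inputs; it is not a compliance tool.

```python
# Illustrative rendering of the Article 6(3) derogation screen.
# Inputs are simplified Booleans; actual assessments require legal analysis
# and must be documented under Article 6(4).

def qualifies_for_derogation(poses_significant_risk: bool,
                             narrow_procedural_task: bool,         # condition (a)
                             improves_prior_human_activity: bool,  # condition (b)
                             pattern_detection_only: bool,         # condition (c)
                             preparatory_task_only: bool) -> bool:  # condition (d)
    """An Annex III system escapes high-risk status only if it poses no
    significant risk of harm AND at least one listed condition applies."""
    if poses_significant_risk:
        return False
    return any([narrow_procedural_task, improves_prior_human_activity,
                pattern_detection_only, preparatory_task_only])

# E.g., a billing-anomaly flagger that only surfaces deviations for
# human review (condition (c)) and poses no significant risk:
print(qualifies_for_derogation(False, False, False, True, False))  # True
```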
However, this path carries its own classification risk and regulatory obligations. Article 6(4) of the AI Act mandates that providers of purported non-high-risk Annex III systems register them under Article 49(2) and document their non-high-risk assessment, making it available to authorities upon request.
Finally, Article 6(5) requires the Commission, after consulting the European Artificial Intelligence Board (AI Board), to provide guidelines with practical examples and use cases distinguishing high-risk from non-high-risk systems within 18 months of the Act's entry into force. This mandate is intended to guide the market, and Article 6(6) empowers the Commission to modify and add to the high-risk exclusion conditions under its delegated authority under Article 97.