How should the law define AI?
How do you regulate something which has no single clear definition? Here's my idea...and it's inspired by legos.
Author’s note: This article will be continuously tested, reviewed and revised.
Last update: 12 May 2024.
Law is built on definitions (which drive the scope of the law). In some cases, definitions are relatively straightforward or non-contentious to draft. But in other fields, definitions can be a fundamental threshold roadblock that stumps policymakers.
AI is an example of the latter case. Since the launch of ChatGPT in late 2022, AI regulation has moved from a niche topic to a global concern, with governments striving to ensure safe AI use while mitigating risks like misinformation and privacy breaches. However, the lack of a universally accepted definition of AI complicates this process, leaving each government to determine its own interpretation for regulatory purposes.
But I might have a suggestion...and it’s inspired by lego pieces 🤭
The challenge with defining AI
There is no single universally accepted definition of AI. What AI should or should not cover has been a long-debated topic among technologists, governments, legal theorists and others.
In fact, when you search for “definition of AI” in Google, you’ll get a range of different results, such as the following (to list just a few examples):
“The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” (Britannica)
“The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this.” (Oxford Dictionary)
“The science and engineering of making intelligent machines.” (Stanford)
“A technical and scientific field devoted to the engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.” (International Organisation for Standardisation)
AI includes “(1) systems that think like humans (e.g. cognitive architectures and neural networks); (2) systems that act like humans (e.g. pass the Turing Test, natural language processing); (3) systems that think rationally (e.g. logic solvers, inference, optimisation); and (4) systems that act rationally (e.g. intelligent software agents and embodied robots that achieve goals via perception, planning, etc)” (Russell / Norvig AI)
“information-processing technologies that integrate models and algorithms that produce a capacity to learn and to perform cognitive tasks leading to outcomes such as prediction and decision-making in material and virtual environments. AI systems are designed to operate with varying degrees of autonomy by means of knowledge modelling and representation and by exploiting data and calculating correlations. AI systems may include several methods, such as but not limited to: (i) machine learning, including deep learning and reinforcement learning; (ii) machine reasoning, including planning, scheduling, knowledge representation and reasoning, search, and optimization. AI systems can be used in cyber-physical systems, including the Internet of things, robotic systems, social robotics, and human-computer interfaces, which involve control, perception, the processing of data collected by sensors, and the operation of actuators in the environment in which AI systems work” (UNESCO Recommendation on ethics of AI).
“discipline concerned with the building of computer systems that perform tasks requiring intelligence when performed by humans” (ISO/IEC 39794-16:2021).
“capability to acquire, process, create and apply knowledge, held in the form of a model, to conduct one or more given tasks” (ISO/TR 5255-2:2023).
“capability of a functional unit to perform functions that are generally associated with human intelligence such as reasoning and learning” (ISO/IEC 2382:2015).
Indeed, there’s a whole plethora of literature on what AI is. While I don’t propose to dive into that literature here, I’ve found that definitions tend to sit along a few spectrums (ranging from broad to narrow in each case):
AI as a form of simulating intelligence (or even a form of intelligence itself) versus AI as only an algorithm.
AI as a field of science versus AI as an applied technology.
AI as broader than machine learning versus AI as limited to machine learning.
Defining AI is tricky because it covers a wide and ever-changing range of technologies, from narrow predictive algorithms used in finance to large language models embedded in chatbots that can deliver human-like conversations.
In fact, AI historian Pamela McCorduck has described this as an "odd paradox": as computer scientists find new and innovative solutions, computational techniques once considered AI lose the title as they become commonplace and routine.
For example, expert systems (which emulate human decision-making through if-then-else logic statements) were once considered AI throughout the 1980s and 90s. Expert systems are an example of deterministic systems (i.e. the same inputs will always produce the same output).
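For readers who like to see the mechanics, here is a minimal sketch of what an expert system looks like in code. The loan-screening rule and its thresholds are entirely hypothetical and chosen purely for illustration; the point is that the knowledge is hand-coded as if-then-else logic, so the behaviour is deterministic.

```python
# Illustrative only: a toy "expert system" style rule (hypothetical loan screening).
# The knowledge is written by hand as if-then-else logic, so the system is
# deterministic: the same inputs always produce the same output.

def loan_decision(income: float, existing_debt: float) -> str:
    if income <= 0:
        return "decline"
    debt_ratio = existing_debt / income
    if debt_ratio < 0.3:
        return "approve"
    elif debt_ratio < 0.5:
        return "refer to a human underwriter"
    else:
        return "decline"

print(loan_decision(60_000, 12_000))  # always "approve" for these inputs
print(loan_decision(60_000, 40_000))  # always "decline" for these inputs
```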
But when machine learning came to the fore, it introduced the ability for computers to predict new data based on patterns in historical data (as opposed to calculating new data based on pre-coded logic/rules). Machine learning represented a form of ‘non-deterministic’ computing (i.e. the same inputs could produce different outputs), as a machine learning system could continuously refine its outputs over time by ingesting more historical data and developing better approximations of patterns (without being explicitly programmed to do so).
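To make the contrast concrete, here is another minimal sketch, this time of a learned model. The data points are made up for illustration; the thing to notice is that no one hand-codes the rule, the rule is estimated from historical data, and the prediction for the very same input shifts once more data is ingested and the model is re-fitted.

```python
# Illustrative only: a toy "learned" counterpart to the hand-coded rule above.
# The relationship is estimated from (hypothetical) historical data rather than
# being explicitly programmed, and it changes as more data arrives.

from statistics import linear_regression  # Python 3.10+

# Hypothetical historical records: (income, amount repaid)
incomes = [30_000, 45_000, 60_000, 80_000]
repaid  = [ 9_000, 15_000, 21_000, 30_000]

slope, intercept = linear_regression(incomes, repaid)
print(round(slope * 50_000 + intercept))  # prediction for a 50k income

# Ingest more historical data and re-fit: the learned pattern changes,
# so the prediction for the *same* input changes too.
incomes += [35_000, 70_000]
repaid  += [14_000, 20_000]
slope, intercept = linear_regression(incomes, repaid)
print(round(slope * 50_000 + intercept))  # a different prediction
```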
This led to the impression that machine learning applications are ‘smarter’ (or more ‘intelligent’) than deterministic systems, which has since complicated the terminology around “AI”. Some see “AI” as exclusive to “machine learning”, while others think “AI” also covers legacy deterministic systems (e.g. expert systems), which are now often grouped under “symbolic AI”.
No one knows who’s right.
So how should AI be defined under law?
Given the complexities around the terminology of AI, how should we go about defining AI for the purposes of regulation?
Obviously, there’s an earlier threshold question of whether AI should be regulated in the first place. I don’t propose to go into this (as it’s a whole topic in itself), and will assume for the sake of this article that AI regulation is a given premise. Nor will I go into what rules/obligations should be attached to any regulated AI.
This piece is purely about AI definition and terminology. And on that, here are some factors to consider: