The Artificial Intelligence (AI) Law, Regulation and Policy Glossary March 2024

AI terms often have varying technical definitions. But how are AI terms used in laws, regulations, regulatory guidance and policy in the EU and the UK?

The Burges Salmon AI Law, Regulation and Policy Glossary is a selection of key AI terms and their definitions (with links to sources), identifying where they are found in current and anticipated UK and EU laws, regulations, regulatory guidance and UK AI policy.

This glossary is a useful guide for private companies, public organisations, regulators and legislators – in particular those working in the areas of Financial Services, Healthcare and Transport Technology – who:

* want a reference guide to AI terms;
* are interested in how and where terms relevant to AI are being used in law, regulation and policy; and/or
* are preparing to comply with the various current and future regulations that will affect how they build, buy, sell and govern AI systems.

What can we learn from this?

The glossary draws out four key themes about the application of AI terms in law, regulation and policy:

* We may not be talking the same language. Shared understanding of terms is essential when determining how and when laws and regulations apply. But whilst certain terms may be commonly used in industry, they can lack a legal definition or vary between sources, and so risk differing interpretations and application. There are various geographical and industry standards-setting organisations working towards common AI terminologies. Those are useful, and are how shared terminology has been developed previously in other industries. However, they may vary between themselves and may not be how legislators, regulators or courts apply terms in practice.
* That is, partly, because context matters. Definitions vary depending on the context in which they are used, e.g. the type of legislation or guidance, the industry to which they relate, and the geography in which they apply. For example, the types of ‘damage’ which regulations try to protect against can vary; the damage potentially caused by automated vehicles (such as damage to property) is different from the types of damage which other laws are typically more concerned about when addressing AI systems (e.g. the EU’s focus on fundamental rights).
* We are still at an early stage. There is still relatively limited application of AI terms in statute, case law and regulation. This may mean those applying AI terms in practice – whether industry, courts or regulators – have to turn to other sources to try to understand what a term does (or does not) mean. That will include industry and technical definitions (which are voluminous, varied, at differing levels of maturity and which we do not include here). However, as AI regulations progress globally, we can expect further debate, guidance and clarification as to what terms mean in practice.
* Common definitions do occur but should not be presumed. For example, a number of ‘data’ related terms are consistent in England, Wales, Scotland and in the EU as a result of the GDPR. The EU’s proposed AI laws intend to produce a similar ‘gold standard’ of legislation, which would include seeing terms being used consistently in different jurisdictions. However, regulators may intentionally choose not to do this. For example, the EU AI Act intends to define AI with precision, whereas the UK’s proposed approach to regulating AI does not define AI precisely.

Why did we choose these sources?

Those looking to build, buy or sell AI systems will be subject to various laws and regulations. They may also look to guidance which indicates how terms are understood and may be applied. Which ones apply, and the weight they should be given (if any), needs to be considered in each case. This document is not legal advice. However, we think the sources used for this glossary (listed at the end) are some that are likely to have to be considered, or are useful to compare and contrast, to help determine the meaning of a term.

AI / Artificial Intelligence Also see AI System E&W and Scotland ‘Artificial intelligence’ means technology enabling the programming or training of a device or software to — i) perceive environments through the use of data; ii) interpret data using automated processing designed to approximate cognitive abilities; and iii) make recommendations, predictions or decisions; with a view to achieving a specific objective. (Source: National Security and Investment Act 2021 (Notifiable Acquisition) (Specification of Qualifying Entities) Regulations 2021/1264 Schedule 3, Paragraph 1 – Link) Guidance – E&W and Scotland ‘What is AI? AI is an umbrella term for a range of technologies and approaches that often attempt to mimic human thought to solve complex tasks. Things that humans have traditionally done by thinking and reasoning are increasingly being done by, or with the help of, AI.’ (Source: ICO, Explaining decisions made with Artificial Intelligence, Part 1 The basics of explaining AI, Definitions – Link) ‘AI or AI system or AI technologies: products and services that are ‘adaptable’ and ‘autonomous’ in the sense outlined in our definition in section 3.2.1.’ (Source: UK AI White Paper March 2023 – Link) Note that the UK Policy Paper 2022 stated that the UK’s position was to ‘set out the core characteristics of AI to inform the scope of the AI regulatory framework but allow regulators to set out and evolve more detailed definitions of AI according to their specific domains or sectors. This is in line with the government’s view that we should regulate the use of AI rather than the technology itself – and a detailed universally applicable definition is therefore not needed. Rather, by setting out these core characteristics, developers and users can have greater certainty about scope and the nature of UK regulatory concerns while still enabling flexibility – recognising that AI may take forms we cannot easily define today – while still supporting coordination and coherence.’ (Source: UK Policy Paper 2022: Establishing a pro-innovation approach to regulating AI – Link) Note that whilst the UK government response to the White Paper includes a glossary it does not define artificial intelligence. However, it does define the following: Adaptivity: The ability to see patterns and make decisions in ways not directly envisioned by human programmers. Autonomous: Capable of operating, taking actions, or making decisions without the express intent or oversight of a human. (Source: UK government response to White Paper, February 2024 – Link) Guidance – Scotland ‘While descriptions abound, we define AI as: Technologies used to allow computers to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation’. (Source: Scotland’s AI Strategy adapted from UK House of Lords’ Select Committee on Artificial Intelligence report ‘AI in the UK: Ready, Willing and Able’ (March 2021 – Link)) EU See AI System Canada ‘Information technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours, or solving problems.’ (Source: Canadian Directive on Automated Decision-Making – Link)

AI System (Artificial Intelligence System) E&W and Scotland See AI Guidance – E&W and Scotland Each AI system ‘involves the creation of an algorithm that uses data to model some aspect of the world, and then applies this model to new data in order to make predictions about it’. (Source: ICO, Explaining decisions made with Artificial Intelligence, Part 1 The basics of explaining AI, Definitions – Link) EU ‘AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.’ (Source: EU AI Act, Article 3(1) – Link) Canada ‘artificial intelligence system means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.’ (Source: Canada’s proposed C-27 bill ‘to enact the Artificial Intelligence and Data Act’ – Link) Algorithm Guidance – E&W and Scotland ‘An algorithm is a sequence of instructions or set of rules designed to complete a task or solve a problem. Profiling uses algorithms to find correlations between separate datasets. These algorithms can then be used to make a wide range of decisions, for example to predict behaviour or to control access to a service.’ (Source: ICO, UK GDPR Guidance and Resources, Automated decision-making and profiling – Link) ‘a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem.’ (Source: Joint Bank of England and FCA report, ‘Machine Learning in UK financial Services – Link) Guidance – E&W ‘A finite sequence of instructions, typically used to solve a class of specific problems or to perform a computation.’ (Source: Law Commission DAO Consultation – Link) Guidance – Scotland ‘A series of instructions for performing a calculation or solving a problem, especially with a computer. They form the basis for everything a computer can do, and are therefore a fundamental aspect of all AI systems.’ (Source: Scotland’s AI Strategy (adapted from UK House of Lords’ Select Committee on Artificial Intelligence Report “AI in the UK: Ready, Willing and Able (March 2021 – Link) EU Note: algorithm is referred to in the EU AI Act but not defined
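To make the common thread of the ‘algorithm’ definitions above concrete, the sketch below is a purely illustrative example (it is not drawn from any of the sources cited, and the rules and thresholds are invented): a finite sequence of instructions that takes input data and produces an answer, of the kind that might sit behind an automated decision.

```python
# Illustrative only: a finite sequence of instructions (an "algorithm")
# that turns input data into an output. All rules and thresholds are invented.

def assess_application(income: float, existing_debt: float, missed_payments: int) -> str:
    """Apply a fixed set of rules, in a fixed order, and return a decision."""
    # Step 1: derive a simple affordability ratio from the inputs.
    ratio = existing_debt / income if income > 0 else float("inf")
    # Step 2: apply pre-defined rules in sequence.
    if missed_payments > 3:
        return "refer to human reviewer"
    if ratio > 0.5:
        return "decline"
    # Step 3: produce the output (the "answer to the problem").
    return "approve"

print(assess_application(income=32_000, existing_debt=8_000, missed_payments=0))
```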

Algorithmic Processing Guidance – E&W and Scotland Algorithmic processing is ‘the processing of data (both personal and non-personal) by automated systems. This includes artificial intelligence (AI) applications, such as those powered by machine learning (ML) techniques, but also simpler statistical models […] Algorithmic processing can be used both to produce an output (for example video or text content) and to make or inform decisions that have a direct bearing on individuals.’ (Source: Digital Regulation Cooperation Forum (DRCF), The benefits and harms of algorithms: a shared perspective from the four digital regulators – Link) Algorithmic System Guidance – E&W and Scotland Algorithmic System is ‘a convenient shorthand to refer more widely to automated systems, a larger intersection of the algorithm, data, models, processes, objectives, and how people interact and use these systems’. (Source: CMA, Algorithms: How they can reduce competition and harm consumers – Link) Algorithmic Trading EU See MiFID II, Article 4(1)(39) which uses near-identical language to the FCA Glossary – Link E&W and Scotland Algorithmic Trading means ‘trading in financial instruments where a computer algorithm automatically determines individual parameters of orders such as whether to initiate the order, the timing, price or quantity of the order or how to manage the order after its submission, with limited or no human intervention, and does not include any system that is only used for the purpose of routing orders to one or more trading venues or for the processing of orders involving no determination of any trading parameters or for the confirmation of orders or the post- trade processing of executed transactions’. (Source: Financial Services and Markets Act 2000 (Markets in Financial Instruments) Regulations 2017/701, Regulation 2 – Link) MiFID II, Article 4(1)(39) which uses near-identical language to the FCA Glossary – Link; and FCA Glossary – Link Authorised representative EU ‘Authorised representative’ means any natural or legal person established within the Union who has received a written mandate from a manufacturer to act on its behalf in relation to specified tasks.’ (Source: EU Proposed Product Liability Directive, Article 4(12) – Link). ‘authorised representative’ means any natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.’ (Source: EU AI Act, Article 3(5) – Link) Guidance – E&W and Scotland Authorised UK representative means ‘(in relation to a firm) a person resident in the United Kingdom who is authorised to act generally, and to accept service of any document, on behalf of the firm’. (Source: FCA Glossary – Link)

Automated Decision-Making (ADM) (or Automated Decision System) Guidance – E&W and Scotland ‘Automated decision-making is the process of making a decision by automated means without any human involvement. These decisions can be based on factual data, as well as on digitally created profiles or inferred data. Examples of this include: an online decision to award a loan; and an aptitude test used for recruitment which uses pre-programmed algorithms and criteria. Automated decision-making often involves profiling, but it does not have to.’ (Source: ICO, UK GDPR Guidance and Resources, Automated decision-making and profiling – Link) Canada Automated Decision System ‘any technology that either assists or replaces the judgment of human decision-makers. These systems draw from fields like statistics, linguistics, and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural nets.’ (Source: Canadian Directive on Automated Decision-Making – Link) ‘automated decision system means any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique’. (Source: Canada’s proposed C-27 bill ‘to enact the Artificial Intelligence and Data Act’ – Link) Automated Facial Recognition (AFR) E&W ‘AFR is a way of assessing whether two facial images depict the same person. A digital photograph of a person’s face is taken and processed to extract biometric data (i.e. measurements of the facial features). That data is then compared with facial biometric data from images contained in a database. […] The technical operation of AFR comprises the following six stages: (1) Compiling/using an existing database of images […], (2) Facial image acquisition […], (3) Face detection […], (4) Feature extraction […], (5) Face comparison […], (6) Matching […]’. (Source: R (Bridges) v CC South Wales, Paragraphs 8-9 – Link)
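The six-stage description of AFR in Bridges maps naturally onto a processing pipeline. The sketch below is purely illustrative (it is not how any particular AFR product works; the ‘features’, watchlist and distance-based comparison are invented stand-ins, and unlike the similarity score described in Bridges, here a lower distance means a closer match). It simply shows acquisition, detection, feature extraction, comparison and matching as separate steps.

```python
# Illustrative pipeline mirroring the six AFR stages described in Bridges.
# The feature vectors and distance-based comparison are invented stand-ins,
# not a real face-recognition method.

from math import dist

WATCHLIST = {                 # Stage 1: an existing database of images -> feature vectors
    "person_a": (0.12, 0.87, 0.44),
    "person_b": (0.90, 0.10, 0.35),
}

def acquire_image(camera_frame):           # Stage 2: facial image acquisition
    return camera_frame

def detect_face(image):                    # Stage 3: face detection
    return image.get("face_region")

def extract_features(face_region):         # Stage 4: feature extraction (biometric data)
    return face_region["measurements"]

def compare_and_match(probe, threshold=0.25):   # Stages 5 and 6: comparison and matching
    # Here a *smaller* distance means a closer match; a threshold decides
    # whether the closest candidate is reported as a match at all.
    scores = {name: dist(probe, ref) for name, ref in WATCHLIST.items()}
    best_name, best_score = min(scores.items(), key=lambda kv: kv[1])
    return best_name if best_score <= threshold else None

frame = {"face_region": {"measurements": (0.11, 0.85, 0.46)}}
print(compare_and_match(extract_features(detect_face(acquire_image(frame)))))
```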

Biometric Data (baseddata, identification and verification) E&W ‘The use of AFR technology involves the collection, processing and storage of a wide range of information, including (1) facial images; (2) facial features (i.e. biometric data); […] AFR entails the processing of biometric data in the form of facial biometrics. The term “biometrics” is described in the Home Office “Biometrics Strategy – Better Public Services Maintaining Public Trust” […] as “the recognition of people based on measurement and analysis of their biological characteristics or behavioural data’. (Source: R (Bridges) v CC South Wales, Paragraph 21 – Link) E&W and Scotland ‘Biometric data means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of an individual, which allows or confirms the unique identification of that individual, such as facial images or dactyloscopic data’. (Source: Data Protection Act 2018, Part 7, 205(1) – Link) Guidance – E&W and Scotland The ICO uses a definition similar to the Data Protection Act 2018 Part 7, section 205 (Source: Link) EU (33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data. (33a) ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, and psychological human features for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a database. (33c) ‘biometric verification’ means the automated verification of the identity of natural persons by comparing biometric data of an individual to previously provided biometric data (one-to-one verification, including authentication). (Source: EU AI Act, Article 3(33) – Link) See also other types of ‘data’ under the EU AI Act in Data

Black box Also see Transparency Guidance – E&W and Scotland ‘Black box – A system, device or object that can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings’. (Source: ICO, Guidance on AI and data protection, Glossary – Link) ‘Some of the most powerful machine learning models, by their very nature, lack a defined structure. They are described as ‘black boxes’ because we can observe the inputs (the data) and see the output (the prediction or decision) but may not be able to explain completely the mechanism that connects one to the other’. (Source: FCA, Explaining why the computer says ‘no’ – Link) MHRA have a ‘Project Glass Box’ to address the problem: ‘Current medical device requirements do not take into account adequate consideration of human interpretability and its consequence for safety and effectiveness for AIaMD [AI as a medical device].’ (Source: MHRA Software and AI as a Medical Device Change Programme – Roadmap – Link) Component EU ‘Component’ means any item, whether tangible or intangible, or any related service, that is integrated into, or inter-connected with, a product by the manufacturer of that product or within that manufacturer’s control. (Source: Proposed EU Product Liability Directive Article 4(3) – Link) (Data) Controller E&W and Scotland ‘(1) the competent authority which, alone or jointly with others: a) determines the purposes and means of the processing of personal data, or b) is the controller by virtue of subsection (2). (2) Where personal data is processed only — (a) for purposes for which it is required by an enactment to be processed, and (b) by means by which it is required by an enactment to be processed, the competent authority on which the obligation to process the data is imposed by the enactment (or, if different, one of the enactments) is the controller.’ (Source: Data Protection Act 2018 section 32 – Link) Guidance – E&W and Scotland ‘Controller means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data.’ (Source: ICO, UK GDPR Guidance and Resources, Controllers and processors, What are ‘controllers’ and ‘processors’ – Link) EU ‘Controller means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law.’ (Source: EU GDPR Regulation (EU) 2016/679 Article 4(7) – Link)

Damage (or harm) E&W and Scotland ‘In the context of autonomous vehicles, specifically liability of insurers, damage means ‘death or personal injury, and any damage to property other than — (a) the automated vehicle, (b) goods carried for hire or reward in or on that vehicle or in or on any trailer (whether or not coupled) drawn by it, or (c) property in the custody, or under the control, of (i)the insured person (where subsection (1) applies), or (ii)the person in charge of the automated vehicle at the time of the accident. (where subsection (2) applies).’ (Source: Automated and Electric Vehicles Act 2018 section 2(3) – Link) EU Damage means ‘material losses resulting from: (a) death or personal injury, including medically recognised harm to psychological health; (b) harm to, or destruction of, any property except: (i) the defective product itself; (ii) a product damaged by a defective component of that product; (iii) property used exclusively for professional purposes; (c) loss or corruption of data that is not used exclusively for professional purposes.’ (Source: EU Product Liability Directive, Article 4(6) – Link) ‘Serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person or serious damage to a person’s health, (b) a serious and irreversible disruption of the management and operation of critical infrastructure, (ba) breach of obligations under Union law intended to protect fundamental rights; (bb) serious damage to property or the environment.’ (Source: EU AI Act, Article 3(44) – Link) Canada ‘harm means (a) physical or psychological harm to an individual; (b) damage to an individual’s property; or (c) economic loss to an individual.’ (Source: Canada’s proposed C-27 bill ‘to enact the Artificial Intelligence and Data Act’ – Link)

Data, including Training data, Validation data, Testing data and Input data E&W Information which is recorded electronically or manually; not verbal communications (unless recorded). (Note: regarding the Data Protection Act 1998 section 1 since repealed by the Data Protection Act 2018. Source: Scott v LGBT Foundation Ltd [2020] EWHC 483 (QB), paragraph 61 – Link; citing Durant v Financial Services Authority [2003] EWCA Civ 1746 – Link) EU Data means data as defined in Article 2, point (1), of Regulation (EU) 2022/868 of the European Parliament and of the Council. [i.e.] ‘data’ means any digital representation of acts, facts or information and any compilation of such acts, facts or information, including in the form of sound, visual or audiovisual recording; (Source: EU Proposed Product Liability Directive, Article 4(7) – Link, and EU Regulation (EU) 2022/868 Article 2(1) – Link) EU AI Act also includes: (29) ‘training data’ means ‘data used for training an AI system through fitting its learnable parameters’. (30) ‘validation data’ means ‘data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset is a separate dataset or part of the training dataset, either as a fixed or variable split.’ (31) ‘testing data’ means ‘data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service’. (see Placing on the Market and Putting into Service) (32) ‘input data’ means ‘data provided to or directly acquired by an AI system on the basis of which the system produces an output’. (Source: EU AI Act, Article 3 – Link) See Biometric Data Data Subject E&W and Scotland Data subject means ‘the identified or identifiable living individual to whom personal data relates’. (Source: Data Protection Act 2018, section 3(5) – Link) Guidance – E&W and Scotland Personal data means ‘any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person’. (Source: ICO, UK GDPR Guidance and Resources, What is personal data? – Link) EU ‘personal data means personal data as defined in Article 4, point (1) of Regulation (EU) 2016/679’. (Source: EU AI Act, Article 3(44a) – Link) personal data means ‘any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person’. (Source: EU GDPR Regulation (EU) 2016/679 Article 4(1) – Link)
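The EU AI Act’s definitions of training, validation, testing and input data above correspond to the familiar practice of partitioning a dataset before and during development. The sketch below is purely illustrative (the records, split proportions and field names are invented, not drawn from the Act or any guidance cited here).

```python
# Illustrative split of a dataset into training, validation and testing data,
# in the sense of EU AI Act Article 3(29)-(31). Data and proportions are invented.

import random

records = [{"id": i, "features": (random.random(), random.random())} for i in range(100)]
random.shuffle(records)

training_data = records[:70]      # used to fit the model's learnable parameters
validation_data = records[70:85]  # used to tune non-learnable parameters and spot over-/underfitting
testing_data = records[85:]       # held back for an independent evaluation before release

# "Input data" in Article 3(32) is what the deployed system later receives in use:
new_input = {"id": 999, "features": (0.2, 0.9)}

print(len(training_data), len(validation_data), len(testing_data))
```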

Defectiveness EU ‘A product shall be considered defective when it does not provide the safety which the public at large is entitled to expect, taking all circumstances into account, including the following: (a) the presentation of the product, including the instructions for installation, use and maintenance; (b) the reasonably foreseeable use and misuse of the product; (c) the effect on the product of any ability to continue to learn after deployment; (d) the effect on the product of other products that can reasonably be expected to be used together with the product; (e) the moment in time when the product was placed on the market or put into service or, where the manufacturer retains control over the product after that moment, the moment in time when the product left the control of the manufacturer; (f) product safety requirements, including safety-relevant cybersecurity requirements; (g) any intervention by a regulatory authority or by an economic operator referred to in Article 7 relating to product safety; (h) the specific expectations of the end-users for whom the product is intended. 2. A product shall not be considered defective for the sole reason that a better product, including updates or upgrades to a product, is already or subsequently placed on the market or put into service’. (Source: EU Proposed Product Liability Directive, Article 6 – Link) Deep Fake EU Deep fake ‘means AI generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.’ (Source: EU AI Act, Article 3(bl) – Link) Deployer E&W Also see (AI) Supplier Guidance – E&W and Scotland AI Deployer means ‘any individual or organisation that supplies or uses an AI application to provide a product or service to an end user’. (Source: UK AI White Paper Consultation Outcome February 2024 – Link) EU Deployer means ‘any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity’. (Source: EU AI Act, Article 3(4) – Link)

Deterministic algorithm Singapore Deterministic algorithms ‘do and only do what they have been programmed to do. They have no mind of their own. They operate when called upon to do so in the pre-ordained manner. They do not know why they are doing something or what the external events are that cause them to operate in the way that they do. They are, in effect, mere machines carrying out actions which in another age would have been carried out by a suitably trained human. They are no different to a robot assembling a car rather than a worker on the factory floor or a kitchen blender relieving a cook of the manual act of mixing ingredients. All of these are machines operating as they have been programmed to operate once activated.’ (Source: B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 03, Paragraphs 208 and 209 – Link) ‘…the Trading Contracts had been entered into pursuant to deterministic algorithmic programs that had acted exactly as they had been programmed to act…’ (Source: Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02, paragraph 114 – Link) (AI) Developer Guidance – E&W and Scotland AI Developer means ‘organisations or individuals who design, build, train, adapt, or combine AI models and applications’. Developers of highly capable general-purpose systems means ‘a subsection of AI developers, these organisations invest large amounts of resource into designing, building, and pre-training the most capable AI foundation models. These models can underpin a wide range of AI applications and may be deployed directly or adapted by downstream AI developers’. (Source: UK AI White Paper Consultation Outcome February 2024 – Link) Downstream Provider Downstream provider means ‘a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.’ (Source: EU AI Act, Article 3(44g) – Link) Distributor EU Distributor means ‘any natural or legal person in the supply chain, other than the manufacturer or the importer, who makes a product available on the market.’ (Source: EU Proposed Product Liability Directive, Article 4(15) – Link) ‘Distributor means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market’. (Source: EU AI Act, Article 3(7) – Link)

(AI) Ecosystem Guidance – E&W and Scotland ‘The complex network of actors and processes that enable the use and supply of AI throughout the AI life cycle (including supply chains, markets, and governance mechanisms).’ (Source: UK AI White Paper March 2023 – Link) Explainability Guidance – E&W and Scotland Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system. (Source: UK AI White Paper March 2023 – Link) Note also the OECD explanation that ‘Explainability means enabling people affected by the outcome of an AI system to understand how it was arrived at. This entails providing easy-to-understand information to people affected by an AI system’s outcome that can enable those adversely affected to challenge the outcome, notably – to the extent practicable – the factors and logic that led to an outcome.’ (Source: OECD Guide to Transparency and Explainability – Link) Expert System Guidance – E&W and Scotland ‘There are several ways to build AI systems. Each involves the creation of an algorithm that uses data to model some aspect of the world, and then applies this model to new data in order to make predictions about it. Historically, the creation of these models required incorporating considerable amounts of hand-coded expert input. These “expert systems” applied large numbers of rules, which were taken from domain specialists, to draw inferences from that knowledge base’. (Source: ICO, UK GDPR Guidance and Resources, Explaining decisions made with Artificial Intelligence, Definitions – Link) ‘A computer system that mimics the decision-making ability of a human expert by following preprogrammed rules, such as ‘if this occurs, then do that’. These systems fuelled much of the earlier excitement surrounding AI in the 1980s, but have since become less fashionable, particularly with the rise of neural networks.’ (Source: Scotland’s AI Strategy (adapted from UK House of Lords’ Select Committee on Artificial Intelligence Report “AI in the UK: Ready, Willing and Able (March 2021 – Link)

Foundation Model Guidance – E&W and Scotland Foundation model means ‘machine learning models trained on very large amounts of data that can be adapted to a wide range of tasks’. (Source: UK AI White Paper Consultation Outcome February 2024 – Link) ‘Foundation models, which include large language models and generative artificial intelligence (AI), that have emerged over the past five years, have the potential to transform much of what people and businesses do.’ (Source: CMA Review of Foundation Models (May 2023) – Link) EU Foundation model means ‘an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.’ (Note: Foundation Model is not defined in the version of the EU AI Act agreed by the EU Parliament on 2 February 2024. The above definition was previously included at Article 3(1c) – Link) The text tabled to the EU Parliament in June 2023 included proposed recital (60e) which said that ‘Foundation models are a recent development, in which AI models are developed from algorithms designed to optimize for generality and versatility of output. Those models are often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained. Those systems can be unimodal or multimodal, trained through various methods such as supervised learning or reinforced learning. AI systems with specific intended purpose or general purpose AI systems can be an implementation of a foundation model, which means that each foundation model can be reused in countless downstream AI or general purpose AI systems. These models hold growing importance to many downstream applications and systems.’ (Note: As above, this text is not included in the version of the EU AI Act agreed by the EU Parliament on 2 February 2024) Generative AI EU ‘AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video.’ (Source: previously included in EU AI Act, Article 28b(4) – Link) (Note: this definition is not in the version agreed by the EU Parliament on 2 February 2024, however recital 60c states: ‘Large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible generation of content (such as in the form of text, audio, images or video) that can readily accommodate a wide range of distinctive tasks.’) (Source: EU AI Act, Recital 60c – Link) General Purpose AI Model EU ‘General purpose AI model’ means an ‘AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities.’ (Source: EU AI Act, Article 3(44b) – Link)

General-Purpose AI System Guidance E&W and Scotland Highly capable general-purpose AI means ‘foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. Generally, such models will span from novice through to expert capabilities with some even showing superhuman performance across a range of tasks.’ (Source: UK AI White Paper Consultation Outcome February 2024 – Link Note that the document also says that Highly capable narrow AI means ‘Foundation models that can perform a narrow set of tasks, normally within a specific field such as biology, with capabilities that match or exceed those present in today’s most advanced models. Generally, such models will demonstrate superhuman abilities on these narrow tasks or domains.’) EU ‘General-purpose AI system’ means an ‘AI system which is based on a general-purpose AI model , that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems. (Source: EU AI Act, Article 3(44e) – Link) Importer EU Importer ‘means any natural or legal person established within the Union who places a product from a third country on the Union market’. (Source: EU Proposed Product Liability Directive, Article 4(13) – Link) ‘Importer means any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union’. (Source: EU AI Act, Article 3(6) – Link) Inaccurate Personal Data EU See Article 16 EU GDPR and a data subject’s right to rectification of inaccurate personal data. E&W and Scotland ‘Inaccurate, in relation to personal data, means incorrect or misleading as to any matter of fact’. (Source: Data Protection Act 2018, Part 7, 205(1) – Link) Guidance – E&W and Scotland ‘If information seemingly relating to a particular individual is inaccurate (i.e. it is factually incorrect or it is information about a different individual), the information is still personal data, as it relates to that individual’. (Source: ICO, UK GDPR Guidance and Resources, What is personal data? What is the meaning of relates to? – Link) Interoperability Guidance – E&W and Scotland ‘Interoperability allows different systems to share information and resources. An ‘interoperable format’ is a type of format that allows data to be exchanged between different systems and be understandable to both. At the same time, you are not expected to maintain systems that are technically compatible with those of other organisations. Data portability is intended to produce interoperable systems, not compatible ones.’ (Source: ICO, UK GDPR Guidance and Resources, Individual rights, Right to data portability – Link) Note that the UK AI White Paper includes a section on interoperability, including: ‘We will promote interoperability and coherence between different approaches, challenging barriers which may stand in the way of businesses operating internationally. We will ensure that the UK’s regulatory framework encourages the development of a responsive and compatible system of global AI governance. We will build our international influence, allowing the UK to engage meaningfully with like-minded partners on issues such as cross-border AI risks and opportunities.’ (Source: UK AI White Paper March 2023 – Link)

(Data) Joint Controllers E&W and Scotland ‘Where two or more competent authorities jointly determine the purposes and means of processing personal data, they are joint controllers for the purposes of this Part.’ (Source: Data Protection Act 2018 section 58 but also see section 104 regarding intelligence services – Link) EU ‘Where two or more controllers jointly determine the purposes and means of processing, they shall be joint controllers.’ (Source: EU GDPR Regulation (EU) 2016/679 Article 26(1) – Link) (Also see ICO guidance – Link) (AI) Lifecycle Guidance – E&W and Scotland ‘All events and processes that relate to an AI system’s lifespan, from inception to decommissioning, including its design, research, training, development, deployment, integration, operation, maintenance, sale, use and governance.’ (Source: UK AI White Paper March 2023 – Link) Machine Learning (ML) Guidance – E&W and Scotland ‘One prominent area of AI is “machine learning” (ML), which is the use of computational techniques to create (often complex) statistical models using (typically) large quantities of data. Those models can be used to make classifications or predictions about new data points’. (Source: ICO, UK GDPR Guidance and Resources, Guidance on AI and data protection – Link) ‘ML is a methodology whereby computer programmes build a model to fit a set of data that can be utilised to make predictions, recommendations, or decisions without being explicitly programmed to do so, instead learning from sample data or experience’. (Source: Joint Bank of England and FCA report, ‘Machine Learning in UK financial Services – Link) ‘One particular form of AI, which gives computers the ability to learn and improve at a task from experience, without being explicitly programmed for that task. When provided with sufficient data, a machine learning algorithm can learn to make predictions or solve problems, such as identifying objects in pictures or winning at particular games, for example.’ (Source: Scotland’s AI Strategy (adapted from UK House of Lords’ Select Committee on Artificial Intelligence Report “AI in the UK: Ready, Willing and Able (March 2021 – Link) ‘More recently data-driven, machine learning (ML) models have emerged as the dominant AI technology. These kinds of models may be constructed using a few different learning approaches that build from the past information contained in collected data to identify patterns and hone classificatory and predictive performance.’ (Source: ICO, UK GDPR Guidance and Resources, Artificial Intelligence, Definitions – Link) EU “The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives…” (Source: EU AI Act, Recital 6 – Link)

Making available on the market See Placing on the Market EU Making available on the market means ‘any supply of a product for distribution, consumption or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.’ (Source: EU Proposed Product Liability Directive, Article 4(9) – Link) Making available on the market means ‘any supply of an AI system or a general purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge’. (Source: EU AI Act, Article 3(10) – Link) Manufacturer EU ‘Manufacturer’ means any natural or legal person who develops, manufactures or produces a product or has a product designed or manufactured, or who markets that product under its name or trademark or who develops, manufactures or produces a product for its own use’. (Source: EU Proposed Product Liability Directive, Article 4(5a) – Link) EU AI Act defines ‘product manufacturer’ as having the meaning given in any of the legislation listed in Annex II (Note: including directives concerning: medical devices; in vitro diagnostic medical devices; machinery; safety of toys; protective systems; personal protective equipment). Manufacturer’s control Also see Manufacturer EU ‘Manufacturer’s control’ means that the manufacturer of a product authorises a) the integration, inter-connection or supply by a third party of a component including software updates or upgrades, or b) the modification of the product’. (Source: EU Proposed Product Liability Directive Article 4(5) – Link) Matching Also see Automated Facial Recognition (AFR) E&W ‘When facial features from two images are compared, the AFR software generates a “similarity score”. This is a numerical value indicating the likelihood that the faces match, with a higher number indicating a greater likelihood of a positive match between the two faces. A threshold value is fixed to determine when the software will indicate that a match has occurred. Fixing this value too low or too high can, respectively, create risks of a high “false alarm rate” (i.e. the percentage of incorrect matches identified by the software) or a high “false reject rate” (i.e. the percentage of true matches that are not in fact matched by the software). The threshold value is generally suggested by the manufacturer, and depends on the intended use of the AFR system. Most AFR systems, however, allow the end user to change the threshold value to whatever they choose.’ (Source: R (Bridges) v CC South Wales, Paragraph 9(6) – Link) Guidance – E&W and Scotland: ‘Data matching: combining, comparing or matching personal data obtained from multiple sources.’ (Source: ICO, UK GDPR Guidance and Resources, Data Protection Impact Assessments, When do we need to do a DPIA? – Link)
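The Bridges description of matching above turns on a similarity score, a threshold, and the trade-off between the ‘false alarm rate’ and the ‘false reject rate’. The sketch below is illustrative only (the similarity scores and ground-truth labels are invented) and simply shows how the two rates move in opposite directions as the threshold is varied.

```python
# Illustrative only: how a match threshold trades off the false alarm rate
# against the false reject rate. Scores and ground-truth labels are invented.

comparisons = [
    # (similarity score produced by the software, whether the pair truly is the same person)
    (0.95, True), (0.88, True), (0.62, True), (0.41, True),
    (0.70, False), (0.55, False), (0.30, False), (0.10, False),
]

def rates(threshold: float):
    false_alarms = sum(1 for score, same in comparisons if not same and score >= threshold)
    false_rejects = sum(1 for score, same in comparisons if same and score < threshold)
    non_matches = sum(1 for _, same in comparisons if not same)
    true_matches = sum(1 for _, same in comparisons if same)
    return false_alarms / non_matches, false_rejects / true_matches

for threshold in (0.3, 0.6, 0.9):
    far, frr = rates(threshold)
    print(f"threshold={threshold}: false alarm rate={far:.2f}, false reject rate={frr:.2f}")
```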

Metadata Guidance – E&W and Scotland: Documents created electronically contain information about the life of the document. Details are recorded and stored with a file, such as: the author, dates, editing history, size, file paths, security settings, and any email routing history. This information is known as “metadata” and is accessed via the file properties. (Source: ICO, FOI, Freedom of information and environmental information regulations, Determining Whether We Hold Information – Link)” E&W In the context of disclosure in the Civil Procedure Rules, metadata means ‘data about data. In the case of an electronic document, metadata is typically embedded information about the document which is not readily accessible once the native electronic document has been converted into an electronic image or paper document. It may include for example the date and time of creation or modification of a word-processing file, or the author and the date and time of sending an e-mail. Metadata may be created automatically by a computer system or manually by a user’. (Source: Practice Direction 57AD, Disclosure in the Business and Property Courts, Appendix 1 paragraph 1.11 – Link) (See also: Practice Direction 31B, Disclosure of Electronic Documents Paragraph 5(7) – Link) EU (Note: Metadata identification is referred, but not defined in the EU AI Act in the context of marking synthetic content as being generated or manipulated by an AI system. See Recital 70a EU AI Act – Link) Model Guidance – E&W and Scotland ‘A model is defined as a quantitative method that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into output’. (Source: Bank of England, PRA, Appendices to CP6/22 – Model risk management principles for banks – Link) ‘Whereas traditional financial models are usually rules-based with explicit fixed parameterisation, AI models are able to learn the rules and alter model parameterisation iteratively. The use of AI models also represents a step change for three other reasons: firstly, the speed and frequency at which the models update (with some AI models able to learn continuously); secondly, the scale in terms of the volume of data needed to train the models and the number of features that are used as inputs; and thirdly, the complexity of certain techniques, such as convolutional neural networks, which can make them more opaque (the so-called ‘black-box problem’). (Source: Bank of England, FCA and PRA discussion paper, DP5/22 – Artificial Intelligence and Machine Learning – Link) ‘a quantitative method, system, or approach that applies statistical, economic, financial or mathematical theories, techniques, and assumptions to process input data into output.’ (Source: Joint Bank of England and FCA report, Machine Learning in UK financial Services – Link) Also see Foundation Model and General Purpose AI Model.

Neural Network Guidance – Scotland “Also known as an artificial neural network, this is a type of machine learning loosely inspired by the structure of the human brain. A neural network is composed of simple processing nodes, or “artificial neurons”, which are connected to one another in layers. Each node will receive data from several nodes “above” it, and give data to several nodes “below” it. Nodes attach a “weight” to the data they receive, and attribute a value to that data. If the data does not pass a certain threshold, it is not passed on to another node. The weights and thresholds of the nodes are adjusted when the algorithm is trained until similar data input results in consistent outputs.” (Source: Scotland’s AI Strategy (adapted from UK House of Lords’ Select Committee on Artificial Intelligence Report “AI in the UK: Ready, Willing and Able” (March 2021) – Link) Open Source / OpenSource EU This is software, ‘including its source code and modified versions, that is openly shared and freely accessible, usable, modifiable and redistributable’. (Source: EU Proposed Product Liability Directive, Recital 13 – Link) Note: ‘Free and open-source AI components covers the software and data, including models and general purpose AI models, tools, services or processes of an AI system. Free and open-source AI components can be provided through different channels, including their development on open repositories. For the purpose of this Regulation, AI components that are provided against a price or otherwise monetised, including through the provision of technical support or other services, including through a software platform, related to the AI component, or the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software, with the exception of transactions between micro enterprises, should not benefit from the exceptions provided to free and open source AI components. The fact of making AI components available through open repositories should not, in itself, constitute a monetisation.’ (Source: EU AI Act, Recital 60(i+1) – Link) Performance EU ‘performance of an AI system’ means ‘the ability of an AI system to achieve its intended purpose’. (Source: EU AI Act, Article 3(18) – Link)
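The neural network description in Scotland’s AI Strategy above (nodes arranged in layers, each weighting the data it receives and passing it on only if it clears a threshold) can be illustrated with a minimal forward pass. This is purely illustrative: the weights below are invented and fixed rather than learned, so it shows only the structure, not training.

```python
# Minimal illustration of the neural-network structure described above:
# nodes in layers, weighted inputs, and a threshold before passing data on.
# Weights are invented, not trained.

def node(inputs, weights, bias, threshold=0.0):
    """One artificial neuron: weighted sum of inputs, passed on only if above threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation if activation > threshold else 0.0

def tiny_network(x):
    # Hidden layer: two nodes, each receiving both inputs.
    h1 = node(x, weights=[0.5, -0.2], bias=0.1)
    h2 = node(x, weights=[-0.3, 0.8], bias=0.0)
    # Output layer: one node receiving the hidden layer's outputs.
    return node([h1, h2], weights=[1.0, 1.0], bias=-0.2)

print(tiny_network([0.6, 0.9]))
```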

Personal data E&W and Scotland ‘any information relating to an identified or identifiable living individual […]’. (Source: Data Protection Act 2018 section 3(2) – Link) “‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person” Source: Article 4(1) UK GDPR – Link Guidance – E&W and Scotland ‘Any information relating to a person (a ‘data subject’) who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that person.’ (Source: ICO, For organisations, Data protection fee, Legal definitions fees – Link) EU “‘personal data’ means personal data as defined in Article 4, point (1) of Regulation (EU) 2016/679″ … [i.e.] ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person’. (‘Source: EU AI Act, Article 3(44a) – Link; and see EU GDPR Regulation (EU) 2016/679 Article 4(1) – Link) Placing on the Market Also see Making Available on the market EU Placing on the market means ‘the first making available of a product on the Union market.’ (Source: EU Proposed Product Liability Directive, Article 4(8) – Link) Placing on the market means ‘the first making available of an AI system or a general purpose AI model on the Union market.’ (Source: EU AI Act, Article 3(9) – Link)

Privacy Enhancing Technologies (PETs) Guidance – E&W and Scotland ‘PETs are technologies that embody fundamental data protection principles by: minimising personal information use (this covers the legal definition of personal data in the UK GDPR); maximising information security; or empowering people. (Source: ICO Privacy-enhancing technologies guidance, June 2023 – Link) Note: The above mentioned ICO guidance also refers to The European Union Agency for Cybersecurity (ENISA) definition of PETs, being: ‘Software and hardware solutions, ie systems encompassing technical processes, methods or knowledge to achieve specific privacy or data protection functionality or to protect against risks of privacy of an individual or a group of natural persons.’ EU Note: The EU AI Act also refers to but does not define ‘privacy-preserving techniques’ and ‘privacy-preserving measures’ such as pseudonymisation. Previous drafts of the EU AI Act also referred to anonymisation and encryption. (Data) Processing E&W and Scotland ‘Processing, in relation to information, means an operation or set of operations which is performed on information, or on sets of information, such as (a) collection, recording, organisation, structuring or storage, (b) adaptation or alteration, (c) retrieval, consultation or use, (d) disclosure by transmission, dissemination or otherwise making available, (e) alignment or combination, or (f) restriction, erasure or destruction’. (Source: Data Protection Act 2018, Part 1, 3(4) – Link) EU ‘processing’ means ‘any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction’. (Source: EU GDPR Regulation (EU) 2016/679 Article 4(2) – Link) (Data) Processor E&W and Scotland ‘any person who processes personal data on behalf of the controller (other than a person who is an employee of the controller)’ (Source: The definition depends upon which part of the Data Protection Act 2018 is being considered. The above example is Data Protection Act section 32, but also see sections 5, 6, 82 and 83- Link) Guidance – E&W and Scotland ‘A person, public authority, agency or other body which processes personal data on behalf of the controller’. (Source: ICO, For organisations, Data protection fee, Legal definitions fees – Link) EU [‘processor’ means] ‘a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller’. (Source: EU GDPR Regulation (EU) 2016/679 Article 4(8) – Link)

Product EU ‘all movables, even if integrated into another movable or into an immovable. ‘Product’ includes electricity, digital manufacturing files and software’. (Source: EU Proposed Product Liability Directive, Article 4(1) – Link) Profiling E&W and Scotland ‘Profiling means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to an individual, in particular to analyse or predict aspects concerning that individual’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’. (Source: Data Protection Act 2018, section 33(4) – Link) Guidance – E&W and Scotland ‘Profiling means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’. Article 4(4) UK GDPR. (Source: ICO, UK GDPR Guidance and Resources, Automated decision-making and profiling – Link and see Article 4(4) of GDPR – Link). EU ‘any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’. (Source: EU GDPR 2016/679 Article 4(4) – Link) (Also see: EU AI Act, Article 3(44)(c) – Link) Putting into service EU ‘the first use of a product in the Union in the course of a commercial activity, whether in return for payment or free of charge, in circumstances in which the product has not been placed on the market prior to its first use’. (Source: EU Proposed Product Liability Directive, Article 4(10) – Link). means ‘the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose’ (Source: EU AI Act, Article 3(11) – Link)

Real-time E&W ‘Facial image acquisition. A CCTV camera takes digital pictures of facial images in real time. This case is concerned with the situation where a moving image is captured when a person passes into the camera’s field of view, using a live feed.’ (Source: R (Bridges) v CC South Wales, Paragraph 9(2) – Link) EU real-time remote biometric identification system means ‘a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention.’ ‘In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. […] ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality.’ (Source: EU AI Act, Article 3(37) and Recital 8 – Link) Risk EU ‘the combination of the probability of an occurrence of harm and the severity of that harm.’ ‘significant risk’ means a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons’. (Source: EU AI Act, Article 3(1a) – Link) (Note: The definition of ‘significant risk’ which was included at Article 3(1b) is not used in the EU AI Act agreed by the EU Parliament on 2 February 2024). Sensitive Processing E&W and Scotland (a) the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership; (b) the processing of genetic data for the purpose of uniquely identifying an individual; (c) the processing of biometric data for the purpose of uniquely identifying an individual; (d) the processing of data concerning health; (e) the processing of data concerning an individual’s sex life or sexual orientation; (f) the processing of personal data as to — (i) the commission or alleged commission of an offence by an individual, or (ii) proceedings for an offence committed or alleged to have been committed by an individual, the disposal of such proceedings or the sentence of a court in such proceedings. (Source: The definition depends upon which part of the Data Protection Act 2018 is being considered. The above example is Data Protection Act section 86(7), which applies in the context of intelligence services, also see section 35(8) which applies to law enforcement agencies – Link.) Guidance – E&W and Scotland Depending on the context, the ICO refers to both section 86(7) and 35(8) of the Data Protection Act 2018. (Source: ICO, For organisations, Intelligence services processing, Scope and key definitions – Link and ICO, For organisations, Law Enforcement, Guide to LE Processing, Scope and key definitions – Link) EU Note: separate from commercially/competitively sensitive information, which the EU Data Governance Act says ‘typically includes information on customer data, future prices, production costs, quantities, turnovers, sales or capacities.’ Recital (37) – Link
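The EU AI Act definition of ‘risk’ above combines the probability of harm occurring with the severity of that harm. The sketch below is a purely illustrative scoring exercise: the 1–5 scales, the product rule and the example scenarios are invented for illustration and are not drawn from the Act or any guidance cited here.

```python
# Illustrative only: risk as a combination of probability of harm and severity
# of harm, per the definition above. The 1-5 scales and the product rule are
# invented; the EU AI Act prescribes no such formula.

def risk_score(probability: int, severity: int) -> int:
    """probability and severity each rated 1 (lowest) to 5 (highest)."""
    return probability * severity

scenarios = {
    "chatbot gives an off-topic answer": (4, 1),
    "credit model systematically declines a protected group": (2, 5),
}
for name, (p, s) in scenarios.items():
    print(f"{name}: risk score {risk_score(p, s)}")
```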

Special Category Data

E&W and Scotland: The following categories of data, which require specific conditions to be met (including explicit consent) for processing: racial or ethnic origin; political opinions; religious or philosophical beliefs; trade union membership; genetic data; biometric data for the purpose of uniquely identifying a natural person; data concerning health; sex life or sexual orientation. See also Articles 9(1) and 9(2) of the UK GDPR – Link. (Source: Data Protection Act 2018, section 10(1) – Link)

EU: ‘Special categories of personal data’ means ‘the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725’. (Source: EU AI Act, Article 3(33d) – Link) Subject to exceptions, ‘Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.’ (Source: EU GDPR Regulation (EU) 2016/679, Article 9 – Link)

Supervised Learning

Guidance – E&W and Scotland: ‘Supervised learning models are trained on a dataset which contains labelled data. ‘Learning’ occurs in these models when numerous examples are used to train an algorithm to map input variables (often called features) onto desired outputs (also called target variables or labels). On the basis of these examples, the ML model is able to identify patterns that link inputs to outputs. ML models are then able to reproduce these patterns by employing the rules honed during training to transform new inputs received into classifications or predictions.’ (Source: ICO, UK GDPR Guidance and Resources, Artificial intelligence, Explaining decisions made with Artificial Intelligence, Part 1 The basics of explaining AI, Definitions – Link) (An illustrative code sketch of this idea follows the Synthetic Data entry below.)

(AI) Supplier

Guidance – E&W and Scotland: ‘Any organisation or individual who plays a role in the research, development, training, implementation, deployment, maintenance, provision or sale of AI systems.’ (Source: UK AI White Paper March 2023 – Link)

EU: Also see Deployer.

Synthetic Data

Guidance – E&W and Scotland: ‘Synthetic data is ‘artificial’ data generated by data synthesis algorithms, which replicate patterns and the statistical properties of real data (which may be personal data). It is generated from real data using a model trained to reproduce the characteristics and structure of that data. This means that when you analyse the synthetic data, the analysis should produce very similar results to analysis carried out on the original real data.’ (Source: ICO, Draft anonymisation, pseudonymisation and privacy enhancing technologies Guidance – Link) (A simplified sketch of this idea is also shown below.)

EU: Note: The EU AI Act does not define synthetic data but does refer to it in various contexts, such as ‘synthetic’ content generation, and processing certain types of data for AI regulatory sandboxes.
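To make the ICO’s description of supervised learning concrete, the minimal Python sketch below trains a model on a small, entirely hypothetical labelled dataset and then applies the learned mapping to new inputs. The data, labels and the choice of scikit-learn’s LogisticRegression are illustrative assumptions, not drawn from any of the sources above.

```python
# Minimal illustration of supervised learning: labelled examples are used to
# train a model to map input features onto target labels, and the trained
# model then classifies new, unseen inputs. All data here is made up.
from sklearn.linear_model import LogisticRegression

# Labelled training data: each row of X_train is a set of input features,
# and y_train holds the corresponding target label.
X_train = [[0.1, 1.2], [0.3, 0.9], [2.1, 0.2], [1.9, 0.4]]
y_train = ["low", "low", "high", "high"]

model = LogisticRegression()
model.fit(X_train, y_train)  # 'learning' the pattern that links inputs to labels

# The trained model reproduces that pattern on inputs it has not seen before.
print(model.predict([[0.2, 1.0], [2.0, 0.3]]))  # expected output: ['low' 'high']
```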

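The ICO description of synthetic data can likewise be illustrated with a deliberately simplified sketch: a very basic statistical ‘model’ (a mean vector and covariance matrix) is fitted to hypothetical ‘real’ data and then used to generate artificial records with similar statistical properties. Real data-synthesis tools are far more sophisticated; every value below is an assumption for illustration only.

```python
# Simplified illustration of synthetic data generation: fit a simple model to
# 'real' data, then sample artificial records from that model so that analysis
# of the synthetic data approximates analysis of the original data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'real' data: two numeric attributes per record.
real = rng.normal(loc=[40.0, 30000.0], scale=[10.0, 5000.0], size=(1000, 2))

# 'Train' the model: capture the mean and covariance of the real data.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Generate synthetic records from the model rather than from real individuals.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

# The two datasets should produce very similar summary statistics.
print(real.mean(axis=0), synthetic.mean(axis=0))
```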
Transparency

Guidance – E&W and Scotland: ‘Transparency refers to the communication of appropriate information about an AI system to relevant people (for example, information on how, when, and for which purposes an AI system is being used).’ (Source: UK AI White Paper March 2023 – Link)

‘Transparency is fundamentally linked to fairness. Transparent processing is about being clear, open and honest with people from the start about who you are, and how and why you use their personal data’. (Source: ICO Guidance, UK GDPR Guidance and Resources, Principle (a): Lawfulness, fairness and transparency – Link)

EU: ‘Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.’ (Source: EU AI Act, Recital 14a – Link)

See EU AI Act, Article 13 for obligations on high-risk AI systems concerning transparency and the provision of information to users (including enabling users to collect, store and interpret the system’s logs), and see Title IV for transparency obligations on deployers of certain AI systems and GPAI models. (Source: EU AI Act, Article 13 and Title IV – Link)

Note also the OECD explanation that: ‘The term transparency carries multiple meanings. In the context of this Principle, the focus is first on disclosing when AI is being used (in a prediction, recommendation or decision, or that the user is interacting directly with an AI-powered agent, such as a chatbot). Disclosure should be made with proportion to the importance of the interaction. Transparency further means enabling people to understand how an AI system is developed, trained, operates, and deployed in the relevant application domain […]. Transparency also refers to the ability to provide meaningful information and clarity about what information is provided and why. Thus transparency does not in general extend to the disclosure of the source or other proprietary code or sharing of proprietary datasets, all of which may be too technically complex to be feasible or useful to understanding an outcome. Source code and datasets may also be subject to intellectual property, including trade secrets.’ (Source: OECD Guide to Transparency and Explainability – Link)

Unsupervised Learning

Guidance – E&W and Scotland: ‘Unsupervised learning models are trained on a dataset without explicit instructions or labelled data. These models identify patterns and structures by measuring the densities or similarities of data points in the dataset’. (Source: ICO, UK GDPR Guidance and Resources, Artificial intelligence, Explaining decisions made with Artificial Intelligence, Part 1 The basics of explaining AI, Definitions – Link)

‘…in unsupervised learning the algorithms are not trained and are instead left to find regularities in input data without any instructions as to what to look for.’ (Source: ICO, ‘Big Data, Artificial Intelligence, machine learning and data protection’ – Link) (An illustrative sketch follows below.)
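By contrast with the supervised example above, the short sketch below illustrates the ICO’s description of unsupervised learning: no labels are provided, and a clustering algorithm groups data points purely by their similarity. The data and the choice of scikit-learn’s KMeans with two clusters are illustrative assumptions, not drawn from the sources cited.

```python
# Minimal illustration of unsupervised learning: the algorithm receives only
# unlabelled inputs and discovers groupings from the structure of the data.
from sklearn.cluster import KMeans

# Unlabelled data: input features only, no target values or instructions.
X = [[0.1, 1.2], [0.2, 1.1], [2.0, 0.3], [2.1, 0.2]]

clusters = KMeans(n_clusters=2, random_state=0).fit_predict(X)
print(clusters)  # e.g. [0 0 1 1] - two groups found from similarity alone
```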

(AI System) User

Guidance – E&W and Scotland: ‘AI User’ means ‘any individual or organisation that uses an AI product’. (Source: UK AI White Paper March 2023 – Link) ‘AI end user’ means ‘any intended or actual individual or organisation that uses or consumes an AI-based product or service as it is deployed’. (Source: UK AI White Paper Consultation Outcome February 2024 – Link)

EU: Note that the EU AI Act amended ‘user’ to ‘deployer’. See Deployer.
