# Impetora AI consulting glossary

> Definitions of terms in enterprise AI, regulatory compliance, and AI consulting. Vendor-neutral reference for buyers, builders, and risk teams.

Source: https://impetora.com/glossary
Last updated: 2026-04-27
Total terms: 55

## Terms

- [ABA Formal Opinion 512](https://impetora.com/glossary/aba-formal-opinion-512) - ABA Formal Opinion 512 is the American Bar Association's 2024 ethical guidance on lawyers' use of generative AI, addressing competence, confidentiality, supervision, fees, and candor.
- [Agentic AI](https://impetora.com/glossary/agentic-ai) - Agentic AI refers to systems that plan multi-step actions, call external tools, and operate with some autonomy toward a goal, rather than producing a single response to a single prompt.
- [AI Audit Trail](https://impetora.com/glossary/ai-audit-trail) - An AI audit trail is the persistent, tamper-evident record of every input, output, tool call, model version, and decision an AI system has made, sufficient to reconstruct any past interaction.
- [AI Risk Management](https://impetora.com/glossary/ai-risk-management) - AI risk management is the discipline of identifying, assessing, mitigating, and monitoring the harms an AI system can cause across its lifecycle.
- [AI ROI](https://impetora.com/glossary/ai-roi) - AI ROI is the measurable financial return on an AI investment, calculated as the value generated (cost savings, revenue uplift, risk reduction) net of total cost of ownership.
- [AI Solutions Partner](https://impetora.com/glossary/ai-solutions-partner) - An AI solutions partner is an external firm that designs, builds, and operates AI systems for an enterprise on a long-term partnership basis rather than a project basis.
- [AIOps](https://impetora.com/glossary/aiops) - AIOps is the application of AI and machine learning to IT operations data, used to detect anomalies, correlate alerts, and automate incident response.
- [Artificial Intelligence](https://impetora.com/glossary/artificial-intelligence) - Artificial Intelligence (AI) is the field of computer systems that perform tasks normally associated with human cognition: perception, reasoning, language understanding, and decision-making.
- [Build vs Buy AI](https://impetora.com/glossary/build-vs-buy-ai) - Build vs buy is the strategic decision between developing an AI capability internally or with a partner, and licensing a finished product from a vendor.
- [CCPA + AI](https://impetora.com/glossary/ccpa-ai) - The California Consumer Privacy Act (CCPA), as amended by the CPRA, applies to AI systems that process personal information of California residents and grants rights including access, deletion, and opt-out of automated decision-making.
- [Conformity Assessment](https://impetora.com/glossary/conformity-assessment) - Conformity assessment is the formal process of demonstrating that a high-risk AI system meets the requirements of the EU AI Act before being placed on the market or put into service.
- [Consulting AI](https://impetora.com/glossary/consulting-ai) - Consulting AI is the engagement model in which an external team provides AI strategy, design, build, and operate services to an enterprise client.
- [Custom AI](https://impetora.com/glossary/custom-ai) - Custom AI is an AI system designed and built for a specific enterprise's data, workflows, and constraints, rather than a generic product configured by the buyer.
- [Data Card](https://impetora.com/glossary/data-card) - A data card is a structured document describing a dataset used to train or evaluate an AI model: its source, composition, collection process, intended use, and limitations.
- [Data Residency](https://impetora.com/glossary/data-residency) - Data residency is the requirement that personal or regulated data stays within a specified geographic region throughout processing, storage, and backup.
- [Deep Learning](https://impetora.com/glossary/deep-learning) - Deep Learning is a branch of machine learning that uses multi-layer neural networks to learn hierarchical representations from raw data.
- [Discovery Phase](https://impetora.com/glossary/discovery-phase) - The discovery phase is the first stage of an AI engagement, in which scope, data, workflows, success criteria, and constraints are mapped before any system is built.
- [Discriminative AI](https://impetora.com/glossary/discriminative-ai) - Discriminative AI refers to models that classify or score existing inputs rather than generating new content, learning the boundary between classes from labelled data.
- [DORA](https://impetora.com/glossary/dora) - The Digital Operational Resilience Act (DORA) is an EU regulation that sets uniform requirements for the digital operational resilience of financial entities, including their use of AI and ICT third-party service providers.
- [EIOPA AI Statement](https://impetora.com/glossary/eiopa-ai-statement) - EIOPA's Statement on the use of artificial intelligence in the insurance sector is the European Insurance and Occupational Pensions Authority's supervisory expectations for AI deployment by insurers and intermediaries.
- [Embedding](https://impetora.com/glossary/embedding) - An embedding is a dense numerical vector that represents a piece of content (text, image, audio) such that semantically similar items end up close together in the vector space.
- [Enterprise AI](https://impetora.com/glossary/enterprise-ai) - Enterprise AI is AI deployed inside a large organisation, integrated with systems of record, governed by enterprise risk and compliance, and accountable to multiple stakeholders.
- [EU AI Act](https://impetora.com/glossary/eu-ai-act) - The EU AI Act (Regulation (EU) 2024/1689) is the European Union's horizontal regulation for AI, classifying systems by risk and imposing obligations on providers, deployers, importers, and distributors.
- [Evaluation Harness](https://impetora.com/glossary/evaluation-harness) - An evaluation harness is the test framework used to measure an AI system against a fixed set of inputs, expected outputs, and metrics, run on every change.
- [Explainable AI (XAI)](https://impetora.com/glossary/explainable-ai) - Explainable AI (XAI) is the set of techniques that make an AI system's outputs and behaviour understandable to humans, supporting trust, debugging, and regulatory compliance.
- [FCA AI Strategy](https://impetora.com/glossary/fca-ai-strategy) - The Financial Conduct Authority's AI strategy is the UK financial regulator's published approach to AI supervision, emphasising existing rules over new AI-specific legislation.
- [Fine-tuning](https://impetora.com/glossary/fine-tuning) - Fine-tuning is the process of continuing the training of a pre-trained model on a smaller, task-specific dataset to specialise its behaviour.
- [Foundation Model](https://impetora.com/glossary/foundation-model) - A foundation model is a large neural network pre-trained on broad data and designed to be adapted to many downstream tasks.
- [Function Calling](https://impetora.com/glossary/function-calling) - Function calling is a specific implementation of tool use in which the language model emits a structured JSON object matching a declared function signature, which the host application then executes.
- [GDPR](https://impetora.com/glossary/gdpr) - The General Data Protection Regulation (GDPR) is the EU's data-protection regulation, governing the processing of personal data of people in the EU and EEA.
- [Generative AI](https://impetora.com/glossary/generative-ai) - Generative AI is the class of AI systems that produce new content (text, images, audio, video, code) rather than only classifying or scoring existing inputs.
- [Guardrails](https://impetora.com/glossary/guardrails) - Guardrails are runtime checks placed around an AI system to constrain inputs, outputs, and tool calls within safety, compliance, and business policy.
- [Hallucination](https://impetora.com/glossary/hallucination) - A hallucination is a confident-sounding output from a generative AI model that is not grounded in any source and is factually wrong.
- [Impact Assessment](https://impetora.com/glossary/impact-assessment) - An impact assessment is a structured analysis of the potential effects an AI system could have on individuals, groups, and processes before it is deployed.
- [Inference](https://impetora.com/glossary/inference) - Inference is the act of running a trained model on new inputs to produce predictions or generated output.
- [ISO 42001](https://impetora.com/glossary/iso-42001) - ISO/IEC 42001 is the international standard for AI management systems, specifying requirements for establishing, implementing, maintaining, and continually improving an AI governance programme.
- [Large Language Model](https://impetora.com/glossary/large-language-model) - A Large Language Model (LLM) is a foundation model trained on text to predict the next token, capable of generating, summarising, and reasoning over natural language.
- [LLMOps](https://impetora.com/glossary/llmops) - LLMOps is the subset of MLOps focused on the specific operational concerns of large language models: prompt versioning, evaluation, cost control, and output observability.
- [Machine Learning](https://impetora.com/glossary/machine-learning) - Machine Learning (ML) is a subfield of AI in which systems learn statistical patterns from data rather than being explicitly programmed with rules.
- [MLOps](https://impetora.com/glossary/mlops) - MLOps is the discipline of operating machine learning systems in production: versioning, deployment, monitoring, retraining, and governance.
- [Model Card](https://impetora.com/glossary/model-card) - A model card is a structured document describing an AI model's purpose, training data, performance, limitations, and ethical considerations.
- [Model Drift](https://impetora.com/glossary/model-drift) - Model drift is the gradual or sudden degradation of a model's performance in production caused by changes in input data, target distribution, or operating context.
- [Multi-modal AI](https://impetora.com/glossary/multi-modal-ai) - Multi-modal AI refers to systems that can process and generate more than one type of input or output, such as text, images, audio, and video, within a single model or pipeline.
- [Neural Network](https://impetora.com/glossary/neural-network) - A neural network is a computational model loosely inspired by biological neurons, in which weighted connections between simple units learn to map inputs to outputs.
- [NIST AI RMF](https://impetora.com/glossary/nist-ai-rmf) - The NIST AI Risk Management Framework is a voluntary US framework for managing risks of AI systems across the lifecycle, organised around the functions Govern, Map, Measure, and Manage.
- [Observability](https://impetora.com/glossary/observability) - Observability for AI is the ability to understand what an AI system did, why it did it, and at what cost, by inspecting its inputs, outputs, intermediate steps, and metrics.
- [Pilot Phase](https://impetora.com/glossary/pilot-phase) - The pilot phase is the staged build of a working AI system on a narrow, real workflow with real data, evaluated against success criteria agreed in discovery.
- [Production Phase](https://impetora.com/glossary/production-phase) - The production phase is the deployment, operation, and continuous improvement of an AI system in live use, with the controls and monitoring required for the workflow's risk class.
- [Prompt Engineering](https://impetora.com/glossary/prompt-engineering) - Prompt engineering is the practice of designing, testing, and versioning the instructions given to a language model to elicit reliable, evaluable outputs.
- [RAG (Retrieval-Augmented Generation)](https://impetora.com/glossary/rag) - Retrieval-Augmented Generation (RAG) is an architecture pattern that grounds a language model's output in retrieved source documents rather than relying on the model's parametric memory alone.
- [Sub-processor](https://impetora.com/glossary/sub-processor) - A sub-processor is a third party that processes personal data on behalf of a processor, typically an infrastructure or software vendor sitting beneath the primary service provider.
- [Tool Use](https://impetora.com/glossary/tool-use) - Tool use is the capability of a language model to invoke external functions, APIs, or services as part of producing a response.
- [TRACE Methodology](https://impetora.com/glossary/trace-methodology) - TRACE is Impetora's four-pillar methodology for delivering enterprise AI in regulated industries: Trust, Readiness, Architecture, Citations.
- [Transparency Notice](https://impetora.com/glossary/transparency-notice) - A transparency notice is a clear disclosure to users that they are interacting with an AI system, what it is doing with their data, and what its limits are.
- [Vector Database](https://impetora.com/glossary/vector-database) - A vector database is a storage system optimised for indexing and querying high-dimensional embedding vectors using approximate nearest neighbour search.
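
Several of the terms above (Embedding, Vector Database, RAG) share one mechanic: representing content as vectors and ranking candidates by similarity. A minimal sketch of the retrieval step, using invented toy vectors and labels in place of a real embedding model:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the
    # vectors' magnitudes; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real models emit hundreds or
# thousands of dimensions); vectors and labels here are invented.
corpus = {
    "data residency": [0.9, 0.1, 0.0],
    "vector search":  [0.1, 0.9, 0.2],
    "model drift":    [0.0, 0.2, 0.9],
}
query = [0.2, 0.8, 0.1]  # pretend-embedding of a user's question

# Rank corpus items by similarity to the query, best first --
# the core retrieval step a RAG pipeline runs before generation.
ranked = sorted(corpus, key=lambda k: cosine_similarity(query, corpus[k]),
                reverse=True)
print(ranked[0])  # prints "vector search"
```

A production vector database replaces the exhaustive `sorted` scan with approximate nearest neighbour indexing, but the similarity metric plays the same role.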

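The Function Calling and Tool Use entries describe a loop in which the model emits structured JSON naming a function and its arguments, and the host application validates and executes it. A minimal sketch of the host side, with an invented tool name and a hard-coded model reply standing in for a real model API call:

```python
import json

# Registry of functions the host is willing to execute -- the
# "tools" in tool use. The name and signature here are invented.
def get_order_status(order_id: str) -> str:
    return f"Order {order_id}: shipped"

TOOLS = {"get_order_status": get_order_status}

# Hard-coded stand-in for the model's structured output; a real
# system would receive this JSON from the model provider's API.
model_reply = '{"name": "get_order_status", "arguments": {"order_id": "A-1001"}}'

call = json.loads(model_reply)
fn = TOOLS.get(call["name"])  # guardrail: only registered tools may run
if fn is None:
    raise ValueError(f"model requested unknown tool: {call['name']}")
result = fn(**call["arguments"])
print(result)  # prints "Order A-1001: shipped"
```

The explicit registry lookup is itself a guardrail in the sense defined above: the model can only request actions from an allow-list, never execute arbitrary code.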
## Markdown twins

- https://impetora.com/md/glossary/aba-formal-opinion-512
- https://impetora.com/md/glossary/agentic-ai
- https://impetora.com/md/glossary/ai-audit-trail
- https://impetora.com/md/glossary/ai-risk-management
- https://impetora.com/md/glossary/ai-roi
- https://impetora.com/md/glossary/ai-solutions-partner
- https://impetora.com/md/glossary/aiops
- https://impetora.com/md/glossary/artificial-intelligence
- https://impetora.com/md/glossary/build-vs-buy-ai
- https://impetora.com/md/glossary/ccpa-ai
- https://impetora.com/md/glossary/conformity-assessment
- https://impetora.com/md/glossary/consulting-ai
- https://impetora.com/md/glossary/custom-ai
- https://impetora.com/md/glossary/data-card
- https://impetora.com/md/glossary/data-residency
- https://impetora.com/md/glossary/deep-learning
- https://impetora.com/md/glossary/discovery-phase
- https://impetora.com/md/glossary/discriminative-ai
- https://impetora.com/md/glossary/dora
- https://impetora.com/md/glossary/eiopa-ai-statement
- https://impetora.com/md/glossary/embedding
- https://impetora.com/md/glossary/enterprise-ai
- https://impetora.com/md/glossary/eu-ai-act
- https://impetora.com/md/glossary/evaluation-harness
- https://impetora.com/md/glossary/explainable-ai
- https://impetora.com/md/glossary/fca-ai-strategy
- https://impetora.com/md/glossary/fine-tuning
- https://impetora.com/md/glossary/foundation-model
- https://impetora.com/md/glossary/function-calling
- https://impetora.com/md/glossary/gdpr
- https://impetora.com/md/glossary/generative-ai
- https://impetora.com/md/glossary/guardrails
- https://impetora.com/md/glossary/hallucination
- https://impetora.com/md/glossary/impact-assessment
- https://impetora.com/md/glossary/inference
- https://impetora.com/md/glossary/iso-42001
- https://impetora.com/md/glossary/large-language-model
- https://impetora.com/md/glossary/llmops
- https://impetora.com/md/glossary/machine-learning
- https://impetora.com/md/glossary/mlops
- https://impetora.com/md/glossary/model-card
- https://impetora.com/md/glossary/model-drift
- https://impetora.com/md/glossary/multi-modal-ai
- https://impetora.com/md/glossary/neural-network
- https://impetora.com/md/glossary/nist-ai-rmf
- https://impetora.com/md/glossary/observability
- https://impetora.com/md/glossary/pilot-phase
- https://impetora.com/md/glossary/production-phase
- https://impetora.com/md/glossary/prompt-engineering
- https://impetora.com/md/glossary/rag
- https://impetora.com/md/glossary/sub-processor
- https://impetora.com/md/glossary/tool-use
- https://impetora.com/md/glossary/trace-methodology
- https://impetora.com/md/glossary/transparency-notice
- https://impetora.com/md/glossary/vector-database
