Foundation Model

A foundation model is a large neural network pre-trained on broad data and designed to be adapted to many downstream tasks.

What is a Foundation Model?

Foundation models are trained once on very large corpora and then reused across many applications. Adaptation happens through prompting, retrieval augmentation, or fine-tuning. Examples include large language models, vision-language models, speech models, and multi-modal models. Vendors host them as APIs or release them as open weights. The term was popularised by the Stanford Center for Research on Foundation Models (CRFM) in 2021 and is now central to the EU AI Act, which defines specific obligations for general-purpose AI models.
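
As a rough illustration of the "train once, adapt many ways" pattern, the sketch below adapts a hosted foundation model through prompting alone. It assumes the OpenAI Python SDK with an API key in the environment; the model name and task are illustrative, and any hosted foundation model API follows the same shape.

```python
# Minimal sketch: adapting a hosted foundation model via prompting alone.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY
# set in the environment; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

# The same pre-trained model is "adapted" purely through the prompt:
# here it is instructed to act as a contract-clause summariser.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # pin a specific version in production
    messages=[
        {"role": "system", "content": "You summarise contract clauses in plain English."},
        {"role": "user", "content": "Summarise: 'The Supplier shall indemnify the Customer...'"},
    ],
)
print(response.choices[0].message.content)
```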

How does a Foundation Model apply to enterprise AI?

Enterprise AI systems are usually built on top of a foundation model rather than trained from scratch. The buying decisions that matter are the vendor, the model family, the hosting region, version pinning, and whether prompts and outputs are retained.
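
One way to make those decisions explicit is to capture them in a single configuration object that procurement, security, and engineering all review. The sketch below is a hypothetical structure, not any vendor's API; every field name and value is an assumption chosen to mirror the choices listed above.

```python
# Hypothetical configuration capturing the buyer choices named above.
# None of these field names come from a real vendor API; they are
# placeholders for decisions a procurement review should record.
from dataclasses import dataclass

@dataclass(frozen=True)
class FoundationModelChoice:
    vendor: str           # hosting provider under contract
    model_family: str     # model line being licensed
    model_version: str    # pinned snapshot, never a floating "latest"
    hosting_region: str   # data-residency commitment
    retain_prompts: bool  # does the vendor store prompts?
    retain_outputs: bool  # does the vendor store outputs?

choice = FoundationModelChoice(
    vendor="ExampleVendor",  # illustrative values throughout
    model_family="example-llm",
    model_version="example-llm-2024-06-01",
    hosting_region="eu-west-1",
    retain_prompts=False,
    retain_outputs=False,
)
```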

Related terms

  • Large Language Model - A Large Language Model (LLM) is a foundation model trained on text to predict the next token, capable of generating, summarising, and reasoning over natural language.
  • Fine-tuning - Fine-tuning is the process of continuing the training of a pre-trained model on a smaller, task-specific dataset to specialise its behaviour.
  • RAG (Retrieval-Augmented Generation) - Retrieval-Augmented Generation (RAG) is an architecture pattern that grounds a language model's output in retrieved source documents rather than relying on the model's parametric memory alone; a minimal sketch follows this list.
  • EU AI Act - The EU AI Act (Regulation (EU) 2024/1689) is the European Union's horizontal regulation for AI, classifying systems by risk and imposing obligations on providers, deployers, importers, and distributors.
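
For the RAG entry above, here is a minimal sketch of the pattern. A toy keyword retriever stands in for a real vector store, and the documents are invented for illustration; the essential step is prompt assembly, where the model is instructed to answer from the retrieved text rather than from its parametric memory alone.

```python
# Minimal RAG sketch: a toy keyword retriever stands in for a real
# vector store; the point is that retrieved text is placed in the
# prompt so the model answers from sources, not parametric memory.
DOCUMENTS = [
    "Foundation models are pre-trained on broad data and adapted downstream.",
    "The EU AI Act imposes obligations on providers of general-purpose AI models.",
    "Fine-tuning continues training on a smaller task-specific dataset.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that instructs the model to answer from sources."""
    sources = "\n".join(f"- {d}" for d in retrieve(query, DOCUMENTS))
    return (
        "Answer using ONLY the sources below. If they are insufficient, say so.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("What obligations does the EU AI Act impose?"))
```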

