# Large Language Model

> A Large Language Model (LLM) is a foundation model trained on text to predict the next token, capable of generating, summarising, and reasoning over natural language.

Category: Foundational
Source: https://impetora.com/glossary/large-language-model
Part of: Impetora AI consulting glossary (https://impetora.com/glossary)

## What is a Large Language Model?

LLMs are transformer neural networks with billions of parameters, trained on web-scale text. They generate output by sampling tokens conditioned on a prompt and prior context. Useful behaviours like instruction following, code generation, and tool use emerge through additional training stages such as supervised fine-tuning and reinforcement learning from human feedback. LLMs do not have memory across calls unless explicitly given one, and they can produce confident-sounding errors known as hallucinations.
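The sampling behaviour described above can be sketched with a toy model. This is an illustration only: a real LLM is a transformer over subword tokens, and here a tiny hand-written bigram table stands in for the learned distribution. Note that `generate` is stateless, so the full context must be passed in on every call, mirroring how LLM APIs work.

```python
import random

# Toy stand-in for a learned next-token distribution (a real LLM uses a
# transformer with billions of parameters over subword tokens).
BIGRAMS = {
    "the": {"model": 0.6, "prompt": 0.4},
    "model": {"predicts": 1.0},
    "predicts": {"the": 0.5, "tokens": 0.5},
}

def next_token(context, rng):
    """Sample the next token conditioned on the context (here, just its last token)."""
    dist = BIGRAMS.get(context[-1], {"<eos>": 1.0})
    tokens, probs = zip(*dist.items())
    return rng.choices(tokens, weights=probs, k=1)[0]

def generate(prompt_tokens, max_new=5, seed=0):
    """Autoregressive decoding: each step conditions on the prompt plus all
    previously generated tokens. The function is stateless; there is no memory
    across calls unless the caller passes the history back in."""
    rng = random.Random(seed)
    context = list(prompt_tokens)
    for _ in range(max_new):
        tok = next_token(context, rng)
        if tok == "<eos>":
            break
        context.append(tok)
    return context

print(generate(["the"]))
```

Because decoding samples from a distribution, different seeds can produce different continuations of the same prompt, which is why production systems pin sampling parameters when reproducibility matters.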

## How do Large Language Models apply to enterprise AI?

Enterprises use LLMs for drafting, classification, extraction, summarisation, and grounded question answering. Production systems pair the model with retrieval, validation, and citation steps to reduce hallucination risk and produce auditable outputs.
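A minimal sketch of that retrieve-validate-cite pattern follows. The names here are assumptions, not a real API: `call_llm` is a placeholder for a hosted model call, and `retrieve` uses naive keyword overlap where a production system would use a vector store.

```python
# Hypothetical document store; in practice this would be an indexed corpus.
DOCS = {
    "policy-7": "Refunds are issued within 14 days of a returned item.",
    "policy-9": "Gift cards are non-refundable once activated.",
}

def retrieve(question, k=1):
    """Rank documents by word overlap with the question (stand-in for semantic search)."""
    q = set(question.lower().split())
    scored = sorted(DOCS.items(),
                    key=lambda kv: -len(q & set(kv[1].lower().split())))
    return scored[:k]

def call_llm(prompt):
    # Placeholder stub: a real system would call a model API here.
    return "Refunds are issued within 14 days. [policy-7]"

def answer(question):
    """Grounded QA: retrieve sources, prompt the model with them, then
    validate that the reply cites at least one retrieved source id."""
    sources = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    prompt = f"Answer using only these sources, citing their ids:\n{context}\nQ: {question}"
    reply = call_llm(prompt)
    if not any(f"[{doc_id}]" in reply for doc_id, _ in sources):
        return "No grounded answer available."
    return reply

print(answer("How long do refunds take?"))
```

The validation step is what makes the output auditable: an answer that cites no retrieved source is rejected rather than passed through, which is one practical way to reduce hallucination risk.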

## Related terms

- [Foundation Model](https://impetora.com/glossary/foundation-model) - A foundation model is a large neural network pre-trained on broad data and designed to be adapted to many downstream tasks.
- [Generative AI](https://impetora.com/glossary/generative-ai) - Generative AI is the class of AI systems that produce new content (text, images, audio, video, code) rather than only classifying or scoring existing inputs.
- [RAG (Retrieval-Augmented Generation)](https://impetora.com/glossary/rag) - Retrieval-Augmented Generation (RAG) is an architecture pattern that grounds a language model's output in retrieved source documents rather than relying on the model's parametric memory alone.
- [Hallucination](https://impetora.com/glossary/hallucination) - A hallucination is a confident-sounding output from a generative AI model that is not grounded in any source and is factually wrong.
- [Prompt Engineering](https://impetora.com/glossary/prompt-engineering) - Prompt engineering is the practice of designing, testing, and versioning the instructions given to a language model to elicit reliable, evaluable outputs.

## External references

- [Anthropic model card library](https://www.anthropic.com/research)
- [OpenAI research publications](https://openai.com/research)

---

Impetora is a custom AI consultancy and solutions partner for enterprises in regulated industries. Submit a project at https://impetora.com/intake.
