# Prompt Engineering

> Prompt engineering is the practice of designing, testing, and versioning the instructions given to a language model to elicit reliable, evaluable outputs.

Category: Architecture
Source: https://impetora.com/glossary/prompt-engineering
Part of: Impetora AI consulting glossary (https://impetora.com/glossary)

## What is Prompt Engineering?

A production prompt is not a one-off instruction. It is a versioned artefact with system context, role definition, output schema, examples, refusal conditions, and tool-use rules. Common techniques include chain-of-thought prompting, few-shot examples, self-consistency, structured output enforcement, and retrieval-grounded instructions. Prompts are evaluated with a held-out test set and tracked alongside model version, temperature, and tool descriptions, the same way code is.
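The versioned artefact described above can be sketched as a small, immutable object. This is a minimal illustration, not a standard API: the field names, the `render` helper, and the model identifier are all assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PromptVersion:
    """A prompt as a versioned artefact, not a loose string."""
    version: str            # bumped on every change, like code
    model: str              # pinned model identifier
    temperature: float
    system_context: str     # role definition and standing instructions
    output_schema: dict     # structured-output contract, e.g. JSON Schema
    few_shot_examples: list = field(default_factory=list)
    refusal_conditions: list = field(default_factory=list)
    tool_rules: list = field(default_factory=list)

    def render(self, user_input: str) -> list:
        """Assemble the message list sent to the model API."""
        messages = [{"role": "system", "content": self.system_context}]
        for ex in self.few_shot_examples:
            messages.append({"role": "user", "content": ex["input"]})
            messages.append({"role": "assistant", "content": ex["output"]})
        messages.append({"role": "user", "content": user_input})
        return messages

# Hypothetical example: a ticket-triage prompt, pinned and versioned.
triage_v3 = PromptVersion(
    version="3.2.0",
    model="example-model-2025-01",  # placeholder, not a real model name
    temperature=0.0,
    system_context="You classify support tickets. Answer only in JSON.",
    output_schema={"type": "object",
                   "properties": {"category": {"type": "string"}}},
    few_shot_examples=[{"input": "Password reset fails",
                        "output": '{"category": "auth"}'}],
    refusal_conditions=["Request contains personal health data"],
)
```

Because the object is frozen and carries its own model and temperature, changing any part of the prompt forces a new version, which is what makes held-out evaluation and rollback meaningful.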

## How does Prompt Engineering apply to enterprise AI?

Enterprise prompt engineering replaces ad hoc "try and tweak" iteration with versioned templates, evaluation harnesses, and rollback paths. The resulting artefacts (prompt versions, test sets, and evaluation results) also feed the technical documentation that the EU AI Act requires.
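A minimal sketch of such an evaluation harness, run on every prompt change. The model call is stubbed with a trivial rule so the example is self-contained; in practice `classify_v3` would send a specific prompt version to the model API, and the test set and pass threshold here are hypothetical.

```python
def evaluate(prompt_fn, test_set):
    """Score a prompt against a held-out test set (exact-match accuracy)."""
    passed = sum(1 for case in test_set
                 if prompt_fn(case["input"]) == case["expected"])
    return passed / len(test_set)

# Stub standing in for a real model call with a pinned prompt version.
def classify_v3(ticket: str) -> str:
    return "auth" if "password" in ticket.lower() else "other"

# Held-out cases, kept out of the few-shot examples in the prompt itself.
held_out = [
    {"input": "Password reset fails", "expected": "auth"},
    {"input": "Invoice total is wrong", "expected": "other"},
]

score = evaluate(classify_v3, held_out)
assert score >= 0.95  # rollback gate: block rollout if accuracy regresses
```

The same harness run over two prompt versions gives a like-for-like comparison, which is what turns prompt changes into reviewable, revertible releases rather than guesswork.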

## Related terms

- [Large Language Model](https://impetora.com/glossary/large-language-model) - A Large Language Model (LLM) is a foundation model trained on text to predict the next token, capable of generating, summarising, and reasoning over natural language.
- [Evaluation Harness](https://impetora.com/glossary/evaluation-harness) - An evaluation harness is the test framework used to measure an AI system against a fixed set of inputs, expected outputs, and metrics, run on every change.
- [Guardrails](https://impetora.com/glossary/guardrails) - Guardrails are runtime checks placed around an AI system to constrain inputs, outputs, and tool calls within safety, compliance, and business policy.
- [RAG (Retrieval-Augmented Generation)](https://impetora.com/glossary/rag) - Retrieval-Augmented Generation (RAG) is an architecture pattern that grounds a language model's output in retrieved source documents rather than relying on the model's parametric memory alone.

## External references

- [Anthropic prompt engineering guide](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering)

---

Impetora is a custom AI consultancy and solutions partner for enterprises in regulated industries. Submit a project at https://impetora.com/intake.
