Impetora
For: General Counsel / Chief Legal Officer

Custom AI for the General Counsel: how we ship AI you can defend, indemnify, and disclose.

A General Counsel reading an AI vendor contract is looking for four things: that the system has a defensible posture under the EU AI Act and GDPR, that the training data is IP-clean, that the indemnity language survives a copyright or data-subject claim, and that the regulator-pack exists before launch, not after the first complaint. We build to that brief by default. Every system carries a conformity-assessment file, a model card, and a sub-processor disclosure that your team can sign without a round of redlines.

Regulator-pack assembly, conformity-assessment evidence, IP-clean training data, sub-processor contracts, and Article 22 design patterns. Delivered as documents, not promises.

EUR 35M
Maximum administrative fine under EU AI Act Article 99 for prohibited-practice breach
EUR-Lex
4%
Of global annual turnover, GDPR Article 83(5) maximum fine ceiling
GDPR-Info
9
Annex IV technical-documentation chapters required for high-risk systems
EUR-Lex
Art 22
GDPR right not to be subject to a decision based solely on automated processing
GDPR-Info
What GCs actually care about

The five concerns we hear on every GC discovery call.

AI Act conformity assessment

High-risk systems need a documented conformity assessment, an EU declaration of conformity, and CE marking. Building these after launch is more expensive than building them in.

GDPR Article 22 exposure

Decisions producing legal or similarly significant effects cannot be made solely on automated processing without explicit safeguards. Every customer-affecting AI workflow has to carry the design pattern.

Training-data IP posture

Foundation models trained on contested data create derivative-works claims and licence-violation risk. The contract has to push the indemnity to a vendor that can carry it.

Output IP and ownership

Who owns AI-generated output, what licence the foundation-model provider grants on it, and how that interacts with your existing customer contracts comes down to a four-line clause that takes a quarter to renegotiate if it is wrong.

Sub-processor disclosure

Every party that touches personal data has to be disclosed under your DPA, with the legal basis, the residency, and the contractual posture documented.

Indemnity and liability

When the model produces a wrong, biased, defamatory, or copyright-infringing output, your contract chain has to push the liability to a party that can carry it.
TRACE pillar focus

For GCs, the spine is Citations and Evidence.

For a General Counsel, the spine is Citations and Evidence. Every output traceable to its source document, prompt, and decision step. A reviewer signing off can replay the decision in seconds. A regulator opening a file gets the conformity-assessment pack on day one. A claimant sending a data-subject access request gets a defensible response without forensic archaeology. The audit trail is not a feature, it is the artefact your indemnity language depends on.

An AI contract without an evidence chain is an indemnity you cannot honour and a regulator file you cannot fill.
Impetora redline notes
Engagement model

What the engagement looks like from your seat.

Legal scope (Discovery) → Conformity pack (Annex IV) → Art 22 design (Pre-build) → Build + log (Evidence) → Regulator-pack (Pre-launch)
How a GC engagement runs end to end.
Deliverables

What GCs need from a partner, and what we ship.

Defensible audit trail

An immutable, append-only log of every input, retrieved context, model version, and output. Replayable on demand by your team or a regulator.
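In practice, "immutable and append-only" means each record carries the hash of its predecessor, so any after-the-fact edit breaks the chain and is detectable on replay. A minimal sketch of the idea (class and field names are illustrative, not our production schema):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log: each record is chained to the previous record's
    hash, so tampering with any entry invalidates the whole chain."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value before the first record

    def append(self, event_type, payload):
        record = {
            "ts": time.time(),
            "type": event_type,   # e.g. input, retrieval, model_call, output
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = self._last_hash
        self._records.append(record)
        return record["hash"]

    def verify(self):
        """Replay the chain; True only if no record was altered."""
        prev = "0" * 64
        for r in self._records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            h = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if h != r["hash"]:
                return False
            prev = h
        return True
```

The design choice that matters: verification needs nothing but the log itself, so your team, or a regulator, can replay it independently.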

Annex IV technical documentation

The nine-chapter pack the EU AI Act expects for high-risk systems: system description, design, monitoring, performance metrics, risk management, post-market monitoring, change log, EU declaration of conformity, instructions for use.

GDPR Article 22 design pattern

Where decisions produce legal effects, we ship explicit consent, human review surfaces, and a contestation path. Where the workflow does not need full automation, we design human-in-the-loop review in by default.

IP-clean training data posture

Foundation-model providers contracted with no-training clauses on customer data, and a documented training-data position from the provider. Where you supply training data, we maintain a provenance log and a licence record.

Sub-processor contracts

DPA delivered with a sub-processor list, residency map, legal basis per category, and notification terms when the list changes.
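The sub-processor list is structured data, not prose: one record per party, with residency and legal basis attached, so the DPA annex and the residency map fall out of the same register. A sketch (the vendor names below are hypothetical placeholders):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubProcessor:
    name: str
    service: str           # what they do with the data
    residency: str         # where the data is processed
    legal_basis: str       # e.g. Art 28 DPA, SCCs, adequacy decision
    data_categories: tuple # which personal-data categories they touch

# Hypothetical example entries, not a real disclosure.
REGISTER = [
    SubProcessor("ExampleCloud", "model hosting", "EU (Frankfurt)",
                 "Art 28 DPA + SCCs", ("prompts", "outputs")),
    SubProcessor("ExampleVector", "retrieval index", "EU (Dublin)",
                 "Art 28 DPA", ("documents",)),
]

def residency_map(register):
    """Group sub-processors by processing location for the DPA annex."""
    out = {}
    for sp in register:
        out.setdefault(sp.residency, []).append(sp.name)
    return out
```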

Model card and disclosure

A published model card describing capabilities, limitations, evaluation results, and intended use. Reviewable by your team and shareable with customers under your transparency obligations.

GC questions, answered.

How do you handle GDPR Article 22?

Article 22 prohibits decisions producing legal or similarly significant effects from being made solely on automated processing without one of the specified bases (contract, EU or member-state law, or explicit consent) and explicit safeguards. We classify every workflow against Article 22 in Discovery. Where the workflow falls in scope, we design a human-in-the-loop surface that is meaningful, not cosmetic. The reviewer sees the model's reasoning, the retrieved evidence, and the alternative options, and the decision logic supports a documented contestation path. The EDPB 2024 guidelines on Article 22 are reflected in the design pattern, and the audit log captures the human approval as a first-class event.
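To make "meaningful, not cosmetic" concrete: nothing customer-affecting is released until a named reviewer has recorded a decision, and that decision lands in the audit log as its own event. A simplified sketch of the gate (names and shapes are illustrative):

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    OVERRIDE = "override"  # reviewer substitutes their own outcome
    CONTEST = "contest"    # routes to the documented contestation path

def gated_decision(model_output, reviewer_fn, audit_append):
    """The reviewer sees the answer, the reasoning, and the retrieved
    evidence; only their explicit approval releases the output."""
    decision, reviewer, rationale = reviewer_fn(model_output)
    # The human approval is logged as a first-class audit event.
    audit_append("human_review", {
        "reviewer": reviewer,
        "decision": decision.value,
        "rationale": rationale,
    })
    # Anything other than an approval drops to the manual path.
    return model_output["answer"] if decision is Decision.APPROVE else None
```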

Is the training data IP-clean?

We do not train foundation models. We contract with foundation-model providers under no-training clauses on your data, and we use the providers' documented training-data position as the IP basis for the model itself. Where you supply training data for fine-tuning or retrieval, we maintain a provenance log and a licence record per source, and we refuse to ingest data without a documented licence. The full provenance chain is delivered as part of the regulator-pack at launch.
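"Refuse to ingest data without a documented licence" is enforced at the ingestion boundary, not left to policy. A minimal sketch of a provenance log with that hard stop (field names are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SourceRecord:
    source_id: str
    licence: str      # e.g. "CC-BY-4.0", "customer-owned"
    supplied_by: str
    ingested_at: str

class ProvenanceLog:
    """One record per ingested source; no licence, no ingestion."""

    def __init__(self):
        self.records = []

    def ingest(self, source_id, licence, supplied_by):
        if not licence:
            # Hard stop: undocumented licence means the data never enters.
            raise ValueError(f"refusing {source_id}: no documented licence")
        rec = SourceRecord(source_id, licence, supplied_by,
                           datetime.now(timezone.utc).isoformat())
        self.records.append(rec)
        return rec
```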

What about derivative-works and copyright claims on AI outputs?

AI output ownership and infringement risk are a function of the foundation-model provider's terms, the input you supplied, and the use case. We design every system to keep the human reviewer in the signing seat for any output that constitutes original creative work, and we contract output-licence terms that flow through to your customer agreement. Where a foundation-model provider offers an indemnity for output infringement (several do), we use it. Where they do not, we structure the system to keep the human reviewer accountable for the final output.

What does the regulator-pack contain?

For a high-risk system under the EU AI Act, the regulator-pack contains the Annex IV technical documentation (nine chapters), the EU declaration of conformity, the conformity-assessment evidence (Annex VI or VII route), the risk-management file, the data-governance documentation, the human-oversight design, the post-market monitoring plan, and the audit-log schema. For a limited-risk system, the pack is proportionate: transparency notice, model card, audit-log schema, and DPIA where the workflow triggers GDPR Article 35. The pack is delivered before launch, not after.

How do you handle indemnity and liability in the contract?

We push indemnity to the party that can carry it: foundation-model provider for training-data IP, us for engineering negligence on the build, and you for use-case decisions and the human reviewer's sign-off. The contract names each indemnity, caps each one, and exposes the audit log as the evidence basis when one is invoked. Where your customer agreements require a flow-through indemnity, we draft the back-to-back terms during the Build phase, not after the first complaint.

Bring us the GC mandate. We bring the audit-ready system.

Discovery starts with a scoped audit. The deliverable is yours either way. We respond within two business days at info@ainora.lt.

Discovery call

Book a discovery call

Tell us what you would like to build. We reply within one business day.

30-minute call. Free of charge. No obligation.