Custom AI for the General Counsel: how we ship AI you can defend, indemnify, and disclose.
A General Counsel reading an AI vendor contract is looking for four things: that the system has a defensible posture under the EU AI Act and GDPR, that the training data is IP-clean, that the indemnity language survives a copyright or data-subject claim, and that the regulator-pack exists before launch and not after the first complaint. We build to that brief by default. Every system carries a conformity-assessment file, a model card, and a sub-processor disclosure that your team can sign without a fresh round of redlines.
Regulator-pack assembly, conformity-assessment evidence, IP-clean training data, sub-processor contracts, and Article 22 design patterns. Delivered as documents, not promises.
The five concerns we hear on every GC discovery call.
AI Act conformity assessment
GDPR Article 22 exposure
Training-data IP posture
Output IP and ownership
Sub-processor disclosure
Indemnity and liability
For GCs, the spine is Citations and Evidence.
For a General Counsel, the spine is Citations and Evidence. Every output traceable to its source document, prompt, and decision step. A reviewer signing off can replay the decision in seconds. A regulator opening a file gets the conformity-assessment pack on day one. A claimant sending a data-subject access request gets a defensible response without forensic archaeology. The audit trail is not a feature; it is the artefact your indemnity language depends on.
An AI contract without an evidence chain is an indemnity you cannot honour and a regulator file you cannot fill.
Where GCs typically engage us first.
Document processing
Decision support
Customer support automation
Internal knowledge AI
What the engagement looks like from your seat.
What GCs need from a partner, and what we ship.
Defensible audit trail
Annex IV technical documentation
GDPR Article 22 design pattern
IP-clean training data posture
Sub-processor contracts
Model card and disclosure
GC questions, answered.
How do you handle GDPR Article 22?
Article 22 gives data subjects the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects, unless one of the specified bases applies (contractual necessity, EU or member-state law, or explicit consent) and suitable safeguards are in place. We classify every workflow against Article 22 in Discovery. Where the workflow falls in scope, we design a human-in-the-loop surface that is meaningful, not cosmetic. The reviewer sees the model's reasoning, the retrieved evidence, and the alternative options, and the decision logic supports a documented contestation path. The EDPB 2024 guidelines on Article 22 are reflected in the design pattern, and the audit log captures the human approval as a first-class event.
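Capturing the human approval as a first-class event can be as simple as a structured record written alongside the model output. A hedged sketch with illustrative names (none of these identifiers come from a real system); the enforced rationale is what keeps the review meaningful rather than cosmetic:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Literal

# The outcomes a meaningful review surface must support: approve,
# override (reviewer substitutes their own decision), or contest.
Outcome = Literal["approved", "overridden", "contested"]

@dataclass(frozen=True)
class HumanReviewEvent:
    """Human sign-off recorded as a first-class audit event, not a side note."""
    workflow_id: str
    model_recommendation: str
    evidence_refs: tuple[str, ...]  # retrieved evidence the reviewer actually saw
    outcome: Outcome
    reviewer: str
    rationale: str                  # documented basis; supports contestation
    timestamp: str

def sign_off(workflow_id, recommendation, evidence, outcome, reviewer, rationale):
    # A rubber-stamp with no rationale is cosmetic oversight; refuse it.
    if not rationale.strip():
        raise ValueError("Rationale required: approval must be meaningful")
    return HumanReviewEvent(
        workflow_id=workflow_id,
        model_recommendation=recommendation,
        evidence_refs=tuple(evidence),
        outcome=outcome,
        reviewer=reviewer,
        rationale=rationale,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```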
Is the training data IP-clean?
We do not train foundation models. We contract with foundation-model providers under clauses that prohibit training on your data, and we use the providers' documented training-data position as the IP basis for the model itself. Where you supply training data for fine-tuning or retrieval, we maintain a provenance log and a licence record per source, and we refuse to ingest data without a documented licence. The full provenance chain is delivered as part of the regulator-pack at launch.
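The refuse-without-licence rule can be enforced directly in the ingestion path. A minimal sketch under our own assumptions (`LicenceRecord` and its fields are illustrative, not a real schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LicenceRecord:
    source: str        # where the data came from
    licence: str       # e.g. "CC-BY-4.0" or "customer-supplied under MSA"
    evidence_uri: str  # pointer to the signed licence or contract clause

class UnlicensedSourceError(Exception):
    """Raised when a source has no documented licence record."""

provenance_log: list[LicenceRecord] = []

def ingest(source: str, licences: dict[str, LicenceRecord]) -> None:
    """Refuse to ingest any source without a documented licence record."""
    record = licences.get(source)
    if record is None:
        raise UnlicensedSourceError(f"No licence record for {source}; ingestion refused")
    provenance_log.append(record)  # the log ships in the regulator-pack at launch

licences = {
    "contracts-2023": LicenceRecord(
        "contracts-2023", "customer-supplied under MSA", "msa.pdf#clause-9"
    )
}
ingest("contracts-2023", licences)
```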
What about derivative-works and copyright claims on AI outputs?
AI output ownership and infringement risk are a function of the foundation-model provider's terms, the input you supplied, and the use case. We design every system to keep the human reviewer in the signing seat for any output that constitutes original creative work, and we contract output-licence terms that flow through to your customer agreement. Where a foundation-model provider offers an indemnity for output infringement (several do), we use it. Where they do not, we structure the system to keep the human reviewer accountable for the final output.
What does the regulator-pack contain?
For a high-risk system under the EU AI Act, the regulator-pack contains the Annex IV technical documentation (nine chapters), the EU declaration of conformity, the conformity-assessment evidence (Annex VI or VII route), the risk-management file, the data-governance documentation, the human-oversight design, the post-market monitoring plan, and the audit-log schema. For a limited-risk system, the pack is proportionate: transparency notice, model card, audit-log schema, and DPIA where the workflow triggers GDPR Article 35. The pack is delivered before launch, not after.
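Because the pack contents differ by risk tier, completeness is checkable before launch. A hypothetical sketch, with artefact names abbreviated from the list above (the tier labels and check are our own illustration, not AI Act terminology):

```python
# Required artefacts per risk tier, abbreviated from the pack contents above.
REQUIRED = {
    "high-risk": {
        "annex-iv-technical-documentation",
        "eu-declaration-of-conformity",
        "conformity-assessment-evidence",
        "risk-management-file",
        "data-governance-documentation",
        "human-oversight-design",
        "post-market-monitoring-plan",
        "audit-log-schema",
    },
    "limited-risk": {
        "transparency-notice",
        "model-card",
        "audit-log-schema",
    },
}

def missing_artefacts(tier: str, delivered: set[str],
                      dpia_triggered: bool = False) -> set[str]:
    """Return what is still missing before the pack can ship at launch."""
    required = set(REQUIRED[tier])
    if tier == "limited-risk" and dpia_triggered:
        required.add("dpia")  # workflow triggers GDPR Article 35
    return required - delivered
```

The launch gate is then a one-line check: the pack ships only when `missing_artefacts` returns an empty set.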
How do you handle indemnity and liability in the contract?
We push indemnity to the party that can carry it: the foundation-model provider for training-data IP, us for engineering negligence on the build, and you for use-case decisions and the human reviewer's sign-off. The contract names each indemnity, caps each one, and exposes the audit log as the evidence basis when one is invoked. Where your customer agreements require a flow-through indemnity, we draft the back-to-back terms during the Build phase, not after the first complaint.
Where to go next.
Annex VI vs VII routes, Annex IV technical documentation, EU declaration, CE marking.
SCHUFA, EDPB 2024 guidelines, AI scope, design pattern for compliance.
How we apply TRACE to in-house legal teams and law firms with audit-traceable AI.
Bring us the GC mandate. We bring the audit-ready system.
Discovery starts with a scoped audit. The deliverable is yours either way. We respond within two business days at info@ainora.lt.