I
Impetora
For: Chief Information Security Officer

Custom AI for the CISO: how we ship systems your SOC can defend and your auditor can read.

AI security posture is the union of adversarial robustness, data-leakage prevention, supply-chain visibility, and an evidence chain your SOC team can pull during an incident. Off-the-shelf AI ships the model. We ship the controls around the model. Every system we build is mapped to the OWASP LLM Top 10 and the NIST AI Risk Management Framework before it goes near production traffic.

Red-teaming pipelines, output filtering, prompt-injection defence, and audit-grade logging. Discovery includes a written threat model, not a feature list.

10
OWASP Top 10 risks for LLM applications, mapped to controls in every build
OWASP
4
NIST AI RMF core functions: GOVERN, MAP, MEASURE, MANAGE
NIST
100%
Inference traffic logged with prompt, response, model version, and decision step
Zero
Customer data used for foundation-model training, contractually guaranteed
What CISOs actually care about

The five concerns we hear on every CISO discovery call.

Prompt injection

An attacker hides instructions inside a document, an email, or a tool response. The model executes them. Output filtering and instruction isolation are not optional.

Data exfiltration through the model

A model can leak training data, retrieved context, or system prompts through a crafted request. We design retrieval and output layers to make exfiltration paths observable and bounded.

Supply-chain visibility

Foundation-model providers, vector databases, embedding services, observability vendors. Your sub-processor list has to be defensible under contract and visible to the SOC.

Evidence chain for incident response

When something goes wrong, the SOC needs the prompt, the retrieved context, the model version, and the response, time-stamped and immutable. Audit logs are not a feature, they are the spine.

Identity and access control

AI inference traffic has to respect the same access rules as the underlying data. A user must never see AI output drawn from data they could not query directly.

Model updates and regression

A foundation-model upgrade can change behaviour overnight. Without a regression suite gating promotion, an upgrade is a security event you find out about from a user.
TRACE pillar focus

For CISOs, the spine is Trust.

For a CISO, the spine is Trust. Data residency, encryption posture, sub-processor visibility, audit-grade logging, and refusal of architectures that cannot be defended. We map every system to the OWASP LLM Top 10 and the NIST AI RMF before a line of production code ships, and the threat model is delivered as a written artefact your team can red-team against. Architecture and Citations sit on top, but Trust is the gate.

Most enterprise AI breaches do not start with a zero-day. They start with a system that nobody could explain and nobody could log.
Impetora threat-model notes
Engagement model

What the engagement looks like from your seat.

Threat modelDiscoveryControl mappingOWASP + NISTRed-teamPre-launchSOC integrationAudit logIncident responseEvidence chain
How a CISO engagement runs end to end.
Deliverables

What CISOs need from a partner, and what we ship.

Written threat model

A document covering attack surface, abuse cases, abuse impact, and the controls that mitigate each risk. Delivered before Build phase begins.

OWASP LLM Top 10 control mapping

Each of the ten categories mapped to specific controls in the build: prompt injection, insecure output handling, training-data poisoning, model denial-of-service, supply chain, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, model theft.

NIST AI RMF alignment

GOVERN, MAP, MEASURE, MANAGE applied across the engagement. Documented per system, refreshed at every release.

Red-teaming pipeline

Adversarial test suite run before launch and on every model upgrade. Findings remediated and re-tested. The harness is yours to extend.

SOC integration

Audit logs delivered in a format your SIEM can ingest. Alerting on prompt-injection signatures, anomalous tool use, and policy-violation patterns.

Sub-processor disclosure

A documented list of every third-party that touches inference, retrieval, or storage. Updated when it changes. Reviewable under your DPA.

CISO questions, answered.

How do you handle prompt injection?

We treat every input as untrusted. Retrieved documents, tool responses, and user-supplied content are parsed in a context layer that is isolated from the system prompt, and the model is instructed to refuse instructions found inside that layer. Output is filtered against a deny-list, a policy classifier, and a structured-output schema where the workflow allows it. We also red-team the system before launch using a published prompt-injection corpus plus client-specific abuse cases. The full mitigation set is mapped to OWASP LLM01 and documented per build.

What happens if a model leaks training data or retrieved context?

By default, we contract zero-retention and no-training clauses with the foundation-model provider, and we deploy on EU regions or your tenancy where the workflow requires it, so training-data leakage is bounded by the provider's published guarantees. For retrieved context, we apply permission-scoped retrieval at query time and output filtering at response time, so a user only ever sees content drawn from data they were already authorised to read. The audit log captures the retrieved chunks, the prompt, and the response, so an exfiltration attempt is recoverable as a forensic artefact.

Do you provide evidence for our SOC team during an incident?

Yes. Every inference call writes an immutable record of the input, the retrieved context, the model version, the response, and the downstream action. The format is your SIEM's native ingestion format, whether that is Splunk, Sentinel, or an OTLP collector. During an incident, your SOC analyst replays the conversation, identifies the failing step, and exports the evidence chain into the case. We document the schema in the threat model so your team can build detections against it.

How do you handle the supply chain?

Every third-party that touches inference, retrieval, or storage is listed in a sub-processor table delivered with the DPA. The list includes the data category each one processes, the legal basis, and the data-residency posture. When the list changes, you get notified under contract. We refuse to use sub-processors that cannot meet the residency or contractual posture your environment requires.

How do you gate foundation-model upgrades?

Model upgrades go through the evaluation harness and the red-team suite before promotion. A failed regression blocks promotion, full stop. We also run a shadow-mode comparison where the new model runs alongside the current one on production traffic, with output logged but not actioned, so behavioural drift is caught before users see it. The promotion decision is yours, with a written diff in front of you.

Bring us the CISO mandate. We bring the audit-ready system.

Discovery starts with a scoped audit. The deliverable is yours either way. We respond within two business days at info@ainora.lt.

Discovery call

Book a discovery call

Tell us what you would like to build. We reply within one business day.

30-minute call. Free of charge. No obligation.