Security and governance in AI Factory v1.3

AI Factory helps you build Gen AI and data‑driven applications on your infrastructure with clear guardrails. This page explains who is responsible for what, where governance is enforced, and how to apply least‑privilege patterns across components.

Why security and governance matter

Gen AI systems touch sensitive data, call powerful models, and may execute actions. Proper controls reduce risk while keeping teams productive:

  • Protect sensitive data at rest and in transit
  • Limit who can deploy models, change retrieval rules, or publish assistants
  • Provide auditability and observability for compliance
  • Enforce least privilege when assistants call tools or external services

See also: Sovereign AI.

Where enforcement happens

Governance is enforced at several layers: the governed stores that hold source data and embeddings, the Retrievers and Rulesets that constrain retrieval and behavior, the Tools that mediate external calls, the model endpoints you serve, and the observability surface you audit. Each layer has a clear owner.

Who is responsible

  • Platform teams: Control cluster access, secrets, and network policy; provision GPUs for Model Serving.
  • Data engineers: Govern sources, Knowledge Bases, and embedding freshness.
  • Application/Gen AI builders: Define Rulesets, Retrievers, Tools, and Assistants with least privilege.

How to apply governance in AI Factory

  • Keep data in governed stores
    • Use Postgres and object storage managed by your platform; control access with your standard policies.
    • Use Vector Engine to keep embeddings close to data and reuse existing database controls.
  • Govern retrieval and behavior
    • Express retrieval scope and limits with Retrievers and Knowledge Bases.
    • Enforce acceptable behavior with Rulesets; external calls should be explicit through Tools.
  • Secure model endpoints
    • Serve models through private, platform‑managed endpoints rather than exposing inference APIs publicly.
    • Limit who can deploy or update models using your standard access controls.
  • Observe and audit
    • In Hybrid Manager, use integrated observability to monitor model serving, Gen AI runs, and pipelines.
    • Treat Threads, logs, and metrics as audit artifacts, retaining them as required by your policy.
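To make the least‑privilege pattern above concrete, the sketch below models a Tool whose external calls are explicit and restricted to an allowlist of non‑destructive operations. This is plain Python for illustration only; the `ScopedTool` class, `document_handler` backend, and operation names are hypothetical, not an AI Factory API.

```python
class ScopedTool:
    """Wraps a handler and rejects operations outside an explicit allowlist."""

    def __init__(self, name, handler, allowed_operations):
        self.name = name
        self.handler = handler
        self.allowed_operations = frozenset(allowed_operations)

    def invoke(self, operation, **kwargs):
        # Least privilege: anything not explicitly granted is denied.
        if operation not in self.allowed_operations:
            raise PermissionError(
                f"Tool '{self.name}' does not permit operation '{operation}'"
            )
        return self.handler(operation, **kwargs)


def document_handler(operation, **kwargs):
    # Hypothetical backend: only non-destructive lookups are implemented.
    if operation == "search":
        return [f"result for {kwargs.get('query')}"]
    if operation == "fetch":
        return {"id": kwargs.get("doc_id"), "body": "..."}
    raise ValueError(f"unknown operation: {operation}")


# Grant read-style operations only; destructive operations are never reachable.
docs_tool = ScopedTool("documents", document_handler, ["search", "fetch"])
print(docs_tool.invoke("search", query="retention policy"))
```

The point of the pattern is that the grant lives in one reviewable place (the allowlist), so widening an assistant's capabilities is an explicit, auditable change rather than a side effect of the handler's code.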

Prerequisites

  • Hybrid Manager: A project with AI Factory enabled and entitlements for Gen AI, Pipelines, Vector search, and Model Serving. See the AI Factory spoke.
  • Outside Hybrid Manager: Ensure an enterprise secrets store, network policy, and audit logging are in place for the components you run.

Use cases

  • Regulated knowledge assistants: Use governed Knowledge Bases, constrained Retrievers, and Rulesets; run models via private endpoints.
  • Document intelligence for PII: Keep content in Postgres/object storage, process via Pipelines, and restrict Tools to non‑destructive operations.
  • Internal automation: Gate write‑actions behind Tools with explicit scopes; review Threads and logs periodically.
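The internal‑automation pattern of gating write actions behind explicit scopes, with every attempt recorded for later review, can be sketched as follows. The scope names and `gated_action` helper are illustrative assumptions, not part of AI Factory; the audit trail here is a plain Python logger standing in for whatever logging your policy requires.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical scope model: an assistant holds an explicit set of granted
# scopes, and write actions each declare the scope they require.
GRANTED_SCOPES = {"tickets:read", "tickets:create"}


def gated_action(action, required_scope, granted_scopes=GRANTED_SCOPES):
    """Run a write action only if its scope was explicitly granted; log every attempt."""
    allowed = required_scope in granted_scopes
    audit_log.info("action=%s scope=%s allowed=%s", action, required_scope, allowed)
    if not allowed:
        raise PermissionError(f"missing scope: {required_scope}")
    return f"executed {action}"


print(gated_action("create_ticket", "tickets:create"))  # permitted
# gated_action("purge_tickets", "tickets:admin")        # would raise PermissionError
```

Because denied attempts are logged alongside permitted ones, the periodic review of Threads and logs described above can surface both misconfigured scopes and assistants probing beyond their grants.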