Model capabilities in Hybrid Manager v1.2

Hybrid Manager provides full support for Model Serving and the Model Library, both delivered through the AI Factory workload.

Models are deployed as scalable Inference Services using KServe on Hybrid Manager’s Kubernetes infrastructure.
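
As an illustration of what such a deployment looks like at the Kubernetes level, the sketch below creates an InferenceService with the KServe Python SDK. The service name, namespace, and model storage URI are hypothetical placeholders; in practice Hybrid Manager creates and manages these resources through the AI Factory workload, using model images from the Model Library.

```python
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
)

# Placeholder values for illustration only; real deployments use the
# namespaces, images, and storage locations managed by Hybrid Manager.
NAME = "sklearn-iris"
NAMESPACE = "default"
STORAGE_URI = "gs://kfserving-examples/models/sklearn/1.0/model"

isvc = V1beta1InferenceService(
    api_version="serving.kserve.io/v1beta1",
    kind="InferenceService",
    metadata=client.V1ObjectMeta(name=NAME, namespace=NAMESPACE),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            # A scikit-learn predictor loading the model from object storage.
            sklearn=V1beta1SKLearnSpec(storage_uri=STORAGE_URI)
        )
    ),
)

kserve_client = KServeClient()  # uses the current kubeconfig context
kserve_client.create(isvc)      # submit the InferenceService to the cluster
kserve_client.wait_isvc_ready(NAME, namespace=NAMESPACE)  # block until Ready
```

Once the InferenceService is created, KServe provisions the serving pods, routing, and autoscaling that back it.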

The Model Library provides discovery and management of supported models and container images.

Key components

  • Model Serving — deploy models as network-accessible inference services (see the request sketch after this list).
  • Model Library — discover and manage AI models and container images.
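
As a sketch of what "network-accessible" means here, a deployed model can be called over HTTP using KServe's v1 prediction protocol. The endpoint URL and model name below are hypothetical; the real URL is reported in the InferenceService status.

```python
import requests

# Hypothetical endpoint; the real one appears in the InferenceService status
# (for example via `kubectl get inferenceservice sklearn-iris`).
ISVC_URL = "http://sklearn-iris.default.example.com"
MODEL_NAME = "sklearn-iris"

# KServe v1 protocol: POST /v1/models/<name>:predict with an "instances" payload.
payload = {"instances": [[6.8, 2.8, 4.8, 1.4]]}
resp = requests.post(
    f"{ISVC_URL}/v1/models/{MODEL_NAME}:predict", json=payload, timeout=30
)
resp.raise_for_status()
print(resp.json())  # e.g. {"predictions": [1]}
```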

Where to learn more

GPUs

Understand the role of GPUs in Model Serving with AI Factory, how Hybrid Manager uses them, and how to prepare GPU resources.

Asset Library

How the Asset Library works within Hybrid Manager and how to manage model images and AI assets.

Model Serving

How Model Serving works within Hybrid Manager and key deployment considerations.