# Build, Serve and Deploy

### Overview

AI-API™ makes moving trained ML models to production easy:

* Package models trained with an ML framework, then containerize the model server for production deployment
* Deploy anywhere for online API serving endpoints or offline batch inference jobs
* High-performance API model server with adaptive micro-batching support
* The AI-API™ server handles high request volumes reliably and supports multi-model inference, API server Dockerization, a built-in Prometheus metrics endpoint, a Swagger/OpenAPI endpoint for API client library generation, serverless endpoint deployment, and more
* Central hub for managing models and the deployment process via web UI and APIs
* Supports various ML frameworks, including:

Scikit-Learn, PyTorch, TensorFlow 2.0, Keras, FastAI v1 & v2, XGBoost, H2O, ONNX, Gluon and more
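Regardless of which framework trained the model, the packaging step boils down to serializing the trained artifact so the model server can restore it later. The sketch below illustrates that idea with Python's standard `pickle`; the class and path names are illustrative only, not part of the AI-API™ SDK:

```python
# Framework-agnostic illustration of model packaging: serialize a trained
# artifact now, restore it in the model server later. SimpleModel and the
# artifact path are hypothetical stand-ins.
import os
import pickle
import tempfile

class SimpleModel:
    """Stand-in for a trained model from any supported framework."""
    def __init__(self, weight):
        self.weight = weight

    def predict(self, x):
        return x * self.weight

# "Train" the model, then package it as a serialized artifact.
model = SimpleModel(weight=3)
artifact_path = os.path.join(tempfile.gettempdir(), "model.pkl")
with open(artifact_path, "wb") as f:
    pickle.dump(model, f)

# The model server would later restore the artifact and serve predictions.
with open(artifact_path, "rb") as f:
    restored = pickle.load(f)

print(restored.predict(7))  # -> 21
```

In practice each framework has its own preferred serialization format (e.g. joblib for Scikit-Learn, SavedModel for TensorFlow), and the platform containerizes the restored model behind the API server.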

* Supports API input data types, including:

DataframeInput, JsonInput, TfTensorInput, ImageInput, FileInput, MultifileInput, StringInput, AnnotatedImageInput, and more
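Each input adapter converts a raw HTTP request body into the structure the model expects before inference runs. As a rough conceptual illustration of what a JsonInput-style adapter does (the function below is a hypothetical sketch, not the AI-API™ SDK):

```python
import json

def json_input_adapter(raw_body: bytes):
    """Hypothetical sketch: parse a raw request body into model-ready input,
    mirroring what a JsonInput-style adapter does before invoking predict()."""
    try:
        payload = json.loads(raw_body.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        raise ValueError(f"malformed JSON input: {exc}") from exc
    if not isinstance(payload, list):
        payload = [payload]  # normalize a single record into a batch of one
    return payload

records = json_input_adapter(b'[{"feature_a": 1.0}, {"feature_a": 2.5}]')
print(len(records))  # -> 2
```

The other adapters (DataframeInput, ImageInput, FileInput, and so on) play the same role for their respective payload types, so the model-serving code stays free of request-parsing details.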

* Supports API output adapters, including:

BaseOutputAdapter, DefaultOutput, DataframeOutput, TfTensorOutput, and JsonOutput
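The adaptive micro-batching mentioned above is a general serving technique: buffer incoming requests briefly and run them through the model as one batch, trading a small amount of latency for much higher throughput. A minimal, single-threaded sketch of the idea (this is conceptual pseudocode in Python, not AI-API™ source; a real server would park each request and dispatch from a background loop):

```python
import time

class MicroBatcher:
    """Conceptual micro-batching sketch: flush the buffer when it fills up
    or when the oldest buffered request has waited past a latency deadline."""
    def __init__(self, predict_batch, max_batch_size=8, max_wait_s=0.005):
        self.predict_batch = predict_batch   # vectorized model call
        self.max_batch_size = max_batch_size
        self.max_wait_s = max_wait_s
        self.buffer = []
        self.first_arrival = None

    def submit(self, request):
        if not self.buffer:
            self.first_arrival = time.monotonic()
        self.buffer.append(request)
        if self._should_flush():
            return self.flush()
        return None  # caller waits; a real server parks the request

    def _should_flush(self):
        full = len(self.buffer) >= self.max_batch_size
        stale = time.monotonic() - self.first_arrival >= self.max_wait_s
        return full or stale

    def flush(self):
        batch, self.buffer = self.buffer, []
        return self.predict_batch(batch)

# Usage with a toy vectorized model that doubles every input:
batcher = MicroBatcher(lambda xs: [x * 2 for x in xs], max_batch_size=3)
results = None
for value in [1, 2, 3]:
    results = batcher.submit(value) or results
print(results)  # -> [2, 4, 6]
```

The "adaptive" part in production systems typically means tuning `max_batch_size` and `max_wait_s` dynamically from observed traffic, rather than fixing them as constants.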

### Easy Steps to AI-API™ Deployment

1. [Select your notebook](https://computationaldocs.zeblok.com/info/1.3.7/procedure/workstations/introduction/notebooks)
2. [Build Model](https://computationaldocs.zeblok.com/info/1.3.7/procedure/workstations/introduction/build-models)
3. [Deploy](https://computationaldocs.zeblok.com/info/1.3.7/procedure/workstations/introduction/deploy)
