Zeblok Computational
1.3.4
Build, Serve and Deploy

Introduction to Ai-API™


Overview

Ai-API™ makes moving trained ML models to production easy:

  • Package models trained with any supported ML framework, then containerize the model server for production deployment

  • Deploy anywhere for online API serving endpoints or offline batch inference jobs

  • High-performance API model server with adaptive micro-batching support

  • The Ai-API™ server handles high request volumes without crashing, and supports multi-model inference, API server Dockerization, a built-in Prometheus metrics endpoint, a Swagger/OpenAPI endpoint for API client library generation, serverless endpoint deployment, and more

  • Central hub for managing models and deployment process via web UI and APIs

  • Supports various ML frameworks including:

Scikit-Learn, PyTorch, TensorFlow 2.0, Keras, FastAI v1 & v2, XGBoost, H2O, ONNX, Gluon and more

  • Supports API input data types including:

DataframeInput, JsonInput, TfTensorflowInput, ImageInput, FileInput, MultifileInput, StringInput, AnnotatedImageInput and more

  • Supports API output adapters including:

BaseOutputAdapter, DefaultOutput, DataframeOutput, TfTensorOutput and JsonOutput
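
The adaptive micro-batching mentioned above can be illustrated with a minimal sketch. This is plain Python, not the Ai-API™ implementation: requests that arrive within a short latency window are grouped so the model runs once per batch instead of once per request.

```python
import threading
import queue
from concurrent.futures import Future

class MicroBatcher:
    """Toy adaptive micro-batcher: groups requests arriving within a
    short latency window and runs the model once per batch."""

    def __init__(self, predict_fn, max_batch_size=8, max_latency_s=0.01):
        self._predict_fn = predict_fn          # maps a batch to a list of results
        self._max_batch_size = max_batch_size
        self._max_latency_s = max_latency_s
        self._queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, item):
        """Enqueue one request; returns a Future resolved with its result."""
        fut = Future()
        self._queue.put((item, fut))
        return fut

    def _run(self):
        while True:
            # Block for the first request, then drain more until the
            # batch is full or the latency window expires.
            batch = [self._queue.get()]
            while len(batch) < self._max_batch_size:
                try:
                    batch.append(self._queue.get(timeout=self._max_latency_s))
                except queue.Empty:
                    break
            inputs = [item for item, _ in batch]
            try:
                outputs = self._predict_fn(inputs)   # one model call per batch
                for (_, fut), out in zip(batch, outputs):
                    fut.set_result(out)
            except Exception as exc:
                for _, fut in batch:
                    fut.set_exception(exc)

# Example: a stand-in "model" that doubles each input
batcher = MicroBatcher(lambda xs: [x * 2 for x in xs])
futures = [batcher.submit(i) for i in range(5)]
print([f.result() for f in futures])   # -> [0, 2, 4, 6, 8]
```

The latency window is the adaptive part: under light load each request is served almost immediately, while under heavy load batches fill up and per-request overhead amortizes across the batch.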

Easy steps to Ai-API™ Deployment

  1. Select your notebook
  2. Build Model
  3. Deploy
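
What these steps produce is an online serving endpoint: an HTTP API that accepts input data and returns model predictions. The sketch below (standard-library Python, with a hypothetical `predict` stand-in for the packaged model; the real endpoint is generated and deployed by Ai-API™) shows the request/response shape such an endpoint exposes.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Stand-in "model": in practice this is the model packaged from your notebook.
def predict(features):
    return {"prediction": sum(features)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run inference.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = predict(json.loads(body)["features"])
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):   # silence per-request logging
        pass

# Serve on an ephemeral local port and issue one test request.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"features": [1, 2, 3]}).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.loads(urlopen(req).read()))   # -> {'prediction': 6}
server.shutdown()
```

A deployed Ai-API™ endpoint is consumed the same way: POST a JSON payload matching the model's input adapter and read back the JSON response.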