How to use the CLI
This document guides you through the usage of the Zeblok CLI.
CLI usage and commands
Show-URL
This command outputs the App URL and DataLake URL configured via the zeblok configure command.
zeblok showUrl
Snapshot
This command creates a snapshot of your workstation. It builds a Docker image whose tag is emailed to you; that tag can then be used to create a new workstation with your current configuration.
zeblok snapshot
? Enter the name for the Docker file getting generated: <Docker Image Name>
? Select your DataLake Bucket to Deploy (Use arrow keys) <Minio Bucket Selection>
Brief
The snapshot command copies all files from your notebook, preserving the current directory structure. It also generates a requirements.txt listing all packages installed in your current notebook, and installs them when you spawn a new workstation from that Docker image, so the new workstation matches your current configuration.
requirements.txt is generated automatically with
pip list --format=freeze
The base image used for the workstation is
minimal-notebook:2023.09.20
Components
Docker Image Name - The name you want the Docker image to have
Minio Bucket Selection - Select the bucket where you want to store all the data of your notebook
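As a sketch, a typical snapshot run looks like this (the image name and bucket answers below are illustrative):

zeblok snapshot
? Enter the name for the Docker file getting generated: my-notebook-image
? Select your DataLake Bucket to Deploy (Use arrow keys) my-datalake-bucket

You can preview the dependency list the snapshot will capture by running the same command the CLI uses:

pip list --format=freeze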
OpenVINO
This command is used to serve a model using OpenVINO.
zeblok openvino
? Enter the path for the generated IR model (without / at the end): <IR_OUTPUT_PATH>
? Enter the IR output name: <IR_OUTPUT_NAME>
? Select your DataLake Bucket to Deploy: <BUCKET_NAME>
? Do you want to auto-deploy? : <AUTO_DEPLOY>
? Select your namespace: <NAMESPACE>
? Select your DataCenter: <DATACENTER>
? Select your Kiosk: <KIOSK>
? Select your Plan: <PLAN>
? Name of your Deployment: <DEPLOYMENT_NAME>
Components
IR_OUTPUT_PATH - Folder containing the <IR_OUTPUT_NAME>.xml and <IR_OUTPUT_NAME>.bin files
IR_OUTPUT_NAME - IR output name, which is the same for both the .xml and .bin files in the IR_OUTPUT_PATH
BUCKET_NAME - Select the bucket where you want to store the model files
AUTO_DEPLOY - A Boolean input that defines whether you want to deploy the model directly to the AI-API
NAMESPACE - Namespace where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DATACENTER - Datacenter where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
KIOSK - Kiosk where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
PLAN - Plan under which the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DEPLOYMENT_NAME - Name of the deployment used to deploy the AI-API; asked only if AUTO_DEPLOY is marked as YES
Ideal File Structure

For OpenVINO model serving to work, the output path must contain two files with the same name and the extensions .xml and .bin.
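For example, if <IR_OUTPUT_NAME> were model (an illustrative name), the expected layout would be:

<IR_OUTPUT_PATH>/
├── model.xml
└── model.bin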
BentoML
BentoML model serving serves a BentoML model built using the bentoml build command. The CLI fetches available models with the bentoml list --output json command and lets you select the one you want to serve.
zeblok bentoml
? Select your Model to Deploy: <MODEL_NAME>
? Select your DataLake Bucket to Deploy: <BUCKET_NAME>
? Do you want to auto-deploy? : <AUTO_DEPLOY>
? Select your namespace: <NAMESPACE>
? Select your DataCenter: <DATACENTER>
? Select your Kiosk: <KIOSK>
? Select your Plan: <PLAN>
? Name of your Deployment: <DEPLOYMENT_NAME>
Components
MODEL_NAME - Select the model built using the bentoml build command
BUCKET_NAME - Select the bucket where you want to store the model files
AUTO_DEPLOY - A Boolean input that defines whether you want to deploy the model directly to the AI-API
NAMESPACE - Namespace where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DATACENTER - Datacenter where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
KIOSK - Kiosk where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
PLAN - Plan under which the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DEPLOYMENT_NAME - Name of the deployment used to deploy the AI-API; asked only if AUTO_DEPLOY is marked as YES
BentoML model serving requires BentoML to be installed on your workstation.
If there is no model to select in the first question, try running
bentoml build
to create the model.
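A minimal sketch of preparing and verifying a model before running zeblok bentoml (both are standard BentoML commands; bentoml build expects a bentofile.yaml in your project):

# Build a Bento from your project
bentoml build
# Confirm the model appears in the list the CLI reads from
bentoml list --output json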
MLflow
MLflow model serving uses two steps to serve a model: first it downloads the model into your directory, and second it serves that model.
Commands
1. Get model
To get a model from the MLflow registry, use the command
zeblok mlflow
? Enter the MLFlow URL for model: <MLFLOW_TRACKING_URL>
? Enter the runId for model: <RUN_ID>
Components
MLFLOW_TRACKING_URL - MLflow tracking URL where your model is stored
RUN_ID - Run ID of the model you want to serve
To create a new MLFlow microservice, please refer to the AI-Microcloud documentation.
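If you do not know the run ID, one way to look it up is via the MLflow CLI (assuming MLflow is installed on your workstation; the experiment ID below is illustrative):

mlflow runs list --experiment-id 0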
2. Serve Model
Once the get-model command has finished, run the modelserve command to serve the model.
zeblok modelserve
? Enter the rootPath for the model in the directory: <DOWNLOADED_MODEL_PATH>
? Enter the IR output name: <IR_OUTPUT_NAME>
? Select your DataLake Bucket to Deploy: <BUCKET_NAME>
? Do you want to auto-deploy? : <AUTO_DEPLOY>
? Select your namespace: <NAMESPACE>
? Select your DataCenter: <DATACENTER>
? Select your Kiosk: <KIOSK>
? Select your Plan: <PLAN>
? Name of your Deployment: <DEPLOYMENT_NAME>
Components
DOWNLOADED_MODEL_PATH - Enter the folder name where your model is stored
IR_OUTPUT_NAME - Enter any name you want to give the model
BUCKET_NAME - Select the bucket where you want to store the model files
AUTO_DEPLOY - A Boolean input that defines whether you want to deploy the model directly to the AI-API
NAMESPACE - Namespace where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DATACENTER - Datacenter where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
KIOSK - Kiosk where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
PLAN - Plan under which the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DEPLOYMENT_NAME - Name of the deployment used to deploy the AI-API; asked only if AUTO_DEPLOY is marked as YES
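Putting the two steps together, an end-to-end flow looks like this:

# Step 1: download the model from the MLflow registry
zeblok mlflow
# Step 2: serve the downloaded model
zeblok modelserve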
LLaMA
This command serves a LLaMA model using llama.cpp serving.
zeblok llamacpp
? Enter the path for the generated IR model (without / at the end): <IR_OUTPUT_PATH>
? Enter the IR output name with its extension: <IR_OUTPUT_NAME>
? Select your DataLake Bucket to Deploy: <BUCKET_NAME>
? Do you want to auto-deploy? : <AUTO_DEPLOY>
? Select your namespace: <NAMESPACE>
? Select your DataCenter: <DATACENTER>
? Select your Kiosk: <KIOSK>
? Select your Plan: <PLAN>
? Name of your Deployment: <DEPLOYMENT_NAME>
Components
IR_OUTPUT_PATH - Enter the path where your model and requirements.txt file are located
IR_OUTPUT_NAME - Enter the model name with its extension, whether .ggml or .gguf
BUCKET_NAME - Select the bucket where you want to store the model files
AUTO_DEPLOY - A Boolean input that defines whether you want to deploy the model directly to the AI-API
NAMESPACE - Namespace where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DATACENTER - Datacenter where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
KIOSK - Kiosk where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
PLAN - Plan under which the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DEPLOYMENT_NAME - Name of the deployment used to deploy the AI-API; asked only if AUTO_DEPLOY is marked as YES
IR_OUTPUT_PATH must contain the model and the requirements.txt file to work. Make sure you enter IR_OUTPUT_NAME with its extension. Depending on the size of the model, this can take around 2 to 3 hours.
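As a sketch, with an illustrative model filename, the expected layout is:

<IR_OUTPUT_PATH>/
├── llama-2-7b.Q4_K_M.gguf   (illustrative; enter this full filename as IR_OUTPUT_NAME)
└── requirements.txt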
vLLM
This command serves a model using vLLM model serving.
zeblok vllm
? Enter the directory name containing your model : <MODEL_DIRECTORY>
? Select your DataLake Bucket to Deploy: <BUCKET_NAME>
? Do you want to auto-deploy? : <AUTO_DEPLOY>
? Select your namespace: <NAMESPACE>
? Select your DataCenter: <DATACENTER>
? Select your Kiosk: <KIOSK>
? Select your Plan: <PLAN>
? Name of your Deployment: <DEPLOYMENT_NAME>
Components
MODEL_DIRECTORY - Enter the folder name where your model and files are located inside the workstation
BUCKET_NAME - Select the bucket where you want to store the model files
AUTO_DEPLOY - A Boolean input that defines whether you want to deploy the model directly to the AI-API
NAMESPACE - Namespace where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DATACENTER - Datacenter where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
KIOSK - Kiosk where the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
PLAN - Plan under which the AI-API is to be deployed; asked only if AUTO_DEPLOY is marked as YES
DEPLOYMENT_NAME - Name of the deployment used to deploy the AI-API; asked only if AUTO_DEPLOY is marked as YES
Make sure the MODEL_DIRECTORY you enter contains all the files related to the model. Depending on the size of the model, this can take around 1 to 2 hours.
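As a sketch, with illustrative file names, a Hugging Face-style model directory (the format vLLM typically loads; the exact files depend on your model) looks like:

<MODEL_DIRECTORY>/
├── config.json
├── tokenizer.json
└── model.safetensors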