ZEBLOK COMPUTATIONAL AI-MICROCLOUD RELEASE NOTES
Version 1.2.7 (March 2024)
All of the updates listed below have been integrated into release version 1.2.7 of the AI-Microcloud, specific to our Basic SKU.
PaaS
User Interface Enhancements
Dashboard User View Enhancements:
Roles and Policies: Naming conventions for roles and policies on the dashboard have been finalized.
User Roles Depiction: Users' roles are now visually represented on the user list for administrators.
Datacenter Functionality:
Node Type Display: Users can now view the type of node they are accessing within the datacenter.
Datacenter Form Revamp: The datacenter form has been optimized to remove unwanted fields.
Microservice and Workstation Cards Enhancement: Cards in the MS/WS section now include vital information such as license details and service provider information.
Terms and Conditions Checkbox: A terms and conditions checkbox has been added during any spawning process, ensuring users acknowledge usage terms before proceeding.
Provider Information Display: Third-party service provider details are now visible on respective cards for user reference.
Activity Logs: Users can now access detailed activity logs for better tracking and monitoring.
Password Functionality:
Forgot Password Page: Users now have access to a forgotten password page for streamlined password recovery.
Reset Password Page: A reset password page has been added for users to securely reset their passwords.
View Password Button: A new "view password" button has been added in the launcher for password visibility convenience.
Search and Filter Fields: Improved UI functionality with the addition of search and filter fields for enhanced navigation and data organization.
AppStore Improvement: The AppStore (present in earlier versions) has been optimized to fit into the Intelligent Marketplace.
Error Message Display: Error messages are now displayed on the UI for troubleshooting.
Login Page: The Login page has been cleaned up for better user experience.
Backend Enhancements
Event Exporter: Introducing the Event Exporter feature to facilitate seamless exporting of events for analysis and reporting purposes.
Kubernetes Token Dynamic Fetching: Dynamic fetching of Kubernetes tokens has been implemented, ensuring enhanced security and efficiency in token management (see the token-fetching sketch at the end of this section).
Error Handling Improvements:
Error Message Definition: Error messages now indicate the severity level of errors, providing clearer insight into system issues.
Error Handling: Enhanced error handling mechanisms to address and manage errors more effectively.
Dashboard Enhancement:
Information Details: Improved dashboard interface with detailed depiction of WS, MS, and OaaS (Workstation, Microservice, and Orchestration-as-a-Service) details for better visibility and monitoring.
Standard Naming Implementation: Standard naming conventions applied throughout the application for consistency and clarity in data representation.
Async/Await Implementation: Async/Await methodology implemented across the application for improved performance and responsiveness.
Removal of Bottleneck: Identified bottlenecks have been removed, leading to smoother and more efficient system operations.
Logic Optimization: System logic optimized to enhance performance and resource utilization.
Deployment Tab:
The Deployments tab handles the containerization of models and then deploys them when the non-auto-deploy option is selected.
The Deployments tab also incorporates the feature of deploying Ai-APIs and Ai-Pipelines.
It also indicates when containerization fails at a particular stage of the build; the corresponding logs are stored for further assessment.
Email Alerting System: Email alerting system integrated to notify users promptly about important events and system updates.
Improved Seeding Process:
Seeding Process Enhancement: Seeding process has been improved for faster and more reliable data population.
Database Preservation: Database preservation ensured with only updated seeds being changed, maintaining data integrity.
GPU Monitoring on Cloud Environments: GPU monitoring capability added for cloud environments, allowing users to efficiently manage GPU resources (a GPU-monitoring sketch follows at the end of this section).
Containerization as a Service Revamp: The Containerization as a Service feature has been separated out and developed as a standalone module.
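As a purely illustrative sketch of the dynamic token fetching mentioned above (the AI-MicroCloud backend's actual mechanism is not documented here), the snippet below requests a short-lived service-account token via the Kubernetes TokenRequest API using the official kubernetes Python client; the service-account name, namespace, and expiry are placeholder assumptions.

```python
# Illustrative sketch only: fetch a short-lived Kubernetes service-account token
# on demand instead of caching a long-lived one. Names and expiry are placeholders.
from kubernetes import client, config

config.load_incluster_config()  # or config.load_kube_config() outside the cluster
core_v1 = client.CoreV1Api()

token_request = client.AuthenticationV1TokenRequest(
    spec=client.V1TokenRequestSpec(
        audiences=["https://kubernetes.default.svc"],
        expiration_seconds=600,  # short-lived; re-fetched whenever it expires
    )
)
response = core_v1.create_namespaced_service_account_token(
    name="microcloud-backend",   # hypothetical service account
    namespace="default",
    body=token_request,
)
short_lived_token = response.status.token
```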
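And as a sketch of the kind of metrics the GPU monitoring feature surfaces (not the platform's actual monitoring pipeline), the snippet below samples per-GPU utilization and memory with NVIDIA's NVML bindings (pynvml).

```python
# Illustrative sketch only: sample per-GPU utilization and memory via NVML.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex, nvmlDeviceGetUtilizationRates,
    nvmlDeviceGetMemoryInfo,
)

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        util = nvmlDeviceGetUtilizationRates(handle)  # .gpu / .memory in percent
        mem = nvmlDeviceGetMemoryInfo(handle)         # .used / .total in bytes
        print(f"GPU {i}: utilization={util.gpu}% "
              f"memory={mem.used // 2**20}/{mem.total // 2**20} MiB")
finally:
    nvmlShutdown()
```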
CLI Enhancements
Integrated Object Storage Support:
MINIO, BLOB, and S3 Integration: The Command Line Interface (CLI) now seamlessly integrates with MINIO, BLOB, or S3 object storage, adapting to the environment's storage requirements. Users can effortlessly manage and manipulate objects stored in these environments directly from the CLI interface, enhancing flexibility and convenience in data handling.
Containerization as a Service Integration:
Containerization Simplified: With the integration of Containerization as a Service (CaaS), users can now easily deploy and manage containerized applications directly from the CLI. This streamlined approach eliminates the complexities associated with container management, allowing users to focus on building and deploying their applications with ease.
SDK Enhancements
Integrated Comp-App v1.2.7 changes
Made changes in API response processing for Plan and Add-On Orchestration components of the SDK.
Updated the Docker Hub subdomain URL in the SDK.
Added Ai-Pipelines Feature
Added Ai-Pipeline Auto-Deploy and Non-Auto Deploy feature.
Added interfaces to get all Ai-Pipelines (get_all) and to validate an Ai-Pipeline.
Added DataSets Feature & Integrated Object Storage Support:
The SDK integrates seamlessly with the environment's respective object storage and currently supports S3 (AWS), Blob (Azure), and MinIO storage.
Users can upload and download files of any size directly to and from the object storage via the SDK, from their local machine or the Ai-MicroCloud environment.
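The SDK's exact upload/download interface is not reproduced here; as a hedged illustration of the kind of object-storage calls such a feature wraps, the sketch below uses the official client for each backend. Endpoints, credentials, and bucket/container names are placeholders.

```python
# Illustrative sketch only: direct uploads to S3, MinIO, and Azure Blob with their
# official Python clients. Endpoints, credentials, and bucket names are placeholders.
import boto3                                # S3 (AWS)
from minio import Minio                     # MinIO
from azure.storage.blob import BlobClient   # Blob (Azure)

# S3: boto3's managed transfer switches to multipart for large files automatically.
boto3.client("s3").upload_file("dataset.csv", "my-datasets", "raw/dataset.csv")

# MinIO: fput_object streams the file from disk.
minio_client = Minio("minio.example.com", access_key="...", secret_key="...")
minio_client.fput_object("my-datasets", "raw/dataset.csv", "dataset.csv")

# Azure Blob: upload_blob accepts a file handle.
blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="my-datasets", blob_name="raw/dataset.csv"
)
with open("dataset.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```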
SaaS
Basic Launcher (AWS, AZURE, Bare-Metal)
This version includes the following major changes.
Standard Naming Conventions: Renamed variables, functions, and components across modules like Auth, User, and AKS/EKS controllers to adhere to standard naming conventions, improving codebase consistency and readability.
Async/Await Implementation Throughout: Replaced call-back based functions with async/await syntax in modules such as API service layers, Redux action creators, and data fetching utilities, enhancing code readability and managing asynchronous operations effectively.
Removal of Bottlenecks: Optimized data fetching by removing redundant API calls, implementing batch insertion for improved performance, and streamlining file operations during cluster creation and updates. Refactored complex logic in AKS/EKS controllers, reducing code complexity and improving scalability.
Logic Optimization: Streamlined data processing logic in components like API controller layers, improving computational efficiency and reducing resource consumption. Reorganized conditional statements and loops for better code flow and readability.
Frontend State Management: Integrated Redux Toolkit for state management in front-end components, such as User Profile, Dashboard, and Login, ensuring a centralized and efficient state management system.
Error Handling and Message Definition: Enhanced error handling mechanisms by centralizing error logging, implementing custom error messages with severity levels, and providing detailed error responses from API endpoints.
UI Improvements: Revamped UI components for better user experience, introduced warnings, alerts and actions for better visualization.
Code Segmentation: Broke down complex functions into smaller, manageable parts for easier understanding and modification, and optimized the code by reducing its overall line count.
The launcher application for the cloud platforms (AWS and Azure) allows the user to add an extra node group/node pool once the cluster is created.
Provision of adding GPU instance node pools has been included.
In case of deployment failure, the user can download the log file and troubleshoot. Once the issue is identified, the user can edit/modify the cluster and resume the deployment of Ai-Microcloud.
A new feature addition is component version validation. Once the cluster is successfully created and the MicroCloud is properly deployed, the user automatically receives a list of all the modules along with their respective versions, which can be referenced in support tickets, resulting in quicker resolutions.
Version 1.2.6 (December 2023)
All of the updates listed below have been integrated into major release version 1.2.6 of the AI-Microcloud from the earlier version in use, AI-Microcloud 1.2.5.
SaaS
Model Serving of Large Language models
LLAMA CPP serves quantized models
Facebook 125M model served using vLLM
Served as an API via the command-line interface
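As an illustrative sketch of this kind of serving (assuming the "Facebook 125M model" refers to facebook/opt-125m; the platform's own CLI wrapper is not shown here), the snippet below loads the model with vLLM's offline inference API.

```python
# Illustrative sketch only: generate with a small Facebook model in vLLM.
# "facebook/opt-125m" is an assumption for the 125M model referenced above.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

for output in llm.generate(["Zeblok Ai-MicroCloud is"], params):
    print(output.outputs[0].text)

# vLLM can also expose the same model as an OpenAI-compatible HTTP API, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m
```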
PaaS
ZBL CLI
vLLM serving of models as APIs
LLAMA CPP serving of models as APIs
Capability of multipart upload for larger models
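Whether the ZBL CLI uses boto3 directly is not stated here; as a hedged illustration of multipart uploads for large model artifacts, the sketch below uses boto3's managed transfer configuration. Thresholds, bucket, and key names are placeholders.

```python
# Illustrative sketch only: multipart upload of a large model file with boto3's
# managed transfer. Thresholds, bucket, and key names are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MiB
    multipart_chunksize=64 * 1024 * 1024,  # upload in 64 MiB parts
    max_concurrency=8,                     # parallel part uploads
)
boto3.client("s3").upload_file(
    "large-model.bin", "model-artifacts", "models/large-model.bin", Config=config
)
```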
COMP APP
The whole comp-app code base has been refactored to reduce crashes in SDK communication and to improve error handling and API interaction handling.
IAM roles and privileges have been updated to give users more granular privileges and to make it easy to segregate access among an organisation's system admins.
A generic image for the Enterprise edition allows the use of a single image that is compatible with all our environments.
UI Improvements for AI-Microcloud
The profile page design in the comp app has been updated.
A support button has been added, and users can raise tickets from the AI-Microcloud itself.
Once the secret keys are generated, they remain available in the user's session.
A copy button has been added for the keys.
A view-password option has also been added to the profile page, and the login page now has a context menu.
Microservice and Workstation card alignment has been fixed.
Removal of the Zeblok Ai-WorkSpace text
Segregation of Manage account page
The user's role is now also displayed on the Manage Your Account page.
The "Get Data-Lake Key" and "Get API Credentials" buttons have been renamed to "Generate Datalake Key" and "Generate API Credentials".
A view-password button has been added to the register page.
Padding fixes in the UI
Role table fixes
COMP AUTH
Complete code refactoring to make the app more resilient and avoid server crashes.
Client libraries for object storage (MinIO) have been updated, allowing on-the-fly bucket creation and policy creation on MinIO at the time of user registration (see the sketch below).
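As a hedged sketch of what on-the-fly bucket and policy creation can look like with the MinIO Python client (the actual comp-auth logic is not reproduced here; the bucket naming scheme and policy document below are placeholder assumptions):

```python
# Illustrative sketch only: create a bucket and attach a bucket policy when a new
# user registers. The naming scheme and policy document are placeholders; a real
# deployment would scope access to the registering user's credentials.
import json
from minio import Minio

minio_client = Minio("minio.example.com", access_key="...", secret_key="...")

def provision_user_storage(username: str) -> None:
    bucket = f"user-{username}"
    if not minio_client.bucket_exists(bucket):
        minio_client.make_bucket(bucket)
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }],
    }
    minio_client.set_bucket_policy(bucket, json.dumps(policy))
```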
Microservices and Workstations
Introduction of new and updated microservices and workstations into the MicroCloud for more dynamic and extensive usability, e.g. Streamlit UI, SceneScape, ChromaDB, etc.
New versions of the Docker images have been provided in the seeding part of the AI-MicroCloud.
Monitoring
Event Exporter: The dependency on the third-party Kubernetes event exporter has been removed and the Kube-API is now used instead. This has improved event caching and reliability, ensuring that no emitted events are missed (see the sketch below).
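As an illustrative sketch of consuming cluster events straight from the Kube-API with the official Python client (not the platform's actual exporter code):

```python
# Illustrative sketch only: stream cluster events directly from the Kube-API
# instead of relying on a third-party event exporter.
from kubernetes import client, config, watch

config.load_incluster_config()  # or config.load_kube_config() outside the cluster
core_v1 = client.CoreV1Api()

w = watch.Watch()
for item in w.stream(core_v1.list_event_for_all_namespaces, timeout_seconds=60):
    event = item["object"]
    print(event.type, event.reason, event.involved_object.kind, event.message)
```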
CaaS
CaaS is now available to concurrent users with the help of Kubernetes Jobs, so no user has to wait (see the Kubernetes Job sketch below).
Containerization as a Service has been completely redesigned to improve usability and enhance the user experience. Users can invoke the CaaS service with custom plans as required and can also view the logs for the whole process.
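As a hedged sketch of how one containerization request per Kubernetes Job keeps concurrent users from queueing (the image, namespace, and arguments below are placeholder assumptions, not the platform's actual values):

```python
# Illustrative sketch only: submit each containerization request as its own
# Kubernetes Job so concurrent users are served in parallel.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
batch_v1 = client.BatchV1Api()

def submit_caas_job(request_id: str, builder_image: str) -> None:
    container = client.V1Container(
        name="caas-build",
        image=builder_image,                  # hypothetical builder image
        args=["--request-id", request_id],
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"caas-{request_id}"),
        spec=client.V1JobSpec(
            backoff_limit=1,
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container], restart_policy="Never")
            ),
        ),
    )
    batch_v1.create_namespaced_job(namespace="caas", body=job)
```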
Improvements
Large chunks of data can now be fed and processed during model serving.
Models of any size can be served and packaged as an API using our CaaS service.
SDK
The whole code base has been refactored for validation processes, with changes to the authentication mechanism and packaging for a seamless user experience.
IaaS
Hardware
Intel AMX support now available with Basic and Enterprise SKU of AI-Microcloud
Basic Baremetal Launcher
Our Basic launcher now lets the user create the AI platform in a bare-metal environment automatically within a few minutes by filling in a few details in the launcher and providing access credentials (for bare-metal connectivity), which are taken in as encrypted data. With this information, the scripts run in the background and the launcher provides the URL to access the AI-Microcloud, fully deployed and ready to use.
Enterprise Azure Launcher
The Enterprise launcher allows the user to connect their 3 hubs to any number of bare-metal servers and to establish the environment automatically within a few minutes by filling in a few details in the launcher.
Platform upgrade
Our applications (MicroCloud, Launcher, and OTF) are now compatible with MongoDB Atlas integration; this has been tested and is ready for use.
Website
Playground
A new addition to our Zeblok website, www.zeblok.com, is the Playground button, which enables any interested user to try out our MicroCloud free for 21 days by clicking the "Playground" button and registering on the subsequent forms, which grants direct access to the playground feature.
Fixes and Improvements
Enterprise Launcher fixes
Bugs in the user interface have been resolved.
Removed the cross symbol from the success message; a message is now displayed in a new window instead.
Users now land on the Marketplace tab when they log in to the AI-Microcloud.
Launcher script: the token has been fixed inside the script.
The hash is now received by email.
User inputs are taken interactively while running the script.
Launcher user registration email verification is now enabled.
The t2.xlarge instance type in the form has been fixed.
Master names have been removed.
Users now land on the Request page in the launcher once they log in.
Version 1.2.5 (September 2023)
Hardware:
AIM now runs on the Intel Sapphire Rapids processor
BASIC:
Storage for Azure Cloud: Azure Blob storage is available from the UI for uploading, viewing, and downloading datasets.
The Azure Basic version is integrated with the Orchestration add-on (Ray), which enables users to use the HPC capability of the platform for training and developing complex models and running large computations.
Multi-master cluster on Bare Metal servers supported
Version 1.2.4 (June 2023)
Enterprise version:
Software Updates: All software packages have been updated to their latest versions, covering bug fixes and security updates.
Security:
Users can select and implement the Minimum Baseline Security Standard (MBSS) on the edge server using the Enterprise launcher UI.
Users can restrict SSH access to the edge server using the Enterprise launcher UI.
Hardware:
Supermicro IoT SuperServer SYS-510D-10C-FN6P is certified for use as edge server.
Supermicro IoT SuperServer SYS-110D-20C-FRAN8TP is certified for use as Hub and edge server.
Intel® Data Center GPU Flex 140 is tested and certified for use at Hub and edge server
Monitoring: Users can view the edge nodes attached to the cluster and check their online status, i.e. whether they are active or not.
Basic version:
Auto scaling plans on AWS: Users can now define auto-scaling plans in Ai-MicroCloud®, which makes the platform capable of scaling the infrastructure as resource needs increase while running AI workloads.
Storage: Ai-MicroCloud® datalake now uses S3 as backend in AWS environment.
Security: An encryption layer has been added to the Basic Launcher for handling sensitive user data, making the system more secure against external attacks.
HPC: Users can spawn Ray clusters using orchestration add-on feature of Ai-MicroCloud® which will enable them to train and develop models with speed.
BUG FIXES:
Removal of the Datalake button on the user interface
Flickering of the Ai-MicroCloud® login page
Error handling: error messages are now shown at the bottom of the webpage
Support button fixed with proper ticket creation
Alerting integrated into the platform to know the status of the environment in case of downtime
CROSS cloud integration resolved
Automatic redirection to the next/previous page once an action is completed on the current one
All mandatory fields highlighted in forms for the users
ML-flow integration on the ZBL CLI
Version 1.2.2 (JANUARY 2023)
Edge Monitoring:
SuperAdmin can monitor CPU and Memory for edge servers and deployed microservices with a more detailed overview which allows them to gather detailed information about individual Edge servers or particular microservices.
Detailed Graphical representation of the utilization of server resources.
CICD:
Automatic cyclic updates without taking down the whole environment, i.e. only the Git version is changed, which directly incorporates changes into a pre-deployed AI-M ecosystem.
Automated deployment of the whole ecosystem from a single point of contact just by using variables that are collected from the user irrespective of the platform/environment to be deployed upon.
Edge Security:
The Minimum Baseline Security Standard (MBSS) has been incorporated into our ecosystem, providing standardized security in line with global regulations and making the whole platform more secure against outside attacks.
Access control on Edge provides SSH and BMC console only from specific subnets and on specific interfaces.
Data encryption on local storage is incorporated into the system to make it more secure.
Version 1.2.1 (SEPTEMBER 2022)
Improved Microservice Configuration: Spawning MicroService workflow now has a more flexible and dynamic configuration system for environment variables, arguments, ports, etc.
Simultaneous EDGE deployments of AI-API and MicroServices.
Zeblok Cli: Zeblok Cli now provides an option to deploy your Ai-Models to multiple edge nodes simultaneously in an Edge Datacenter as an Ai-API.
Dynamic plans (CPU and Memory): The Ai-WorkStation, MicroService, and Ai-API spawning workflows now offer an option to attach customized plans (CPU and RAM according to the user's needs), with resources kept within the limits set by the real-time resource-metrics recommendation system (a sketch of mapping such plans to resource limits follows this list).
Resource manager with monitoring: Resource Manager Tab in Ai-MicroCloud now gives the superadmin and admins an ability to view the monitoring graphs of each resource like Ai-API, Ai-Workstation and MicroServices.
Crosscloud integration: Ai-Models and Ai-Pipelines can now show their security-scan logs, alongside the previously enabled Ai-WorkStations and MicroServices, giving users a better understanding of vulnerabilities in their images.
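As a sketch of mapping such a customized plan onto Kubernetes resource requests and limits (the Ai-MicroCloud's actual plan model and recommendation caps are not reproduced here; all values below are placeholders):

```python
# Illustrative sketch only: clamp a user-selected plan to recommended ceilings and
# express it as Kubernetes resource requests/limits. Values are placeholders.
from kubernetes import client

def plan_to_resources(cpu_cores: float, memory_gi: int,
                      max_cpu: float, max_memory_gi: int) -> client.V1ResourceRequirements:
    cpu = min(cpu_cores, max_cpu)        # keep within the recommended CPU cap
    mem = min(memory_gi, max_memory_gi)  # keep within the recommended memory cap
    return client.V1ResourceRequirements(
        requests={"cpu": str(cpu), "memory": f"{mem}Gi"},
        limits={"cpu": str(cpu), "memory": f"{mem}Gi"},
    )

resources = plan_to_resources(cpu_cores=2, memory_gi=8, max_cpu=4, max_memory_gi=16)
```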
LAUNCHER APP(Edge):
New GUI for easy and efficient setup of the Microcloud on bare metal servers
Zeblok Launcher App is the newest member in the Ai-MicroCloud installation ecosystem. This feature seamlessly installs the whole Ai-MicroCloud edge ecosystem from scratch in a bare-metal environment efficiently.
Version 1.2.0 (JUNE 2022)
The Zeblok Python SDK now has the feature to deploy simultaneous multi-datacenter Ai-Pipelines, along with the ability to integrate the Ai-Run outputs with open-source tools.
Version 1.1.0 (MARCH 2022)
Openvino Model Serving capabilities are now supported in the Ai-API Engine.
Openvino toolkit distribution has been upgraded to version 2022.4.1
Ai-WorkStations have been upgraded with Nvidia Cuda Version 11.4
JupyterLab has been upgraded to the latest version along with Python v3.9.
Version 1.0.0 (DECEMBER 2021)
Zeblok Ai-MicroCloud is now available at Azure Cloud Environment with all the latest features and power giving the same experience of Zeblok AI-MicroCloud on the Azure Platform.
Zeblok has established Ai-MicroClouds for two private baremetal turned cloud environments: Advantech Labs and Intel Dev Cloud.
Version 0.0.9 (SEPTEMBER 2021)
PaaS Updates:
Admin analytics dashboard has been updated with more precise and accurate data along with bug fixes providing a more precise idea of the usage by the users.
Zeblok Authentication MicroService has been updated with more secure tokens and authorization logics while making the user’s authentication and signup process easier than before.
Zeblok Ai-MicroCloud introduces a brand new workflow for microservices where you can run pre-uploaded and modified images in an isolated and independent environment in just a few clicks, like MongoDB, Intel DLWorkBench, etc.
Session page now also handles metrics for spawned microservices along with Ai-WorkStations.
The user's profile has two new tabs: one for spawned MicroServices and one for their total runtime consumption, both of which can be accessed by the SuperAdmin and the organization's Admin.
Periodic backups every 6 hours have been established in AWS S3 buckets as a disaster-management measure.
The Superadmin now gets a new tab in their menu called "Resource Manager", where they can view every kind of resource in the Ai-MicroCloud, such as Ai-WorkStations, MicroServices, and Ai-APIs, and control them if required.
Version 0.0.8 (JUNE 2021)
Infrastructure Re-Architecture is done for better performance.
JupyterHub has been taken out and Kubernetes APIs are introduced for orchestration, giving a more stable infrastructure and debugging system.
This new architecture allows more control over resources and gives a faster experience, so users don't have to wait long for the spawning process.
Dynamic plans can now be introduced, giving full access to the power of the infrastructure.
Istio mesh networking is introduced for load balancing and networking among the pods for better connectivity along with better security.
PaaS Updates:
Login and Signup pages have a new UI now which is also mobile responsive.
Update has been rolled out for the new Admin analytics dashboard with more analytical metrics for viewing the activities of specific users in an organization.
Session page has more attributes added for more in depth metering and analysis.
User’s profile has a new tab for durations of the Ai-WorkStations now.
The Organizations table has a new View button for viewing the users and their duration of work in the Ai-Workstations.
Server Side pagination is implemented for reducing the bandwidth usage for users.
We now introduce a new workflow in our Ai-MicroCloud called Ai-API Engine which can be accessed by going to the Ai-Api section in the menu and clicking on the Create Ai-API button. For more information, you can visit https://computationaldocs.zeblok.com/zeblok-computational-1/ai-api/introduction
A new separate Zeblok Authentication microservice has been added exclusively for enhanced authentication processes and validation, also giving users the ability to secure their accounts using two-factor authentication.
The login method has been improved with the addition of refresh tokens and JWT tokens for a new level of security (see the sketch below).
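As a hedged sketch of the access-plus-refresh-token pattern (the platform's actual claims, lifetimes, and signing configuration are not documented here; everything below is a placeholder), using PyJWT:

```python
# Illustrative sketch only: issue a short-lived JWT access token and a longer-lived
# refresh token, then mint a new access token from a valid refresh token.
import datetime
import jwt  # PyJWT

SECRET = "change-me"  # placeholder signing key

def issue_tokens(user_id: str) -> dict:
    now = datetime.datetime.utcnow()
    access = jwt.encode(
        {"sub": user_id, "type": "access", "exp": now + datetime.timedelta(minutes=15)},
        SECRET, algorithm="HS256")
    refresh = jwt.encode(
        {"sub": user_id, "type": "refresh", "exp": now + datetime.timedelta(days=7)},
        SECRET, algorithm="HS256")
    return {"access_token": access, "refresh_token": refresh}

def refresh_access_token(refresh_token: str) -> str:
    claims = jwt.decode(refresh_token, SECRET, algorithms=["HS256"])  # checks expiry
    if claims.get("type") != "refresh":
        raise ValueError("not a refresh token")
    return issue_tokens(claims["sub"])["access_token"]
```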
Version 0.0.7 (MARCH 2021)
All the Zeblok benefits, now in AWS:
Automated/scripted the deployment of the Zeblok platform within AWS, which will help us serve our customers better. How?
They don't have to move their data across datacenters or platforms, reducing a large amount of effort and billing, since they don't have to engineer the data movement in and out of AWS, which also impacts data security. Less movement of data in and out of AWS saves bandwidth cost.
Usage can be customized per individual organization's needs. Whether the requirement is 1 GPU, 10, or 100, the system can easily be scaled as and when required, increasing impact and ROI while decreasing unutilized cost.
To learn more about our AWS deployment, visit here.
Version 0.0.6 (DECEMBER 2020)
A new DATA LAKE button has been added beside the Create Workstation button, where users can use object storage provided via Zeblok for storing the files needed for their Ai-WorkStation.
A session icon (time watch) has been added to the listed active spawned workstations on the landing page, where you can see your sessions.
Activity Logs
A new tab is added in the profile section where you can see all your activities and the resources which you have visited.
DashBoard
The all-users view displays more information, such as Allowed Notebooks and Last Active.
A new Activity Logs section has been added under the users' View button, where the Super Admin can monitor user activities.
A quick summary of users is displayed, and a "View More Analytics" button has been added that leads to the resource monitoring page.
Users Section: In the user information display, clicking "View" for a particular user opens two new tabs, Activities and Spawned Images, showing that user's activity log and spawned workstations.
Resources Monitoring
The Super Admin can see the following data:
Total number of resources created, such as Workstations, User Groups, Roles, Organizations, Plans, etc.
Further breakdown pie charts of the various sub-categories of each resource, such as:
Workstations: Data Science, Algorithms, Hyperconvergence
Plans: CPU, GPU, Both
Data Center: Public, Private
A user-statistics bar graph showing all new users, new beta users, and new non-beta users created, along with all active users, active beta users, and active non-beta users.
A stacked graph of users created each day, split into beta and non-beta users.
A pie chart showing users' spawned-image data, along with a table of further data categories.
Stacked bar graphs showing workstations created per day, workstations currently running per day, and the number of CPUs/GPUs spawned with the notebooks on the respective date.
A stacked bar graph showing the count of all notebooks spawned with different plans.
For more detailed information, please refer to https://docs.google.com/document/d/1sv0aMzLhc63X5HKsAPfSufRn8kz8mH4eXnkTHMcPBEU/edit?usp=sharing
All the data can be filtered by any time interval. Data related to a particular organization is updated per that organization.
Admins can use filters for both categories: search by organization, search by duration, or both.
Workstation
In the workstation table, a new licence column has been added for attaching licences to algorithms and HPCs.
IAM Menu
A new IAM menu has been introduced, consisting of all access-management menus such as roles, resources, menus, and user groups, from which the superadmin can grant or modify access for a specific role in the Ai-MicroCloud.
This also helps customize the side navigation menu.
Version 0.0.5 (SEPTEMBER 2020)
An AlgoStore is now available for users to get detailed information about all the containers, providing knowledge and making selection easier according to their needs.
A Read More button on each algorithm provides information on capabilities, a YouTube video link, and links to other sources for more information.
Join Slack option provided for all users to join our dedicated Slack channel for any kind of clarification, bug posting and discussion with our team.
Documentation link is provided at the navbar to read and understand the design and workflows of the Ai-MicroCloud for a non ambiguous experience.
Dashboard
Admins can view all the users present on the platform, segregated according to their roles and organizations, to get a view of all MicroCloud users.
Each user is categorized into a beta and non-beta user. Admin can update the user as beta or non-beta. Beta users have many more features unlocked for them than a non beta user.
Each user is assigned a number of allowed notebooks. A beta user can only spawn that many notebooks, which depends on their subscriptions and resource-management rules.
Beta users are the ones who are already present in the computational-beta environment. For these users they do not have to buy plans. Their billing would continue as it was happening before. They can spawn a limited number of notebooks as given access by the Zeblok Admin.
Workstations
All notebook containers are displayed here as a list in a table where the superadmin can perform CRUD operations on the notebooks.
Superadmin can edit the notebook image, name, description, restrict public access, add video hyperlink and other sources, and can add the stripe product ID for charging this workstation.
Superadmin can set the related plan and default plan for the container.
Spawned Images
Workstations spawned by the users are shown in Ai-WorkSpace along with the current status of their workstations and corresponding sessions.
All the activities of each workstation can be listed by clicking on their respective sessions icon giving a full in-depth overview of the activities done on the Ai-WorkStation.
Roles
Orgadmin role added giving full access to the organizational unit which consists of an orgadmin and all org users.
Orguser Role added as the role given to the users who are invited to the Ai-MicroCloud via an invite sent out by the OrgAdmin from the dashboard.
Plans
Plans can now be edited and deleted by the zeblok superadmin giving a more dynamic resource allocation system for the Ai-WorkStations.
Data Center
Data center requests can be seen here (Requests are by default given a Pending status).
It is the responsibility of zeblok admin to check and approve those data center requests and make them active.
Once approved these data centers would be visible to the users while creating workstations. (shown on the world map)
Version 0.0.4 (JUNE 2020)
Each Notebook has some predefined plans by the admin, and among them is a default plan which is recommended by us for the selected workstation to run without any resource limitations.
Organization Admin Role Added for better handling of access control within the organization so that the onboarded users can make the microcloud their own.
Organization Admin has been given the access to view/edit/update/delete all the users under his organization.
Version 0.0.3 (MARCH 2020)
A better, fresh, and responsive user interface is now available.
Jupyterlab and Notebooks are updated to their latest Versions.
Each user will now have access to a dedicated GPU and from 4GB to 8GB of memory.
Each user will be able to spawn one server at a time.
The cull feature is now introduced: if a workstation is not utilized for two hours and the browser session has been killed, it is automatically culled and its resources released. JupyterHub runs the culling process every ten minutes and culls any user pods that have been inactive for more than one hour (a configuration sketch follows at the end of this version's notes).
A license agreement is now required on sign-up and is stored in the database.
Zeblok Readme Notebook is now available in the environment showing tutorial instructions on retrieving compute, datalake and other platform features.
Nvidia Rapids example Notebooks are available on spawning the server using Nvidia Rapids Image.
The infrastructure has now been moved within a VPN, making it more secure and easier to expand.
Slack community is added.
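The culler configuration referenced above can be expressed, as a purely illustrative sketch using the standard jupyterhub-idle-culler service (not necessarily the settings used in this release), like this:

```python
# jupyterhub_config.py -- illustrative sketch only, using jupyterhub-idle-culler.
# The timeout and interval mirror the behaviour described above.
import sys

c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "command": [
            sys.executable, "-m", "jupyterhub_idle_culler",
            "--timeout=3600",    # cull servers idle for more than one hour
            "--cull-every=600",  # run the culling check every ten minutes
        ],
    }
]
c.JupyterHub.load_roles = [
    {
        "name": "idle-culler",
        "scopes": ["list:users", "read:users:activity", "read:servers", "delete:servers"],
        "services": ["idle-culler"],
    }
]
```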
Version 0.0.2 (DECEMBER 2019)
A wide range of Ai-ML and DataScience Notebooks is added now as a library of notebooks giving users multiple options to spawn in JupyterHub like PySpark, TensorFlow and many more.
The Ai-AppStore now includes trademarked notebooks, available for use upon agreeing to the license agreement for terms and usage. For example: Explainable AI by Akai Kaeru.
Notebooks have been CUDA optimized for the NVIDIA GPU with SLI support.
Version 0.0.1 (SEPTEMBER 2019)
Users are now able to spawn JupyterHub based notebooks as Ai-Workstation within CEWIT (Center of Excellence in Wireless and Information Technology) Datacenter cluster.
Users are now allowed to spawn basic minimal notebooks with cuda optimization for NVIDIA GPU cards (RTX6000) saving enormous time for users.
JWT Authentication layer is now available on top of our JupyterHub that authenticates only the valid users to spawn in a secure manner.