Provision, execute, and monitor batch and HPC container workloads on Azure Batch

Batch Shipyard

[dashboard screenshot]

Batch Shipyard is a tool to help provision, execute, and monitor container-based batch processing and HPC workloads on Azure Batch. Batch Shipyard supports both Docker and Singularity containers! No experience with the Azure Batch SDK is needed; run your containers with easy-to-understand configuration files. All Azure regions are supported, including non-public Azure regions.

Additionally, Batch Shipyard provides the ability to provision and manage entire standalone remote file systems (storage clusters) in Azure, independent of any integrated Azure Batch functionality.
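Jobs are described declaratively in those configuration files. As an illustrative sketch only, the snippet below writes a minimal jobs configuration; the job id, container image, and command are hypothetical placeholders, and the authoritative schema lives in the Batch Shipyard configuration documentation:

```shell
# Sketch of a minimal jobs configuration file (hypothetical job id, image,
# and command; consult the Batch Shipyard docs for the full schema).
mkdir -p config
cat > config/jobs.yaml << 'EOF'
job_specifications:
- id: myjob
  tasks:
  - docker_image: alpine
    command: /bin/sh -c "echo hello world"
EOF
```

Alongside the credentials, global, and pool configuration files, such a job could then be submitted with shipyard jobs add.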

Major Features

  • Support for multiple container runtimes including Docker, Singularity, and Kata Containers tuned for Azure Batch compute nodes
  • Automated deployment of container images required for tasks to compute nodes
  • Comprehensive data movement support: move data easily between locally accessible storage systems, remote filesystems, Azure Blob or File Storage, and compute nodes
  • Support for serverless execution binding with Azure Functions
  • Federation support: enables unified, constraint-based scheduling to collections of heterogeneous pools, including across multiple Batch accounts and Azure regions
  • Automated, integrated resource monitoring with Prometheus and Grafana for Batch pools and RemoteFS storage clusters
  • Standalone Remote Filesystem Provisioning with integration to auto-link these filesystems to compute nodes, with support for NFS and GlusterFS distributed network file systems
  • Automatic shared data volume support for linking to Remote Filesystems as provisioned by Batch Shipyard, Azure File via SMB, Azure Blob via blobfuse, GlusterFS provisioned directly on compute nodes, and custom Linux mount support (fstab)
  • Support for automated on-demand, per-job distributed scratch space provisioning via BeeGFS BeeOND
  • Support for simple, scenario-based pool autoscale and autopool to dynamically scale and control computing resources on-demand
  • Support for Task Factories with the ability to generate tasks based on parametric (parameter) sweeps, randomized input, file enumeration, replication, and custom Python code-based generators
  • Transparent support for GPU-accelerated container applications on both Docker and Singularity on Azure N-Series VM instances
  • Support for multi-instance tasks to accommodate MPI and multi-node cluster applications packaged as Docker or Singularity containers on compute pools with automatic job completion and task termination
  • Transparent assistance for running Docker and Singularity containers utilizing InfiniBand/RDMA for MPI on HPC low-latency Azure VM instances, including A-Series, H-Series, and N-Series
  • Seamless integration with Azure Batch job, task, and file concepts along with full pass-through of the Azure Batch API to containers executed on compute nodes
  • Support for Azure Batch task dependencies allowing complex processing pipelines and DAGs
  • Support for merge or final task specification that automatically depends on all other tasks within the job
  • Support for job schedules and recurrences for automatic execution of tasks at set intervals
  • Support for live job and job schedule migration between pools
  • Support for credential management through Azure KeyVault
  • Support for Docker Registries including Azure Container Registry, other Internet-accessible public and private registries, and support for the Singularity Hub Container Registry
  • Support for Low Priority Compute Nodes
  • Support for deploying Batch compute nodes into a specified Virtual Network
  • Automatic setup of SSH or RDP users to all nodes in the compute pool and optional creation of SSH tunneling scripts to Docker Hosts on compute nodes
  • Support for custom host images
  • Support for Windows Containers on compliant Windows compute node pools with the ability to activate Azure Hybrid Use Benefit if applicable
  • Accelerated Docker and Singularity image deployment at scale to compute pools consisting of a large number of VMs via private peer-to-peer distribution of container images among the compute nodes
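The lifecycle implied by the features above can be sketched with the core CLI verbs. This is a hedged outline, not a definitive walkthrough: it assumes the credentials, global, pool, and jobs YAML configuration files are present in the directory passed via --configdir, and the commands operate against a live Batch account, so they are not runnable offline.

```shell
# Hypothetical end-to-end flow against an Azure Batch account; assumes
# credentials.yaml, config.yaml, pool.yaml, and jobs.yaml exist in config/.
shipyard pool add --configdir config                    # provision compute nodes
shipyard jobs add --configdir config --tail stdout.txt  # submit tasks, stream output
shipyard jobs del --configdir config                    # clean up the job
shipyard pool del --configdir config                    # deallocate the pool
```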

Installation

Azure Cloud Shell

Batch Shipyard is integrated directly into Azure Cloud Shell and you can execute any Batch Shipyard workload using your web browser or the Microsoft Azure Android and iOS app.

Simply request a Cloud Shell session and type shipyard to invoke the CLI; no installation is required. Try Batch Shipyard now from your browser: Launch Cloud Shell
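For example, assuming configuration files have already been uploaded to a directory in the Cloud Shell file share (the path below is a hypothetical placeholder), a session might look like:

```shell
# shipyard is pre-installed in Cloud Shell; point it at a configuration
# directory via the SHIPYARD_CONFIGDIR environment variable (path is
# hypothetical), then invoke a command against your Batch account.
export SHIPYARD_CONFIGDIR="$HOME/batch-shipyard-config"
shipyard pool add
```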

Local Installation

Please see the installation guide for more information regarding the various local installation options and requirements.

Documentation and Recipes

Please refer to the Batch Shipyard Documentation on Read the Docs.

Visit the Batch Shipyard Recipes section for various sample container workloads using Azure Batch and Batch Shipyard.

Batch Shipyard Compute Node Host OS Support

Batch Shipyard is currently compatible with popular Azure Batch supported Marketplace Linux VMs, compliant Linux custom images, and native Azure Batch Windows Server with Containers VMs. Please see the platform image support documentation for more information specific to Batch Shipyard support of compute node host operating systems.

Change Log

Please see the Change Log for project history.


Contributing

Please see this project's Code of Conduct and Contributing guidelines.
