Intel HPC Orchestrator Advanced Overview

November 15, 2016
2 min read

Intel HPC Orchestrator

Intel is releasing cluster management software called Intel HPC Orchestrator Advanced. The software is script/recipe based and is deployed on top of a base RHEL/CentOS or SLES operating system installation.

The software stack was released at Supercomputing 2016.

• Cluster provisioning in Intel HPC Orchestrator is based on Warewulf: http://warewulf.lbl.gov/trac
• It is a script/recipe-based installation process in which you leverage Intel HPC Orchestrator Advanced scripts and repositories to streamline the automation and deployment of cluster provisioning, workload management, health monitoring, development tools, etc.

Basic Connectivity Overview Example:

[Figure: HPC Orchestrator connectivity example]
Figure taken from the Intel(R) HPC Orchestrator Manual - shows the management network (eth1), the external network (eth0), and the optional high-speed network

Basic Installation Process Overview:

Without going into detail about the process of setting up a cluster via Intel HPC Orchestrator, the following is a summary of the steps involved. The manual included with the software covers each step in detail.

• Installation of the base operating system
• Enable the Intel HPC Orchestrator repository + the EPEL 7 repository
• Fill out the installation template included with the Intel HPC Orchestrator packages
  • This helps automate the installation, usernames, etc.
• Add provisioning services for the master node
• Add resource management / workload management
  • SLURM, for example
• Add InfiniBand/OFED/Omni-Path packages (as needed)
• Configure the internal network / Warewulf provisioning via scripts included with Intel HPC Orchestrator
• Configure Genders packages for compute imaging
• Compile/build the initial base OS image for the compute nodes via scripts/commands included with Intel HPC Orchestrator
• Install Intel HPC Orchestrator specific components
• Customize the base OS image: add authorized_keys and any packages you'd like included in the base compute node image
  • mrsh, Lustre packages, Nagios, Ganglia, etc.
• Customize any Linux/kernel settings, memlock limits, etc.
• Finish the initial base OS image by building the VNFS image with Warewulf
• Set up DHCPD/PXE & register nodes for provisioning
• Boot the nodes via PXE/DHCP into the cluster
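As a rough illustration of the Warewulf-based portion of the steps above, the flow on the master node looks something like the following sketch. The chroot path, node name, addresses, and interface are placeholders; the actual values and wrapper scripts come from the Intel HPC Orchestrator installation template and manual.

```shell
# Sketch of the Warewulf provisioning flow (values are illustrative only).

# Build a chroot that becomes the compute node base OS image
wwmkchroot centos-7 /opt/chroots/compute

# Customize the image, e.g. add root's authorized_keys for passwordless SSH
mkdir -p /opt/chroots/compute/root/.ssh
cat ~/.ssh/id_rsa.pub >> /opt/chroots/compute/root/.ssh/authorized_keys

# Build the bootstrap (kernel + initramfs) and the VNFS image from the chroot
wwbootstrap $(uname -r)
wwvnfs --chroot /opt/chroots/compute

# Register a compute node, assign it the image, and restart DHCP
wwsh node new c1 --ipaddr=10.0.0.1 --hwaddr=aa:bb:cc:dd:ee:ff -D eth1
wwsh provision set c1 --vnfs=compute --bootstrap=$(uname -r)
systemctl restart dhcpd
```

Once DHCPD/PXE is serving the image, node c1 network-boots into the cluster on its next power cycle.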

This can all be done quickly with the scripts & commands included in the Intel HPC Orchestrator Advanced manual.

After this point, you deploy development tools, compilers, Intel modules (such as Cluster Checker), MPI libraries, performance tools and libraries, and other 3rd-party modules, all of which are included in repository form for quick and easy installation onto the master node. You can then start the workload manager and launch your test job.
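A minimal first test after the development stack is installed might look like the following, assuming an environment-modules setup and SLURM as the workload manager; the exact module names depend on which compiler and MPI packages you chose.

```shell
# Illustrative post-install smoke test (module names vary by installation).
module avail                  # list available compilers, MPI stacks, tools
module load gnu mvapich2      # load a compiler + MPI toolchain

# Start the SLURM controller on the master node and run a trivial job
systemctl start slurmctld
srun -N 2 -n 2 hostname       # should print the two compute node hostnames
```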
