AI Buyer Insights:

Cantor Fitzgerald, a Kyriba Treasury customer evaluated GTreasury

Moog, a UKG AutoTime customer evaluated Workday Time and Attendance

Michelin, an e2open customer evaluated Oracle Transportation Management

Citigroup, a VestmarkONE customer evaluated BlackRock Aladdin Wealth

Wayfair, a Korber HighJump WMS customer evaluated Manhattan WMS

Swedbank, a Temenos T24 customer evaluated Oracle Flexcube

Westpac NZ, an Infosys Finacle customer evaluated nCino Bank OS

List of SchedMD Slurm Workload Manager Customers

Customer: Lawrence Berkeley National Laboratory | Industry: Government | Employees: 3,804 | Revenue: $900M | Country: United States | Vendor: SchedMD | Application: SchedMD Slurm Workload Manager | Category: Apps Development | When: 2016 | SI: n/a
In 2016, Lawrence Berkeley National Laboratory implemented SchedMD Slurm Workload Manager as part of an Apps Development deployment to manage large-scale scientific computing workloads. The deployment was adopted and customized by the National Energy Research Scientific Computing Center (NERSC) for the Cori supercomputer at LBNL. Configuration work concentrated on the advanced scheduling and resource-management capabilities provided by SchedMD Slurm Workload Manager, including fine-grained job prioritization, reservation and allocation policies, and GPU-aware scheduling for heterogeneous compute nodes. The implementation extended standard scheduler configurations to support large-scale batch workflows and node-level resource controls suitable for high-performance computing operations. Integrations emphasized coordination with storage and network infrastructure, notably the DataWarp burst-buffer service, along with collaboration with ESnet and Cray to instrument software-defined networking for Cori. NERSC documentation and conference materials describe how Slurm integrates with these storage and network features to coordinate job placement, data staging, and I/O flows across the system. Operational scope centered on NERSC production HPC operations at Lawrence Berkeley National Laboratory in the United States, supporting scientific computing user communities on Cori. The deployment enabled new large-scale scheduling and resource-management capabilities across the compute, storage, and network layers and is documented in technical materials that inform ongoing scheduler configuration and operational governance.
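The GPU-aware scheduling and reservation policies described above map onto standard Slurm configuration mechanisms. A minimal sketch of how such capabilities are typically enabled follows; the node names, counts, and priority weights here are hypothetical illustrations, not values from NERSC's actual configuration:

```shell
# slurm.conf excerpt -- hypothetical values for illustration
GresTypes=gpu                                 # track GPUs as a generic resource (GRES)
NodeName=nid[0001-0008] CPUs=64 Gres=gpu:4    # heterogeneous nodes, each advertising 4 GPUs
PartitionName=gpu Nodes=nid[0001-0008] MaxTime=48:00:00 State=UP

# Multifactor priority: weight fair-share and QOS into job ordering
PriorityType=priority/multifactor
PriorityWeightFairshare=100000
PriorityWeightQOS=10000
```

Reservations carving out nodes for a dedicated window are then created with `scontrol create reservation ...`, and jobs request accelerators at submission time, e.g. `sbatch --gres=gpu:2 job.sh`.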

Customer: Oak Ridge National Laboratory | Industry: Government | Employees: 7,000 | Revenue: $2.6B | Country: United States | Vendor: SchedMD | Application: SchedMD Slurm Workload Manager | Category: Apps Development | When: 2019 | SI: n/a
In 2019, Oak Ridge National Laboratory migrated portions of its Rhea and DTN compute nodes from Moab to SchedMD Slurm Workload Manager to standardize job scheduling for HPC and scientific computing in the United States. The migration was executed at the Oak Ridge Leadership Computing Facility (OLCF) and targeted consolidation of scheduling across compute and data-transfer-node resources. The implementation leveraged core workload-manager capabilities typical of high-performance computing environments, including job queuing and allocation, configurable scheduling policies, and resource-aware placement for parallel MPI and GPU workloads. These configurations were aligned with Apps Development practices for scheduler configuration, job submission interfaces, and batch orchestration. Operational scope covered subsets of the Rhea compute cluster and the DTN fleet, establishing a unified scheduling layer to coordinate batch workloads and data-movement tasks as the center prepared for next-generation leadership systems. Governance and rollout were documented in the OLCF migration notes, which captured transition sequencing and operator procedures for scheduler administration. According to that documentation, the migration improved scheduler scalability and operational flexibility while maintaining integration points with the existing compute-node infrastructure.
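For context on what a Moab-to-Slurm transition like this means for operators and users, the common command equivalents are listed below. This mapping is illustrative of the two schedulers generally, not taken from OLCF's migration notes:

```shell
# Moab/Torque command        Slurm equivalent
msub job.sh            ->    sbatch job.sh              # submit a batch job
showq                  ->    squeue                     # show queued and running jobs
checkjob <jobid>       ->    scontrol show job <jobid>  # inspect a job's state
canceljob <jobid>      ->    scancel <jobid>            # cancel a job
```

Batch script directives change similarly, from `#PBS` lines to `#SBATCH` lines, which is typically the bulk of the user-facing migration effort.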

Customer: Texas Advanced Computing Center | Industry: Education | Employees: 207 | Revenue: $30M | Country: United States | Vendor: SchedMD | Application: SchedMD Slurm Workload Manager | Category: Apps Development | When: 2017 | SI: n/a
In 2017, Texas Advanced Computing Center deployed SchedMD Slurm Workload Manager as the job scheduler for its Stampede2 supercomputer. The deployment aligned with Stampede2 entering full production in 2017 and supported HPC and research computing across the United States; the implementation is categorized under Apps Development, with Slurm established as the primary orchestration layer for batch and parallel workloads on the system. The implementation covered core scheduler modules, including job submission, queue management, advanced scheduling policies, resource allocation, and monitoring. Configuration incorporated partitioning and reservations, job arrays, and backfill scheduling to optimize utilization for scientific applications, and cluster-level instrumentation for job lifecycle tracking and accounting was configured to support user-facing monitoring and administrative control. TACC user documentation identifies Slurm as the primary scheduler for job submission, monitoring, and management on Stampede2 and highlights improved scheduling features for scientific applications. Operational coverage included TACC researchers and authorized national research users on Stampede2, with the scheduler governing multi-node MPI jobs and high-throughput batch workloads; no external system integrators are referenced in the documentation. Governance was enforced through published user documentation and scheduler policy definitions that standardize job submission and resource-usage workflows across the platform. The rollout coincided with full production operations and emphasized documentation-driven onboarding, operational procedures, and scheduler configuration standards for scientific computing use cases.
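The job-array and backfill features mentioned above are standard Slurm mechanisms. A sketch of the kind of job-array submission script such a configuration supports; the partition name and application binary are hypothetical, not drawn from TACC's documentation:

```shell
#!/bin/bash
#SBATCH --job-name=param-sweep
#SBATCH --partition=normal        # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --time=00:30:00
#SBATCH --array=0-99%10           # 100 array tasks, at most 10 running concurrently

# Each array task selects its input from the array index Slurm exports
srun ./simulate --input case_${SLURM_ARRAY_TASK_ID}.dat
```

Backfill itself is enabled on the controller side with `SchedulerType=sched/backfill` in slurm.conf, which lets short jobs run in scheduling gaps left by larger pending jobs without delaying those jobs' expected start times.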

Buyer Intent: Companies Evaluating SchedMD Slurm Workload Manager

ARTW Buyer Intent uncovers actionable customer signals, identifying software buyers actively evaluating SchedMD Slurm Workload Manager. Gain ongoing access to real-time prospects and uncover hidden opportunities. Companies Actively Evaluating SchedMD Slurm Workload Manager for Apps Development include:

  1. United States Department of Justice, a United States-based Government organization with 115,600 employees

FAQ - APPS RUN THE WORLD SchedMD Slurm Workload Manager Coverage

SchedMD Slurm Workload Manager is an Apps Development solution from SchedMD.

Companies worldwide use SchedMD Slurm Workload Manager, from small firms to large enterprises across 21+ industries.

Organizations such as Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory and Texas Advanced Computing Center are recorded users of SchedMD Slurm Workload Manager for Apps Development.

Companies using SchedMD Slurm Workload Manager are most concentrated in Government and Education, with adoption spanning over 21 industries.

Companies using SchedMD Slurm Workload Manager are most concentrated in the United States, with adoption tracked across 195 countries worldwide. This global distribution highlights the popularity of SchedMD Slurm Workload Manager across the Americas, EMEA, and APAC.

Companies using SchedMD Slurm Workload Manager range from small businesses (0-100 employees, 0%) and mid-sized firms (101-1,000 employees, 33.33%) to large organizations (1,001-10,000 employees, 66.67%) and global enterprises (10,000+ employees, 0%).

Customers of SchedMD Slurm Workload Manager include firms across all revenue levels, from $0-100M through $101M-$1B and $1B-$10B to $10B+ global corporations.

Contact APPS RUN THE WORLD to access the full verified SchedMD Slurm Workload Manager customer database with detailed Firmographics such as industry, geography, revenue, and employee breakdowns as well as key decision makers in charge of Apps Development.