List of SchedMD Slurm Workload Manager Customers
Lehi, UT 84043, United States
Since 2010, our global team of researchers has been studying SchedMD Slurm Workload Manager customers around the world, aggregating a massive number of data points that form the basis of our quarterly forecast assumptions and track the rise and fall of vendors and their products.
Each quarter our research team identifies companies that have purchased SchedMD Slurm Workload Manager for Apps Development from public sources (Press Releases, Customer References, Testimonials, Case Studies and Success Stories) and proprietary sources, including the customer size, industry, location, implementation status, partner involvement, LOB Key Stakeholders and related IT decision-makers' contact details.
Companies using SchedMD Slurm Workload Manager for Apps Development include: Oak Ridge National Laboratory, a United States-based Government organization with 7000 employees and revenues of $2.6 billion; Lawrence Berkeley National Laboratory, a United States-based Government organization with 3804 employees and revenues of $900 million; Texas Advanced Computing Center, a United States-based Education organization with 207 employees and revenues of $30 million; and many others.
Contact us if you need a complete and verified list of companies using SchedMD Slurm Workload Manager, including the breakdown by Industry (21 Verticals), Geography (Region, Country, State, City), Company Size (Revenue, Employees, Assets) and related IT Decision Makers, Key Stakeholders, and business and technology executives responsible for the software purchases.
The SchedMD Slurm Workload Manager customer wins are incorporated into our Enterprise Applications Buyer Insight and Technographics Customer Database, which has over 100 data fields that detail company usage of software systems and their digital transformation initiatives. Apps Run The World wants to become your No. 1 technographic data source!
| Logo | Customer | Industry | Empl. | Revenue | Country | Vendor | Application | Category | When | SI | Insight |
|---|---|---|---|---|---|---|---|---|---|---|---|
|  | Lawrence Berkeley National Laboratory | Government | 3804 | $900M | United States | SchedMD | SchedMD Slurm Workload Manager | Apps Development | 2016 | n/a |  |
In 2016, Lawrence Berkeley National Laboratory implemented SchedMD Slurm Workload Manager as part of an Apps Development deployment to manage large-scale scientific computing workloads. The deployment was adopted and customized by the National Energy Research Scientific Computing Center (NERSC) for the Cori supercomputer at LBNL.
Configuration work concentrated on advanced scheduling and resource management capabilities provided by SchedMD Slurm Workload Manager, including fine-grained job prioritization, reservation and allocation policies, and GPU-aware scheduling to support heterogeneous compute nodes. The implementation extended standard scheduler configurations to support large-scale batch workflows and node-level resource controls suitable for high-performance computing operations.
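In Slurm, capabilities of this kind are exposed through standard batch directives. The following sketch is illustrative only; the partition, reservation, QOS, and application names are hypothetical placeholders, not taken from NERSC's actual Cori configuration:

```shell
#!/bin/bash
# Illustrative Slurm batch script. Partition, reservation, QOS, and
# application names are hypothetical placeholders.
#SBATCH --job-name=sim-run          # name shown in squeue output
#SBATCH --partition=gpu             # hypothetical GPU partition (heterogeneous nodes)
#SBATCH --reservation=team-window   # run inside a named reservation
#SBATCH --qos=high                  # QOS-based job prioritization
#SBATCH --nodes=4                   # number of compute nodes
#SBATCH --gres=gpu:4                # GPU-aware scheduling: request 4 GPUs per node
#SBATCH --time=02:00:00             # wall-clock limit, used by the backfill scheduler

srun ./sim_app                      # launch tasks on the allocated nodes
```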
Integrations emphasized coordination with storage and network infrastructure, notably integrating the DataWarp burst-buffer service and collaborating with ESnet and Cray to instrument software-defined networking for Cori. NERSC documentation and conference materials describe Slurm use and integration with storage and network features to coordinate job placement, data staging and I/O flows across the system.
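On Cray systems with DataWarp, burst-buffer allocation and data staging are expressed as `#DW` directives alongside the Slurm `#SBATCH` directives. A sketch, with hypothetical paths, capacity, and application name:

```shell
#!/bin/bash
# Illustrative Slurm + DataWarp job script. Paths, capacity, and the
# application name are hypothetical placeholders.
#SBATCH --nodes=2
#SBATCH --time=01:00:00
#DW jobdw capacity=200GB access_mode=striped type=scratch   # per-job burst-buffer allocation
#DW stage_in source=/global/project/input.dat destination=$DW_JOB_STRIPED/input.dat type=file
#DW stage_out source=$DW_JOB_STRIPED/output.dat destination=/global/project/output.dat type=file

# The job reads and writes through the burst buffer; DataWarp stages
# data in before the job starts and out after it completes.
srun ./io_app $DW_JOB_STRIPED/input.dat $DW_JOB_STRIPED/output.dat
```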
Operational scope centered on NERSC production HPC operations at Lawrence Berkeley National Laboratory in the United States, supporting scientific computing user communities on Cori. The deployment enabled new large-scale scheduling and resource management capabilities across compute, storage and network layers and is documented in technical materials that inform ongoing scheduler configuration and operational governance.
|  | Oak Ridge National Laboratory | Government | 7000 | $2.6B | United States | SchedMD | SchedMD Slurm Workload Manager | Apps Development | 2019 | n/a |  |
In 2019, Oak Ridge National Laboratory migrated portions of its Rhea and DTN compute nodes from Moab to the SchedMD Slurm Workload Manager to standardize job scheduling for HPC and scientific computing in the United States. The migration was executed at the Oak Ridge Leadership Computing Facility and targeted consolidation of scheduling across compute and data transfer node resources.
The SchedMD Slurm Workload Manager implementation leveraged core workload manager capabilities typical of high-performance computing environments, including job queuing and allocation, configurable scheduling policies, and resource-aware placement for parallel MPI and GPU workloads. These configurations were aligned with Apps Development practices for scheduler configuration, job submission interfaces, and batch orchestration.
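A typical multi-node MPI submission under Slurm looks like the following sketch; the node counts, partition name, and application are hypothetical:

```shell
#!/bin/bash
# Illustrative Slurm script for a parallel MPI job. Partition and
# application names are hypothetical placeholders.
#SBATCH --job-name=mpi-job
#SBATCH --partition=batch           # hypothetical batch partition
#SBATCH --nodes=8                   # resource-aware placement across 8 nodes
#SBATCH --ntasks-per-node=16        # 16 MPI ranks per node (128 total)
#SBATCH --time=04:00:00

srun ./mpi_app                      # srun starts one task per allocated rank
```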
Operational scope covered subsets of the Rhea compute cluster and the DTN fleet, establishing a unified scheduling layer to coordinate batch workloads and data movement tasks as the center prepared for next generation leadership systems. The deployment architecture emphasized scheduler scalability and operational flexibility while maintaining integration points with the existing compute node infrastructure.
Governance and rollout were documented in the OLCF migration notes, which captured transition sequencing and operator procedures for scheduler administration. According to OLCF documentation, the migration to SchedMD Slurm Workload Manager improved scheduler scalability and operational flexibility.
|  | Texas Advanced Computing Center | Education | 207 | $30M | United States | SchedMD | SchedMD Slurm Workload Manager | Apps Development | 2017 | n/a |  |
In 2017, Texas Advanced Computing Center deployed SchedMD Slurm Workload Manager as the job scheduler for its Stampede2 supercomputer. The deployment aligned with Stampede2 entering full production in 2017 and supported HPC and research computing across the United States. The implementation is categorized under Apps Development, with Slurm established as the primary orchestration layer for batch and parallel workloads on the system.
The SchedMD Slurm Workload Manager implementation covered core scheduler modules including job submission, queue management, advanced scheduling policies, resource allocation and monitoring. Configuration incorporated partitioning and reservations, job arrays, and backfill scheduling to optimize utilization for scientific applications. Cluster-level instrumentation for job lifecycle tracking and accounting was configured to support user-facing monitoring and administrative control.
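Job arrays and backfill interact naturally: many short, independent array tasks give the backfill scheduler small units to pack around larger jobs. A sketch with hypothetical file and application names:

```shell
#!/bin/bash
# Illustrative Slurm job-array script. File and application names are
# hypothetical placeholders.
#SBATCH --job-name=param-sweep
#SBATCH --array=0-99                # 100 independent array tasks
#SBATCH --nodes=1
#SBATCH --time=00:30:00             # short limit helps backfill pack tasks

# Each task selects its input by array index.
./sweep_app input_${SLURM_ARRAY_TASK_ID}.dat
```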
TACC user documentation identifies Slurm as the primary scheduler for job submission, monitoring and management on Stampede2 and highlights improved scheduling features for scientific applications. Operational coverage included TACC researchers and authorized national research users on Stampede2, with the scheduler governing multi-node MPI jobs and high-throughput batch workloads; no external system integrators are referenced in the documentation.
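Submission, monitoring and management on any Slurm system use the standard command-line tools; the job script name and job ID below are placeholders:

```shell
# Standard Slurm CLI workflow (job script name and job ID are placeholders).
sbatch job.sh                       # submit a batch script; prints the assigned job ID
squeue -u $USER                     # list this user's pending and running jobs
scontrol show job 12345             # detailed state of a single job
sacct -j 12345 --format=JobID,Elapsed,State   # accounting record after the job ends
scancel 12345                       # cancel a queued or running job
```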
Governance was enforced through published user documentation and scheduler policy definitions to standardize job submission and resource usage workflows across the platform. The rollout coincided with full production operations and emphasized documentation-driven onboarding, operational procedures and scheduler configuration standards for scientific computing use cases.
Buyer Intent: Companies Evaluating SchedMD Slurm Workload Manager
- United States Department of Justice, a United States-based Government organization with 115600 employees
Discover Software Buyers actively Evaluating Enterprise Applications
| Logo | Company | Industry | Employees | Revenue | Country | Evaluated |
|---|---|---|---|---|---|---|
| No data found |  |  |  |  |  |  |