List of Google Cloud Managed Lustre Customers
Mountain View, CA 94043,
United States
Since 2010, our global team of researchers has been studying Google Cloud Managed Lustre customers around the world, aggregating massive amounts of data points that form the basis of our forecast assumptions and track, quarter by quarter, the rise and fall of vendors and their products.
Each quarter our research team identifies companies that have purchased Google Cloud Managed Lustre for Cloud Storage from public sources (press releases, customer references, testimonials, case studies and success stories) and proprietary sources, capturing customer size, industry, location, implementation status, partner involvement, LOB key stakeholders, and contact details for the related IT decision-makers.
Companies using Google Cloud Managed Lustre for Cloud Storage include: Salesforce, a United States based Professional Services organisation with 76,453 employees and revenues of $37.90 billion; Resemble AI, a United States based Communications organisation with 2,400 employees and revenues of $600.0 million; Afeela, a Japan based Automotive organisation with 160 employees and revenues of $67.0 million; and many others.
Contact us if you need a complete and verified list of companies using Google Cloud Managed Lustre, including breakdowns by industry (21 verticals), geography (region, country, state, city), company size (revenue, employees, assets), and the related IT decision-makers, key stakeholders, and business and technology executives responsible for the software purchases.
The Google Cloud Managed Lustre customer wins are being incorporated into our Enterprise Applications Buyer Insight and Technographics Customer Database, which has over 100 data fields detailing each company's usage of software systems and its digital transformation initiatives. Apps Run The World wants to become your No. 1 technographic data source!
| Customer | Industry | Empl. | Revenue | Country | Application | Category | When | SI |
|---|---|---|---|---|---|---|---|---|
| Afeela | Automotive | 160 | $67M | Japan | Google Cloud Managed Lustre | Cloud Storage | 2025 | n/a |
In 2025, Afeela implemented Google Cloud Managed Lustre as a Cloud Storage component to accelerate AI model training for AFEELA Intelligent Drive. The deployment in Japan targets ADAS and perception model workloads and is positioned to provide higher throughput and faster checkpoint performance for training pipelines.
The implementation leverages Google Cloud Managed Lustre's core capabilities, including a parallel POSIX file system architecture, scalable metadata handling, and high-throughput data striping to support large-batch training and frequent checkpoint operations. Configuration emphasizes sustained read and write throughput, checkpoint snapshotting, and integration with model training workflows to reduce I/O bottlenecks during distributed training.
Operationally, the service is integrated into Google Cloud compute-based training pipelines and used by the Afeela engineering and data science teams responsible for AFEELA Intelligent Drive model development. Google Cloud and DDN testimonials are cited for similar ADAS-oriented deployments in Japan, and Afeela reports roughly 3x faster model training compared with other Google Cloud storage options, reflecting improved throughput and checkpoint performance for production AI training workloads.
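The checkpointing pattern described above can be sketched as follows. This is a minimal illustration, not Afeela's actual pipeline: the shard naming, worker count, and payload sizes are assumptions. On a parallel file system such as Lustre, each file is striped across storage targets, so writing checkpoint shards concurrently keeps aggregate throughput high and shortens the pause a training job takes to checkpoint:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def write_shard(checkpoint_dir: str, shard_id: int, payload: bytes) -> str:
    """Write one checkpoint shard; on Lustre the file is striped across OSTs."""
    path = os.path.join(checkpoint_dir, f"ckpt-shard-{shard_id:04d}.bin")
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # make the shard durable before reporting success
    return path

def checkpoint(checkpoint_dir: str, shards: list) -> list:
    """Write all shards in parallel so storage, not the trainer, sets the pace."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda args: write_shard(checkpoint_dir, *args),
                             enumerate(shards)))

if __name__ == "__main__":
    # A real mount point might look like "/mnt/lustre/ckpts" (hypothetical);
    # a temporary directory stands in for it here.
    with tempfile.TemporaryDirectory() as d:
        paths = checkpoint(d, [b"x" * 1024 for _ in range(4)])
        print(len(paths))  # 4
```

The `os.fsync` call is the conservative choice: it trades a little latency per shard for the guarantee that a reported checkpoint actually survives a node failure.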
| Resemble AI | Communications | 2,400 | $600M | United States | Google Cloud Managed Lustre | Cloud Storage | 2025 | n/a |
In 2025, Resemble AI implemented Google Cloud Managed Lustre as Cloud Storage to eliminate I/O bottlenecks that were constraining multi-GPU distributed training for generative voice models. The deployment focused on high-throughput data pipelines, enabling processing of datasets measured in the hundreds of terabytes that feed model development and R&D workflows.
The implementation used the DDN-built, first-party Google Cloud Managed Lustre service to provide a parallel file system and scalable block and object throughput consistent with Cloud Storage requirements for large-scale AI training. Configuration emphasized read-optimized, parallel I/O to sustain data delivery to GPU clusters, and the environment was provisioned to allow rapid expansion of capacity from 200 TB to 500 TB without downtime or reconfiguration.
Integrations centered on Google Cloud AI infrastructure and the company's multi-GPU training clusters, aligning storage performance with distributed training schedules and data-staging processes. Operational scope covered Resemble AI's model training and research functions, with storage access patterns tuned for real-time model iteration and pipeline orchestration across training nodes.
Process changes reduced infrastructure preparation time that previously required multi-day setup, shifting to near-immediate readiness for training runs and shortening iteration cycles for engineering and research teams. Governance emphasized centralized storage management and data access controls to support reproducible training, while operational ownership was aligned to the AI engineering organization to coordinate capacity scaling and performance tuning.
Outcomes reported by Resemble AI include sustained 100% GPU utilization with no idle cycles waiting for data, accelerated model development and iteration, and seamless scaling of Google Cloud Managed Lustre as Cloud Storage from 200 TB to 500 TB, enabling faster delivery of next-generation generative voice solutions.
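The "no idle cycles waiting for data" outcome depends on staging batches ahead of the compute step, so the GPUs consume prefetched data while the next reads are in flight. A minimal producer-consumer sketch of that overlap is below; the loader callable, queue depth, and batch naming are illustrative assumptions, not details of Resemble AI's pipeline:

```python
import queue
import threading

def prefetch(load_batch, num_batches: int, depth: int = 4):
    """Yield batches while a background thread reads ahead from storage,
    so the consumer (e.g. a GPU training step) never waits on I/O."""
    q = queue.Queue(maxsize=depth)
    sentinel = object()  # marks end of stream

    def producer():
        for i in range(num_batches):
            q.put(load_batch(i))  # blocks once `depth` batches are staged
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            break
        yield item

# Usage: any callable that reads batch i from storage can be plugged in.
batches = list(prefetch(lambda i: f"batch-{i}", num_batches=3))
```

The bounded queue is the key design choice: it caps staging memory while still letting reads run `depth` batches ahead of compute.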
| Salesforce | Professional Services | 76,453 | $37.9B | United States | Google Cloud Managed Lustre | Cloud Storage | 2025 | n/a |
In 2025, Salesforce integrated Google Cloud Managed Lustre to support high-throughput AI inference and model serving needs within the Cloud Storage tier used by Salesforce AI Research. Google Cloud Managed Lustre was provisioned as a managed parallel file system to remove typical onboarding bottlenecks for inference workloads and to provide a production-ready storage layer for large model artifacts and streaming data access.
The implementation emphasizes the high-throughput parallel storage capability of Google Cloud Managed Lustre, with configuration focused on sustained bandwidth and POSIX-compatible file access for model checkpoints and shard distributions. Deployment architecture places Managed Lustre alongside Vertex AI training clusters, enabling direct-mount, low-latency file access patterns that keep B200 GPUs fully saturated during inference and nearline training phases.
Integrations are centered on Vertex AI training clusters for model training and inference orchestration, with operational ownership by Salesforce AI Research and supporting ML engineering and infrastructure teams. The integration covers serving pipelines and inference workflows for large language models, aligning Cloud Storage performance characteristics with GPU compute scheduling and data-staging practices.
Governance changes include standardized onboarding workflows that provision Managed Lustre volumes in tandem with Vertex cluster allocation, reducing setup friction for new inference workloads. The usage is documented in Google Cloud customer material and the Managed Lustre product page, and the documented result is improved LLM inference throughput and latency through better GPU utilization.
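POSIX-compatible, direct-mount access means model artifacts on the file system can be read like local files, e.g. memory-mapped for low-latency partial reads rather than streamed whole through an object-storage API. A minimal sketch follows; the file path and header size are illustrative assumptions and nothing here is specific to Salesforce's deployment:

```python
import mmap
import os
import tempfile

def read_header(path: str, nbytes: int) -> bytes:
    """Memory-map a model artifact and read its first bytes; the OS pages in
    only the touched region instead of copying the whole file."""
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        return m[:nbytes]

if __name__ == "__main__":
    # On a mounted parallel file system the path might be
    # "/mnt/lustre/model.bin" (hypothetical); a temp file stands in here.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"MODEL-WEIGHTS-EXAMPLE")
        path = f.name
    print(read_header(path, 5))
    os.unlink(path)
```

This pattern is why a POSIX mount helps serving workloads: a server can inspect or load a slice of a multi-gigabyte checkpoint without first staging the full object locally.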
Buyer Intent: Companies Evaluating Google Cloud Managed Lustre
Discover Software Buyers actively Evaluating Enterprise Applications
| Logo | Company | Industry | Employees | Revenue | Country | Evaluated |
|---|---|---|---|---|---|---|
| No data found | | | | | | |