AI Buyer Insights:

Citigroup, a VestmarkONE customer, evaluated BlackRock Aladdin Wealth

Swedbank, a Temenos T24 customer, evaluated Oracle Flexcube

Westpac NZ, an Infosys Finacle customer, evaluated nCino Bank OS

Cantor Fitzgerald, a Kyriba Treasury customer, evaluated GTreasury

Moog, a UKG AutoTime customer, evaluated Workday Time and Attendance

Michelin, an e2open customer, evaluated Oracle Transportation Management

Wayfair, a Korber HighJump WMS customer, evaluated Manhattan WMS

List of Google Cloud Managed Lustre Customers

Customer Industry Employees Revenue Country Vendor Application Category When SI Insight
Afeela Automotive 160 $67M Japan Google Google Cloud Managed Lustre Cloud Storage 2025 n/a
In 2025, Afeela implemented Google Cloud Managed Lustre as a Cloud Storage component to accelerate AI model training for AFEELA Intelligent Drive. The deployment in Japan targets ADAS and perception model workloads and is positioned to provide higher throughput and faster checkpoint performance for training pipelines. The implementation leverages Google Cloud Managed Lustre's core capabilities, including a parallel POSIX file system architecture, scalable metadata handling, and high-throughput data striping, to support large-batch training and frequent checkpoint operations. Configuration emphasis is on sustained read and write throughput, checkpoint snapshotting, and integration with model training workflows to reduce I/O bottlenecks during distributed training. Operationally, the service is integrated into Google Cloud compute-based training pipelines and used by the Afeela engineering and data science teams responsible for AFEELA Intelligent Drive model development. Google Cloud and DDN testimonials are cited for similar ADAS-oriented deployments in Japan, and Afeela reports roughly 3x faster model training compared with other Google Cloud storage options, reflecting improved throughput and checkpoint performance for production AI training workloads.
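At the file-system level, the striping and checkpoint tuning described above corresponds to standard Lustre client operations. The sketch below is illustrative only: the mount target, file system name, and paths are hypothetical, and an actual Managed Lustre instance is provisioned and mounted per Google Cloud's own documentation.

```shell
# Mount the Lustre file system on a training node
# (the MGS address and fsname here are placeholders).
sudo mount -t lustre 10.0.0.2@tcp:/lustrefs /mnt/lustre

# Create a checkpoint directory and stripe it across all available storage
# targets (-c -1) with a 4 MiB stripe size, so large checkpoint writes are
# spread over many targets in parallel.
sudo mkdir -p /mnt/lustre/checkpoints
sudo lfs setstripe -c -1 -S 4M /mnt/lustre/checkpoints

# Inspect the striping layout that new files in this directory will inherit.
lfs getstripe /mnt/lustre/checkpoints
```

Files created under a directory inherit its stripe layout, which is why the layout is set once on the checkpoint directory rather than per file.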
Resemble AI Communications 2400 $600M United States Google Google Cloud Managed Lustre Cloud Storage 2025 n/a
In 2025, Resemble AI implemented Google Cloud Managed Lustre as Cloud Storage to eliminate I/O bottlenecks that were constraining multi-GPU distributed training for generative voice models. The deployment focused on high-throughput data pipelines, enabling processing of datasets measured in the hundreds of terabytes that feed model development and R&D workflows. The implementation used the DDN-based, first-party Google Cloud Managed Lustre service to provide a parallel file system with the scalable throughput that large-scale AI training requires of its Cloud Storage layer. Configuration emphasized read-optimized, parallel I/O to sustain data delivery to GPU clusters, and the environment was provisioned to allow rapid expansion of capacity from 200 TB to 500 TB without downtime or reconfiguration. Integrations centered on Google Cloud AI infrastructure and the company's multi-GPU training clusters, aligning storage performance with distributed training schedules and data staging processes. Operational scope covered Resemble AI's model training and research functions, with storage access patterns tuned for real-time model iteration and pipeline orchestration across training nodes. Process changes reduced infrastructure preparation time that previously required multi-day setup, shifting to near-immediate readiness for training runs and shortening iteration cycles for engineering and research teams. Governance emphasized centralized storage management and data access controls to support reproducible training, while operational ownership was aligned to the AI engineering organization to coordinate capacity scaling and performance tuning.
Outcomes reported by Resemble AI include sustained 100% GPU utilization with no idle cycles waiting for data, accelerated model development and iteration, and seamless scalability of Google Cloud Managed Lustre as Cloud Storage from 200 TB to 500 TB, enabling faster delivery of next generation generative voice solutions.
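The access pattern behind these outcomes, overlapping many shard reads so GPU consumers never idle on I/O, can be sketched with a simple read-ahead loader. Everything here is hypothetical illustration: the shard naming and sizes are invented, and a temporary directory stands in for a /mnt/lustre-style parallel file system mount.

```python
# Sketch: read-ahead shard loading to keep accelerators fed (hypothetical
# layout; a temp directory stands in for a parallel file system mount).
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_shard(path):
    # One sequential bulk read per shard; on Lustre, striped layouts let many
    # such reads proceed in parallel across storage targets.
    with open(path, "rb") as f:
        return f.read()

def prefetch_shards(paths, workers=8):
    # Overlap reads so each shard is staged before the consumer asks for it.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        yield from pool.map(read_shard, paths)

# Demo: write four 1 KiB shards to the stand-in mount, then stream them back.
with tempfile.TemporaryDirectory() as mount:
    paths = []
    for i in range(4):
        p = os.path.join(mount, f"shard-{i:05d}.bin")
        with open(p, "wb") as f:
            f.write(bytes([i]) * 1024)
        paths.append(p)
    total_bytes = sum(len(buf) for buf in prefetch_shards(paths))

print(total_bytes)  # 4096
```

In a real pipeline the consumer would be a training loop rather than a sum, but the structure is the same: the thread pool keeps the next shards in flight while the current one is being processed.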
Salesforce Professional Services 76453 $37.9B United States Google Google Cloud Managed Lustre Cloud Storage 2025 n/a
In 2025, Salesforce integrated Google Cloud Managed Lustre to support high-throughput AI inference and model serving needs within the Cloud Storage tier used by Salesforce AI Research. Google Cloud Managed Lustre was provisioned as a managed parallel file system to remove typical onboarding bottlenecks for inference workloads and to provide a production-ready storage layer for large model artifacts and streaming data access. The implementation emphasizes the high-throughput parallel storage capability of Google Cloud Managed Lustre, with configuration focused on sustained bandwidth and POSIX-compatible file access for model checkpoints and shard distributions. Deployment architecture places Managed Lustre alongside Vertex AI training clusters, enabling direct mount and low-latency file access patterns that keep B200 GPUs fully saturated during inference and nearline training phases. Integrations are centered on Vertex AI training clusters for model training and inference orchestration, with operational ownership by Salesforce AI Research and supporting ML engineering and infrastructure teams. The integration covers serving pipelines and inference workflows for large language models, aligning Cloud Storage performance characteristics with GPU compute scheduling and data staging practices. Governance changes include standardized onboarding workflows that provision Managed Lustre volumes in tandem with Vertex AI cluster allocation, reducing setup friction for new inference workloads. The usage is documented in Google Cloud customer material and the Managed Lustre product page, and the documented result is improved LLM inference throughput and latency through better GPU utilization.
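The direct-mount, POSIX-style access to model shards described in these deployments can be sketched with memory mapping, which lets a serving process page in only the byte ranges it touches instead of reading a large shard up front. The shard name and offset below are hypothetical, and a temporary directory stands in for the Managed Lustre mount.

```python
# Sketch: on-demand, POSIX-style access to a model shard via mmap
# (hypothetical shard name and offset; a temp directory stands in
# for a Managed Lustre mount).
import mmap
import os
import tempfile

def map_shard(path):
    # Map read-only; the OS pages bytes in lazily from the file system.
    # mmap duplicates the descriptor, so closing the file object is safe.
    with open(path, "rb") as f:
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

# Demo: write a 4 KiB shard with a marker at a known offset, then map it
# and touch only those four bytes.
with tempfile.TemporaryDirectory() as mount:
    shard = os.path.join(mount, "model-00001-of-00004.bin")
    with open(shard, "wb") as f:
        f.write(b"\x00" * 4096 + b"HEAD")
    view = map_shard(shard)
    marker = bytes(view[4096:4100])
    view.close()

print(marker)  # b'HEAD'
```

On a parallel file system the same pattern applies, with the added benefit that page-in reads for different shards can be served by different storage targets.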

Buyer Intent: Companies Evaluating Google Cloud Managed Lustre

ARTW Buyer Intent uncovers actionable customer signals, identifying software buyers actively evaluating Google Cloud Managed Lustre. Gain ongoing access to real-time prospects and uncover hidden opportunities.

Discover Software Buyers Actively Evaluating Enterprise Applications

No companies are currently tracked as evaluating Google Cloud Managed Lustre.
FAQ - APPS RUN THE WORLD Google Cloud Managed Lustre Coverage

Google Cloud Managed Lustre is a Cloud Storage solution from Google.

Companies worldwide use Google Cloud Managed Lustre, from small firms to large enterprises across 21+ industries.

Organizations such as Salesforce, Resemble AI and Afeela are recorded users of Google Cloud Managed Lustre for Cloud Storage.

Companies using Google Cloud Managed Lustre are most concentrated in Professional Services, Communications and Automotive, with adoption spanning over 21 industries.

Companies using Google Cloud Managed Lustre are most concentrated in the United States and Japan, with adoption tracked across 195 countries worldwide. This global distribution highlights the popularity of Google Cloud Managed Lustre across the Americas, EMEA, and APAC.

Companies using Google Cloud Managed Lustre range from small businesses (0-100 employees: 0%) to mid-sized firms (101-1,000 employees: 33.33%), large organizations (1,001-10,000 employees: 33.33%), and global enterprises (10,000+ employees: 33.33%).

Customers of Google Cloud Managed Lustre include firms across all revenue levels — from $0-100M, to $101M-$1B, $1B-$10B, and $10B+ global corporations.

Contact APPS RUN THE WORLD to access the full verified Google Cloud Managed Lustre customer database with detailed Firmographics such as industry, geography, revenue, and employee breakdowns as well as key decision makers in charge of Cloud Storage.