AI Buyer Insights:

Swedbank, a Temenos T24 customer evaluated Oracle Flexcube

Michelin, an e2open customer evaluated Oracle Transportation Management

Moog, a UKG AutoTime customer evaluated Workday Time and Attendance

Cantor Fitzgerald, a Kyriba Treasury customer evaluated GTreasury

Westpac NZ, an Infosys Finacle customer evaluated nCino Bank OS

Citigroup, a VestmarkONE customer evaluated BlackRock Aladdin Wealth

Wayfair, a Korber HighJump WMS customer just evaluated Manhattan WMS


List of Google Cloud Hyperdisk ML Customers

Customer Industry Employees Revenue Country Vendor Application Category When SI Insight
Abridge Professional Services 400 $43M United States Google Google Cloud Hyperdisk ML Cloud Storage 2025 n/a
In 2025, Abridge implemented Google Cloud Hyperdisk ML in Cloud Storage to accelerate model-weight hydration for healthcare clinical-documentation AI workloads in the United States. The deployment targeted real-time clinical note generation pipelines, using Google Cloud Hyperdisk ML to reduce pod-initialization overhead and improve throughput for production inference. The implementation centralized model-artifact hosting and prefetching, configuring Hyperdisk ML to stream model weights into Google Kubernetes Engine pods at startup. Configuration focused on model-weight hydration and pod-initialization optimization, using the GKE Volume Populator pattern to stage data and cut cold-start times for containerized ML services. Operationally, the rollout was scoped to the clinical-documentation and ML-inference teams supporting live note generation, integrating Hyperdisk ML with GKE orchestration and the GKE Volume Populator data flow. The architecture emphasized pod-based container orchestration, persistent block storage for large model artifacts, and automated hydration workflows that align storage behavior with the inference lifecycle. Outcomes reported in the Google Cloud GKE Volume Populator blog post include model-loading speed improvements of up to roughly 76 percent and lower pod-initialization times, which Abridge cited as improving throughput for its clinical-documentation AI workloads.
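The Volume Populator flow described above amounts to a pair of Kubernetes objects: a data source pointing at the Cloud Storage location that holds the model artifacts, and a PersistentVolumeClaim that GKE pre-populates onto a Hyperdisk ML volume before pods mount it. The sketch below is a minimal illustration based on the GKE Volume Populator feature; the bucket path, service-account name, and sizes are placeholders, and the GCPDataSource field names should be checked against the current GKE documentation.

```yaml
# Data source pointing at the bucket holding model weights
# (GCPDataSource is part of the GKE Volume Populator feature;
#  names and values below are illustrative, not from the article).
apiVersion: datalayer.gke.io/v1
kind: GCPDataSource
metadata:
  name: model-weights-source
spec:
  cloudStorage:
    serviceAccountName: hydration-sa          # hypothetical service account
    uri: gs://example-model-artifacts/llm-v1  # hypothetical bucket path
---
# PVC that GKE populates from the data source onto a Hyperdisk ML volume.
# Pods mounting it read pre-hydrated weights instead of downloading from
# Cloud Storage at startup, which is what cuts pod-initialization time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-weights
spec:
  accessModes: [ReadOnlyMany]      # Hyperdisk ML supports many read-only attachments
  storageClassName: hyperdisk-ml   # assumes a hyperdisk-ml StorageClass exists
  resources:
    requests:
      storage: 200Gi
  dataSourceRef:
    apiGroup: datalayer.gke.io
    kind: GCPDataSource
    name: model-weights-source
```

Once the claim is bound and populated, every inference pod that mounts it starts from locally attached block storage rather than a cold pull from the bucket.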
HubX Media 290 $61M Turkey Google Google Cloud Hyperdisk ML Cloud Storage 2025 n/a
In 2025, HubX deployed Google Cloud Hyperdisk ML to serve large AI models and accelerate pod startup for high-concurrency inference workloads in Turkey. The deployment uses Cloud Storage capabilities to support AI/ML inference workloads running on containerized serving infrastructure, aligning Google Cloud Hyperdisk ML with production model serving for a media operator. The implementation emphasized large-model staging and accelerated model-load workflows, configuring Hyperdisk ML to reduce model load times and speed pod initialization for GPU-backed serving. Hyperdisk ML was provisioned to host model artifacts and serve high-throughput reads, enabling faster model residency on nodes and lower-latency model loads consistent with Cloud Storage functional patterns. Operational integration centered on Google Kubernetes Engine-based serving, with orchestration adjusted to take advantage of faster pod startup and rapid model staging on storage tiers. The rollout targeted inference engineering and platform operations in the Turkey region, coordinating container-image standardization and serving orchestration to exploit faster initialization under high-concurrency traffic. As reported in the Google Cloud case study, the implementation delivered approximately 30x faster pod initialization and measurable GPU cost savings by reducing idle GPU time and shortening model-load windows. Governance emphasized standardized model-artifact management and serving workflows to keep initialization behavior consistent across production clusters.
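Serving deployments like this one typically expose Hyperdisk ML to GKE through a StorageClass on the Compute Engine Persistent Disk CSI driver. This is a minimal sketch assuming that driver; the class name is illustrative and not taken from the case study.

```yaml
# StorageClass for Hyperdisk ML via the GKE Persistent Disk CSI driver.
# PVCs referencing this class are provisioned as Hyperdisk ML volumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-ml
provisioner: pd.csi.storage.gke.io
parameters:
  type: hyperdisk-ml
volumeBindingMode: WaitForFirstConsumer
```

Because a Hyperdisk ML volume can be attached read-only to many nodes at once, a single volume holding the model artifacts can feed every serving replica; pod startup then no longer waits on a per-pod model download, which is the mechanism behind the faster initialization described above.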
Resemble AI Communications 2400 $600M United States Google Google Cloud Hyperdisk ML Cloud Storage 2024 n/a
In 2024, Resemble AI implemented Google Cloud Hyperdisk ML. The deployment used Hyperdisk ML as a Cloud Storage layer to support high-throughput model training and production serving, aligning the storage architecture with the company's machine learning workloads. The implementation combined components of Google Cloud's AI Hypercomputer, including A3 VMs and Hyperdisk ML, together with Local SSDs and N2 VMs. Hyperdisk ML volumes were provisioned to host large static training datasets, Local SSDs were dedicated to smaller dynamic datasets needing read/write access, and N2 VMs handled upstream data cleaning and transformation that fed prepared datasets into Hyperdisk ML for training on A3 accelerators. Model weights are persisted in Cloud Storage and mounted for inference using Cloud Storage FUSE, while Vertex AI orchestrates fine-tuning jobs, often leveraging Spot instances and committed use discounts on Compute Engine for longer-running workloads. Operational scope centered on Resemble AI's engineering organization, with the infrastructure designed to scale from heavy retraining jobs involving 70 terabytes of data to high-concurrency serving. Serving runs inference on a mix of A3 and G2 instances to balance performance and cost, and the team integrated Gemini and Gemma into labeling and deepfake-detection workflows to deepen model validation. Setup and ongoing management emphasized console-driven provisioning and simplified operational processes so engineering could focus on modeling rather than low-level infrastructure plumbing. Documented outcomes are concrete and infrastructure-focused: Hyperdisk ML eliminated a primary data-to-accelerator bottleneck and, paired with A3 VMs, doubled epoch throughput, in one example reducing a week-long training job to approximately one hour.
Resemble reported sustained improvements in iteration velocity, a shift in time allocation away from data preparation toward modeling, the ability to scale beyond 100 inference requests per second with sub-250-millisecond response times in many cases, and a faster path from prototype to production.
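The Cloud Storage FUSE mount mentioned in this deployment is normally done on GKE through the Cloud Storage FUSE CSI driver: the pod is annotated so the gcsfuse sidecar is injected, and the bucket is mounted as a CSI volume. A minimal sketch, with the bucket, image, and service-account names as placeholders rather than details from the case study:

```yaml
# Inference pod mounting a Cloud Storage bucket of model weights
# via the Cloud Storage FUSE CSI driver on GKE.
apiVersion: v1
kind: Pod
metadata:
  name: inference
  annotations:
    gke-gcsfuse/volumes: "true"   # injects the Cloud Storage FUSE sidecar
spec:
  serviceAccountName: inference-sa            # hypothetical; needs bucket read access
  containers:
    - name: server
      image: example/inference-server:latest  # placeholder image
      volumeMounts:
        - name: model-weights
          mountPath: /models
          readOnly: true
  volumes:
    - name: model-weights
      csi:
        driver: gcsfuse.csi.storage.gke.io
        readOnly: true
        volumeAttributes:
          bucketName: example-model-weights   # placeholder bucket
```

The serving process then reads weights from /models as ordinary files while the sidecar streams them from the bucket, which is the pattern behind mounting persisted weights for inference.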

Buyer Intent: Companies Evaluating Google Cloud Hyperdisk ML

ARTW Buyer Intent uncovers actionable customer signals, identifying software buyers actively evaluating Google Cloud Hyperdisk ML. Gain ongoing access to real-time prospects and uncover hidden opportunities.

Discover Software Buyers actively Evaluating Enterprise Applications

Company Industry Employees Revenue Country Evaluated
No data found
FAQ - APPS RUN THE WORLD Google Cloud Hyperdisk ML Coverage

Google Cloud Hyperdisk ML is a Cloud Storage solution from Google.

Companies worldwide use Google Cloud Hyperdisk ML, from small firms to large enterprises across 21+ industries.

Organizations such as Resemble AI, HubX and Abridge are recorded users of Google Cloud Hyperdisk ML for Cloud Storage.

Companies using Google Cloud Hyperdisk ML are most concentrated in Communications, Media and Professional Services, with adoption spanning over 21 industries.

Companies using Google Cloud Hyperdisk ML are most concentrated in the United States and Turkey, with adoption tracked across 195 countries worldwide. This global distribution highlights the popularity of Google Cloud Hyperdisk ML across the Americas, EMEA, and APAC.

Companies using Google Cloud Hyperdisk ML range from small businesses (0-100 employees, 0%), to mid-sized firms (101-1,000 employees, 66.67%), large organizations (1,001-10,000 employees, 33.33%), and global enterprises (10,000+ employees, 0%).

Customers of Google Cloud Hyperdisk ML include firms across all revenue levels — from $0-100M, to $101M-$1B, $1B-$10B, and $10B+ global corporations.

Contact APPS RUN THE WORLD to access the full verified Google Cloud Hyperdisk ML customer database with detailed Firmographics such as industry, geography, revenue, and employee breakdowns as well as key decision makers in charge of Cloud Storage.