List of Google Cloud Hyperdisk ML Customers
Since 2010, our global team of researchers has been studying Google Cloud Hyperdisk ML customers around the world, aggregating a massive number of data points that form the basis of our forecast assumptions and help explain the quarterly rise and fall of vendors and their products.
Each quarter our research team identifies companies that have purchased Google Cloud Hyperdisk ML for Cloud Storage from public sources (press releases, customer references, testimonials, case studies and success stories) and proprietary sources, capturing customer size, industry, location, implementation status, partner involvement, LOB key stakeholders and contact details for the related IT decision-makers.
Companies using Google Cloud Hyperdisk ML for Cloud Storage include: Resemble AI, a United States-based Communications organisation with 2400 employees and revenues of $600.0 million; Hubx, a Colombia-based Media organisation with 290 employees and revenues of $61.0 million; Abridge, a United States-based Professional Services organisation with 400 employees and revenues of $43.0 million; and many others.
Contact us if you need a complete and verified list of companies using Google Cloud Hyperdisk ML, including the breakdown by Industry (21 Verticals), Geography (Region, Country, State, City), Company Size (Revenue, Employees, Assets) and the related IT Decision Makers, Key Stakeholders, and business and technology executives responsible for the software purchases.
The Google Cloud Hyperdisk ML customer wins are being incorporated into our Enterprise Applications Buyer Insight and Technographics Customer Database, which has over 100 data fields that detail company usage of software systems and their digital transformation initiatives. Apps Run The World wants to become your No. 1 technographic data source!
| Customer | Industry | Empl. | Revenue | Country | Application | Category | When | SI |
|---|---|---|---|---|---|---|---|---|
| Abridge | Professional Services | 400 | $43M | United States | Google Cloud Hyperdisk ML | Cloud Storage | 2025 | n/a |
| Hubx | Media | 290 | $61M | Colombia | Google Cloud Hyperdisk ML | Cloud Storage | 2025 | n/a |
| Resemble AI | Communications | 2400 | $600M | United States | Google Cloud Hyperdisk ML | Cloud Storage | 2024 | n/a |
In 2025 Abridge implemented Google Cloud Hyperdisk ML in Cloud Storage to accelerate model weight hydration for healthcare clinical documentation AI workloads in the United States. The deployment targeted real time clinical note generation pipelines, with Google Cloud Hyperdisk ML used to reduce pod initialization overhead and improve throughput for production inference workloads.
The implementation centralized model artifact hosting and prefetching, configuring Hyperdisk ML to stream model weights into Google Kubernetes Engine pods at startup. Configuration focused on model weight hydration and pod initialization optimization, using the GKE Volume Populator pattern to stage data and decrease cold start times for containerized ML services.
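Although the exact Volume Populator resources in Abridge's clusters are not public, the pattern ultimately rests on a Hyperdisk ML StorageClass and a ReadOnlyMany claim that many pods can hydrate from. The following is a minimal, hypothetical sketch using the Kubernetes Python client; the object names, namespace and capacity are illustrative assumptions rather than details of the actual deployment.

```python
# Illustrative sketch only: a GKE StorageClass and PersistentVolumeClaim of the kind
# typically used to stage model weights on Hyperdisk ML for many reader pods.
# Names, namespace, and capacity are hypothetical, not taken from Abridge's deployment.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside GKE

storage_class = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="hyperdisk-ml-models"),  # hypothetical name
    provisioner="pd.csi.storage.gke.io",                       # Compute Engine PD CSI driver
    parameters={"type": "hyperdisk-ml"},                       # Hyperdisk ML disk type
    volume_binding_mode="WaitForFirstConsumer",
)
client.StorageV1Api().create_storage_class(storage_class)

# A ReadOnlyMany claim lets many inference pods hydrate the same model weights at startup.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="model-weights", namespace="inference"),  # hypothetical
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadOnlyMany"],
        storage_class_name="hyperdisk-ml-models",
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="inference", body=pvc)
```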
Operationally, the rollout was scoped to the clinical documentation and ML inference teams supporting live note generation, integrating Google Cloud Hyperdisk ML with GKE orchestration and the GKE Volume Populator data flow. The architecture emphasized pod-based container orchestration, persistent block storage for large model artifacts, and automated hydration workflows that align storage behavior with the inference lifecycle.
Outcomes reported in the Google Cloud GKE Volume Populator blog post include model loading speed improvements of up to roughly 76 percent and lower pod initialization times, which Abridge cited as improving throughput for its clinical documentation AI workloads.
In 2025, HubX deployed Google Cloud Hyperdisk ML to serve large AI models and accelerate pod startup for high concurrency inference workloads in Turkey. The deployment uses Cloud Storage capabilities to support AI and ML inference workloads running on containerized serving infrastructure, aligning the Google Cloud Hyperdisk ML application with production model serving for a media operator.
The implementation emphasized large model staging and accelerated model load workflows, configuring Google Cloud Hyperdisk ML to reduce model load times and speed pod initialization for GPU backed serving. Google Cloud Hyperdisk ML was provisioned to host model artifacts and serve high throughput reads, enabling faster model residency on nodes and lower latency model load operations consistent with Cloud Storage functional patterns.
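As an illustration of this kind of provisioning, the sketch below uses the google-cloud-compute Python client to create a Hyperdisk ML volume with dedicated read throughput for model artifacts; the project, zone, disk name, size and throughput figures are hypothetical assumptions, not values from HubX's deployment.

```python
# Illustrative sketch only: provisioning a Hyperdisk ML volume for model artifacts.
# Project, zone, name, size, and throughput are hypothetical placeholder values.
from google.cloud import compute_v1

project = "example-project"  # hypothetical project ID
zone = "us-central1-a"       # hypothetical zone

disk = compute_v1.Disk(
    name="model-artifacts-hdml",                   # hypothetical disk name
    size_gb=2048,                                  # sized to hold the model artifacts
    type_=f"zones/{zone}/diskTypes/hyperdisk-ml",  # Hyperdisk ML disk type
    provisioned_throughput=2400,                   # provisioned read throughput (MiB/s)
)

operation = compute_v1.DisksClient().insert(
    project=project, zone=zone, disk_resource=disk
)
operation.result()  # block until the disk is created
```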
Operational integration centered on Google Kubernetes Engine based serving, with orchestration adjusted to take advantage of faster pod startup and rapid model staging on storage tiers. The rollout targeted inference engineering and platform operations in the Turkey region, coordinating container image standardization and serving orchestration to exploit faster initialization for high concurrency traffic.
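A serving rollout along these lines might mount a Hyperdisk ML-backed, read-only claim into every replica so each pod starts with the model already staged. The sketch below, again using the Kubernetes Python client, shows one hypothetical shape of such a Deployment; the image, claim name, namespace and replica count are assumed for illustration rather than drawn from the case study.

```python
# Illustrative sketch only: a Deployment whose replicas mount a pre-populated,
# Hyperdisk ML-backed claim read-only so pods start with model weights staged.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="model-server", namespace="serving"),  # hypothetical
    spec=client.V1DeploymentSpec(
        replicas=8,  # high-concurrency serving fans many readers onto one shared volume
        selector=client.V1LabelSelector(match_labels={"app": "model-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "model-server"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="server",
                        image="example.registry/model-server:latest",  # hypothetical image
                        volume_mounts=[
                            client.V1VolumeMount(
                                name="model-weights", mount_path="/models", read_only=True
                            )
                        ],
                    )
                ],
                volumes=[
                    client.V1Volume(
                        name="model-weights",
                        persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                            claim_name="model-weights", read_only=True  # hypothetical claim
                        ),
                    )
                ],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="serving", body=deployment)
```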
As reported in the Google Cloud case study, the implementation delivered approximately 30x faster pod initialization and produced measurable GPU cost savings by reducing idle GPU time and shortening model load windows. Governance emphasized standardized model artifact management and serving workflows to maintain consistent initialization behavior across production clusters.
In 2024, Resemble AI implemented Google Cloud Hyperdisk ML. The deployment used Google Cloud Hyperdisk ML as a Cloud Storage layer to support high-throughput model training and production serving, aligning storage architecture to the company's machine learning workloads.
The implementation combined components of Google Cloud's AI Hypercomputer, including A3 VMs and Hyperdisk ML, together with Local SSDs and N2 VMs. Hyperdisk ML volumes were provisioned to host large static training datasets, Local SSDs were dedicated to smaller dynamic datasets requiring read/write access, and N2 VMs handled upstream data cleaning and transformation that fed prepared datasets into Hyperdisk ML for training on A3 accelerators. Model weights are persisted in Cloud Storage and mounted for inference using Cloud Storage FUSE, while Vertex AI orchestrates fine-tuning jobs, often leveraging spot instances and committed use discounts on Compute Engine for longer-running workloads.
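To make the data path concrete, the following is a minimal sketch, assuming the google-cloud-compute Python client, of how an existing Hyperdisk ML volume holding a static training dataset might be attached read-only to a training VM; the project, zone, instance and disk names are hypothetical and are not taken from Resemble AI's environment.

```python
# Illustrative sketch only: attaching an existing Hyperdisk ML volume in read-only mode
# to a training VM so the static dataset can be shared across readers.
# Project, zone, instance, and disk names are hypothetical placeholders.
from google.cloud import compute_v1

project = "example-project"       # hypothetical project ID
zone = "us-central1-a"            # hypothetical zone
instance_name = "a3-training-vm"  # hypothetical training instance

attached_disk = compute_v1.AttachedDisk(
    source=f"projects/{project}/zones/{zone}/disks/training-dataset-hdml",  # hypothetical disk
    mode="READ_ONLY",              # Hyperdisk ML volumes are typically shared read-only
    device_name="training-dataset",
)

operation = compute_v1.InstancesClient().attach_disk(
    project=project,
    zone=zone,
    instance=instance_name,
    attached_disk_resource=attached_disk,
)
operation.result()  # block until the attach completes
```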
Operational scope centered on Resemble AI's engineering organization, with the infrastructure designed to scale from heavy retraining jobs involving 70 terabytes of data to high-concurrency serving. Serving infrastructure runs inference on a mix of A3 and G2 instances to balance performance and cost, and the team integrated Gemini and Gemma into labeling and deepfake detection workflows to deepen model validation capabilities. Setup and ongoing management emphasized console-driven provisioning and simplified operational processes so engineering could focus on modeling rather than low-level infrastructure plumbing.
Documented outcomes from the deployment are concrete and infrastructure-focused. Hyperdisk ML eliminated a primary data-to-accelerator bottleneck and, when paired with A3 VMs, doubled epoch throughput, in one example reducing a week-long training job to approximately one hour. Resemble reported sustained improvements in iteration velocity, a shift in time allocation away from data preparation toward modeling, the ability to scale beyond 100 inference requests per second with sub-250-millisecond response times in many cases, and a faster path from prototype to production.
Buyer Intent: Companies Evaluating Google Cloud Hyperdisk ML
Discover Software Buyers actively Evaluating Enterprise Applications
| Company | Industry | Employees | Revenue | Country | Evaluated |
|---|---|---|---|---|---|
| No data found | | | | | |