Hubx Technographics
Discover the latest software purchases and digital transformation initiatives undertaken by Hubx and its business and technology executives. Each quarter, our research team identifies the on-prem and cloud applications used by Hubx's 290 employees, drawing on public sources (press releases, customer references, testimonials, case studies, and success stories) as well as proprietary sources.
During our research, we identified that Hubx purchased the following application: Google Cloud Hyperdisk ML for Cloud Storage in 2025. We have also identified the related IT decision-makers and key stakeholders.
Our database provides customer insight and contextual information on which enterprise applications and software systems Hubx is running, and on its propensity to deepen its relationship with Google or identify new suppliers. These decisions form part of Hubx's overall digital and IT transformation projects, undertaken to stay competitive, fend off threats from disruptive forces, or comply with internal mandates to improve overall enterprise efficiency.
We have been analyzing Hubx's revenues, which grew to $61.0 million in 2024, along with its IT budget and roadmap and its cloud software purchases, aggregating massive amounts of data points that form the basis of our forecast assumptions about Hubx's intention to invest in emerging technologies such as AI, machine learning, IoT, blockchain, and autonomous databases, or in cloud-based ERP, HCM, CRM, EPM, procurement, or treasury applications.
IaaS

| Vendor | Previous System | Application | Category | Market | VAR/SI | When | Live | Insight |
|---|---|---|---|---|---|---|---|---|
| Google | Legacy | Google Cloud Hyperdisk ML | Cloud Storage | IaaS | n/a | 2025 | 2025 | See below |
In 2025, HubX deployed Google Cloud Hyperdisk ML to serve large AI models and accelerate pod startup for high-concurrency inference workloads in Turkey. The deployment uses Cloud Storage capabilities to support AI and ML inference workloads running on containerized serving infrastructure, aligning Google Cloud Hyperdisk ML with production model serving for a media operator.
The implementation emphasized large-model staging and accelerated model load workflows, configuring Google Cloud Hyperdisk ML to reduce model load times and speed pod initialization for GPU-backed serving. Google Cloud Hyperdisk ML was provisioned to host model artifacts and serve high-throughput reads, enabling faster model residency on nodes and lower-latency model load operations consistent with Cloud Storage functional patterns.
Operational integration centered on Google Kubernetes Engine based serving, with orchestration adjusted to take advantage of faster pod startup and rapid model staging on storage tiers. The rollout targeted inference engineering and platform operations in the Turkey region, coordinating container image standardization and serving orchestration to exploit faster initialization for high-concurrency traffic.
As reported in the Google Cloud case study, the implementation delivered approximately 30x faster pod initialization and produced measurable GPU cost savings by reducing idle GPU time and shortening model load windows. Governance emphasized standardized model artifact management and serving workflows to maintain consistent initialization behavior across production clusters.
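The serving pattern described above — staging model artifacts on a read-optimized volume so many GPU pods can attach it and load weights quickly — can be sketched as GKE configuration. The following is a minimal, hypothetical example, not taken from the case study: the resource names (`hyperdisk-ml`, `model-artifacts`) and the 500Gi size are illustrative assumptions, and it presumes a GKE cluster with the Persistent Disk CSI driver and a Hyperdisk ML-capable node series.

```yaml
# StorageClass for Hyperdisk ML volumes (read-optimized block storage).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-ml
provisioner: pd.csi.storage.gke.io
parameters:
  type: hyperdisk-ml
volumeBindingMode: WaitForFirstConsumer
---
# PVC holding pre-staged model artifacts. ReadOnlyMany lets many
# inference pods attach the same volume, so each pod reads weights
# from the disk at startup instead of downloading them.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-artifacts   # hypothetical name
spec:
  storageClassName: hyperdisk-ml
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500Gi      # illustrative capacity
```

An inference Deployment would then mount the `model-artifacts` claim read-only; skipping the per-pod weight download at startup is the mechanism behind the faster pod initialization the case study reports.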
| First Name | Last Name | Title | Function | Department | Phone |
|---|---|---|---|---|---|
| No data found | | | | | |
| Date | Company | Status | Vendor | Product | Category | Market |
|---|---|---|---|---|---|---|
| No data found | | | | | | |