List of IBM Cloud AI Infrastructure Customers
Armonk, NY 10504,
United States
Since 2010, our global team of researchers has studied IBM Cloud AI Infrastructure customers around the world, aggregating the massive volume of data points that forms the basis of our quarterly forecast assumptions and tracks the rise and fall of vendors and their products.
Each quarter our research team identifies companies that have purchased IBM Cloud AI Infrastructure for AI infrastructure from public sources (press releases, customer references, testimonials, case studies, and success stories) and proprietary sources, capturing customer size, industry, location, implementation status, partner involvement, LOB key stakeholders, and contact details for the related IT decision-makers.
Companies using IBM Cloud AI Infrastructure for AI infrastructure include: Vodafone Group, a United Kingdom based Communications organisation with 88,780 employees and revenues of $43.89 billion; Harvard University, a United States based Education organisation with 19,000 employees and revenues of $6.70 billion; Unipol Gruppo Finanziario S.p.A., an Italy based Banking and Financial Services organisation with 3,000 employees and revenues of $1.50 billion; and many others.
Contact us if you need a complete and verified list of companies using IBM Cloud AI Infrastructure, including breakdowns by industry (21 verticals), geography (region, country, state, city), company size (revenue, employees, assets), and the related IT decision-makers, key stakeholders, and business and technology executives responsible for the software purchases.
The IBM Cloud AI Infrastructure customer wins are incorporated into our Enterprise Applications Buyer Insight and Technographics Customer Database, which has over 100 data fields detailing company usage of software systems and their digital transformation initiatives. Apps Run The World wants to become your No. 1 technographic data source!
| Logo | Customer | Industry | Empl. | Revenue | Country | Vendor | Application | Category | When | SI | Insight |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | Harvard University | Education | 19,000 | $6.7B | United States | IBM | IBM Cloud AI Infrastructure | AI infrastructure | 2025 | n/a | |
In 2025, Harvard University provisioned IBM Cloud AI Infrastructure to accelerate LLM research in the Calmon Lab, addressing GPU constraints for AI safety experiments in the United States. The IBM Cloud AI Infrastructure engagement is centered on high-performance compute and cloud-native storage to support model development, inference, and experimental pipelines for research teams.
The implementation deployed NVIDIA HGX H100 GPU servers as the primary compute tier and IBM Cloud Object Storage for datasets, model checkpoints, and inference payloads, all provisioned on IBM Cloud. The configuration emphasized GPU-dense nodes and high-throughput storage connectivity to remove GPU bottlenecks and sustain the inference workloads typical of large language model research.
Operational coverage targeted research activities within Harvard University, specifically AI safety experiments run by the Calmon Lab, with rapid provisioning capability to shorten setup cycles for experiments. Integrations explicitly include IBM Cloud GPU servers and IBM Cloud Object Storage, supporting model training, evaluation, and high-rate inference pipelines for LLM research workloads.
Governance and workflow changes focused on enabling researchers to request and receive cloud GPU capacity quickly, centralizing compute provisioning and dataset storage to improve experiment reproducibility and resource allocation. The deployment model supported iterative research workflows, with infrastructure orchestration aligned to academic research timelines rather than long procurement cycles.
The engagement delivered measurable performance outcomes, removing GPU bottlenecks and producing much higher model throughput for AI safety experiments, with reported inference speeds of over 2,000 tokens per second. IBM Cloud AI Infrastructure provided Harvard University with a scalable AI infrastructure foundation to support intensive LLM inference and research operations.
| | Unipol Gruppo Finanziario S.p.A. | Banking and Financial Services | 3,000 | $1.5B | Italy | IBM | IBM Cloud AI Infrastructure | AI infrastructure | 2024 | n/a | |
In 2024, Unipol Gruppo Finanziario S.p.A. implemented IBM Cloud AI Infrastructure to underpin NAMI, an AI-powered monitoring and automation platform. The initial rollout used IBM Cloud for launch and later migrated to a hybrid Red Hat OpenShift environment to modernize IT operations and incident response for insurance operations in Italy.
The implementation bundles IBM watsonx for model and machine learning services, IBM Cloud Pak for AIOps for event detection and automated remediation, Cloud Pak for Data for integrated analytics and data services, and IBM Fusion HCI for consolidated on-premises compute and storage. This AI infrastructure deployment was configured to provide observability, event correlation, automated remediation workflows, and orchestration across the NAMI monitoring and automation modules.
The operational scope focused on insurance operations in Italy, with deployment coverage across IT operations and incident response teams and a migration path from initial IBM Cloud hosting to a hybrid Red Hat OpenShift architecture that unifies cloud and on-premises runtimes. The architecture supports containerized workloads and platform consistency between the public cloud and on-premises environments used by Unipol.
Governance and process changes centered on embedding automated incident response and event handling into IT operations workflows and establishing observability pipelines to sustain continuous monitoring. Reported outcomes include a 90-second average event response time and full monitoring coverage achieved by June 2025.
| | Vodafone Group | Communications | 88,780 | $43.9B | United Kingdom | IBM | IBM Cloud AI Infrastructure | AI infrastructure | 2024 | IBM | |
In 2024, Vodafone Group deployed IBM Cloud AI Infrastructure to accelerate generative AI journey testing for its TOBi virtual assistant as part of a global and EMEA pilot. The implementation is categorized as AI infrastructure and was used to operationalize faster model-driven content generation and validation within customer service workflows.
The technical configuration centered on IBM Cloud Code Engine for containerized execution and orchestration, and watsonx.ai for generative model development, testing and content refinement. These capabilities were applied to journey testing and gap analysis, enabling automated test runs and iterative content updates for conversational flows in CRM scenarios.
The pilot covered customer service journey testing at a global and EMEA scope, focusing on TOBi conversational touchpoints and CRM-related interaction scenarios. IBM Client Engineering and IBM Consulting led the engagement, with IBM Consulting documented as the implementation partner responsible for deployment and testing cadence.
Governance for the pilot emphasized repeatable testing pipelines and rapid turnaround for journey validation, which materially reduced testing time and improved content quality as reported in the case study. The work produced a validated pattern for scalable journey testing within Vodafone Group, using IBM Cloud AI Infrastructure to align model orchestration, automated test execution and content governance.
Buyer Intent: Companies Evaluating IBM Cloud AI Infrastructure
Discover software buyers actively evaluating enterprise applications
| Logo | Company | Industry | Employees | Revenue | Country | Evaluated |
|---|---|---|---|---|---|---|
| No data found | | | | | | |