AI Buyer Insights:

Michelin, an e2open customer, evaluated Oracle Transportation Management

Wayfair, a Korber HighJump WMS customer, evaluated Manhattan WMS

Westpac NZ, an Infosys Finacle customer, evaluated nCino Bank OS

Moog, a UKG AutoTime customer, evaluated Workday Time and Attendance

Cantor Fitzgerald, a Kyriba Treasury customer, evaluated GTreasury

Swedbank, a Temenos T24 customer, evaluated Oracle Flexcube

Citigroup, a VestmarkONE customer, evaluated BlackRock Aladdin Wealth

List of Cerebras Cloud Customers

Customer | Industry | Employees | Revenue | Country | Vendor | Application | Category | When | SI

Hugging Face | Professional Services | 500 | $50M | United States | Cerebras Systems | Cerebras Cloud | AI infrastructure | 2025 | n/a

In 2025, Hugging Face integrated Cerebras Cloud into the Hugging Face Hub and Inference API, adding Cerebras as a selectable inference provider delivering CS-3 powered, low-latency, high-throughput serving for popular open-source models. Configuration centered on provider selection and endpoint provisioning so developers could route inference requests from the Hub and Inference API to Cerebras Cloud CS-3 resources. The deployment preserved existing Hub workflows while expanding developer access to high-throughput model serving for Hugging Face's global developer audience.

Mistral AI | Professional Services | 140 | $65M | France | Cerebras Systems | Cerebras Cloud | AI infrastructure | 2025 | n/a

In 2025, Mistral AI deployed Cerebras Cloud as the core inference infrastructure for its Le Chat conversational assistant. The implementation prioritized low latency and high throughput, covering streaming token generation, throughput-optimized batching, model-serving pipelines, and conversational session handling; configuration emphasized inference scale and latency controls rather than training workloads. Mistral reported roughly 1,000 words per second of inference throughput and materially reduced end-user response latency across its European deployment footprint, with governance centered on production inference orchestration and latency monitoring.

Perplexity | Professional Services | 55 | $5M | United States | Cerebras Systems | Cerebras Cloud | AI infrastructure | 2025 | n/a

In 2025, Perplexity deployed Cerebras Cloud to serve its Sonar search model, built on Llama 3.3 70B, targeting faster, more factual answers for Perplexity Pro users. The deployment provisioned dedicated model hosting and low-latency inference capacity, with request routing, concurrency controls, and observability to monitor throughput and latency during peak search activity. Scoped to North America and owned by product and engineering teams, the integration delivered markedly improved inference speed and efficiency, enabling real-time search experiences.

Buyer Intent: Companies Evaluating Cerebras Cloud

ARTW Buyer Intent uncovers actionable customer signals, identifying software buyers actively evaluating Cerebras Cloud. Gain ongoing access to real-time prospects and uncover hidden opportunities. Companies Actively Evaluating Cerebras Cloud for AI infrastructure include:

  1. Ace Data Centers, a United States-based Professional Services organization with 15 employees

Discover Software Buyers Actively Evaluating Enterprise Applications

FAQ - APPS RUN THE WORLD Cerebras Cloud Coverage

Cerebras Cloud is an AI infrastructure solution from Cerebras Systems.

Companies worldwide use Cerebras Cloud, from small firms to large enterprises across 21+ industries.

Organizations such as Mistral AI, Hugging Face and Perplexity are recorded users of Cerebras Cloud for AI infrastructure.

Companies using Cerebras Cloud are most concentrated in Professional Services, with adoption spanning over 21 industries.

Companies using Cerebras Cloud are most concentrated in France and the United States, with adoption tracked across 195 countries worldwide. This global distribution highlights the popularity of Cerebras Cloud across the Americas, EMEA, and APAC.

Companies using Cerebras Cloud range from small businesses with 0-100 employees (33.33%) to mid-sized firms with 101-1,000 employees (66.67%), large organizations with 1,001-10,000 employees (0%), and global enterprises with 10,000+ employees (0%).

Customers of Cerebras Cloud include firms across all revenue levels, from $0-100M to $101M-$1B, $1B-$10B, and $10B+ global corporations.

Contact APPS RUN THE WORLD to access the full verified Cerebras Cloud customer database with detailed Firmographics such as industry, geography, revenue, and employee breakdowns as well as key decision makers in charge of AI infrastructure.