AI Buyer Insights:

Moog, a UKG AutoTime customer, evaluated Workday Time and Attendance

Michelin, an e2open customer, evaluated Oracle Transportation Management

Swedbank, a Temenos T24 customer, evaluated Oracle Flexcube

Westpac NZ, an Infosys Finacle customer, evaluated nCino Bank OS

Citigroup, a VestmarkONE customer, evaluated BlackRock Aladdin Wealth

Wayfair, a Korber HighJump WMS customer, just evaluated Manhattan WMS

Cantor Fitzgerald, a Kyriba Treasury customer, evaluated GTreasury

List of Faiss Customers

Customer: Amazon Web Services | Industry: Communications | Employees: 130,000 | Revenue: $107.6B | Country: United States | Vendor: Meta | Application: Faiss | Category: AI Frameworks and Libraries | When: 2025 | SI: n/a
Insight: In 2025, Amazon Web Services exposed Faiss as an engine option within Amazon OpenSearch Service's k-NN and vector search capabilities, extending that option to OpenSearch Serverless vector collections to support large-scale approximate nearest neighbor retrieval. AWS documentation and public blogs in the US region present Faiss under the AI Frameworks and Libraries category and position the library specifically for retrieval-augmented generation (RAG) and semantic search workloads. Faiss is surfaced as the vector indexing and ANN engine, with configuration choices that include index types, quantization settings, and performance tuning commonly associated with Faiss deployments, enabling dense vector indexing, approximate nearest neighbor search, and model-agnostic vector ingestion pipelines within OpenSearch Service. Integration points described in AWS guidance include OpenSearch Serverless vector collections and examples that pair Faiss-based retrieval with Amazon SageMaker JumpStart inference and model provisioning workflows. Documentation emphasizes engine selection at the OpenSearch layer and shows how Faiss integrates into the search stack for RAG workflows, linking model outputs to vector collections managed inside OpenSearch. Governance and rollout are driven through AWS-published guidance and examples released in 2025, which provide configuration patterns and deployment guidance for customers adopting Faiss as their AI Frameworks and Libraries solution for vector retrieval and semantic search.
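The engine selection described above happens in the index definition that OpenSearch accepts at index creation time. The sketch below shows such a body as a Python dict; the field name "embedding", the dimension, and the HNSW parameters are illustrative assumptions, not values taken from AWS's published examples.

```python
# Sketch of an OpenSearch index body that selects Faiss as the k-NN engine.
# The field name, dimension, and HNSW parameters here are illustrative;
# consult the OpenSearch/AWS documentation for values supported by your version.
index_body = {
    "settings": {"index": {"knn": True}},          # enable k-NN on the index
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 768,                  # must match the embedding model
                "method": {
                    "name": "hnsw",                # graph-based ANN index
                    "engine": "faiss",             # choose Faiss over lucene/nmslib
                    "space_type": "l2",
                    "parameters": {"ef_construction": 128, "m": 16},
                },
            }
        }
    },
}

# A client such as opensearch-py would then create the index, e.g.:
#   client.indices.create(index="rag-docs", body=index_body)
```

The "engine" key is where the Faiss-versus-alternatives choice described in the AWS guidance is made; quantization and tuning options attach under "method" in the same way.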
Customer: Grab | Industry: Professional Services | Employees: 11,267 | Revenue: $2.8B | Country: Singapore | Vendor: Meta | Application: Faiss | Category: AI Frameworks and Libraries | When: 2024 | SI: n/a
Insight: In 2024, Grab implemented Faiss as part of its data and machine learning stack, using AI Frameworks and Libraries tooling to power vector similarity search shortlisting. The work was regionally scoped, with internal deployment in Southeast Asia, and was documented in an engineering post published October 23, 2024. Faiss functioned as the shortlist index and retrieval layer, surfacing candidate vectors for downstream ranking. Grab configured Faiss to produce compact shortlists that were then re-ranked by an internal large language model, creating an LLM-assisted retrieval pipeline described in the engineering write-up. The implementation emphasized vector similarity search, index management and shortlisting capabilities, and integration with Grab's internal LLM re-ranker and broader data and ML stack for complex, nuanced queries. The engineering post explicitly reported improved retrieval quality when Faiss shortlists were re-ranked by the LLM.
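The shortlist stage of such a pipeline can be sketched as follows. This NumPy stand-in computes the same exact inner-product top-k that a flat Faiss index (faiss.IndexFlatIP) returns; the corpus, query, and re-ranking hook are invented for illustration, and Grab's actual pipeline is internal.

```python
import numpy as np

def shortlist(index_vectors: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the top-k corpus vectors by inner product.

    A NumPy stand-in for what faiss.IndexFlatIP.search computes exactly;
    at Grab's scale a real Faiss index (possibly approximate) would be used.
    """
    scores = index_vectors @ query          # inner-product similarity per vector
    return np.argsort(-scores)[:k]          # highest-scoring k indices

# Toy corpus of 2-d embeddings standing in for real model outputs.
corpus = np.array([[0.9, 0.1], [0.1, 0.9], [0.8, 0.2]], dtype=np.float32)
query = np.array([1.0, 0.0], dtype=np.float32)

candidates = shortlist(corpus, query, k=2)  # compact shortlist for the re-ranker
# A downstream LLM re-ranker (not shown here, and internal to Grab) would
# reorder `candidates` before final results are returned.
```

The point of the design is that the cheap vector stage narrows millions of items to a handful, so the expensive LLM only scores the shortlist.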
Customer: Hugging Face | Industry: Professional Services | Employees: 500 | Revenue: $50M | Country: United States | Vendor: Meta | Application: Faiss | Category: AI Frameworks and Libraries | When: 2022 | SI: n/a
Insight: In 2022, Hugging Face integrated Faiss into its Datasets library to provide fast nearest neighbor indexes for semantic search and retrieval-augmented generation workflows across its developer platform. This integration situates Faiss within the AI Frameworks and Libraries layer and was implemented to support ML engineering use cases focused on vector search and retrieval-based augmentation. The implementation exposes a public API, Dataset.add_faiss_index, which creates and persists Faiss indexes directly from dataset objects, enabling in-library index creation, queryable nearest neighbor search, and embedding-based retrieval pipelines. Faiss is used as the core vector index engine while the Dataset.add_faiss_index API manages index lifecycle, serialization, and access patterns consistent with library-oriented workflows. Operationally, the rollout targeted Hugging Face's global developer audience, with the API documented in the Hugging Face docs and observed in repository issues and community usage from 2022 onward. Governance for the capability is community-facing, with public documentation and repository issue activity driving adoption, troubleshooting, and iterative enhancements rather than private enterprise change control.

Buyer Intent: Companies Evaluating Faiss

ARTW Buyer Intent uncovers actionable customer signals, identifying software buyers actively evaluating Faiss. Gain ongoing access to real-time prospect data and surface hidden opportunities.

Discover Software Buyers Actively Evaluating Enterprise Applications

No data found
FAQ - APPS RUN THE WORLD Faiss Coverage

Faiss is an AI Frameworks and Libraries solution from Meta.

Companies worldwide use Faiss, from small firms to large enterprises across 21+ industries.

Organizations such as Amazon Web Services, Grab, and Hugging Face are recorded users of Faiss for AI Frameworks and Libraries.

Companies using Faiss are most concentrated in Communications and Professional Services, with adoption spanning over 21 industries.

Companies using Faiss are most concentrated in the United States and Singapore, with adoption tracked across 195 countries worldwide. This global distribution highlights the popularity of Faiss across the Americas, EMEA, and APAC.

Companies using Faiss range from small businesses with 0-100 employees (0%), to mid-sized firms with 101-1,000 employees (33.33%), large organizations with 1,001-10,000 employees (0%), and global enterprises with 10,000+ employees (66.67%).

Customers of Faiss include firms across all revenue levels, from $0-100M and $101M-$1B to $1B-$10B and $10B+ global corporations.

Contact APPS RUN THE WORLD to access the full verified Faiss customer database with detailed firmographics such as industry, geography, revenue, and employee breakdowns, as well as key decision makers in charge of AI Frameworks and Libraries.