List of Apache Hive Customers
Since 2010, our global team of researchers has been studying Apache Hive customers around the world, aggregating the data points that underpin our quarterly forecast assumptions and our tracking of the rise and fall of vendors and their products.
Each quarter our research team identifies companies that have purchased Apache Hive for Data Warehouse from public sources (press releases, customer references, testimonials, case studies and success stories) and proprietary sources, capturing customer size, industry, location, implementation status, partner involvement, LOB key stakeholders, and contact details for the related IT decision-makers.
Companies using Apache Hive for Data Warehouse include: AT&T, a United States-based Communications organization with 146,040 employees and revenues of $122.43 billion; Royal Bank of Canada, a Canada-based Banking and Financial Services organization with 96,628 employees and revenues of $48.64 billion; Netflix, a United States-based Media organization with 14,000 employees and revenues of $39.00 billion; Banco Itau, a Brazil-based Banking and Financial Services organization with 93,200 employees and revenues of $28.40 billion; NextEra Energy, a United States-based Utilities organization with 16,800 employees and revenues of $24.75 billion; and many others.
Contact us if you need a complete and verified list of companies using Apache Hive, including the breakdown by industry (21 verticals), geography (region, country, state, city), company size (revenue, employees, assets), and the related IT decision-makers, key stakeholders, and business and technology executives responsible for Analytics and BI software purchases.
Apache Hive customer wins are incorporated into our Enterprise Applications Buyer Insight and Technographics Customer Database, which has more than 100 data fields detailing company usage of Analytics and BI software systems and their digital transformation initiatives. Apps Run The World aims to be your No. 1 technographic data source!
| Logo | Customer | Industry | Empl. | Revenue | Country | Vendor | Application | Category | When | SI | Insight |
|---|---|---|---|---|---|---|---|---|---|---|---|
|  | AT&T | Communications | 146040 | $122.4B | United States | Apache Software | Apache Hive | Data Warehouse | 2018 | n/a |  |
In 2018, AT&T evaluated Apache Hive in a Data Warehouse proof of concept. The POC compared the batch processing times of Impala and Apache Hive, with the stated intent of implementing Impala in the project.
Implementation-level focus centered on Apache Hive capabilities common to Data Warehouse deployments, including batch SQL-on-Hadoop processing, ETL orchestration points, and metadata governance via the Hive metastore, to assess fit for batch analytics workloads. The assessment emphasized query execution characteristics and throughput for batch pipelines, informing architecture choices around query-engine selection and batch processing orchestration for the project's data ingestion and analytics functions.
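The details of the POC workload are not public; as an illustration only, the kind of batch aggregation such Hive-versus-Impala timing comparisons typically measure might look like the following HiveQL. The table and column names (`cdr_events`, `daily_usage_summary`) are hypothetical, not AT&T's actual schema.

```sql
-- Hypothetical batch workload for a Hive-vs-Impala timing comparison.
-- Table and column names are illustrative, not AT&T's schema.
SET hive.exec.dynamic.partition.mode=nonstrict;

CREATE TABLE IF NOT EXISTS daily_usage_summary (
  region        STRING,
  total_calls   BIGINT,
  total_minutes DOUBLE
)
PARTITIONED BY (call_date STRING)
STORED AS ORC;

-- Dynamic-partition insert: the partition column comes last in the SELECT.
INSERT OVERWRITE TABLE daily_usage_summary PARTITION (call_date)
SELECT region,
       COUNT(*)                     AS total_calls,
       SUM(duration_seconds) / 60.0 AS total_minutes,
       to_date(call_ts)             AS call_date
FROM cdr_events
WHERE call_ts >= '2018-01-01'
GROUP BY region, to_date(call_ts);
```

In a comparison like the one described, the same query would be timed on both engines against identical ORC-backed data.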
|  | Banco Itau | Banking and Financial Services | 93200 | $28.4B | Brazil | Apache Software | Apache Hive | Data Warehouse | 2020 | n/a |  |
In 2020, Banco Itau deployed Apache Hive as a central component of a Data Warehouse initiative. The deployment was part of a broader migration that moved on-premises datasets to the AWS cloud and migrated Cloudera Distribution of Hadoop (CDH) to Cloudera Data Platform (CDP) running fully in the cloud, aligned with the bank's adoption of Data Mesh principles.
Apache Hive was configured as the primary SQL query and batch processing engine within the Data Warehouse, interoperating with Apache Spark for analytic transformations. Implementation work emphasized data ingestion and processing pipelines, with a dedicated squad responsible for evaluating ingestion patterns and implementing batch ingestion engine processes, event-based streams via Kafka, and external data ingestion via SFTP and AWS API feeds.
Integrations connected the Apache Hive Data Warehouse to AWS storage layers and to real-time and file-based intake systems, using Kafka for events and SFTP and API mechanisms for third-party feeds. The operational scope covered CIO teams and both consumer and producer accounts across the bank, and affected business areas that were transitioning workloads from SAS and Alteryx toward SQL- and Hive-based processing on CDP and Impala.
Governance and rollout were organized through a Customer Success Engineer team and a product-owner-led squad, focusing on onboarding, education, and backlog-driven prioritization of ingestion work. Activity included SQL, Hive, and Spark training, community meetups to support adoption, and comparative guidance on processing patterns in Hive and Impala versus SAS to help business stakeholders adopt data-driven practices.
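As a hedged sketch of the file-based ingestion pattern described above (external data arriving via SFTP or API into AWS storage, then curated into the warehouse), the HiveQL below shows an external table over an S3 landing zone. The bucket, paths, and schema (`raw_partner_feed`, `curated_partner_feed`) are assumptions for illustration, not Banco Itau's actual environment.

```sql
-- Illustrative ingestion pattern: an external table over files landed in S3.
-- Bucket, paths, and schema are assumptions, not Banco Itau's environment.
CREATE EXTERNAL TABLE IF NOT EXISTS raw_partner_feed (
  record_id   STRING,
  payload     STRING,
  received_at TIMESTAMP
)
PARTITIONED BY (ingest_date STRING)
STORED AS PARQUET
LOCATION 's3a://example-datalake/raw/partner_feed/';

-- Register a newly landed SFTP/API drop as a partition,
-- then curate it into the warehouse table.
ALTER TABLE raw_partner_feed ADD IF NOT EXISTS
  PARTITION (ingest_date='2020-06-01');

INSERT INTO TABLE curated_partner_feed
SELECT record_id, payload, received_at
FROM raw_partner_feed
WHERE ingest_date = '2020-06-01';
```

Keeping the raw layer as external tables lets batch, Kafka-fed, and SFTP/API feeds share one cataloged landing zone before curation.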
|  | Cotiviti | Professional Services | 5500 | $680M | United States | Apache Software | Apache Hive | Data Warehouse | 2017 | n/a |  |
Cotiviti implemented Apache Hive in 2017 as a Data Warehouse capability supporting internal analytics and fraud detection analytics. The implementation centered on data engineering teams building Spark pipelines that fed Apache Hive tables and analytical artifacts, enabling structured data models and cube-style aggregations for downstream consumption.
The deployment combined Spark-based ETL and transformation pipelines with Apache Hive data modeling, using Hive tables and HiveQL for persistent analytical datasets. Functional capabilities implemented included batch ingestion, transformation workflows, dimensional data modeling, and materialized cube generation to accelerate query patterns common to fraud analytics and internal reporting.
Integrations focused on Spark and Apache Hive interoperability, with Spark jobs producing cleansed and enriched datasets that were persisted into Apache Hive for SQL access by analysts. Operational coverage included Cotiviti data engineering and analytics teams in the United States, with artifacts consumed by fraud detection analytics and internal business intelligence groups across the organization.
Governance and operational ownership rested with the internal data engineering organization, which developed the pipelines, defined schema and model standards, and provisioned Hive-based datasets for analytics consumers. The implementation emphasized repeatable pipeline construction and model publication to support ongoing analytics use cases within the Data Warehouse environment.
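The "materialized cube generation" pattern mentioned above can be sketched in HiveQL using `GROUPING SETS`, which pre-aggregates a fact table across several dimension combinations in one pass. The table and column names (`curated_claims`, `claims_cube`) are hypothetical examples, not Cotiviti's actual schema.

```sql
-- Illustrative materialized-cube pattern: pre-aggregating a Spark-produced
-- fact table across dimension combinations. Names are hypothetical.
CREATE TABLE IF NOT EXISTS claims_cube
STORED AS ORC AS
SELECT provider_id,
       procedure_code,
       claim_month,
       COUNT(*)           AS claim_count,
       SUM(billed_amount) AS total_billed
FROM curated_claims
GROUP BY provider_id, procedure_code, claim_month
GROUPING SETS ((provider_id, procedure_code, claim_month),
               (provider_id, claim_month),
               (procedure_code, claim_month),
               (claim_month));
```

Analysts then query the small pre-aggregated table instead of rescanning the full fact table, which is what accelerates the repeated roll-up queries typical of fraud analytics and internal reporting.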
|  |  | Retail | 6000 | $23.4B | United States | Apache Software | Apache Hive | Data Warehouse | 2016 | n/a |  |
|  |  | Media | 14000 | $39.0B | United States | Apache Software | Apache Hive | Data Warehouse | 2016 | n/a |  |
|  |  | Utilities | 16800 | $24.8B | United States | Apache Software | Apache Hive | Data Warehouse | 2018 | n/a |  |
|  |  | Professional Services | 50 | $5M | Brazil | Apache Software | Apache Hive | Data Warehouse | 2016 | n/a |  |
|  |  | Utilities | 24000 | $1.1B | Australia | Apache Software | Apache Hive | Data Warehouse | 2015 | n/a |  |
|  |  | Banking and Financial Services | 96628 | $48.6B | Canada | Apache Software | Apache Hive | Data Warehouse | 2021 | n/a |  |
|  |  | Professional Services | 600 | $60M | Brazil | Apache Software | Apache Hive | Data Warehouse | 2021 | n/a |  |
Buyer Intent: Companies Evaluating Apache Hive
- BMC Software, a United States-based Professional Services organization with 6,500 employees
- The George Washington University Hospital, a United States-based Healthcare company with 2,500 employees
- Kaspersky, a Russia-based Communications organization with 5,152 employees