People are always asking me at meetups whether they should use Apache Hive, Apache HBase, Apache Spark SQL, or some buzzword data engine.
My answer is yes: use them all for the appropriate use case and data.
Ask yourself some questions first:
- What does your data look like?
- How many rows will you have?
- What is more important: reads, writes, appends, updates, or deletes?
- Do you need SQL?
- Do you need deep, rich, full ANSI SQL?
- What interfaces will you have to the data? JDBC? APIs? Apache Spark?
- How many concurrent users will access this data?
- How often is it inserted? Updated? Deleted? Read? Joined? Exported?
- Is this structured? Unstructured? Semi-structured? Avro? JSON?
- Do you want to integrate with an OLAP engine like Druid?
- Is this for temporary use?
- Is this part of a real-time streaming ingest flow?
- Is it columnar? If so, how many columns? Are they naturally grouped?
- Do you have sparse data?
- What BI or query tool are you using?
- Do you need to do scans?
- Is your data key-value?
My next question is: how are you ingesting it? In most cases, it makes sense to use Apache NiFi for either Apache Hive or Apache HBase destinations. Sometimes, Apache Sqoop makes sense as well. What is the source format? Do you need to store it in the original format? Is it already JSON or CSV?
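For example, if the source is already CSV and you want to keep it in its original format, a Hive external table can sit right on top of the landing directory. A minimal sketch, assuming a hypothetical table, columns, and HDFS path:

```sql
-- Minimal sketch: expose raw CSV files on HDFS as a Hive external table
-- without converting them, so the original source format is preserved.
-- The table name, columns, and HDFS path are hypothetical.
CREATE EXTERNAL TABLE IF NOT EXISTS raw_orders (
  order_id    STRING,
  customer_id STRING,
  amount      DECIMAL(10,2)
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/landing/orders';
```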
Apache HBase has some very interesting updates coming in version 2.0 that make it a great fit for a lot of use cases.
Apache Hive is great for its full SQL support, in-memory caching, sorting and joining of data, ACID transactions, and integration with BI tools, Druid, and Spark SQL.
With Apache Phoenix, HBase gets a good subset of SQL to start with, but it's nowhere near as mature or rich as Apache Hive's SQL.
Apache HBase pros:
- Killer for huge, sparse datasets
- NoSQL store
- Medium object (MOB) storage
- Key-value usage
- Co-processors
- UDFs
- Apache Phoenix for SQL (see the sketch after this list)
- Upserts in Phoenix
- Scans
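To make the Phoenix items above concrete, here is a minimal sketch, assuming a hypothetical table and columns: the composite primary key becomes the HBase row key, UPSERT is insert-or-update, and the final query runs as a row-key range scan.

```sql
-- Hypothetical Phoenix sketch: the composite primary key maps to the
-- HBase row key.
CREATE TABLE IF NOT EXISTS sensor_readings (
  sensor_id   VARCHAR NOT NULL,
  read_time   TIMESTAMP NOT NULL,
  temperature DOUBLE,
  humidity    DOUBLE,
  CONSTRAINT pk PRIMARY KEY (sensor_id, read_time)
);

-- One statement inserts a new row or overwrites the existing one.
UPSERT INTO sensor_readings
VALUES ('s-42', TO_TIMESTAMP('2018-01-15 10:00:00', 'yyyy-MM-dd HH:mm:ss'), 21.5, 0.40);

-- Served as an HBase range scan over one sensor's time window.
SELECT * FROM sensor_readings
WHERE sensor_id = 's-42'
  AND read_time >= TO_TIMESTAMP('2018-01-15 00:00:00', 'yyyy-MM-dd HH:mm:ss');
```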
Apache Hive pros:
- Real SQL database
- Massive datasets
- ACID tables
- BI tool integration
- EDW use cases
- Apache Hivemall for machine learning
- Druid interactivity
- UDFs
- Various file storage formats on HDFS, including Apache ORC, Apache Parquet, CSV, and JSON
- ACID merge (see the sketch after this list)
- Hybrid procedural SQL on Hadoop (HPL/SQL)
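To make the ACID merge and ORC items concrete, here is a minimal sketch, assuming hypothetical customers and customer_updates tables: a transactional ORC table plus one MERGE statement that applies upserts from staging.

```sql
-- Hypothetical sketch: an ACID table stored as ORC. Hive 2.x ACID
-- tables must be bucketed, hence the CLUSTERED BY clause.
CREATE TABLE customers (
  id    BIGINT,
  name  STRING,
  email STRING
)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

-- One MERGE updates matching rows and inserts new ones in a single
-- ACID transaction.
MERGE INTO customers AS t
USING customer_updates AS s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET name = s.name, email = s.email
WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.name, s.email);
```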
So, who wins? There was a time I tried to use Apache Phoenix for everything, since its JDBC driver is really solid, makes it easy to load lots of data quickly, and delivers fast queries. It's also great for the kinds of use cases I used to reach for something like MongoDB for, with varying JSON data.
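That MongoDB-style pattern looks roughly like this in Phoenix: keep the varying JSON as an opaque value keyed by document ID, and let the row key serve fast point lookups. A hedged sketch with a hypothetical table:

```sql
-- Hypothetical document-store pattern: varying JSON stays an opaque
-- VARCHAR payload that the application parses after a point lookup.
CREATE TABLE IF NOT EXISTS documents (
  doc_id   VARCHAR NOT NULL PRIMARY KEY,
  doc_type VARCHAR,
  payload  VARCHAR
);

UPSERT INTO documents
VALUES ('doc-1', 'order', '{"items": 3, "total": 42.50}');

-- Fast point lookup on the row key.
SELECT payload FROM documents WHERE doc_id = 'doc-1';
```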
Apache Hive has Apache Spark SQL integration and rich SQL that make it great for tabular data, and its Apache ORC file format is amazing.
In most use cases, Apache Hive wins. For NoSQL, sparse data, and really high-end requirements, Apache HBase wins. The good news is that they both work well together on the same Hadoop cluster and utilize your massive HDFS store. I rarely see places that don't use both. Use them both: if one doesn't work for a given workload, try the other. The two together have solved every query and storage requirement I have had across 100 different use cases in dozens of different enterprises.
References
- Apache HBase 2.0
- Apache Hive LLAP
- Apache HBase Setup
- Apache Hive for Data Warehouses
- An Apache Hive-Based Data Warehouse
- Reading OpenData JSON and Storing Into Phoenix Tables
- Creating a Spring Boot Java 8 Microservice to Read Apache Phoenix Data
- Incrementally Streaming RDBMS Data Into Your Hadoop Data Lake
- Apache Hive With Apache Hivemall