Get expert guidance on architecting end-to-end data management solutions with Apache Hadoop. While many sources explain how to use various components in the Hadoop ecosystem, this practical book takes you through architectural considerations necessary to tie those components together into a complete tailored application, based on your particular use case.
To reinforce those lessons, the book's second section provides detailed examples of architectures used in some of the most commonly found Hadoop applications. Whether you're designing a new Hadoop application, or planning to integrate Hadoop into your existing data infrastructure, Hadoop Application Architectures will skillfully guide you through the process.
This book covers:
- Factors to consider when using Hadoop to store and model data
- Best practices for moving data in and out of the system
- Data processing frameworks, including MapReduce, Spark, and Hive
- Common Hadoop processing patterns, such as removing duplicate records and using windowing analytics
- Giraph, GraphX, and other tools for large graph processing on Hadoop
- Using workflow orchestration and scheduling tools such as Apache Oozie
- Near-real-time stream processing with Apache Storm, Apache Spark Streaming, and Apache Flume
- Architecture examples for clickstream analysis, fraud detection, and data warehousing
About the Authors: Mark is a committer on Apache Bigtop, a committer and PMC member on Apache Sentry (incubating), and a contributor to the Apache Hadoop, Apache Hive, Apache Sqoop, and Apache Flume projects. He is also a section author of O'Reilly's book on Apache Hive, Programming Hive.
Ted is a Senior Solutions Architect at Cloudera helping clients be successful with Hadoop and the Hadoop ecosystem. Previously, he was a Lead Architect at the Financial Industry Regulatory Authority (FINRA), helping build out a number of solutions ranging from web applications and service-oriented architectures to big data applications. He has also contributed code to Apache Flume, Apache Avro, Apache YARN, and Apache Pig.
Jonathan is a Solutions Architect at Cloudera working with partners to integrate their solutions with Cloudera's software stack. Previously, he was a technical lead on the big data team at Orbitz Worldwide, helping to manage the Hadoop clusters for one of the most heavily trafficked sites on the internet. He is also a co-founder of the Chicago Hadoop User Group and Chicago Big Data, technical editor for Hadoop in Practice, and has spoken at a number of industry conferences on Hadoop and big data.
Gwen is a Solutions Architect at Cloudera. She has 15 years of experience working with customers to design scalable data architectures. She was formerly a senior consultant at Pythian, an Oracle ACE Director, and a board member at NoCOUG. Gwen is a frequent speaker at industry conferences and maintains a popular blog.