Packt Publishing; 2nd edition
May 31, 2017
294 pages
B071HX7GHW
Key Features
- Contains recipes on how to use Apache Spark as a unified compute engine
- Covers how to connect various source systems to Apache Spark
- Covers various parts of machine learning, including supervised/unsupervised learning and recommendation engines
Book Description
While Apache Spark 1.x gained considerable traction and adoption in its early years, Spark 2.x delivers notable improvements in the areas of APIs, schema awareness, performance, Structured Streaming, and simpler building blocks for building better, faster, smarter, and more accessible big data applications. This book covers all of these features in the form of structured recipes for analyzing and refining large and complex sets of data.
Starting with installing and configuring Apache Spark with various cluster managers, you will learn how to set up development environments. Further on, you will be introduced to working with RDDs, DataFrames, and Datasets to operate on schema-aware data, and to real-time streaming with sources such as the Twitter stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark.
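To give a flavour of what operating on schema-aware data looks like in Spark 2.x, here is a minimal sketch, not a recipe from the book: the Person case class and the sample rows are assumptions used purely for illustration.

```scala
// Minimal, illustrative sketch of the Spark 2.x schema-aware APIs.
// The Person case class and sample data are hypothetical, not from the book.
import org.apache.spark.sql.SparkSession

case class Person(name: String, age: Int)

object SchemaAwareExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("schema-aware-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // A Dataset carries the Person schema, so fields are typed columns
    val people = Seq(Person("Alice", 29), Person("Bob", 35)).toDS()

    // Schema-aware filtering and projection
    people.filter($"age" > 30).select($"name").show()

    spark.stop()
  }
}
```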
Finally, the closing chapters delve deeper into graph processing with GraphX, securing your implementations, cluster optimization, and troubleshooting.
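As a taste of the graph-processing material, the following is a minimal GraphX sketch; the follower graph it builds is a hypothetical example, not one of the book's recipes.

```scala
// Minimal GraphX sketch: build a small property graph and run PageRank on it.
// The follower graph below is hypothetical and used only for illustration.
import org.apache.spark.sql.SparkSession
import org.apache.spark.graphx.{Edge, Graph}

object GraphXExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("graphx-example")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Vertices: (vertexId, userName); Edges: who follows whom
    val vertices = sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))
    val edges = sc.parallelize(Seq(
      Edge(1L, 2L, "follows"),
      Edge(2L, 3L, "follows"),
      Edge(3L, 1L, "follows")))

    val graph = Graph(vertices, edges)

    // PageRank until convergence tolerance of 0.001
    graph.pageRank(0.001).vertices.collect().foreach(println)

    spark.stop()
  }
}
```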
What you will learn
- Install and configure Apache Spark with various cluster managers and on AWS
- Set up a development environment for Apache Spark, including the Databricks Cloud notebook
- Find out how to operate on schema-aware data in Spark
- Get to grips with real-time streaming analytics using Spark Streaming and Structured Streaming
- Master supervised learning and unsupervised learning using MLlib
- Build a recommendation engine using MLlib (a minimal sketch follows this list)
- Process graphs using the GraphX and GraphFrames libraries
- Develop a set of common applications and project types, and solutions that solve complex big data problems
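As referenced in the recommendation-engine item above, here is a minimal collaborative-filtering sketch, assuming the DataFrame-based ALS API in spark.ml; the column names and toy ratings are hypothetical and not drawn from the book's recipes.

```scala
// Minimal collaborative-filtering sketch with ALS from spark.ml.
// The toy (user, item, rating) triples below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.recommendation.ALS

object ALSExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("als-example")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val ratings = Seq(
      (0, 10, 4.0f), (0, 11, 1.0f),
      (1, 10, 5.0f), (1, 12, 2.0f))
      .toDF("userId", "itemId", "rating")

    val als = new ALS()
      .setUserCol("userId")
      .setItemCol("itemId")
      .setRatingCol("rating")
      .setRank(5)
      .setMaxIter(5)

    val model = als.fit(ratings)

    // Predicted ratings for the known (user, item) pairs
    model.transform(ratings).show()

    spark.stop()
  }
}
```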
About the Author
Rishi Yadav has 19 years of experience in designing and developing enterprise applications. He is an open source software expert and advises American companies on big data and public cloud trends. Rishi was honored as one of Silicon Valley's 40 under 40 in 2014. He earned his bachelor's degree from the prestigious Indian Institute of Technology, Delhi, in 1998.
About 12 years ago, Rishi started InfoObjects, a company that helps data-driven businesses gain new insights into data. InfoObjects combines the power of open source and big data to solve business challenges for its clients and has a special focus on Apache Spark. The company has been on the Inc. 5000 list of the fastest growing companies for 6 years in a row. InfoObjects has also been named the best place to work in the Bay Area in 2014 and 2015.
Rishi is an open source contributor and active blogger.
Table of Contents
- Getting Started with Apache Spark
- Developing Applications with Spark
- Spark SQL
- Working with External Data Sources
- Spark Streaming
- Getting Started with Machine Learning
- Supervised Learning with MLlib – Regression
- Supervised Learning with MLlib – Classification
- Unsupervised Learning
- Recommendations Using Collaborative Filtering
- Graph Processing Using GraphX and GraphFrames
- Optimizations and Performance Tuning