Data is bigger, arrives faster, and comes in a variety of formats--and it all needs to be processed at scale for analytics or machine learning. But how can you process such varied workloads efficiently? Enter Apache Spark.

Updated to include Spark 3.0, this second edition shows data engineers and data scientists why structure and unification in Spark matters. Specifically, this book explains how to perform simple and complex data analytics and employ machine learning algorithms. Through step-by-step walk-throughs, code snippets, and notebooks, you'll be able to:

- Learn Python, SQL, Scala, or Java high-level Structured APIs
- Understand Spark operations and the SQL Engine
- Inspect, tune, and debug Spark operations with Spark configurations and the Spark UI
- Connect to data sources: JSON, Parquet, CSV, Avro, ORC, Hive, S3, or Kafka
- Perform analytics on batch and streaming data using Structured Streaming
- Build reliable data pipelines with open source Delta Lake and Spark
- Develop machine learning pipelines with MLlib and productionize models using MLflow