The Apache Spark & Scala course will enable learners to understand how Spark's in-memory data processing lets it run much faster than Hadoop MapReduce and supports near-real-time (NRT) analytics. Learners study RDDs and the different APIs and components Spark offers, such as Spark Streaming, MLlib, Spark SQL, and GraphX.
Apache Spark & Scala Training
Collabera TACT's Apache Spark & Scala Training helps participants develop an understanding of the Spark framework. The training covers Spark's in-memory data processing, which makes it run much faster than Hadoop MapReduce, and introduces RDDs and the different APIs on offer, such as Spark Streaming, MLlib, Spark SQL, and GraphX. Apache Spark & Scala Training proves to be a significant contributor to a developer's learning curve.
Who is this course for?
The primary beneficiaries of this training are those who wish to build a career in big data and want to stay updated with the latest advancements in efficiently processing ever-growing data using Spark-related projects. The following professionals can reap the maximum benefits from this training:
- Big Data Professionals
- Software Engineers and Software Developers
- Data Scientists and Data Analysts
Participants should have an understanding of basic programming concepts. Familiarity with Scala can be helpful but is not necessary.
Why should you learn Spark?
Apache Spark and Scala Certification is an important certification for a developer to have. In today's world, where data is growing at an unprecedented speed, there is a strong need to analyze this data for business insights and strategies. Collabera TACT's Spark and Scala Certification helps you master the nuances and environment of this framework. There are various big data processing frameworks, such as Hadoop, Spark, and Storm. Spark, however, can stream and process data up to a hundred times faster than Hadoop MapReduce, which makes it a preferred choice among developers for fast big data analysis.
Introduction to Scala for Apache Spark
- What is Scala?
- Why Scala for Spark?
- Scala in other frameworks
- Introduction to Scala REPL
- Basic Scala operations
- Variable Types in Scala
- Control Structures in Scala
- Foreach loop, Functions, Procedures
- Collections in Scala: Array, ArrayBuffer, Map, Tuples, Lists, and more.
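As a taste of the basics listed above, the short sketch below can be pasted into the Scala REPL; the variable names and sample values are illustrative, not part of the course material.

```scala
import scala.collection.mutable.ArrayBuffer

// Variables: val is immutable, var is mutable
val greeting: String = "Hello, Spark"
var total: Int = 0

// Control structure: if/else is an expression in Scala
val parity = if (total % 2 == 0) "even" else "odd"

// Foreach loop over a collection
val nums = List(1, 2, 3, 4, 5)
nums.foreach(n => total += n)   // total becomes 15

// Common collections from the syllabus
val arr  = Array(10, 20, 30)              // fixed-length, mutable elements
val buf  = ArrayBuffer(1, 2)              // growable array
buf += 3
val ages = Map("ann" -> 34, "bob" -> 28)  // immutable map
val pair = ("spark", 2014)                // tuple: fields accessed as _1, _2

// A named function and an equivalent anonymous function
def square(x: Int): Int = x * x
val squareAnon: Int => Int = x => x * x

println(s"$greeting: total=$total, ${pair._1}, squares=${nums.map(square)}")
```

Note that `parity` is evaluated before the `foreach` runs, so it reflects the initial value of `total`.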
OOPS and Functional Programming in Scala
- Class in Scala
- Getters and Setters
- Custom Getters and Setters
- Properties with only Getters
- Auxiliary Constructor
- Primary Constructor
- Companion Objects
- Extending a Class
- Overriding Methods
- Traits as Interfaces
- Layered Traits
- Functional Programming
- Higher Order Functions
- Anonymous Functions and more.
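The object-oriented and functional topics above fit into one small sketch. The `Employee`/`Greeter` names and the validation rule are invented for illustration; they are not part of the course material.

```scala
// A class with a primary constructor, custom getter/setter,
// an auxiliary constructor, and a companion object
class Employee(val name: String) {           // primary constructor
  private var _salary: Double = 0.0

  def this(name: String, salary: Double) = { // auxiliary constructor
    this(name)
    _salary = salary
  }

  def salary: Double = _salary               // getter
  def salary_=(s: Double): Unit = {          // custom setter with validation
    require(s >= 0, "salary cannot be negative")
    _salary = s
  }
}

object Employee {                            // companion object as a factory
  def apply(name: String, salary: Double) = new Employee(name, salary)
}

// Traits as interfaces, layered via super calls
trait Greeter { def greet: String = "Hello" }
trait Polite extends Greeter { override def greet: String = super.greet + ", please" }
class Receptionist extends Greeter with Polite

// Functional programming: anonymous and higher-order functions
val raise: Double => Double = s => s * 1.10                            // anonymous
def applyRaise(e: Employee, f: Double => Double): Double = f(e.salary) // higher-order

val emp = Employee("Asha", 1000.0)
println(new Receptionist().greet + " " + applyRaise(emp, raise))
```

Because `Polite` is mixed in last, its `greet` layers on top of `Greeter`'s, so the greeting comes out as "Hello, please".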
Introduction to Big Data and Apache Spark
- Introduction to big data
- Challenges with big data
- Batch vs. real-time big data analytics
- Batch Analytics – Hadoop Ecosystem Overview
- Real-time Analytics Options
- Streaming Data – Spark
- In-memory data – Spark
- What is Spark?
- Spark Ecosystem
- Modes of Spark
- Spark installation demo
- Overview of Spark on a cluster
- Spark Standalone cluster
- Spark Web UI.
Spark Common Operations
- Invoking the Spark Shell
- Creating the SparkContext
- Loading a file in the Spark Shell
- Performing basic operations on files in the Spark Shell
- Overview of SBT, building a Spark project with SBT
- Running a Spark project with SBT
- Local mode
- Spark mode
- Caching overview
- Distributed Persistence.
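As a rough sketch of the SBT setup mentioned above, a minimal `build.sbt` for a Spark project might look like the following; the project name and version numbers are illustrative, not prescribed by the course.

```scala
// build.sbt -- minimal sketch; adjust versions to match your environment
name := "spark-course-examples"
version := "0.1.0"
scalaVersion := "2.12.18"

// "provided" scope is typical when submitting to a cluster with spark-submit;
// drop it to run locally with `sbt run`
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.5.1" % "provided"
```

With this file in place, `sbt package` builds the project jar, which can then be submitted to a cluster with `spark-submit`.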
Playing with RDDs
- Transformations on RDDs
- Actions on RDDs
- Loading data into an RDD
- Saving data from an RDD
- Key-Value Pair RDD
- MapReduce and Pair RDD Operations
- Spark and Hadoop integration: HDFS
- Spark and Hadoop integration: YARN
- Handling Sequence Files, Partitioner.
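The MapReduce-style Pair-RDD operations above (map each record to a key-value pair, then reduce by key) can be previewed without a cluster, since plain Scala collections expose the same shape of pipeline. This is only an analogy: on Spark, `lines` would be an RDD (e.g. from `sc.textFile`), and the `groupBy`/`map` step would be a single `reduceByKey((a, b) => a + b)`.

```scala
// Word count in the Pair-RDD style, sketched on a local Scala collection.
val lines = Seq("spark makes big data simple", "big data needs spark")

val counts: Map[String, Int] =
  lines
    .flatMap(_.split("\\s+"))        // transformation: line -> words
    .map(word => (word, 1))          // transformation: word -> (key, value) pair
    .groupBy(_._1)                   // local stand-in for the shuffle by key
    .map { case (w, pairs) => (w, pairs.map(_._2).sum) } // reduce per key

println(counts.toSeq.sortBy(-_._2))
```

A key difference in the real thing: `flatMap` and `map` on an RDD are lazy transformations, and nothing executes until an action (such as `collect` or `saveAsTextFile`) is called.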
Spark Streaming and MLlib
- Spark Streaming Architecture
- First Spark Streaming program
- Transformations in Spark Streaming
- Fault tolerance in Spark Streaming
- Parallelism level
- Machine learning with Spark
- Data types
- Algorithms: statistics
- Classification and regression
- Collaborative filtering.
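Spark Streaming's windowed transformations operate on a rolling view of recent micro-batches. As a cluster-free sketch of that idea, the windowing can be imitated with `sliding` on a plain sequence; on Spark this would be a window operation (such as `reduceByWindow`) on a DStream, and the batch values below are made up for illustration.

```scala
// Each element stands in for the sum of one micro-batch (e.g. events per second).
val batchSums = Seq(4, 7, 2, 9, 5, 1)

// A window of 3 batches sliding by 1 batch, analogous to
// window(Seconds(3), Seconds(1)) over 1-second micro-batches
val windowTotals: Seq[Int] = batchSums.sliding(3, 1).map(_.sum).toSeq

println(windowTotals)  // one running total per 3-batch window
```

Each output element overlaps the previous one by two batches, which is exactly why windowed streaming aggregations recompute (or incrementally update) totals as the window slides.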
GraphX, Spark SQL and Performance Tuning in Spark
- Analyze Hive and Spark SQL architecture
- SQLContext in Spark SQL
- Working with DataFrames
- Implementing an example in Spark SQL
- Integrating Hive and Spark SQL
- Support for JSON and Parquet file formats
- Implementing data visualization in Spark
- Loading data
- Hive queries through Spark
- Testing tips in Scala
- Performance tuning tips in Spark
- Shared variables: broadcast variables
- Shared variables: accumulators.