
Big Data with Amazon Cloud, Hadoop/Spark and Docker


Level: Beginner


This is a 6-week evening program providing a hands-on introduction to the Hadoop and Spark ecosystem of Big Data technologies. The course covers the key components of that ecosystem: HDFS, MapReduce (via Hadoop Streaming), Hive, Pig, and Spark. All programming is done in Python, and the course begins with a review of the Python concepts needed for our examples. The course format is interactive; students will need to bring laptops to class.

* Tuition paid for part-time courses can be applied to the Data Science Bootcamps if admitted within 9 months.

Course Overview

This 6-week program provides a hands-on introduction to Apache Hadoop and Spark programming using Python and cloud computing. The key components covered include the Hadoop Distributed File System (HDFS), MapReduce using MRJob, Apache Hive, Apache Pig, and Apache Spark. Tools and platforms used include Docker, Amazon Web Services (AWS), and Databricks. In the first half of the program, students pull a pre-built Docker image and run most of the exercises locally in Docker containers; in the second half, they use their AWS and Databricks accounts to run cloud computing exercises. Students will need to bring their laptops to class.
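For students new to Docker, the local setup amounts to pulling the class image and starting an interactive container. A minimal sketch, assuming a placeholder image name (the actual image is provided in class):

    # Pull the course image (the name "codeva/bigdata-env" is hypothetical)
    docker pull codeva/bigdata-env
    # Start an interactive shell, mounting the current directory so exercise
    # files persist after the container exits
    docker run -it --rm -v "$PWD":/work -w /work codeva/bigdata-env bash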

Prerequisites

To get the most out of the class, you need to be familiar with Linux file systems, the Linux command line interface (CLI), and basic Linux commands such as cd, ls, and cp. You also need basic programming skills in Python and should be comfortable with a functional programming style, for example, using the map() function to split a list of strings into a nested list. Object-oriented programming (OOP) in Python is not required.
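As a quick self-check for the functional-programming prerequisite, the map() example mentioned above looks like this:

    # Split each string in a list into a nested list of words
    lines = ["hello big data", "spark and hadoop"]
    nested = list(map(str.split, lines))
    # nested == [['hello', 'big', 'data'], ['spark', 'and', 'hadoop']]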

Certificate

Certificates are awarded at the end of the program upon satisfactory completion of the course. Students are evaluated on a pass/fail basis according to their performance on the required homework and final project (where applicable). Students who complete 80% of the homework and attend at least 85% of all classes are eligible for the certificate of completion.

Syllabus

Unit 1 – Introduction to Hadoop

  • 1. Data Engineering Toolkits
    • Running Linux using Docker containers
    • Linux CLI commands and Bash scripts
    • Python basics
  • 2. Hadoop and MapReduce
    • Big Data Overview
    • HDFS
    • YARN
    • MapReduce
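As a preview of Unit 1, here are the basic HDFS shell commands used to move data in and out of the distributed file system; the paths below are hypothetical:

    hdfs dfs -mkdir -p /user/student/data        # create a directory in HDFS
    hdfs dfs -put local.txt /user/student/data   # copy a local file into HDFS
    hdfs dfs -ls /user/student/data              # list the directory contents
    hdfs dfs -cat /user/student/data/local.txt   # print a file stored in HDFS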

Unit 2 – MapReduce

  • 3. MapReduce using MRJob 1
    • Protocols for Input & Output
    • Filtering
  • 4. MapReduce using MRJob 2
    • Top-N
    • Inverted Index
    • Multi-step Jobs
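To give a flavor of the Unit 2 material, here is a minimal MRJob sketch of the classic word count: a mapper emits (word, 1) pairs and a reducer sums them. The file name word_count.py is hypothetical:

    # word_count.py
    import re

    from mrjob.job import MRJob

    WORD_RE = re.compile(r"[\w']+")

    class MRWordCount(MRJob):

        def mapper(self, _, line):
            # Emit (word, 1) for every word on the input line
            for word in WORD_RE.findall(line):
                yield word.lower(), 1

        def reducer(self, word, counts):
            # Sum the partial counts for each word
            yield word, sum(counts)

    if __name__ == "__main__":
        MRWordCount.run()

Run it locally with: python word_count.py input.txt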

Unit 3 – Apache Hive

  • 5. Apache Hive 1
    • Databases for Big Data
    • HiveQL and Querying Data
    • Windowing and Analytics Functions
    • MapReduce Scripts
  • 6. Apache Hive 2
    • Tables in Hive
    • Managed Tables and External Tables
    • Storage Formats
    • Partitions and Buckets
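As a taste of Unit 3, here is a HiveQL sketch touching external tables, partitions, and a windowing function; the table and column names are hypothetical:

    -- External table over data already sitting in HDFS, partitioned by date
    CREATE EXTERNAL TABLE page_views (user_id STRING, url STRING)
    PARTITIONED BY (view_date STRING)
    STORED AS PARQUET
    LOCATION '/data/page_views';

    -- Windowing: rank each user's URLs by visit count
    SELECT user_id, url, cnt,
           RANK() OVER (PARTITION BY user_id ORDER BY cnt DESC) AS rnk
    FROM (SELECT user_id, url, COUNT(*) AS cnt
          FROM page_views
          GROUP BY user_id, url) t;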

Unit 4 – Apache Pig

  • 7. Apache Pig 1
    • Overview
    • Pig Latin: Data Types
    • Pig Latin: Relational Operators
  • 8. Apache Pig 2
    • More Pig Latin: Relational Operators
    • More Pig Latin: Functions
    • Compiling Pig to MapReduce
    • The Parallel Clause
    • Join Optimizations
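In the spirit of Unit 4, a minimal Pig Latin sketch: word count built from relational operators, with the PARALLEL clause setting the number of reducers (file paths are hypothetical):

    -- Word count in Pig Latin
    lines   = LOAD 'input.txt' AS (line:chararray);
    words   = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
    grouped = GROUP words BY word PARALLEL 10;  -- PARALLEL sets reducer count
    counts  = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;
    STORE counts INTO 'wordcounts';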

Unit 5 – Apache Spark and AWS

  • 9. Apache Spark – Spark Core
    • Spark Overview
    • Running Spark using Databricks Notebooks
    • Working with PySpark: RDDs
    • Transformations and Actions
  • 10. Apache Spark – Spark SQL
    • Spark DataFrame
    • SQL Operations using Spark SQL
  • 11. Apache Spark – Spark ML
    • ML Pipeline using PySpark
  • 12. Amazon Elastic MapReduce
    • Overview
    • Amazon Web Services: IAM, EC2, S3
    • Creating EMR Cluster
    • Submitting Jobs
    • Intro to AWS CLI
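To preview Unit 5, here is a short PySpark sketch showing an RDD pipeline and a Spark SQL query; it is easiest to run in a Databricks notebook, where the SparkSession already exists:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("demo").getOrCreate()
    sc = spark.sparkContext

    # RDD API: transformations are lazy; the collect() action triggers execution
    rdd = sc.parallelize(["hello big data", "spark and hadoop"])
    counts = (rdd.flatMap(lambda line: line.split())
                 .map(lambda w: (w, 1))
                 .reduceByKey(lambda a, b: a + b))
    print(counts.collect())

    # DataFrame and Spark SQL
    df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()

And a single AWS CLI command of the kind introduced at the end of the unit; the flags below are a plausible sketch, not the exact cluster configuration used in class:

    aws emr create-cluster --name "spark-demo" \
        --release-label emr-6.9.0 \
        --applications Name=Spark \
        --instance-type m5.xlarge --instance-count 3 \
        --use-default-roles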