
Online or onsite, instructor-led live Apache Spark training courses demonstrate through hands-on practice how Spark fits into the Big Data ecosystem, and how to use Spark for data analysis.
Apache Spark training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out via an interactive remote desktop. Onsite live Apache Spark training can be carried out locally on customer premises in Indonesia or in NobleProg corporate training centers in Indonesia.
NobleProg -- Your Local Training Provider
Testimonials
Richard is very calm and methodical, with an analytic insight - exactly the qualities needed to present this sort of course.
Kieran Mac Kenna
Course: Spark for Developers
We know a lot more about the whole environment.
John Kidd
Course: Spark for Developers
The trainer made the class interesting and entertaining, which helps quite a bit with all-day training.
Ryan Speelman
Course: Spark for Developers
I think the trainer had an excellent style of combining humor and real life stories to make the subjects at hand very approachable. I would highly recommend this professor in the future.
Course: Spark for Developers
Ernesto did a great job explaining the high level concepts of using Spark and its various modules.
Michael Nemerouf
Course: Spark for Developers
This is one of the best hands-on programming courses, with exercises, that I have ever taken.
Laura Kahn
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
This is one of the best-quality online trainings I have ever taken in my 13-year career. Keep up the great work!
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
Richard was very willing to digress when we wanted to ask semi-related questions about things not on the syllabus. Explanations were clear and he was up front about caveats in any advice he gave us.
ARM Limited
Course: Spark for Developers
I liked the VM very much. The teacher was very knowledgeable about the topic, as well as other topics; he was very nice and friendly. I liked the facility in Dubai.
Safar Alqahtani - Elm Information Security
Course: Big Data Analytics in Health
practice tasks
Pawel Kozikowski - GE Medical Systems Polska Sp. Zoo
Course: Python and Spark for Big Data (PySpark)
Small group (4 trainees), so we could progress together. Also, the trainer could help everybody.
ICE International Copyright Enterprise Germany GmbH
Course: Spark for Developers
Ajay was very friendly, helpful, and also knowledgeable about the topic he was discussing.
Biniam Guulay - ICE International Copyright Enterprise Germany GmbH
Course: Spark for Developers
The lab exercises. Applying the theory from the first day in subsequent days.
Dell
Course: A Practical Introduction to Stream Processing
Organization; the trainer's expertise with the subject.
ENGIE- 101 Arch Street
Course: Python and Spark for Big Data (PySpark)
The trainer was passionate and knew his subject well. I appreciate his help and his answers to all our questions, as well as the suggested cases.
Course: A Practical Introduction to Stream Processing
Doing similar exercises in different ways really helps in understanding what each component (Hadoop/Spark, standalone/cluster) can do on its own and together. It gave me ideas on how I should test my application on my local machine when I develop, versus when it is deployed on a cluster.
Thomas Carcaud - IT Frankfurt GmbH
Course: Spark for Developers
The lessons were taught in a Jupyter notebook. The topics were structured in a logical sequence and naturally helped develop the session from the easier parts to the more complex. I'm already an advanced user of Python with a background in Machine Learning, so I found the course easier to follow than, possibly, some of my classmates who took the training course. I appreciate that some of the most elementary concepts were skipped and that the trainer focused on the most substantial matters.
Angela DeLaMora - ADT, LLC
Course: Python and Spark for Big Data (PySpark)
Hands-on training.
Abraham Thomas - PPL
Course: Python and Spark for Big Data (PySpark)
Individual attention.
ARCHANA ANILKUMAR - PPL
Course: Python and Spark for Big Data (PySpark)
Getting to learn Spark Streaming, Databricks, and Amazon Redshift.
Lim Meng Tee - Jobstreet.com Shared Services Sdn. Bhd.
Course: Apache Spark in the Cloud
The content and the knowledge.
Jobstreet.com Shared Services Sdn. Bhd.
Course: Apache Spark in the Cloud
It was very informative. I've had very little experience with Spark before and so far this course has provided a very good introduction to the subject.
Intelligent Medical Objects
Course: Apache Spark in the Cloud
It was great to get an understanding of what is going on under the hood of Spark. Knowing what's going on under the hood helps to better understand why your code is or is not doing what you expect it to do. A lot of the training was hands-on, which is always great, and the section on optimizations was exceptionally relevant to my current work, which was nice.
Intelligent Medical Objects
Course: Apache Spark in the Cloud
This is a great class! I most appreciate that Andras explains very clearly what Spark is all about, where it came from, and what problems it is able to solve. Much better than other introductions I've seen that just dive into how to use it. Andras has a deep knowledge of the topic and explains things very well.
Intelligent Medical Objects
Course: Apache Spark in the Cloud
The live examples that were given, which showed the basic aspects of Spark.
Intelligent Medical Objects
Course: Apache Spark in the Cloud
1. The right balance between high-level concepts and technical details. 2. Andras is very knowledgeable about what he teaches. 3. The exercises.
Steven Wu - Intelligent Medical Objects
Course: Apache Spark in the Cloud
Having hands-on sessions/assignments.
Poornima Chenthamarakshan - Intelligent Medical Objects
Course: Apache Spark in the Cloud
The trainer adjusted the training slightly based on audience requests, shedding light on a few different topics that we had asked about.
Intelligent Medical Objects
Course: Apache Spark in the Cloud
His pace was great. I loved the fact that he went into theory too, so that I understood WHY I would do the things he was asking.
Intelligent Medical Objects
Course: Apache Spark in the Cloud
Spark Subcategories in Indonesia
Spark Course Outlines in Indonesia
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
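The map/shuffle/reduce model these outcomes build on can be illustrated without a cluster. The sketch below is plain Python, not Spark itself; all names and data are illustrative. It walks through the three phases that Spark and Hadoop generalize across machines:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit (word, 1) pairs from each input line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    """Shuffle: group values by key, as a cluster would between stages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts per word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data with Spark", "Spark and Hadoop", "big data"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["spark"])  # 2
print(counts["big"])    # 2
```

On a real cluster the shuffle moves data over the network between executors; the logic per phase is the same.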
- Build collaborative filtering recommendation systems similar to Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
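The idea behind the collaborative filtering mentioned above can be sketched with a toy user-similarity computation. The ratings data here is hypothetical; production recommenders compute this over millions of users with distributed algorithms such as ALS:

```python
import math

# Hypothetical user -> item ratings; real systems use millions of rows.
ratings = {
    "alice": {"matrix": 5, "inception": 4, "up": 1},
    "bob":   {"matrix": 4, "inception": 5, "up": 2},
    "carol": {"matrix": 1, "inception": 2, "up": 5},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors (shared items only)."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = math.sqrt(sum(u[i] ** 2 for i in shared))
    nv = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

sim_ab = cosine(ratings["alice"], ratings["bob"])
sim_ac = cosine(ratings["alice"], ratings["carol"])
# Alice's tastes are closer to Bob's than to Carol's, so Bob's
# highly rated items are better recommendation candidates for Alice.
print(sim_ab > sim_ac)  # True
```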
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark.
- Use Hortonworks to reliably run Hadoop at a large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
- Process different types of data, including structured, unstructured, in-motion, and at-rest.
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streaming.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
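The record-by-record, windowed processing pattern shared by frameworks such as Spark Streaming and Kafka Streams can be sketched framework-free. The function below is illustrative only; real engines add distribution, fault tolerance, and event-time handling:

```python
from collections import Counter

def tumbling_window_counts(events, window_size):
    """Consume (timestamp, key) events one record at a time, emitting
    per-key counts each time a tumbling window closes."""
    window_start = None
    counts = Counter()
    for ts, key in events:  # records arrive one by one
        if window_start is None:
            window_start = ts
        while ts >= window_start + window_size:  # current window has closed
            yield (window_start, dict(counts))
            window_start += window_size
            counts = Counter()
        counts[key] += 1
    if counts:  # flush the final, partial window
        yield (window_start, dict(counts))

events = [(0, "click"), (1, "view"), (4, "click"), (5, "view"), (6, "view")]
windows = list(tumbling_window_counts(events, window_size=5))
print(windows)
# [(0, {'click': 2, 'view': 1}), (5, {'view': 2})]
```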
- Create Spark applications with the Scala programming language.
- Use Spark Streaming to process continuous streams of data.
- Process streams of real-time data with Spark Streaming.
- Implement a data pipeline architecture for processing big data.
- Develop a cluster infrastructure with Apache Mesos and Docker.
- Analyze data with Spark and Scala.
- Manage unstructured data with Apache Cassandra.
- Install and configure Apache Spark.
- Quickly process and analyze very large data sets.
- Understand the difference between Apache Spark and Hadoop MapReduce and when to use which.
- Integrate Apache Spark with other machine learning tools.
- Install and configure Apache Spark.
- Understand how .NET implements Spark APIs so that they can be accessed from a .NET application.
- Develop data processing applications using C# or F#, capable of handling data sets whose size is measured in terabytes and petabytes.
- Develop machine learning features for a .NET application using Apache Spark capabilities.
- Carry out exploratory analysis using SQL queries on big data sets.
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as a storage engine for on-premise Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
- Set up the necessary development environment to start building NLP pipelines with Spark NLP.
- Understand the features, architecture, and benefits of using Spark NLP.
- Use the pre-trained models available in Spark NLP to implement text processing.
- Learn how to build, train, and scale Spark NLP models for production-grade projects.
- Apply classification, inference, and sentiment analysis on real-world use cases (clinical data, customer behavior insights, etc.).
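The annotator-pipeline pattern Spark NLP is built around, where each stage consumes the annotations produced by the previous one, can be sketched conceptually. The functions below are illustrative stand-ins, not the Spark NLP API, and the toy lexicon stands in for a pre-trained model:

```python
def document_assembler(text):
    """Wrap raw text into an annotation structure (illustrative)."""
    return {"document": text}

def tokenizer(ann):
    """Add a token annotation derived from the document annotation."""
    ann["tokens"] = ann["document"].split()
    return ann

def sentiment(ann):
    """Toy lexicon-based scorer standing in for a pre-trained model."""
    positive = {"great", "good", "excellent"}
    negative = {"bad", "poor", "terrible"}
    score = sum((t.lower() in positive) - (t.lower() in negative)
                for t in ann["tokens"])
    ann["sentiment"] = ("positive" if score > 0
                        else "negative" if score < 0 else "neutral")
    return ann

def run_pipeline(text, stages):
    """Run the stages in order, threading annotations through."""
    ann = stages[0](text)
    for stage in stages[1:]:
        ann = stage(ann)
    return ann

result = run_pipeline("The service was excellent",
                      [document_assembler, tokenizer, sentiment])
print(result["sentiment"])  # positive
```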
- spark.mllib contains the original API built on top of RDDs.
- spark.ml provides a higher-level API built on top of DataFrames for constructing ML pipelines.
- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel.
- View real-world big data problems in terms of graphs, processes and traversals.
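The vertex-centric ("think like a vertex") model behind Pregel and GraphX can be sketched in a single process. This toy breadth-first hop count illustrates the superstep and message-passing idea only; it is not GraphX, which distributes the same loop across machines:

```python
def pregel_hops(edges, source):
    """Pregel-style shortest hop count on an undirected graph."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    INF = float("inf")
    state = {v: INF for v in adj}
    inbox = {source: 0}          # initial message to the source vertex
    while inbox:                 # one loop iteration = one superstep
        outbox = {}
        for v, msg in inbox.items():
            if msg < state[v]:   # vertex program: keep the smaller distance
                state[v] = msg
                for nbr in adj[v]:            # send messages along edges
                    d = msg + 1
                    if d < outbox.get(nbr, INF):
                        outbox[nbr] = d       # combine messages per vertex
        inbox = outbox           # delivered at the start of the next superstep
    return state

edges = [("a", "b"), ("b", "c"), ("a", "d"), ("d", "c"), ("c", "e")]
hops = pregel_hops(edges, "a")
print(hops)  # {'a': 0, 'b': 1, 'c': 2, 'd': 1, 'e': 3}
```

Vertices only exchange messages along edges and halt when no messages remain, which is what lets the model parallelize across a cluster.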
Last Updated: