Big Data Training Courses in Indonesia

Big Data Training Courses

Online or onsite, instructor-led live Big Data training courses start with an introduction to the fundamental concepts of Big Data, then progress into the programming languages and methodologies used to perform data analysis. Tools and infrastructure for enabling Big Data storage, distributed processing, and scalability are discussed, compared, and implemented in demo practice sessions.

Big Data training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live Big Data trainings in Indonesia can be carried out locally on customer premises or in NobleProg corporate training centers.

NobleProg -- Your Local Training Provider


Big Data Course Outlines in Indonesia

Course Name
Duration
Overview
21 hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets. By the end of this training, participants will be able to:
  • Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
  • Understand the features, core components, and architecture of Spark and Hadoop.
  • Learn how to integrate Spark, Hadoop, and Python for big data processing.
  • Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
  • Build collaborative filtering recommendation systems similar to those used by Netflix, YouTube, Amazon, Spotify, and Google (a brief sketch follows this outline).
  • Use Apache Mahout to scale machine learning algorithms.
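For orientation, here is a minimal sketch of the kind of collaborative filtering recommender covered above, built with Spark MLlib's ALS algorithm in Python; the file path, column names, and hyperparameters are illustrative assumptions, not course materials.

    # Minimal collaborative filtering sketch with Spark MLlib's ALS.
    # The input path, column names, and hyperparameters are assumptions.
    from pyspark.sql import SparkSession
    from pyspark.ml.recommendation import ALS

    spark = SparkSession.builder.appName("recommender-sketch").getOrCreate()

    # Expected input: one row per (user, item, rating) interaction.
    ratings = spark.read.csv("hdfs:///data/ratings.csv",
                             header=True, inferSchema=True)

    als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
              rank=10, maxIter=5, coldStartStrategy="drop")
    model = als.fit(ratings)

    # Top 5 item recommendations for every user.
    model.recommendForAllUsers(5).show(truncate=False)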
14 hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at beginner to intermediate-level data analysts and data scientists who wish to use Weka to perform data mining tasks. By the end of this training, participants will be able to:
  • Install and configure Weka.
  • Understand the Weka environment and workbench.
  • Perform data mining tasks using Weka.
14 hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at data analysts or anyone who wishes to use SPSS Modeler to perform data mining activities. By the end of this training, participants will be able to:
  • Understand the fundamentals of data mining.
  • Learn how to import data and assess its quality with SPSS Modeler.
  • Develop, deploy, and evaluate data models efficiently.
35 hours
Participants who complete this instructor-led, live training in Indonesia will gain a practical, real-world understanding of Big Data and its related technologies, methodologies and tools. Participants will have the opportunity to put this knowledge into practice through hands-on exercises. Group interaction and instructor feedback make up an important component of the class. The course starts with an introduction to the fundamental concepts of Big Data, then progresses into the programming languages and methodologies used to perform data analysis. Finally, we discuss the tools and infrastructure that enable Big Data storage, distributed processing, and scalability.
21 hours
In this instructor-led, live training in Indonesia, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises. By the end of this training, participants will be able to:
  • Learn how to use Spark with Python to analyze Big Data.
  • Work on exercises that mimic real-world cases.
  • Use different tools and techniques for big data analysis using PySpark (a brief sketch follows this outline).
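As a taste of the material, the sketch below shows a typical PySpark aggregation; the HDFS path and column names are hypothetical.

    # A typical PySpark big data analysis: load, filter, group, aggregate.
    # The HDFS path and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pyspark-analysis").getOrCreate()

    events = spark.read.csv("hdfs:///data/events.csv",
                            header=True, inferSchema=True)

    summary = (events
               .filter(F.col("value") > 0)
               .groupBy("category")
               .agg(F.count("*").alias("events"),
                    F.avg("value").alias("avg_value")))
    summary.show()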
7 hours
This course covers the Hive SQL language (also known as Hive QL, SQL on Hive, or HiveQL) for people who extract data from Hive.
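For illustration, a HiveQL query can be run from Python through Spark's Hive integration, as in the hedged sketch below; the database, table, and column names are hypothetical, and a configured Hive metastore is assumed.

    # Running a HiveQL extraction query via PySpark's Hive support.
    # Database, table, and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("hiveql-sketch")
             .enableHiveSupport()   # requires a configured Hive metastore
             .getOrCreate())

    result = spark.sql("""
        SELECT customer_id, SUM(amount) AS total_spent
        FROM sales.transactions
        WHERE year = 2023
        GROUP BY customer_id
        ORDER BY total_spent DESC
        LIMIT 10
    """)
    result.show()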
21 hours
Knowledge discovery in databases (KDD) is the process of discovering useful knowledge from a collection of data. Real-life applications of this data mining technique include marketing, fraud detection, telecommunications, and manufacturing. In this instructor-led, live course, we introduce the processes involved in KDD and carry out a series of exercises to practice the implementation of those processes.
Audience
  • Data analysts or anyone interested in learning how to interpret data to solve problems
Format of the Course
  • After a theoretical discussion of KDD, the instructor will present real-life cases which call for the application of KDD to solve a problem. Participants will prepare, select and cleanse sample data sets and use their prior knowledge about the data to propose solutions based on the results of their observations.
14 hours
Apache Kylin is an extreme distributed analytics engine for big data. In this instructor-led live training, participants will learn how to use Apache Kylin to set up a real-time data warehouse. By the end of this training, participants will be able to:
  • Consume real-time streaming data using Kylin
  • Utilize Apache Kylin's powerful features, including its rich SQL interface, Spark cubing, and subsecond query latency (a query sketch follows this course entry)
Note
  • We use the latest version of Kylin (as of this writing, Apache Kylin v2.0)
Audience
  • Big data engineers
  • Big Data analysts
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
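As a flavor of Kylin's SQL interface, the hedged Python sketch below posts a query to Kylin's REST endpoint; the host, project, credentials, and SQL refer to Kylin's bundled sample and default settings, and should all be treated as assumptions to adapt to your deployment.

    # Querying Apache Kylin through its REST API (assumed defaults:
    # port 7070, sample project "learn_kylin", credentials ADMIN/KYLIN).
    import requests

    KYLIN_URL = "http://kylin-host:7070/kylin/api/query"

    payload = {
        "sql": "SELECT part_dt, SUM(price) FROM kylin_sales GROUP BY part_dt",
        "project": "learn_kylin",
        "limit": 100,
    }

    resp = requests.post(KYLIN_URL, json=payload, auth=("ADMIN", "KYLIN"))
    resp.raise_for_status()
    for row in resp.json().get("results", []):
        print(row)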
14 hours
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end users to access, explore, and correlate large-scale, structured, semi-structured, and unstructured data in an easy-to-use fashion.
In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.
By the end of this training, participants will be able to:
  • Create, curate, and interactively explore an enterprise data lake
  • Access business intelligence data warehouses, transactional databases, and other analytic stores
  • Use a spreadsheet user interface to design end-to-end data processing pipelines
  • Access pre-built functions to explore complex data relationships
  • Use drag-and-drop wizards to visualize data and create dashboards
  • Use tables, charts, graphs, and maps to analyze query results
Audience
  • Data analysts
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at data scientists who wish to use Excel for data mining.
By the end of this training, participants will be able to:
  • Explore data with Excel to perform data mining and analysis.
  • Use Microsoft algorithms for data mining.
  • Understand concepts in Excel data mining.
21 hours
Dremio is an open-source "self-service data platform" that accelerates the querying of different types of data sources. Dremio integrates with relational databases, Apache Hadoop, MongoDB, Amazon S3, ElasticSearch, and other data sources. It supports SQL and provides a web UI for building queries.
In this instructor-led, live training, participants will learn how to install, configure, and use Dremio as a unifying layer for data analysis tools and the underlying data repositories.
By the end of this training, participants will be able to:
  • Install and configure Dremio
  • Execute queries against multiple data sources, regardless of location, size, or structure
  • Integrate Dremio with BI and data sources such as Tableau and Elasticsearch
Audience
  • Data scientists
  • Business analysts
  • Data engineers
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
Notes
  • To request a customized training for this course, please contact us to arrange.
21 hours
Apache Drill is a schema-free, distributed, in-memory columnar SQL query engine for Hadoop, NoSQL, and other cloud and file storage systems. The power of Apache Drill lies in its ability to join data from multiple data stores using a single query. Apache Drill supports numerous NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS, and local files. Apache Drill is the open-source version of Google's Dremel system, which is available as an infrastructure service called Google BigQuery.
In this instructor-led, live training, participants will learn the fundamentals of Apache Drill, then leverage the power and convenience of SQL to interactively query big data across multiple data sources, without writing code (a brief query sketch follows this course entry). Participants will also learn how to optimize their Drill queries for distributed SQL execution.
By the end of this training, participants will be able to:
  • Perform "self-service" exploration on structured and semi-structured data on Hadoop
  • Query known as well as unknown data using SQL queries
  • Understand how Apache Drill receives and executes queries
  • Write SQL queries to analyze different types of data, including structured data in Hive, semi-structured data in HBase or MapR-DB tables, and data saved in files such as Parquet and JSON
  • Use Apache Drill to perform on-the-fly schema discovery, bypassing the need for complex ETL and schema operations
  • Integrate Apache Drill with BI (Business Intelligence) tools such as Tableau, Qlikview, MicroStrategy, and Excel
Audience
  • Data analysts
  • Data scientists
  • SQL programmers
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
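To make Drill's "SQL without ETL" promise concrete, the hedged sketch below sends a query over a raw JSON file to Drill's REST endpoint; the host, file path, and column names are illustrative assumptions.

    # Apache Drill's signature trick: SQL over a raw file, no schema upfront.
    # Host, file path, and column names are assumptions; 8047 is Drill's
    # default web/REST port.
    import requests

    DRILL_URL = "http://localhost:8047/query.json"

    query = {
        "queryType": "SQL",
        "query": "SELECT name, age FROM dfs.`/tmp/people.json` WHERE age > 30",
    }

    resp = requests.post(DRILL_URL, json=query)
    resp.raise_for_status()
    for row in resp.json().get("rows", []):
        print(row)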
14 hours
Apache Arrow is an open-source in-memory data processing framework. It is often used together with other data science tools for accessing disparate data stores for analysis. It integrates well with other technologies such as GPU databases, machine learning libraries and tools, execution engines, and data visualization frameworks.
In this onsite instructor-led, live training, participants will learn how to integrate Apache Arrow with various data science frameworks to access data from disparate data sources (a brief Python sketch follows this course entry).
By the end of this training, participants will be able to:
  • Install and configure Apache Arrow in a distributed clustered environment
  • Use Apache Arrow to access data from disparate data sources
  • Use Apache Arrow to bypass the need for constructing and maintaining complex ETL pipelines
  • Analyze data across disparate data sources without having to consolidate it into a centralized repository
Audience
  • Data scientists
  • Data engineers
Format of the Course
  • Part lecture, part discussion, exercises and heavy hands-on practice
Note
  • To request a customized training for this course, please contact us to arrange.
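As a small taste of Arrow in Python, the sketch below builds an in-memory columnar table with pyarrow and round-trips it through Parquet; the output path is an assumption.

    # Building an in-memory Arrow table and round-tripping it via Parquet.
    # The /tmp output path is an assumption; pandas is needed for to_pandas().
    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({
        "city": ["Jakarta", "Surabaya", "Bandung"],
        "population_m": [10.6, 2.9, 2.5],
    })

    pq.write_table(table, "/tmp/cities.parquet")
    print(pq.read_table("/tmp/cities.parquet").to_pandas())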
14 hours
This instructor-led, live training (online or onsite) is aimed at software developers, managers, and business analysts who wish to use big data systems to store and retrieve large amounts of data. By the end of this training, participants will be able to:
  • Query large amounts of data efficiently.
  • Understand how Big Data systems store and retrieve data.
  • Use the latest big data systems available.
  • Wrangle data from data systems into reporting systems.
  • Learn to write SQL queries in:
    • MySQL
    • Postgres (a brief example follows this course entry)
    • Hive Query Language (HiveQL/HQL)
    • Redshift
Format of the Course
  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.
Course Customization Options
  • To request a customized training for this course, please contact us to arrange.
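As a flavor of the SQL work above, the hedged sketch below runs an aggregate query from Python against PostgreSQL; the connection details and the "sales" table are hypothetical, and the same query ports with minor dialect changes to MySQL, HiveQL, and Redshift.

    # Running an aggregate SQL query against PostgreSQL from Python.
    # Connection details and the "sales" table are hypothetical.
    import psycopg2

    conn = psycopg2.connect(host="localhost", dbname="analytics",
                            user="report", password="secret")
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT region, COUNT(*) AS orders
            FROM sales
            GROUP BY region
            ORDER BY orders DESC
        """)
        for region, orders in cur.fetchall():
            print(region, orders)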
14 hours
The objective of the course is to enable participants to master working with the SQL language in Oracle Database for data extraction at an intermediate level.
35 hours
Advances in technologies and the increasing amount of information are transforming how business is conducted in many industries, including government. Government data generation and digital archiving rates are on the rise due to the rapid growth of mobile devices and applications, smart sensors and devices, cloud computing solutions, and citizen-facing portals. As digital information expands and becomes more complex, information management, processing, storage, security, and disposition become more complex as well. New capture, search, discovery, and analysis tools are helping organizations gain insights from their unstructured data.
The government market is at a tipping point, realizing that information is a strategic asset, and government needs to protect, leverage, and analyze both structured and unstructured information to better serve and meet mission requirements. As government leaders strive to evolve data-driven organizations to successfully accomplish mission, they are laying the groundwork to correlate dependencies across events, people, processes, and information.
High-value government solutions will be created from a mashup of the most disruptive technologies:
  • Mobile devices and applications
  • Cloud services
  • Social business technologies and networking
  • Big Data and analytics
IDC predicts that by 2020, the IT industry will reach $5 trillion, approximately $1.7 trillion larger than today, and that 80% of the industry's growth will be driven by these 3rd Platform technologies. In the long term, these technologies will be key tools for dealing with the complexity of increased digital information.
Big Data is one of the intelligent industry solutions and allows government to make better decisions by taking action based on patterns revealed by analyzing large volumes of data, related and unrelated, structured and unstructured. But accomplishing these feats takes far more than simply accumulating massive quantities of data. "Making sense of these volumes of Big Data requires cutting-edge tools and technologies that can analyze and extract useful knowledge from vast and diverse streams of information," Tom Kalil and Fen Zhao of the White House Office of Science and Technology Policy wrote in a post on the OSTP Blog.
The White House took a step toward helping agencies find these technologies when it established the National Big Data Research and Development Initiative in 2012. The initiative included more than $200 million to make the most of the explosion of Big Data and the tools needed to analyze it.
The challenges that Big Data poses are nearly as daunting as its promise is encouraging. Storing data efficiently is one of these challenges. As always, budgets are tight, so agencies must minimize the per-megabyte price of storage and keep the data within easy access so that users can get it when they want it and how they need it. Backing up massive quantities of data heightens the challenge.
Analyzing the data effectively is another major challenge. Many agencies employ commercial tools that enable them to sift through the mountains of data, spotting trends that can help them operate more efficiently. (A recent study by MeriTalk found that federal IT executives think Big Data could help agencies save more than $500 billion while also fulfilling mission objectives.)
Custom-developed Big Data tools also are allowing agencies to address the need to analyze their data. For example, the Oak Ridge National Laboratory's Computational Data Analytics Group has made its Piranha data analytics system available to other agencies. The system has helped medical researchers find a link that can alert doctors to aortic aneurysms before they strike. It's also used for more mundane tasks, such as sifting through résumés to connect job candidates with hiring managers.
21 hours
Audience
If you are trying to make sense of the data you have access to, or want to analyse unstructured data available on the net (such as Twitter, LinkedIn, etc.), this course is for you. It is mostly aimed at decision makers and people who need to choose which data is worth collecting and which is worth analyzing. It is not aimed at people configuring the solution, though those people will benefit from the big picture.
Delivery Mode
During the course, delegates will be presented with working examples of mostly open source technologies. Short lectures will be followed by presentations and simple exercises by the participants.
Content and Software Used
All software used is updated each time the course is run, so we check the newest versions possible. The course covers the process from obtaining, formatting, processing, and analysing data, to explaining how to automate the decision-making process with machine learning.
35 hours
Day 1 - provides a high-level overview of essential Big Data topic areas. The module is divided into a series of sections, each of which is accompanied by a hands-on exercise.
Day 2 - explores a range of topics relating to analysis practices and tools for Big Data environments. It does not get into implementation or programming details, but instead keeps coverage at a conceptual level, focusing on topics that enable participants to develop a comprehensive understanding of the common analysis functions and features offered by Big Data solutions.
Day 3 - provides an overview of the fundamental and essential topic areas relating to Big Data solution platform architecture. It covers Big Data mechanisms required for the development of a Big Data solution platform and architectural options for assembling a data processing platform. Common scenarios are also presented to provide a basic understanding of how a Big Data solution platform is generally used.
Day 4 - builds upon Day 3 by exploring advanced topics relating to Big Data solution platform architecture. In particular, different architectural layers that make up the Big Data solution platform are introduced and discussed, including data sources, data ingress, data storage, data processing, and security.
Day 5 - covers a number of exercises and problems designed to test delegates' ability to apply knowledge of topics covered on Days 3 and 4.
21 hours
Big Data is a term that refers to solutions destined for storing and processing large data sets. Developed by Google initially, these Big Data solutions have evolved and inspired other similar projects, many of which are available as open-source. R is a popular programming language in the financial industry.
14 hours
When traditional storage technologies cannot handle the amount of data you need to store, there are hundreds of alternatives. This course guides participants through the alternatives for storing and analyzing Big Data, along with their pros and cons. The course is mostly focused on discussion and presentation of solutions, though hands-on exercises are available on demand.
14 hours
The course is part of the Data Scientist skill set (Domain: Data and Technology).
35 hours
Big data refers to data sets that are so voluminous and complex that traditional data processing application software is inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, and information privacy.
14 hours
Vespa is an open-source big data processing and serving engine created by Yahoo. It is used to respond to user queries, make recommendations, and provide personalized content and advertisements in real time.
This instructor-led, live training introduces the challenges of serving large-scale data and walks participants through the creation of an application that can compute responses to user requests over large datasets in real time.
By the end of this training, participants will be able to:
  • Use Vespa to quickly compute data (store, search, rank, organize) at serving time while a user waits
  • Implement Vespa into existing applications involving feature search, recommendations, and personalization
  • Integrate and deploy Vespa with existing big data systems such as Hadoop and Storm
Audience
  • Developers
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
To meet the compliance requirements of regulators, CSPs (Communication Service Providers) can tap into Big Data Analytics, which not only helps them meet compliance but, within the scope of the same project, also allows them to increase customer satisfaction and thus reduce churn. In fact, since compliance is related to quality of service tied to a contract, any initiative toward meeting compliance will improve the "competitive edge" of the CSPs. Therefore, it is important that regulators be able to advise/guide a set of Big Data analytic practices for CSPs that will be of mutual benefit to regulators and CSPs.
The course consists of 8 modules of 2 hours each, delivered over 2 days (16 hours).
35 hours
Advances in technologies and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data poses are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another.
In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges, and results.
By the end of this training, participants will be able to:
  • Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
  • Implement industrial big data storage and processing solutions for data analysis
  • Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation
Audience
  • Law enforcement specialists with a technical background
Format of the course
  • Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
This classroom-based training session will explore Big Data. Delegates will have computer-based examples and case study exercises to undertake with relevant big data tools.
14 hours
Objective: This training course aims at helping attendees understand why Big Data is changing our lives and how it is altering the way businesses see us as consumers. Indeed, users of big data in businesses find that big data unleashes a wealth of information and insights that translate to higher profits, reduced costs, and less risk. However, the downside can be frustration when too much emphasis is put on individual technologies and not enough focus on the pillars of big data management.
Attendees will learn during this course how to manage big data using its three pillars of data integration, data governance, and data security in order to turn big data into real business value. Different exercises conducted on a case study of customer management will help attendees better understand the underlying processes.
7 hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at technical persons who wish to learn how to implement a machine learning strategy while maximizing the use of big data. By the end of this training, participants will:
  • Understand the evolution and trends for machine learning.
  • Know how machine learning is being used across different industries.
  • Become familiar with the tools, skills and services available to implement machine learning within an organization.
  • Understand how machine learning can be used to enhance data mining and analysis.
  • Learn what a data middle backend is, and how it is being used by businesses.
  • Understand the role that big data and intelligent applications are playing across industries.
7 hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at software engineers who wish to use Sqoop and Flume for big data. By the end of this training, participants will be able to:
  • Ingest big data with Sqoop and Flume.
  • Ingest data from multiple data sources.
  • Move data from relational databases to HDFS and Hive (a PySpark sketch of this step follows the outline).
  • Export data from HDFS to a relational database.
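Sqoop itself is driven from the command line, but the relational-to-HDFS movement it performs can be sketched in PySpark through a JDBC read; the block below is an alternative illustration rather than the Sqoop tool the course teaches, and the JDBC URL, table, and credentials are assumptions.

    # Illustrative PySpark equivalent of a Sqoop import: read a relational
    # table over JDBC and land it on HDFS as Parquet. URL, table, and
    # credentials are assumptions; a MySQL JDBC driver must be on the classpath.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("jdbc-ingest-sketch").getOrCreate()

    customers = (spark.read.format("jdbc")
                 .option("url", "jdbc:mysql://db-host:3306/shop")
                 .option("dbtable", "customers")
                 .option("user", "etl")
                 .option("password", "secret")
                 .load())

    customers.write.mode("overwrite").parquet("hdfs:///warehouse/customers")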
28 hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at technical persons who wish to deploy Talend Open Studio for Big Data to simplify the process of reading and crunching through Big Data. By the end of this training, participants will be able to:
  • Install and configure Talend Open Studio for Big Data.
  • Connect with Big Data systems such as Cloudera, HortonWorks, MapR, Amazon EMR and Apache.
  • Understand and set up Open Studio's big data components and connectors.
  • Configure parameters to automatically generate MapReduce code.
  • Use Open Studio's drag-and-drop interface to run Hadoop jobs.
  • Prototype big data pipelines.
  • Automate big data integration projects.

Last Updated:


Course Discounts

No course discounts for now.



NobleProg is growing fast!

We are looking to expand our presence in Indonesia!

As a Business Development Manager you will:

  • expand business in Indonesia
  • recruit local talent (sales, agents, trainers, consultants)

We offer:

  • Artificial Intelligence and Big Data systems to support your local operation
  • high-tech automation
  • continuously upgraded course catalogue and content
  • good fun in an international team

If you are interested in joining a growing organisation and running a high-tech, high-quality training and consulting business:

Apply now!
