Hadoop - Big Data Overview
Big data is a collection of large datasets that cannot be processed using traditional
computing techniques. It is not merely data; it has become a complete subject, involving
various tools, techniques and frameworks.
Thus Big Data includes huge volume, high velocity, and an extensible variety of data. The
data in it is of three types:
• Structured data: relational data.
• Semi-structured data: XML data.
• Unstructured data: Word documents, PDFs, text, media logs.
Benefits of Big Data
Big data is critical to our lives and is emerging as one of the most important
technologies in the modern world. The following are just a few benefits that are well
known to all of us:
• Using the information kept in social networks like Facebook, marketing
agencies learn about the response to their campaigns, promotions, and other
advertising media.
• Using information in social media, such as the preferences and product perceptions of
their consumers, product companies and retail organizations plan their
production.
• Using data from the previous medical history of patients, hospitals are
providing better and quicker service.
Big Data Technologies
Big data technologies are important in providing more accurate analysis, which may lead to
more concrete decision-making resulting in greater operational efficiencies, cost reductions,
and reduced risks for the business.
To harness the power of big data, you would require an infrastructure that can manage and
process huge volumes of structured and unstructured data in real time and can protect data
privacy and security.
There are various technologies in the market from different vendors including Amazon,
IBM, Microsoft, etc., to handle big data. While looking into the technologies that handle big
data, we examine the following two classes of technology:
Operational Big Data
This includes systems like MongoDB that provide operational capabilities for real-time,
interactive workloads where data is primarily captured and stored.
NoSQL Big Data systems are designed to take advantage of new cloud computing
architectures that have emerged over the past decade to allow massive computations to be
run inexpensively and efficiently. This makes operational big data workloads much easier to
manage, cheaper, and faster to implement.
Some NoSQL systems can provide insights into patterns and trends based on real-time data
with minimal coding and without the need for data scientists and additional infrastructure.
Analytical Big Data
This includes systems like Massively Parallel Processing (MPP) database systems and
MapReduce that provide analytical capabilities for retrospective and complex analysis that
may touch most or all of the data.
MapReduce provides a new method of analyzing data that is complementary to the
capabilities provided by SQL, and systems based on MapReduce can be scaled up from
single servers to thousands of high- and low-end machines.
These two classes of technology are complementary and frequently deployed together.
Big Data Challenges
The major challenges associated with big data are as follows:
• Capturing data, curation, storage, searching, sharing, transfer, analysis and
presentation
To address these challenges, organizations normally take the help of enterprise servers.
Characteristics Of 'Big Data'
(i) Volume – The name 'Big Data' itself is related to a size which is enormous. The size of
data plays a very crucial role in determining the value of data. Whether a particular dataset
can actually be considered Big Data or not depends upon its volume.
Hence, 'Volume' is one characteristic which needs to be considered while dealing with 'Big
Data'.
(ii) Variety – The next aspect of 'Big Data' is its variety.
Variety refers to heterogeneous sources and the nature of data, both structured and unstructured.
In earlier days, spreadsheets and databases were the only sources of data considered by
most applications. Nowadays, data in the form of emails, photos, videos, monitoring
devices, PDFs, audio, etc. is also being considered in analysis applications. This variety of
unstructured data poses certain issues for storing, mining and analysing data.
(iii) Velocity – The term 'velocity' refers to the speed of data generation. How fast the
data is generated and processed to meet demands determines the real potential of the data.
Big Data Velocity deals with the speed at which data flows in from sources like business
processes, application logs, networks and social media sites, sensors, mobile devices, etc.
The flow of data is massive and continuous.
(iv) Variability – This refers to the inconsistency which the data can show at times,
hampering the process of handling and managing the data effectively.
Advantages Of Big Data Processing
The ability to process 'Big Data' brings multiple benefits, such as:
• Businesses can utilize outside intelligence while taking decisions
Access to social data from search engines and sites like Facebook and Twitter is enabling
organizations to fine-tune their business strategies.
• Improved customer service
Traditional customer feedback systems are getting replaced by new systems designed
with 'Big Data' technologies. In these new systems, Big Data and natural language
processing technologies are being used to read and evaluate consumer responses.
• Early identification of risk to the product/services, if any
• Better operational efficiency
'Big Data' technologies can be used for creating a staging area or landing zone for new data
before identifying what data should be moved to the data warehouse. In addition, such
integration of 'Big Data' technologies and the data warehouse helps an organization offload
infrequently accessed data.
Traditional Approach
In this approach, an enterprise has a computer to store and process big data. Here data
is stored in an RDBMS like Oracle Database, MS SQL Server or DB2, and
sophisticated software can be written to interact with the database, process the required data
and present it to the users for analysis purposes.
Limitation
This approach works well when the volume of data is small enough to be accommodated by
standard database servers, or up to the limit of the processor that is processing the data.
But when it comes to dealing with huge amounts of data, it is a tedious task to process
such data through a traditional database server.
Google’s Solution
Google solved this problem using an algorithm called MapReduce. This algorithm divides
the task into small parts and assigns those parts to many computers connected over the
network, and collects the results to form the final result dataset.
Hadoop
Doug Cutting, Mike Cafarella and team took the solution provided by Google and started an
Open Source Project called HADOOP in 2005 and Doug named it after his son's toy
elephant. Now Apache Hadoop is a registered trademark of the Apache Software
Foundation.
Hadoop runs applications using the MapReduce algorithm, where the data is processed in
parallel on different CPU nodes. In short, the Hadoop framework is capable enough to
develop applications that run on clusters of computers and perform complete statistical
analysis of huge amounts of data.
Apache Hadoop consists of two sub-projects –
1. Hadoop MapReduce: MapReduce is a computational model and software framework for
writing applications which are run on Hadoop. These MapReduce programs are capable of
processing enormous data in parallel on large clusters of computation nodes.
2. HDFS (Hadoop Distributed File System): HDFS takes care of the storage part of Hadoop
applications. MapReduce applications consume data from HDFS. HDFS creates multiple
replicas of data blocks and distributes them on compute nodes in the cluster. This
distribution enables reliable and extremely rapid computations.
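For illustration, the block replication factor is controlled by the dfs.replication property in
the hdfs-site.xml configuration file. A minimal sketch follows (the common default value of
3 is shown; the exact file location depends on the installation):

    <?xml version="1.0"?>
    <configuration>
      <!-- Number of replicas HDFS keeps for each data block -->
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
    </configuration>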
Hadoop Architecture
Although Hadoop is best known for MapReduce and its distributed file system, HDFS, the
term is also used for a family of related projects that fall under the umbrella of distributed
computing and large-scale data processing. Other Hadoop-related projects at Apache
include Hive, HBase, Mahout, Sqoop, Flume and ZooKeeper.
A Hadoop cluster consists of data centers, racks and the nodes which actually execute jobs.
Here, a data center consists of racks and a rack consists of nodes. The network bandwidth
available to processes varies depending upon the location of the processes. That is, the
available bandwidth becomes smaller as we move away from:
• Processes on the same node
• Different nodes on the same rack
• Nodes on different racks of the same data center
• Nodes in different data centers
Hadoop is an Apache open source framework written in Java that allows
distributed processing of large datasets across clusters of computers using simple
programming models. A Hadoop framework application works in an environment that
provides distributed storage and computation across clusters of computers. Hadoop is
designed to scale up from a single server to thousands of machines, each offering local
computation and storage.
The Hadoop framework includes the following four modules:
• Hadoop Common: These are the Java libraries and utilities required by other Hadoop
modules. These libraries provide filesystem and OS level abstractions and contain
the necessary Java files and scripts required to start Hadoop.
• Hadoop YARN: This is a framework for job scheduling and cluster resource
management.
• Hadoop Distributed File System (HDFS™): A distributed file system that provides
high-throughput access to application data.
• Hadoop MapReduce: This is a YARN-based system for parallel processing of large
data sets.
Together, these four modules make up the core components of the Hadoop framework.
Since 2012, the term "Hadoop" often refers not just to the base modules mentioned above
but also to the collection of additional software packages that can be installed on top of or
alongside Hadoop, such as Apache Pig, Apache Hive, Apache HBase, Apache Spark etc.
Overview of HDFS
HDFS has many similarities with other distributed file systems, but is different in several
respects. One noticeable difference is HDFS's write-once-read-many model that relaxes
concurrency control requirements, simplifies data coherency, and enables high-throughput
access. Another unique attribute of HDFS is the viewpoint that it is usually better to locate
processing logic near the data rather than moving the data to the application space. HDFS
rigorously restricts data writing to one writer at a time. Bytes are always appended to the
end of a stream, and byte streams are guaranteed to be stored in the order written.
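A minimal Java sketch of this write-once-read-many model, using the standard
org.apache.hadoop.fs.FileSystem API (the file path is hypothetical, and a configured
cluster is assumed; this is illustrative, not the only way to use HDFS):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteOnceExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration(); // reads core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/tmp/example.txt"); // hypothetical path

            // One writer at a time: bytes are appended in order as they are written
            try (FSDataOutputStream out = fs.create(file)) {
                out.writeUTF("hello hdfs");
            }

            // Many readers: the byte stream comes back in the order it was written
            try (FSDataInputStream in = fs.open(file)) {
                System.out.println(in.readUTF());
            }
        }
    }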
HDFS has many goals. Here are some of the most notable:
• Fault tolerance by detecting faults and applying quick, automatic recovery
• Data access via MapReduce streaming
• Simple and robust coherency model
• Processing logic close to the data, rather than the data close to the processing logic
• Portability across heterogeneous commodity hardware and operating systems
• Scalability to reliably store and process large amounts of data
• Economy by distributing data and processing across clusters of commodity personal
computers
• Efficiency by distributing data and logic to process it in parallel on nodes where data
is located
• Reliability by automatically maintaining multiple copies of data and automatically
redeploying processing logic in the event of failures
MapReduce
Hadoop MapReduce is a software framework for easily writing applications which process
vast amounts of data in parallel on large clusters (thousands of nodes) of commodity
hardware in a reliable, fault-tolerant manner. The term MapReduce actually refers to the
following two different tasks that Hadoop programs perform:
• The Map Task: This is the first task, which takes input data and converts it into a set
of data, where individual elements are broken down into tuples (key/value pairs).
• The Reduce Task: This task takes the output from a map task as input and combines
those data tuples into a smaller set of tuples. The reduce task is always performed
after the map task.
Typically both the input and the output are stored in a file system. The framework takes care
of scheduling tasks, monitoring them and re-executing failed tasks.
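As a concrete sketch of these two tasks, the classic word-count example below uses the
standard org.apache.hadoop.mapreduce API: the map task emits a (word, 1) pair for each
word, and the reduce task sums the pairs for each word (a sketch only; the class and
variable names are our own):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map task: break each input line into (word, 1) key/value pairs
    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE); // emit (word, 1)
                }
            }
        }
    }

    // Reduce task: combine the tuples for each word into one (word, total) tuple
    class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }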
The MapReduce framework consists of a single master JobTracker and one slave
TaskTracker per cluster node. The master is responsible for resource management, tracking
resource consumption/availability and scheduling the jobs' component tasks on the slaves,
monitoring them and re-executing failed tasks. The slave TaskTrackers execute the tasks
as directed by the master and provide task-status information to the master periodically.
The JobTracker is a single point of failure for the Hadoop MapReduce service, which means
that if the JobTracker goes down, all running jobs are halted.
Hadoop Distributed File System
Hadoop can work directly with any mountable distributed file system such as Local FS,
HFTP FS, S3 FS, and others, but the most common file system used by Hadoop is the
Hadoop Distributed File System (HDFS).
Functions of a NameNode:
Let’s list out various functions of a NameNode:
1. The NameNode maintains and manages the file system namespace. Any
modifications in the file system namespace or in its properties are tracked by the
NameNode.
2. It directs the DataNodes (slave nodes) to execute the low-level I/O operations.
3. It keeps a record of how the files in HDFS are divided into blocks, in which nodes these
blocks are stored and by and large the NameNode manages cluster configuration.
4. It maps a file name to a set of blocks and maps a block to the DataNodes where it is
located.
5. It records the metadata of all the files stored in the cluster, e.g. the location, the size of
the files, permissions, hierarchy, etc.
6. With the help of a transactional log, that is, the EditLog, the NameNode records each
and every change that takes place to the file system metadata. For example, if a file is
deleted in HDFS, the NameNode will immediately record this in the EditLog.
7. The NameNode is also responsible for taking care of the replication factor of all the
blocks. If there is a change in the replication factor of any of the blocks, the NameNode
will record this in the EditLog.
8. The NameNode regularly receives a Heartbeat and a Blockreport from all the DataNodes
in the cluster to make sure that the DataNodes are working properly. A Blockreport contains
a list of all blocks on a DataNode.
9. In case of a DataNode failure, the NameNode chooses new DataNodes for new replicas,
balances disk usage and also manages the communication traffic to the DataNodes.
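To illustrate the kind of metadata the NameNode serves, a client can query it through the
FileSystem API. The minimal Java sketch below lists the path, size, replication factor and
permissions of the entries under a directory (the '/user/data' path is hypothetical, and a
running cluster is assumed):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListNameNodeMetadata {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Each FileStatus is answered from the NameNode's file system metadata
            for (FileStatus st : fs.listStatus(new Path("/user/data"))) { // hypothetical path
                System.out.printf("%s size=%d replication=%d perms=%s%n",
                        st.getPath(), st.getLen(), st.getReplication(), st.getPermission());
            }
        }
    }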
DataNodes:
DataNodes are the slave nodes in HDFS, just like any average car in front of a Lamborghini!
Unlike the NameNode, a DataNode is commodity hardware, that is, an inexpensive system
which is not of high quality or high availability. A DataNode is a block server that stores the
data in a local file system such as ext3 or ext4.
Functions of DataNodes:
Let’s list out various functions of Datanodes:
1. DataNodes perform the low-level read and write requests from the file system’s clients.
2. They are also responsible for creating blocks, deleting blocks and replicating them
based on the decisions taken by the NameNode.
3. They regularly send a report on all the blocks present in the cluster to the NameNode.
4. DataNodes also enable the pipelining of data.
5. They forward data to other specified DataNodes.
6. DataNodes send heartbeats to the NameNode once every 3 seconds, to report the overall
health of HDFS.
7. The DataNode stores each block of HDFS data in separate files in its local file system.
8. When a DataNode starts up, it scans through its local file system, creates a list
of all HDFS data blocks that relate to each of these local files and sends a Blockreport to
the NameNode.
The Hadoop Distributed File System (HDFS) is based on the Google File System (GFS) and
provides a distributed file system that is designed to run on large clusters (thousands of
computers) of small computer machines in a reliable, fault-tolerant manner.
HDFS uses a master/slave architecture where the master consists of a single NameNode that
manages the file system metadata and one or more slave DataNodes that store the actual data.
A file in an HDFS namespace is split into several blocks and those blocks are stored in a set
of DataNodes. The NameNode determines the mapping of blocks to the DataNodes. The
DataNodes take care of read and write operations with the file system. They also take care of
block creation, deletion and replication based on instructions given by the NameNode. HDFS
provides a shell like any other file system, and a list of commands is available to interact
with the file system.
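For example, a few commonly used HDFS shell commands (the directory and file names
here are hypothetical, and a configured cluster is assumed):

    hdfs dfs -mkdir /user/demo                 # create a directory in HDFS
    hdfs dfs -put localfile.txt /user/demo     # copy a local file into HDFS
    hdfs dfs -ls /user/demo                    # list the directory contents
    hdfs dfs -cat /user/demo/localfile.txt     # print a file's contents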
How Does Hadoop Work?
Stage 1
A user/application can submit a job to Hadoop (via a Hadoop job client) for the required
process by specifying the following items (a minimal driver sketch follows this list):
1. The location of the input and output files in the distributed file system.
2. The Java classes, in the form of a JAR file, containing the implementation of the map
and reduce functions.
3. The job configuration, set through different parameters specific to the job.
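A minimal driver sketch showing how these three items are specified in Java (it reuses the
hypothetical WordCountMapper and WordCountReducer classes sketched earlier; the input
and output paths are passed as command-line arguments):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");   // item 3: job configuration
            job.setJarByClass(WordCountDriver.class);        // item 2: jar with map/reduce classes
            job.setMapperClass(WordCountMapper.class);
            job.setReducerClass(WordCountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // item 1: input location
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // item 1: output location
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }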
Stage 2
The Hadoop job client then submits the job (jar/executable, etc.) and configuration to the
JobTracker, which then assumes the responsibility of distributing the software/configuration
to the slaves, scheduling tasks and monitoring them, and providing status and diagnostic
information to the job client.
Stage 3
The TaskTrackers on different nodes execute the tasks as per the MapReduce implementation,
and the output of the reduce function is stored in output files on the file system.
Advantages of Hadoop
• The Hadoop framework allows the user to quickly write and test distributed systems. It is
efficient, and it automatically distributes the data and work across the machines and, in
turn, utilizes the underlying parallelism of the CPU cores.
• Hadoop does not rely on hardware to provide fault tolerance and high availability
(FTHA); rather, the Hadoop library itself has been designed to detect and handle failures
at the application layer.
• Servers can be added or removed from the cluster dynamically and Hadoop continues
to operate without interruption.
• Another big advantage of Hadoop is that, apart from being open source, it is
compatible with all platforms since it is Java based.
In conclusion, the promise and potential of big data need to be matched by a considered
approach to collection, storage, licensing and use. Without a well-thought-through data
strategy, remedies for misuse may be hard to find. Traditional copyright protection is
unlikely to assist, and contract and confidential-information remedies are likely to be far
more significant.
Analysis of Big Data is characterised by the use of real-time information and very large sets
of information from disparate sources. Much of the relevant data is unstructured or only
semi-structured, and will often lack originality, and even meaning, without the work of the data
analyst to extract insights. On the one hand we have raw data with little form and little
meaning, and on the other, immense value when it is combined with other data sources and
advanced techniques of evaluation. It is entirely possible that after evaluation, a new
structured data set may also be created which contains the real insights, and which although
highly valuable, may be simply expressed.
Big data can be analyzed with the software tools commonly used as part of advanced
analytics disciplines such as predictive analytics, data mining, text analytics and statistical
analysis. Mainstream Business Intelligence software and data visualization tools can also
play a role in the analysis process. But the semi-structured and unstructured data may not fit
well in traditional data warehouses based on relational databases. Furthermore, data
warehouses may not be able to handle the processing demands posed by sets of big data that
need to be updated frequently or even continually -- for example, real-time data on the
performance of mobile applications or of oil and gas pipelines. As a result, many
organizations looking to collect, process and analyze big data have turned to a newer class of
technologies that includes Hadoop and related tools such as YARN, MapReduce, Spark,
Hive and Pig as well as NoSQL databases. Those technologies form the core of an open
source software framework that supports the processing of large and diverse data sets across
clustered systems.
The biggest challenge does not seem to be the technology itself, as this is evolving much
more rapidly than human skills, but rather making sure we have enough skills to make use of
the technology and make sense of the data collected. We also need to resolve many legal
issues, such as intellectual property, cyber security and a big data code of conduct, before the
promises of big data innovation, growth and long-term sustainability can be realized.