© Hortonworks Inc. 2014
Apache Hadoop YARN
Best Practices
Zhijie Shen
zshen [at] hortonworks.com
Varun Vasudev
vvasudev [at] hortonworks.com
Page 1
Who we are
• Zhijie Shen
– Software engineer at Hortonworks
– Apache Hadoop Committer
– Apache Samza Committer and PPMC member
– PhD from National University of Singapore
• Varun Vasudev
– Software engineer at Hortonworks, working on YARN
– Worked on image and web search at Yahoo!
Page 2
Agenda
• What we have learned from our experience working with YARN users
• Best practices for
– Administrators
– Application Developers
Page 3
For Administrators
Page 4
Sub-Agenda
• Overview of YARN configuration
• ResourceManager
• Schedulers
• NodeManagers
• Others
– Log aggregation
– Metrics
Page 5
Overview of YARN configuration
• Almost everything YARN related in yarn-site.xml
• Granular – individual variables documented
• Nearly 150 configuration properties
– Required: a very small set – hostnames, etc.
– Common: client and server
– Advanced: RPC retries, etc.
– yarn.resourcemanager.*, yarn.nodemanager.* – usually server configs
– Admins can mark them 'final' to make clear to users that they cannot be overridden
– yarn.client.* – client configs
• Security, ResourceManager, NodeManager, TimelineServer, Scheduler –
all in one file
• Topology scripts must be present on the RM, NMs, and all nodes
– Known issue: the MR AM has to read the same script. Work is in progress to send it from the RM to AMs
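For illustration, a minimal yarn-site.xml fragment in the standard Hadoop configuration format, with a server-side property marked 'final' so clients cannot override it (the hostname and log path are placeholders):

  <configuration>
    <!-- Required: where the ResourceManager runs (placeholder hostname) -->
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>rm.example.com</value>
      <!-- admins can mark server-side configs final -->
      <final>true</final>
    </property>
    <!-- Common: where NodeManagers write container logs -->
    <property>
      <name>yarn.nodemanager.log-dirs</name>
      <value>/var/log/hadoop-yarn/containers</value>
    </property>
  </configuration>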
Page 6
ResourceManager
• Hardware requirements
– The ResourceManager needs CPU
– Doesn't require as much memory as the JobTracker did
– 4 to 8 GB should be fine
• JobHistoryServer
– Needs memory, at least 8 GB
Page 7
Enable RM HA
• Enable RM HA for high availability
• Only supported using ZooKeeper
– Leader election used
– Fencing support
• Automatic failover enabled by default
– Uses ZooKeeper again
– Embedded failover controller – no need to explicitly start a separate ZKFC process
• You can start multiple ResourceManagers
• Specify rm-ids using yarn.resourcemanager.ha.rm-ids
– e.g. yarn.resourcemanager.ha.rm-ids = rm1,rm2
• Associate hostnames with rm-ids using
yarn.resourcemanager.hostname.rm1,
yarn.resourcemanager.hostname.rm2
– No need to change any other configs – scheduler, resource-tracker addresses are
automatically taken care of
• Web UIs automatically get redirected to the active RM
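A sketch of the HA-related entries in yarn-site.xml, assuming two ResourceManagers on the placeholder hosts rm1.example.com and rm2.example.com and a three-node ZooKeeper quorum:

  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>rm1.example.com</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>rm2.example.com</value>
  </property>
  <property>
    <!-- ZooKeeper quorum used for leader election and RM state storage -->
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>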
Page 8
YARN schedulers
• Two main schedulers
– Capacity
– Fair
• The Capacity Scheduler lets you set up queues to split resources – useful for multi-tenant clusters where you want to guarantee resources
• Fair Scheduler allows you to split resources ‘fairly’ across applications
• Both have admin files which can be used to dynamically change the
setup
• If you have enabled HA, queue configuration files are on local disk
– Make sure queue files are consistent across nodes
– Feature to centralize configs in progress
Page 9
Capacity Scheduler
[Diagram: three queues (queue-1, queue-2, queue-3), each running apps, with guaranteed resources of 50%, 30%, and 20% of the cluster]
Page 10
YARN Capacity scheduler
• Configuration in capacity-scheduler.xml
• Take some time to setup your queues!
• Queues have per-queue ACLs to restrict queue access
– Access can be dynamically changed
• Elasticity can be limited on a per-queue basis – use
yarn.scheduler.capacity.<queue-path>.maximum-capacity
• Use yarn.scheduler.capacity.<queue-path>.state to drain queues
– ‘Decommissioning’ a queue
• yarn rmadmin -refreshQueues to make runtime changes
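A minimal capacity-scheduler.xml sketch with two hypothetical queues under root, showing guaranteed capacity, an elasticity cap, and queue state:

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>prod,dev</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.capacity</name>
    <value>30</value>
  </property>
  <property>
    <!-- Limit elasticity: dev can never grow beyond 50% of the cluster -->
    <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
    <value>50</value>
  </property>
  <property>
    <!-- Set to STOPPED to drain ('decommission') the queue -->
    <name>yarn.scheduler.capacity.root.dev.state</name>
    <value>RUNNING</value>
  </property>

After editing the file, apply the changes at runtime with: yarn rmadmin -refreshQueues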
Page 11
YARN Fair Scheduler
• Apps get an equal share of resources, on average, over time
• No worry about starvation
• Support for queues – meant to be used so that you can prevent users
from flooding the system with apps
• Has support for fairness policy which can be modified at runtime
• Good if you have lots of small jobs
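A sketch of a Fair Scheduler allocation file (the file pointed to by yarn.scheduler.fair.allocation.file; queue names and values here are hypothetical):

  <?xml version="1.0"?>
  <allocations>
    <queue name="analytics">
      <weight>2.0</weight>
      <minResources>4096 mb, 4 vcores</minResources>
    </queue>
    <queue name="adhoc">
      <weight>1.0</weight>
      <!-- cap concurrently running apps to keep users from flooding the system -->
      <maxRunningApps>20</maxRunningApps>
    </queue>
  </allocations>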
Page 12
Size your containers
• Memory and cores – minimum and maximum allocation, affects
containers per node
• yarn.scheduler.*-allocation-*
• Defaults are 1 GB minimum, 8 GB maximum, 1 vcore minimum, and 32 vcores maximum
• CPU scheduling needs a bit more stabilization
– Historically, CPU requirements were translated into memory calculations
• Similarly for disk scheduling
– translate disk limits into memory/CPU
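The corresponding yarn-site.xml properties, shown here with the default values mentioned above:

  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>32</value>
  </property>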
[Chart: number of containers per node as a function of the memory available to the NodeManager, from 4 to 64 GB]
Page 13
NodeManagers
• Set resource-memory – variable is yarn.nodemanager.resource.memory-
mb
– Sets how much memory YARN can use for containers
– Default is 8GB
• Set up a health-checker script!
– Check disk
– Check network
– Check any external resources required for job completion
– Test it on your OS
– Weed out bad nodes automatically!
• Figure out if the physical and virtual memory monitors make sense;
both are enabled by default.
– The default virtual-to-physical memory ratio is 2.1
• Multiple disks for containers on NodeManagers
– HDFS accesses them too
– If you're bottlenecked on disks, separate them; we haven't seen this in the wild though
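A sketch of the NodeManager settings discussed above (the memory value, script path, and local dirs are placeholders for illustration):

  <property>
    <!-- memory YARN may hand out to containers on this node -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>40960</value>
  </property>
  <property>
    <name>yarn.nodemanager.health-checker.script.path</name>
    <value>/etc/hadoop/conf/health_check.sh</value>
  </property>
  <property>
    <!-- virtual memory monitor, on by default -->
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>
  <property>
    <!-- spread container working dirs across multiple disks -->
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data1/yarn/local,/data2/yarn/local</value>
  </property>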
Page 14
YARN log aggregation
• Log aggregation can be enabled using yarn.log-aggregation-enable.
• Control how long logs are kept by setting the retention/purging parameters
• App logs can be obtained using “yarn logs” command
• Creates lots of small files, can affect HDFS performance
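For illustration, the relevant yarn-site.xml properties (the retention value is an example):

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <!-- keep aggregated logs for 7 days -->
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>

To fetch the aggregated logs for an application: yarn logs -applicationId application_1400000000000_0001 (the application id is a placeholder).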
Page 15
YARN Metrics
• JMX – http://<rm address>:<port>/jmx, http://<nm address>:<port>/jmx
– Cluster metrics – apps running, successful, failed, etc
– Scheduler metrics – queue usage
– RPC metrics
• Web UI – http://<rm address>:<port>/cluster
– Cluster metrics
– Scheduler metrics – easier to digest, especially queue usage
– Healthy, failed nodes
• Can be emitted to Ganglia directly using the metrics sink
– Metrics configuration file
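A sketch of a hadoop-metrics2.properties fragment that emits RM and NM metrics to Ganglia, following the commented-out sample shipped with Hadoop (the Ganglia host is a placeholder):

  *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
  *.sink.ganglia.period=10
  resourcemanager.sink.ganglia.servers=gmetad.example.com:8649
  nodemanager.sink.ganglia.servers=gmetad.example.com:8649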
Page 16
For Application Developers
Page 17
Sub-Agenda
• Framework or a native Application?
• Understanding YARN Basics
• Writing a YARN Client
• Writing an ApplicationMaster
• Misc Lessons
Page 18
Framework or a native app?
• Two choices
– Write applications on top of existing frameworks – battle tested, already work, come with APIs
– Roll your own native YARN application
• Existing frameworks
– Scalable batch processing: MapReduce
– Stream processing: Storm/Samza
– Interactive processing, iterations: Tez/Spark
– SQL: Hive
– Data pipelines: Pig
– Graph processing: Giraph
– Existing app: Slider
• Apache: Your App Store
Page 19
Ease of development
• Also check out the other development and deployment tools, such as Twill and REEF (see the diagram below)
[Diagram: development complexity increases from existing frameworks, through Twill/REEF and Slider, to a native YARN application]
Page 20
Understanding YARN Components
Page 21
• ResourceManager
– Master of a cluster
• NodeManager
– Slave that takes care of one host
• ApplicationMaster
– Master of an application
• Container
– Resource abstraction, process to
complete a task
User code: Client and AM
• Client
– Client to ResourceManager
• ApplicationMaster
– ApplicationMaster to scheduler
– Allocate resources
– ApplicationMaster to NodeManagers
– Manage containers
Page 22
Client: Rule of Thumb
• Use the client libraries
– YarnClient – submit an application
– AMRMClient(Async) – negotiate resources
– NMClient(Async) – manage containers
– TimelineClient – monitor an application
Page 23
Writing Client
1. Get the application Id from RM
2. Construct ApplicationSubmissionContext
1. Shell command to run the AM
2. Environment (classpath, environment variables)
3. LocalResources (job jars, to be downloaded from HDFS)
3. Submit the request to RM
1. submitApplication
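A minimal Java sketch of these steps using YarnClient; the application name, AM command, queue, and resource sizes are placeholders, and local resources and environment setup are omitted for brevity:

  import java.util.Collections;
  import org.apache.hadoop.yarn.api.records.*;
  import org.apache.hadoop.yarn.client.api.YarnClient;
  import org.apache.hadoop.yarn.client.api.YarnClientApplication;
  import org.apache.hadoop.yarn.conf.YarnConfiguration;
  import org.apache.hadoop.yarn.util.Records;

  public class MyClient {
    public static void main(String[] args) throws Exception {
      YarnClient yarnClient = YarnClient.createYarnClient();
      yarnClient.init(new YarnConfiguration());
      yarnClient.start();

      // 1. Get an application id from the RM
      YarnClientApplication app = yarnClient.createApplication();
      ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();

      // 2. Construct the ApplicationSubmissionContext: command, env, local resources
      ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
      amContainer.setCommands(Collections.singletonList(
          "$JAVA_HOME/bin/java MyApplicationMaster 1>stdout 2>stderr"));
      appContext.setApplicationName("my-yarn-app");
      appContext.setAMContainerSpec(amContainer);
      appContext.setResource(Resource.newInstance(1024, 1));  // memory (MB), vcores for the AM
      appContext.setQueue("default");

      // 3. Submit the request to the RM
      ApplicationId appId = yarnClient.submitApplication(appContext);
      System.out.println("Submitted " + appId);
    }
  }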
Page 24
Tips for Writing Client
• Cluster Dependencies
– Try to make zero assumptions about the cluster
– Cluster location
– Cluster size
– The same applies to the ApplicationMaster
• Your application bundle should deploy everything required
using YARN’s local resources.
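A sketch of shipping a jar as a YARN local resource instead of assuming it is pre-installed on the cluster; the class name, method, and paths are hypothetical:

  import java.io.IOException;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.yarn.api.records.LocalResource;
  import org.apache.hadoop.yarn.api.records.LocalResourceType;
  import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
  import org.apache.hadoop.yarn.util.ConverterUtils;
  import org.apache.hadoop.yarn.util.Records;

  public final class LocalResources {
    // Describe a jar already copied to HDFS so YARN localizes it into the container's working dir
    public static LocalResource jarResource(FileSystem fs, Path jarOnHdfs) throws IOException {
      FileStatus status = fs.getFileStatus(jarOnHdfs);
      LocalResource res = Records.newRecord(LocalResource.class);
      res.setResource(ConverterUtils.getYarnUrlFromPath(jarOnHdfs));
      res.setSize(status.getLen());
      res.setTimestamp(status.getModificationTime());
      res.setType(LocalResourceType.FILE);
      res.setVisibility(LocalResourceVisibility.APPLICATION);
      return res;
    }
  }

  // usage: amContainer.setLocalResources(Collections.singletonMap("app.jar",
  //        LocalResources.jarResource(fs, new Path("hdfs:///apps/myapp/app.jar"))));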
Page 25
Writing ApplicationMaster
1. AM registers with RM (registerApplicationMaster)
2. Heartbeat (allocate) with the RM (asynchronously)
1. Send the request
1. Request new containers
2. Release containers
2. Receive containers and send requests to the NM to start them
1. construct ContainerLaunchContext
– commands
– env
– jars
3. Unregister with the RM (finishApplicationMaster)
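A minimal, synchronous sketch of this loop using AMRMClient and NMClient; the container command, resource size, and priority are placeholders, and in the client libraries the final call is named unregisterApplicationMaster:

  import java.util.Collections;
  import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
  import org.apache.hadoop.yarn.api.records.*;
  import org.apache.hadoop.yarn.client.api.AMRMClient;
  import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
  import org.apache.hadoop.yarn.client.api.NMClient;
  import org.apache.hadoop.yarn.conf.YarnConfiguration;
  import org.apache.hadoop.yarn.util.Records;

  public class MyApplicationMaster {
    public static void main(String[] args) throws Exception {
      YarnConfiguration conf = new YarnConfiguration();
      AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
      rmClient.init(conf);
      rmClient.start();
      NMClient nmClient = NMClient.createNMClient();
      nmClient.init(conf);
      nmClient.start();

      // 1. Register with the RM
      rmClient.registerApplicationMaster("", 0, "");

      // 2. Ask for one container: 1 GB, 1 vcore, on any node
      Priority pri = Priority.newInstance(0);
      Resource capability = Resource.newInstance(1024, 1);
      rmClient.addContainerRequest(new ContainerRequest(capability, null, null, pri));

      int launched = 0;
      while (launched < 1) {
        // Heartbeat; allocations arrive asynchronously over several calls
        AllocateResponse response = rmClient.allocate(0.1f);
        for (Container container : response.getAllocatedContainers()) {
          ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
          ctx.setCommands(Collections.singletonList("sleep 30"));  // placeholder work
          nmClient.startContainer(container, ctx);                 // ask the NM to launch it
          launched++;
        }
        Thread.sleep(1000);
      }

      // 3. Unregister with the RM
      rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "done", "");
    }
  }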
Page 26
Tips for writing ApplicationMaster
• RM assigns containers asynchronously
– Containers are likely not returned by the immediate call
– Keep heartbeating (even with empty requests) until you get the containers you requested
– ResourceRequest is incremental.
• Locality requests may not always be met
– Relaxed Locality
• AMs can fail
– They run on cluster nodes which can fail
– RM restarts AMs automatically
– Write AMs to handle failures on restarts - recovery
– Ideally, resume your previous work when the AM restarts
• Optionally talk to your containers directly through the AM
– To get progress, give work, kill it, etc
– YARN doesn't do this for you
Page 27
Using the Timeline Service
• Metadata/Metrics
• Put application-specific information
– TimelineClient
– POJO objects
• Query the information
– Get all entities of an entity type
– Get one specific entity
– Get all events of an entity type
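A sketch of publishing an entity with the TimelineClient API; the entity type, entity id, and event name are made up for illustration:

  import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
  import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
  import org.apache.hadoop.yarn.client.api.TimelineClient;
  import org.apache.hadoop.yarn.conf.YarnConfiguration;

  public class TimelineExample {
    public static void main(String[] args) throws Exception {
      TimelineClient client = TimelineClient.createTimelineClient();
      client.init(new YarnConfiguration());
      client.start();

      TimelineEntity entity = new TimelineEntity();
      entity.setEntityType("MY_APP_ATTEMPT");            // hypothetical entity type
      entity.setEntityId("attempt_001");                 // hypothetical entity id
      entity.addOtherInfo("user", System.getProperty("user.name"));

      TimelineEvent event = new TimelineEvent();
      event.setEventType("TASK_FINISHED");               // hypothetical event type
      event.setTimestamp(System.currentTimeMillis());
      entity.addEvent(event);

      client.putEntities(entity);                        // send to the Timeline Server
      client.stop();
    }
  }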
Page 28
Summary: Application Workflow
• Execution Sequence
1. Client submits an application
2. RM allocates a container to start
AM
3. AM registers with RM
4. AM requests containers from the RM
5. AM notifies NM to launch
containers
6. Application code is executed in
container
7. Client contacts RM/AM to monitor
application’s status
8. AM unregisters with RM
[Diagram: sequence of these eight steps among the Client, RM, NM, and AM]
Page 29
Misc Lessons: Taking What YARN offers
• Monitor your application
– RM
– NM
– Timeline server
Page 30
Misc Lessons: Debugging/Testing
• MiniYARNCluster
– In-JVM YARN cluster!
– Regression tests for your applications
• Unmanaged AM
– Support to run the AM outside of a YARN cluster for development and
testing
– AM logs on your console!
• Logs
– RM/NM logs
– App Log aggregation
– Accessible via CLI, web UI
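A sketch of spinning up an in-JVM cluster for a regression test; the cluster name and sizes are arbitrary:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.yarn.conf.YarnConfiguration;
  import org.apache.hadoop.yarn.server.MiniYARNCluster;

  public class MiniClusterTest {
    public static void main(String[] args) throws Exception {
      // name, # NodeManagers, # local dirs, # log dirs
      MiniYARNCluster cluster = new MiniYARNCluster("test-cluster", 1, 1, 1);
      cluster.init(new YarnConfiguration());
      cluster.start();

      // Point your YarnClient at the mini cluster's configuration
      Configuration conf = cluster.getConfig();
      // ... submit your application and assert on its behavior ...

      cluster.stop();
    }
  }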
Page 31
Thank you!
Questions?
Page 32