Matching Data Intensive
Applications and
Hardware/Software Architectures
LBNL Berkeley CA
June 19 2014
Geoffrey Fox
gcf@indiana.edu
http://www.infomall.org
School of Informatics and Computing
Digital Science Center
Indiana University Bloomington
Abstract
• There is perhaps a broad consensus as to important issues in practical parallel
computing as applied to large scale simulations; this is reflected in
supercomputer architectures, algorithms, libraries, languages, compilers and
best practice for application development. However, the same is not so true for
data intensive problems, even though commercial clouds presumably devote
more resources to data analytics than supercomputers devote to simulations.
We try to establish some principles that allow one to compare data intensive
architectures and decide which applications fit which machines and which
software.
• We use a sample of over 50 big data applications to identify characteristics of
data intensive applications and propose a big data version of the famous
Berkeley dwarfs and NAS parallel benchmarks. We consider hardware from
clouds to HPC. Our software analysis builds on the Apache software stack
(ABDS) that is well used in modern cloud computing, which we enhance with
HPC concepts to derive HPC-ABDS.
• We illustrate issues with examples including kernels like clustering and
multidimensional scaling; cyberphysical systems; databases; and variants of
image processing from beam lines, Facebook and deep learning.
http://www.kpcb.com/internet-trends
Note: the largest science dataset, ~100 petabytes, is only 0.000025 of the total
HPC-ABDS
Integrating High Performance Computing with
Apache Big Data Stack
Shantenu Jha, Judy Qiu, Andre Luckow
• HPC-ABDS
• ~120 Capabilities
• >40 Apache
• Green layers have strong HPC Integration opportunities
• Goal
• Functionality of ABDS
• Performance of HPC
Broad Layers in HPC-ABDS
• Workflow-Orchestration
• Application and Analytics: Mahout, MLlib, R…
• High level Programming
• Basic Programming model and runtime
– SPMD, Streaming, MapReduce, MPI
• Inter process communication
– Collectives, point-to-point, publish-subscribe
• In-memory databases/caches
• Object-relational mapping
• SQL and NoSQL, File management
• Data Transport
• Cluster Resource Management (Yarn, Slurm, SGE)
• File systems (HDFS, Lustre …)
• DevOps (Puppet, Chef …)
• IaaS Management from HPC to hypervisors (OpenStack)
• Cross Cutting
– Message Protocols
– Distributed Coordination
– Security & Privacy
– Monitoring
Useful Set of Analytics Architectures
• Pleasingly Parallel: including local machine learning, as in parallelism
over images with image processing applied to each image independently
- Hadoop could be used, but so could many other HTC and many-task tools
• Search: including collaborative filtering and motif finding
implemented using classic MapReduce (Hadoop)
• Map-Collective or Iterative MapReduce using Collective
Communication (clustering) – Hadoop with Harp, Spark …..
• Map-Communication or Iterative Giraph: MapReduce with
point-to-point communication (most graph algorithms such as
maximum clique, connected components, finding diameter,
community detection)
– Vary in difficulty of finding partitioning (classic parallel load balancing)
• Shared memory: thread-based (event driven) graph algorithms
(shortest path, Betweenness centrality)
Ideas like workflow are “orthogonal” to this
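A minimal sketch of the Pleasingly Parallel pattern in Java (the Image type and analyze kernel are hypothetical stand-ins for a real image format and local-analytics routine; Hadoop or HTC tools play the same role at cluster scale):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the Pleasingly Parallel pattern: apply an independent
// analysis to each image with no communication between tasks.
public class PleasinglyParallel {
    record Image(String id, double[] pixels) {}
    record Features(String id, double meanIntensity) {}

    static Features analyze(Image img) {          // purely local work per image
        double sum = 0;
        for (double p : img.pixels()) sum += p;
        return new Features(img.id(), sum / img.pixels().length);
    }

    static List<Features> run(List<Image> images) {
        // parallelStream gives map-only parallelism over the image collection
        return images.parallelStream()
                     .map(PleasinglyParallel::analyze)
                     .collect(Collectors.toList());
    }
}
```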
Getting High Performance on Data Analytics
(e.g. Mahout, R…)
• On the systems side, we have two principles:
– The Apache Big Data Stack with ~120 projects has important broad
functionality with a vital large support organization
– HPC including MPI has striking success in delivering high performance,
however with a fragile sustainability model
• There are key systems abstractions which are levels in HPC-ABDS software stack
where Apache approach needs careful integration with HPC
– Resource management
– Storage
– Programming model -- horizontal scaling parallelism
– Collective and Point-to-Point communication
– Support of iteration
– Data interface (not just key-value)
• In application areas, we define application abstractions to support:
– Graphs/network
– Geospatial
– Genes
– Images, etc.
HPC-ABDS Hourglass
HPC-ABDS System (Middleware): 120 Software Projects
System Abstractions/standards (the narrow waist):
• Data format and Storage
• HPC Yarn for Resource management
• Horizontally scalable parallel programming model
• Collective and Point-to-Point communication
• Support of iteration (in-memory databases)
Application Abstractions/standards: Graphs, Networks, Images, Geospatial ….
High performance Applications: SPIDAL (Scalable Parallel
Interoperable Data Analytics Library) or High performance Mahout, R,
Matlab…
NIST Big Data Use Cases
Chaitan Baru, Bob Marcus, Wo Chang
co-leaders
Use Case Template
• 26 fields completed for 51 areas
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1
51 Detailed Use Cases: Contributed July-September 2013
Covers goals, data features such as 3 V’s, software, hardware
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation(4): National Archives and Records Administration, Census Bureau
• Commercial(8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search,
Digital Materials, Cargo shipping (as in UPS)
• Defense(3): Sensors, Image surveillance, Situation Assessment
• Healthcare and Life Sciences(10): Medical records, Graph and Probabilistic analysis,
Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
• Deep Learning and Social Media(6): Driving Car, Geolocate images/cameras, Twitter, Crowd
Sourcing, Network Science, NIST benchmark datasets
• The Ecosystem for Research(4): Metadata, Collaboration, Language Translation, Light source
experiments
• Astronomy and Physics(5): Sky Surveys including comparison to simulation, Large Hadron
Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science(10): Radar Scattering in Atmosphere, Earthquake,
Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate
simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry
(microbes to watersheds), AmeriFlux and FLUXNET gas sensors
• Energy(1): Smart grid
26 Features for each use case
Biased to science
Part of Property Summary Table
10 Suggested Generic Use Cases
1) Multiple users performing interactive queries and updates on a database
with basic availability and eventual consistency (BASE)
2) Perform real time analytics on data source streams and notify users when
specified events occur (see the sketch after this list)
3) Move data from external data sources into a highly horizontally scalable
data store, transform it using highly horizontally scalable processing (e.g.
Map-Reduce), and return it to the horizontally scalable data store (ELT)
4) Perform batch analytics on the data in a highly horizontally scalable data
store using highly horizontally scalable processing (e.g. MapReduce) with a
user-friendly interface (e.g. SQL-like)
5) Perform interactive analytics on data in analytics-optimized database
6) Visualize data extracted from a horizontally scalable Big Data store
7) Move data from a highly horizontally scalable data store into a traditional
Enterprise Data Warehouse
8) Extract, process, and move data from data stores to archives
9) Combine data from Cloud databases and on premise data stores for
analytics, data mining, and/or machine learning
10) Orchestrate multiple sequential and parallel data transformations and/or
analytic processing using a workflow manager
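As an illustration of generic use case 2, a minimal sketch of real-time stream filtering with notification (plain Java streams; the threshold event and console notifier are hypothetical placeholders for a real event definition and notification channel):

```java
import java.util.function.DoubleConsumer;
import java.util.function.DoublePredicate;
import java.util.stream.DoubleStream;

// Sketch of generic use case 2: scan a data-source stream and notify
// users when a specified event occurs.
public class StreamNotifier {
    static void watch(DoubleStream source, DoublePredicate event, DoubleConsumer notify) {
        source.filter(event).forEach(notify);   // fire the notifier on each matching event
    }

    public static void main(String[] args) {
        watch(DoubleStream.of(0.2, 0.4, 3.7, 0.1, 5.2),   // stand-in for a live stream
              r -> r > 3.0,                                // the "specified event"
              r -> System.out.println("ALERT: reading " + r));
    }
}
```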
10 Security & Privacy Use Cases
• Consumer Digital Media Usage
• Nielsen Homescan
• Web Traffic Analytics
• Health Information Exchange
• Personal Genetic Privacy
• Pharma Clinical Trial Data Sharing
• Cyber-security
• Aviation Industry
• Military - Unmanned Vehicle sensor data
• Education - “Common Core” Student Performance Reporting
• Need to integrate 10 “generic” and 10 “security & privacy” with
51 “full use cases”
Big Data Patterns – the Ogres
Would like to capture “essence of
these use cases”
“small” kernels, mini-apps
Or classify applications into patterns
Do this from an HPC background, not a database viewpoint,
e.g. focus on cases with detailed analytics
Section 5 of my class
https://bigdatacoursespring2014.appspot.com/preview classifies
51 use cases with ogre facets
What are “mini-Applications”
• Use for benchmarks of computers and software (is my
parallel compiler any good?)
• In parallel computing, this is well established
– Linpack for measuring performance to rank machines in Top500
(changing?)
– NAS Parallel Benchmarks (originally a pencil and paper
specification to allow optimal implementations; then MPI library)
– Other specialized Benchmark sets keep changing and used to
guide procurements
• The last 2 NSF hardware solicitations had NO preset benchmarks –
perhaps because there is no agreement on key applications for clouds
and data intensive applications
– Berkeley dwarfs capture different structures that any approach
to parallel computing must address
– Templates used to capture parallel computing patterns
• Also database benchmarks like TPC
HPC Benchmark Classics
• Linpack or HPL: Parallel LU factorization for solution of
linear equations
• NPB version 1: Mainly classic HPC solver kernels
– MG: Multigrid
– CG: Conjugate Gradient
– FT: Fast Fourier Transform
– IS: Integer sort
– EP: Embarrassingly Parallel
– BT: Block Tridiagonal
– SP: Scalar Pentadiagonal
– LU: Lower-Upper symmetric Gauss-Seidel
13 Berkeley Dwarfs
• Dense Linear Algebra
• Sparse Linear Algebra
• Spectral Methods
• N-Body Methods
• Structured Grids
• Unstructured Grids
• MapReduce
• Combinational Logic
• Graph Traversal
• Dynamic Programming
• Backtrack and Branch-and-Bound
• Graphical Models
• Finite State Machines
First 6 of these correspond to
Colella’s original.
Monte Carlo dropped.
N-body methods are a subset of
Particle in Colella.
Note it is a little inconsistent in that
MapReduce is a programming
model and spectral method is a
numerical method.
Need multiple facets!
51 Use Cases: What is Parallelism Over?
• People: either the users (but see below) or subjects of application and often both
• Decision makers like researchers or doctors (users of application)
• Items such as Images, EMR, Sequences below; observations or contents of online
store
– Images or “Electronic Information nuggets”
– EMR: Electronic Medical Records (often similar to people parallelism)
– Protein or Gene Sequences;
– Material properties, Manufactured Object specifications, etc., in custom dataset
– Modelled entities like vehicles and people
• Sensors – Internet of Things
• Events such as detected anomalies in telescope or credit card data or atmosphere
• (Complex) Nodes in RDF Graph
• Simple nodes as in a learning network
• Tweets, Blogs, Documents, Web Pages, etc.
– And characters/words in them
• Files or data to be backed up, moved or assigned metadata
• Particles/cells/mesh points as in parallel simulations
51 Use Cases: Low-Level (Run-time)
Computational Types
• PP(26): Pleasingly Parallel or Map Only
• MR(18 +7 MRStat): Classic MapReduce
• MRStat(7): Simple version of MR where the key computations
are simple reductions, as occur in statistical averages
• MRIter(23): Iterative MapReduce
• Graph(9): complex graph data structure needed in analysis
• Fusion(11): Integrate diverse data to aid
discovery/decision making; could involve sophisticated
algorithms or could just be a portal
• Streaming(41): some data comes in incrementally and is
processed this way
(Counts in parentheses are out of 51 use cases.)
51 Use Cases: Higher-Level
Computational Types or Features
• Classification(30): divide data into categories
• S/Q/Index(12): Search and Query
• CF(4): Collaborative Filtering
• Local ML(36): Local Machine Learning
• Global ML(23): Deep Learning, Clustering, LDA, PLSI, MDS, Large Scale
Optimizations as in Variational Bayes, Lifted Belief Propagation, Stochastic
Gradient Descent, L-BFGS, Levenberg-Marquardt (sometimes called EGO or
Exascale Global Optimization)
• Workflow: (Left out of analysis but very common)
• GIS(16): Geotagged data and often displayed in ESRI, Microsoft Virtual
Earth, Google Earth, GeoServer etc.
• HPC(5): Classic large-scale simulation of cosmos, materials, etc. generates
big data
• Agent(2): Simulations of models of data-defined macroscopic entities
represented as agents
(These categories are not independent.)
Global Machine Learning aka EGO –
Exascale Global Optimization
• Typically maximum likelihood or χ² with a sum over the N
data items – documents, sequences, items to be sold, images
etc. and often links (point-pairs). Usually it is a sum of positive
numbers as in least squares (see the sketch after this list)
• Covering clustering/community detection, mixture models,
topic determination, Multidimensional scaling, (Deep)
Learning Networks
• PageRank is “just” parallel linear algebra
• Note many Mahout algorithms are sequential – partly because
MapReduce is limited and partly because the parallelism is unclear
– MLLib (Spark based) better
• SVM and Hidden Markov Models do not use large scale
parallelization in practice?
• Detailed papers on particular parallel graph algorithms
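A minimal sketch of the objective shape above — stochastic gradient descent on a least-squares sum over N data items (the linear model, fixed learning rate, and random sampling are illustrative assumptions, not any specific EGO application; the deep-learning use case later uses small minibatches of 100s of images per step in the same way):

```java
import java.util.Random;

// Sketch of SGD minimizing a least-squares objective
//   L(w) = sum_i (y_i - w . x_i)^2  over N data items.
public class SgdSketch {
    static double[] sgd(double[][] x, double[] y, double lr, int steps) {
        int n = x.length, d = x[0].length;
        double[] w = new double[d];
        Random rnd = new Random(42);
        for (int s = 0; s < steps; s++) {
            int i = rnd.nextInt(n);                 // one random data item per step
            double pred = 0;
            for (int j = 0; j < d; j++) pred += w[j] * x[i][j];
            double err = pred - y[i];
            for (int j = 0; j < d; j++)             // gradient of the squared error
                w[j] -= lr * 2 * err * x[i][j];
        }
        return w;
    }
}
```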
Image based Applications
http://www.kpcb.com/internet-trends
17:Pathology Imaging/ Digital Pathology I
• Application: Digital pathology imaging is an emerging field where examination of
high resolution images of tissue specimens enables novel and more effective ways
for disease diagnosis. Pathology image analysis segments massive (millions per
image) spatial objects such as nuclei and blood vessels, represented with their
boundaries, along with many extracted image features from these objects. The
derived information is used for many complex queries and analytics to support
biomedical research and clinical diagnosis.
Healthcare
Life Sciences
MR, MRIter, PP, Classification; Parallelism over Images; Streaming
17:Pathology Imaging/ Digital Pathology II
• Current Approach: 1GB raw image data + 1.5GB analytical results per 2D image. MPI for
image analysis; MapReduce + Hive with spatial extension on supercomputers and clouds.
GPUs are used effectively. The figure below shows the architecture of Hadoop-GIS, a spatial data
warehousing system over MapReduce to support spatial analytics for analytical pathology
imaging.
Healthcare
Life Sciences
• Futures: Recently, 3D pathology imaging
is made possible through 3D laser
technologies or serially sectioning
hundreds of tissue sections onto slides
and scanning them into digital images.
Segmenting 3D microanatomic objects
from registered serial images could
produce tens of millions of 3D objects
from a single image. This provides a
deep “map” of human tissues for next
generation diagnosis. 1TB raw image
data + 1TB analytical results per 3D
image and 1PB data per moderate-sized
hospital per year.
Architecture of Hadoop-GIS, a spatial data warehousing system over MapReduce
to support spatial analytics for analytical pathology imaging
Parallelism over images or over pixels within image (especially for GPU)
18: Computational Bioimaging
• Application: Data delivered from bioimaging is increasingly automated, higher
resolution, and multi-modal. This has created a data analysis bottleneck that, if
resolved, can advance the biosciences discovery through Big Data techniques.
• Current Approach: The current piecemeal analysis approach does not scale to
the situation where a single scan on emerging machines is 32 TB and medical
diagnostic imaging is annually around 70 PB even excluding cardiology. One needs
a web-based one-stop-shop for high performance, high throughput image
processing for producers and consumers of models built on bio-imaging data.
• Futures: Goal is to solve that bottleneck with extreme scale computing with
community-focused science gateways to support the application of massive data
analysis toward massive imaging data sets. Workflow components include data
acquisition, storage, enhancement, minimizing noise, segmentation of regions of
interest, crowd-based selection and extraction of features, and object
classification, organization, and search. Use ImageJ, OMERO, VolRover, advanced
segmentation and feature detection software.
Healthcare
Life Sciences
Largely Local Machine Learning and Pleasingly Parallel
26: Large-scale Deep Learning
• Application: Large models (e.g., neural networks with more neurons and connections) combined with
large datasets are increasingly the top performers in benchmark tasks for vision, speech, and Natural
Language Processing. One needs to train a deep neural network from a large (>>1TB) corpus of data
(typically imagery, video, audio, or text). Such training procedures often require customization of the
neural network architecture, learning criteria, and dataset pre-processing. In addition to the
computational expense demanded by the learning algorithms, the need for rapid prototyping and ease
of development is extremely high.
• Current Approach: The largest applications so far are to image recognition and scientific studies of
unsupervised learning with 10 million images and up to 11 billion parameters on a 64 GPU HPC
Infiniband cluster. Both supervised (using existing classified images) and unsupervised applications are used.
Deep Learning
Social Networking
• Futures: Large datasets of 100TB or more may be
necessary in order to exploit the representational power
of the larger models. Training a self-driving car could take
100 million images at megapixel resolution. Deep
Learning shares many characteristics with the broader
field of machine learning. The paramount requirements
are high computational throughput for mostly dense
linear algebra operations, and extremely high productivity
for researcher exploration. One needs integration of high
performance libraries with high level (python) prototyping
environments
[Figure: deep network classifying input images (IN → Classified → OUT)]
MRIter, EGO, Classification; Parallelism over Nodes in NN and over Data being classified
Global Machine Learning, but Stochastic Gradient Descent only uses a small fraction of the total
images (100s) at each iteration, so parallelism over images is not clearly useful
27: Organizing large-scale, unstructured collections
of consumer photos I
• Application: Produce 3D reconstructions of scenes using collections of
millions to billions of consumer images, where neither the scene structure
nor the camera positions are known a priori. Use resulting 3D models to
allow efficient browsing of large-scale photo collections by geographic
position. Geolocate new images by matching to 3D models. Perform object
recognition on each image. 3D reconstruction posed as a robust non-linear
least squares optimization problem where observed relations between
images are constraints and unknowns are 6-D camera pose of each image
and 3D position of each point in the scene.
• Current Approach: Hadoop cluster with 480 cores processing data of initial
applications. Note over 500 billion images (too small) on Facebook and
over 5 billion on Flickr, with over 1800 million images (was 500 million a year ago)
added to social media sites each day.
Deep Learning
Social Networking
Global Machine Learning after Initial Local steps
27: Organizing large-scale, unstructured collections
of consumer photos II
• Futures: Need many analytics, including feature extraction, feature
matching, and large-scale probabilistic inference, which appear in many or
most computer vision and image processing problems, including
recognition, stereo resolution, and image denoising. Need to visualize
large-scale 3D reconstructions, and navigate large-scale collections of
images that have been aligned to maps.
Deep Learning
Social Networking
Global Machine Learning after Initial Local ML pleasingly parallel steps
36: Catalina Real-Time Transient Survey (CRTS): a
digital, panoramic, synoptic sky survey I
• Application: The survey explores the variable universe in the visible light regime, on
time scales ranging from minutes to years, by searching for variable and transient
sources. It discovers a broad variety of astrophysical objects and phenomena, including
various types of cosmic explosions (e.g., Supernovae), variable stars, phenomena
associated with accretion to massive black holes (active galactic nuclei) and their
relativistic jets, high proper motion stars, etc. The data are collected from 3 telescopes
(2 in Arizona and 1 in Australia), with additional ones expected in the near future (in
Chile).
• Current Approach: The survey generates up to ~ 0.1 TB on a clear night with a total of
~100 TB in current data holdings. The data are preprocessed at the telescope, and
transferred to Univ. of Arizona and Caltech, for further analysis, distribution, and
archiving. The data are processed in real time, and detected transient events are
published electronically through a variety of dissemination mechanisms, with no
proprietary withholding period (CRTS has a completely open data policy). Further data
analysis includes classification of the detected transient events, additional observations
using other telescopes, scientific interpretation, and publishing. In this process, it
makes a heavy use of the archival data (several PB’s) from a wide variety of
geographically distributed resources connected through the Virtual Observatory (VO)
framework.
Astronomy & Physics
PP, ML, Classification
Parallelism over Images and Events: Celestial events identified in Telescope Images
Streaming, workflow
36: Catalina Real-Time Transient Survey (CRTS):
a digital, panoramic, synoptic sky survey II
• Futures: CRTS is a scientific and methodological testbed and precursor of larger surveys to
come, notably the Large Synoptic Survey Telescope (LSST), expected to operate in the 2020s
and selected as the highest-priority ground-based instrument in the 2010 Astronomy and
Astrophysics Decadal Survey. LSST will gather about 30 TB per night.
Astronomy & Physics
43: Radar Data Analysis for CReSIS
Remote Sensing of Ice Sheets IV
• Typical CReSIS echogram with Detected Boundaries. The upper (green) boundary is
between air and ice layer while the lower (red) boundary is between ice and terrain
Earth, Environmental
and Polar Science
PP, GIS; Parallelism over Radar Images; Streaming
44: UAVSAR Data Processing, Data
Product Delivery, and Data Services II
• Combined unwrapped coseismic interferograms for flight lines 26501, 26505,
and 08508 for the October 2009 – April 2010 time period. End points where
slip can be seen on the Imperial, Superstition Hills, and Elmore Ranch faults
are noted. GPS stations are marked by dots and are labeled.
Earth, Environmental
and Polar Science
PP, GIS; Parallelism over Radar Images; Streaming
Other Facets of the Ogres
Application Class Facet of Ogres
• Classification (30) divide data into categories
• Search Index and query (12)
• Maximum Likelihood or χ² minimizations
• Expectation Maximization (often Steepest descent)
• Local (pleasingly parallel) Machine Learning (36) contrasted to
• (Exascale) Global Optimization (23) (such as Learning Networks,
Variational Bayes and Gibbs Sampling)
• Do they Use Agents (2) as in epidemiology (swarm approaches)?
The Higher-Level Computational Types or Features slide earlier also has
CF(4): Collaborative Filtering in the Core Analytics Facet
and two categories in the Data Source and Style Facet:
GIS(16): Geotagged data and often displayed in ESRI, Microsoft Virtual
Earth, Google Earth, GeoServer etc.
HPC(5): Classic large-scale simulation of cosmos, materials, etc.
generates big data
Problem Architecture Facet of Ogres (Meta or MacroPattern)
i. Pleasingly Parallel – as in BLAST, protein docking, and some (bio-)imagery;
includes Local Analytics or Machine Learning – ML or filtering that is
pleasingly parallel, as in bio-imagery and radar images
(pleasingly parallel but with sophisticated local analytics)
ii. Classic MapReduce for Search and Query
iii. Global Analytics or Machine Learning requiring iterative
programming models
iv. Problem set up as a graph as opposed to vector, grid
v. SPMD (Single Program Multiple Data)
vi. Bulk Synchronous Processing: well-defined compute-
communication phases
vii. Fusion: Knowledge discovery often involves fusion of multiple
methods.
viii. Workflow (often used in fusion)
Note problem and machine architectures are related
Slight expansion of an earlier slide on:
Major Analytics Architectures in Use Cases
Pleasingly parallel
Search (MapReduce)
Map-Collective
Map-Communication as in MPI
Shared Memory
Low-Level (Run-time) Computational Types
used to label 51 use cases
PP(26): Pleasingly Parallel
MR(18 +7 MRStat): Classic MapReduce
MRStat(7)
MRIter(23)
Graph(9)
Fusion(11)
Streaming(41) – in the Data Source and Style facet
4 Forms of MapReduce (Users and Abusers)
(a) Map Only (Pleasingly Parallel) – input → map → output:
BLAST analysis, Local Machine Learning, pleasingly parallel applications
(b) Classic MapReduce – input → map → reduce:
High Energy Physics (HEP) histograms, distributed search
(c) Iterative MapReduce or Map-Collective – iterations of input → map → reduce:
Expectation maximization, clustering (e.g. K-means), linear algebra, PageRank
(d) Point to Point – map tasks exchanging messages Pij:
Classic MPI, PDE solvers and particle dynamics
The domain of MapReduce and its iterative extensions covers (a)–(c); MPI and Giraph cover (c)–(d).
All of them are Map-Communication?
A minimal sketch of form (c) follows.
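A single-process K-means sketch of form (c), showing the iteration structure that Hadoop+Harp or Spark would distribute (plain Java, not the Hadoop/Harp API; at scale the per-cluster sums and counts become an allreduce and the new centroids a broadcast):

```java
// Sketch of Iterative MapReduce / Map-Collective via K-means.
// Each iteration: the "map" assigns every point to its nearest centroid;
// the "collective" combines per-cluster sums and counts into new centroids.
public class KMeansSketch {
    static double[][] kmeans(double[][] points, double[][] centroids, int iters) {
        int k = centroids.length, d = centroids[0].length;
        for (int it = 0; it < iters; it++) {
            double[][] sum = new double[k][d];
            int[] count = new int[k];
            for (double[] p : points) {                  // map phase
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double dist = 0;
                    for (int j = 0; j < d; j++) {
                        double diff = p[j] - centroids[c][j];
                        dist += diff * diff;
                    }
                    if (dist < bestDist) { bestDist = dist; best = c; }
                }
                count[best]++;                           // local partial results
                for (int j = 0; j < d; j++) sum[best][j] += p[j];
            }
            for (int c = 0; c < k; c++)                  // collective/reduce phase
                if (count[c] > 0)
                    for (int j = 0; j < d; j++) centroids[c][j] = sum[c][j] / count[c];
        }
        return centroids;
    }
}
```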
One Facet of Ogres has Computational Features
a) Flops per byte;
b) Communication Interconnect requirements;
c) Is application (graph) constant or dynamic?
d) Most applications consist of a set of interconnected
entities; is this regular as a set of pixels or is it a
complicated irregular graph?
e) Is communication BSP or Asynchronous? In latter case
shared memory may be attractive;
f) Are algorithms Iterative or not?
g) Data Abstraction: key-value, pixel, graph, vector
– Are data points in metric or non-metric spaces?
h) Core libraries needed: matrix-matrix/vector algebra,
conjugate gradient, reduction, broadcast
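As a concrete instance of the core libraries in (h), a minimal sequential conjugate gradient solver for a dense symmetric positive-definite system (a sketch of the kernel only; parallel versions distribute the matrix-vector product and reduce the dot products):

```java
// Sketch of conjugate gradient for a symmetric positive-definite
// system A x = b — one of the core library kernels listed in (h).
public class CgSketch {
    static double[] solve(double[][] a, double[] b, int maxIter, double tol) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone();                 // residual r = b - A x, with x = 0
        double[] p = r.clone();
        double rsOld = dot(r, r);
        for (int it = 0; it < maxIter && Math.sqrt(rsOld) > tol; it++) {
            double[] ap = mul(a, p);
            double alpha = rsOld / dot(p, ap);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * ap[i]; }
            double rsNew = dot(r, r);
            for (int i = 0; i < n; i++) p[i] = r[i] + (rsNew / rsOld) * p[i];
            rsOld = rsNew;
        }
        return x;
    }
    static double dot(double[] u, double[] v) {
        double s = 0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }
    static double[] mul(double[][] a, double[] v) {      // dense matrix-vector product
        double[] w = new double[v.length];
        for (int i = 0; i < v.length; i++) w[i] = dot(a[i], v);
        return w;
    }
}
```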
Data Source and Style Facet of Ogres
• (i) SQL
• (ii) NOSQL based
• (iii) Other Enterprise data systems (10 examples from Bob Marcus)
• (iv) Set of Files (as managed in iRODS)
• (v) Internet of Things
• (vi) Streaming and
• (vii) HPC simulations
• (viii) Involve GIS (Geographical Information Systems)
• Before data gets to compute system, there is often an initial data gathering
phase which is characterized by a block size and timing. Block size varies
from month (Remote Sensing, Seismic) to day (genomic) to seconds or
lower (Real time control, streaming)
• There are storage/compute system styles: Shared, Dedicated, Permanent,
Transient
• Other characteristics are needed for permanent auxiliary/comparison
datasets and these could be interdisciplinary, implying nontrivial data
movement/replication
Analytics Facet (kernels) of the
Ogres
Core Analytics Facet of Ogres (microPattern) I
• Map-Only
• Pleasingly parallel - Local Machine Learning
• MapReduce: Search/Query
• Summarizing statistics as in LHC Data analysis (histograms)
• Recommender Systems (Collaborative Filtering)
• Linear Classifiers (Bayes, Random Forests)
• Global Analytics
• Nonlinear Solvers (structure depends on objective function)
– Stochastic Gradient Descent SGD
– (L-)BFGS approximation to Newton’s Method
– Levenberg-Marquardt solver
• Map-Collective I (need to improve/extend Mahout, MLlib)
• Outlier Detection, Clustering (many methods),
• Mixture Models, LDA (Latent Dirichlet Allocation), PLSI (Probabilistic
Latent Semantic Indexing)
Core Analytics Facet of Ogres (microPattern) II
• Map-Collective II
• Use matrix-matrix and matrix-vector operations, solvers (conjugate gradient)
• SVM and Logistic Regression
• PageRank (find the leading eigenvector of a sparse matrix; sketched after this list)
• SVD (Singular Value Decomposition)
• MDS (Multidimensional Scaling)
• Learning Neural Networks (Deep Learning)
• Hidden Markov Models
• Map-Communication
• Graph Structure (Communities, subgraphs/motifs, diameter,
maximal cliques, connected components)
• Network Dynamics - Graph simulation Algorithms (epidemiology)
• Asynchronous Shared Memory
• Graph Structure (Betweenness centrality, shortest path)
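A minimal sketch of PageRank as power iteration on the damped link matrix — the "leading eigenvector of a sparse matrix" computation named above (the adjacency-list input format is an assumption; dangling-node mass is ignored for brevity):

```java
import java.util.Arrays;

// Sketch of PageRank as "parallel linear algebra": power iteration
// to find the leading eigenvector of the damped link matrix.
// outLinks[i] lists the pages that page i links to.
public class PageRankSketch {
    static double[] pagerank(int[][] outLinks, double damping, int iters) {
        int n = outLinks.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n);
        for (int it = 0; it < iters; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1 - damping) / n);        // teleport term
            for (int i = 0; i < n; i++) {
                if (outLinks[i].length == 0) continue;   // dangling mass ignored in this sketch
                double share = damping * rank[i] / outLinks[i].length;
                for (int j : outLinks[i]) next[j] += share;  // sparse matrix-vector product
            }
            rank = next;
        }
        return rank;
    }
}
```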
Parallel Global Machine Learning
Examples
Clustering and MDS Large Scale O(N²) GML
WDA SMACOF MDS (Multidimensional
Scaling) using Harp on Big Red 2
Parallel Efficiency: on 100-300K sequences
Conjugate Gradient (dominant time) and Matrix Multiplication
[Figure: parallel efficiency (0–1.2) vs. number of nodes (up to ~128) on Big Red 2, for 100K, 200K, and 300K points]
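WDA-SMACOF minimizes a weighted stress over all point pairs; a minimal sketch of that objective, assuming precomputed dissimilarities δij and weights wij (the real solver minimizes it by majorization plus the conjugate gradient step that dominates the time above):

```java
// Sketch of the weighted MDS stress that (WDA-)SMACOF minimizes:
//   sigma(X) = sum_{i<j} w_ij * (d_ij(X) - delta_ij)^2
// where d_ij(X) is the Euclidean distance between embedded points i and j.
public class MdsStress {
    static double stress(double[][] x, double[][] delta, double[][] w) {
        int n = x.length;
        double s = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {
                double d = 0;
                for (int k = 0; k < x[i].length; k++) {
                    double diff = x[i][k] - x[j][k];
                    d += diff * diff;
                }
                double e = Math.sqrt(d) - delta[i][j];   // distance error for pair (i, j)
                s += w[i][j] * e * e;
            }
        return s;
    }
}
```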
Features of Harp Hadoop Plugin
• Hadoop Plugin (on Hadoop 1.2.1 and Hadoop
2.2.0)
• Hierarchical data abstraction on arrays, key-values
and graphs for easy programming expressiveness.
• Collective communication model to support
various communication operations on the data
abstractions
• Caching with buffer management for memory
allocation required from computation and
communication
• BSP style parallelism
• Fault tolerance with checkpointing
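A minimal sketch of the BSP-plus-collectives model these features support, using plain Java threads and a barrier (an illustration of the model only, not Harp's API; Harp provides the same allreduce over Hadoop partitions):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Sketch of BSP-style parallelism with an allreduce collective: each
// worker computes a local partial value, then the barrier action combines
// them so every worker sees the global sum before the next superstep.
public class BspAllReduceSketch {
    public static void main(String[] args) throws InterruptedException {
        int workers = 4;
        double[] partial = new double[workers];
        double[] global = new double[1];
        CyclicBarrier barrier = new CyclicBarrier(workers, () -> {
            double s = 0;                       // runs once when all workers arrive
            for (double p : partial) s += p;    // the "reduce"
            global[0] = s;                      // visible to all workers = "broadcast"
        });
        Thread[] ts = new Thread[workers];
        for (int w = 0; w < workers; w++) {
            final int id = w;
            ts[w] = new Thread(() -> {
                partial[id] = id + 1.0;         // local computation phase
                try {
                    barrier.await();            // end of superstep
                } catch (InterruptedException | BrokenBarrierException e) {
                    return;
                }
                System.out.println("worker " + id + " sees global sum " + global[0]);
            });
            ts[w].start();
        }
        for (Thread t : ts) t.join();
    }
}
```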
Summarize a million Fungi Sequences
Spherical Phylogram Visualization
[Figure: RAxML result (right) compared with the Spherical Phylogram from the new MDS method, visualized in PlotViz]
Comparing Data Intensive and
Simulation Problems
Comparison of Data Analytics with
Simulation I
• Pleasingly parallel often important in both
• Both are often SPMD and BSP
• Non-iterative MapReduce is major big data paradigm
– not a common simulation paradigm except where “Reduce”
summarizes pleasingly parallel execution
• Big Data often has large collective communication
– Classic simulation has a lot of smallish point-to-point
messages
• Simulation dominantly sparse (nearest neighbor) data
structures
– “Bag of words (users, rankings, images..)” algorithms are
sparse, as is PageRank
– Important data analytics involves full matrix algorithms
Comparison of Data Analytics with
Simulation II
• There are similarities between some graph problems and particle
simulations with a strange cutoff force.
– Both Map-Communication
• Note many big data problems are “long range force” as all points are
linked.
– Easiest to parallelize. Often full matrix algorithms
– e.g. in DNA sequence studies, distance (i, j) defined by BLAST,
Smith-Waterman, etc., between all sequences i, j.
– Opportunity for “fast multipole” ideas in big data.
• In image-based deep learning, neural network weights are block
sparse (corresponding to links to pixel blocks) but can be formulated
as full matrix operations on GPUs and MPI in blocks.
• In HPC benchmarking, Linpack being challenged by a new sparse
conjugate gradient benchmark HPCG, while I am diligently using non-
sparse conjugate gradient solvers in clustering and Multi-
dimensional scaling.
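To make the full-matrix point concrete, a minimal sketch of the dense kernel that GPU and blocked-MPI deep-learning codes execute on each weight block (a generic illustration, not the actual deep-learning code):

```java
// Sketch of the dense kernel behind "full matrix operations": C = A * B
// on one weight block. Block-sparse deep-learning codes run this dense
// kernel per non-zero pixel block rather than exploiting element-wise sparsity.
public class DenseBlockMultiply {
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int p = 0; p < k; p++) {        // loop order favors cache reuse
                double aip = a[i][p];
                for (int j = 0; j < m; j++) c[i][j] += aip * b[p][j];
            }
        return c;
    }
}
```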
“Force Diagrams” for
macromolecules and Facebook
Sensors and the Internet of Things
Covers many streaming applications
IaaS
• Virtual Clusters, Networks (software defined systems)
• Hypervisor
• GPU, Multi-core CPU
PaaS
• Secure Publish-Subscribe
• MapReduce, Workflow
• Metadata (iRODS)
• SQL/NoSQL stores (MongoDB)
• Automatic Cloud Bursting
SaaS
• Manage Sensors and their data
• Image Processing
• Robot Planning
Sensors as a Service: Military Command & Control, Personalized Medicine, Disaster Informatics (Earthquakes)
Execution Environment: Sensor, Local Cluster, XSEDE, FutureGrid, Cloud
(IaaS = Infrastructure as a Service, PaaS = Platform as a Service, SaaS = Software as a Service; Security and API layers cut across all levels)
Internet of Things and the Cloud
• It is projected that there will be 24 (Mobile Industry Group) to 50
(Cisco) billion devices on the Internet by 2020. Most will be small
sensors that send streams of information into the cloud where it will
be processed and integrated with other streams and turned into
knowledge that will help our lives in a multitude of small and big
ways.
• The cloud will become increasingly important as a controller of and
resource provider for the Internet of Things.
• As well as today’s use for smart phone and gaming console support,
visions like the “Intelligent River”, “smart homes and grid” and
“ubiquitous cities” build on this, and we could expect growth in cloud
supported/controlled robotics.
• Some of these “things” will be supporting science
• Natural parallelism over “things”
• “Things” are distributed and so form a Grid
http://www.kpcb.com/internet-trends
[Figure: sensor-cloud architecture. SS = Sensor or Data Interchange Service. Sensors feed databases, storage clouds, and compute clouds (including a Hadoop cluster), with workflow through multiple filter and discovery clouds spanning other services, clouds, and distributed grids: Raw Data → Data → Information → Knowledge → Wisdom → Decisions]
IOTCloud
• Device → Pub-Sub → Storm → Datastore → Data Analysis
• Apache Storm provides a scalable distributed system for processing
data streams coming from devices in real time.
• For example, the Storm layer can decide to store the data in cloud
storage for further analysis or to send control data back to the devices
• Evaluating Pub-Sub systems: ActiveMQ, RabbitMQ, Kafka, Kestrel
(a minimal sketch of the pattern follows)
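A minimal sketch of the publish-subscribe pattern in the Device → Pub-Sub → Storm pipeline (an illustration of the pattern only — not the ActiveMQ/RabbitMQ/Kafka/Kestrel API):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of publish-subscribe: devices publish readings to a topic;
// each subscriber (e.g. a Storm spout) gets its own queue.
public class PubSubSketch {
    static class Topic<T> {
        private final List<BlockingQueue<T>> subscribers = new CopyOnWriteArrayList<>();
        BlockingQueue<T> subscribe() {
            BlockingQueue<T> q = new LinkedBlockingQueue<>();
            subscribers.add(q);
            return q;
        }
        void publish(T msg) {                        // fan out to every subscriber
            for (BlockingQueue<T> q : subscribers) q.offer(msg);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Topic<String> sensorTopic = new Topic<>();
        BlockingQueue<String> stormSpout = sensorTopic.subscribe();
        sensorTopic.publish("device-1: 23.5C");      // device side
        System.out.println("spout received: " + stormSpout.take());
    }
}
```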
Performance
From Device to Cloud
• 6 FutureGrid India Medium
OpenStack machines
• 1 Broker machine,
RabbitMQ or ActiveMQ
• 1 machine hosting
ZooKeeper and Storm –
Nimbus (Master for Storm)
• 2 Sensor sites generating
data
• 2 Storm nodes sending
back the same data and we
measure the unidirectional
latency
• Using drones and Kinects
39: Particle Physics: Analysis of LHC Large Hadron Collider Data:
Discovery of Higgs particle I
• Application: One analyses collisions at the CERN LHC (Large Hadron Collider)
Accelerator, with Monte Carlo simulations producing events describing particle-apparatus
interaction. Processed information defines physics properties of events (lists of
particles with type and momenta). These events are analyzed to find new effects;
both new particles (Higgs) and present evidence that conjectured particles
(Supersymmetry) have not been detected. LHC has a few major experiments
including ATLAS and CMS. These experiments have global participants (for example
CMS has 3600 participants from 183 institutions in 38 countries), and so the data at
all levels is transported and accessed across continents.
Astronomy & Physics
[Figure: CERN LHC Accelerator Ring (27 km circumference, up to 175 m depth) at Geneva, with 4 experiment positions marked]
MRStat or PP, MC; Parallelism over observed collisions
(June 19 2014: Kaggle competition for machine learning to uncover Higgs decays to pairs of tau leptons)
13: Cloud Large Scale Geospatial Analysis and
Visualization
• Application: Need to support large scale geospatial data analysis and
visualization, with the number of geospatially aware sensors and the
number of geospatially tagged data sources rapidly increasing.
Defense
PP, GIS, Classification; Parallelism over Sensors and people accessing data; Streaming
50: DOE-BER AmeriFlux and FLUXNET
Networks I
• Application: AmeriFlux and FLUXNET are US and world collections respectively of
sensors that observe trace gas fluxes (CO2, water vapor) across a broad spectrum of
times (hours, days, seasons, years, and decades) and space. Moreover, such
datasets provide the crucial linkages among organisms, ecosystems, and process-
scale studies—at climate-relevant scales of landscapes, regions, and continents—
for incorporation into biogeochemical and climate models.
• Current Approach: Software includes EddyPro, Custom analysis software, R,
python, neural networks, Matlab. There are ~150 towers in AmeriFlux and over 500
towers distributed globally collecting flux measurements.
• Futures: Field experiment data taking would be improved by access to existing data
and automated entry of new data via mobile devices. Need to support
interdisciplinary study integrating diverse data sources.
Earth, Environmental
and Polar Science
Fusion, PP, GIS; Parallelism over Sensors; Streaming
51: Consumption forecasting in Smart Grids
• Application: Predict energy consumption for customers, transformers, sub-
stations and the electrical grid service area using smart meters providing
measurements every 15-mins at the granularity of individual consumers
within the service area of smart power utilities. Combine Head-end of
smart meters (distributed), Utility databases (Customer Information,
Network topology; centralized), US Census data (distributed), NOAA
weather data (distributed), Micro-grid building information system
(centralized), Micro-grid sensor network (distributed). This generalizes to
real-time data-driven analytics for time series from cyber physical systems
• Current Approach: GIS based visualization. Data is around 4 TB a year for a
city with 1.4M sensors in Los Angeles. Uses R/Matlab, Weka, Hadoop
software. Significant privacy issues requiring anonymization by
aggregation. Combine real time and historic data with machine learning for
predicting consumption.
• Futures: Widespread deployment of Smart Grids with new analytics
integrating diverse data and supporting curtailment requests. Mobile
applications for client interactions.
Energy
Fusion, PP, MR, ML, GIS, Classification; Parallelism over Sensors; Streaming
Java Grande
• I once tried to encourage use of Java in HPC with Java Grande
Forum but Fortran, C and C++ remain central HPC languages.
– Not helped by .com and Sun collapse in 2000-2005
• The pure Java CartaBlanca, a 2005 R&D100 award-winning
project, was an early successful example of HPC use of Java in a
simulation tool for non-linear physics on unstructured grids.
• Of course Java is a major language in ABDS and as data analysis
and simulation are naturally linked, should consider broader use
of Java
• Using Habanero Java (from Rice University) for Threads and
mpiJava or FastMPJ for MPI, gathering collection of high
performance parallel Java analytics
– Converted from C#; the sequential Java is faster than the sequential C#
• So we will have either Hadoop+Harp or classic Threads/MPI
versions in a Java Grande version of Mahout
Performance of MPI Kernel Operations
[Figure: performance of MPI send and receive operations — average time (µs, log scale) vs. message size (0B–512KB) for MPI.NET C# on Tempest, FastMPJ Java on FutureGrid, OMPI-nightly Java, OMPI-trunk Java, and OMPI-trunk C on FutureGrid]
[Figure: performance of the MPI allreduce operation — average time (µs, log scale) vs. message size (4B–4MB) for the same configurations]
[Figure: performance of MPI allreduce on Infiniband and Ethernet — average time (µs) vs. message size (4B–4MB) for OMPI-trunk C and Java on Madrid and FutureGrid]
[Figure: performance of MPI send and receive on Infiniband and Ethernet — average time (µs) vs. message size (0B–512KB) for the same configurations]
Pure Java (as in FastMPJ) is slower than Java interfacing to the C version of MPI.
DAVS Performance
• Charge2 Proteomics: 241,605 points
[Figures: DAVS performance — Pure MPI times, MPI with Threads times, and Pure MPI speedup; time (hours) and speedup vs. parallelism pattern T×P×N for MPI.NET, OMPI-nightly, and OMPI-trunk]
[Figure: Cluster Count vs. Temperature for 2 runs, DAVS(2) and DA2D, marking where the Sponge starts in DAVS(2), where the Close Cluster Check is added, and where the Sponge reaches its final value]
• All start with one cluster at far left
• T=1 is special, as measurement errors are divided out
• DA2D counts clusters with 1 member as clusters. DAVS(2) does not
Lessons / Insights
• Integrate (don’t compete) HPC with “Commodity Big
data” (Google to Amazon to Enterprise Data Analytics)
– i.e. improve Mahout; don’t compete with it
– Use Hadoop plug-ins rather than replacing Hadoop
• Enhanced Apache Big Data Stack HPC-ABDS has ~120
members
• Opportunities at Resource management, Data/File,
Streaming, Programming, monitoring, workflow layers
for HPC and ABDS integration
• Data intensive algorithms do not have the well
developed high performance libraries familiar from
HPC
• Strong case for high performance Java (Grande) run
time supporting all forms of parallelism
Spare Slides
http://www.kpcb.com/internet-trends
Iterative MapReduce
Implementing HPC-ABDS
Judy Qiu, Bingjing Zhang, Dennis
Gannon, Thilina Gunarathne
Using Optimal “Collective” Operations
• Twister4Azure Iterative MapReduce with enhanced collectives
– Map-AllReduce primitive and MapReduce-MergeBroadcast
• Strong Scaling on K-means for up to 256 cores on Azure
Kmeans and (Iterative) MapReduce
• Shaded areas are compute only, where Hadoop on an HPC cluster is
fastest
• Areas above the shading are overheads, where Twister4Azure (T4A) is
smallest and T4A with the AllReduce collective has the lowest overhead
• Note that even on Azure, Java (orange) is faster than the T4A C# for compute
[Figure: K-means time (s) vs. number of cores × number of data points (32×32M to 256×256M) for Hadoop AllReduce, Hadoop MapReduce, Twister4Azure AllReduce, Twister4Azure Broadcast, Twister4Azure, and HDInsight (Azure Hadoop)]
Collectives improve traditional
MapReduce
• Poly-algorithms choose the best collective implementation for machine
and collective at hand
• This is K-means running within basic Hadoop but with optimal AllReduce
collective operations
• Running on Infiniband Linux Cluster
VTU 6th Sem Elective CSE - Module 4 cloud computing
 
module4-cloudcomputing-180131071200.pdf
module4-cloudcomputing-180131071200.pdfmodule4-cloudcomputing-180131071200.pdf
module4-cloudcomputing-180131071200.pdf
 
Simple, Modular and Extensible Big Data Platform Concept
Simple, Modular and Extensible Big Data Platform ConceptSimple, Modular and Extensible Big Data Platform Concept
Simple, Modular and Extensible Big Data Platform Concept
 
Big Data Open Source Tools and Trends: Enable Real-Time Business Intelligence...
Big Data Open Source Tools and Trends: Enable Real-Time Business Intelligence...Big Data Open Source Tools and Trends: Enable Real-Time Business Intelligence...
Big Data Open Source Tools and Trends: Enable Real-Time Business Intelligence...
 

Mehr von Geoffrey Fox

AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...Geoffrey Fox
 
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Geoffrey Fox
 
Data Science and Online Education
Data Science and Online EducationData Science and Online Education
Data Science and Online EducationGeoffrey Fox
 
Big Data HPC Convergence and a bunch of other things
Big Data HPC Convergence and a bunch of other thingsBig Data HPC Convergence and a bunch of other things
Big Data HPC Convergence and a bunch of other thingsGeoffrey Fox
 
Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Geoffrey Fox
 
Data Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityData Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityGeoffrey Fox
 
Experience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyExperience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyGeoffrey Fox
 
Big Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationBig Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationGeoffrey Fox
 
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...Geoffrey Fox
 
FutureGrid Computing Testbed as a Service
 FutureGrid Computing Testbed as a Service FutureGrid Computing Testbed as a Service
FutureGrid Computing Testbed as a ServiceGeoffrey Fox
 
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...Geoffrey Fox
 
NIST Big Data Public Working Group NBD-PWG
NIST Big Data Public Working Group NBD-PWGNIST Big Data Public Working Group NBD-PWG
NIST Big Data Public Working Group NBD-PWGGeoffrey Fox
 
Linking Programming models between Grids, Web 2.0 and Multicore
Linking Programming models between Grids, Web 2.0 and Multicore Linking Programming models between Grids, Web 2.0 and Multicore
Linking Programming models between Grids, Web 2.0 and Multicore Geoffrey Fox
 
CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 2CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 2Geoffrey Fox
 
CTS Conference Web 2.0 Tutorial Part 1
CTS Conference Web 2.0 Tutorial Part 1CTS Conference Web 2.0 Tutorial Part 1
CTS Conference Web 2.0 Tutorial Part 1Geoffrey Fox
 

Mehr von Geoffrey Fox (16)

AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
 
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
 
Data Science and Online Education
Data Science and Online EducationData Science and Online Education
Data Science and Online Education
 
Big Data HPC Convergence and a bunch of other things
Big Data HPC Convergence and a bunch of other thingsBig Data HPC Convergence and a bunch of other things
Big Data HPC Convergence and a bunch of other things
 
Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...Lessons from Data Science Program at Indiana University: Curriculum, Students...
Lessons from Data Science Program at Indiana University: Curriculum, Students...
 
Data Science Curriculum at Indiana University
Data Science Curriculum at Indiana UniversityData Science Curriculum at Indiana University
Data Science Curriculum at Indiana University
 
Experience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC TechnologyExperience with Online Teaching with Open Source MOOC Technology
Experience with Online Teaching with Open Source MOOC Technology
 
Big Data and Clouds: Research and Education
Big Data and Clouds: Research and EducationBig Data and Clouds: Research and Education
Big Data and Clouds: Research and Education
 
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...
 
Remarks on MOOC's
Remarks on MOOC'sRemarks on MOOC's
Remarks on MOOC's
 
FutureGrid Computing Testbed as a Service
 FutureGrid Computing Testbed as a Service FutureGrid Computing Testbed as a Service
FutureGrid Computing Testbed as a Service
 
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
 
NIST Big Data Public Working Group NBD-PWG
NIST Big Data Public Working Group NBD-PWGNIST Big Data Public Working Group NBD-PWG
NIST Big Data Public Working Group NBD-PWG
 
Linking Programming models between Grids, Web 2.0 and Multicore
Linking Programming models between Grids, Web 2.0 and Multicore Linking Programming models between Grids, Web 2.0 and Multicore
Linking Programming models between Grids, Web 2.0 and Multicore
 
CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 2CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 2
 
CTS Conference Web 2.0 Tutorial Part 1
CTS Conference Web 2.0 Tutorial Part 1CTS Conference Web 2.0 Tutorial Part 1
CTS Conference Web 2.0 Tutorial Part 1
 

Kürzlich hochgeladen

"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DaySri Ambati
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 

Kürzlich hochgeladen (20)

"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 

Matching Data Intensive Applications and Hardware/Software Architectures

functionality with a vital, large support organization
– HPC, including MPI, has striking success in delivering high performance, but with a fragile sustainability model
• There are key systems abstractions, which are levels in the HPC-ABDS software stack, where the Apache approach needs careful integration with HPC
– Resource management
– Storage
– Programming model: horizontal-scaling parallelism
– Collective and point-to-point communication
– Support of iteration
– Data interface (not just key-value)
• In application areas, we define application abstractions to support:
– Graphs/networks
– Geospatial data
– Genes
– Images, etc.

HPC-ABDS Hourglass
(Diagram: an hourglass with the HPC and ABDS software projects at the top, system abstractions/standards at the narrow waist, and high-performance applications at the bottom.)
• System (middleware): 120 software projects
• System abstractions/standards:
– HPC Yarn for resource management
– Horizontally scalable parallel programming model
– Collective and point-to-point communication
– Support of iteration (in-memory databases)
– Data format
– Storage
• Application abstractions/standards: graphs, networks, images, geospatial ….
• High-performance applications: SPIDAL (Scalable Parallel Interoperable Data Analytics Library) or high-performance Mahout, R, Matlab…

NIST Big Data Use Cases
Chaitan Baru, Bob Marcus, Wo Chang (co-leaders)

Use Case Template
• 26 fields completed for 51 areas
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1

51 Detailed Use Cases: Contributed July–September 2013
Covers goals and data features such as the 3 V's, software, hardware
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation (4): National Archives and Records Administration, Census Bureau
• Commercial (8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search, Digital Materials, Cargo Shipping (as in UPS)
• Defense (3): Sensors, Image Surveillance, Situation Assessment
• Healthcare and Life Sciences (10): Medical Records, Graph and Probabilistic Analysis, Pathology, Bioimaging, Genomics, Epidemiology, People Activity Models, Biodiversity
• Deep Learning and Social Media (6): Driving Car, Geolocate Images/Cameras, Twitter, Crowd Sourcing, Network Science, NIST Benchmark Datasets
• The Ecosystem for Research (4): Metadata, Collaboration, Language Translation, Light Source Experiments
• Astronomy and Physics (5): Sky Surveys including comparison to simulation, Large Hadron Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science (10): Radar Scattering in Atmosphere, Earthquake, Ocean, Earth Observation, Ice Sheet Radar Scattering, Earth Radar Mapping, Climate Simulation Datasets, Atmospheric Turbulence Identification, Subsurface Biogeochemistry (microbes to watersheds), AmeriFlux and FLUXNET Gas Sensors
• Energy (1): Smart Grid
26 features for each use case; biased to science

Part of Property Summary Table
(Figure: extract from the table summarizing properties of the 51 use cases.)

10 Suggested Generic Use Cases
1) Multiple users performing interactive queries and updates on a database with basic availability and eventual consistency (BASE)
2) Perform real-time analytics on data source streams and notify users when specified events occur
3) Move data from external data sources into a highly horizontally scalable data store, transform it using highly horizontally scalable processing (e.g. MapReduce), and return it to the horizontally scalable data store (ELT)
4) Perform batch analytics on the data in a highly horizontally scalable data store using highly horizontally scalable processing (e.g. MapReduce) with a user-friendly interface (e.g. SQL-like)
5) Perform interactive analytics on data in an analytics-optimized database
6) Visualize data extracted from a horizontally scalable Big Data store
7) Move data from a highly horizontally scalable data store into a traditional Enterprise Data Warehouse
8) Extract, process, and move data from data stores to archives
9) Combine data from Cloud databases and on-premise data stores for analytics, data mining, and/or machine learning
10) Orchestrate multiple sequential and parallel data transformations and/or analytic processing using a workflow manager

10 Security & Privacy Use Cases
• Consumer Digital Media Usage
• Nielsen Homescan
• Web Traffic Analytics
• Health Information Exchange
• Personal Genetic Privacy
• Pharma Clinical Trial Data Sharing
• Cyber-security
• Aviation Industry
• Military - Unmanned Vehicle Sensor Data
• Education - "Common Core" Student Performance Reporting
• Need to integrate the 10 "generic" and 10 "security & privacy" use cases with the 51 "full" use cases

Big Data Patterns – the Ogres

We would like to capture the "essence of these use cases" as "small" kernels (mini-apps), or classify applications into patterns.
We do this from an HPC background, not a database viewpoint: e.g. we focus on cases with detailed analytics.
Section 5 of my class https://bigdatacoursespring2014.appspot.com/preview classifies the 51 use cases with Ogre facets.

What are "mini-Applications"?
• Use for benchmarks of computers and software (is my parallel compiler any good?)
• In parallel computing, this is well established:
– Linpack for measuring performance to rank machines in the Top500 (changing?)
– NAS Parallel Benchmarks (originally a pencil-and-paper specification to allow optimal implementations; then an MPI library)
– Other specialized benchmark sets keep changing and are used to guide procurements
• The last 2 NSF hardware solicitations had NO preset benchmarks – perhaps as there is no agreement on key applications for clouds and data-intensive applications
– Berkeley dwarfs capture different structures that any approach to parallel computing must address
– Templates were used to capture parallel computing patterns
• Also database benchmarks like TPC

HPC Benchmark Classics
• Linpack or HPL: parallel LU factorization for the solution of linear equations
• NPB version 1: mainly classic HPC solver kernels
– MG: Multigrid
– CG: Conjugate Gradient
– FT: Fast Fourier Transform
– IS: Integer Sort
– EP: Embarrassingly Parallel
– BT: Block Tridiagonal
– SP: Scalar Pentadiagonal
– LU: Lower-Upper symmetric Gauss-Seidel

13 Berkeley Dwarfs
• Dense Linear Algebra
• Sparse Linear Algebra
• Spectral Methods
• N-Body Methods
• Structured Grids
• Unstructured Grids
• MapReduce
• Combinational Logic
• Graph Traversal
• Dynamic Programming
• Backtrack and Branch-and-Bound
• Graphical Models
• Finite State Machines
The first 6 of these correspond to Colella's original dwarfs; Monte Carlo was dropped, and N-body methods are a subset of Colella's Particles. Note the list is a little inconsistent in that MapReduce is a programming model while spectral methods are a numerical method. We need multiple facets!

51 Use Cases: What is Parallelism Over?
• People: either the users (but see below) or the subjects of the application, and often both
• Decision makers like researchers or doctors (users of the application)
• Items such as images, EMR, sequences below; observations or contents of an online store
– Images or "electronic information nuggets"
– EMR: Electronic Medical Records (often similar to people parallelism)
– Protein or gene sequences
– Material properties, manufactured object specifications, etc., in custom datasets
– Modelled entities like vehicles and people
• Sensors – Internet of Things
• Events such as detected anomalies in telescope, credit card, or atmospheric data
• (Complex) nodes in an RDF graph
• Simple nodes as in a learning network
• Tweets, blogs, documents, web pages, etc.
– And the characters/words in them
• Files or data to be backed up, moved, or assigned metadata
• Particles/cells/mesh points as in parallel simulations

51 Use Cases: Low-Level (Run-time) Computational Types
• PP (26): Pleasingly Parallel or Map-Only
• MR (18 + 7 MRStat): Classic MapReduce
• MRStat (7): simple version of MR where the key computations are simple reductions, as in statistical averages
• MRIter (23): Iterative MapReduce
• Graph (9): complex graph data structure needed in analysis
• Fusion (11): integrate diverse data to aid discovery/decision making; could involve sophisticated algorithms or could just be a portal
• Streaming (41): some data comes in incrementally and is processed this way
(Counts out of 51 use cases)

51 Use Cases: Higher-Level Computational Types or Features
• Classification (30): divide data into categories
• S/Q/Index (12): Search and Query
• CF (4): Collaborative Filtering
• Local ML (36): Local Machine Learning
• Global ML (23): Deep Learning, Clustering, LDA, PLSI, MDS; Large-Scale Optimizations as in Variational Bayes, Lifted Belief Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt (sometimes called EGO or Exascale Global Optimization)
• Workflow: (left out of the analysis but very common)
• GIS (16): geotagged data, often displayed in ESRI, Microsoft Virtual Earth, Google Earth, GeoServer, etc.
• HPC (5): classic large-scale simulation of cosmos, materials, etc. generates big data
• Agent (2): simulations of models of data-defined macroscopic entities represented as agents
(Categories are not independent)

Global Machine Learning aka EGO – Exascale Global Optimization
• Typically maximum likelihood or χ² with a sum over the N data items – documents, sequences, items to be sold, images, etc., and often links (point-pairs). Usually it is a sum of positive numbers, as in least squares
• Covers clustering/community detection, mixture models, topic determination, multidimensional scaling, (deep) learning networks
• PageRank is "just" parallel linear algebra
• Note many Mahout algorithms are sequential – partly as MapReduce is limited, partly because the parallelism is unclear
– MLlib (Spark-based) is better
• SVM and Hidden Markov Models do not use large-scale parallelization in practice?
• There are detailed papers on particular parallel graph algorithms

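For concreteness, the objective described above can be written as a standard least-squares/χ² form; the model f, parameters θ, and per-item scales σᵢ below are generic placeholder notation, not something the deck specifies:

    \chi^2(\theta) = \sum_{i=1}^{N} \left( \frac{f(x_i;\theta) - y_i}{\sigma_i} \right)^2 \ge 0

For the point-pair (link) cases mentioned above, the sum instead runs over pairs, as in MDS stress functions of the form \sum_{i<j} w_{ij}\,(d_{ij}(\theta) - \delta_{ij})^2.
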
17: Pathology Imaging/Digital Pathology I
• Application: Digital pathology imaging is an emerging field where examination of high-resolution images of tissue specimens enables novel and more effective ways for disease diagnosis. Pathology image analysis segments massive (millions per image) spatial objects such as nuclei and blood vessels, represented with their boundaries, along with many extracted image features from these objects. The derived information is used for many complex queries and analytics to support biomedical research and clinical diagnosis.
(Healthcare & Life Sciences; MR, MRIter, PP, Classification; parallelism over images; streaming)

17: Pathology Imaging/Digital Pathology II
• Current approach: 1 GB raw image data + 1.5 GB analytical results per 2D image. MPI for image analysis; MapReduce + Hive with spatial extension on supercomputers and clouds. GPUs used effectively. The figure shows the architecture of Hadoop-GIS, a spatial data warehousing system over MapReduce to support spatial analytics for analytical pathology imaging.
• Futures: Recently, 3D pathology imaging has been made possible through 3D laser technologies or serially sectioning hundreds of tissue sections onto slides and scanning them into digital images. Segmenting 3D microanatomic objects from registered serial images could produce tens of millions of 3D objects from a single image. This provides a deep "map" of human tissues for next-generation diagnosis. 1 TB raw image data + 1 TB analytical results per 3D image, and 1 PB of data per moderate hospital per year.
(Figure: architecture of Hadoop-GIS. Healthcare & Life Sciences; parallelism over images, or over pixels within an image, especially for GPUs)

18: Computational Bioimaging
• Application: Data delivered from bioimaging is increasingly automated, higher resolution, and multi-modal. This has created a data analysis bottleneck that, if resolved, can advance biosciences discovery through Big Data techniques.
• Current approach: The current piecemeal analysis approach does not scale to a situation where a single scan on emerging machines is 32 TB and medical diagnostic imaging is annually around 70 PB even excluding cardiology. One needs a web-based one-stop shop for high-performance, high-throughput image processing for producers and consumers of models built on bio-imaging data.
• Futures: The goal is to solve that bottleneck with extreme-scale computing, with community-focused science gateways to support the application of massive data analysis to massive imaging data sets. Workflow components include data acquisition, storage, enhancement, noise minimization, segmentation of regions of interest, crowd-based selection and extraction of features, and object classification, organization, and search. Use ImageJ, OMERO, VolRover, and advanced segmentation and feature detection software.
(Healthcare & Life Sciences; largely Local Machine Learning and Pleasingly Parallel)

26: Large-scale Deep Learning
• Application: Large models (e.g., neural networks with more neurons and connections) combined with large datasets are increasingly the top performers in benchmark tasks for vision, speech, and natural language processing. One needs to train a deep neural network from a large (>>1 TB) corpus of data (typically imagery, video, audio, or text). Such training procedures often require customization of the neural network architecture, learning criteria, and dataset pre-processing. In addition to the computational expense demanded by the learning algorithms, the need for rapid prototyping and ease of development is extremely high.
• Current approach: The largest applications so far are to image recognition and scientific studies of unsupervised learning, with 10 million images and up to 11 billion parameters on a 64-GPU HPC InfiniBand cluster. Both supervised (using existing classified images) and unsupervised applications.
• Futures: Large datasets of 100 TB or more may be necessary to exploit the representational power of the larger models. Training a self-driving car could take 100 million images at megapixel resolution. Deep learning shares many characteristics with the broader field of machine learning. The paramount requirements are high computational throughput for mostly dense linear algebra operations, and extremely high productivity for researcher exploration. One needs integration of high-performance libraries with high-level (Python) prototyping environments.
(Deep Learning, Social Networking; MRIter, EGO, Classification; parallelism over nodes in the neural network and over the data being classified. Global Machine Learning, but Stochastic Gradient Descent uses only a small fraction of the total images (hundreds) at each iteration, so parallelism over images is not clearly useful.)

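To make the SGD remark concrete, here is a minimal sketch (not from the slides) of mini-batch Stochastic Gradient Descent for a least-squares linear model in plain Java; the batch of a few hundred samples per step mirrors the slide's observation that each iteration touches only hundreds of images. All class and parameter names are illustrative.

    import java.util.Random;

    // Mini-batch SGD for linear least squares: minimize sum_i (w.x_i - y_i)^2.
    // Illustrative sketch only; real deep-learning training replaces the linear
    // model with a network and the gradient with backpropagation.
    public class MiniBatchSGD {
        public static double[] fit(double[][] x, double[] y, int batch, double lr, int iters) {
            int d = x[0].length;
            double[] w = new double[d];
            Random rng = new Random(42);
            for (int t = 0; t < iters; t++) {
                double[] grad = new double[d];
                for (int b = 0; b < batch; b++) {          // sample a small mini-batch
                    int i = rng.nextInt(x.length);
                    double err = dot(w, x[i]) - y[i];
                    for (int j = 0; j < d; j++) grad[j] += 2 * err * x[i][j] / batch;
                }
                for (int j = 0; j < d; j++) w[j] -= lr * grad[j];  // gradient step
            }
            return w;
        }
        private static double dot(double[] a, double[] b) {
            double s = 0;
            for (int j = 0; j < a.length; j++) s += a[j] * b[j];
            return s;
        }
    }

Because each step reads only `batch` points, parallelizing over the full image set gives little benefit per iteration, which is exactly the parallelization difficulty the slide notes.
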
27: Organizing large-scale, unstructured collections of consumer photos I
• Application: Produce 3D reconstructions of scenes using collections of millions to billions of consumer images, where neither the scene structure nor the camera positions are known a priori. Use the resulting 3D models to allow efficient browsing of large-scale photo collections by geographic position. Geolocate new images by matching to 3D models. Perform object recognition on each image. 3D reconstruction is posed as a robust non-linear least-squares optimization problem where observed relations between images are constraints and the unknowns are the 6-D camera pose of each image and the 3D position of each point in the scene.
• Current approach: Hadoop cluster with 480 cores processing data of initial applications. Note over 500 billion images (too small) on Facebook and over 5 billion on Flickr, with over 1800 million images (was 500 million a year ago) added to social media sites each day.
(Deep Learning, Social Networking; Global Machine Learning after initial local steps)

27: Organizing large-scale, unstructured collections of consumer photos II
• Futures: Need many analytics, including feature extraction, feature matching, and large-scale probabilistic inference, which appear in many or most computer vision and image processing problems, including recognition, stereo resolution, and image denoising. Need to visualize large-scale 3D reconstructions and navigate large-scale collections of images that have been aligned to maps.
(Deep Learning, Social Networking; Global Machine Learning after initial Local ML pleasingly parallel steps)

36: Catalina Real-Time Transient Survey (CRTS): a digital, panoramic, synoptic sky survey I
• Application: The survey explores the variable universe in the visible light regime, on time scales ranging from minutes to years, by searching for variable and transient sources. It discovers a broad variety of astrophysical objects and phenomena, including various types of cosmic explosions (e.g., supernovae), variable stars, phenomena associated with accretion onto massive black holes (active galactic nuclei) and their relativistic jets, high proper-motion stars, etc. The data are collected from 3 telescopes (2 in Arizona and 1 in Australia), with additional ones expected in the near future (in Chile).
• Current approach: The survey generates up to ~0.1 TB on a clear night, with a total of ~100 TB in current data holdings. The data are preprocessed at the telescope and transferred to the Univ. of Arizona and Caltech for further analysis, distribution, and archiving. The data are processed in real time, and detected transient events are published electronically through a variety of dissemination mechanisms, with no proprietary withholding period (CRTS has a completely open data policy). Further data analysis includes classification of the detected transient events, additional observations using other telescopes, scientific interpretation, and publishing. In this process, it makes heavy use of the archival data (several PBs) from a wide variety of geographically distributed resources connected through the Virtual Observatory (VO) framework.
(Astronomy & Physics; PP, ML, Classification; parallelism over images and events: celestial events identified in telescope images; streaming, workflow)

36: Catalina Real-Time Transient Survey (CRTS): a digital, panoramic, synoptic sky survey II
• Futures: CRTS is a scientific and methodological testbed and precursor of larger surveys to come, notably the Large Synoptic Survey Telescope (LSST), expected to operate in the 2020s and selected as the highest-priority ground-based instrument in the 2010 Astronomy and Astrophysics Decadal Survey. LSST will gather about 30 TB per night.
(Astronomy & Physics)

43: Radar Data Analysis for CReSIS Remote Sensing of Ice Sheets IV
• Typical CReSIS echogram with detected boundaries. The upper (green) boundary is between the air and ice layers, while the lower (red) boundary is between the ice and the terrain.
(Earth, Environmental and Polar Science; PP, GIS; parallelism over radar images; streaming)

44: UAVSAR Data Processing, Data Product Delivery, and Data Services II
• Combined unwrapped coseismic interferograms for flight lines 26501, 26505, and 08508 for the October 2009 – April 2010 time period. End points where slip can be seen on the Imperial, Superstition Hills, and Elmore Ranch faults are noted. GPS stations are marked by dots and are labeled.
(Earth, Environmental and Polar Science; PP, GIS; parallelism over radar images; streaming)

Other Facets of the Ogres

Application Class Facet of Ogres
• Classification (30): divide data into categories
• Search, Index, and Query (12)
• Maximum Likelihood or χ² minimizations
• Expectation Maximization (often steepest descent)
• Local (pleasingly parallel) Machine Learning (36), contrasted to
• (Exascale) Global Optimization (23) (such as learning networks, Variational Bayes, and Gibbs sampling)
• Do they use Agents (2), as in epidemiology (swarm approaches)?
The earlier Higher-Level Computational Types or Features slide also has CF (4): Collaborative Filtering, which belongs in the Core Analytics facet, and two categories that belong in the Data Source and Style facet:
• GIS (16): geotagged data, often displayed in ESRI, Microsoft Virtual Earth, Google Earth, GeoServer, etc.
• HPC (5): classic large-scale simulation of cosmos, materials, etc. generates big data

Problem Architecture Facet of Ogres (Meta or MacroPattern)
i. Pleasingly Parallel – as in BLAST, protein docking, some (bio-)imagery, including Local Analytics or Machine Learning – ML or filtering pleasingly parallel, as in bio-imagery and radar images (pleasingly parallel but with sophisticated local analytics)
ii. Classic MapReduce for Search and Query
iii. Global Analytics or Machine Learning requiring iterative programming models
iv. Problem set up as a graph, as opposed to vector or grid
v. SPMD (Single Program Multiple Data)
vi. Bulk Synchronous Processing: well-defined compute-communication phases
vii. Fusion: knowledge discovery often involves fusion of multiple methods
viii. Workflow (often used in fusion)
Note that problem and machine architectures are related. This is a slight expansion of the earlier slide on major analytics architectures in the use cases: pleasingly parallel; search (MapReduce); Map-Collective; Map-Communication as in MPI; shared memory. The Low-Level (Run-time) Computational Types used to label the 51 use cases were PP (26), MR (18 + 7 MRStat), MRStat (7), MRIter (23), Graph (9), Fusion (11), and Streaming (41) in the Data Source facet.

4 Forms of MapReduce (Users and Abusers)
(a) Map Only: input → map → output. Pleasingly parallel: BLAST analysis, local machine learning
(b) Classic MapReduce: input → map → reduce. High Energy Physics (HEP) histograms, distributed search
(c) Iterative MapReduce or Map-Collective: input → iterations of map and reduce. Expectation maximization, clustering (e.g. K-means), linear algebra, PageRank
(d) Point to Point (Map-Communication): classic MPI, Giraph. PDE solvers and particle dynamics
The domain of MapReduce and its iterative extensions covers forms (a)–(c); MPI covers (c) and (d). All of them are Map-Communication?

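As a hedged illustration of form (b), the canonical Hadoop mapper/reducer pair below counts key occurrences, the same shape as histogramming HEP events; the Mapper/Reducer API is standard Hadoop, but the class names are made up and the job-driver boilerplate is omitted.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Classic MapReduce (form b): map emits (key, 1); reduce sums the counts per key,
    // the same pattern as summarizing events into histogram bins.
    public class HistogramExample {
      public static class BinMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text bin = new Text();
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {          // each token names a histogram bin
            bin.set(itr.nextToken());
            context.write(bin, ONE);
          }
        }
      }
      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) sum += v.get();  // "Reduce" summarizes the map outputs
          context.write(key, new IntWritable(sum));
        }
      }
    }

Form (c) simply reruns such a map/reduce pair inside a driver loop until convergence, which is why collective-communication support matters there.
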
One Facet of Ogres has Computational Features
a) Flops per byte
b) Communication interconnect requirements
c) Is the application (graph) constant or dynamic?
d) Most applications consist of a set of interconnected entities; is this regular, as a set of pixels, or a complicated irregular graph?
e) Is communication BSP or asynchronous? In the latter case shared memory may be attractive
f) Are algorithms iterative or not?
g) Data abstraction: key-value, pixel, graph, vector; are data points in metric or non-metric spaces?
h) Core libraries needed: matrix-matrix/vector algebra, conjugate gradient, reduction, broadcast

Data Source and Style Facet of Ogres
• (i) SQL
• (ii) NoSQL based
• (iii) Other enterprise data systems (10 examples from Bob Marcus)
• (iv) Set of files (as managed in iRODS)
• (v) Internet of Things
• (vi) Streaming and
• (vii) HPC simulations
• (viii) Involve GIS (Geographical Information Systems)
• Before data gets to the compute system, there is often an initial data-gathering phase characterized by a block size and timing. Block size varies from a month (remote sensing, seismic) to a day (genomic) to seconds or lower (real-time control, streaming)
• There are storage/compute system styles: shared, dedicated, permanent, transient
• Other characteristics are needed for permanent auxiliary/comparison datasets, and these could be interdisciplinary, implying nontrivial data movement/replication

Core Analytics Facet of Ogres (microPattern) I
• Map-Only
– Pleasingly parallel – Local Machine Learning
• MapReduce: Search/Query
– Summarizing statistics as in LHC data analysis (histograms)
– Recommender systems (Collaborative Filtering)
– Linear classifiers (Bayes, Random Forests)
• Global Analytics
– Nonlinear solvers (structure depends on the objective function)
– Stochastic Gradient Descent (SGD)
– (L-)BFGS approximation to Newton's method
– Levenberg-Marquardt solver
• Map-Collective I (need to improve/extend Mahout, MLlib)
– Outlier detection, clustering (many methods)
– Mixture models, LDA (Latent Dirichlet Allocation), PLSI (Probabilistic Latent Semantic Indexing)

Core Analytics Facet of Ogres (microPattern) II
• Map-Collective II
– Use matrix-matrix and matrix-vector operations, solvers (conjugate gradient)
– SVM and logistic regression
– PageRank (find the leading eigenvector of a sparse matrix)
– SVD (Singular Value Decomposition)
– MDS (Multidimensional Scaling)
– Learning neural networks (deep learning)
– Hidden Markov Models
• Map-Communication
– Graph structure (communities, subgraphs/motifs, diameter, maximal cliques, connected components)
– Network dynamics – graph simulation algorithms (epidemiology)
• Asynchronous Shared Memory
– Graph structure (betweenness centrality, shortest path)

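Since PageRank is characterized above as finding the leading eigenvector of a sparse matrix, a minimal power-iteration sketch in plain Java shows why it reduces to repeated matrix-vector products plus a combination step, i.e. a Map-Collective pattern. The adjacency representation, the damping factor of 0.85, and all names are illustrative assumptions, not something the deck specifies.

    // Power iteration for PageRank; real implementations use sparse distributed
    // storage, and dangling nodes (no out-links) simply drop their mass in this
    // simplified sketch.
    public class PageRankSketch {
        public static double[] pagerank(int[][] outLinks, int n, int iters) {
            final double d = 0.85;                 // damping factor (assumed value)
            double[] rank = new double[n];
            java.util.Arrays.fill(rank, 1.0 / n);
            for (int t = 0; t < iters; t++) {
                double[] next = new double[n];
                java.util.Arrays.fill(next, (1.0 - d) / n);
                for (int u = 0; u < n; u++) {      // "map": scatter rank along out-links
                    if (outLinks[u].length == 0) continue;
                    double share = d * rank[u] / outLinks[u].length;
                    for (int v : outLinks[u]) next[v] += share;  // "collective": gather contributions
                }
                rank = next;
            }
            return rank;
        }
    }
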
Parallel Global Machine Learning Examples

Clustering and MDS: Large-Scale O(N²) GML

WDA SMACOF MDS (Multidimensional Scaling) using Harp on Big Red 2
• Parallel efficiency on 100–300K sequences
• Conjugate Gradient (the dominant time) and Matrix Multiplication
(Figure: parallel efficiency vs. number of nodes for 100K, 200K, and 300K points.)

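For reference, the parallel efficiency plotted here follows the conventional definition (not restated on the slide): with T(p) the time on p nodes and S(p) = T(1)/T(p) the speedup,

    E(p) = \frac{T(1)}{p\,T(p)} = \frac{S(p)}{p}

so values near 1.0 mean the conjugate gradient and matrix multiplication kernels scale almost linearly over the plotted node counts.
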
Features of Harp Hadoop Plugin
• Hadoop plugin (on Hadoop 1.2.1 and Hadoop 2.2.0)
• Hierarchical data abstraction on arrays, key-values, and graphs for easy programming expressiveness
• Collective communication model to support various communication operations on the data abstractions
• Caching with buffer management for the memory allocation required by computation and communication
• BSP-style parallelism
• Fault tolerance with checkpointing

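Harp's own API is not shown in the deck, so as a neutral stand-in here is a minimal BSP-style allreduce over plain Java threads: each worker contributes a partial value, a barrier closes the compute phase, and every worker then reads the combined result. All names are hypothetical; this only illustrates the collective/BSP shape that Harp's primitives provide at cluster scale.

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;
    import java.util.concurrent.atomic.DoubleAdder;

    // BSP-style allreduce sketch: compute phase, barrier, then all workers see the sum.
    public class AllReduceSketch {
        public static void main(String[] args) throws InterruptedException {
            int workers = 4;
            DoubleAdder sum = new DoubleAdder();               // shared reduction target
            CyclicBarrier barrier = new CyclicBarrier(workers);
            Thread[] pool = new Thread[workers];
            for (int r = 0; r < workers; r++) {
                final int rank = r;
                pool[r] = new Thread(() -> {
                    double partial = rank + 1.0;               // each worker's local value
                    sum.add(partial);                          // contribute to the reduction
                    try {
                        barrier.await();                       // superstep boundary: all contributions in
                    } catch (InterruptedException | BrokenBarrierException e) {
                        throw new RuntimeException(e);
                    }
                    System.out.println("rank " + rank + " sees allreduce result " + sum.sum());
                });
                pool[r].start();
            }
            for (Thread t : pool) t.join();
        }
    }

The barrier's happens-before guarantee is what makes the post-barrier read safe, which is the same role the end of a BSP communication phase plays in Harp's model.
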
Summarize a Million Fungi Sequences
• Spherical Phylogram visualization
• RAxML result visualized to the right; Spherical Phylogram from the new MDS method visualized in PlotViz

Comparing Data Intensive and Simulation Problems

Comparison of Data Analytics with Simulation I
• Pleasingly parallel is often important in both
• Both are often SPMD and BSP
• Non-iterative MapReduce is a major big data paradigm, but not a common simulation paradigm, except where "Reduce" summarizes a pleasingly parallel execution
• Big data often has large collective communication
– Classic simulation has a lot of smallish point-to-point messages
• Simulation dominantly uses sparse (nearest-neighbor) data structures
– "Bag of words (users, rankings, images…)" algorithms are sparse, as is PageRank
– Important data analytics involve full-matrix algorithms

Comparison of Data Analytics with Simulation II
• There are similarities between some graph problems and particle simulations with a strange cutoff force
– Both are Map-Communication
• Note many big data problems are "long-range force" problems, as all points are linked
– Easiest to parallelize; often full-matrix algorithms
– e.g. in DNA sequence studies, the distance (i, j) is defined by BLAST, Smith-Waterman, etc., between all sequences i, j
– Opportunity for "fast multipole" ideas in big data
• In image-based deep learning, neural network weights are block-sparse (corresponding to links to pixel blocks) but can be formulated as full-matrix operations on GPUs, and MPI in blocks
• In HPC benchmarking, Linpack is being challenged by a new sparse conjugate gradient benchmark, HPCG, while I am diligently using non-sparse conjugate gradient solvers in clustering and multidimensional scaling

Sensors and the Internet of Things
Covers many streaming applications

Sensors as a Service (with a cross-cutting Security API)
• SaaS: manage sensors and their data; image processing; robot planning
• PaaS: secure publish-subscribe; MapReduce, workflow; metadata (iRODS); SQL/NoSQL stores (MongoDB); automatic cloud bursting
• IaaS: virtual clusters and networks (software-defined systems); hypervisor; GPU, multi-core CPU
• Execution environment: sensor, local cluster, XSEDE, FutureGrid, cloud
• Applications: military command & control, personalized medicine, disaster informatics (earthquakes)

Internet of Things and the Cloud
• It is projected that there will be 24 billion (Mobile Industry Group) to 50 billion (Cisco) devices on the Internet by 2020. Most will be small sensors that send streams of information into the cloud, where it will be processed and integrated with other streams and turned into knowledge that will help our lives in a multitude of small and big ways.
• The cloud will become increasingly important as a controller of, and resource provider for, the Internet of Things.
• As well as today's use for smart phone and gaming console support, "Intelligent River", "smart homes and grid", and "ubiquitous cities" build on this vision, and we could expect a growth in cloud-supported/controlled robotics.
• Some of these "things" will be supporting science
• Natural parallelism over "things"
• "Things" are distributed and so form a Grid

Workflow Through Multiple Filter/Discovery Clouds
(Diagram: sensor or data interchange services (SS) feed a database, storage cloud, and compute cloud; data flows through chains of filter clouds and discovery clouds, another service, another grid, another cloud, and a distributed-grid Hadoop cluster.)
Raw Data → Data → Information → Knowledge → Wisdom → Decisions
SS: Sensor or Data Interchange Service

IOTCloud
• Device → Pub-Sub → Storm → Datastore → Data Analysis
• Apache Storm provides a scalable distributed system for processing data streams coming from devices in real time.
• For example, the Storm layer can decide to store the data in cloud storage for further analysis or to send control data back to the devices.
• Evaluating pub-sub systems: ActiveMQ, RabbitMQ, Kafka, Kestrel

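As a minimal stand-in for the pub-sub stage (the deck shows none of the ActiveMQ/RabbitMQ/Kafka/Kestrel APIs), a BlockingQueue decouples a device producer from a stream-processing consumer in plain Java; all names are illustrative, and the queue plays the broker's role purely for exposition.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Device -> broker -> processor pipeline sketch; the in-memory queue stands in
    // for a pub-sub broker, and the consumer stands in for the Storm stage.
    public class PubSubSketch {
        public static void main(String[] args) {
            BlockingQueue<String> broker = new ArrayBlockingQueue<>(1024);
            Thread device = new Thread(() -> {                 // publisher: a sensor stream
                try {
                    for (int i = 0; i < 10; i++) broker.put("reading-" + i);
                    broker.put("EOF");                          // end-of-stream marker
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            Thread processor = new Thread(() -> {              // subscriber: process in arrival order
                try {
                    for (String msg = broker.take(); !msg.equals("EOF"); msg = broker.take()) {
                        System.out.println("processing " + msg); // e.g. store, or send control data back
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            device.start();
            processor.start();
        }
    }

A real broker adds durability, fan-out to many subscribers, and network transport, which is what the ActiveMQ/RabbitMQ/Kafka/Kestrel evaluation above is comparing.
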
Performance From Device to Cloud
• 6 FutureGrid India medium OpenStack machines
• 1 broker machine, RabbitMQ or ActiveMQ
• 1 machine hosting ZooKeeper and Storm Nimbus (the master for Storm)
• 2 sensor sites generating data
• 2 Storm nodes sending back the same data, and we measure the unidirectional latency
• Using drones and Kinects

39: Particle Physics: Analysis of LHC Large Hadron Collider Data: Discovery of Higgs Particle I
• Application: One analyses collisions at the CERN LHC (Large Hadron Collider) accelerator, with Monte Carlo producing events describing particle-apparatus interactions. Processed information defines the physics properties of events (lists of particles with type and momenta). These events are analyzed to find new effects: both new particles (Higgs) and evidence that conjectured particles (supersymmetry) have not been detected. The LHC has a few major experiments, including ATLAS and CMS. These experiments have global participants (for example, CMS has 3600 participants from 183 institutions in 38 countries), and so the data at all levels is transported and accessed across continents.
(Astronomy & Physics; MRStat or PP, MC; parallelism over observed collisions. Photo: CERN LHC accelerator ring, 27 km circumference and up to 175 m depth, at Geneva, with the 4 experiment positions marked. June 19 2014: Kaggle competition for machine learning to uncover the Higgs decay to a pair of tau leptons.)

13: Cloud Large-Scale Geospatial Analysis and Visualization
• Application: Need to support large-scale geospatial data analysis and visualization, with the number of geospatially aware sensors and the number of geospatially tagged data sources rapidly increasing.
(Defense; PP, GIS, Classification; parallelism over sensors and the people accessing data; streaming)

50: DOE-BER AmeriFlux and FLUXNET Networks I
• Application: AmeriFlux and FLUXNET are US and worldwide collections, respectively, of sensors that observe trace gas fluxes (CO2, water vapor) across a broad spectrum of times (hours, days, seasons, years, and decades) and space. Moreover, such datasets provide the crucial linkages among organisms, ecosystems, and process-scale studies – at climate-relevant scales of landscapes, regions, and continents – for incorporation into biogeochemical and climate models.
• Current approach: Software includes EddyPro, custom analysis software, R, Python, neural networks, and Matlab. There are ~150 towers in AmeriFlux and over 500 towers distributed globally collecting flux measurements.
• Futures: Field-experiment data taking would be improved by access to existing data and automated entry of new data via mobile devices. Need to support interdisciplinary studies integrating diverse data sources.
(Earth, Environmental and Polar Science; Fusion, PP, GIS; parallelism over sensors; streaming)

51: Consumption Forecasting in Smart Grids
• Application: Predict energy consumption for customers, transformers, substations, and the electrical grid service area, using smart meters providing measurements every 15 minutes at the granularity of individual consumers within the service area of smart power utilities. Combine the head-end of smart meters (distributed), utility databases (customer information, network topology; centralized), US Census data (distributed), NOAA weather data (distributed), micro-grid building information systems (centralized), and micro-grid sensor networks (distributed). This generalizes to real-time data-driven analytics for time series from cyber-physical systems.
• Current approach: GIS-based visualization. Data is around 4 TB a year for a city with 1.4M sensors in Los Angeles. Uses R/Matlab, Weka, and Hadoop software. Significant privacy issues require anonymization by aggregation. Combine real-time and historic data with machine learning for predicting consumption.
• Futures: Widespread deployment of smart grids, with new analytics integrating diverse data and supporting curtailment requests. Mobile applications for client interactions.
(Energy; Fusion, PP, MR, ML, GIS, Classification; parallelism over sensors; streaming)

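As a toy illustration of the forecasting step (the slide names R/Matlab, Weka, and Hadoop but shows no code), a seasonal-naive baseline in plain Java predicts each 15-minute slot from the average of the same slot on previous days; this is entirely illustrative, and production systems would fold in the weather, census, and topology features listed above.

    // Seasonal-naive baseline for 15-minute smart-meter data: predict slot s of the
    // next day as the mean of slot s over the previous days. Illustrative only.
    public class ConsumptionBaseline {
        static final int SLOTS = 96;                              // 96 x 15-minute slots per day
        public static double[] forecastNextDay(double[][] history) { // history[day][slot], kWh
            double[] forecast = new double[SLOTS];
            for (int s = 0; s < SLOTS; s++) {
                double sum = 0;
                for (double[] day : history) sum += day[s];
                forecast[s] = sum / history.length;               // mean over past days for this slot
            }
            return forecast;
        }
    }
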
  • 68. Java Grande • I once tried to encourage use of Java in HPC with the Java Grande Forum, but Fortran, C, and C++ remain the central HPC languages. – Not helped by the .com collapse and Sun's decline in 2000-2005 • The pure-Java CartaBlanca, a 2005 R&D100 award-winning project, was an early successful example of HPC use of Java in a simulation tool for non-linear physics on unstructured grids. • Of course Java is a major language in ABDS, and as data analysis and simulation are naturally linked, we should consider broader use of Java. • Using Habanero Java (from Rice University) for threads and mpiJava or FastMPJ for MPI, we are gathering a collection of high-performance parallel Java analytics – Converted from C#; sequential Java is faster than sequential C# • So we will have either Hadoop+Harp or classic Threads/MPI versions in a Java Grande version of Mahout
  • 69. Performance of MPI Kernel Operations • [Figures: average time (us) vs. message size for MPI send/receive (0B-512KB) and MPI allreduce (4B-4MB), comparing MPI.NET C# on Tempest, FastMPJ Java on FutureGrid (FG), OMPI-nightly Java FG, OMPI-trunk Java FG, and OMPI-trunk C FG; companion plots compare OMPI-trunk C and Java on Madrid vs. FG, i.e. send/receive and allreduce on Infiniband and Ethernet] • Pure Java as in FastMPJ is slower than Java interfacing to the C version of MPI
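The allreduce kernel benchmarked above looks roughly like the following in Open MPI's Java bindings (the OMPI-trunk Java runs). This is a minimal sketch; mpiJava and FastMPJ spell the same calls slightly differently, so treat the exact method names as assumptions for those runtimes.

```java
import mpi.MPI;
import mpi.MPIException;

// Minimal allreduce kernel sketch: each rank contributes a vector of
// doubles and every rank receives the element-wise sum. Signatures
// follow Open MPI's Java bindings; mpiJava/FastMPJ differ slightly.
public class AllReduceKernel {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();

        int n = 1024;                        // message size in doubles
        double[] local = new double[n];
        java.util.Arrays.fill(local, rank);  // dummy per-rank payload

        double[] global = new double[n];
        // Collective: sum over all ranks lands on every rank
        MPI.COMM_WORLD.allReduce(local, global, n, MPI.DOUBLE, MPI.SUM);

        if (rank == 0) {
            System.out.println("global[0] = " + global[0]);
        }
        MPI.Finalize();
    }
}
```

Launch is typically via the MPI launcher wrapping the JVM, e.g. something like `mpirun -np 8 java AllReduceKernel` with the bindings' jar on the classpath.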
  • 70. DAVS Performance • Charge 2 proteomics data, 241,605 points, 4/1/2013 • [Figures: pure MPI times and speedups for Thread x Process x Node (TxPxN) patterns from 1x1x1 to 1x4x2, and MPI-with-threads times for patterns from 2x1x8 to 1x8x8, comparing MPI.NET, OMPI-nightly, and OMPI-trunk]
  • 71. Cluster Count v. Temperature for 2 Runs • [Figure: cluster count (0 to 60,000) vs. temperature (1.00E-03 to 1.00E+06) for DAVS(2) and DA2D, annotated with "Start Sponge DAVS(2)", "Add Close Cluster Check", and "Sponge reaches final value"] • All runs start with one cluster at the far left • T=1 is special, as measurement errors are divided out there • DA2D counts clusters with 1 member as clusters; DAVS(2) does not
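The temperature axis reflects deterministic annealing: clustering begins at high temperature, where all points belong to one effective cluster, and structure emerges as the system cools. A minimal soft-clustering sketch of that cooling loop in plain Java follows; it fixes K, runs one EM sweep per temperature, and omits the phase-transition cluster splitting and "sponge" cluster of the real DAVS/DA2D codes, so every name and constant here is illustrative.

```java
import java.util.Random;

// Minimal deterministic-annealing soft-clustering sketch (illustrative,
// not the DAVS/DA2D code): fixed K, soft assignments p(k|x) proportional
// to exp(-d(x,c_k)/T), temperature lowered geometrically. Real DA adds
// cluster splitting at phase transitions and iterates to convergence
// at each temperature.
public class DASketch {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int n = 1000, K = 3;
        double[] x = new double[n];
        for (int i = 0; i < n; i++)               // three 1-D blobs at 0, 10, 20
            x[i] = (i % 3) * 10.0 + rnd.nextGaussian();

        double[] c = new double[K];
        for (int k = 0; k < K; k++) c[k] = rnd.nextDouble() * 20.0;

        for (double T = 100.0; T > 0.01; T *= 0.9) {   // cooling schedule
            double[] num = new double[K], den = new double[K];
            for (int i = 0; i < n; i++) {
                double[] dist = new double[K];
                double minD = Double.MAX_VALUE;
                for (int k = 0; k < K; k++) {
                    double diff = x[i] - c[k];
                    dist[k] = diff * diff;
                    if (dist[k] < minD) minD = dist[k];
                }
                double z = 0.0;
                double[] p = new double[K];
                for (int k = 0; k < K; k++) {           // soft E-step
                    p[k] = Math.exp(-(dist[k] - minD) / T); // shifted for stability
                    z += p[k];
                }
                for (int k = 0; k < K; k++) {           // weighted M-step sums
                    num[k] += p[k] / z * x[i];
                    den[k] += p[k] / z;
                }
            }
            for (int k = 0; k < K; k++)
                if (den[k] > 0) c[k] = num[k] / den[k];
        }
        for (int k = 0; k < K; k++)
            System.out.printf("center %d = %.2f%n", k, c[k]);
    }
}
```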
  • 72. Lessons / Insights • Integrate (don't compete) HPC with "Commodity Big Data" (Google to Amazon to enterprise data analytics) – i.e. improve Mahout; don't compete with it – Use Hadoop plug-ins rather than replacing Hadoop • The enhanced Apache Big Data Stack HPC-ABDS has ~120 members • There are opportunities at the resource management, data/file, streaming, programming, monitoring, and workflow layers for HPC and ABDS integration • Data-intensive algorithms do not have the well-developed high-performance libraries familiar from HPC • Strong case for a high-performance Java (Grande) runtime supporting all forms of parallelism
  • 75. Iterative MapReduce Implementing HPC-ABDS Judy Qiu, Bingjing Zhang, Dennis Gannon, Thilina Gunarathne
  • 76. Using Optimal “Collective” Operations • Twister4Azure Iterative MapReduce with enhanced collectives – Map-AllReduce primitive and MapReduce-MergeBroadcast • Strong Scaling on K-means for up to 256 cores on Azure
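The Map-AllReduce primitive for K-means reduces each data split to K partial centroid sums and counts, then combines these small arrays with an allreduce instead of a shuffle. A compact Java sketch of one iteration follows; allReduceSum is a hypothetical stand-in for the framework collective (Twister4Azure, Harp, or MPI), not a real API, and is stubbed here for a single task so the sketch runs standalone.

```java
// One K-means iteration in Map-AllReduce style. allReduceSum is a
// hypothetical stand-in for the framework collective; the single-task
// stubs below just return their input so the sketch compiles and runs.
public class KMeansMapAllReduce {
    static double[][] allReduceSum(double[][] a) { return a; }
    static double[] allReduceSum(double[] a) { return a; }

    static double[][] kmeansStep(double[][] split, double[][] centroids) {
        int K = centroids.length, d = centroids[0].length;
        double[][] sums = new double[K][d];
        double[] counts = new double[K];
        for (double[] point : split) {        // local "map" over this split
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int k = 0; k < K; k++) {
                double dist = 0.0;
                for (int j = 0; j < d; j++) {
                    double diff = point[j] - centroids[k][j];
                    dist += diff * diff;
                }
                if (dist < bestDist) { bestDist = dist; best = k; }
            }
            counts[best]++;
            for (int j = 0; j < d; j++) sums[best][j] += point[j];
        }
        sums = allReduceSum(sums);            // element-wise sum across tasks
        counts = allReduceSum(counts);        // replaces the MapReduce shuffle
        double[][] next = new double[K][d];
        for (int k = 0; k < K; k++)
            for (int j = 0; j < d; j++)
                next[k][j] = counts[k] > 0 ? sums[k][j] / counts[k]
                                           : centroids[k][j];
        return next;
    }

    public static void main(String[] args) {
        double[][] split = { {0, 0}, {0, 1}, {10, 10}, {10, 11} };
        double[][] c = { {1, 1}, {9, 9} };
        c = kmeansStep(split, c);
        System.out.printf("c0=(%.1f,%.1f) c1=(%.1f,%.1f)%n",
                          c[0][0], c[0][1], c[1][0], c[1][1]);
    }
}
```

The reduced data is only K x (d+1) numbers per task, which is why the collective versions show far lower overhead than shuffle-based MapReduce in the measurements on the next slide.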
  • 77. Kmeans and (Iterative) MapReduce • Shaded areas are compute only, where Hadoop on an HPC cluster is fastest • Areas above the shading are overheads, where T4A is smallest and T4A with the AllReduce collective has the lowest overhead • Note that even on Azure, Java (orange) is faster than T4A C# for compute • [Figure: time (0-1400 s) vs. cores x data points (32x32M to 256x256M) for Hadoop AllReduce, Hadoop MapReduce, Twister4Azure AllReduce, Twister4Azure Broadcast, Twister4Azure, and HDInsight (Azure Hadoop)] 77
  • 78. Collectives improve traditional MapReduce • Poly-algorithms choose the best collective implementation for the machine and collective at hand • This is K-means running within basic Hadoop but with optimal AllReduce collective operations • Running on an Infiniband Linux cluster
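Poly-algorithmic selection amounts to dispatching on message size and task count: small messages favor latency-optimal schemes such as recursive doubling, large ones favor bandwidth-optimal rings. A minimal sketch follows; the thresholds and algorithm set are illustrative assumptions, not the values used by the Hadoop plug-in described above.

```java
// Illustrative poly-algorithm dispatch for allreduce. Thresholds and
// algorithm names are assumptions for this sketch only.
enum AllReduceAlgo { RECURSIVE_DOUBLING, BCAST_REDUCE, RING }

class CollectiveSelector {
    static AllReduceAlgo choose(long messageBytes, int numTasks) {
        if (messageBytes < 4096) return AllReduceAlgo.RECURSIVE_DOUBLING; // latency-bound
        if (numTasks <= 8)       return AllReduceAlgo.BCAST_REDUCE;       // few tasks
        return AllReduceAlgo.RING;                                        // bandwidth-bound
    }
}
```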
