1. V. Milutinović, G. Rakocevic, S. Stojanović, and Z. Sustran
University of Belgrade
Oskar Mencer
Imperial College, London
Oliver Pell
Maxeler Technologies, London and Palo Alto
Michael Flynn
Stanford University, Palo Alto
Valentina E. Balas
1/52
Aurel Vlaicu University of Arad
2. For Big Data algorithms
and for the same hardware price as before,
achieving:
a) speed-up of 20-200x
b) monthly electricity bills reduced 20 times
c) size 20 times smaller
2/52
3. Absolutely all results achieved with:
a) all hardware produced in Europe,
specifically UK
b) all software generated by programmers
of EU and WB
3/52
4. ControlFlow (MultiFlow and ManyFlow):
• Top500 ranks using Linpack (Japanese K, …)
DataFlow:
• Coarse Grain (HEP) vs. Fine Grain (Maxeler)
4/52
5. Compiling below the machine code level brings speedups,
as well as lower power, size, and cost.
The price to pay:
the machine is more difficult to program.
Consequently:
ideal for WORM (write once, run many times) applications :)
Examples using Maxeler:
GeoPhysics (20-40), Banking (200-1000, with JP Morgan 20%),
M&C (New York City), Datamining (Google), …
5/52
11. Break-even condition: tCPU = tGPU = tDF, where
tCPU = N * NOPS * CCPU * TclkCPU / NcoresCPU
tGPU = N * NOPS * CGPU * TclkGPU / NcoresGPU
tDF = NOPS * CDF * TclkDF + (N - 1) * TclkDF / NDF
Assumptions:
1. Software includes enough parallelism to keep all cores busy
2. The only limiting factor is the number of cores.
11/52
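The three times above can be compared directly in code. A minimal sketch of the model, with symbol names transcribed from the formulas (any parameter values used in a test run are assumptions, not measurements):

```python
# Analytic timing model from the slide: N data items, NOPS operations per
# item. CPU/GPU time scales with N*NOPS spread over the cores; the
# dataflow engine (DF) pays the pipeline latency once (NOPS stages deep),
# then streams the remaining (N - 1) results at one clock tick per NDF
# parallel pipelines.

def t_cpu(N, NOPS, C_cpu, Tclk_cpu, Ncores_cpu):
    return N * NOPS * C_cpu * Tclk_cpu / Ncores_cpu

def t_gpu(N, NOPS, C_gpu, Tclk_gpu, Ncores_gpu):
    return N * NOPS * C_gpu * Tclk_gpu / Ncores_gpu

def t_df(N, NOPS, C_df, Tclk_df, N_df):
    return NOPS * C_df * Tclk_df + (N - 1) * Tclk_df / N_df
```

Note that for large N the dataflow term grows only with (N - 1)/NDF clock ticks, independent of NOPS; this is the source of the speed-ups claimed on the earlier slides.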
19. • Factor: 20
[Diagram: chip area split between Process Control and Data Processing, shown for MultiCore/ManyCore vs. DataFlow]
19/52
20. • MultiCore:
– Explain what to do, to the driver
– Caches, instruction buffers, and predictors needed
• ManyCore:
– Explain what to do, to many sub-drivers
– Reduced caches and instruction buffers needed
• DataFlow:
– Make a field of processing gates: 1C+2nJava+3Java
– No caches, etc. (300 students/year: BGD, BCN, LjU, ICL, …)
20/52
21. • MultiCore:
– Business as usual
• ManyCore:
– More difficult
• DataFlow:
– Much more difficult
– Debugging both application and configuration code
21/52
22. • MultiCore/ManyCore:
– Several minutes
• DataFlow:
– Several hours for the real hardware
– Fortunately, only several minutes for the simulator
– The simulator supports
both the large JPMorgan machine
as well as the smallest "University Support" machine
• Good news:
– Tabula@2GHz
22/52
29. • Revisiting the Top 500 SuperComputer Benchmarks
– Our paper in Communications of the ACM
– Revisiting all major Big Data DM algorithms
• Massive static parallelism at low clock frequencies
• Concurrency and communication
– Concurrency between millions of tiny cores is difficult;
"jitter" between cores will harm performance
at synchronization points
• Reliability and fault tolerance
– 10-100x fewer nodes, failures much less often
• Memory bandwidth and FLOP/byte ratio
– Optimize data choreography, data movement,
and the algorithmic computation
29/52
30. Maxeler Hardware
• CPUs plus DFEs: Intel Xeon CPU cores and up to 4 DFEs with 192GB of RAM
• DFEs shared over Infiniband: up to 8 DFEs with 384GB of RAM and dynamic allocation of DFEs to CPU servers
• Low latency connectivity: Intel Xeon CPUs and 1-2 DFEs with up to six 10Gbit Ethernet connections
• MaxWorkstation: desktop development system
• MaxCloud: on-demand scalable accelerated compute resource, hosted in London
30/52
31. Major Classes of Algorithms,
from the Computational Perspective
1. Coarse grained, stateful: Business
– CPU requires DFE for minutes or hours
2. Fine grained, transactional with shared database: DM
– CPU utilizes DFE for ms to s
– Many short computations, accessing common database data
3. Fine grained, stateless transactional: Science (FF)
– CPU requires DFE for ms to s
– Many short computations
31/52
32. Coarse Grained: Modeling
• Long runtime, but memory requirements
change dramatically based on modelled frequency
• Number of DFEs allocated to a CPU process
can be easily varied to increase available memory
• Streaming compression
• Boundary data exchanged over chassis MaxRing
[Chart 1: timesteps (thousand), domain points (billion), and total computed points (trillion) vs. peak frequency (0-80 Hz)]
[Chart 2: equivalent CPU cores vs. number of MAX2 cards (1, 4, 8), for 15Hz, 30Hz, 45Hz, and 70Hz peak frequency]
32/52
33. Fine Grained, Shared Data: Monitoring
• DFE DRAM contains the database to be searched
• CPUs issue transactions find(x, db)
• Complex search function
– Text search against documents
– Shortest distance to coordinate (multi-dimensional)
– Smith-Waterman sequence alignment for genomes
• Any CPU runs on any DFE
that has been loaded with the database
– MaxelerOS may add or remove DFEs
from the processing group to balance system demands
– New DFEs must be loaded with the search DB before use
33/52
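The transaction pattern above can be mimicked in plain Python (a toy analogy only: the Dfe class and the load/find method names are assumptions for illustration, not the Maxeler API):

```python
# Toy model of the shared-database pattern: a transaction may run on any
# worker ("DFE") that has already been loaded with the search database.

class Dfe:
    def __init__(self):
        self.db = None          # database must be loaded before use

    def load(self, db):
        self.db = db

    def find(self, x):
        assert self.db is not None, "load the DB before issuing transactions"
        # stand-in for a complex search function (text search, nearest
        # coordinate, Smith-Waterman alignment, ...)
        return [item for item in self.db if x in item]

group = [Dfe(), Dfe()]
for dfe in group:               # new DFEs are loaded before joining the group
    dfe.load(["gattaca", "tagact", "cat"])

# any CPU thread may now issue find(x, db) against any DFE in the group
print(group[0].find("cat"))     # -> ['cat']
```

The point of the pattern is that load() is the expensive, rare operation, while find() transactions are short (ms to s) and can be balanced freely across whichever DFEs currently hold the database.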
34. Fine Grained, Stateless: The BSOP Control
• Analyse > 1,000,000 scenarios
• Many CPU processes run on many DFEs
– Each transaction executes on any DFE in the assigned group atomically
• ~50x MPC-X vs. multi-core x86 node
[Diagram: CPUs loop over instruments and stream market and instruments data to a group of DFEs; each DFE runs a random number generator and sampling of underliers, and prices instruments using Black-Scholes; tail analysis runs on the CPUs, producing the instrument values]
34/52
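BSOP here stands for Black-Scholes option pricing; for reference, the closed-form Black-Scholes call price that each per-instrument evaluation reduces to (the standard textbook formula, not Maxeler code):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """European call under Black-Scholes:
    S = spot, K = strike, T = years to expiry,
    r = risk-free rate, sigma = volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For example, bs_call(100, 100, 1.0, 0.05, 0.2) ≈ 10.45. On the DFE side the same pricing is done per scenario on sampled underliers, which is why the workload is fine grained and stateless.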
37. The CRS Results
• Performance of one MAX2 card vs. 1 CPU core
• Land case (8 params), speedup of 230x
• Marine case (6 params), speedup of 190x
[Figure: CPU Coherency vs. MAX2 Coherency]
37/52
38. Seismic Imaging
• Running on MaxNode servers
– 8 parallel compute pipelines per chip
– 150MHz => low power consumption!
– 30x faster than microprocessors
An Implementation of the Acoustic Wave Equation on FPGAs
T. Nemeth†, J. Stefani†, W. Liu†, R. Dimond‡, O. Pell‡, R. Ergas§
†Chevron, ‡Maxeler, §Formerly Chevron, SEG 2008
38/52
41. P. Marchetti et al., 2010
Trace Stacking: Speed-up 217
• DM for Monitoring and Control in Seismic processing
• Velocity independent / data driven method
to obtain a stack of traces, based on 8 parameters
– Search for every sample of each output trace
t_hyp^2 = ( t0 + (2/v0) w^T m )^2
+ (2 t0 / v0) ( m^T H_zy K_N H_zy^T m + h^T H_zy K_NIP H_zy^T h )
2 parameters (emergence angle & azimuth)
3 Normal Wave front parameters (K_N,11; K_N,12; K_N,22)
3 NIP Wave front parameters (K_NIP,11; K_NIP,12; K_NIP,22)
41/52
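Numerically, the stacking operator is a small quadratic form per output sample; a NumPy sketch with symbols named after the formula above (w is the 2-vector tied to emergence angle & azimuth, m and h the midpoint and half-offset 2-vectors, K_N and K_NIP the 2x2 wavefront curvature matrices):

```python
import numpy as np

def t_hyp_sq(t0, v0, w, m, h, H_zy, K_N, K_NIP):
    """Squared CRS stacking traveltime for midpoint displacement m and
    half-offset h (2-vectors); H_zy is the 2x2 transform matrix."""
    lin = t0 + (2.0 / v0) * (w @ m)
    quad = m @ (H_zy @ K_N @ H_zy.T) @ m + h @ (H_zy @ K_NIP @ H_zy.T) @ h
    return lin**2 + (2.0 * t0 / v0) * quad
```

At zero offsets (m = h = 0) the operator collapses to t0^2, as expected; the DM search evaluates it over the 8-parameter space for every sample of every output trace, which is the massively parallel inner loop that the MAX2 cards accelerate.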
45. Conclusion: Nota Bene
This is about algorithmic changes,
to maximize
the algorithm to architecture match:
Data choreography,
process modifications,
and
decision precision.
The winning paradigm
of Big Data ExaScale?
45/52
46. The TriPeak
Siena
+ BSC
+ Imperial College
+ Maxeler
+ Belgrade
46/52
47. The TriPeak
MontBlanc = A ManyCore (NVidia) + a MultiCore (ARM)
Maxeler = A FineGrain DataFlow (FPGA)
How about a happy marriage?
MontBlanc (OmpSs) and Maxeler (an accelerator)
In each happy marriage,
it is known who does what :)
The Big Data DM algorithms:
What part goes to MontBlanc and what to Maxeler?
47/52
48. Core of the Symbiotic Success
An intelligent DM algorithmic scheduler,
partially implemented for compile time,
and partially for run time.
At compile time:
Checking what part of code fits where
(MontBlanc or Maxeler): LoC 1M vs 2K vs 20K
At run time:
Rechecking the compile time decision,
based on the current data values.
48/52
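One way to picture such a scheduler (all names and thresholds below are hypothetical; the slide does not specify an implementation): compile time tags each code part with a preferred target, and run time overrides the tag when the actual data values would not amortize the dataflow setup cost.

```python
# Hypothetical sketch of a two-stage algorithmic scheduler.

def compile_time_target(kernel_loc, streams_well):
    # static decision: small, streamable kernels -> Maxeler (dataflow);
    # everything else -> MontBlanc (control flow)
    return "Maxeler" if streams_well and kernel_loc < 20_000 else "MontBlanc"

def run_time_target(static_target, n_items, amortize_threshold=1_000_000):
    # dynamic recheck: a dataflow run only pays off when the data volume
    # amortizes the DFE configuration overhead
    if static_target == "Maxeler" and n_items < amortize_threshold:
        return "MontBlanc"
    return static_target

print(run_time_target(compile_time_target(2_000, True), 10**8))  # -> Maxeler
print(run_time_target(compile_time_target(2_000, True), 10**3))  # -> MontBlanc
```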
49. Maxeler: Teaching (Google: prof vm)
VLSI, PowerPoints, Maxeler: TEACHING,
Maxeler Veljko Explanations, August 2012
Maxeler Veljko Anegdotic,
Maxeler Oskar Talk, August 2012
Maxeler Forbes Article
Flyer by JP Morgan
Flyer by Maxeler HPC
Tutorial Slides by Sasha and Veljko: Practice (Current Update)
Paper, unconditionally accepted for Advances in Computers by Elsevier
Paper, unconditionally accepted for Communications of the ACM
Tutorial Slides by Oskar: Theory (7 parts)
Slides by Jacob, New York
Slides by Jacob, Alabama
Slides by Sasha: Practice (Current Update)
Maxeler in Meteorology
Maxeler in Mathematics
Examples generated in Belgrade and Worldwide
THE COURSE ALSO INCLUDES DARPA METHODOLOGY FOR MICROPROCESSOR DESIGN,
with an example
49/52
50. Maxeler: Research (Google: good method)
Structure of a Typical Research Paper: Scenario #1
[Comparison of Platforms for One Algorithm]
Curve A: MultiCore of approximately the same PurchasePrice
Curve B: ManyCore of approximately the same PurchasePrice
Curve C: Maxeler after a direct algorithm migration
Curve D: Maxeler after algorithmic improvements
Curve E: Maxeler after data choreography
Curve F: Maxeler after precision modifications
Structure of a Typical Research Paper: Scenario #2
[Ranking of Algorithms for One Application]
CurveSet A: Comparison of Algorithms on a MultiCore
CurveSet B: Comparison of Algorithms on a ManyCore
CurveSet C: Comparison on Maxeler, after a direct algorithm migration
CurveSet D: Comparison on Maxeler, after algorithmic improvements
CurveSet E: Comparison on Maxeler, after data choreography
CurveSet F: Comparison on Maxeler, after precision modifications
50/52
51. Maxeler: Topics (Google: HiPeac Berlin)
SRB (TR):
KG: Blood Flow
NS: Combinatorial Math
BG1: MiSANU Math
BG2: Meteos Meteorology
BG3: Physics (Gross Pitaevskii 3D real)
BG4: Physics (Gross Pitaevskii 3D imaginary)
(reusability with MPI/OpenMP vs. effort to accelerate)
FP7 (Call 11):
University of Siena, Italy,
ICL, UK,
BSC, Spain,
QPLAN, Greece,
ETF, Serbia,
IJS, Slovenia, …
51/52