Parallel Random Generator
Manny Ko
Principal Engineer
Activision
Outline
● Serial RNG
  ● Background
  ● LCG, LFG, crypto-hash
● Parallel RNG
  ● Leapfrog, splitting, crypto-hash
RNG - desiderata
● White noise like
● Repeatable for any # of cores
● Fast
● Small storage
RNG Quality
● DIEHARD
● Spectral test
● SmallCrush
● BigCrush
(figure: GPUBBS)
Power Spectrum
(figure panels: power spectrum density, radial mean, radial variance)
Serial RNG: LCG
● Linear congruential generator (LCG)
● X_i = (a * X_{i-1} + c) mod M
● a, c and M must be chosen carefully!
● Never choose M = 2^31! M should be a prime
● Park & Miller: a = 16807, m = 2147483647 = 2^31 - 1. m is a Mersenne prime!
● Most likely the one in your C runtime
LCG: the good and bad
● Good:
● Simple and efficient even if we use mod
● Single word of state
● Bad:
● Short period – at most m
● Low bits are correlated, especially if m = 2^n
● Purely serial
LCG - bad
● X_{k+1} = (3 * X_k + 4) mod 8
● {1, 7, 1, 7, …}
Mersenne Prime modulo
● IDIV can be 40–80 cycles for 32b/32b
● k mod p where p = 2^s - 1:
● i = (k & p) + (k >> s);
● return i >= p ? i - p : i;
Lagged-Fibonacci Generator
● X_i = X_{i-p} * X_{i-q}; p and q are the lags
● * is +, -, or * mod M (or XOR)
● ALFG: X_n = X_{n-j} + X_{n-k} (mod 2^m)
● * gives the best quality
● Period = (2^p - 1) * 2^{b-3}; M = 2^b
LFG
● The good:
● Very efficient: 2 ops + a power-of-2 mod
● Much longer period than LCG
● Works directly in floats
● Higher quality than LCG
● ALFG can skip ahead
LFG – the bad
● Need to store max(p, q) floats
● Purely sequential – the multiplicative LFG can't jump ahead
Mersenne Twister
● Gold standard?
● Large state (624 ints)
● Lots of flops
● Hard to leapfrog
● Limited parallelism
(figure: power spectrum)
● End of Basic RNG Overview
Parallel RNG
● Maintain the RNG’s quality
● Same result regardless of the # of cores
● Minimal state, especially for the GPU
● Minimal correlation among the streams
Random Tree
• 2 LCGs with different a
• L is used to generate a seed for R
• No need to know how many generators or the # of values per thread
Leapfrog with 3 cores
• Each thread leaps ahead by N using L
• Each thread uses its own R to generate its own sequence
• N = cores * seq_per_core
Leapfrog
● Basic LCG without c:
● L_{k+1} = a * L_k mod m
● R_{k+1} = a^n * R_k mod m
● Full LCG: A = a^n and C = c * (a^n - 1)/(a - 1) – each core jumps ahead by n (# of cores)
Leapfrog with 3 cores
• The sequences will not overlap
• The final sequence is the same as the serial code
Leapfrog – the good
● Same sequence as serial code
● Limited choice of RNG (e.g. no MLFG)
● No need to fix the # of random values used per core (but need to fix 'n')
Leapfrog – the bad
● a^n no longer has the good qualities of a
● A power-of-2 n produces correlated sub-sequences
● Need to fix 'n' – the # of generators/sequences
● The period of the original RNG is shortened by a factor of n, and a 32-bit LCG has a short period to start with.
Sequence Splitting
• If we know the # of values per thread, n:
• L_{k+1} = a^n * L_k mod m
• R_{k+1} = a * R_k mod m
• Each thread's sequence is a contiguous subset of the serial sequence
Leapfrog and Splitting
● Only guarantees the sequences are non-overlapping; says nothing about their quality
● Not invariant to the degree of parallelism
● Results change when the # of cores changes
● Serial and parallel code do not match
Lagged-Fibonacci Leapfrog
● LFG has a very long period
● Period = (2^p - 1) * 2^{b-3}; M = 2^b
● M can be a power of two!
● Much better quality than LCG
● No leapfrog for the best variant – '*'
● Luckily the ALFG supports leapfrogging
Issues with Leapfrog & Splitting
● LCG's period gets even shorter
● Questionable quality
● ALFG is much better but has to store more state – for the 'lag'
Crypto Hash
● MD5
● TEA: Tiny Encryption Algorithm
Core Idea
1. Input is trivially prepared in parallel, e.g. a linear ramp
2. Each input value is fed into the hash, independently and in parallel
3. Output is white noise
(diagram: input → hash → output)
TEA
● A Feistel cipher
● Input is split into L and R
● 128-bit key
● F: shifts and XORs or adds
TEA (figure)
Magic ‘delta’
● delta = (√5 - 1) * 2^31 = 0x9E3779B9
● Avalanche in 6 cycles (often in 4)
● * mixes better than ^ but makes TEA twice as slow
Applications
● Fractal terrain (vertex shader)
● Texture tiling (fragment shader)
SPRNG
● Good package by Michael Mascagni
● http://www.sprng.org/
References
● [Mascagni 99] Some Methods for Parallel Pseudorandom Number Generation, 1999.
● [Park & Miller 88] Random Number Generators: Good Ones Are Hard to Find, CACM, 1988.
● [Pryor 94] Implementation of a Portable and Reproducible Parallel Pseudorandom Number Generator, SC, 1994.
● [Tzeng & Li 08] Parallel White Noise Generation on a GPU via Cryptographic Hash, I3D, 2008.
● [Wheeler 95] TEA, a Tiny Encryption Algorithm, 1995.
Takeaways
● Look beyond LCG
● ALFG is worth a closer look
● Crypto-based hashes are the most promising – especially TEA.
