- 1. GPars Workshop
Russel Winder
email: russel@winder.org.uk
xmpp: russel@winder.org.uk
twitter: @russel_winder
http://www.russel.org.uk
Copyright © 2013 Russel Winder 1
- 2. Aims, Goals and Objectives
● Gain practical experience of the various models of
concurrent and parallel behaviour available in
GPars; actors, dataflow, data parallelism, etc.
● Have some fun.
- 3. Subsidiary Aims, Goals and Objectives
● Show that shared mutable memory multi-threading
should return to being an operating systems
development technique and not continue to be
pushed as an applications programming technique
– remember…
- 4. …people should tremble
in fear at the prospect of using
shared mutable memory
multi-threading.
- 5. Structure
Introduction.
Actors.
Dataflow.
Data Parallelism.
Analysis.
Closing.
- 6. Protocol
Short presentation.
(Short presentation → Practical period → Interaction) × 3
Short presentation.
Questions or comments welcome at any time.
- 9. It is no longer contentious that
The Multicore Revolution
is well underway.
- 10. Quad core laptops and phones.
Eight and twelve core workstations.
Servers with “zillions” of cores.
- 12. Software technology in use is now lagging
hardware technology by decades.
- 13. Operating systems manage cores
with kernel threads.
Operating systems are fundamentally shared
mutable memory multi-threaded systems.
Operating systems rightly use all the lock,
semaphore, monitor, etc. technologies.
- 14. Computationally intensive systems or
subsystems definitely have to be parallel.
Other systems likely use concurrency
but not parallelism.
- 15. Concurrency
Execution as coroutines:
sequences of code give up execution,
passing control to another coroutine.
- 16. More Concurrency
Concurrency is a technique founded in a
uniprocessor view of the world.
Time-division multiplexing.
- 17. Parallelism
Having multiple executions active
at the same time.
- 18. Concurrency is a tool for structuring execution where a
single processor is used by multiple computations.
Parallelism is about making a computation complete
faster than using a single processor.
- 28. The whole purpose of a lock is to
prevent parallelism.
- 31. Locks are needed only if
there is mutable shared state.
- 35. It's all about controlling
concurrency and parallelism
with tools that applications
programmers find usable.
- 36. Shared mutable memory multi-threading
is an operating system technique.
- 37. Applications and tools programmers
need computational models with
integrated synchronization.
- 39. It's all easier if processes
are single threaded.
- 40. Actors
Independent processes
communicating via
asynchronous exchange
of messages.
Dataflow
Operators connected by
channels with activity
triggered by arrival of
data on the channels.
Data Parallelism
Transform a sequence to
another sequence where all
individual actions happen
at the same time.
- 41. Agents
A wrapper for some
shared mutable state.
Active Objects
An object that is actually
an actor but looks like a
full service object.
Fork/Join
A toolkit for tree structured
concurrency and parallelism.
Software Transactional Memory
Wrappers for mutable values that use transactions
rather than locks.
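The agent model above, a wrapper serializing all access to shared mutable state, can be sketched in a few lines. GPars provides this as a Groovy `Agent` class; the `Agent` below is an illustrative language-neutral sketch in Python, not the GPars API. All reads and writes are funnelled through one internal thread, so no locks are needed:

```python
import threading
import queue

class Agent:
    """Wraps mutable state; every update runs on one internal thread."""
    def __init__(self, state):
        self._state = state
        self._inbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            fn, reply = self._inbox.get()
            self._state = fn(self._state)   # only this thread touches state
            if reply is not None:
                reply.put(self._state)

    def send(self, fn):
        """Asynchronously apply fn to the state."""
        self._inbox.put((fn, None))

    def get(self):
        """Read the state via the same queue, so all prior sends are seen."""
        reply = queue.Queue()
        self._inbox.put((lambda s: s, reply))
        return reply.get()

counter = Agent(0)
for _ in range(1000):
    counter.send(lambda n: n + 1)
print(counter.get())  # 1000: updates are serialized, no locks anywhere
```

Because the inbox is a FIFO consumed by a single thread, the state is never touched by two threads at once, which is the whole point of the agent model.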
- 43. Actors
Independent processes
communicating via
asynchronous exchange
of messages.
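That definition can be made concrete in very little code. GPars supplies actors via `groovyx.gpars.actor` in Groovy; the `Actor` class below is an illustrative standard-library Python sketch of the same idea, not that API. Each actor owns a mailbox and a thread of control, and the only way to interact with it is an asynchronous `send`:

```python
import threading
import queue

class Actor:
    """An independent process (here: a thread) with its own mailbox."""
    def __init__(self, behaviour):
        self._behaviour = behaviour
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:          # poison pill terminates the actor
                return
            self._behaviour(message)

    def send(self, message):
        """Asynchronous: enqueue the message and return immediately."""
        self.mailbox.put(message)

    def stop(self):
        self.mailbox.put(None)
        self._thread.join()

results = []
echo = Actor(lambda msg: results.append(msg.upper()))
echo.send("hello")
echo.send("actors")
echo.stop()                              # waits for the mailbox to drain
print(results)  # ['HELLO', 'ACTORS']
```

No state is shared: the caller only ever touches the mailbox, and the actor's behaviour runs entirely on the actor's own thread.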
- 46. The Sleeping Barber Problem
● The barber's shop has a single cutting chair and a row of waiting seats.
● The barber sleeps in the cutting chair unless trimming a customer.
● Customers arrive at the shop at intervals.
● If the barber is asleep, the customer wakes the barber, takes the cutting chair and gets a trim.
● If the barber is cutting, a new customer checks to see if there is a free waiting seat.
● If there is, join the queue to be trimmed.
● If there isn't, leave disgruntled.
Problem originally due to Edsger Dijkstra.
- 47. [Diagram: the barber's shop, showing the cutting chair and the row of waiting chairs.]
A new customer enters the shop,
checks to see if they can go straight
to the cutting chair; if not, can they
take a waiting chair; if not, leave.
- 48. The Wikipedia article presents the classic operating
systems approach using locks and semaphores.
http://en.wikipedia.org/wiki/Sleeping_barber_problem
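For contrast with that lock-and-semaphore version, here is a simplified message-passing sketch in Python (illustrative only, not the code from the workshop's Git repository): a bounded blocking queue plays both roles, the row of waiting seats and the barber's sleep/wake signalling, so no explicit locks or semaphores appear.

```python
import queue
import threading
import time

WAITING_SEATS = 3
shop = queue.Queue(maxsize=WAITING_SEATS)  # the row of waiting seats
trimmed, turned_away = [], []

def barber():
    while True:
        customer = shop.get()         # barber "sleeps" until someone arrives
        if customer is None:          # closing-time message
            return
        time.sleep(0.01)              # trim the customer
        trimmed.append(customer)

barber_thread = threading.Thread(target=barber)
barber_thread.start()

for customer in range(10):            # customers arrive at intervals
    try:
        shop.put_nowait(customer)     # take a waiting seat
    except queue.Full:
        turned_away.append(customer)  # no free seat: leave disgruntled
    time.sleep(0.004)

shop.put(None)                        # closing time
barber_thread.join()
print(len(trimmed) + len(turned_away))  # 10: every customer accounted for
```

Which customers are turned away depends on timing, but the invariant holds: each customer is either trimmed or leaves disgruntled, and the queue's blocking semantics replace all the explicit synchronization.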
- 51. Dataflow
Operators connected by
channels with activity
triggered by arrival of
data on the channels.
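The essence of dataflow is the single-assignment variable: written once, and any reader blocks until the value has arrived. GPars provides `DataflowVariable` for this in Groovy; the sketch below (illustrative, not the GPars API) abuses Python's `Future` as such a variable, so the adder's activity is triggered purely by the arrival of data:

```python
from concurrent.futures import Future
import threading

# Three dataflow variables: bound once, reads block until bound.
x, y, z = Future(), Future(), Future()

def adder():
    # Fires when data arrives on both input channels.
    z.set_result(x.result() + y.result())

threading.Thread(target=adder).start()
x.set_result(40)     # values may arrive in any order, from any thread
y.set_result(2)
print(z.result())    # 42
```

There is no explicit synchronization in the program text: the ordering falls out of the data dependencies, which is exactly the dataflow model's appeal.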
- 53. If you want the code, clone the Git repository:
http://www.russel.org.uk/Git/SleepingBarber.git
- 54. Or if you just want to browse:
http://www.russel.org.uk/gitweb
- 56. Data Parallelism
Transform a sequence to
another sequence where all
individual actions happen
at the same time.
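A sketch of that shape: one pure function applied to every element of a sequence, conceptually all at once. In GPars this is, for example, `collectParallel` inside `GParsPool.withPool`; the Python below only illustrates the structure (and CPython's GIL limits true CPU parallelism here, so treat it as a shape, not a benchmark):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# Parallel map: each element is transformed independently,
# and the result sequence preserves the input order.
with ThreadPoolExecutor() as pool:
    result = list(pool.map(square, range(8)))
print(result)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because `square` touches no shared state, no synchronization is needed beyond the map itself.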
- 59. What is the Value of π?
Easy, it's known exactly.
It's π.
Obviously.
- 60. It's simples.
Aleksandr Orlov, 2009
- 61. Approximating π
● What is its value represented as a floating point
number?
● We can only obtain an approximation.
● There is a plethora of possible algorithms to choose from; a
popular one is to employ the following integral
equation:
π/4 = ∫₀¹ 1/(1 + x²) dx
- 62. One Possible Algorithm
● Use quadrature to estimate the value of the integral
– which is the area under the curve.
π ≈ (4/n) Σ_{i=1..n} 1/(1 + ((i − 0.5)/n)²)
Embarrassingly parallel.
With n = 3 not much to do,
but potentially lots of error.
Use n = 10⁷ or n = 10⁹?
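The quadrature formula can be checked directly. A sequential Python sketch (the parallel decomposition comes on the next slides):

```python
def pi_estimate(n):
    """Midpoint-rule quadrature: pi ~= (4/n) * sum(1 / (1 + ((i-0.5)/n)**2))."""
    delta = 1.0 / n
    return 4.0 * delta * sum(
        1.0 / (1.0 + ((i - 0.5) * delta) ** 2) for i in range(1, n + 1)
    )

print(pi_estimate(3))      # ~3.15: only three slices, lots of error
print(pi_estimate(10**6))  # very close to 3.141592653589793
```

The error of the midpoint rule shrinks as O(1/n²), which is why the slide suggests n of 10⁷ or 10⁹, and why there is real work worth parallelizing.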
- 63. Because addition is commutative and
associative, the expression can be
decomposed into sums of partial sums.
- 64. a+b+c+d+e+f
=
(a+b)+(c+d)+(e+f)
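This decomposition is exactly what makes the quadrature embarrassingly parallel. A Python sketch of the idea: chunk the sequence, sum each chunk independently (and hence potentially in parallel), then combine the partial sums:

```python
from concurrent.futures import ThreadPoolExecutor

values = list(range(1, 101))     # a + b + c + … over 100 terms

# (a+b)+(c+d)+…: since + is associative and commutative, the chunks
# can be summed in any order, on any thread, then combined.
chunks = [values[i:i + 25] for i in range(0, len(values), 25)]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(sum, chunks))
total = sum(partials)
print(partials, total)  # [325, 950, 1575, 2200] 5050
```

The same chunk-and-combine structure underlies fork/join and the parallel reduce operations in GPars.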
- 67. If you want the code, clone the Git repository:
http://www.russel.org.uk/Git/Pi_Quadrature.git
- 68. Or if you just want to browse:
http://www.russel.org.uk/gitweb
- 73. Actors, dataflow, and data parallelism
(and CSP, agents, fork/join,…)
are the future of application structure.
- 84. Don't be a squirrel.
- 85. Do not use explicit locking algorithms.
- 86. Use computational architectures that promote
parallelism and hence performance
improvement:
Actors
Dataflow
Data Parallelism
- 87. Use GPars.
Go on, you know you want to…
- 90. GPars Workshop
Russel Winder
email: russel@winder.org.uk
xmpp: russel@winder.org.uk
twitter: @russel_winder
website: http://www.russel.org.uk