Just Keep Sending The Messages
- 1. Just Keep Sending
the Messages
Dr Russel Winder
It’z Interactive Ltd
russel@itzinteractive.com
@russel_winder
Copyright © 2011 Russel Winder 1
- 2. Just Keep Sending
the Messages
Dr Russel Winder
Independent Consultant
russel@russel.org.uk
@russel_winder
- 3. Just Keep Sending
the Messages
Prof Russel Winder
Wolverhampton University
russel@russel.org.uk
@russel_winder
- 4. Aims and Objectives
● Investigate why message passing architectures are the
software architectures of the future.
● Have some fun (whilst hopefully learning something).
- 6. Structure of the Session
● Introduction.
● Do stuff.
● Exit stage (left|right).
There will be significant dynamic
binding of the session.
- 7. Protocol
● Some slides, to kick things off.
● Some programs to really demonstrate things.
● NB Interaction between audience and presenter is
entirely mandatory.
We reserve the right to (shelve|stash) for
later any interaction that appears to go
on longer than seems appropriate.
Note the use of Git, Mercurial, Bazaar jargon
in a (gratuitous) attempt to make a connection
with other sessions at ACCU 2011.
- 8. BEGIN
● History is important:
– To know today, we look to yesterday. To know tomorrow,
we see today as yesterday.
http://wiki.answers.com/Q/Why_history_is_important
- 9. [Timeline diagram: one program at a time; multi-tasking; processes / IPC; processes / RPC; processes / threads; threads / shared memory]
- 10. Historical Summary
● Shared-memory multi-threading for applications
programming is a total aberration:
– The consequences of the operating system's handling of
concurrency have been imposed on all programmers.
● Cooperating processes is where applications
development sanity is:
– Operating system interprocess communication was slow,
hence threads, but this led directly to the shared-memory,
multi-threading quagmire.
– Erlang has shown that processes and message passing can do
the job properly even after the “mainstream” had rejected
process-based working.
- 11. Concurrency
vs.
Parallelism
- 12. Concurrency
● Running multiple tasks using time-division
multiplexing on a single processor.
- 13. Parallelism
● Running multiple tasks concurrently.
Note that concurrently here has its everyday English
meaning, whereas the previous slide gave the computing
jargon meaning of the word.
- 14. Concurrency . . .
● . . . is a technique required for operating systems.
● . . . can sometimes be required for applications but
not as much as might be thought.
● . . . is an implementation detail (sort of).
● Applications can use alternative models, for example
event-driven systems.
– Abstract over the control flow rather than manage it with
locks, semaphores, monitors, etc.
- 15. Parallelism . . .
● . . . is about making an algorithm execute more
quickly in the presence of multiple processors.
● . . . is an architectural and design issue for
applications.
- 16. Concurrency
● The problem with threads is shared memory:
– Without writeable shared memory there is no need for
synchronization.
● Why impose a 1960s, operating system driven view of
concurrency management on applications
programmers?
- 17. Shared Memory is . . .
● . . . anathema to parallelism.
● . . . anathema to concurrency.
● . . . anathema to modern applications programming.
● . . . anathema.
- 18. Solution . . .
● . . . do not use mutable shared memory.
Note the caveat here that opens the door
to using shared immutable data.
- 19. Operating Systems . . .
● . . . are inherently a concurrency problem.
● Applications, on the other hand, are not: they
should be using higher-level abstractions.
- 20. Object
Orientation
- 21. Original Model
● Object-based:
– A set of (small) closed namespaces, with methods,
exchanging messages requesting services.
● Object-oriented:
– Object-based plus classes and inheritance.
- 22. Implementations
● Smalltalk realized the object-oriented model correctly.
● C++ did not: message passing replaced by function
call.
C++ destroyed correct appreciation of the
object-oriented model of computation.
- 23. Non-contentious (sort of) Syllogism
● Object-orientation is about objects passing messages to
each other: object-orientation is not about function call
in a shared memory context.
● C++ is a programming language based on function call in
a shared memory context: C++ does not have objects
passing messages to each other.
● Therefore, C++ is not an object-oriented programming
language.
- 25. Object-orientation
● Java follows C++:
– Function call replaces message passing.
● Java beats C++ in one respect: it had threads 15 years
before C++.
● Shared memory multi-threading requires all the
locks, semaphores, monitors, etc. and Java has it all.
- 26. Partial Solution for JVM
● Use java.util.concurrent:
– Provides many high-level tools.
– Has many low-level tools.
If you are using the low-level tools then you are lost
to the cause of quality application programming.
Use jsr166y (Fork/Join) and extra166y (ParallelArray)
in preference to the facilities in JDK6.
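The high-level route through java.util.concurrent can be sketched with executors and futures: tasks return their partial results, and no explicit locks, semaphores, or monitors appear in user code. The class and method names below are invented for this illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sum 1..n across a fixed pool of tasks. Futures carry the partial
// results back; the executor handles all the synchronization.
public class SumWithExecutor {
    static long sum(final int n, final int tasks) {
        final ExecutorService pool = Executors.newFixedThreadPool(tasks);
        try {
            final List<Future<Long>> partials = new ArrayList<Future<Long>>();
            final int slice = n / tasks;
            for (int t = 0; t < tasks; ++t) {
                final int start = t * slice + 1;
                final int end = (t == tasks - 1) ? n : (t + 1) * slice;
                partials.add(pool.submit(new Callable<Long>() {
                    public Long call() {
                        long s = 0;
                        for (int i = start; i <= end; ++i) { s += i; }
                        return s;
                    }
                }));
            }
            long total = 0;
            for (final Future<Long> partial : partials) { total += partial.get(); }
            return total;
        } catch (final Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
    public static void main(final String[] args) {
        System.out.println(sum(100, 4)); // 5050
    }
}
```

Fork/Join and ParallelArray take this a step further by handling the splitting into tasks as well.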
- 27. High Performance
Computing
(HPC)
aka
Real Computing
- 28. Parallelism Rules
● HPC has been doing parallelism for 40+ years.
● Combinations of architectures:
– Vector processors
– Multi-bus multiprocessors
– Clusters
- 29. HPC Extant Solution
● Message Passing Interface (MPI)
MPI addresses the problem of SPMD or MIMD
parallelism in the context of multiple, possibly
multicore, systems.
- 30. HPC Proposed Solution
● Partitioned Global Address Space (PGAS)
● Champions:
– Chapel
– X10
– Fortress
Structure the global address space to allow for
multiple processors sharing a single memory and/or
to deal with distributed memory systems.
- 31. Return to a Better Way
- 32. Real Solutions?
● Actor Model
● Dataflow architectures
● CSP (Communicating Sequential Processes)
Return to a process and message passing view of applications.
Nothing wrong with threads as a tool.
The problem is using shared memory.
- 33. Actor Model
● A program is a collection of actors that send
messages to each other.
● An actor is a process with a “mailbox” for receiving
messages.
● A mailbox is a (thread-safe) queue of messages.
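The three bullets above can be sketched in a few lines of plain Java: a thread draining a thread-safe mailbox, reacting to one message at a time, with all state private to the actor. The names here are invented for the sketch; GPars, Scala, and Akka provide the real thing.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal actor: a process (here a thread) with a thread-safe mailbox.
// No locks appear in user code; the queue is the only synchronized thing.
public class ActorSketch {
    private static final Object STOP = new Object();

    static class AccumulatorActor {
        private final BlockingQueue<Object> mailbox = new LinkedBlockingQueue<Object>();
        private final Thread process;
        private long sum = 0; // touched only by the actor's own thread

        AccumulatorActor() {
            process = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            final Object message = mailbox.take(); // block awaiting a message
                            if (message == STOP) { return; }
                            sum += (Integer) message; // react to the message
                        }
                    } catch (final InterruptedException e) { /* terminate */ }
                }
            });
            process.start();
        }
        void send(final Object message) { mailbox.offer(message); }
        long stopAndGetSum() {
            mailbox.offer(STOP);
            try { process.join(); } catch (final InterruptedException e) { }
            return sum;
        }
    }

    static long sumViaActor(final int n) {
        final AccumulatorActor actor = new AccumulatorActor();
        for (int i = 1; i <= n; ++i) { actor.send(i); }
        return actor.stopAndGetSum();
    }
    public static void main(final String[] args) {
        System.out.println(sumViaActor(10)); // 55
    }
}
```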
- 34. Dataflow Model
● A program is a graph of operators with some data
sources and some data sinks.
● An operator is an event-triggered computation with
some inputs and some output.
● An operator triggers for a certain state of its inputs.
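The triggering behaviour can be sketched with a single-assignment dataflow variable: readers block until a value is bound, so an operator fires exactly when its inputs are ready. This is a sketch of the idea only, not the actual GPars DataflowVariable API.

```java
import java.util.concurrent.CountDownLatch;

// A single-assignment variable: bind() releases all blocked readers.
public class DataflowSketch {
    static class DataflowVariable<T> {
        private final CountDownLatch bound = new CountDownLatch(1);
        private volatile T value;
        void bind(final T v) {
            value = v;
            bound.countDown(); // release all readers
        }
        T get() {
            try { bound.await(); } catch (final InterruptedException e) { }
            return value;
        }
    }

    // Wire up z = x + y as an operator triggered by its two inputs.
    static int demo() {
        final DataflowVariable<Integer> x = new DataflowVariable<Integer>();
        final DataflowVariable<Integer> y = new DataflowVariable<Integer>();
        final DataflowVariable<Integer> z = new DataflowVariable<Integer>();
        new Thread(new Runnable() {
            public void run() { z.bind(x.get() + y.get()); }
        }).start();
        x.bind(40); // the order of binding does not matter
        y.bind(2);
        return z.get();
    }
    public static void main(final String[] args) {
        System.out.println(demo()); // 42
    }
}
```

Because an operator runs only when its inputs exist, scheduling falls out of the data dependencies rather than being managed with locks.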
- 35. CSP is . . .
● . . . mathematics:
VMS = μX . ( in2p → (chocolate → X | out1p → toffee → X)
           | in1p → (toffee → X | in1p → (chocolate → X | in1p → STOP_αX)) )
● . . . but not scary since the mathematics can be
hidden in an API, so it just becomes a programming
tool.
- 36. CSP
● A program is a graph of processes, each running a
sequential computation, taking input from input
channels and writing output to output channels.
● Data exchange down a channel realizes a rendezvous.
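On the JVM the rendezvous is exactly what java.util.concurrent's SynchronousQueue provides: it has no buffer, so a put blocks until a matching take. A sketch (the class and method names are ours), with two sequential processes sharing no mutable state, only the channel:

```java
import java.util.concurrent.SynchronousQueue;

// An unbuffered channel: each put()/take() pair is a rendezvous.
public class CspSketch {
    static int sumOfSquares(final int n) {
        final SynchronousQueue<Integer> channel = new SynchronousQueue<Integer>();
        final Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 1; i <= n; ++i) { channel.put(i * i); } // write end
                } catch (final InterruptedException e) { }
            }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 1; i <= n; ++i) { sum += channel.take(); } // read end
            producer.join();
        } catch (final InterruptedException e) { }
        return sum;
    }
    public static void main(final String[] args) {
        System.out.println(sumOfSquares(4)); // 1 + 4 + 9 + 16 = 30
    }
}
```

JCSP and GPars wrap channels like this with proper CSP process and alternation abstractions.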
- 37. Commentary
● Actors are non-deterministic, with chaotic
communications and hence complex.
● Dataflow and CSP have much greater determinism
with fixed communications channels.
Note the use of the term complex in a
(gratuitous) attempt to make a connection
with other sessions at ACCU 2011.
- 38. Implementations
● Actor Model:
– JVM: GPars, Scala, Akka, Fantom
– Native: C++/Just::Thread Pro, D
– Alternative: Erlang
● Dataflow Model:
– JVM: GPars, Pervasive DataRush
– Native: C++/Just::Thread Pro
● CSP:
– JVM: GPars, JCSP
– Native: C++CSP2
What, no mention of Groovy?
- 40. First Example Problem
● Something small, so the code is small.
● Something not too “toy”.
● Something with good parallelism.
– Embarrassingly parallel to allow checking of scaling.
- 42. What is the Value of π?
● Easy, it's known exactly: it's π (obviously).
It's simples
Note the use of image of cuddly animals in a
(gratuitous) attempt to make a connection
with other sessions at ACCU 2011.
- 43. Approximating π
● What is its value represented as a floating point
number?
– We can only obtain an approximation.
– There is a plethora of possible algorithms to choose
from; a popular one employs the following integral:
π/4 = ∫₀¹ 1/(1 + x²) dx
- 44. One Possible Algorithm
● Use quadrature to estimate the value of the integral
– which is the area under the curve.
π ≈ (1/n) ∑ᵢ₌₁ⁿ 4/(1 + ((i − 0.5)/n)²)
With n = 3 there is not much to do, but potentially
lots of error.
Embarrassingly parallel.
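The quadrature sum translates directly into code. A sequential sketch (the class name is ours); since every term of the sum is independent, this is the obvious candidate for splitting across threads, actors, dataflow operators, or CSP processes:

```java
// Quadrature estimate of pi from the sum
//   pi ~= (1/n) * sum_{i=1..n} 4 / (1 + ((i - 0.5)/n)^2)
public class PiQuadrature {
    static double estimate(final int n) {
        final double delta = 1.0 / n;
        double sum = 0.0;
        for (int i = 1; i <= n; ++i) {
            final double x = (i - 0.5) * delta; // midpoint of the i-th strip
            sum += 4.0 / (1.0 + x * x);
        }
        return delta * sum;
    }
    public static void main(final String[] args) {
        System.out.println(estimate(1000000));
    }
}
```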
- 45. Major Minor Hardware Problem
● Multiple hyperthreads per core on multicore
processors can be a serious waste of time.
Ed: Rant about chip manufacturers and
operating systems elided to avoid persecution
prosecution.
- 46. Second Example Problem
● Sleeping Barber Problem
– A barber shop has a cutting chair and some waiting chairs.
The barber sleeps in the cutting chair if there are no
customers. If a customer enters, the customer checks the
cutting chair and wakes the barber if the barber is asleep in
the chair, sits in the chair and gets a hair cut. If the
entering customer sees the cutting chair in use, the
customer checks to see if there is a waiting chair free. If
there is the customer sits and waits, otherwise the customer
leaves dissatisfied. On finishing a cut, the customer
leaves satisfied (we assume), and the barber checks for
waiting customers. If there are any, the next waiting
customer moves to the cutting chair. If
there are no waiting customers, the barber returns to
sleeping in the cutting chair.
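In a message-passing recasting, the waiting chairs become a bounded, thread-safe queue of customer messages: the barber "sleeps" by blocking on take, and a customer who cannot offer into a full queue leaves dissatisfied. This sketch (names and timings are invented) is not Dijkstra's original semaphore solution:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// The bounded queue is the waiting room; the sentinel closes the shop.
public class SleepingBarber {
    private static final int SHOP_CLOSED = -1;

    // Returns { customers trimmed, customers turned away }.
    static int[] runShop(final int waitingChairs, final int customers) {
        final BlockingQueue<Integer> chairs = new ArrayBlockingQueue<Integer>(waitingChairs);
        final int[] trimmed = { 0 };
        final Thread barber = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        final int customer = chairs.take(); // sleep until woken
                        if (customer == SHOP_CLOSED) { return; }
                        Thread.sleep(2); // cut hair
                        trimmed[0] += 1; // only the barber thread writes this
                    }
                } catch (final InterruptedException e) { }
            }
        });
        barber.start();
        int turnedAway = 0;
        try {
            for (int i = 0; i < customers; ++i) {
                if (!chairs.offer(i)) { ++turnedAway; } // no free chair: leave
                Thread.sleep(1);
            }
            chairs.put(SHOP_CLOSED); // close the shop after the queue drains
            barber.join();
        } catch (final InterruptedException e) { }
        return new int[] { trimmed[0], turnedAway };
    }
    public static void main(final String[] args) {
        final int[] result = runShop(4, 20);
        System.out.println(result[0] + " trimmed, " + result[1] + " turned away");
    }
}
```

Every customer is either trimmed or turned away, and neither the customers nor the barber touch any shared mutable state directly.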
- 47. Sleeping Barber Problem . . .
● . . . is an interesting recasting of a process
synchronization problem in operating systems.
● . . . is due to Edsger Dijkstra:
http://en.wikipedia.org/wiki/Sleeping_barber_problem
- 48. If the examples haven't been shown
yet, now is the time to show them!
- 49. Summary
● Multiprocessor programming is now the norm – even
if you don't actually need it.
● Hardware is rapidly heading towards distributed
memory architectures, instead of shared memory
multi-threading.
● Shared memory multi-threading requires locks,
semaphores, monitors, etc. and programmers find it
hard to get that stuff right.
● Actor Model, Dataflow Model, and CSP are higher
level abstractions of managed control flow that are
easier for programmers to deal with.
- 50. Summary of the Summary
● Shared-memory multi-threading is like stacks:
you know they're there, but you just don't
worry about them.
- 51. Summary of the Summary of the
Summary
● If you think locks, semaphores, monitors, etc. are
important for your work, you are either working in
the concurrency frameworks business (*) OR you are
doing it wrong.
● (*) Which includes operating systems.
- 52. END
//G.SYSIN DD *
Notice the lack of gratuitous adverts for
books such as “Python for Rookies” and
“Developing Java Software” that are
authored by the presenter.
- 53. Just Keep Sending
the Messages
Dr Russel Winder
It’z Interactive Ltd
russel@itzinteractive.com
@russel_winder