11. JSR166y
ForkJoin Divide and Conquer
ForkJoin is one of the basic algorithms of modern parallel processing,
widely used well beyond Java.
ForkJoin splits a computation into parts and runs those parts in
parallel across multiple cores to improve performance.
Here, let us first look at the "Divide and Conquer" technique
that is its essence.
12. Divide and Conquer
Result solve(Problem problem) {
  if (problem is small)
    solve problem directly;
  else {
    split problem into independent parts;
    fork a subtask to solve each part;
    join all subtasks;
    compose the result from the subresults;
  }
}
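The pseudocode maps directly onto jsr166y's RecursiveTask. A minimal runnable sketch as a parallel sum; SumTask and its THRESHOLD are illustrative names, not from the slides:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative divide-and-conquer sum over an int range.
class SumTask extends RecursiveTask<Long> {
    static final int THRESHOLD = 1000;
    final int[] array; final int lo, hi;
    SumTask(int[] array, int lo, int hi) {
        this.array = array; this.lo = lo; this.hi = hi;
    }
    protected Long compute() {
        if (hi - lo < THRESHOLD) {            // small enough: solve directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += array[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;            // split into independent parts
        SumTask left = new SumTask(array, lo, mid);
        SumTask right = new SumTask(array, mid, hi);
        left.fork();                          // fork one subtask
        return right.compute() + left.join(); // compute the other, then join
    }
}

public class SumDemo {
    public static void main(String[] args) {
        int[] data = new int[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 50005000
    }
}
```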
13. class SortTask extends RecursiveAction {
      final long[] array; final int lo; final int hi;
      SortTask(long[] array, int lo, int hi) {
        this.array = array; this.lo = lo; this.hi = hi;
      }
      protected void compute() {
        if (hi - lo < THRESHOLD)           // below THRESHOLD, use a
          sequentiallySort(array, lo, hi); // plain sequential sort
        else {
          int mid = (lo + hi) >>> 1;       // call SortTask recursively
          invokeAll(new SortTask(array, lo, mid),
                    new SortTask(array, mid, hi));
          merge(array, lo, hi);            // merge the results
        }
      }
    }
The recursive calls split the work into pieces.
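To make the slide's SortTask runnable, the helper methods it assumes (sequentiallySort and merge) have to be supplied; the versions below are one plausible sketch (note that this merge takes an explicit mid, a small deviation from the slide's three-argument call):

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// A runnable version of the slide's SortTask, with the helper
// methods filled in as one plausible sketch.
class SortTask extends RecursiveAction {
    static final int THRESHOLD = 8;
    final long[] array; final int lo; final int hi;   // sorts array[lo, hi)
    SortTask(long[] array, int lo, int hi) {
        this.array = array; this.lo = lo; this.hi = hi;
    }
    protected void compute() {
        if (hi - lo < THRESHOLD)
            sequentiallySort(array, lo, hi);
        else {
            int mid = (lo + hi) >>> 1;
            invokeAll(new SortTask(array, lo, mid),
                      new SortTask(array, mid, hi));
            merge(array, lo, mid, hi);
        }
    }
    static void sequentiallySort(long[] a, int lo, int hi) {
        Arrays.sort(a, lo, hi);              // plain sequential sort
    }
    static void merge(long[] a, int lo, int mid, int hi) {
        long[] buf = Arrays.copyOfRange(a, lo, mid);   // copy left half out
        for (int i = 0, j = mid, k = lo; i < buf.length; k++)
            a[k] = (j == hi || buf[i] <= a[j]) ? buf[i++] : a[j++];
    }
}

public class SortDemo {
    public static void main(String[] args) {
        long[] data = {5, 3, 9, 1, 8, 2, 7, 4, 6, 0};
        new ForkJoinPool().invoke(new SortTask(data, 0, data.length));
        System.out.println(Arrays.toString(data)); // [0, 1, 2, ..., 9]
    }
}
```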
14. [Diagram: the recursion tree of invokeAll(sortTask(lo, (lo+hi)/2), sortTask((lo+hi)/2, hi)) calls, halving each range until (hi - lo) < THRESHOLD, where a sequential sort and merge run.]
16. class IncrementTask extends RecursiveAction {
      final long[] array; final int lo; final int hi;
      IncrementTask(long[] array, int lo, int hi) {
        this.array = array; this.lo = lo; this.hi = hi;
      }
      protected void compute() {
        if (hi - lo < THRESHOLD) {       // below THRESHOLD, increment
          for (int i = lo; i < hi; ++i)  // each array element by 1
            array[i]++;
        }
        else {                           // call IncrementTask recursively
          int mid = (lo + hi) >>> 1;
          invokeAll(new IncrementTask(array, lo, mid),
                    new IncrementTask(array, mid, hi));
        }
      }
    }
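A minimal driver for IncrementTask; the pool size, THRESHOLD, and array length are illustrative:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class IncrementTask extends RecursiveAction {
    static final int THRESHOLD = 4;
    final long[] array; final int lo; final int hi;
    IncrementTask(long[] array, int lo, int hi) {
        this.array = array; this.lo = lo; this.hi = hi;
    }
    protected void compute() {
        if (hi - lo < THRESHOLD) {
            for (int i = lo; i < hi; ++i)    // small range: just increment
                array[i]++;
        } else {
            int mid = (lo + hi) >>> 1;       // split and recurse in parallel
            invokeAll(new IncrementTask(array, lo, mid),
                      new IncrementTask(array, mid, hi));
        }
    }
}

public class IncrementDemo {
    public static void main(String[] args) {
        long[] data = new long[16];          // all zeros
        new ForkJoinPool().invoke(new IncrementTask(data, 0, data.length));
        System.out.println(data[0] + " " + data[15]); // 1 1
    }
}
```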
17. [Diagram: the same recursion tree for invokeAll(incrementTask(lo, (lo+hi)/2), incrementTask((lo+hi)/2, hi)), halving each range until (hi - lo) < THRESHOLD, where array[i]++ runs directly.]
21. Double-Ended Queue (deque)
LIFO (Last In / First Out): push, pop
FIFO (First In / First Out): take
Each worker's queue is managed as a double-ended queue (deque),
supporting LIFO push/pop and FIFO take.
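The owner thread pushes and pops at the head (LIFO), while idle thieves take from the tail (FIFO). The access pattern can be sketched with a plain ArrayDeque (single-threaded here just to show the two ends; the real work-stealing deque is a concurrent, lock-free structure):

```java
import java.util.ArrayDeque;

public class DequeDemo {
    public static void main(String[] args) {
        ArrayDeque<String> deque = new ArrayDeque<>();
        // Owner pushes its newly forked tasks onto the head.
        deque.addFirst("task1");
        deque.addFirst("task2");
        deque.addFirst("task3");
        // Owner pops from the same end: LIFO, good cache locality.
        System.out.println(deque.pollFirst()); // task3
        // A thief takes from the opposite end: FIFO, stealing the oldest
        // (usually largest) task and minimizing contention with the owner.
        System.out.println(deque.pollLast());  // task1
    }
}
```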
30. Apply
ForkJoinPool fjp = new ForkJoinPool(i);
ParallelArray pa = ParallelArray.createUsingHandoff(array, fjp);
final Proc proc = new Proc();
pa.apply(proc);
public void apply(Ops.Procedure<? super T> procedure)
Applies procedure to each element.
static final class Proc implements Ops.Procedure<Rand> {
public void op(Rand x) {
for (int k = 0; k < (1 << 10); ++k)
x.next();
}
}
31. withFilter
ForkJoinPool fjp = new ForkJoinPool(ps);
ParallelArray<Rand> pa = ParallelArray.createUsingHandoff(
array, fjp);
final IsPrime pred = new IsPrime();
List<Rand> result = pa.withFilter(pred).all().asList();
public ParallelArray withFilter(Ops.Predicate<? super T> selector)
Selects the elements for which selector is true.
static final Ops.Predicate isSenior = new Ops.Predicate() {
public boolean op(Student s) {
return s.graduationYear == Student.THIS_YEAR;
}
};
32. withMapping / Reduce
sum += pa.withMapping(getNext).reduce(accum, zero);
public <U> ParallelArrayWithMapping<T,U> withMapping(Ops.Op<? super T, ? extends U> op)
static final class GetNext implements Ops.Op<Rand, Long>
final GetNext getNext = new GetNext();
static final class Accum implements Ops.Reducer<Long>
final Accum accum = new Accum();
final Long zero = Long.valueOf(0);
33. static final class GetNext          // argument type, return type
      implements Ops.Op<Rand, Long> {
      public Long op(Rand x) {
        return x.next();
      }
    }
    static final class Accum            // argument type
      implements Ops.Reducer<Long> {
      public Long op(Long a, Long b) {
        long x = a;
        long y = b;
        return x + y;
      }
    }
34. Basically, you must supply an implementation of the method op.
public class Ops {
private Ops() {} // disable construction
// Thanks to David Biesack for the above html table
// You want to read/edit this with a wide editor panel
public static interface Op<A,R> {R op(A a);}
public static interface BinaryOp<A,B,R> {R op(A a, B b);}
public static interface Predicate<A> { boolean op(A a);}
public static interface BinaryPredicate<A,B> { boolean op(A a, B b);}
public static interface Procedure<A> { void op(A a);}
public static interface Generator<R> {R op();}
public static interface Reducer<A> extends BinaryOp<A, A, A>{}
……
……
}
This tedium is greatly reduced by introducing closures.
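With the closures that eventually arrived in Java SE 8 (lambdas), an Ops-style interface can be implemented in a single expression. A sketch (the interfaces are redeclared locally just to keep it self-contained):

```java
public class OpsLambdaDemo {
    // The same shapes as jsr166y's Ops interfaces.
    interface Op<A, R> { R op(A a); }
    interface Reducer<A> { A op(A a, A b); }

    public static void main(String[] args) {
        // Pre-lambda: a named or anonymous class per operation.
        Reducer<Long> accumOld = new Reducer<Long>() {
            public Long op(Long a, Long b) { return a + b; }
        };
        // With lambdas the boilerplate collapses to the operation itself.
        Reducer<Long> accum = (a, b) -> a + b;
        Op<String, Integer> length = String::length;
        System.out.println(accum.op(accumOld.op(1L, 2L), 3L)); // 6
        System.out.println(length.op("closure"));              // 7
    }
}
```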
35. There’s not a moment to lose!
http://mreinhold.org/blog/closures 2009/11/24
The free lunch is over.
Multicore processors are not just coming—they’re here.
Leveraging multiple cores requires writing scalable parallel
programs, which is incredibly hard.
Tools such as fork/join frameworks based on work-stealing
algorithms make the task easier, but it still takes a fair bit of
expertise and tuning.
Bulk-data APIs such as parallel arrays allow computations to
be expressed in terms of higher-level, SQL-like operations
(e.g., filter, map, and reduce) which can be mapped
automatically onto the fork-join paradigm.
Working with parallel arrays in Java, unfortunately, requires
lots of boilerplate code to solve even simple problems.
Closures can eliminate that boilerplate.
36. There’s not a moment to lose!
Closures for Java By M.Reinhold
The free lunch is over. Multicore processors are not
just on their way; they are already here, right in
front of us.
To exploit multicore we must write scalable parallel
programs, and that is incredibly hard.
Tools such as Fork/Join frameworks based on work-stealing
algorithms make the job easier, but they still demand
considerable expertise and tuning.
37. There’s not a moment to lose!
Closures for Java By M.Reinhold
Bulk-data APIs such as ParallelArray let computations
be expressed at a high level of abstraction, in
SQL-like operations (filter, map, reduce, and so on).
These operations can be mapped automatically onto
the ForkJoin paradigm.
Unfortunately, working with ParallelArray in Java
requires writing a lot of boilerplate code, even to
solve simple problems.
38. There’s not a moment to lose!
Closures for Java By M.Reinhold
Closures can eliminate that boilerplate.
Now is the time to add closures to Java.
Reinhold made this argument two years ago, but
unfortunately, while Java SE 7 did introduce ForkJoin,
the introduction of closures was deferred to Java SE 8.
39. Java SE7のForkJoin
http://docs.oracle.com/javase/7/docs/technotes/guides/concurrency/index.html
43. Ordinary sequential processing:
iteration with a for loop
class Student {
  String name;
  int gradYear;
  double score;
}
List<Student> students = …… ;
double max = Double.MIN_VALUE;
for (Student s : students) {
  if (s.gradYear == 2011)
    max = Math.max(max, s.score);
}
return max;
44. Processing with ParallelArray, without closures
double max
  = students
    .filter(new Predicate<Student>() {
       public boolean eval(Student s) {
         return s.gradYear == 2011;
       }
     })
    .map(new Mapper<Student,Double>() {
       public Double map(Student s) {
         return s.score;
       }
     })
    .reduce(0.0, new Reducer<Double,Double>() {
       public Double reduce(Double max, Double score) {
         return Math.max(max, score);
       }
     });
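For comparison, here is how the same query looks once closures are available. This is a hedged sketch in java.util.stream form, the API through which Java SE 8 eventually delivered this style; it is not from the slides:

```java
import java.util.Arrays;
import java.util.List;

public class MaxScoreDemo {
    static class Student {
        final String name; final int gradYear; final double score;
        Student(String name, int gradYear, double score) {
            this.name = name; this.gradYear = gradYear; this.score = score;
        }
    }

    public static void main(String[] args) {
        List<Student> students = Arrays.asList(
            new Student("a", 2011, 3.1),
            new Student("b", 2010, 3.9),
            new Student("c", 2011, 3.7));
        // filter / map / reduce with lambdas; .parallelStream() would
        // hand the same pipeline to the fork/join machinery.
        double max = students.stream()
            .filter(s -> s.gradYear == 2011)
            .mapToDouble(s -> s.score)
            .reduce(0.0, Math::max);
        System.out.println(max); // 3.7
    }
}
```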
54. Parallel Programming in the
.NET Framework
Many personal computers and workstations have two or
four cores (that is, CPUs) that enable multiple threads to
be executed simultaneously.
Computers in the near future are expected to have
significantly more cores. To take advantage of the
hardware of today and tomorrow, you can
parallelize your code to distribute work across
multiple processors.
In the past, parallelization required low-level
manipulation of threads and locks.
Visual Studio 2010 and the .NET Framework 4 enhance
support for parallel programming by providing a new
runtime, new class library types, and new diagnostic
tools.
55. .NET 4
new runtime, new class library
Task Parallel Library
Parallel LINQ (PLINQ)
Data Structures for Parallel
Programming
Parallel Diagnostic Tools
Custom Partitioners for PLINQ and TPL
Task Factories
Task Schedulers
Lambda Expressions in PLINQ and TPL
………
56. [Diagram: the .NET User Mode Scheduler. The program thread feeds work into a single global queue in the CLR thread pool; worker threads 1..p take work from that global queue.]
57. [Diagram: the .NET 4.0 User Mode Scheduler for tasks. The CLR thread pool adds work-stealing: besides the global queue, each worker thread 1..p has its own local queue of tasks (Task 1 … Task 6), and idle workers steal tasks from other workers' local queues.]
59. PLINQ Code Sample
var source = Enumerable.Range(1, 10000);
// Opt-in to PLINQ with AsParallel
var evenNums = from num in source.AsParallel()
where Compute(num) > 0
select num;
var query = from item in
source.AsParallel().WithDegreeOfParallelism(2)
where Compute(item) > 42
select item;
evenNums = from num in numbers.AsParallel().AsOrdered()
where num % 2 == 0
select num;
60. ForAll Operation
var nums = Enumerable.Range(10, 10000);
var query = from num in nums.AsParallel()
where num % 10 == 0
select num;
// Process the results as each thread completes and add them to a
// System.Collections.Concurrent.ConcurrentBag(Of Int),
// which can safely accept concurrent add operations
query.ForAll((e) => concurrentBag.Add(Compute(e)));
63. Sequential Fallback in .NET 4
and .NET 4.5
Operators that may cause sequential fallback in both .NET 4 and .NET 4.5 are marked
in blue, and operators that may cause fallback in .NET 4 but no longer in .NET 4.5 are
marked in orange.
68. Intel OpenCL
OpenCL™ (Open Computing Language) is
the first open, royalty-free standard for
general-purpose parallel programming of
heterogeneous systems
OpenCL provides a uniform programming
environment for software developers to
write efficient, portable code for client
computer systems, high-performance
computing servers, and handheld devices
using a diverse mix of multi-core CPUs
and other parallel processors.
72. Intel RiverTrail
https://github.com/RiverTrail/RiverTrail/wiki
The goal of Intel Lab’s River Trail project is to
enable data-parallelism in web applications.
River Trail gently extends JavaScript with
simple deterministic data-parallel
constructs that are translated at runtime
into a low-level hardware abstraction
layer.
By leveraging multiple CPU cores and vector
instructions, River Trail achieves significant
speedup over sequential JavaScript.
77. Map
myArray.map(elementalFunction,
arg1, arg2, ...)
Returns
A freshly minted ParallelArray
Example: an identity function
pa.map(function(val){return val;})
78. Filter
myArray.filter(elementalFunction,
arg1, arg2, ...)
Returns
A freshly minted ParallelArray holding the source elements
for which the elemental function returns true.
Example
pa.filter(function(){return true;})
79. Reduce
myArray.reduce(elementalFunction)
myArray.reduce(elementalFunction,
arg1, arg2, ...)
Returns
The result of reducing a and b, typically used in further
applications of the elemental function.
Reduce is free to group calls to the elemental
function in arbitrary ways and order the calls
arbitrarily. If the elemental function is associative
then the final result will be the same regardless of
the ordering.
80. Flatten
myArray.flatten()
Returns
A freshly minted ParallelArray whose outermost two
dimensions have been collapsed into one.
Example
pa = new ParallelArray([[1,2],[3,4]])
// <<1,2>,<3,4>>
pa.flatten()
// <1,2,3,4>
81. Partition
myArray.partition(size)
size
the size of each element of the newly created dimension;
the outermost dimension of myArray needs to be
divisible by size
Returns
A freshly minted ParallelArray where the outermost
dimension has been partitioned into elements of size
size.
Example
pa = new ParallelArray([1,2,3,4]) // <1,2,3,4>
pa.partition(2) // <<1,2>,<3,4>>
83. Scale-out and Stateless Servers
Scaling out multi-tier Web applications
Java EE 6: StatelessSessionBean + Servlet
Java EE 6: RESTful Web Services
Play 2.0: the routes file and Actions
102. # Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~
# The home page
GET / controllers.Projects.index
# Authentication
GET /login controllers.Application.login
POST /login controllers.Application.authenticate
GET /logout controllers.Application.logout
# Projects
POST /projects controllers.Projects.add
POST /projects/groups controllers.Projects.addGroup()
DELETE /projects/groups controllers.Projects.deleteGroup(group: String)
PUT /projects/groups controllers.Projects.renameGroup(group: String)
DELETE /projects/:project controllers.Projects.delete(project: Long)
PUT /projects/:project controllers.Projects.rename(project: Long)
103. POST /projects/:project/team controllers.Projects.addUser(project: Long)
DELETE /projects/:project/team controllers.Projects.removeUser(project: Long)
# Tasks
GET /projects/:project/tasks controllers.Tasks.index(project: Long)
POST /projects/:project/tasks controllers.Tasks.add(project: Long, folder: String)
PUT /tasks/:task controllers.Tasks.update(task: Long)
DELETE /tasks/:task controllers.Tasks.delete(task: Long)
POST /tasks/folder controllers.Tasks.addFolder
DELETE /projects/:project/tasks/folder
controllers.Tasks.deleteFolder(project: Long, folder: String)
PUT /project/:project/tasks/folder
controllers.Tasks.renameFolder(project: Long, folder: String)
# Javascript routing
GET /assets/javascripts/routes controllers.Application.javascriptRoutes
# Map static resources from the /public folder to the /public path
GET /assets/*file controllers.Assets.at(path="/public", file)
104. Writing controller Actions
app/controllers/Application.java
The Java/Scala files under app/controllers/ define the
Actions that the routes file maps to HTTP requests.
package controllers;
import play.*;
import play.mvc.*;
import views.html.*;
public class Application extends Controller {
public static Result index() {
return ok(index.render("Hello World!"));
}
}
106. Hack the Web for Real-Time
Ajax applications use various "hacks" to
simulate real-time communication
Polling - HTTP requests at regular intervals, each
immediately receiving a response
Long Polling - an HTTP request is kept open by the
server for a set period
Streaming - more efficient, but complex to
implement and unreliable
Excessive HTTP header traffic adds significant
overhead to each request and response
107. HTTP Characteristics
HTTP is designed for document
transfer
Resource addressing
Request / Response interaction
Caching
HTTP is bidirectional, but half-duplex
Traffic flows in only one direction at a time
HTTP is stateless
Header information is resent for each
request
108. Traditional vs Web
Traditional Computing
Full-duplex bidirectional TCP sockets
Access any server on the network
Web Computing
Half-duplex HTTP request-response
HTTP polling, long polling fraught with
problems
Lots of latency, lots of bandwidth, lots
of server-side resources
Bespoke solutions became very
complex over time
109. HTML5 WebSocket
WebSockets provide an improved Web
communications fabric
Consists of a W3C API and an IETF protocol
Provides a full-duplex, single socket over
the Web
Traverses firewalls, proxies, and routers
seamlessly
Leverages Cross-Origin Resource Sharing
Shares a port with existing HTTP content
Can be secured with TLS (much like HTTPS)
113. The Legacy Web Stack
Designed to serve static documents
HTTP
Half duplex communication
High latency
Bandwidth intensive
HTTP header traffic approx. 800 to 2000 bytes
overhead per request/response
Complex architecture
Not changed since the 90’s
Plug-ins
Polling / long polling
Legacy application servers
Expensive to "Webscale" applications
115. WebSocket Handshake
Server Response
Required:
HTTP/1.1 101 "Switching Protocols" (or another description)
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: 20-byte SHA-1 hash in Base64
Optional:
Sec-WebSocket-Protocol: protocol
Sec-WebSocket-Extensions: extension [,extension]*
116. JavaScript How do I use:
WebSocket API
//Create new WebSocket
var mySocket = new WebSocket("ws://www.WebSocket.org");
// Associate listeners
mySocket.onopen = function(evt) {
  alert("Connection open…");
};
mySocket.onmessage = function(evt) {
  alert("Received message: " + evt.data);
};
117. JavaScript How do I use:
WebSocket API
mySocket.onclose = function(evt) {
  alert("Connection closed…");
};
// Sending data
mySocket.send("WebSocket Rocks!");
// Close WebSocket
mySocket.close();
118. WebSocket Frames
Frames have a few header bytes
Data may be text or binary
Frames from client to server are masked
(XORed with a random value) to avoid confusing
proxies
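The masking step can be sketched in a few lines of Java (RFC 6455 defines it as XOR with a 4-byte key, applying key[i % 4] to byte i; the fixed key below is illustrative, real clients must pick a fresh random key per frame):

```java
public class MaskDemo {
    // XOR payload bytes with the 4-byte masking key, as in RFC 6455.
    static byte[] mask(byte[] payload, byte[] key) {
        byte[] out = new byte[payload.length];
        for (int i = 0; i < payload.length; i++)
            out[i] = (byte) (payload[i] ^ key[i % 4]);
        return out;
    }

    public static void main(String[] args) {
        byte[] key = {0x12, 0x34, 0x56, 0x78};   // illustrative, not random
        byte[] data = "WebSocket Rocks!".getBytes();
        byte[] masked = mask(data, key);
        // Masking is its own inverse: applying it twice restores the data.
        System.out.println(new String(mask(masked, key))); // WebSocket Rocks!
    }
}
```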
119. HTTP Header Traffic Analysis
Example network throughput for HTTP request
and response headers associated with polling
Use case A: 1,000 clients polling every second:
Network throughput is (871 x 1,000) = 871,000
bytes = 6,968,000 bits per second (~6.6 Mbps)
Use case B: 10,000 clients polling every
second:
Network throughput is (871 x 10,000) = 8,710,000
bytes = 69,680,000 bits per second (~66 Mbps)
Use case C: 100,000 clients polling every
second:
Network throughput is (871 x 100,000) =
87,100,000 bytes = 696,800,000 bits per second
(~665 Mbps)
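The arithmetic behind those figures, assuming 871 header bytes per request/response pair and 8 bits per byte (the ~Mbps values use mebibit rounding, matching the slide):

```java
public class PollingOverheadDemo {
    public static void main(String[] args) {
        long headerBytes = 871;                   // per request/response pair
        for (long clients : new long[]{1_000, 10_000, 100_000}) {
            long bitsPerSecond = headerBytes * clients * 8;
            System.out.printf("%,d clients: %,d bits/s (~%.1f Mbps)%n",
                    clients, bitsPerSecond, bitsPerSecond / 1_048_576.0);
        }
    }
}
```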
120. Reduction in Network Traffic
With WebSocket, each frame has only
several bytes of packaging (a 500:1 or
even 1000:1 reduction)
No latency involved in establishing new
TCP connections for each HTTP message
Dramatic reduction in unnecessary network
traffic and latency
Remember the Polling HTTP header traffic?
665 Mbps network throughput for just headers
121. HTTP versus WebSockets
Example: entering a character in a search
field with auto-suggestion
          HTTP Traffic     WebSocket Traffic
Google    788 + 1 byte     2 + 1 byte
Yahoo     1737 + 1 byte    2 + 1 byte
WebSockets reduce bandwidth
overhead by up to 1000x
123. “Reducing kilobytes of data to 2 bytes…and
reducing latency from 150ms to 50ms is
far more than marginal. In fact, these two
factors alone are enough to make
WebSocket seriously interesting to
Google.”
—Ian Hickson
(Google, HTML5 spec lead)
125. Let's make the web faster
As part of the "Let's make the web faster"
initiative, we are experimenting with
alternative protocols to help reduce the
latency of web pages. One of these
experiments is SPDY (pronounced "SPeeDY"),
an application-layer protocol for transporting
content over the web, designed specifically for
minimal latency.
In lab tests, we have compared the
performance of these applications over HTTP
and SPDY, and have observed up to 64%
reductions in page load times in SPDY.
126. Background:
web protocols and web latency
Unfortunately, HTTP was not particularly
designed for latency. Furthermore, the web
pages transmitted today are significantly
different from web pages 10 years ago and
demand improvements to HTTP that could not
have been anticipated when HTTP was
developed.
Single request per connection.
Exclusively client-initiated requests.
Uncompressed request and response
headers.
Redundant headers.
Optional data compression.
127. Goals for SPDY
To target a 50% reduction in page load
time.
To minimize deployment complexity.
To avoid the need for any changes to content
by website authors.
To bring together like-minded parties
interested in exploring protocols as a way of
solving the latency problem.
128. Some specific technical goals
To allow many concurrent HTTP requests
to run across a single TCP session.
To define a protocol that is easy to
implement and server-efficient.
To make SSL the underlying transport protocol,
for better security and compatibility with
existing network infrastructure.
To enable the server to initiate communications
with the client and push data to the client
whenever possible.
129. SPDY design and features
SPDY adds a session layer atop of SSL that
allows for multiple concurrent, interleaved
streams over a single TCP connection.
The usual HTTP GET and POST message
formats remain the same; however, SPDY
specifies a new framing format for encoding
and transmitting the data over the wire.
Streams are bi-directional, i.e. they can be
initiated by either the client or the server.
130. Basic features
Multiplexed streams
SPDY allows for unlimited concurrent streams over a
single TCP connection. Because requests are
interleaved on a single channel, the efficiency of TCP
is much higher: fewer network connections need to
be made, and fewer, but more densely packed,
packets are issued.
Request prioritization
SPDY implements request priorities: the client can
request as many items as it wants from the server,
and assign a priority to each request.
HTTP header compression
SPDY compresses request and response HTTP
headers, resulting in fewer packets and fewer bytes
transmitted.
131. Advanced features
Server push
SPDY experiments with an option for servers to push
data to clients via the X-Associated-Content
header. This header informs the client that the server
is pushing a resource to the client before the client
has asked for it. For initial-page downloads (e.g. the
first time a user visits a site), this can vastly enhance
the user experience.
Server hint
Rather than automatically pushing resources to the
client, the server uses the X-Subresources header
to suggest to the client that it should ask for specific
resources, in cases where the server knows in
advance of the client that those resources will be
needed.
133. Java Future
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Future.html
Since Java SE5
134. public interface Future<V>
A Future represents the result of an asynchronous
computation. Methods are provided to check if the
computation is complete, to wait for its completion, and to
retrieve the result of the computation. The result can only
be retrieved using method get when the computation
has completed, blocking if necessary until it is ready.
Cancellation is performed by the cancel method. Additional
methods are provided to determine if the task completed
normally or was cancelled. Once a computation has
completed, the computation cannot be cancelled. If you
would like to use a Future for the sake of cancellability but
not provide a usable result, you can declare types of the
form Future<?> and return null as a result of the underlying
task.
135. Future Sample
interface ArchiveSearcher { String search(String target); }
class App {
  ExecutorService executor = ...
  ArchiveSearcher searcher = ...
  void showSearch(final String target)
      throws InterruptedException {
    Future<String> future
      = executor.submit(new Callable<String>() {
          public String call() {
            return searcher.search(target);
          }});
    displayOtherThings(); // do other things while searching
    try {
      displayText(future.get()); // use future
    } catch (ExecutionException ex) { cleanup(); return; }
  }
}
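A self-contained, runnable variant of the same pattern; the archive search is faked with a string concatenation, and the names are illustrative:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        final String target = "jsr166";
        // Submit the search; the caller can do other work meanwhile.
        Future<String> future = executor.submit(new Callable<String>() {
            public String call() {
                return "found:" + target;  // stands in for searcher.search(target)
            }
        });
        // get() blocks until the computation is complete.
        System.out.println(future.get()); // found:jsr166
        executor.shutdown();
    }
}
```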
136. FutureTask
FutureTask<String> future =
new FutureTask<String>(new
Callable<String>() {
public String call() {
return searcher.search(target);
}});
executor.execute(future);
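FutureTask implements both Runnable and Future, so it can be handed to any Executor and queried later. A runnable sketch (the Callable result stands in for searcher.search(target)):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        FutureTask<String> future = new FutureTask<String>(
            new Callable<String>() {
                public String call() {
                    return "42";   // stands in for searcher.search(target)
                }
            });
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(future);          // FutureTask is a Runnable
        System.out.println(future.get());  // 42 -- FutureTask is a Future
        executor.shutdown();
    }
}
```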
140. DoWorkAsync
async void DoWorkAsync() {
var t1 = ProcessFeedAsync("www.acme.com/rss");
var t2 = ProcessFeedAsync("www.xyznews.com/rss");
await Task.WhenAll(t1, t2);
DisplayMessage("Done");
}
async Task ProcessFeedAsync(string url) {
var text = await DownloadFeedAsync(url);
var doc = ParseFeedIntoDoc(text);
await SaveDocAsync(doc);
ProcessLog.WriteEntry(url);
}
141. WriteFileAsync
async public Task WriteFileAsync(string filename, string contents)
{
  var localFolder =
    Windows.Storage.ApplicationData.Current.LocalFolder;
  var file = await localFolder.CreateFileAsync(filename,
    Windows.Storage.CreationCollisionOption.ReplaceExisting);
  var fs = await file.OpenAsync(
    Windows.Storage.FileAccessMode.ReadWrite);
  //...
}
await WriteFileAsync("FileName", "Some Text");
142. GetRssAsync
async Task <XElement> GetRssAsync(string url) {
var client = new WebClient();
var task = client.DownloadStringTaskAsync(url);
var text = await task;
var xml = XElement.Parse(text);
return xml;
}
145. Futures
A future is an abstraction which represents a
value which may become available at some
point.
A Future object either holds a result of a
computation or an exception in the case that
the computation failed.
An important property of a future is that it is in
effect immutable: it can never be written to or
failed by the holder of the Future object.
146. val f: Future[List[String]] = future {
  session.getRecentPosts
}
f onFailure {
  case t => render("An error has occurred: " + t.getMessage)
} onSuccess {
  case posts => for (post <- posts) render(post)
}
147. Callbacks
Registering an onComplete callback on the
future ensures that the corresponding closure
is invoked after the future is completed.
Registering an onSuccess or onFailure
callback has the same semantics, with the
difference that the closure is only called if the
future is completed successfully or fails,
respectively.
Registering a callback on a future which is
already completed will result in the callback
being executed eventually (as implied by the
properties above). Furthermore, the callback may
even be executed synchronously on the same thread.
148. Callbacks
In the event that multiple callbacks are
registered on the future, the order in which
they are executed is not defined. In fact, the
callbacks may be executed concurrently with
one another. However, a particular Future
implementation may have a well-defined order.
In the event that some of the callbacks throw
an exception, the other callbacks are executed
regardless.
In the event that some of the callbacks never
complete (e.g. the callback contains an infinite
loop), the other callbacks may not be executed
at all.
149. Functional Composition
val rateQuote = future {
connection.getCurrentValue(USD)
}
rateQuote onSuccess { case quote =>
val purchase = future {
if (isProfitable(quote)) connection.buy(amount, quote)
else throw new Exception("not profitable")
}
purchase onSuccess {
case _ => println("Purchased " + amount + " USD")
}
}
150. For-Comprehensions
val usdQuote = future { connection.getCurrentValue(USD) }
val chfQuote = future { connection.getCurrentValue(CHF) }
val purchase = for {
usd <- usdQuote
chf <- chfQuote
if isProfitable(usd, chf)
} yield connection.buy(amount, chf)
purchase onSuccess {
case _ => println("Purchased " + amount + " CHF")
}
151. Promises
While futures are defined as a type of read-only
placeholder object created for a result
which doesn’t yet exist, a promise can be
thought of as a writable, single-assignment
container, which completes a future.
That is, a promise can be used to successfully
complete a future with a value (by
“completing” the promise) using the success
method. Conversely, a promise can also be
used to complete a future with an exception,
by failing the promise, using the failure
method.
152. import scala.concurrent.{ future, promise }
val p = promise[T]
val f = p.future
val producer = future {
val r = produceSomething()
p success r
continueDoingSomethingUnrelated()
}
val consumer = future {
startDoingSomething()
f onSuccess {
case r => doSomethingWithResult()
}
}
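Java eventually grew the same promise/future duality: CompletableFuture (Java SE 8) is writable by its producer (complete, the analogue of the success method above) and readable by consumers. A hedged Java sketch of the producer/consumer example, not from the slides:

```java
import java.util.concurrent.CompletableFuture;

public class PromiseDemo {
    public static void main(String[] args) throws Exception {
        // Plays both roles: promise (complete) and future (get).
        final CompletableFuture<String> promise = new CompletableFuture<>();
        Thread producer = new Thread(new Runnable() {
            public void run() {
                String r = "produced value";   // stands in for produceSomething()
                promise.complete(r);           // like "p success r" in Scala
            }
        });
        producer.start();
        // Consumer blocks here; thenAccept would register a callback instead.
        System.out.println(promise.get()); // produced value
        producer.join();
    }
}
```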
154. import akka.dispatch.Await
implicit val timeout = system.settings.ActorTimeout
val future = actor ? msg
val result = Await.result(future, timeout.duration).asInstanceOf[String]
import akka.dispatch.Future
val future: Future[String] = (actor ? msg).mapTo[String]
156. Composition
val f1 = Future {
"Hello" + "World"
}
val f2 = Promise.successful(3)
val f3 = f1 flatMap { x ⇒
f2 map { y ⇒
x.length * y
}
}
val result = Await.result(f3, 1 second)
result must be(30)
157. For-Comprehension
val f = for {
a ← Future(10 / 2) // 10 / 2 = 5
b ← Future(a + 1) // 5 + 1 = 6
c ← Future(a - 1) // 5 - 1 = 4
} yield b * c // 6 * 4 = 24
// Note that the execution of futures a, b, and c
// are not done in parallel.
val result = Await.result(f, 1 second)
result must be(24)
158. val f1 = actor1 ? msg1
val f2 = actor2 ? msg2
val a = Await.result(f1, 1 second).asInstanceOf[Int]
val b = Await.result(f2, 1 second).asInstanceOf[Int]
val f3 = actor3 ? (a + b)
val result = Await.result(f3, 1 second).asInstanceOf[Int]
159. // Create a sequence of Futures
val futures = for (i ← 1 to 1000) yield Future(i * 2)
val futureSum = Future.fold(futures)(0)(_ + _)
Await.result(futureSum, 1 second) must be(1001000)
// Create a sequence of Futures
val futures = for (i ← 1 to 1000) yield Future(i * 2)
val futureSum = Future.reduce(futures)(_ + _)
Await.result(futureSum, 1 second) must be(1001000)
160. Beyond Mere Actors
http://www.slideshare.net/bostonscal
a/beyond-mere-actors
161. On Time-Travel
Promised values are available in the
future.
What does it mean to get a value out
of the future? Time-travel into the
future is easy. Just wait. But we don't
have to go into the future. We can
give our future-selves instructions.
Instead of getting values out of
the future, we send computations
into the future.
162. JMS 2.0
Last maintenance release (1.1) was
in 2003
March 2011: JSR 343 launched to
develop JMS 2.0
163. Initial goals of JMS 2.0
Simpler and easier to use
simplify the API
make use of CDI (Contexts and
Dependency Injection)
clarify any ambiguities in the spec
Support new themes of Java EE 7
PaaS
Multi-tenancy
164. Initial goals of JMS 2.0
Standardise interface with application
servers
Clarify relationship with other Java EE
specs
some JMS behaviour defined in other
specs
New messaging features
standardize some existing vendor
extensions (or will retrospective
standardisation be difficult?)
165. Simplifying the JMS API
Receiving messages in Java EE
@MessageDriven(mappedName = "jms/inboundQueue")
public class MyMDB implements MessageListener {
  public void onMessage(Message message) {
    String payload = ((TextMessage) message).getText();
    // do something with payload
  }
}
167. Possible new API
@Resource(mappedName="jms/contextFactory")
ContextFactory contextFactory;
@Resource(mappedName="jms/orderQueue")
Queue orderQueue;
public void sendMessage(String payload) {
try (MessagingContext mCtx = contextFactory.createContext()) {
TextMessage textMessage =
mCtx.createTextMessage(payload);
mCtx.send(orderQueue,textMessage);
}
}
168. Annotations for the new API
@Resource(mappedName="jms/orderQueue")
Queue orderQueue;
@Inject
@MessagingContext(lookup="jms/contextFactory")
MessagingContext mCtx;
@Inject
TextMessage textMessage;
public void sendMessage(String payload) {
textMessage.setText(payload);
mCtx.send(orderQueue,textMessage);
}
169. Annotations for the old API
@Inject
@JMSConnection(lookup="jms/connFactory")
@JMSDestination(lookup="jms/inboundQueue")
MessageProducer producer;
@Inject
TextMessage textMessage;
public void sendMessage(String payload) {
  try {
    textMessage.setText(payload);
    producer.send(textMessage);
  } catch (JMSException e) {
    // do something
  }
}
170. Send a message with async
acknowledgement from server
Send a message and return immediately without
blocking until an acknowledgement has been received
from the server.
Instead, when the acknowledgement is received, an
asynchronous callback will be invoked
Why? Allows thread to do other work whilst waiting for
the acknowledgement
producer.send(message, new AcknowledgeListener(){
public void onAcknowledge(Message message) {
// process ack
}
});
171. Topic hierarchies
Topics can be arranged in a hierarchy
STOCK.NASDAQ.TECH.ORCL
STOCK.NASDAQ.TECH.GOOG
STOCK.NASDAQ.TECH.ADBE
STOCK.NYSE.TECH.HPQ
Consumers can subscribe using wildcards
STOCK.*.TECH.*
STOCK.NASDAQ.TECH.*
Most vendors support this already
Details TBD
172. Multiple consumers on a topic
subscription
Allows scalable consumption of messages from
a topic subscription
multiple threads
multiple JVMs
No further change to API for durable
subscriptions (clientID not used)
New API for non-durable subscriptions
Why? Allows greater scalability
MessageConsumer messageConsumer=
session.createSharedConsumer(
topic,sharedSubscriptionName);
173. Batch delivery
Will allow messages to be delivered
asynchronously in batches
New method on MessageConsumer
New listener interface BatchMessageListener
Acks also sent in a batch
Why? May be more efficient for JMS provider or
application
void setBatchMessageListener(
BatchMessageListener listener,
int batchSize,
long batchTimeOut)