Higher-order functions such as map(), flatMap(), filter() and reduce() have their origins in mathematics and early functional programming languages such as Lisp. But today they have entered the mainstream and are available in languages such as JavaScript, Scala and Java 8. They are well on their way to becoming an essential part of every developer’s toolbox.
In this talk you will learn how these and other higher-order functions enable you to write simple, expressive and concise code that solves problems in a diverse set of domains. We will describe how to use them to process collections in Java and Scala. You will learn how functional futures and Rx (Reactive Extensions) Observables simplify concurrent code. We will even talk about how to write big data applications in a functional style using libraries such as Scalding.
1. Map(), flatMap() and reduce()
are your new best friends:
simpler collections,
concurrency, and big data
Chris Richardson
Author of POJOs in Action
Founder of the original CloudFoundry.com
@crichardson
chris@chrisrichardson.net
http://plainoldobjects.com
4. @crichardson
About Chris
Founder of a buzzword compliant (stealthy, social, mobile, big data, machine
learning, ...) startup
Consultant helping organizations improve how they architect and deploy
applications using cloud, microservices, polyglot applications, NoSQL, ...
6. @crichardson
Functional programming is a programming paradigm
Functions are the building blocks of the application
Best done in a functional programming language
7. @crichardson
Functions as first class citizens
Assign functions to variables
Store functions in fields
Use and write higher-order functions:
Pass functions as arguments
Return functions as values
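The bullets above can be sketched in Java 8. This is a minimal, illustrative example (the names `twice` and `addThree` are mine, not from the talk): a function is assigned to a variable, passed as an argument, and returned as a value.

```java
import java.util.function.IntUnaryOperator;

public class FirstClass {
    // A higher-order function: takes a function and returns a new function
    static IntUnaryOperator twice(IntUnaryOperator f) {
        return x -> f.applyAsInt(f.applyAsInt(x));
    }

    public static void main(String[] args) {
        // Assign a function to a variable
        IntUnaryOperator addThree = x -> x + 3;
        // Pass it as an argument; receive a new function as the result
        IntUnaryOperator addSix = twice(addThree);
        System.out.println(addSix.applyAsInt(10)); // prints 16
    }
}
```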
10. @crichardson
Why functional programming?
More expressive
More intuitive - declarative code matches problem definition
Functional code is usually much more composable
Immutable state:
Less error-prone
Easy parallelization and concurrency
But be pragmatic
13. @crichardson
Lisp = an early functional language
invented in 1958
http://en.wikipedia.org/wiki/Lisp_(programming_language)
[Timeline: 1940–2010, marking Lisp's invention in 1958]
Lisp pioneered garbage collection, dynamic typing, self-hosting compilers and tree data structures.
(defun factorial (n)
(if (<= n 1)
1
(* n (factorial (- n 1)))))
14. @crichardson
My final year project in 1985:
Implementing SASL
sieve (p:xs) =
p : sieve [x | x <- xs, rem x p > 0];
primes = sieve [2..]
A list of integers starting with 2
Filter out multiples of p
15. Mostly an Ivory Tower technology
Lisp was used for AI
FP languages: Miranda, ML,
Haskell, ...
“Side-effects kill kittens and puppies”
17. @crichardson
But today FP is mainstream
Clojure - a dialect of Lisp
Scala - a hybrid OO/functional language
F# - a hybrid OO/FP language for .NET
Java 8 has lambda expressions
18. @crichardson
Java 8 lambda expressions are
functions x -> x * x
x -> {
for (int i = 2; i <= Math.sqrt(x); i = i + 1) {
if (x % i == 0)
return false;
}
return true;
};
(x, y) -> x * x + y * y
An instance of an anonymous inner class that
implements a functional interface (kinda)
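The multi-statement lambda above becomes runnable once it is assigned to a functional interface such as `java.util.function.Predicate`. A minimal sketch (the guard for values below 2 is my addition; the slide omits it):

```java
import java.util.function.Predicate;

public class PrimeCheck {
    // The slide's primality lambda, stored in a field of functional-interface type
    static Predicate<Integer> isPrime = x -> {
        if (x < 2) return false; // added guard: 0 and 1 are not prime
        for (int i = 2; i <= Math.sqrt(x); i = i + 1) {
            if (x % i == 0)
                return false;
        }
        return true;
    };

    public static void main(String[] args) {
        System.out.println(isPrime.test(13)); // true
        System.out.println(isPrime.test(9));  // false
    }
}
```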
21. @crichardson
Social network example
public class Person {
enum Gender { MALE, FEMALE }
private Name name;
private LocalDate birthday;
private Gender gender;
private Hometown hometown;
private Set<Friend> friends = new HashSet<Friend>();
....
public class Friend {
private Person friend;
private LocalDate becameFriends;
...
}
public class SocialNetwork {
private Set<Person> people;
...
22. @crichardson
Typical iterative code - e.g. filtering
public class SocialNetwork {
private Set<Person> people;
...
public Set<Person> lonelyPeople() {
Set<Person> result = new HashSet<Person>();
for (Person p : people) {
if (p.getFriends().isEmpty())
result.add(p);
}
return result;
}
Declare result variable
Modify result
Return result
Iterate
23. @crichardson
Problems with this style of programming
Low level
Imperative (how to do it) NOT declarative (what to do)
Verbose
Mutable variables are potentially error prone
Difficult to parallelize
24. @crichardson
Java 8 streams to the rescue
A sequence of elements
“Wrapper” around a collection (and other types: e.g. JarFile.stream(), Files.lines())
Streams can also be infinite
Provides a functional/lambda-based API for transforming, filtering and aggregating
elements
Much simpler, cleaner and more declarative code
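A self-contained illustration of such a pipeline (the data and the six-character cutoff are mine, chosen only to show filter/map/collect in one declarative expression):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class StreamBasics {
    // Transform, filter and aggregate in one declarative pipeline
    static Set<String> shortUpper(List<String> words) {
        return words.stream()
                .filter(w -> w.length() <= 6)   // keep only short names
                .map(String::toUpperCase)       // transform each element
                .collect(Collectors.toSet());   // aggregate into a Set
    }

    public static void main(String[] args) {
        System.out.println(
            shortUpper(List.of("map", "flatMap", "filter", "reduce", "collect")));
    }
}
```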
25. @crichardson
Using Java 8 streams - filtering
public class SocialNetwork {
private Set<Person> people;
...
public Set<Person> lonelyPeople() {
Set<Person> result = new HashSet<Person>();
for (Person p : people) {
if (p.getFriends().isEmpty())
result.add(p);
}
return result;
}
public class SocialNetwork {
private Set<Person> people;
...
public Set<Person> lonelyPeople() {
return people.stream()
.filter(p -> p.getFriends().isEmpty())
.collect(Collectors.toSet());
}
predicate
lambda expression
29. @crichardson
Using Java 8 streams - friend of friends
using flatMap
class Person ..
public Set<Person> friendOfFriends() {
return friends.stream()
.flatMap(friend -> friend.getPerson().friends.stream())
.map(Friend::getPerson)
.filter(f -> f != this)
.collect(Collectors.toSet());
}
maps and flattens
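The map-then-flatten step can be seen in isolation with a simplified model (this `Person` record and the sample data are mine, not the talk's domain classes):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class FlatMapSketch {
    // Simplified stand-in for the talk's Person/Friend classes
    record Person(String name, List<Person> friends) {}

    static Set<String> friendOfFriends(Person p) {
        return p.friends().stream()
                .flatMap(f -> f.friends().stream()) // map each friend to their friends, then flatten
                .filter(fof -> fof != p)            // exclude the person themselves
                .map(Person::name)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Person alice = new Person("Alice", List.of());
        Person carol = new Person("Carol", List.of());
        Person bob = new Person("Bob", List.of(alice, carol));
        Person dave = new Person("Dave", List.of(bob));
        System.out.println(friendOfFriends(dave)); // Alice and Carol, in some order
    }
}
```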
31. @crichardson
Using Java 8 streams - reducing
public class SocialNetwork {
private Set<Person> people;
...
public long averageNumberOfFriends() {
return people.stream()
.map ( p -> p.getFriends().size() )
.reduce(0, (x, y) -> x + y)
/ people.size();
}
Equivalent imperative loop:
int x = 0;
for (int y : inputStream)
x = x + y;
return x;
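The equivalence between `reduce` and the loop can be shown in a few runnable lines (the sample friend counts are illustrative):

```java
import java.util.List;

public class ReduceSketch {
    // reduce(identity, accumulator) folds the whole stream into one value
    static int sum(List<Integer> xs) {
        return xs.stream().reduce(0, (x, y) -> x + y);
    }

    public static void main(String[] args) {
        List<Integer> friendCounts = List.of(2, 5, 3);
        // The reduce call above is the slide's imperative loop in disguise:
        int x = 0;
        for (int y : friendCounts)
            x = x + y;
        System.out.println(sum(friendCounts) + " == " + x); // both are 10
    }
}
```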
33. @crichardson
Adopting FP with Java 8 is
straightforward
Simply start using streams and lambdas
Eclipse can refactor anonymous inner classes to lambdas
36. @crichardson
The need for concurrency
Step #1
Web service request to get the user profile including wish list (list of product Ids)
Step #2
For each productId: web service request to get product info
But
Getting products sequentially ⇒ terrible response time
Need to fetch productInfo concurrently
Composing sequential + scatter/gather-style operations is very common
37. @crichardson
Futures are a great abstraction for
composing concurrent operations
http://en.wikipedia.org/wiki/Futures_and_promises
38. @crichardson
Composition with futures
[Diagram: the main thread initiates asynchronous operations 1 and 2; worker threads (or event-driven code) set the outcome on Future 1 and Future 2; the client gets each outcome.]
39. @crichardson
But composition with basic futures is
difficult
Java 7 future.get([timeout]):
Blocking API ⇒ client blocks a thread
Difficult to compose multiple concurrent operations
Futures with callbacks:
e.g. Guava ListenableFutures, Spring 4 ListenableFuture
Attach callbacks to all futures and asynchronously consume outcomes
But callback-based code = messy code
See http://techblog.netflix.com/2013/02/rxjava-netflix-api.html
We need functional futures!
40. @crichardson
Functional futures - Scala, Java 8 CompletableFuture
def asyncPlus(x : Int, y : Int) : Future[Int] = ... x + y ...
val future2 = asyncPlus(4, 5).map{ _ * 3 }
assertEquals(27, Await.result(future2, 1 second))
Asynchronously transforms
future
def asyncSquare(x : Int) : Future[Int] = ... x * x ...
val f2 = asyncPlus(5, 8).flatMap { x => asyncSquare(x) }
assertEquals(169, Await.result(f2, 1 second))
Calls asyncSquare() with
the eventual outcome of
asyncPlus()
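Java 8's CompletableFuture provides the same combinators under different names: `thenApply` corresponds to `map` and `thenCompose` to `flatMap`. A sketch mirroring the Scala example above (`asyncPlus` and `asyncSquare` are stand-ins for real asynchronous operations):

```java
import java.util.concurrent.CompletableFuture;

public class FutureComposition {
    // Stand-ins for real async operations; supplyAsync runs them on another thread
    static CompletableFuture<Integer> asyncPlus(int x, int y) {
        return CompletableFuture.supplyAsync(() -> x + y);
    }
    static CompletableFuture<Integer> asyncSquare(int x) {
        return CompletableFuture.supplyAsync(() -> x * x);
    }

    public static void main(String[] args) {
        // thenApply is map: asynchronously transform the eventual result
        int mapped = asyncPlus(4, 5).thenApply(n -> n * 3).join();
        // thenCompose is flatMap: chain a second async operation onto the outcome
        int composed = asyncPlus(5, 8).thenCompose(FutureComposition::asyncSquare).join();
        System.out.println(mapped + " " + composed); // 27 169
    }
}
```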
44. @crichardson
Introducing Reactive Extensions (Rx)
The Reactive Extensions (Rx) is a library for composing asynchronous and
event-based programs using observable sequences and LINQ-style query
operators. Using Rx, developers represent asynchronous data streams
with Observables , query asynchronous data streams using LINQ
operators , and .....
https://rx.codeplex.com/
45. @crichardson
About RxJava
Reactive Extensions (Rx) for the JVM
Original motivation for Netflix was to provide rich Futures
Implemented in Java
Adaptors for Scala, Groovy and Clojure
Embraced by Akka and Spring Reactor: http://www.reactive-streams.org/
https://github.com/Netflix/RxJava
46. @crichardson
RxJava core concepts
trait Observable[T] {
def subscribe(observer : Observer[T]) : Subscription
...
}
trait Observer[T] {
def onNext(value : T)
def onCompleted()
def onError(e : Throwable)
}
Notifies
An asynchronous stream of items
Used to unsubscribe
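The two traits above can be modeled as plain Java interfaces. This is a minimal sketch, not the real RxJava API (the `Subscription` return value is omitted, and items are pushed synchronously):

```java
import java.util.List;

public class MiniRx {
    // Minimal versions of the Rx contracts from the slide (not RxJava itself)
    interface Observer<T> {
        void onNext(T value);
        void onCompleted();
        void onError(Throwable e);
    }
    interface Observable<T> {
        void subscribe(Observer<T> observer);
    }

    // An Observable that pushes a fixed list of items, then completes
    static <T> Observable<T> from(List<T> items) {
        return observer -> {
            try {
                for (T item : items)
                    observer.onNext(item);
                observer.onCompleted();
            } catch (Throwable t) {
                observer.onError(t);
            }
        };
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        from(List.of(1, 2, 3)).subscribe(new Observer<Integer>() {
            public void onNext(Integer value) { log.append(value); }
            public void onCompleted() { log.append("|done"); }
            public void onError(Throwable e) { log.append("|error"); }
        });
        System.out.println(log); // 123|done
    }
}
```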
47. Comparing Observable to...
Observer pattern - similar but adds
Observer.onComplete()
Observer.onError()
Iterator pattern - mirror image
Push rather than pull
Futures - similar
Can be used as Futures
But Observables = a stream of
multiple values
Collections and Streams - similar
Functional API supporting map(),
flatMap(), ...
But Observables are asynchronous
48. @crichardson
Fun with observables
val every10Seconds = Observable.interval(10 seconds)
val oneItem = Observable.items(-1L)
val ticker = oneItem ++ every10Seconds
Emits -1 at t=0, then 0 at t=10, 1 at t=20, ...
val subscription = ticker.subscribe { (value: Long) => println("value=" + value) }
...
subscription.unsubscribe()
49. @crichardson
Connecting observables to the outside world
def getTableStatus(tableName: String) : Observable[DynamoDbStatus] =
Observable { subscriber: Subscriber[DynamoDbStatus] =>
amazonDynamoDBAsyncClient.describeTableAsync(
new DescribeTableRequest(tableName),
new AsyncHandler[DescribeTableRequest, DescribeTableResult] {
override def onSuccess(request: DescribeTableRequest, result: DescribeTableResult) = {
subscriber.onNext(DynamoDbStatus(result.getTable.getTableStatus))
subscriber.onCompleted()
}
override def onError(exception: Exception) = exception match {
case t: ResourceNotFoundException =>
subscriber.onNext(DynamoDbStatus("NOT_FOUND"))
subscriber.onCompleted()
case _ =>
subscriber.onError(exception)
}
})
}
Called once per subscriber
Asynchronously gets information
about DynamoDB table
51. @crichardson
Calculating rolling average
class AverageTradePriceCalculator {
def calculateAverages(trades: Observable[Trade]):
Observable[AveragePrice] = {
...
}
case class Trade(
symbol : String,
price : Double,
quantity : Int
...
)
case class AveragePrice(
symbol : String,
price : Double,
...)
52. @crichardson
Calculating average prices
def calculateAverages(trades: Observable[Trade]): Observable[AveragePrice] = {
trades.groupBy(_.symbol).
map { case (symbol, tradesForSymbol) =>
val openingEverySecond =
Observable.items(-1L) ++ Observable.interval(1 seconds)
def closingAfterSixSeconds(opening: Any) =
Observable.interval(6 seconds).take(1)
tradesForSymbol.window(openingEverySecond, closingAfterSixSeconds _).map {
windowOfTradesForSymbol =>
windowOfTradesForSymbol.fold((0.0, 0, List[Double]())) { (soFar, trade) =>
val (sum, count, prices) = soFar
(sum + trade.price, count + trade.quantity, trade.price +: prices)
} map { case (sum, length, prices) =>
AveragePrice(symbol, sum / length, prices)
}
}.flatten
}.flatten
}
Create an Observable of per-symbol Observables
57. @crichardson
Apache Hadoop
Open-source software for reliable, scalable, distributed computing
Hadoop Distributed File System (HDFS)
Efficiently stores very large amounts of data
Files are partitioned and replicated across multiple machines
Hadoop MapReduce
Batch processing system
Provides plumbing for writing distributed jobs
Handles failures
...
59. @crichardson
MapReduce Word count - mapper
class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(LongWritable key, Text value, Context context) {
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
word.set(tokenizer.nextToken());
context.write(word, one);
}
}
}
Four score and seven years
(“Four”, 1), (“score”, 1), (“and”, 1), (“seven”, 1), ...
http://wiki.apache.org/hadoop/WordCount
61. @crichardson
MapReduce Word count - reducer
class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key,
Iterable<IntWritable> values, Context context) {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
context.write(key, new IntWritable(sum));
}
}
(“the”, (1, 1, 1, 1, 1, 1, ...))
(“the”, 11)
http://wiki.apache.org/hadoop/WordCount
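The same map/reduce shape can be expressed in-memory with Java 8 streams. A sketch, not a distributed job: `groupingBy` plays the role of the shuffle that groups each word's `(word, 1)` pairs, and `counting` is the reducer that sums them.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // Word count with the Hadoop job's map + group + reduce structure
    static Map<String, Long> wordCount(String text) {
        return Arrays.stream(text.split("\\s+"))          // "map": split into tokens
                .collect(Collectors.groupingBy(           // shuffle: group by word
                        w -> w,
                        Collectors.counting()));          // "reduce": count per word
    }

    public static void main(String[] args) {
        System.out.println(wordCount("the quick fox jumps over the lazy dog"));
    }
}
```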
62. @crichardson
About MapReduce
Very simple programming abstraction, yet incredibly powerful
By chaining together multiple map/reduce jobs you can process very large amounts of
data in interesting ways
e.g. Apache Mahout for machine learning
But
Mappers and Reducers = verbose code
Development is challenging, e.g. unit testing is difficult
It’s disk-based, batch processing ⇒ slow
63. @crichardson
Scalding: Scala DSL for MapReduce
class WordCountJob(args : Args) extends Job(args) {
TextLine( args("input") )
.flatMap('line -> 'word) { line : String => tokenize(line) }
.groupBy('word) { _.size }
.write( Tsv( args("output") ) )
def tokenize(text : String) : Array[String] = {
text.toLowerCase.replaceAll("[^a-zA-Z0-9\\s]", "")
.split("\\s+")
}
}
https://github.com/twitter/scalding
Expressive and unit testable
Each row is a map of named fields
64. @crichardson
Apache Spark
Part of the Hadoop ecosystem
Key abstraction = Resilient Distributed Datasets (RDD)
Collection that is partitioned across cluster members
Operations are parallelized
Created from either a Scala collection or a Hadoop supported datasource - HDFS, S3 etc
Can be cached in-memory for super-fast performance
Can be replicated for fault-tolerance
REPL for executing ad hoc queries
http://spark.apache.org
65. @crichardson
Spark Word Count
val sc = new SparkContext(...)
sc.textFile("s3n://mybucket/...")
.flatMap { _.split(" ")}
.groupBy(identity)
.mapValues(_.length)
.toArray.toMap
Expressive, unit testable and very fast
66. @crichardson
Summary
Functional programming enables the elegant expression of good ideas in a wide
variety of domains
map(), flatMap() and reduce() are remarkably versatile higher-order functions
Use FP and OOP together
Java 8 has taken a good first step towards supporting FP