Implementing responsive, high-performance applications is one of the most visible challenges we face in our programming life, and concurrency and parallelism on the JVM reward deep study. In this talk you will learn how to describe parallel tasks and the ideas behind Futures and the execution context. I will cover the tricky part of concurrency, when concurrent tasks share and use the same resources, and how Futures flying in the same sky can make the sun rise at midnight! At the end I will talk about some possible solutions that you can use to reduce your worries about the pitfalls of concurrency.
5. Concurrency
“Concurrency refers to the ability of different
parts or units of a program, algorithm, or
problem to be executed out-of-order or in
partial order, without affecting the final
outcome.” - Wikipedia
10. Thread
A linear flow of execution
managed by a Scheduler.
def task(i: Int): Thread =
  new Thread(() => {
    println(s"Instruction 1: Task $i")
    println(s"Instruction 2: Task $i")
  })
task(1).start()
task(2).start()
11. Thread
A linear flow of execution
managed by a Scheduler.
> run
Instruction 1: Task 1
Instruction 1: Task 2
Instruction 2: Task 1
Instruction 2: Task 2
12. Thread
A linear flow of execution
managed by a Scheduler.
> run
Instruction 1: Task 1
Instruction 1: Task 2
Instruction 2: Task 2
Instruction 2: Task 1
13. Thread
A linear flow of execution
managed by a Scheduler.
> run
Instruction 1: Task 2
Instruction 2: Task 2
Instruction 1: Task 1
Instruction 2: Task 1
14. Execution order
Depends on the number of
instructions and the number of
threads.
20. ThreadPoolExecutor
. Helps to save resources
. Controls the number of
threads created in
the application.
def task(i: Int): Runnable =
  () => {
    println(s"Iteration 1: Task $i")
    println(s"Iteration 2: Task $i")
  }
val service: ExecutorService =
  Executors.newFixedThreadPool(2)
(1 to 1000).foreach(i =>
  service.submit(task(i)))
21. def task(i: Int): Runnable =
  () => {
    println(s"Iteration 1: Task $i")
    println(s"Iteration 2: Task $i")
  }
val service: ExecutorService =
  Executors.newFixedThreadPool(2)
(1 to 1000).foreach(i =>
  service.submit(task(i)))
ThreadPoolExecutor
. Helps to save resources
. Controls the number of
threads created in
the application.
22. def task(i: Int): Future[Unit] =
Future{
println(s"Iteration 1: Task $i")
println(s"Iteration 2: Task $i")
}
(1 to 1000).foreach(task)
Scala Future
Describes a non-blocking
computation.
23. def task(i: Int): Future[Unit] =
Future{
println(s"Iteration 1: Task $i")
println(s"Iteration 2: Task $i")
}
(1 to 1000).foreach(task)
Scala Future
Describes a non-blocking
computation.
29. Await.result
lazy val flyingFuture =
  Future {
    Thread.sleep(2000)
    println(
      "after 2 seconds I am still flying!"
    )
  }
Await.result(flyingFuture, 1.second)
// throws a TimeoutException: the Future needs 2 seconds
42. val changeSunState: Future[State] =
computeSunState.map { v =>
state = state.copy(sunState = v)
}
val changeMoonState: Future[State] =
computeMoonState.map { v =>
state = state.copy(moonState = v)
}
val changeEarthState: Future[State] =
computeEarthState.map { v =>
state = state.copy(earthState = v)
}
case class State(
sunState: Sun.State,
moonState: Moon.State,
earthState: Earth.State
)
61. ZIO STM
Provides the ability to atomically commit a series of reads and
writes to transactional memory when a set of conditions is
satisfied.
STM[E, A]
Describes a transaction.
TRef[A]
Describes the transactional
memory that will be read and
written inside the transaction.
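Putting the two types together, a classic atomic update looks like the following sketch (written against the ZIO 1.x STM API; the transfer example and the values are illustrative):

```scala
import zio._
import zio.stm._

object StmExample {
  // Atomically move `amount` between two transactional references:
  // either both updates commit, or neither does.
  def transfer(from: TRef[Int], to: TRef[Int], amount: Int): STM[Nothing, Unit] =
    for {
      _ <- from.update(_ - amount)
      _ <- to.update(_ + amount)
    } yield ()

  // Committing turns the transaction into an ordinary ZIO effect.
  val program: UIO[(Int, Int)] =
    for {
      a  <- TRef.make(100).commit
      b  <- TRef.make(0).commit
      _  <- transfer(a, b, 30).commit
      av <- a.get.commit
      bv <- b.get.commit
    } yield (av, bv)
}
```

Because the reads and writes inside `transfer` form one transaction, no other fiber can ever observe the money "in flight" between the two references.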
71. Useful links
Concurrency is complicated
★ https://medium.com/@wiemzin
★ ZIO-STM: https://github.com/zio/zio
★ Akka Actor
★ Upgrade your Future by John De Goes
★ Functional programming libraries: ZIO, Cats Effect, Monix
★ Presentation template by Slidesgo
todo
example & slide for zio.fork
fiber.join.timeout(1.second)
If you have a concurrent program, there is something to deal with: what you want to achieve is that your program returns a correct result, and the concurrent computations shouldn't affect your output.
Concurrent programming is the management of sharing and timing.
A linear flow of execution that can be managed independently by a Scheduler.
A Scheduler is a part of the Operating System that decides which thread runs at a certain point in time. The time each thread receives is non-deterministic, which makes concurrency tricky.
Every statement in a program is translated into many CPU instructions
For N instructions and T threads there are N * T steps, and a context switch between the T threads can happen at each step.
From Clean Code Book page 322
2 instructions and 2 threads give 4 steps, so we can have 4! / (2! × 2!) = 6 possible execution orders: I1T1 I2T1 I1T2 I2T2, I1T1 I1T2 I2T1 I2T2, I1T1 I1T2 I2T2 I2T1, I1T2 I1T1 I2T1 I2T2, I1T2 I1T1 I2T2 I2T1, I1T2 I2T2 I1T1 I2T1
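This count can be checked mechanically: take every permutation of the four steps and keep only those where each thread's instruction 1 precedes its own instruction 2. A minimal sketch (the step labels are illustrative):

```scala
// Enumerate interleavings of 2 threads x 2 instructions where
// each thread's own instructions stay in order (I1 before I2).
val steps = List("I1T1", "I2T1", "I1T2", "I2T2")

def inOrder(p: List[String], thread: String): Boolean =
  p.indexOf(s"I1$thread") < p.indexOf(s"I2$thread")

val valid = steps.permutations
  .filter(p => inOrder(p, "T1") && inOrder(p, "T2"))
  .toList

println(valid.size) // 4! / (2! * 2!) = 6
valid.foreach(p => println(p.mkString(" ")))
```

With more instructions or more threads the count explodes, which is exactly why reasoning about all interleavings by hand is hopeless.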
But what if you want to run 100 tasks?
Depending on the number of available cores: if you have 4 cores, you will have 4 threads running in parallel and the other threads will be suspended.
And creating a thread is an expensive operation.
What you can do is define a fixed number of threads: a pool of threads, represented by a blocking queue.
The thread pool executor uses a blocking queue. It keeps storing all the tasks that you have submitted to the executor service, and all threads (workers) are always running, performing the same steps:
Take a task from the queue
Execute it
Take the next, or wait until a task is added to the queue
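The worker loop above can be sketched directly with a blocking queue. This is a simplified model of what Executors.newFixedThreadPool builds internally, not the real implementation; the TinyPool name is illustrative:

```scala
import java.util.concurrent.LinkedBlockingQueue

// Simplified model of a fixed thread pool: N workers loop forever,
// taking tasks from a shared blocking queue and executing them.
class TinyPool(nThreads: Int) {
  private val queue = new LinkedBlockingQueue[Runnable]()

  (1 to nThreads).foreach { _ =>
    val worker = new Thread(() => {
      while (true) {
        val task = queue.take() // blocks until a task is available
        task.run()
      }
    })
    worker.setDaemon(true) // let the JVM exit even if workers are idle
    worker.start()
  }

  def submit(task: Runnable): Unit = queue.put(task)
}
```

The real ThreadPoolExecutor adds shutdown, failure recovery, and bounded queues on top of this loop, but the core take/run cycle is the same idea.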
* Creates a thread pool that reuses a fixed number of threads
* operating off a shared unbounded queue. At any point, at most
* {@code nThreads} threads will be active processing tasks.
* If additional tasks are submitted when all threads are active,
* they will wait in the queue until a thread is available.
* If any thread terminates due to a failure during execution
* prior to shutdown, a new one will take its place if needed to
* execute subsequent tasks. The threads in the pool will exist
* until it is explicitly {@link ExecutorService#shutdown shutdown}.
The global execution context sets maxThreads depending on the number of available cores on the system.
It means that when you run concurrent Futures using the global execution context, the performance of your program depends on the number of processors available to the JVM.
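You can see both halves of this claim directly: the processor count the JVM reports, and a Future actually running on one of the global context's pool threads. A minimal sketch:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// The global execution context is backed by a pool sized from
// the number of processors available to the JVM.
println(s"Available processors: ${Runtime.getRuntime.availableProcessors}")

// A Future submitted here runs on one of that pool's threads.
val threadName = Await.result(Future(Thread.currentThread.getName), 1.second)
println(s"Future ran on: $threadName")
```

Running the same program on a machine with more cores changes how many Futures can make progress truly in parallel, without changing a line of code.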
The program changes its meaning: on the left-hand side it is sequential; on the right-hand side it is parallel.
- you can treat your program as data, passing it around to other functions and composing it with other data, and thus reach more structured and composable code.
referential transparency: no surprises
You can think about making your Future lazy,
or use an FP library for effectful computations, which describes programs as values and makes expressions referentially transparent.
Make things immutable; but if you do need shared mutable state, you would have to use locks/semaphores, or, if you already chose an FP library, ZIO provides a data structure that enables you to define a transaction with transactional references that is committed atomically.
The earth determines day/night using the state of the sun and the moon.
The moon computes which phase it should be in at any point in time, changing its state using the sun's state.
The sun changes its own state and leaves the moon and earth states unchanged.
At midnight the moon's phase is supposed to change, but at that point in time the sun's state had already been updated by the sun, and the recent change of the moon wasn't considered; the earth computed its state using the sun and moon states, and the sun rises at midnight due to this race condition.
On the JVM, dealing with mutable state is really a problem that we should consider: we have to work hard to prevent starvation, deadlocks, and race conditions.
How do threads interact through memory?
Registers perform operations very fast, and the cache memory layers can be accessed by the CPU.
All cores can access the main memory.
When a CPU needs to access main memory, it reads part of main memory into its CPU cache. Some CPUs have multiple cache layers (Level 1 and Level 2).
It may even read part of the cache into its internal registers and then perform operations on it. When the CPU needs to write the result back to main memory, it flushes the value from its internal register to the cache memory, and at some point flushes the value back to main memory.
Executing loads/stores in memory is done to exchange information between caches. Stores are more expensive than loads! The results have to be shared with the other cores via main memory, and this takes time.
A race condition is a special condition that may occur inside a critical section: a section of code that is executed by multiple threads, where the sequence of execution of the threads makes a difference in the result of the concurrent execution.
Copying the changes made by the threads means the writing thread crosses the memory barrier and the reading thread crosses the memory barrier. The synchronized and volatile keywords force the changes to be globally visible on a timely basis; they help cross the memory barrier, accidentally or intentionally.
But these solutions will reduce the performance of the program.
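In Scala, crossing the memory barrier deliberately looks like this sketch: @volatile guarantees visibility of writes, while synchronized guarantees both visibility and mutual exclusion (the Counter class is illustrative):

```scala
class Counter {
  // @volatile: every read sees the most recent write by any thread,
  // but a read-modify-write like `count + 1` is still NOT atomic.
  @volatile private var running = true

  private var count = 0

  // synchronized: mutual exclusion plus a memory barrier, so the
  // increment is atomic and visible to the next thread that locks.
  def increment(): Unit = synchronized { count += 1 }

  def current: Int = synchronized { count }

  def stop(): Unit = running = false
  def isRunning: Boolean = running
}
```

Both keywords pay for their safety: volatile reads/writes bypass some cache optimizations, and synchronized adds lock acquisition, which is the performance cost the note above refers to.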
But this program is sequential.
To describe a parallel computation, you only have to call