2017.09.07 Orleans - PWL Seattle
1. ORLEANS
Distributed Virtual Actors for Programmability and Scalability
Philip A. Bernstein, Sergey Bykov, Alan Geller, Gabriel Kliot, Jorgen Thelin
Victor Hurdugaci
2. VICTOR HURDUGACI
• Romanian
• Currently: Electronic Arts, Seattle
• Past: Microsoft (ASP.NET, Azure WebJobs/Functions, X++ compiler)
• Fell asleep and woke up in a different country
• @VictorHurdugaci
• http://victorh.info
10. INTRODUCTION
• Data shipping paradigm: on every request, the data is shipped from cache/storage to the middle tier
• It does not provide data locality
[Diagram: many frontends -> stateless middle tier -> cache and storage]
11. INTRODUCTION
• Function shipping paradigm
• Stateful middle tier
• Performance benefits of cache
• Data locality
• Semantic and consistency benefits
[Diagram: many frontends -> stateful middle tier holding the data -> storage]
12. INTRODUCTION
• OOP – intuitive way to model complex systems
• Marginalized in SOA
• Loosely-coupled partitioned services don’t often match objects
• The actor model brings OOP back to system level
13. VIRTUAL ACTORS
• An actor is a computational entity that, in response to a message it receives, can
concurrently:
• Send a finite number of messages to other actors
• Create a finite number of new actors
• Designate the behavior to be used for the next message it receives (change state)
-- Wikipedia
14. OTHER SYSTEMS
• Erlang and Akka leave to the developer:
• Lifecycle management
• Distributed races
• Failure management
• Distributed resource management
15. ORLEANS VIRTUAL ACTORS
• Always exists
• No explicit creation/destruction
• Automatic resource management
• Never fail
• Location is transparent to application code
17. 2.1. VIRTUAL ACTORS
• Actor identity: type + 128-bit GUID
• Behavior
• State
• Actors are isolated from each other
18. 2.1. VIRTUAL ACTORS
1. Perpetual existence
2. Automatic instantiation (activation)
3. Location transparency
4. Automatic scale out
• Stateful – single instance
• Stateless – scale automatically (up to a limit)
19. 2.2. ACTOR INTERFACES
• Strongly typed interfaces
• All methods must be asynchronous (return Task or Task<T>)
public interface IGameActor : IActor
{
    Task<string> GameName { get; }
    Task<List<IPlayerActor>> CurrentPlayers { get; }
    Task JoinGame(IPlayerActor player);
    Task LeaveGame(IPlayerActor player);
}
20. 2.3. ACTOR REFERENCES
• Strongly typed proxy
• “GetActor” method is generated at compile time
• Request by primary key (GUID)
• References can be passed as arguments (!)
public static class GameActorFactory
{
    public static IGameActor GetActor(Guid gameId);
}
21. 2.4. PROMISES
• No blocking calls
• Promise states: unresolved, fulfilled, broken
• Closure/continuation
• .NET: System.Threading.Tasks.Task<T>
• C#: async/await
IGameActor gameActor = GameActorFactory.GetActor(gameId);
try
{
    string name = await gameActor.GameName;
    Console.WriteLine("Game name is " + name);
}
catch (Exception)
{
    Console.WriteLine("The call to the actor failed");
}
22. 2.5. TURNS
• Activations are single threaded
• Work in chunks: turns
• Turns (from different actors) can be interleaved
• [Reentrant] attribute
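Turns can be pictured with an ordinary event loop: each activation drains its mailbox one message at a time, so its own turns never overlap, while turns of different activations interleave freely. A minimal language-agnostic sketch (Python asyncio stands in for the Orleans scheduler; `Activation` and its fields are illustrative names, not Orleans API):

```python
import asyncio

class Activation:
    """Processes messages one turn at a time: no two turns of the
    same activation ever run concurrently (non-reentrant default)."""
    def __init__(self, name):
        self.name = name
        self.mailbox = asyncio.Queue()
        self.log = []

    async def run(self, n_messages):
        for _ in range(n_messages):
            msg = await self.mailbox.get()
            # One turn: runs to completion before the next message
            self.log.append(f"{self.name} handled {msg}")

async def main():
    a, b = Activation("A"), Activation("B")
    for i in range(3):
        a.mailbox.put_nowait(i)
        b.mailbox.put_nowait(i)
    # Turns of different activations interleave on one event loop,
    # but each activation stays effectively single-threaded.
    await asyncio.gather(a.run(3), b.run(3))
    return a.log, b.log

log_a, log_b = asyncio.run(main())
print(log_a)  # -> ['A handled 0', 'A handled 1', 'A handled 2']
```

Within one activation the per-message order is preserved even though the loop is juggling both actors, which is exactly the single-threading guarantee the slide describes.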
27. 3.1 RUNTIME IMPLEMENTATION
[Diagram: an Orleans cluster of servers, each running an Orleans process that hosts many actors]
28. 3.1 RUNTIME IMPLEMENTATION
Orleans process components:
• Messaging
  • 1 TCP connection between each pair of servers
  • Multiplexes messages
• Hosting
  • Actor placement
  • Actor lifecycle
  • Resource management
• Execution
  • Actor code execution
  • Reentrancy
  • Single threading
30. 3.2. DISTRIBUTED DIRECTORY
• Maps actor id <-> location
• One-hop distributed hash table
• Each server holds a partition of the directory
• Actor partitioning: consistent hashing
• (De)Activations update records in the table
• Directory enforces single activations
• Avoid extra hop: large (millions of entries) cache of recent activations
[Diagram, built up over slides 30-35: to deliver "Act.1244 -> Hello", the sender (1) asks the server whose directory partition holds the record for Act.1244, (2) receives the answer "server C", and (3) delivers "Hello" to the activation on C]
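The directory mechanics can be sketched compactly: hash each actor id onto a ring of servers, store the actor's location record on the owning server, and answer lookups in one hop. This is a minimal illustration, not the Orleans implementation; the class and method names are invented for the sketch:

```python
import hashlib
from bisect import bisect_right

class Directory:
    """One-hop directory sketch: each server owns ranges of a consistent-
    hash ring and stores actor->location records hashing into them."""
    def __init__(self, servers, replicas=32):
        # Virtual nodes smooth out the partition sizes
        self.ring = sorted(
            (self._hash(f"{s}:{i}"), s)
            for s in servers for i in range(replicas))
        self.partitions = {s: {} for s in servers}

    @staticmethod
    def _hash(key):
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def owner(self, actor_id):
        # The server whose directory partition holds this actor's record
        h = self._hash(actor_id)
        keys = [k for k, _ in self.ring]
        i = bisect_right(keys, h) % len(self.ring)
        return self.ring[i][1]

    def register(self, actor_id, location):
        # On activation: record where the single activation lives
        self.partitions[self.owner(actor_id)][actor_id] = location

    def lookup(self, actor_id):
        # One hop: ask the owning partition directly
        return self.partitions[self.owner(actor_id)].get(actor_id)

d = Directory(["A", "B", "C", "D"])
d.register("Act.1244", "C")
print(d.lookup("Act.1244"))  # -> C
```

Because the owning partition is computed locally from the hash, any server can find any actor's record with a single remote call; the per-server cache of recent activations mentioned above skips even that hop on repeat traffic.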
36. 3.3. STRONG ISOLATION
• Actors don’t share state and communication is always message based
• Method arguments & return values are deep copied
• Deep copying guarantees isolation: no shared mutable state between actors
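Why deep copying matters is easy to demonstrate: if a sender mutates an argument after the call, the receiver must not observe the change. A tiny sketch (the `Mailbox` class is illustrative, not Orleans API):

```python
import copy

class Mailbox:
    """Messages are deep-copied on send, so the sender's later
    mutations never leak into the receiver's copy."""
    def __init__(self):
        self.messages = []

    def send(self, payload):
        # Without deepcopy, sender and receiver would share the list
        self.messages.append(copy.deepcopy(payload))

box = Mailbox()
players = ["alice"]
box.send(players)
players.append("bob")   # sender mutates after the call...
print(box.messages[0])  # -> ['alice'] : receiver is unaffected
```

The copy has a runtime cost, which is why Orleans lets applications mark arguments immutable to skip it when safe.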
37. 3.8. RELIABILITY
• Orleans manages everything except persistence
• Failure detection: heartbeats
• Membership view is eventually consistent (converges in 30-60s on production servers)
• A “dead” actor is activated on a different server
• Actor lifespan not linked to server lifespan
• No checkpoint strategy (depends on application)
• Misrouted messages are sent to the right destination and sender is notified
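Heartbeat-based failure detection reduces to a timestamp check: a server is suspected dead once no heartbeat has arrived within a timeout. A simplified stand-in for Orleans membership (class and parameter names are invented for the sketch):

```python
class FailureDetector:
    """Suspects a server once no heartbeat has arrived within
    `timeout` ticks; a toy stand-in for Orleans membership."""
    def __init__(self, servers, timeout=3):
        self.timeout = timeout
        self.last_seen = {s: 0 for s in servers}

    def heartbeat(self, server, now):
        # Record the latest heartbeat time for this server
        self.last_seen[server] = now

    def suspects(self, now):
        # Servers silent for longer than the timeout
        return {s for s, t in self.last_seen.items()
                if now - t > self.timeout}

fd = FailureDetector(["A", "B"], timeout=3)
fd.heartbeat("A", now=5)
fd.heartbeat("B", now=1)
print(fd.suspects(now=6))  # -> {'B'}: silent for 5 > 3 ticks
```

Because each server evaluates this locally, views disagree briefly, which is exactly why the membership view above is only eventually consistent.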
38. 3.9. EVENTUAL CONSISTENCY
• Without failures: single-activation guarantee
• Under failures: single activation is guaranteed only eventually
• Two activations may briefly exist in two different partitions
• One is eventually dropped
• Availability over consistency
39. 3.10. MESSAGING GUARANTEES
• At-least-once message guarantee
• Exactly-once can be implemented at application level (request id)
• No FIFO guarantees (sender: A, B, C; receiver: B, A, C)
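The request-id technique the slide mentions is a small amount of application code: remember the ids already processed and ignore redeliveries. A minimal sketch (the `DedupActor` class is illustrative, not Orleans API):

```python
class DedupActor:
    """At-least-once delivery can redeliver a message; remembering
    processed request ids makes handling exactly-once in effect."""
    def __init__(self):
        self.seen = set()
        self.count = 0

    def handle(self, request_id, payload):
        if request_id in self.seen:
            return "duplicate"   # redelivery: skip side effects
        self.seen.add(request_id)
        self.count += 1          # the side effect runs exactly once
        return "processed"

actor = DedupActor()
print(actor.handle("req-1", "join"))  # -> processed
print(actor.handle("req-1", "join"))  # -> duplicate (retry)
print(actor.count)                    # -> 1
```

In practice the `seen` set must be bounded (e.g. expire old ids), since senders stop retrying after acknowledgement.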
47. ORLEANS DRAWBACKS
• Bulk operations
• Large number of actors (billions) + no temporal locality
• No cross-actor transactions
48. 6.2. OTHER ACTOR FRAMEWORKS
• Akka http://akka.io/
• Erlang https://www.erlang.org/
Not in the paper:
• Orbit (Java/JVM) https://github.com/orbit/orbit