1. Cloud and Acceleration of Java Applications with ProActive Parallel Suite
D. Caromel, et al.
Agenda
1. Background: INRIA, ActiveEon
2. ProActive OS Toolkit: Programming, Scheduling, Resourcing
3. UC 1: Genomics + Cloud
4. UC 2: Finance
5. UC 3: IT, SOA Web Server Log Analysis
Parallelism + Distribution + Virtualization
Enterprise Grids & Clouds
3. OASIS Team & INRIA
A joint team, now about 35 people.
2004: First ProActive User Group.
2009, April: ProActive 4.1, Distributed & Parallel: from multi-cores to enterprise GRIDs.
4. OASIS Team Composition (35)
Researchers (5): D. Caromel (UNSA, Det. INRIA), E. Madelaine (INRIA), F. Baude (UNSA), F. Huet (UNSA), L. Henrio (CNRS)
PostDoc (1): Regis Gascon (INRIA)
Engineers (10): Elaine Isnard (AGOS), Fabien Viale (ANR OMD2, Renault), Franca Perrina (AGOS), Germain Sigety (INRIA), Yu Feng (ETSI, FP6 EchoGrid), Bastien Sauvan (ADT Galaxy), Florin-Alexandru Bratu (INRIA CPER), Igor Smirnov (Microsoft), Fabrice Fontenoy (AGOS), open position (Thales)
PhDs (11): Antonio Cansado (INRIA, Conicyt), Brian Amedro (SCS-Agos), Cristian Ruz (INRIA, Conicyt), Elton Mathias (INRIA-Cordi), Imen Filali (SCS-Agos / FP7 SOA4All), Marcela Rivera (INRIA, Conicyt), Muhammad Khan (STIC-Asia), Paul Naoumenko (INRIA/Région PACA), Viet Dung Doan (FP6 Bionets), Virginie Contes (SOA4ALL), Guilherme Pezzi (AGOS, CIFRE SCP)
Trainees (2): Etienne Vallette d'Osia (Master 2 ISI), Laurent Vanni (Master 2 ISI)
Assistants (2): Patricia Maleyran (INRIA), Sandra Devauchelle (I3S)
+ Visitors + Interns
Located in Sophia Antipolis, between Nice and Cannes. Visitors and students welcome!
5. Startup Company Born of INRIA
Some partners: co-developing, support for ProActive Parallel Suite.
Worldwide customers: France, UK, Boston (USA).
10. ProActive: Active Objects

A ag = newActive("A", [...], VirtualNode);
V v1 = ag.foo(param);
V v2 = ag.bar(param);
...
v1.bar(); // Wait-By-Necessity

[Diagram: the active object ag lives on a remote JVM with its own request queue and serving thread; v1 and v2 are future objects held by the caller through proxies. Using v1 before its result has arrived triggers Wait-By-Necessity, a transparent dataflow synchronization. Legend: Java Object, Active Object, Request Queue, Future Object, Proxy, Request, Thread.]
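The call sequence above can be mimicked with plain `java.util.concurrent` futures. This is an analogy only, not the ProActive API: ProActive's wait is transparent at the point of first use, whereas here the blocking `get()` is explicit, and all names below are illustrative.

```java
import java.util.concurrent.*;

// Analogy for Wait-By-Necessity using standard Java futures.
// In ProActive the proxy blocks automatically; here we block via get().
public class WaitByNecessitySketch {
    static int foo(int param) { return param * 2; }  // stand-in for A.foo

    public static void main(String[] args) throws Exception {
        // One worker thread plays the role of the remote active object's JVM.
        ExecutorService jvm = Executors.newSingleThreadExecutor();
        Future<Integer> v1 = jvm.submit(() -> foo(21)); // asynchronous call returns a future
        // ... the caller keeps computing here without waiting ...
        int result = v1.get(); // "wait-by-necessity": block only when the value is needed
        System.out.println(result); // prints 42
        jvm.shutdown();
    }
}
```

The key property carried over from the slide: the caller continues past the asynchronous call and only synchronizes when the future's value is actually used.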
12. Broadcast and Scatter
Broadcast is the default behavior: use a group as a parameter. Scattering depends on rankings.

ag.bar(cg); // broadcast cg
ProActive.setScatterGroup(cg);
ag.bar(cg); // scatter cg

[Diagram: the group cg (members c1, c2, c3) is passed to the group ag, whose members are spread over several JVMs. With broadcast, every member of ag receives the whole of cg; with scatter, each member receives one element of cg according to its rank.]
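The two distribution modes can be sketched without ProActive at all. The following is an illustrative model, not the ProActive group API: "broadcast" hands the entire parameter group to every worker, while "scatter" assigns element i to worker i by rank.

```java
import java.util.*;

// Illustrative sketch of group-parameter semantics (not ProActive code).
public class GroupCallSketch {
    // Broadcast: every worker sees the full parameter group.
    static List<List<String>> broadcast(List<String> params, int workers) {
        List<List<String>> perWorker = new ArrayList<>();
        for (int i = 0; i < workers; i++) perWorker.add(new ArrayList<>(params));
        return perWorker;
    }

    // Scatter: worker i receives the element matching its rank.
    static List<List<String>> scatter(List<String> params, int workers) {
        List<List<String>> perWorker = new ArrayList<>();
        for (int i = 0; i < workers; i++)
            perWorker.add(List.of(params.get(i % params.size())));
        return perWorker;
    }

    public static void main(String[] args) {
        List<String> cg = List.of("c1", "c2", "c3");
        System.out.println(broadcast(cg, 3)); // each worker gets [c1, c2, c3]
        System.out.println(scatter(cg, 3));   // worker 0 gets [c1], worker 1 gets [c2], ...
    }
}
```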
16. GCM Fractal Deployment Standard
Interoperability: a Cloud will start with the existing IT infrastructure; build a non-intrusive Cloud with the ProActive Scheduler.
Protocols: rsh, ssh, oarsh, gsissh, GroupSSH, GroupRSH, GroupOARSH
Schedulers and Grids: ARC (NorduGrid), CGSP China Grid, EGEE gLite, Fura/InnerGrid (GridSystems Inc.), GLOBUS, GridBus, IBM LoadLeveler, LSF, Microsoft CCS (Windows HPC Server 2008), Sun Grid Engine, OAR, PBS/Torque, PRUN
Clouds: Amazon EC2
17. GCM Official Standardization
Grid Component Model
Overall, the standardization is supported by industrial players: BT, FT-Orange, Nokia-Siemens, NEC, Telefonica, Alcatel-Lucent, Huawei …
39. Resource Set-Up
Resources: the SOLiD machine, a 16-node cluster, desktops, and EC2 clouds.
Nodes can be dynamically added!
40. First Benchmarks
The distributed ProActive version of MapReads was tested on the INRIA cluster with two settings: the Reads file is split into either 30 or 10 slices.
Use case: matching 31 million sequences against the Human Genome (M=2, L=25).
4 times faster from 20 to 100; speed-up of 80 / Th.
Sequential: 50 h 35 min.
EC2-only test: nearly the same performance as the local SOLiD cluster (+10%).
For only $3.2/hour, EC2 has nearly the same performance as the local SOLiD cluster (16 cores, for 2h30).
Ongoing: benchmarks on Windows desktops and HPCS 2008 …
41. Benchmark: Local vs. Hybrid Cloud
Use case: 3 runs performed in parallel, containing a total of 28.5 million reads to be matched against the human genome.

SOLiD nodes only: standard configuration using 12 SOLiD embedded nodes. Total computation time: 12.5 hours.
SOLiD and EC2 nodes: 12 SOLiD nodes plus 12 EC2 machines (type "m1.large", 2 nodes each). Total computation time: 8 hours.
Gain: 4.5 hours (36% faster). EC2 cost: $40.
42. Benchmark: Local vs. EC2 Cloud

                          Execution time (min)   Cost (US$)
Standard PBS config                        300          N/A
ProActive on Amazon EC2                    340           20

For only $3.2/hour, the EC2 setup has nearly the same performance as the local SOLiD cluster.
44. A High-Performance Solution
A collaboration between Pricing Partners and ActiveEon: Price-it® Excel accelerated by ProActive Parallel Suite®.
A global solution: fully integrated, with the same functionalities and interface as Price-it Excel, while increasing its computing power.
High-quality service: from both companies.
45. Some Technical Facts
Price-It®: a C++ library developed by Pricing Partners; a pricing solution dedicated to highly complex financial derivatives.
Specification and constraints: accelerate the Price-It® Excel product, which is built on the Price-It® library and integrates an interface with Excel for input-data management and results display; focus on highly parallelizable Greek computation; operating system: Windows.
46. How Does It Work?
Price-it computing distribution: the regular Price-it Excel interface submits work to the ProActive Scheduler, which executes it automatically via the job scheduler on a pool of shared resources.
47. Accelerated Price-it Performance
Increased productivity: reduces Price-it execution time by a factor of 6 or more!
Use case: Bermuda Vanilla, Model American MC.
More than 3 times faster with only 4 nodes! Even 6 times faster with 9 nodes!
Test conditions: one computation is split into 130 tasks that are distributed; each task uses 300 KB.
[Chart: sequential vs. distributed execution time for 4 to 9 nodes.]
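The split-and-distribute pattern described above can be sketched with a standard thread pool. This is a stand-in, not Price-it or ProActive code: `priceSlice` is a hypothetical task body, and the pool of local threads models the pool of grid nodes.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch: one computation split into independent tasks, distributed to a
// fixed pool of workers, with partial results gathered at the end.
public class TaskSplitSketch {
    static double priceSlice(int slice) { return slice * 0.5; } // illustrative task body

    static double distribute(int tasks, int nodes) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nodes); // "grid nodes"
        List<Future<Double>> parts = new ArrayList<>();
        for (int i = 0; i < tasks; i++) {
            final int slice = i;
            parts.add(pool.submit(() -> priceSlice(slice))); // one task per slice
        }
        double total = 0;
        for (Future<Double> p : parts) total += p.get(); // gather partial results
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        // 130 tasks on 9 nodes, matching the test conditions on the slide.
        System.out.println(distribute(130, 9));
    }
}
```

Because the 130 tasks are independent, adding nodes shortens the longest worker queue, which is why the slide's speed-up grows from 3x at 4 nodes toward 6x at 9 nodes.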
49. Parallel Services
Separation: BPEL – Parallel Services – Task Flow. Standard and portable; flexible.
[Diagram, layered: a high-level Business Process (BPEL) on top; Parallel Services in the middle (Job Scheduling, Parameter Sweeping Service, Divide & Conquer, domain-specific, basic, and other operational services); below them, scheduling of task-flow jobs, the Resource Manager, and the Enterprise Grid.]
50. AMADEUS Case Study: Demonstration
[Diagram: a BPEL engine drives the ProActive Scheduler and Resource Manager over the grid; log-transfer and log-parsing tasks run on grid nodes, correlated by an application ID; parsed logs are stored in a MySQL database behind a WS DB interface; an AGOS web client and admin consoles (BPEL console, Scheduler/RM console) access the system through BPEL/WS interfaces.]
51. Grid Monitoring Integration
[Diagram: the Scheduler and Resource Manager expose jobs, tasks, and resources through JDBC and JMX interfaces; a discovery module (DDM) creates configuration items in a Universal CMDB; HP Business Availability Center collects grid job status indicators and grid component statistics (via SiS collectors) from the pool of native and virtual-machine nodes (VM1–VMx) in the grid.]
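The slide names a JMX interface for collecting grid indicators. The generic JMX pattern looks as follows; this queries the local platform MBean server with a standard JVM MBean, since the actual ProActive MBean names are not given in the deck.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Generic JMX indicator collection: look up an MBean by name and read an
// attribute. A monitoring tool like HP-BAC does the same against the
// scheduler's exposed MBeans (names here are the standard JVM ones).
public class JmxIndicatorSketch {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
        Object cpus = server.getAttribute(os, "AvailableProcessors");
        System.out.println("AvailableProcessors = " + cpus);
    }
}
```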
52. AGOS Platform Management
HP Business Availability Center (HP-BAC): monitors the entire platform, covers all layers in scope, and provides monitoring dashboards and reports.
Task Scheduler & Resource Manager: integrates with grid components; provides grid insight through indicator collection and by running jobs on grid resources.
59. Cloud Seeding with ProActive
[Diagram, repeated on slides 59–68: a user with a noised video file; a web interface in front of the ProActive Scheduler + Resource Manager; pools of CPU nodes and GPU nodes, extended with Amazon EC2.]
60. Cloud Seeding with ProActive
The user submits their noised video to the web interface.
61. Cloud Seeding with ProActive
The web server submits a denoising job to the ProActive Scheduler.
62. Cloud Seeding with ProActive
CPU nodes are used to split the video into smaller ones.
64. Cloud Seeding with ProActive
GPU nodes are responsible for denoising these small videos.
66. Cloud Seeding with ProActive
CPU nodes merge the denoised video parts.
68. Cloud Seeding with ProActive
The final denoised video is sent back to the user.
70. Standard System at Runtime: No Sharing
NoC: Network on Chip
Proofs of determinism.
71. AGOS: Grid Architecture for SOA
Building a platform for agile SOA with the grid: AGOS Solutions, in open source with professional support.
72. AGOS Infrastructure Management
HP Systems Insight Manager (HP-SIM): monitors the entire infrastructure; communicates with the upper-layer management software (HP BAC).
Citrix XenCenter: hypervisor and VM management; communicates with the upper-layer management software (HP BAC).
73. AGOS and HP Management Tools Integration
Monitoring and discovery span all layers:
Services
Processes and Jobs
Grid Components: Scheduler, Resource Manager
Hypervisors / Virtual Machines: Xen and VMware hosts and guests
Hardware Infrastructure: servers, storage, network components