Timo Klerx and Kalman Graffi. Bootstrapping Skynet: Calibration and Autonomic Self-Control of Structured Peer-to-Peer Networks. In IEEE P2P ’13: Proceedings of the International Conference on Peer-to-Peer Computing, 2013.
Abstract—Peer-to-peer systems scale to millions of nodes and provide routing and storage functions with best-effort quality. In order to provide a guaranteed quality of the overlay functions, even under strong dynamics in the network with regard to peer capacities, online participation, and usage patterns, we propose to calibrate the peer-to-peer overlay and to autonomously learn which qualities can be reached. For that, we simulate the peer-to-peer overlay systematically under a wide range of parameter configurations and use neural networks to learn the effects of the configurations on the quality metrics. Thus, once the overlay operator chooses a specific quality setting, the network can tune itself to the learned parameter configurations that lead to the desired quality. Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.
Bootstrapping Skynet: Calibration and Autonomic Self-Control of Structured Peer-to-Peer Networks
Timo Klerx and Kalman Graffi
Department of Computer Science
University of Paderborn
Research Group Knowledge-Based Systems
Hans Kleine Büning
September 11, 2013
Motivation · Approach · Evaluation · Conclusion
Bootstrapping SkyNet
Towards self-optimization
SkyNet: Management layer in PeerfactSim.KOM
(P2P) systems become increasingly complex
Applications
Parameters
Layers
. . .
Ideally, systems manage themselves
Choose parameters
Defend against attacks
Restore network structure
. . .
Bootstrapping Skynet Klerx and Graffi 3/17
MAPE
How to achieve self-management?
Monitor
Analyze
Plan
Execute
Systems implementing the MAPE loop are autonomic.
Everything except the Plan phase is already implemented.
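The Monitor-Analyze-Plan-Execute cycle can be sketched as plain Python. The class and callbacks below are illustrative stand-ins, not SkyNet's actual interface:

```python
class MapeLoop:
    """Illustrative Monitor-Analyze-Plan-Execute cycle (hypothetical API)."""

    def __init__(self, monitor, analyze, plan, execute):
        self.monitor, self.analyze = monitor, analyze
        self.plan, self.execute = plan, execute

    def step(self, goal):
        metrics = self.monitor()        # observe current quality metrics
        state = self.analyze(metrics)   # derive the environment state
        params = self.plan(goal, state) # choose new overlay parameters
        self.execute(params)            # push parameters to the nodes
        return params

# toy usage: the plan callback just echoes the goal-derived parameter
loop = MapeLoop(monitor=lambda: {"hops": 5},
                analyze=lambda m: m,
                plan=lambda goal, state: {"timeout": goal["timeout"]},
                execute=lambda p: None)
assert loop.step({"timeout": 10}) == {"timeout": 10}
```

In the paper's setting, only the `plan` callback is missing; the neural-network regressor described later fills that role.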
Plan Phase
Idea
Offline
Gather data by simulation
Learn the interdependencies in the data
Construct a regressor with the goal as input to compute parameter values
Online
Define a desired goal
Ask the regressor for optimal parameter values
Change parameter values on every node
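The offline/online split can be sketched with a stand-in regressor (a 1-D least-squares line instead of a neural network; the toy simulation and all names are assumptions):

```python
# Offline: gather (goal, parameter) pairs by "simulation", fit a regressor.
# A 1-D least-squares line stands in for the neural network.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return lambda x: my + b * (x - mx)

# pretend simulation: achieved quality is half the configured timeout
sim = lambda timeout: timeout / 2.0
timeouts = [2.0, 4.0, 6.0, 8.0]
qualities = [sim(t) for t in timeouts]

# regressor with the goal (quality) as input, parameter (timeout) as output
regressor = fit_line(qualities, timeouts)

# Online: define a desired goal, ask for the parameter value, deploy it.
desired_quality = 2.5
new_timeout = regressor(desired_quality)
assert abs(new_timeout - 5.0) < 1e-9  # quality 2.5 needs timeout 5
```

The online step is cheap: one forward evaluation of the trained model, after which the resulting values are distributed to every node.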
Neural Networks
Basics
Classification and regression
(Often) supervised learning – need labeled training data
Learn effects of parameters
Input must be specified precisely
Can approximate arbitrary functions with arbitrary precision
Data Generation
Data characteristics
Three types of figures
Environment Parameters (E, |E| = 5) – Changed by all users
node count, churn, . . .
Overlay Parameters (O, |O| = 8) – Changeable by single nodes
message timeout, max hop count, . . .
Metrics (M, |M| = 18) – Performance values
avg. hop count, avg. network messages in, . . .
View as function f : E × O → M
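Each simulation run contributes one sample of f. The parameter names, value sets, and toy metric below are hypothetical stand-ins for the paper's 5 environment parameters, 8 overlay parameters, and 18 metrics:

```python
import itertools

# hypothetical, tiny stand-ins for E and O
env_params = {"node_count": [100, 1000], "churn": [0.1, 0.5]}
overlay_params = {"message_timeout": [5, 10], "max_hop_count": [8, 16]}

def simulate(env, overlay):
    """Toy metric: average hop count grows with churn and the hop limit."""
    return {"avg_hop_count": overlay["max_hop_count"] * env["churn"]}

# one sample of f : E x O -> M per simulated configuration
samples = []
for e_vals in itertools.product(*env_params.values()):
    for o_vals in itertools.product(*overlay_params.values()):
        env = dict(zip(env_params, e_vals))
        overlay = dict(zip(overlay_params, o_vals))
        samples.append((env, overlay, simulate(env, overlay)))

assert len(samples) == 2 * 2 * 2 * 2  # full factorial over the toy space
```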
Data Generation
Combination approaches
Full factorial design
All possible combinations of parameters: ∏_{i=1}^{n} |p_i|
Takes too much time
One factorial design
Only one parameter varied at a time
Rest set to default values: ∑_{i=1}^{n} |p_i|
Few data points
Mixed factorial design
Tradeoff between one and full factorial design
Some parameters (E) in full factorial design, others (O) set to default values: ∏_{j=1}^{s} |e_j| · ∑_{k=1}^{t} |o_k|
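The three combination counts can be computed directly. The value-set sizes below are invented for illustration; the paper's actual sets yield 65,100 mixed-design combinations:

```python
from math import prod

# hypothetical sizes |p_i| of the value sets:
# 5 environment parameters and 8 overlay parameters
env_sizes = [3, 3, 3, 4, 5]               # |e_1| .. |e_s|
overlay_sizes = [4, 4, 3, 3, 3, 4, 4, 4]  # |o_1| .. |o_t|
all_sizes = env_sizes + overlay_sizes

full = prod(all_sizes)                        # product over all |p_i|
one = sum(all_sizes)                          # one parameter varied at a time
mixed = prod(env_sizes) * sum(overlay_sizes)  # E full, O one at a time

assert one < mixed < full  # the intended tradeoff in run count
```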
Neural Networks
Learn the data characteristics
Remember function f : E × O → M
Reorder f to f̂ : M × E → O
M: the preferred (goal) state
E: the current environment state
A sample v of f̂: (m1, ..., mr, e1, ..., es, o1, ..., ot)
Approximate f̂: predict the overlay parameter values when given an environment state and a goal
Only realistic goals as input
Train with resilient backpropagation
One neural network for each overlay parameter
Split data into three disjoint sets: training, validation, prediction
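One model per overlay parameter, each mapping (metrics goal, environment state) to a parameter value, could be organized as follows. A 1-nearest-neighbor lookup stands in for the Rprop-trained networks, and all names and values are hypothetical:

```python
def nn1_regressor(rows):
    """1-nearest-neighbor stand-in for one Rprop-trained neural network.

    rows: list of ((m..., e...), o) training pairs for one overlay parameter.
    """
    def predict(x):
        dist = lambda a: sum((ai - xi) ** 2 for ai, xi in zip(a, x))
        return min(rows, key=lambda r: dist(r[0]))[1]
    return predict

# inputs: (desired avg. hop count, churn); outputs: two overlay parameters
data = {
    "message_timeout": [((3.0, 0.1), 5), ((6.0, 0.5), 10)],
    "max_hop_count":   [((3.0, 0.1), 8), ((6.0, 0.5), 16)],
}
models = {name: nn1_regressor(rows) for name, rows in data.items()}

goal_and_env = (3.2, 0.1)  # realistic goal plus current environment state
config = {name: m(goal_and_env) for name, m in models.items()}
assert config == {"message_timeout": 5, "max_hop_count": 8}
```

Keeping one model per output parameter, as the slides describe, lets each network specialize and makes per-parameter prediction errors easy to report separately.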
[Figure: feed-forward neural network; inputs: metrics and environment parameters; hidden layer(s); output: one overlay parameter]
Overview
Which generation approach leads to good results?
Mixed factorial design (65,100 combinations)
Since most overlay parameter values are the default, always predicting "default" already yields small errors
One factorial design (80 combinations)
Use feature selection (CFS: correlation-based feature selection; PCA: principal component analysis)
Not all parameters predicted successfully
Prediction Quality
Error while/after training
o1 = message timeout
o2 = message resend
o3 = operation timeout
o4 = operation max. redos
o5 = max hop count
o6 = update fingertable interval
o7 = update neighbors interval
o8 = update successor interval
[Figure: error in percent per overlay parameter o1 to o8, for CFS, PCA, and no feature selection, on (a) the validation set and (b) the prediction set]
Conclusion
Mixed factorial design not suitable
MAPE loop closed in a proof of concept
Feature selection not beneficial
Evaluation results are ambiguous
Good results for some parameters, bad for others
Future Work
Investigate the other parameters
Try a full factorial design with fewer parameters, but finer granularity
Design more metrics
Embed the implemented MAPE loop in a real system
Decentralize the neural network(s) – use local view
Thank you for your attention!
Appendix: Parameter Values Evaluation
Process of Evaluation
[Figure: evaluation process. A parameter value X is simulated to obtain metric vector M; the neural network predicts X′ from M; simulating X′ yields M′; finally, X is compared with X′ and M with M′.]
Results of Evaluation
Table: Comparison of X, X′, M and M′

Timestamp     X     X′       M       M′   |1 − M/M′|  |1 − X/X′|
For o1 and m16
w1           6s     6s     0.14     0.15      0.07        0.00
w2          11s    11s     0.18     0.20      0.10        0.00
w3          19s    18s     0.24     0.27      0.11        0.06
For o6 and m2
w1           4s     4s   175.00   173.66      0.01        0.00
w2          25s    26s    39.60    38.34      0.03        0.04
w3          55s    54s    24.50    25.51      0.04        0.02
For o7 and m2
w1           6s   4.6s    65.80    77.27      0.15        0.30
w2          25s  27.5s    36.60    35.98      0.02        0.09
w3          51s    55s    31.70    31.63      0.00        0.07
For o8 and m2
w1           4s   3.9s    60.21    60.90      0.01        0.03
w2          37s    36s    34.27    34.10      0.00        0.03
w3          55s    52s    33.21    33.00      0.01        0.06
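The last two columns of the table are relative deviations; for example, for o1 and m16 at timestamp w1:

```python
def rel_err(a, b):
    """Relative deviation |1 - a/b|, as in the table's last two columns."""
    return abs(1 - a / b)

# o1 / m16, timestamp w1: M = 0.14, M' = 0.15, X = X' = 6s
assert round(rel_err(0.14, 0.15), 2) == 0.07
assert rel_err(6, 6) == 0.00
```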