Transfer Learning for
Performance Analysis of
Highly-Configurable Software
Pooyan Jamshidi

Carnegie Mellon University
cs.cmu.edu/~pjamshid
Goal: Enable developers/users
to find the right quality tradeoff
Today’s most popular systems are built to be configurable
Empirical observations confirm that systems are
becoming increasingly configurable
[Figure: number of parameters over release time for Storage-A, MySQL, Apache, and Hadoop (MapReduce and HDFS); the parameter count grows steadily with each release]
[Tianyin Xu, et al., “Too Many Knobs…”, FSE’15]
Empirical observations confirm that systems are
becoming increasingly configurable
[Screenshot of the cited FSE’15 study (“Too Many Knobs…”): it examines configuration settings from thousands of customers of a commercial storage system (Storage-A) and from open-source system software projects, and plots the number of parameters over release time for Storage-A, MySQL, Apache, and Hadoop (MapReduce and HDFS)]
[Tianyin Xu, et al., “Too Many Knobs…”, FSE’15]
Configurations determine the performance
behavior
void Parrot_setenv(. . . name,. . . value){
#ifdef PARROT_HAS_SETENV
my_setenv(name, value, 1);
#else
int name_len=strlen(name);
int val_len=strlen(value);
char* envs=glob_env;
if(envs==NULL){
return;
}
strcpy(envs,name);
strcpy(envs+name_len,"=");
strcpy(envs+name_len + 1,value);
putenv(envs);
#endif
}
#ifdef LINUX
extern int Parrot_signbit(double x){
  ...
}
#endif

Configuration options highlighted: PARROT_HAS_SETENV, LINUX
Speed
Energy
How do we understand the performance behavior of
real-world highly-configurable systems in a way that scales well…
… and enables developers/users to reason about
qualities (performance, energy) and to make tradeoffs?
I build methods that enable software systems to
perform as desired in uncertain environments
Learning Control
Self-Adaptive
Systems
Configurable
Systems
Ph.D.
— Cloud auto-scaling [SEAMS ’14]
— Vertical elasticity [FGCS ’16, ICAC ’15]
— Control theory [SEAMS ’15, TAAS ’17]
Ph.D.
— Self-learning controller [QoSA ’16]
— Architectural principles [TOIT ’17]
Postdoc2 @ CMU (2016 — present)
— Transfer learning [SEAMS’17]
— Building theory [ASE ’17]
Postdoc1 @ Imperial (2014 – 16)
— Configuration optimization [MASCOTS ’17]
— Bayesian optimization
Outline
Case
Study
Transfer
Learning
Theory
Building
Guided
Sampling
Future
Directions
[SEAMS’17]
[ASE’17]
[FSE’18]
SocialSensor
•Identifying trending topics

•Identifying user defined topics

•Social media search
SocialSensor
[Architecture: an Orchestrator coordinates Crawling, Content Analysis, and Search and Integration; crawled items are fetched from the Internet and stored; tweets arrive at 5k-20k/min, about 100k tweets are pushed to Content Analysis every 10 min, and roughly 10M tweets are stored for search]
Challenges
[Same pipeline diagram, annotated with the new requirements: 100X (users), 10X, and real-time processing]
How can we gain better performance without
using more resources?
Let’s try out different system configurations!
Opportunity: Data processing engines in the
pipeline were all configurable
> 100 > 100 > 100
2300
Default configuration was bad, and so was the expert’s
[Scatter plot: throughput (ops/sec) vs. average write latency (s); the default configuration and the configuration recommended by an expert are both far from the optimal configuration; higher throughput and lower latency are better]
Default configuration was bad, and so was the expert’s
[Scatter plot: throughput (ops/sec, ×10^4) vs. latency (ms); again the default and the expert-recommended configurations are far from the optimal configuration; higher throughput and lower latency are better]
Why is this an important problem?
Significant time saving
• 2X-10X faster than worst
• Noticeably faster than median
• Default is bad
• Expert’s is not optimal
Large configuration space
• Exhaustive search is expensive
• Specific to hardware/workload/version
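To make “exhaustive search is expensive” concrete (a worked number, not from the slides): with just 100 binary options there are 2^100 ≈ 1.3 × 10^30 configurations, so even at a million measurements per second an exhaustive sweep would take on the order of 10^16 years.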
What happened in the end?
• Achieved the objectives (100X users, same experience)
• Saved money by reducing cloud resources by up to 20%
• Our tool was able to identify configurations that were
consistently better than the expert recommendation
Outline
Case
Study
Transfer
Learning
Theory
Building
Guided
Sampling
Future
Directions
To enable performance tradeoffs, we need a model
to reason about qualities
void Parrot_setenv(. . . name,. . . value){
#ifdef PARROT_HAS_SETENV
my_setenv(name, value, 1);
#else
int name_len=strlen(name);
int val_len=strlen(value);
char* envs=glob_env;
if(envs==NULL){
return;
}
strcpy(envs,name);
strcpy(envs+name_len,"=");
strcpy(envs+name_len + 1,value);
putenv(envs);
#endif
}
#ifdef LINUX
  ...
#endif

Configuration options highlighted: PARROT_HAS_SETENV, LINUX
f(·) = 5 + 3 × o1
Execution time (s)
f(o1 := 0) = 5
f(o1 := 1) = 8
What is a performance model?
f : C → ℝ
f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2
c = ⟨o1, o2⟩
c = ⟨o1, o2, ..., o10⟩
c = ⟨o1, o2, ..., o100⟩
···
How do we learn performance models?
[Diagram: measure configurations on the TurtleBot, learn a performance model, and use it for optimization, reasoning, and debugging]
f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2
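A minimal sketch (in Python, not the talk’s actual tooling) of this measure-then-learn loop: sample configurations, measure a performance metric, and fit a regression model that can then be used for optimization, reasoning, and debugging. The measure() function is a stand-in for running the real system and simply evaluates the toy model above plus noise.

import itertools
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def measure(config):
    # Placeholder for an expensive benchmark run; uses the toy model from the slides.
    o1, o2 = config
    return 5 + 3 * o1 + 15 * o2 - 7 * o1 * o2 + np.random.normal(0, 0.1)

# Enumerate (or sample) part of the configuration space and measure it.
configs = np.array(list(itertools.product([0, 1], repeat=2)))
latencies = np.array([measure(c) for c in configs])

# Learn a black-box performance model f: C -> R from the measurements.
model = DecisionTreeRegressor().fit(configs, latencies)
print(model.predict([[1, 1]]))  # predicted performance of a configuration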
Insight: Performance measurements of the real
system are “similar” to the ones from the simulators
[Diagram: measure configurations on the simulator (Gazebo) to collect performance data]
So why not reuse these data,
instead of measuring on the real robot?
We developed methods to make learning cheaper
via transfer learning
Source (Given): Model, Data → Transferable Knowledge → Target (Learn)
[Background text from the paper, Section II (Intuition): Understanding the performance behavior of configurable software systems can enable (i) performance debugging, (ii) performance tuning, (iii) design-time evolution, or (iv) runtime adaptation. We lack empirical understanding of how the performance behavior of a system will vary when the environment of the system changes. Such understanding would let us, for instance, learn the performance behavior of a system on cheap hardware in a controlled lab environment and use that to understand its behavior on a production server before shipping to the end user.

Preliminary concepts: Let Fi indicate the i-th feature of a configurable system A, which is either enabled or disabled, with one of the two holding by default. The configuration space is the Cartesian product of all features, C = Dom(F1) × · · · × Dom(Fd), where Dom(Fi) = {0, 1}; a configuration is a member of this space in which every parameter is assigned a specific value in its range (a complete instantiation of the system’s parameters). An environment instance is described by three variables e = [w, h, v] drawn from an environment space E = W × H × V, representing workload, hardware, and system version. A performance model is a black-box function f : F × E → ℝ learned from observations of the system’s performance for configurations x ∈ F in an environment e ∈ E: we run A on various configurations xi ∈ F and record yi = f(xi) + εi, with εi ∼ N(0, σi); the training data is then Dtr = {(xi, yi)}, i = 1..n. In other words, a response function maps the input space to a measurable, interval-scaled performance metric. An empirical performance distribution is a stochastic process pd : E → Δ(ℝ) that defines a probability distribution over performance measures for each environmental condition.]
Extract Reuse
Learn Learn
Goal: Gain strength by
transferring information
across environments
A simple transfer learning via model shift
[Plot: throughput of the source environment vs. the target environment (a machine twice as fast); the target response looks like a shifted/scaled version of the source response]
[Pavel Valov, et al. “Transferring performance prediction models…”, ICPE’17 ]
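A minimal sketch of the model-shift idea in the spirit of the cited ICPE’17 work, under the assumption that the target response is roughly a linear transformation of the source response; measure_source and measure_target are hypothetical stand-ins for benchmarking runs.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
configs = rng.integers(0, 2, size=(200, 10))        # 200 configs, 10 binary options

def measure_source(X):   # cheap environment (e.g., simulator or old hardware)
    return 5 + 3*X[:, 0] + 15*X[:, 1] - 7*X[:, 0]*X[:, 1] + rng.normal(0, 0.1, len(X))

def measure_target(X):   # expensive environment, here assumed ~2x faster plus an offset
    return 0.5 * measure_source(X) + 1.0

# 1) Learn a detailed model in the cheap source environment.
source_model = DecisionTreeRegressor().fit(configs, measure_source(configs))

# 2) Measure only a handful of configurations in the expensive target environment.
few = configs[:5]
shift = LinearRegression().fit(source_model.predict(few).reshape(-1, 1), measure_target(few))

# 3) Predict target performance by shifting the source prediction.
def predict_target(X):
    return shift.predict(source_model.predict(X).reshape(-1, 1))

print(predict_target(configs[:3]))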
[Diagram: measure configurations on the simulator (Gazebo) to obtain source data, measure a few configurations on the TurtleBot, then reuse the source data to learn a model for the real robot]
[P. Jamshidi, et al., “Transfer learning for improving model predictions ….”, SEAMS’17]
Our transfer learning solution
f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2
Gaussian processes for performance modeling
[Plot: a GP over input x at steps t = n and t = n + 1, showing the observations, the predictive mean, the uncertainty band, and how a new observation shrinks the uncertainty around it; output f(x) vs. input x]
Gaussian processes enable reasoning about
performance
Step 1: Fit GP to the data seen
so far

Step 2: Explore the model for
regions of most variance

Step 3: Sample that region

Step 4: Repeat
[Diagram (sequential design): run an experiment in the configuration space, fit the empirical model, apply the selection criteria to pick the next experiment, and repeat]
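A minimal sketch of this fit-explore-sample loop using scikit-learn’s GaussianProcessRegressor; the measure() function and the one-dimensional candidate grid are toy stand-ins for measuring a real configurable system.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def measure(x):                       # hypothetical expensive measurement
    return np.sin(3 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0, 2, 200).reshape(-1, 1)   # discretized configuration space
X = [np.array([0.1]), np.array([1.9])]               # initial design
y = [measure(x[0]) for x in X]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):
    gp.fit(np.vstack(X), np.array(y))                 # Step 1: fit GP to data seen so far
    _, std = gp.predict(candidates, return_std=True)  # Step 2: explore model uncertainty
    x_next = candidates[np.argmax(std)]               # Step 3: sample region of most variance
    X.append(x_next)
    y.append(measure(x_next[0]))                      # Step 4: repeat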
CoBot experiment: DARPA BRASS
[Plot: CPU utilization [%] vs. localization error [m] for different settings of no_of_particles=x and no_of_refinement=y; an energy constraint bounds CPU utilization, a safety constraint bounds localization error, and the sweet spot sits on the Pareto front inside both constraints; lower is better on both axes]
CoBot
experiment
[Four heat maps of CPU [%] over the configuration space: the source (given), the target ground truth (6 months of measurements), a prediction learned from only 4 target samples, and a prediction with transfer learning]
Results: Other configurable systems
CoBot WordCount SOL
RollingSort Cassandra (HW) Cassandra (DB)
Transfer Learning for Improving Model Predictions in Highly Configurable Software
Pooyan Jamshidi, Miguel Velez, Christian Kästner (Carnegie Mellon University, USA), Norbert Siegmund (Bauhaus-University Weimar, Germany), Prasad Kawthekar (Stanford University, USA)
Abstract (excerpt) — Modern software systems are built to be used in dynamic environments using configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at […]
[Fig. 1: Transfer learning for performance model learning — measure on the simulator (source) and the robot (target), learn the model with transfer learning, and use the predictive model for adaptation]
Details: [SEAMS ’17]
Summary (transfer learning)
• Model for making tradeoffs between qualities

• Scale to large space and environmental changes

• Transfer learning can help 

• Increase prediction accuracy

• Increase model reliability

• Decrease model building cost
Outline
Case
Study
Transfer
Learning
Theory
Building
Guided
Sampling
Future
Directions
Looking further: When transfer learning goes
wrong
[Box plots: absolute percentage error [%] of predictions in the target when transferring from sources of decreasing relatedness, compared against non-transfer learning]

Sources:       s      s1     s2     s3     s4     s5     s6
noise-level:   0      5      10     15     20     25     30
corr. coeff.:  0.98   0.95   0.89   0.75   0.54   0.34   0.19
µ(pe):         15.34  14.14  17.09  18.71  33.06  40.93  46.75

It worked! … It didn’t!
Insight: Predictions become more accurate when the source is more related to the target.
Key question: Can we develop a theory to explain
when transfer learning works?
NEXT STEPS
Source (Given): Model, Data → Transferable Knowledge → Target (Learn)
Extract Reuse
Learn Learn
Q1: How are the source and target
“related”?
Q2: What characteristics
are preserved?
Q3: What are the actionable
insights?
Our empirical study: We looked at different highly-
configurable systems to gain insights
[P. Jamshidi, et al., “Transfer learning for performance modeling of configurable systems….”, ASE’17]
SPEAR (SAT solver): analysis time; 14 options, 16,384 configurations; SAT problems as workload; 3 hardware platforms; 2 versions
x264 (video encoder): encoding time; 16 options, 4,000 configurations; video quality/size as workload; 2 hardware platforms; 3 versions
SQLite (DB engine): query time; 14 options, 1,000 configurations; DB queries as workload; 2 hardware platforms; 2 versions
SaC (compiler): execution time; 50 options, 71,267 configurations; 10 demo programs as workload
Observation 1: Linear shift happens only in limited
environmental changes
Software   Environmental change           Severity   Corr.
SPEAR      NUC/2 -> NUC/4                 Small      1.00
           Amazon_nano -> NUC             Large      0.59
           Hardware/workload/version      V. Large   -0.10
x264       Version                        Large      0.06
           Workload                       Medium     0.65
SQLite     write-seq -> write-batch       Small      0.96
           read-rand -> read-seq          Medium     0.50

[Inset: source vs. target throughput, as in the model-shift slide]
Implication: Simple transfer learning is limited
to hardware changes in practice
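A minimal sketch of the check this observation suggests: measure a small common set of configurations in both environments and look at the correlation before committing to a linear model shift; measure_source and measure_target are hypothetical benchmarking helpers.

import numpy as np

def transfer_strategy(configs, measure_source, measure_target, threshold=0.8):
    # Measure the same few configurations in source and target environments.
    y_src = np.array([measure_source(c) for c in configs])
    y_tgt = np.array([measure_target(c) for c in configs])
    corr = np.corrcoef(y_src, y_tgt)[0, 1]
    # High correlation: a simple linear model shift may suffice.
    # Low or negative correlation: transfer only coarser knowledge (e.g., which options matter).
    return ("linear model shift" if corr >= threshold else "reuse structure only"), corr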
Observation 2: Influential options and interactions
are preserved across environments

Software   Environmental change          Severity   Dim   t-test
x264       Version                       Large      16    12   10
           Hardware/workload/version     V. Large         8    9
SQLite     write-seq -> write-batch      V. Large   14    3    4
           read-rand -> read-seq         Medium           1    1
SaC        Workload                      V. Large   50    16   10

Implication: Avoid wasting budget on non-informative parts of the configuration space and focus where it matters.

We only need to explore part of the space: 2^16 / 2^50 ≈ 0.000000000058
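A minimal sketch of this implication (not the exact FSE’18 L2S algorithm): use a model learned in the source environment to identify influential options, then spend the target measurement budget only on those options, keeping the others at their defaults.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def influential_options(X_src, y_src, k):
    # Rank options by importance in a model fit on source measurements.
    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_src, y_src)
    return np.argsort(forest.feature_importances_)[::-1][:k]   # indices of top-k options

def sample_target_configs(important, defaults, budget, rng):
    configs = np.tile(defaults, (budget, 1))                     # start from the default config
    configs[:, important] = rng.integers(0, 2, (budget, len(important)))
    return configs                                               # vary only the influential options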
Transfer Learning for Performance Modeling of Configurable Systems: An Exploratory Analysis
Pooyan Jamshidi (Carnegie Mellon University, USA), Norbert Siegmund (Bauhaus-University Weimar, Germany), Miguel Velez, Christian Kästner, Akshay Patel, Yuvraj Agarwal (Carnegie Mellon University, USA)
Abstract — Modern software systems provide many configuration options which significantly influence their non-functional properties. To understand and predict the effect of configuration options, several sampling and learning strategies have been proposed, albeit often with significant cost to cover the highly dimensional configuration space. Recently, transfer learning has been applied to reduce the effort of constructing performance models by transferring knowledge about performance behavior across environments. While this line of research is promising to learn more accurate models at a lower cost, it is unclear why and when transfer learning works for performance modeling. To shed light on when it is beneficial to apply transfer learning, we conducted an empirical study on four popular software systems, varying software configurations and environmental conditions, such as hardware, workload, and software versions, to identify the key knowledge pieces that can be exploited for transfer learning. Our results show that in small environmental changes (e.g., homogeneous workload change), by applying a linear transformation to the performance model, we can understand the performance behavior of the target environment, while for severe environmental changes (e.g., drastic workload change) we can transfer only knowledge that makes sampling more efficient, e.g., by reducing the dimensionality of the configuration space.
Index Terms — Performance analysis, transfer learning.
[Fig. 1: Transfer learning is a form of machine learning that takes advantage of transferable knowledge from source to learn an accurate, reliable, and less costly model for the target environment.]
Details: [ASE ’17]
Outline
Case
Study
Transfer
Learning
Theory
Building
Guided
Sampling
Future
Directions
Insights from our empirical study led to the
development of a guided sampling approach
[Diagram: measure configurations on the simulator (Gazebo) as the source, measure a few on the TurtleBot, and reuse the source data to learn the target model]
f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2
Simple transfer learning does not work in severe changes
[Plot: mean absolute percentage error vs. sample size (3 to 70) for L2S+GP, L2S+SEAMS, SEAMS, Model-shift, and Random+CART; annotations mark negative transfer and high prediction error for simple transfer, and low prediction error as a result of guided sampling; lower is better]
Details: [FSE ’18]
Research interests
Software
Engineering
Machine
Learning
Systems
[SEAMS ’14]
[ASE ’17]
[QoSA ’16]
[MASCOTS ’16]
[SEAMS ’17]
[CCGrid ’16]
Get inspired by opportunities in industry
Ph.D. Postdoc 1 (Imperial) Postdoc 2 (CMU)
Intel, Microsoft ATC DARPA
Outline
Case
Study
Transfer
Learning
Empirical
Study
Guided
Sampling
Vision
What will the software systems
of the future look like?
Software 2.0
Increasingly customized and configurable
VISION
Increasingly competing objectives
Accuracy
Training speed
Inference speed
Model size
Energy
Outline
Case
Study
Transfer
Learning
Empirical
Study
Guided
Sampling
Future
Direction
Deep neural
network as a
highly
configurable
system
[Diagram: a small feed-forward neural network with an input layer (Input #1–#4), one hidden layer, and an output layer]
4 Technical Aims and Research Plan
We will pursue the following technical aims: (1) investigate potential criteria for effective exploration of the design space of DNN architectures (Section 4.2), (2) build analytical models to accurately predict the performance of a given architecture configuration given other similar configurations which have been measured in the target environment or other similar environments, without measuring the network performance directly (Section 4.3), and (3) develop a tuning method that exploits the performance model from the previous step to effectively search for optimal configurations (Section 4.4).
4.1 Project Timeline
We plan to complete the proposed project in two years. To mitigate project risks, we divide the project into three major phases.
[Diagram: the DNN system development stack, with layers including network design, topology, hyper-parameters, neural search, model compiler, deployment, hybrid deployment, hardware optimization, and OS/hardware; the scope of this project is marked across the stack]
Exploring the design space of deep networks
[Figure (from an architecture-search paper): image classification models constructed from cells optimized with architecture search — a small CIFAR-10 model used during the search, a large CIFAR-10 model, and an ImageNet model; annotations contrast the optimal architecture of yesterday vs. today after a new fraud pattern appears]
Exploring the design space of deep networks
[Figure: validation error vs. inference time [h] for candidate architectures; the default architecture is dominated by a Pareto-optimal subset (lower error and lower inference time are better); accompanying cell-based CIFAR-10 and ImageNet model diagrams as in the previous slide]
Insight: Learn a model on a cheaper workload to
explore the expensive workload faster
[Plots: inference time of about 300 network architectures measured under workload W1 / hardware H1 (hours) and under workload W2 / hardware H2 (minutes)]
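A minimal sketch of this insight (with hypothetical helpers, not the proposal’s actual method): measure all candidate architectures under the cheap workload/hardware, measure only a few under the expensive one, learn a mapping between the two, and use it to pick which architectures to evaluate expensively next.

import numpy as np
from sklearn.linear_model import LinearRegression

def explore_expensive(architectures, measure_cheap, measure_expensive, n_probe=10, n_final=5):
    cheap = np.array([measure_cheap(a) for a in architectures])          # cheap to obtain
    probe_idx = np.random.choice(len(architectures), n_probe, replace=False)
    expensive_probe = np.array([measure_expensive(architectures[i]) for i in probe_idx])

    # Learn how cheap measurements map to expensive ones (here: a simple linear shift).
    mapping = LinearRegression().fit(cheap[probe_idx].reshape(-1, 1), expensive_probe)
    predicted = mapping.predict(cheap.reshape(-1, 1))

    best = np.argsort(predicted)[:n_final]           # predicted-fastest architectures
    return [architectures[i] for i in best]           # measure only these expensively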
Interested?
Thanks
Many systems are now built to be configurable
Given the ever-growing number of configurable systems,
how can we enable learning practical models
that scale well and provide reliable predictions
for exploring the configuration space?
Transfer Learning for Performance Analysis of Highly-Configurable Software

Weitere ähnliche Inhalte

Ähnlich wie Transfer Learning for Performance Analysis of Highly-Configurable Software

Learning Software Performance Models for Dynamic and Uncertain Environments
Learning Software Performance Models for Dynamic and Uncertain EnvironmentsLearning Software Performance Models for Dynamic and Uncertain Environments
Learning Software Performance Models for Dynamic and Uncertain Environments
Pooyan Jamshidi
 
Real-Time Hardware Simulation with Portable Hardware-in-the-Loop (PHIL-Rebooted)
Real-Time Hardware Simulation with Portable Hardware-in-the-Loop (PHIL-Rebooted)Real-Time Hardware Simulation with Portable Hardware-in-the-Loop (PHIL-Rebooted)
Real-Time Hardware Simulation with Portable Hardware-in-the-Loop (PHIL-Rebooted)
Riley Waite
 

Ähnlich wie Transfer Learning for Performance Analysis of Highly-Configurable Software (20)

Requirements vs design vs runtime
Requirements vs design vs runtimeRequirements vs design vs runtime
Requirements vs design vs runtime
 
University course on aerospace projects management and se complete 2017
University course on aerospace projects management and se complete 2017University course on aerospace projects management and se complete 2017
University course on aerospace projects management and se complete 2017
 
Transfer Learning for Improving Model Predictions in Robotic Systems
Transfer Learning for Improving Model Predictions  in Robotic SystemsTransfer Learning for Improving Model Predictions  in Robotic Systems
Transfer Learning for Improving Model Predictions in Robotic Systems
 
Machine Learning meets DevOps
Machine Learning meets DevOpsMachine Learning meets DevOps
Machine Learning meets DevOps
 
An Effective PSO-inspired Algorithm for Workflow Scheduling
An Effective PSO-inspired Algorithm for Workflow Scheduling An Effective PSO-inspired Algorithm for Workflow Scheduling
An Effective PSO-inspired Algorithm for Workflow Scheduling
 
Building data fusion surrogate models for spacecraft aerodynamic problems wit...
Building data fusion surrogate models for spacecraft aerodynamic problems wit...Building data fusion surrogate models for spacecraft aerodynamic problems wit...
Building data fusion surrogate models for spacecraft aerodynamic problems wit...
 
A brief overview of java frameworks
A brief overview of java frameworksA brief overview of java frameworks
A brief overview of java frameworks
 
Pertemuan 5.pptx
Pertemuan 5.pptxPertemuan 5.pptx
Pertemuan 5.pptx
 
Cloud scale anomaly detection for software misconfigurations
Cloud scale anomaly detection for software misconfigurationsCloud scale anomaly detection for software misconfigurations
Cloud scale anomaly detection for software misconfigurations
 
An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing S...
An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing S...An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing S...
An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing S...
 
Continuous Architecting of Stream-Based Systems
Continuous Architecting of Stream-Based SystemsContinuous Architecting of Stream-Based Systems
Continuous Architecting of Stream-Based Systems
 
Exploring Emerging Technologies in the Extreme Scale HPC Co-Design Space with...
Exploring Emerging Technologies in the Extreme Scale HPC Co-Design Space with...Exploring Emerging Technologies in the Extreme Scale HPC Co-Design Space with...
Exploring Emerging Technologies in the Extreme Scale HPC Co-Design Space with...
 
Learning Software Performance Models for Dynamic and Uncertain Environments
Learning Software Performance Models for Dynamic and Uncertain EnvironmentsLearning Software Performance Models for Dynamic and Uncertain Environments
Learning Software Performance Models for Dynamic and Uncertain Environments
 
I046850
I046850I046850
I046850
 
Towards a Unified Data Analytics Optimizer with Yanlei Diao
Towards a Unified Data Analytics Optimizer with Yanlei DiaoTowards a Unified Data Analytics Optimizer with Yanlei Diao
Towards a Unified Data Analytics Optimizer with Yanlei Diao
 
A formal conceptual framework
A formal conceptual frameworkA formal conceptual framework
A formal conceptual framework
 
Aa4506146150
Aa4506146150Aa4506146150
Aa4506146150
 
2453
24532453
2453
 
Real-Time Hardware Simulation with Portable Hardware-in-the-Loop (PHIL-Rebooted)
Real-Time Hardware Simulation with Portable Hardware-in-the-Loop (PHIL-Rebooted)Real-Time Hardware Simulation with Portable Hardware-in-the-Loop (PHIL-Rebooted)
Real-Time Hardware Simulation with Portable Hardware-in-the-Loop (PHIL-Rebooted)
 
nlp dl 1.pdf
nlp dl 1.pdfnlp dl 1.pdf
nlp dl 1.pdf
 

Mehr von Pooyan Jamshidi

Learning LWF Chain Graphs: A Markov Blanket Discovery Approach
Learning LWF Chain Graphs: A Markov Blanket Discovery ApproachLearning LWF Chain Graphs: A Markov Blanket Discovery Approach
Learning LWF Chain Graphs: A Markov Blanket Discovery Approach
Pooyan Jamshidi
 
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Pooyan Jamshidi
 
Self learning cloud controllers
Self learning cloud controllersSelf learning cloud controllers
Self learning cloud controllers
Pooyan Jamshidi
 

Mehr von Pooyan Jamshidi (20)

Learning LWF Chain Graphs: A Markov Blanket Discovery Approach
Learning LWF Chain Graphs: A Markov Blanket Discovery ApproachLearning LWF Chain Graphs: A Markov Blanket Discovery Approach
Learning LWF Chain Graphs: A Markov Blanket Discovery Approach
 
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
 A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn... A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
 
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
 
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
 
Learning to Sample
Learning to SampleLearning to Sample
Learning to Sample
 
Integrated Model Discovery and Self-Adaptation of Robots
Integrated Model Discovery and Self-Adaptation of RobotsIntegrated Model Discovery and Self-Adaptation of Robots
Integrated Model Discovery and Self-Adaptation of Robots
 
Production-Ready Machine Learning for the Software Architect
Production-Ready Machine Learning for the Software ArchitectProduction-Ready Machine Learning for the Software Architect
Production-Ready Machine Learning for the Software Architect
 
Architecting for Scale
Architecting for ScaleArchitecting for Scale
Architecting for Scale
 
Sensitivity Analysis for Building Adaptive Robotic Software
Sensitivity Analysis for Building Adaptive Robotic SoftwareSensitivity Analysis for Building Adaptive Robotic Software
Sensitivity Analysis for Building Adaptive Robotic Software
 
Configuration Optimization Tool
Configuration Optimization ToolConfiguration Optimization Tool
Configuration Optimization Tool
 
Fuzzy Self-Learning Controllers for Elasticity Management in Dynamic Cloud Ar...
Fuzzy Self-Learning Controllers for Elasticity Management in Dynamic Cloud Ar...Fuzzy Self-Learning Controllers for Elasticity Management in Dynamic Cloud Ar...
Fuzzy Self-Learning Controllers for Elasticity Management in Dynamic Cloud Ar...
 
Configuration Optimization for Big Data Software
Configuration Optimization for Big Data SoftwareConfiguration Optimization for Big Data Software
Configuration Optimization for Big Data Software
 
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
 
Towards Quality-Aware Development of Big Data Applications with DICE
Towards Quality-Aware Development of Big Data Applications with DICETowards Quality-Aware Development of Big Data Applications with DICE
Towards Quality-Aware Development of Big Data Applications with DICE
 
Self learning cloud controllers
Self learning cloud controllersSelf learning cloud controllers
Self learning cloud controllers
 
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
 
Fuzzy Control meets Software Engineering
Fuzzy Control meets Software EngineeringFuzzy Control meets Software Engineering
Fuzzy Control meets Software Engineering
 
Autonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based SoftwareAutonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based Software
 
Cloud Migration Patterns: A Multi-Cloud Architectural Perspective
Cloud Migration Patterns: A Multi-Cloud Architectural PerspectiveCloud Migration Patterns: A Multi-Cloud Architectural Perspective
Cloud Migration Patterns: A Multi-Cloud Architectural Perspective
 
Autonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based SoftwareAutonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based Software
 

Kürzlich hochgeladen

1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
QucHHunhnh
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
kauryashika82
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 

Kürzlich hochgeladen (20)

1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
Dyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxDyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptx
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
Asian American Pacific Islander Month DDSD 2024.pptx
Asian American Pacific Islander Month DDSD 2024.pptxAsian American Pacific Islander Month DDSD 2024.pptx
Asian American Pacific Islander Month DDSD 2024.pptx
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docx
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfUGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 

Transfer Learning for Performance Analysis of Highly-Configurable Software

  • 1. Transfer Learning for Performance Analysis of Highly-Configurable Software Pooyan Jamshidi Carnegie Mellon University cs.cmu.edu/~pjamshid
  • 2. Goal: Enable developers/users to find the right quality tradeoff
  • 3. Today’s most popular systems are configurable built
  • 4.
  • 5. Empirical observations confirm that systems are becoming increasingly configurable 08 7/2010 7/2012 7/2014 Release time 1/1999 1/2003 1/2007 1/2011 0 1/2014 N Release time 02 1/2006 1/2010 1/2014 2.2.14 2.3.4 2.0.35 .3.24 Release time Apache 1/2006 1/2008 1/2010 1/2012 1/2014 0 40 80 120 160 200 2.0.0 1.0.0 0.19.0 0.1.0 Hadoop Numberofparameters Release time MapReduce HDFS [Tianyin Xu, et al., “Too Many Knobs…”, FSE’15]
  • 6. Empirical observations confirm that systems are becoming increasingly configurable nia San Diego, ‡Huazhong Univ. of Science & Technology, †NetApp, Inc tixu, longjin, xuf001, yyzhou}@cs.ucsd.edu kar.Pasupathy, Rukma.Talwadker}@netapp.com prevalent, but also severely software. One fundamental y of configuration, reflected parameters (“knobs”). With m software to ensure high re- aunting, error-prone task. nderstanding a fundamental users really need so many answer, we study the con- including thousands of cus- m (Storage-A), and hundreds ce system software projects. ng findings to motivate soft- ore cautious and disciplined these findings, we provide ich can significantly reduce A as an example, the guide- ters and simplify 19.7% of on existing users. Also, we tion methods in the context 7/2006 7/2008 7/2010 7/2012 7/2014 0 100 200 300 400 500 600 700 Storage-A Numberofparameters Release time 1/1999 1/2003 1/2007 1/2011 0 100 200 300 400 500 5.6.2 5.5.0 5.0.16 5.1.3 4.1.0 4.0.12 3.23.0 1/2014 MySQL Numberofparameters Release time 1/1998 1/2002 1/2006 1/2010 1/2014 0 100 200 300 400 500 600 1.3.14 2.2.14 2.3.4 2.0.35 1.3.24 Numberofparameters Release time Apache 1/2006 1/2008 1/2010 1/2012 1/2014 0 40 80 120 160 200 2.0.0 1.0.0 0.19.0 0.1.0 Hadoop Numberofparameters Release time MapReduce HDFS [Tianyin Xu, et al., “Too Many Knobs…”, FSE’15]
  • 7. Configurations determine the performance behavior void Parrot_setenv(. . . name,. . . value){ #ifdef PARROT_HAS_SETENV my_setenv(name, value, 1); #else int name_len=strlen(name); int val_len=strlen(value); char* envs=glob_env; if(envs==NULL){ return; } strcpy(envs,name); strcpy(envs+name_len,"="); strcpy(envs+name_len + 1,value); putenv(envs); #endif } #ifdef LINUX extern int Parrot_signbit(double x){ endif else PARROT_HAS_SETENV LINUX Speed Energy
  • 8. How do we understand performance behavior of real-world highly-configurable systems that scale well… … and enable developers/users to reason about qualities (performance, energy) and to make tradeoff?
  • 9. I build methods that enable software systems to perform as desired in uncertain environments Learning Control Self-Adaptive Systems Configurable Systems Ph.D. — Cloud auto-scaling [SEAMS ’14] — Vertical elasticity [FGCS ’16, ICAC ’15] — Control theory [SEAMS ’15, TAAS ’17] Ph.D. — Self-learning controller [QoSA ’16] — Architectural principles [TOIT ’17] Postdoc2 @ CMU (2016 — present) — Transfer learning [SEAMS’17] — Building theory [ASE ’17] Postdoc1 @ Imperial (2014 – 16) — Configuration optimization [MASCOTS ’17] — Bayesian optimization
  • 11. SocialSensor •Identifying trending topics •Identifying user defined topics •Social media search
  • 12. SocialSensor Content AnalysisOrchestrator Crawling Search and Integration Tweets: [5k-20k/min] Every 10 min: [100k tweets] Tweets: [10M] Fetch Store Push Store Crawled items FetchInternet
  • 13. Challenges Content AnalysisOrchestrator Crawling Search and Integration Tweets: [5k-20k/min] Every 10 min: [100k tweets] Tweets: [10M] Fetch Store Push Store Crawled items FetchInternet 100X 10X Real time
  • 14. How can we gain a better performance without using more resources?
  • 15. Let’s try out different system configurations!
  • 16. Opportunity: Data processing engines in the pipeline were all configurable > 100 > 100 > 100 2300
  • 17.
  • 18. 0 500 1000 1500 Throughput (ops/sec) 0 1000 2000 3000 4000 5000 Averagewritelatency(s) Default configuration was bad, so was the expert’ Default Recommended by an expert Optimal Configuration better better
  • 19. 0 0.5 1 1.5 2 2.5 Throughput (ops/sec) 10 4 0 50 100 150 200 250 300 Latency(ms) Default configuration was bad, so was the expert’ Default Recommended by an expert Optimal Configuration better better
  • 20. Why this is an important problem? Significant time saving • 2X-10X faster than worst • Noticeably faster than median • Default is bad • Expert’s is not optimal Large configuration space • Exhaustive search is expensive • Specific to hardware/workload/version
  • 21. What did happen at the end? • Achieved the objectives (100X user, same experience) • Saved money by reducing cloud resources up to 20% • Our tool was able to identify configurations that was consistently better than expert recommendation
  • 23. To enable performance tradeoff, we need a model to reason about qualities void Parrot_setenv(. . . name,. . . value){ #ifdef PARROT_HAS_SETENV my_setenv(name, value, 1); #else int name_len=strlen(name); int val_len=strlen(value); char* envs=glob_env; if(envs==NULL){ return; } strcpy(envs,name); strcpy(envs+name_len,"="); strcpy(envs+name_len + 1,value); putenv(envs); #endif } #ifdef LINUX endif else PARROT_HAS_SETENV LINUX f(·) = 5 + 3 ⇥ o1 Execution time (s) f(o1 := 0) = 5 f(o1 := 1) = 8
  • 24. What is a performance model?
f : C → R
f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2
c = <o1, o2>,  c = <o1, o2, ..., o10>,  c = <o1, o2, ..., o100>, ···
  • 25. How do we learn performance models?
[Loop: sample Configurations → Measure on the system (TurtleBot) → Learn a model f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2 → use it for Optimization, Reasoning, Debugging]
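To make the learning step concrete, here is a minimal sketch of fitting such a model from a handful of measured configurations; the measurements and option values below are invented for illustration, and scikit-learn's ordinary least squares stands in for whatever learner is actually used.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Each row is one configuration <o1, o2>; y holds the measured performance (e.g., seconds).
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
y = np.array([5.0, 8.0, 20.0, 16.0])

# Interaction-aware features for binary options: o1, o2, and o1*o2.
features = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
model = LinearRegression().fit(features.fit_transform(X), y)

# Query the learned model for another configuration.
print(model.predict(features.transform([[1, 0]])))   # ~8.0
```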
  • 26. Insight: Performance measurements of the real system are “similar” to the ones from the simulators
Measure on the Simulator (Gazebo): Configurations → Data → Performance
So why not reuse these data, instead of measuring on the real robot?
  • 27. We developed methods to make learning cheaper via transfer learning
[Diagram: Source (Given), with its model and data → Extract → Transferable Knowledge → Reuse → Target (Learn)]
Configuration space: C = Dom(F1) × ··· × Dom(Fd), with Dom(Fi) = {0, 1}; a configuration is a complete assignment of all options.
Environment instance: e = [w, h, v] ∈ E = W × H × V (workload, hardware, system version).
Performance model: a black-box function f : C × E → R learned from observations yi = f(xi) + εi with εi ~ N(0, σi), i.e., from training data Dtr = {(xi, yi)}, i = 1..n.
Performance distribution: a stochastic process pd : E → Δ(R) that assigns a probability distribution over performance measures to each environmental condition.
Goal: Gain strength by transferring information across environments
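As a tiny illustration of these definitions (the option and environment names are hypothetical), the configuration space of d binary options is just the Cartesian product of their domains:

```python
from itertools import product

# d binary options F1..Fd; the names are illustrative placeholders.
options = ["cache", "compression", "encryption"]
configuration_space = list(product([0, 1], repeat=len(options)))  # C = Dom(F1) x ... x Dom(Fd)

# One environment instance e = [workload, hardware, version].
environment = ("write-heavy", "NUC/4", "v2.0")

print(len(configuration_space), "configurations, e.g.,", configuration_space[0])
```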
  • 28. A simple transfer learning via model shift
[Plot: throughput response of the Source vs. the Target (a machine twice as fast); the target curve is a shifted/scaled version of the source curve]
[Pavel Valov, et al., “Transferring performance prediction models…”, ICPE’17]
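The idea can be sketched in a few lines: learn a linear mapping from the source model's predictions to a few measurements taken in the target environment, then push new configurations through it. The numbers below are invented (the target is assumed to be roughly twice as fast), and this is only a sketch of the model-shift idea, not the exact procedure of the cited paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def source_model(x):
    # Performance model learned in the source environment (the running example formula).
    return 5 + 3 * x[:, 0] + 15 * x[:, 1] - 7 * x[:, 0] * x[:, 1]

# A few configurations measured on the target (illustrative values, ~half the source times).
X_target = np.array([[0, 0], [1, 0], [0, 1]])
y_target = np.array([2.5, 4.0, 10.0])

# Fit target ~ a * source_prediction + b.
shift = LinearRegression().fit(source_model(X_target).reshape(-1, 1), y_target)

# Transfer: predict an unmeasured target configuration through the shifted source model.
x_new = np.array([[1, 1]])
print(shift.predict(source_model(x_new).reshape(-1, 1)))   # ~8.0
```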
  • 29. Our transfer learning solution
[Diagram: measure Configurations on the Simulator (Gazebo) to get source Data; measure a few Configurations on the TurtleBot to get target Data; Reuse the source data and Learn the target model f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2]
[P. Jamshidi, et al., “Transfer learning for improving model predictions…”, SEAMS’17]
  • 30. Gaussian processes for performance modeling
[Two GP plots (t = n and t = n + 1): output f(x) over input x, showing the Observations, the posterior Mean, the Uncertainty band, and how both update after a New observation]
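A minimal sketch of this step with scikit-learn (the configuration values and responses are invented): the fitted GP returns both a predicted mean and a standard deviation, which is what the uncertainty band in the figure represents.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.array([[0.1], [0.4], [0.6], [0.9]])   # a single numeric option, scaled to [0, 1]
y = np.array([3.2, 2.1, 2.4, 4.8])           # measured response, e.g., latency

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-2),
                              normalize_y=True).fit(X, y)

X_new = np.linspace(0, 1, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)   # posterior mean and uncertainty
print(mean, std)
```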
  • 31. Gaussian processes enable reasoning about performance
Step 1: Fit a GP to the data seen so far
Step 2: Explore the model for the regions of most variance
Step 3: Sample that region
Step 4: Repeat
[Figure: empirical model over the configuration space, the experiments taken so far, and the selection criterion used for sequential design]
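The four steps map directly onto a small active-sampling loop; below is a sketch in which measure() is a stand-in for running the real system, and the candidate grid and budget are arbitrary.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def measure(x):
    # Placeholder for a real benchmark run of one configuration.
    return np.sin(3 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
X = list(candidates[[0, -1]])              # start from two corner configurations
y = [measure(x[0]) for x in X]

for _ in range(10):
    gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))   # Step 1: fit GP
    _, std = gp.predict(candidates, return_std=True)                # Step 2: explore variance
    x_next = candidates[np.argmax(std)]                             # Step 3: sample that region
    X.append(x_next)
    y.append(measure(x_next[0]))                                    # Step 4: repeat
```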
  • 32. CoBot experiment: DARPA BRASS
[Plot: localization error [m] (0 to 8) vs. CPU utilization [%] (10 to 40), with an energy constraint, a safety constraint, the Pareto front, and a “sweet spot”; the configuration options are no_of_particles = x and no_of_refinement = y; better toward low error and low CPU]
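To make “Pareto front” and “sweet spot” concrete, here is a sketch that filters measured (localization error, CPU utilization) points by hypothetical constraint values and keeps only the non-dominated ones; all numbers are invented.

```python
import numpy as np

# Measured configurations: [localization error (m), CPU utilization (%)].
points = np.array([[0.5, 38], [1.0, 25], [2.0, 18], [4.0, 17], [6.0, 12], [7.5, 11]])
SAFETY_MAX_ERROR, ENERGY_MAX_CPU = 5.0, 35.0   # hypothetical constraint values

feasible = points[(points[:, 0] <= SAFETY_MAX_ERROR) & (points[:, 1] <= ENERGY_MAX_CPU)]

def pareto(front):
    # Keep a point if no other point is at least as good on both objectives and differs.
    keep = []
    for p in front:
        if not any((q[0] <= p[0]) and (q[1] <= p[1]) and (q != p).any() for q in front):
            keep.append(p)
    return np.array(keep)

print(pareto(feasible))   # candidates for the "sweet spot"
```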
  • 33. CoBot experiment
[Heatmaps of CPU [%] over the two options: Source (given), Target (ground truth, 6 months later), Prediction with 4 samples, and Prediction with transfer learning]
  • 34. Results: Other configurable systems
[Prediction results for CoBot, WordCount, SOL, RollingSort, Cassandra (HW), Cassandra (DB)]
  • 35. Transfer Learning for Improving Model Predictions in Highly Configurable Software
Pooyan Jamshidi, Miguel Velez, Christian Kästner (Carnegie Mellon University), Norbert Siegmund (Bauhaus-University Weimar), Prasad Kawthekar (Stanford University)
[First page of the paper, including Fig. 1: Transfer learning for performance model learning]
Details: [SEAMS ’17]
  • 36. Summary (transfer learning)
• Model for making tradeoffs between qualities
• Scales to large spaces and environmental changes
• Transfer learning can help:
  • Increase prediction accuracy
  • Increase model reliability
  • Decrease model-building cost
  • 38. Looking further: When transfer learning goes wrong
[Box plots of absolute percentage error [%] for sources s, s1-s6, compared against non-transfer learning]
source:        s      s1     s2     s3     s4     s5     s6
noise-level:   0      5      10     15     20     25     30
corr. coeff.:  0.98   0.95   0.89   0.75   0.54   0.34   0.19
µ(pe):         15.34  14.14  17.09  18.71  33.06  40.93  46.75
It worked! (low noise) … It didn’t! (high noise)
Insight: Predictions become more accurate when the source is more related to the target.
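The relatedness indicator in this table can be estimated cheaply: measure a small common set of configurations in both environments and correlate the responses. A sketch with made-up numbers:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Responses of the same configurations measured in the source and in the target.
y_source = np.array([5.1, 8.0, 19.7, 16.2, 11.4])
y_target = np.array([2.4, 4.1, 10.3, 8.0, 6.1])

print(pearsonr(y_source, y_target)[0])    # near 1.0: a linear shift is likely to work
print(spearmanr(y_source, y_target)[0])   # rank correlation: is the ordering preserved?
```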
  • 39. Key question: Can we develop a theory to explain when transfer learning works? NEXT STEPS
[Diagram: Source (Given) → Extract → Transferable Knowledge → Reuse → Target (Learn)]
Q1: How are source and target “related”?
Q2: What characteristics are preserved?
Q3: What are the actionable insights?
  • 40. Our empirical study: We looked at different highly-configurable systems to gain insights
SPEAR (SAT solver): analysis time; 14 options, 16,384 configurations; SAT problems; 3 hardware, 2 versions
x264 (video encoder): encoding time; 16 options, 4,000 configurations; video quality/size; 2 hardware, 3 versions
SQLite (DB engine): query time; 14 options, 1,000 configurations; DB queries; 2 hardware, 2 versions
SaC (compiler): execution time; 50 options, 71,267 configurations; 10 demo programs
[P. Jamshidi, et al., “Transfer learning for performance modeling of configurable systems…”, ASE’17]
  • 41. Observation 1: Linear shift happens only in limited environmental changes
Software | Environmental change        | Severity | Corr.
SPEAR    | NUC/2 -> NUC/4              | Small    | 1.00
SPEAR    | Amazon_nano -> NUC          | Large    | 0.59
SPEAR    | Hardware/workload/version   | V Large  | -0.10
x264     | Version                     | Large    | 0.06
x264     | Workload                    | Medium   | 0.65
SQLite   | write-seq -> write-batch    | Small    | 0.96
SQLite   | read-rand -> read-seq       | Medium   | 0.50
[Plot: source vs. target throughput under a small change, showing a near-linear shift]
Implication: Simple transfer learning is limited to hardware changes in practice
  • 42. Observation 2: Influential options and interactions are preserved across environments
Software | Environmental change        | Severity | Dim | Influential options (t-test)
x264     | Version                     | Large    | 16  | 12, 10
x264     | Hardware/workload/version   | V Large  |     | 8, 9
SQLite   | write-seq -> write-batch    | V Large  | 14  | 3, 4
SQLite   | read-rand -> read-seq       | Medium   |     | 1, 1
SaC      | Workload                    | V Large  | 50  | 16, 10
We only need to explore part of the space: 2^16 / 2^50 = 0.000000000058
Implication: Avoid wasting budget on non-informative parts of the configuration space and focus where it matters.
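One way to check this observation on your own data (sketched below with synthetic measurements and an arbitrary 10% threshold) is to learn a simple linear model in each environment and compare which options carry large coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 5))       # 60 sampled configurations, 5 binary options

# Synthetic responses: the same two options matter in both environments.
y_source = 4 + 9 * X[:, 0] + 6 * X[:, 2] + rng.normal(0, 0.5, 60)
y_target = 20 + 30 * X[:, 0] + 18 * X[:, 2] + rng.normal(0, 2.0, 60)

coef_src = LinearRegression().fit(X, y_source).coef_
coef_tgt = LinearRegression().fit(X, y_target).coef_

# An option counts as influential if its coefficient exceeds 10% of the largest one.
influential = lambda c: set(np.where(np.abs(c) > 0.1 * np.abs(c).max())[0])
print(influential(coef_src) == influential(coef_tgt))   # True if the important options agree
```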
  • 43. Transfer Learning for Performance Modeling of Configurable Systems: An Exploratory Analysis
Pooyan Jamshidi (Carnegie Mellon University), Norbert Siegmund (Bauhaus-University Weimar), Miguel Velez, Christian Kästner, Akshay Patel, Yuvraj Agarwal (Carnegie Mellon University)
Key result: in small environmental changes (e.g., homogeneous workload change), a linear transformation of the performance model explains the target environment, while in severe changes (e.g., drastic workload change) only knowledge that makes sampling more efficient transfers, e.g., by reducing the dimensionality of the configuration space.
[Fig. 1: Transfer learning is a form of machine learning that takes advantage of transferable knowledge from source to learn an accurate, reliable, and less costly model for the target environment.]
Details: [ASE ’17]
  • 45. Insights from our empirical study led to the development of a guided sampling
[Diagram: measure Configurations on the Simulator (Gazebo), reuse that Data together with a few measurements on the TurtleBot, and learn f(o1, o2) = 5 + 3o1 + 15o2 − 7o1 × o2]
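The essence of the guided sampler can be sketched as follows: options found influential in the source define a small subspace that is enumerated in the target, while everything else stays at its default. This illustrates the idea only, not the exact L2S algorithm; the option indices and counts are invented.

```python
import numpy as np
from itertools import product

n_options = 10
influential = [0, 3, 7]                  # learned in the (cheap) source environment
defaults = np.zeros(n_options, dtype=int)

def guided_samples():
    # Enumerate only the influential dimensions: 2^3 configurations instead of 2^10.
    for values in product([0, 1], repeat=len(influential)):
        config = defaults.copy()
        config[influential] = values
        yield config

for config in guided_samples():
    pass  # measure(config) in the target environment here

print(2 ** len(influential), "guided samples vs", 2 ** n_options, "exhaustive")
```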
  • 46. Simple transfer learning does not work under severe changes
[Plot: mean absolute percentage error vs. sample size (3 to 70) for L2S+GP, L2S+SEAMS, SEAMS, Model-shift, and Random+CART; lower is better. Model-shift suffers negative transfer and high prediction error, while guided sampling yields low prediction error]
  • 48. Research interests Software Engineering Machine Learning Systems [SEAMS ’14] [ASE ’17] [QoSA ’16] [MASCOTS ’16] [SEAMS ’17] [CCGrid ’16]
  • 49. Get inspired by opportunities in industry Ph.D. Postdoc 1 (Imperial) Postdoc 2 (CMU) Intel, Microsoft ATC DARPA
  • 51. What will the software systems of the future look like?
  • 52. VISION: Software 2.0
Increasingly customized and configurable; increasingly competing objectives: accuracy, training speed, inference speed, model size, energy
  • 54. Deep neural network as a highly configurable system
[DNN system development stack: Network Design, Model, Compiler, Hybrid Deployment, OS/Hardware; related concerns: neural architecture search, hyper-parameters, hardware optimization, deployment, topology; scope of this project highlighted]
[Also shown: a neural-network schematic with input, hidden, and output layers]
  • 55. Exploring the design space of deep networks
[Figure: cell-based image-classification architectures (small and large CIFAR-10 models and an ImageNet model built by stacking cells of sep.conv3x3 blocks, global pooling, and a linear/softmax layer), annotated as “Optimal Architecture (Yesterday)” vs. “Optimal Architecture (Today)” after a new fraud pattern appears]
  • 56. Exploring the design space of deep networks
[Scatter plot: inference time [h] (0.2 to 1.2) vs. validation error (0.1 to 0.8), marking the Default architecture and the Pareto-optimal ones; better toward low error and low inference time]
  • 57. Insight: Learn a model on a cheaper workload to explore the expensive workload faster
[Plots: inference time across ~300 network architectures, measured in hours on Workload W1 / Hardware H1 and in minutes on Workload W2 / Hardware H2]
  • 60. Many systems are now configurable. Given the ever-growing number of configurable systems, how can we enable learning practical models that scale well and provide reliable predictions for exploring the configuration space?