Maintainability of Transformations in Evolving MDE Ecosystems
Jokin García Pérez
Supervisor: Prof. Oscar Díaz
jokingarcia75@gmail.com
Index
1. Background
2. Problem statement
3. Model transformation co-evolution
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
5. Testing MOFScript transformations with HandyMOF
6. Conclusions
7. Future work
What is MDE?
It's all about models.
1. Models are the primary artifact
2. Models are abstractions of reality
3. Models conform to metamodels
4. Models are transformed
(Diagram: REALITY is abstracted in a model; the model conforms to a metamodel and is the input of a transformation.)
MDE Ecosystem
(Diagram: just as a natural ecosystem — phytoplankton, zooplankton, mussel, fish, human, linked by "eats" relations — must keep its balance, an MDE ecosystem links models, metamodels ("conforms to") and M2M/M2T transformations, and its balance must be maintained.)
Holistic maintenance
Maintainability accounts for 50–90% of the total software cost.
MDE maintainability:
- Increase of the number of artifacts
- Increase in the complexity of artifacts
- Larger upfront investment
Evolution is multidimensional: co-evolution and adaptation.
Index
1. Background
2. Problem statement
3. Model transformation co-evolution
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
5. Testing MOFScript transformations with HandyMOF
6. Conclusions
7. Future work
Problem statement
Four concerns: change detection, change impact analysis, change propagation, and verify and validate.
(Diagram: a transformation Tab maps a model Ma, conforming to MMa, into Mb, conforming to MMb; MMt is the transformation metamodel, and an M2T transformation generates code for a Platform. When MMb evolves into MMb' and the Platform into Platform', the changes (Δ) must be detected, their impact analysed, the changes propagated, and the result verified and validated.)
Index
1. Background
2. Problem statement
3. Model transformation co-evolution
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
5. Testing MOFScript transformations with HandyMOF
6. Conclusions
7. Future work
3. Model transformation co-evolution
Index
1. Problem statement
2. Solution
3. Case study
4. Proof-of-concept
Context (Problem statement)
1. Metamodel evolution: alternative versions, adaptation to a formalism, error correction, refactoring, …
2. Model co-evolution to metamodel changes is well covered in the bibliography
Problem (Problem statement)
1. Metamodel evolution impacts transformations
2. Manual migration of these transformations is cumbersome and error-prone
(Diagram: Tab maps Ma, conforming to MMa, into Mb, conforming to MMb; MMt is the transformation metamodel.)
Goal: a semi-automatic migration process for the impacted transformations
3. Model transformation co-evolution
Index
1. Problem statement
2. Solution
3. Case study
4. Proof-of-concept
Process (Solution)
1. Detection of simple changes: compare the MM with the evolved MM
2. Detection of complex changes: similarity analysis produces a similarity model, and simple changes are combined into complex changes
3. CNF conversion: the transformation is normalized
4. Co-evolution: the normalized transformation and the changes yield the co-evolved transformation
Detection stage (Solution)
1. Retrieve simple changes, using a comparison tool
2. Convert simple changes into complex changes if they are semantically related
Example: AddModelElement + UpdateAttribute + RemoveModelElement → SplitClass
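The grouping of simple changes into a complex change can be sketched as follows. This is a hypothetical Python sketch: the operation names follow the slides, but the data layout and the attribute-partition heuristic are assumptions, not the prototype's actual API.

```python
def detect_split_class(simple_changes):
    """Report a SplitClass when a RemoveModelElement of class C comes with
    AddModelElements whose attributes together partition C's attributes."""
    adds = [c for c in simple_changes if c["op"] == "AddModelElement"]
    removes = [c for c in simple_changes if c["op"] == "RemoveModelElement"]
    complex_changes = []
    for rem in removes:
        # candidate target classes: their attributes are a subset of C's
        parts = [a for a in adds if set(a["attrs"]) <= set(rem["attrs"])]
        covered = set().union(*(set(a["attrs"]) for a in parts)) if parts else set()
        if len(parts) >= 2 and covered == set(rem["attrs"]):
            complex_changes.append({"op": "SplitClass",
                                    "source": rem["name"],
                                    "targets": [a["name"] for a in parts]})
    return complex_changes

changes = [
    {"op": "RemoveModelElement", "name": "OpenElement",
     "attrs": ["title", "body", "style"]},
    {"op": "AddModelElement", "name": "OpenElement_1", "attrs": ["title"]},
    {"op": "AddModelElement", "name": "OpenElement_2", "attrs": ["body", "style"]},
]
print(detect_split_class(changes))
```

With the three simple changes above, the sketch reports a single SplitClass of OpenElement into OpenElement_1 and OpenElement_2, mirroring case study 4.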
Co-evolution stage (Solution)
Define correspondences that map the original transformation into an evolved transformation.
Taxonomy of changes based on their impact:
- Non-Breaking Changes (NBC)
- Breaking and Resolvable Changes (BRC)
- Breaking and Unresolvable Changes (BUC)
Following the "everything is a model" philosophy, the correspondences are implemented as HOTs.
Co-evolution stage (Solution)
Auxiliary steps:
- Conjunctive Normal Form (CNF) conversion: not ((A and B) or C) → (not A or not B) and not C
- Similarity analysis: e.g. Source1 matches Target1 (similarity = 1) but not Target2 (similarity = 0)
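The negation-pushing step behind the CNF conversion (De Morgan's laws) can be illustrated with a small sketch. Expressions are nested tuples here purely for illustration; this is not the thesis prototype's representation.

```python
def push_not(expr, negate=False):
    """Push negations inward using De Morgan's laws."""
    if isinstance(expr, str):                      # a variable
        return ("not", expr) if negate else expr
    op = expr[0]
    if op == "not":
        return push_not(expr[1], not negate)       # double negation cancels
    if op in ("and", "or"):
        # under negation, "and" flips to "or" and vice versa
        flipped = {"and": "or", "or": "and"}[op] if negate else op
        return (flipped,) + tuple(push_not(e, negate) for e in expr[1:])
    raise ValueError(op)

# not ((A and B) or C)  ->  (not A or not B) and not C
expr = ("not", ("or", ("and", "A", "B"), "C"))
print(push_not(expr))
```

Running it on the slide's example yields `("and", ("or", ("not", "A"), ("not", "B")), ("not", "C"))`, i.e. (not A or not B) and not C.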
Minimum deletion (Solution)
Delete only the strictly necessary rule fragments:
- String concatenation: s3 <- s1 + s2. If s1 is removed: s3 <- s2
- Collections: Set{A, B, C}. If A is removed: Set{B, C}
- Boolean expressions: not ErasmusGrant or (speakEnglish and enrolledLastYear). If speakEnglish is removed: not ErasmusGrant or enrolledLastYear
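Minimum deletion over boolean expressions can be sketched like this (again with an illustrative tuple representation, not the prototype's):

```python
def prune(expr, removed):
    """Delete only the fragments that reference the removed metaproperty."""
    if isinstance(expr, str):
        return None if expr == removed else expr
    op, *args = expr
    if op == "not":
        inner = prune(args[0], removed)
        return None if inner is None else ("not", inner)
    # and/or: drop deleted operands, unwrap if a single operand remains
    kept = [p for p in (prune(a, removed) for a in args) if p is not None]
    if not kept:
        return None
    return kept[0] if len(kept) == 1 else (op, *kept)

# not ErasmusGrant or (speakEnglish and enrolledLastYear)
expr = ("or", ("not", "ErasmusGrant"),
        ("and", "speakEnglish", "enrolledLastYear"))
print(prune(expr, "speakEnglish"))
```

Removing speakEnglish collapses the inner conjunction, leaving not ErasmusGrant or enrolledLastYear, exactly as in the slide's example.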
3. Model transformation co-evolution
Index
1. Problem statement
2. Solution
3. Case study
4. Proof-of-concept
Metamodels (Case study)
Extract superclass (Case study 1)
The AssistantMVC's Multiple class is introduced in the target metamodel.
Extract superclass is an NBC case: no action is needed.
Delete metaproperty (Case study 2)
The metaproperty optional is deleted from ExamElement; minimum deletion is applied.
Delete metaproperty (Case study 2)
Opposite filters in two rules:
- Rule 2: (value > 5 and optional) or long → (value > 5) or long
- Rule 1: not ((value > 5 and optional) or long), in CNF (not value > 5 or not optional) and not long → (not value > 5) and not long
Type change (Case study 3)
The AssistantMVC's fontName metaproperty is changed from string to integer.
This is a BUC in most of the cases; only a few cases are NBC.
Split class (Case study 4)
The AssistantMVC's OpenElement class is split into OpenElement_1 and OpenElement_2.
One rule becomes two, and bindings are moved to the corresponding rule depending on the metaproperty they use.
Add class and property (Case study 5)
A new subclass ExerciseElement and a new metaproperty style are introduced.
3. Model transformation co-evolution
Index
1. Problem statement
2. Solution
3. Case study
4. Proof-of-concept
Proof of concept
- Prototype for Ecore-based MMs and ATL
- Simple changes → complex changes
- HOT to adapt the transformation to the changes
Index
1. Background
2. Problem statement
3. Model transformation co-evolution
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
5. Testing MOFScript transformations with HandyMOF
6. Conclusions
7. Future work
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
Index
1. Problem statement
2. Case study
3. Solution
4. Evaluation
5. Proof-of-concept
Context (Problem statement)
1. Software components are built on top of platforms, which creates dependencies
2. Platform evolution is a common situation (DB, API, …)
(Diagram: an Application depends on the API of a CMS offering blog and wiki functionality.)
Context (Problem statement)
3. Platforms are in a perpetual beta state, with an increasing frequency of releases
4. The dependency is external: out of our control
Context (Problem statement)
A domain model is turned into code by an M2T transformation; the transformation code mixes model references with embedded platform code.
Problem (Problem statement)
Different versions of the platform leave the generated code and the M2T transformation outdated.
(Diagram: the DB evolves into DB' (Δ); what happens to the code produced by the M2T transformation?)
Solution (Problem statement)
An adapter adapts the generated code to the new platform.
(Diagram: the output of the M2T transformation passes through the adapter so that the code matches DB'.)
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
Index
1. Problem statement
2. Case study
3. Solution
4. Evaluation
5. Proof-of-concept
MediaWiki DB in Wikiwhirl (Case study)
MediaWiki is used by Wikipedia and by more than 40,000 other wikis, and its schema has gone through more than 200 upgrades.
The Wikiwhirl transformation mixes platform-dependent concepts with references to the domain model.
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
Index
1. Problem statement
2. Case study
3. Solution
4. Evaluation
5. Proof-of-concept
Solution overview (Solution)
Synchronize the generated code with the platform, using adapters at runtime.
Example: the transformation prints
print("INSERT into categorylinks (cl_from, cl_to, cl_sortkey, cl_timestamp) VALUES (@pageId, '" + categoryTitle + "', '" + pageTitle)
and the adapter emits
INSERT INTO categorylinks (cl_from, cl_to, cl_sortkey, cl_timestamp, cl_type, cl_sortkey_prefix, cl_collation) VALUES (@pageId, 'Software project', 'House_Testing', DATE_FORMAT(CURRENT_TIMESTAMP(), '%Y%m%d%k%i%s'), 'page', '', '0');
where the new columns cl_type, cl_sortkey_prefix and cl_collation have been added.
Process outline (Solution)
1. Injection (Schemol): the old and the new MediaWiki schemas are injected into an old schema model and a new schema model
2. Comparison (EMF Compare): a difference model is computed between the two schema models
3. Transformation (M2T + adapter): the domain model and the difference model drive the generation of code for the MediaWiki DB
All in one click.
Differences between platforms: DB schema (Solution)
Both schemas are injected (Schemol) and then compared (EMF Compare).
Process: Schema Modification Operators (SMO) (Solution)

SMO           | % of usage | Change type | Adaptation
Create table  | 8.9        | NBC         | New comment in the transformation on the existence of this table in the new version
Drop table    | 3.3        | BRC         | Delete the statement associated to the table
Rename table  | 1.1        | BRC         | Update the name
Copy table    | 2.2        | NBC         | (None)
Add column    | 38.7       | NBC/BRC     | For insert statements: if the attribute is NOT NULL, add the new column to the statement with a default value (from the DB if available, or according to the type if not)
Drop column   | 26.4       | BRC         | Delete the column and its value in the statement
Rename column | 16         | BRC         | Update the name
Copy column   | 0.4        | BRC         | Like the add column case
Move column   | 1.5        | BRC         | Like the drop column + add column cases
Adaptation (Solution)
1. "println" instructions are replaced with "printSQL" (platform-specific, schema-independent); the "printSQL" library, a ZQL extension, is imported
2. For each printSQL invocation, iterate over the changes reported in the difference model
3. Check whether any of the changes impacts the current statement
4. Retrieve the information needed to adapt the statement and add it to a list of parameters (the statement, the affected table, the column, …); a function that adapts the statement is then called, and the new statement is printed
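The "add column" adaptation from the SMO table can be sketched as follows. This is a hedged illustration: the function adapt_insert, the regular expression, and the example statement are invented for the sketch (the prototype uses a ZQL-based library, not regular expressions).

```python
import re

def adapt_insert(stmt, table, new_columns):
    """Append added (NOT NULL) columns and their default values to an
    INSERT statement for `table`; other statements are returned unchanged."""
    m = re.match(r"INSERT INTO (\w+)\s*\(([^)]*)\)\s*VALUES\s*\((.*)\);?$",
                 stmt, re.IGNORECASE | re.DOTALL)
    if not m or m.group(1) != table:
        return stmt                        # statement not impacted
    cols = m.group(2) + ", " + ", ".join(c for c, _ in new_columns)
    vals = m.group(3) + ", " + ", ".join(d for _, d in new_columns)
    return f"INSERT INTO {m.group(1)} ({cols}) VALUES ({vals});"

old = ("INSERT INTO categorylinks (cl_from, cl_to, cl_sortkey, cl_timestamp) "
       "VALUES (@pageId, 'Software project', 'House_Testing', CURRENT_TIMESTAMP);")
new = adapt_insert(old, "categorylinks",
                   [("cl_type", "'page'"), ("cl_sortkey_prefix", "''"),
                    ("cl_collation", "'0'")])
print(new)
```

The adapted statement carries the three MediaWiki columns added in the new schema, with default values picked according to their types.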
Adaptation output (Solution)
Added columns, deleted tables, deleted columns.
Dump changes from code to transformation (Solution)
A HOT turns the M2T transformation into an instrumented M2T transformation' that assists manual propagation: each print("select * from …") becomes printSQL("select * from …", line, column), so a record of the change to be done, and where (line and column in the transformation), can be generated:
RECORD:
# Added columns cl_type, cl_sortkey_prefix and cl_collation
# transformation line: 12, column: 11
INSERT INTO categorylinks (cl_from, cl_to, cl_sortkey, cl_timestamp, cl_type, cl_sortkey_prefix, cl_collation) VALUES (@pageId, 'Software project', 'House_Testing', DATE_FORMAT(CURRENT_TIMESTAMP(), '%Y%m%d%k%i%s'), 'page', '', '0');
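The instrumentation the HOT performs can be sketched with a small text rewrite (an invented approximation: the real HOT works on the transformation model, not on raw text, and `instrument` is a hypothetical name):

```python
import re

def instrument(transformation_text):
    """Rewrite print("…") calls into printSQL("…", line, column)."""
    out_lines = []
    for line_no, line in enumerate(transformation_text.splitlines(), start=1):
        def repl(m):
            col = m.start() + 1            # 1-based column of the call
            return f'printSQL({m.group(1)}, {line_no}, {col})'
        # match print("…") with a double-quoted string literal argument
        out_lines.append(re.sub(r'print\(("(?:[^"\\]|\\.)*")\)', repl, line))
    return "\n".join(out_lines)

src = 'print("select * from categorylinks")'
print(instrument(src))
```

Each emitted statement now carries the transformation line and column it came from, which is what lets the RECORD above point back at line 12, column 11.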
Roles (Solution)
Producer: implements the injector for the target platform and the adapter as a library for the transformation.
Consumer: imports the adapter library in the transformation and executes the batch.
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
Index
1. Problem statement
2. Case study
3. Solution
4. Evaluation
5. Proof-of-concept
Cost equations (Evaluation)
Manual Cost = D + P * #Impacts, where D is the detection time and P the propagation time per impact.
Assisted Cost = C + V * #Impacts, where C is the configuration time and V the verification time per impact.
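A toy computation with the two equations (all numbers invented) shows when the assisted approach pays off:

```python
def manual_cost(D, P, impacts):
    """Manual Cost = detection + propagation per impact."""
    return D + P * impacts

def assisted_cost(C, V, impacts):
    """Assisted Cost = configuration + verification per impact."""
    return C + V * impacts

# Break-even: assisted wins once impacts > (C - D) / (P - V), for P > V.
D, P, C, V = 10, 5, 30, 1
for n in (3, 5, 10):
    print(n, manual_cost(D, P, n), assisted_cost(C, V, n))
```

With these invented times the break-even point is 5 impacts: below it the configuration overhead dominates, above it the assisted approach is cheaper.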
Manual vs. assisted (Evaluation)
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
Index
1. Problem statement
2. Case study
3. Solution
4. Evaluation
5. Proof-of-concept
Proof of concept
- Library for MOFScript transformations
- Batch that automates the whole process: injection, comparison, execution
- Empirical evaluation
Index
1. Background
2. Problem statement
3. Model transformation co-evolution
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
5. Testing MOFScript transformations with HandyMOF
6. Conclusions
7. Future work
5. Testing MOFScript transformations with HandyMOF
Index
1. Motivating scenario
2. Problem
3. Solution
4. Proof-of-concept
5. Demo
Motivating scenario
A MOFScript M2T transformation takes a model and generates Java code.
5. Testing MOFScript transformations with HandyMOF
Index
1. Motivating scenario
2. Problem
3. Solution
4. Proof-of-concept
5. Demo
Problem
Traceability in M2T transformations: which transformation line generated which line of code?
5. Testing MOFScript transformations with HandyMOF
Index
1. Motivating scenario
2. Problem
3. Solution
4. Proof-of-concept
5. Demo
Traceability in HandyMOF (Solution)
1. A HOT instruments the M2T transformation so that it calculates physical positions
2. Executing the instrumented transformation produces traces
3. Which transformation lines were executed?
White-box testing is highly coupled to the language, and M2T has no winner language, so a hybrid approach is taken:
1. Generate a test suite with black-box testing
2. Check the lines covered by this suite
Testing in HandyMOF (Solution)
1. Test model generation (Pramana)
2. Trace generation
3. Finding the minimal model suite: the fewest models that preserve the coverage of the full model suite
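Finding the minimal model suite is essentially a set-cover problem; a greedy sketch follows (MinimalModelSuiteFinder is only approximated here, and the coverage data is invented):

```python
def minimal_suite(coverage):
    """Greedy set cover: coverage maps model name -> covered transformation
    lines; keep the fewest models that still cover every line."""
    remaining = set().union(*coverage.values())
    chosen = []
    while remaining:
        # pick the model covering the most still-uncovered lines
        best = max(coverage, key=lambda m: len(coverage[m] & remaining))
        if not coverage[best] & remaining:
            break
        chosen.append(best)
        remaining -= coverage[best]
    return chosen

suite = {
    "m1": {1, 2, 3},
    "m2": {2, 3},
    "m3": {4, 5},
    "m4": {1, 4},
}
print(minimal_suite(suite))
```

For this invented suite, m1 and m3 alone cover all five lines, so m2 and m4 can be dropped without losing coverage. Greedy set cover is only an approximation, which is one reason the thesis lists improving the minimal model suite algorithm as future work.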
Architecture (Solution)
(Diagram: Pramana and HandyMOF — with its TraceGenerator and MinimalModelSuiteFinder components — operate over metamodels, models, transformations, traces and the generated code.)
5. Testing MOFScript transformations with HandyMOF
Index
1. Motivating scenario
2. Problem
3. Solution
4. Proof-of-concept
5. Demo
Proof of concept
The HandyMOF tool:
- Graphically visualizes the traceability between transformation and code
- Analyses the coverage of the test suite in the transformation
5. Testing MOFScript transformations with HandyMOF
Index
1. Motivating scenario
2. Problem
3. Solution
4. Proof-of-concept
5. Demo
http://onekin.org/downloads/public/screencasts/handyMOF.htm
Index
1. Background
2. Problem statement
3. Model transformation co-evolution
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
5. Testing MOFScript transformations with HandyMOF
6. Conclusions
7. Future work
Model transformation co-evolution (Conclusions)
- Semi-automatic process to adapt transformations to metamodel evolution: map the original transformation into an evolved transformation that tackles the MM changes
- Derivation of complex changes from simple changes
- Minimum deletion
- Ecore-based MMs, ATL transformations
Co-evolve generated code (Conclusions)
- Mechanism to adapt code generated by M2T transformations to platform evolution
- Premises: platform instability and transformation coupling
- Applied to a specific case study: the MediaWiki DB
- Assistance in dumping the changes to the transformation
Testing M2T transformations (Conclusions)
- Line-level traceability between the transformation and the generated code
- Coverage: which transformation lines were executed?
General contributions (Conclusions)
- Point out and study the coupling between artifacts in an MDE ecosystem and its implications
- Synchronization techniques for co-evolution: MM – M2M transf. (HOT), Platform – M2T transf. (adapter), M2T transf. – code
- Improved traceability mechanisms
Index
1. Background
2. Problem statement
3. Model transformation co-evolution
4. Adapter-based approach to co-evolve generated SQL in M2T transformations
5. Testing MOFScript transformations with HandyMOF
6. Conclusions
7. Future work
Model transformation co-evolution (Future work)
Empirical assessment in a real scenario.
Co-evolve generated code (Future work)
- Generalization to other platforms
- Methodology for adapter development
Testing M2T transformations (Future work)
- Assist in creating missing models
- Improve the minimal model suite algorithm
- Add other M2T transformation languages
- Evaluation
Future research (Future work)
- Good practices in transformations
- M2T transformations: bridging the model world and the text world
Dissemination & contrast
- Model Transformation Co-evolution: A Semi-automatic Approach. Software Language Engineering (SLE) 2012, Dresden, Germany
- An Adapter-Based Approach to Co-evolve Generated SQL in M2T Transformations. Advanced Information Systems Engineering (CAiSE) 2014, Thessaloniki, Greece
- Testing MOFScript Transformations with HandyMOF. International Conference on Model Transformations (ICMT) 2014, York, UK
Comments & questions
Prototypes: www.onekin.org/content/jokin-garcia-0
jokingarcia75@gmail.com

Folding Cheat Sheet #4 - fourth in a seriesPhilip Schwarz
 
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsSensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsChristian Birchler
 
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...Angel Borroy López
 
Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Mater
 
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...Akihiro Suda
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Matt Ray
 

Kürzlich hochgeladen (20)

How To Manage Restaurant Staff -BTRESTRO
How To Manage Restaurant Staff -BTRESTROHow To Manage Restaurant Staff -BTRESTRO
How To Manage Restaurant Staff -BTRESTRO
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with Azure
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
 
Odoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 EnterpriseOdoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 Enterprise
 
Sending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdfSending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdf
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...
 
Machine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their EngineeringMachine Learning Software Engineering Patterns and Their Engineering
Machine Learning Software Engineering Patterns and Their Engineering
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024
 
Powering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsPowering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data Streams
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
 
CRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceCRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. Salesforce
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based project
 
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort ServiceHot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
 
SpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at RuntimeSpotFlow: Tracking Method Calls and States at Runtime
SpotFlow: Tracking Method Calls and States at Runtime
 
Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a series
 
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsSensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
 
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
 
Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)
 
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
 

PhD Maintainability of transformations in evolving MDE ecosystems

Editor's Notes

  1. Good morning. My name is Jokin García and I will present my dissertation, entitled “Maintainability of Transformations in Evolving MDE Ecosystems”. This work was supervised by Professor Oscar Díaz and was developed in the Onekin Research Group, in the Department of Computer Languages and Systems of the University of the Basque Country. Let us begin.
  2. What is Model-Driven Engineering?
  3. MDE is a software engineering approach that uses abstraction as a way to manage the complexity of systems.
  4. 1- A model represents concepts and relationships between them. MDE enacts models as first-class citizens in software engineering, following the “everything is a model” motto.
  5. 2- Some elements from the conceptualization of the system are used to construct a model of it.
  6. 3- Metamodels define concepts and their relationships within a domain.
  7. 4- Transformation engines use models to synthesize software artifacts.
  8. Ecosystem: the physical and biological components of an environment considered in relation to each other as a unit. An ecosystem is the result of a delicate and dynamic balance between its interacting components. Its dynamics are traditionally represented in a trophic web. If disturbances become too important, ecosystems may get out of balance (e.g. the meteorite impact that made the dinosaurs extinct). The ability of an ecosystem to reorganize itself and return to an equilibrium close to the initial one is called its resilience. Because of the disturbance, the new equilibrium that is reached may differ from the original one (some types of organisms may have disappeared, and others may have taken their place), so the ecosystem will have evolved.
  9. In computer science there is no common definition of software ecosystem, but it could be “a collection of software products which are developed and evolve together in the same environment”. Managing the complex interactions and dynamics of these ecosystems is an important problem that raises additional challenges compared to the maintenance of individual software systems. Architecturally, a software ecosystem consists of a platform, products built on top of that platform, and applications that extend those products with functionality developed by external developers. While software ecosystems offer numerous advantages, their development also creates a network of complex interdependencies between elements throughout the whole ecosystem. If we focus on the MDE ecosystem, models, metamodels, and transformations are heavily interrelated by their very foundations. MDE artifacts rarely exist in isolation; in fact, they are often tightly coupled: models conform to metamodels, transformations are defined from one metamodel to another metamodel (or to code), code is generated for a particular platform, etc. In this sense, we can say that modeling artifacts live in an ecosystem.
  10. Several studies indicate that software maintenance accounts for at least 50% of the total production cost, and sometimes even exceeds 90%. In MDE, maintainability is even more important because, in the context of software ecosystems, there are more sources of instability that can break the correctness of the artifacts: an artifact itself can change, but so can its related artifacts.
  11. The following reasons sustain the importance of tackling maintainability in MDE: (p. 18) Increase in the number of artifacts: within the MDE ecosystem, the system definition is split along different concerns (a.k.a. viewpoints) and abstraction layers, which increases the number of dependencies to be kept in sync. Increase in the complexity of artifacts: model-to-text transformations are more complex than the code they generate, because their expressiveness includes the grammar of the code language, the grammar of the transformation language, and references to the input model. This mixture and interleaving of different types of concepts (e.g. Java methods, control instructions from the transformation language, and concepts from the domain model) makes them harder to maintain. Larger upfront investment: the upfront investment in an MDE ecosystem (i.e. metamodels, models, and transformations) surpasses that of traditional development of a single application. MDE presumes certain stability over time in order to reuse the infrastructure; therefore, Return On Investment (ROI) is obtained in the medium/long run, as distinct generated applications benefit from the MDE infrastructure. Although empirical evidence has shown that MDE benefits maintainability, it is also agreed that there are negative influences, such as the need to keep models and code in sync.
  12. The same way species adapt to the environment and compete with each other, software adapts to new requirements, and applications struggle for a larger market share in order to increase their profit. Software evolution is an inevitable process where software systems need to be continually adapted to a changing environment, and MDE is not an exception. While the introduction of model-driven engineering brings advantages, it also requires a new style of evolution: since MDE hardwires many more architectural and design decisions than traditional development, it requires multiple dimensions of evolution.
  13. The biological phenomenon of co-evolution arises when the genetic composition of one species changes in response to a genetic change in another one. Software ecosystems evolve, and co-evolution denotes the necessary mutual changes of software components that interact with each other after one of them evolves. In MDE, when one of the artifacts evolves, the system may become inconsistent, and related artifacts must co-evolve accordingly to recover consistency. The typical adaptation process is: 1- change detection: a first phase where the differences between versions of the evolving artifact are retrieved. 2- impact analysis: once the relationships between versions have been established, it is assessed which parts of the system are likely to be affected by a change in the related artifacts. 3- change propagation (p. 61): once we know which artifacts have been affected by which kinds of changes, co-evolution actions are carried out. Sometimes this co-evolution can be done automatically and sometimes manually, depending on the complexity of the impact. The aim of co-evolution is to re-establish the synchronization between artifacts.
  14. Similarly to “traditional” software, MDE artifacts evolve over time as well. The evolution process followed by the software basically remains the same in MDE: first, there exists a change that needs to be addressed for any of many reasons (i.e. error correction, more functionality, etc.); next, it is assessed what the impact on the system will be; then the change is propagated to the system in a coherent way; and finally, it is verified that the system works as expected.
  15. There are many change sources. This thesis focuses on two of them: the metamodel and the technological platform. The main difficulty is that the evolving artifacts may belong to different organizations; there might be no control over the evolution, and only the initial and final states of the artifacts might be known.
  16. (same as previous)
  17. The consequences of the previously detected changes must be studied. Specifically, (a) metamodel evolution impacts model-to-model transformations, and (b) platform evolution impacts model-to-text transformations. This kind of transformation has platform-specific code hardcoded in print statements, with references to the model interleaved in those statements. When the target platform evolves, these statements become outdated.
  18. (same as previous)
  19. Once the impact of the changes has been established, those changes have to be propagated to restore the coherence between related artifacts. This propagation is cumbersome, error-prone and, on some occasions, frequent, which advises assisting the process as much as possible.
  20. Contributions: A perfective action is proposed to recover the syntactical correctness of the transformation and to ensure its coherence with the rest of the system. A semi-automatic co-evolution process is proposed, which adapts the transformation to metamodel changes. In the case of M2T transformations, a preventive mechanism is introduced to dynamically adapt the generated code to platform evolution.
  21. Here, we focus on M2T transformations due to their external dependency on technological platforms. We have found that performing this maintainability task is difficult due to the chasm between the model world and the text world, where three different kinds of artifacts (i.e. model, transformation and code) need to be in sync. To check this synchronization, traceability is needed among artifacts; however, template-based M2T transformation languages do not provide traceability between the transformation and the generated code. Typical questions that arise include: “Has this part of the transformation been executed with this input model, and did it generate the proper code?” or “Which transformation line generated this code fragment?”. In addition, transformation coverage analysis (i.e. the extent to which the different model variants have been considered by the transformation) is also of interest to ensure a thorough validation.
  22. A supporting tool has been developed to ascertain the correspondence between transformation statements and the code they generate, and to facilitate a proper coverage of the transformation code.
  23. In the next three sections we are going to see the contributions in more detail.
  24. First, we are going to analyse what happens with the transformation when the metamodels evolve
  25. In MDE, metamodels are subject to evolution; this is accepted by researchers and practitioners, and there are many reasons for it. During design, alternative metamodel versions may be developed. During implementation, metamodels may be adapted to a concrete metamodeling formalism supported by a tool. Finally, during maintenance, errors in a metamodel may be corrected; moreover, parts of the metamodel may be redesigned due to a better understanding or to facilitate reuse. The consequence of this evolution is that the artifacts related to the metamodel become outdated. This is the case for both models and transformations. The impact of metamodel changes on models has been studied in several works; on the contrary, transformation co-evolution has received less attention. It was the missing piece.
  26. The summary of the problem would be: (context) metamodel evolution impacts transformations, because M2M transformations are specified using concepts from the input and output metamodels. (motivation) Manual migration is cumbersome and error-prone.
  27. (solution) Taking this into account, we propose a semi-automatic migration process that adapts transformations to metamodel evolution.
  28. This picture outlines an overview of the process: given the original source and target metamodels, the evolved source and/or target metamodel(s) and the transformation, it generates the adapted transformation model, which is then extracted to an .atl file. First comes the detection phase: 1) given the original and evolved metamodels, simple changes are detected using a comparison tool; 2) then, these changes are grouped into complex changes where applicable, in a simple-to-complex transformation. The co-evolution phase comprises two auxiliary steps and the co-evolution itself: 3) in the similarity analysis, a similarity model is derived from the input and output metamodels (the latest versions) and is optionally an input to the adaptation, to infer where to map new elements; 4) on the other hand, conditional expressions might need to be rewritten in the CNF conversion phase, to make it possible to apply minimum-deletion algorithms to the transformation; 5) last, the migrator performs the actual adaptation, implemented with a transformation-modification HOT pattern.
  29. As the comparison tool to obtain the simple changes we use EMF Compare. This tool takes two models as input and reports the differences along its Difference metamodel. We have extended this metamodel in order to support the concept of complex change. With this metamodel, we are able to transform the model of simple changes into a model of complex changes using a transformation. The reason is that we want changes to represent a meaningful transition on the metamodel, so as not to miss the designer's intention and to avoid misunderstandings.
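To make the simple-to-complex grouping step concrete, here is a minimal Python sketch. The actual implementation is an ATL transformation over the extended Difference metamodel; the tuple encoding of changes and the "move-property" detection rule below are illustrative assumptions, not the thesis code.

```python
def group_changes(simple_changes):
    """Fold matching (delete, add) pairs of the same property into a single
    'move-property' complex change; pass the remaining simple changes through.

    Each simple change is a tuple: (kind, owning_class, property_name)."""
    deletes = [c for c in simple_changes if c[0] == "delete-property"]
    adds = [c for c in simple_changes if c[0] == "add-property"]
    rest = [c for c in simple_changes
            if c[0] not in ("delete-property", "add-property")]

    complex_changes, used = [], set()
    for _, cls_from, prop in deletes:
        # A delete paired with an add of the same property is a move.
        match = next((a for a in adds
                      if a[2] == prop and id(a) not in used), None)
        if match:
            used.add(id(match))
            complex_changes.append(("move-property", prop, cls_from, match[1]))
        else:
            complex_changes.append(("delete-property", cls_from, prop))
    complex_changes += [a for a in adds if id(a) not in used]
    return complex_changes + rest
```

A grouping like this is what lets the co-evolution treat a delete/add pair as one intentional refactoring rather than two independent breaking changes.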
  30. Once the changes model has been obtained, it is used as input to the next phase: the co-evolution. Our adaptation approach is based on a taxonomy of changes, divided into three groups according to their impact on transformations: NBC: changes with no impact on the transformation. BRC: changes that do impact the transformation rules, but whose impact is amenable to automation. BUC: changes that also impact the transformation, but for which full automation is not possible and user intervention is required. We do not have time to see all the changes in detail; we will study five representative scenarios.
  31. Before dealing with the adaptation we need two auxiliary steps: one related to subtractive changes and the other to additive changes. CNF conversion: CNF is a conjunction of clauses, where a clause is a disjunction of literals (i.e. ANDs that contain ORs). In order to delete transformation structures properly, it is important to have the expressions normalized. Similarity analysis is used to achieve a higher degree of automation for additive changes: a similarity analysis is conducted between the source and target metamodels, looking for an element matching each new element. Its effectiveness depends a lot on the semantic gap between the metamodels. (This is based on metamodel-matching works.)
  32. When a metaclass or a metaproperty is deleted, the affected transformation elements have to be removed while keeping the transformation logic coherent. If we deleted everything and left the transformation empty, it would compile properly, but that is surely not what we expect; what we expect is to remove only the necessary elements. There are different language structures to take into account: some of them are intuitive, such as concatenations or collections, where we only remove the affected elements, but boolean expressions are not straightforward. Boolean expressions are important because they are generally used in rule filters. First, we need to convert the expression to CNF, as said before, using the equivalence rules of predicate calculus. Then we need to apply conversion rules. There are two removal policies: one is R subindex T, which means that the removed literal is satisfied by default; the other is R subindex F, which means that it is unsatisfied by default. Why? In the process of metamodel redesign, the designer could help by giving the reason behind the decision to remove a metaproperty from the metamodel. For example, if all students in the university had a very good level of English (because it is a new precondition for enrollment), it could be considered as satisfied by default, and in case of removing the speakEnglish metaproperty, its value could be reinterpreted as removed-and-satisfied-by-default (RT). On the other hand, if the university had decided not to participate in the Erasmus Program, no student would have such a grant, and in case of removing the ErasmusGrant metaproperty, its value could be reinterpreted as removed-and-unsatisfied-by-default (RF).
If, in the previous example, there had been the expression not ErasmusGrant or (speakEnglish and enrolledLastYear), and later the redesign process decided to remove the speakEnglish metaproperty, then according to the truth table the expression would be rewritten as not ErasmusGrant or enrolledLastYear; if the removed metaproperty had been ErasmusGrant with the RF policy, then the new expression would have been true.
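The truth-table rewriting just described can be sketched as follows, assuming a CNF expression encoded as a list of clauses, each clause a list of (property, positive?) literals. The encoding and function names are illustrative assumptions; the thesis implements this over ATL/OCL expressions.

```python
TRUE, FALSE = "TRUE", "FALSE"

def remove_literal(cnf, prop, satisfied_by_default):
    """Remove a deleted metaproperty from a CNF filter expression.
    R_T policy: satisfied_by_default=True; R_F policy: False."""
    result = []
    for clause in cnf:
        new_clause, clause_true = [], False
        for name, positive in clause:
            if name != prop:
                new_clause.append((name, positive))
            elif positive == satisfied_by_default:
                clause_true = True  # literal evaluates to true: clause is satisfied
            # otherwise the literal evaluates to false: drop it from the clause
        if clause_true:
            continue                # a satisfied clause disappears from the conjunction
        if not new_clause:
            return FALSE            # empty clause: the whole expression is false
        result.append(new_clause)
    return TRUE if not result else result
```

Run on the ErasmusGrant example above: `not ErasmusGrant or (speakEnglish and enrolledLastYear)` is `(¬E ∨ S) ∧ (¬E ∨ L)` in CNF; removing speakEnglish under R_T yields `not ErasmusGrant or enrolledLastYear`, and removing ErasmusGrant under R_F collapses the expression to true, matching the truth tables in the slide.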
  33. Now I will use a scenario to illustrate some of the change types. We use a popular transformation from the bibliography that transforms exam questions into Web-based exams along the MVC pattern. On the left is the ExamXML metamodel, which is the input, and on the right the AssistantMVC metamodel, which is the output. I will introduce some examples of changes in a metamodel: Scenario 1: the AssistantMVC Multiple class is introduced in the target metamodel; this new class abstracts away the commonality of three existing classes: MultipleChoiceController, MultipleChoiceView and MultipleChoice. Scenario 2: the property optional is deleted from ExamXML's ExamElement. Scenario 3: the AssistantMVC fontColor metaproperty is changed from string to integer. Scenario 4: the ExamXML OpenElement class is split into OpenElement_1 and OpenElement_2. Scenario 5: a new subclass ExerciseElement is added to the ExamElement metaclass, and a new property style is added to the View target metaclass.
  34. Scenario 1: it is important to have this change represented as a complex change; otherwise it would be interpreted as a set of simple breaking changes instead of one non-breaking complex change.
  35. This minimum deletion deserves further explanation
  36. The idea is applied in our second scenario. A quite common situation is to have the opposite filter in two rules; using minimal deletion we do not need to delete either of the rules. Back to our second scenario (i.e. removal of optional from ExamElement): consider we have two rules whose filters refer to optional. In this way, surgical removal limits the impact of the deletion of properties on the associated rules.
  37. The problem with types is that ATL does not check them at compile time, so a transformation can be syntactically right but fail at runtime. Only a few cases are NBC: when a type changes to a subclass, or in the case of numerical values (integer, double, ...). In the other cases, we can only warn the designer about a possible error.
  38. The fourth scenario is an example of a complex breaking change. As a result, rules having OpenElement as source must co-evolve. This is the case of the OpenQuestion rule, which is split into two rules: OpenQuestion_1 and OpenQuestion_2. The former contains the bindings related to OpenElement_1, while the latter keeps the bindings for OpenElement_2.
  39. When the metaclass is split, some of the attributes remain in one metaclass and the others in the other. In this case, attr1 remains in OpenElement_1 and attr2 is moved to OpenElement_2. The bindings corresponding to each attribute will belong to the corresponding rule.
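The binding partition described above can be sketched minimally as follows, with a rule's bindings as a plain dict from target attribute to source expression. This is a hypothetical illustration; the real migrator operates on the ATL transformation model via a HOT.

```python
def split_rule(rule_bindings, attrs_1, attrs_2):
    """Partition a rule's bindings between the two rules created when its
    source metaclass is split: each binding follows the metaclass that now
    owns the attribute it refers to."""
    rule_1 = {a: v for a, v in rule_bindings.items() if a in attrs_1}
    rule_2 = {a: v for a, v in rule_bindings.items() if a in attrs_2}
    return rule_1, rule_2
```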
  40. Additive evolution is an NBC case. However, it is not unusual to need new rules or bindings to maintain the metamodel coverage level. For this purpose we include in the co-evolution the option to generate partially new rules, as they are not fully automatable. If a similarity model is used, the migrator may be able to fill in the generated skeletons with the missing information.
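As an illustration of the kind of name-based heuristic such a similarity analysis may rely on, here is a sketch using standard-library string similarity. The threshold and function names are assumptions for illustration, not the matching algorithm of the prototype.

```python
from difflib import SequenceMatcher

def best_match(new_element, candidates, threshold=0.5):
    """Propose the candidate metamodel element whose name is most similar
    to the new element's name; return None below the confidence threshold."""
    scored = [(SequenceMatcher(None, new_element.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None
```

For scenario 5, a new target property `style` would be tentatively paired with the most similarly named source element, and the designer would confirm or correct the proposal.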
  41. The approach is realized for EMOF/Ecore-based metamodels and ATL transformations. The prototype has two main artifacts: a transformation that turns simple changes into complex changes, and a HOT that adapts the transformation to the changes.
  42. Context: broadly speaking, software components often do not work in isolation, but are built on top of platforms that provide some functionality. While this offers numerous advantages, it also creates dependencies, and the evolution of these platforms is a common situation. One paradigmatic platform is a database: in the database world, the evolution of schemas has always been a concern, and this is the platform used in this work. Another example would be APIs, whose evolution can leave client applications outdated.
  43. Two characteristics make platforms problematic. On the one hand, the perpetual-beta phenomenon: developers have to work with software components that are in a beta version as if they were at production level. This increases the frequency of releases, and therefore the number of required co-evolution actions. On the other hand, the platform is often an external dependency, i.e. it belongs to a different organization. Changes in these components are usually out of the control of the rest of the partners, and might be accompanied by poor documentation, lost communication with the partner responsible for the change, and so on. This rules out the possibility of tracking platform upgrades to be replicated later.
  44. M2T transformations are composed of static and dynamic parts: they interleave target-platform code, instructions from the transformation language (conditional and iteration instructions) and references to the input model. Platform-specific code (information about tables, columns, ...) is embedded in the transformation; therefore, there is domain variability but not platform variability. In a database scenario, transformations do not specify but construct SQL scripts: the SQL script is dynamically generated once the references to the input model are resolved. The transformation contains references to a model whose metamodel is unknown a priori and is resolved at runtime.
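The static/dynamic interleaving can be illustrated with the following sketch, using a Python function in place of a MOFScript template. The SQL syntax and the table/column names are the static, platform-dependent part hardcoded in the transformation; the values are the dynamic part resolved from the input model (names here are illustrative, not the WikiWhirl transformation).

```python
def generate_insert(page):
    """Emit one SQL statement the way a template-based M2T transformation
    does: static SQL text and schema names are hardcoded, while the values
    are looked up in the input model element at generation time."""
    return (f"INSERT INTO page (page_title, page_namespace) "
            f"VALUES ('{page['title']}', {page['namespace']});")
```

When the platform schema evolves (e.g. a column is renamed), the static part of every such statement becomes outdated, which is precisely the vulnerability the adapter targets.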
  45. Problem: forward engineering advocates for code to be generated dynamically through M2T transformations that target a specific platform. In this setting, platform evolution can leave the transformation, and hence the generated code, outdated. Where is the platform-dependent information? In the transformation. MDA guide: 10 years later http://modeling-languages.com/anybody-using-both-mda-platform-independent-and-platform-specific-models/
  46. Solution: to make transformations more resilient to changes, we propose adding an adaptability mechanism for the most vulnerable parts of the transformation, those which depend on the platform. The proposed solution is to use the well-known Adapter pattern from object orientation with M2T transformations.
  48. MediaWiki is a wiki engine, currently used by almost 40,000 wikis. In a 4½ year period, the MediaWiki DB had 171 schema upgrades. This gives us an idea of the importance and frequency of changes.
  49. This slide shows a snippet of the transformation from the mindmap to the wiki. These statements are built upon the DB schema of MediaWiki and, in so doing, create an external dependency of WikiWhirl w.r.t. MediaWiki. The table and column names are visible in the print statements.
  50. To tackle the mentioned problem, data-manipulation requests (i.e. insert, delete, update, select) are redirected to the adapter during the transformation. The adapter outputs the code according to the latest schema release. In this way, the main source of instability (i.e. schema upgrades) is isolated in the adapter, and the transformation does not need to change.
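A minimal sketch of the adapter idea (the class and parameter names are assumptions; the actual adapter is used from the MOFScript transformation): the transformation asks the adapter for the statement instead of printing SQL itself, and the adapter rewrites the request against the latest schema using the difference model, here reduced to a rename map and a set of dropped columns.

```python
class SchemaAdapter:
    def __init__(self, renamed=None, dropped=None):
        self.renamed = renamed or {}   # old column name -> new column name
        self.dropped = set(dropped or [])

    def insert(self, table, values):
        """Build an INSERT statement valid for the latest schema release."""
        cols, vals = [], []
        for col, val in values.items():
            if col in self.dropped:
                continue                        # column no longer exists
            cols.append(self.renamed.get(col, col))
            vals.append(repr(val))
        return (f"INSERT INTO {table} ({', '.join(cols)}) "
                f"VALUES ({', '.join(vals)});")
```

With this indirection, a schema upgrade only requires regenerating the difference model fed to the adapter; the transformation's insert requests stay untouched.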
  51. The process overview is as follows: first, the DB schemas (i.e. new schema, old schema) are injected as Ecore models with the Schemol tool (step 1); next, the schema difference is computed (i.e. the Difference model) with EMF Compare (step 2); finally, this schema difference feeds the adapter used by the transformation (i.e. the MOFScript program).
  52. As everything is implemented in Java and Ant, the whole process can be executed with a batch file.
  53. Inject-and-compare example: as we can see, the “trackbacks” and “math” tables have been removed, the “user_options” column has been removed from the “user” table, and three new columns have been added to the “categorylinks” table. Both schema versions are transformed into models with Schemol and, after comparison with EMFCompare, a difference model is obtained.
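The inject-and-compare step boils down to diffing two schema models. The toy sketch below mimics only the outcome: the real pipeline injects SQL schemas into Ecore models with Schemol and diffs them with EMFCompare, whereas here schemas are plain dicts and the function name `diff` is invented.

```python
# Toy schema diff: schemas as {table: set(columns)}. Mimics the kind of
# difference model EMFCompare would produce for the MediaWiki example.

def diff(old, new):
    return {
        "dropped_tables": sorted(set(old) - set(new)),
        "added_columns": {t: sorted(new[t] - old.get(t, set()))
                          for t in new if new[t] - old.get(t, set())},
        "dropped_columns": {t: sorted(old[t] - new[t])
                            for t in old if t in new and old[t] - new[t]},
    }

old = {"math": {"math_inputhash"}, "trackbacks": {"tb_id"},
       "user": {"user_name", "user_options"},
       "categorylinks": {"cl_from", "cl_to"}}
new = {"user": {"user_name"},
       "categorylinks": {"cl_from", "cl_to", "cl_type",
                         "cl_sortkey_prefix", "cl_collation"}}
d = diff(old, new)
```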
  54. The Difference model is described as a set of DB operators. Curino et al. proved that a set of (eleven) Schema Modification Operators (SMOs) can completely describe a complex schema evolution scenario (in fact, they ran the experiment on Wikipedia). The table indicates the frequency of these changes for the MediaWiki case. The most frequent changes (e.g. 'create table', 'add column', 'drop column' or 'rename column') can be identified from schema differences. Complex changes (e.g. 'distribute table' or 'merge table') are a sequence of simple changes. Fortunately, as the “change type” column shows, most of the changes are NBC or BRC, which means that human intervention is not required for their adaptation. For each SMO, there is an adaptation action that restores consistency. For instance, if a column is removed, that column is removed as well from the statements. NBC: Non-Breaking Changes, changes that do not affect the code or transformation. BRC: Breaking Resolvable Changes, changes that can be automatically propagated. BUC: Breaking Unresolvable Changes, changes that require human intervention to be propagated. In [Model transformation co-evolution: a semi-automatic approach] we propose some rules (implemented as an M2M transformation) that relate simple changes to build complex ones. For instance, there is a 'move column' case if the same column is deleted from one table and added to another. Unfortunately, the 'distribute table' and 'merge table' cases cannot be automatically detected and are therefore not included in the table. This kind of change tends to be scarce: for MediaWiki, 'distribute table' never occurred, while 'merge table' accounts for 1.5% of the total changes.
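The rule relating simple changes to composite ones can be illustrated with a small sketch. The thesis implements these rules as an M2M transformation over difference models; the dict-based shapes and the `detect_moves` name below are illustrative only (and `user_properties` is just an example target table).

```python
# Detect a composite 'move column' change from two simple changes:
# the same column deleted from one table and added to another.

def detect_moves(deleted, added):
    """deleted/added: lists of (table, column) simple changes."""
    added_by_col = {col: tab for tab, col in added}
    moves = []
    for tab, col in deleted:
        if col in added_by_col and added_by_col[col] != tab:
            moves.append((col, tab, added_by_col[col]))  # (column, from, to)
    return moves

changes = detect_moves(deleted=[("user", "user_options")],
                       added=[("user_properties", "user_options")])
```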
  55. The adapter is platform-specific, and will only adapt SQL code, but it is schema-agnostic (it does not matter which DB schema has to be managed). The approach mainly consists of replacing the 'print' statements with invocations to the adapter (e.g. the printSQL function). On each invocation, the adapter checks whether the <SQL statement> acts upon a table that is subject to change. If so, the adapter returns a piece of SQL code compliant with the new DB schema. The adaptations are implemented in a library that has to be imported into the M2T transformation. This library contains functions that adapt the statements to the latest version of the platform, leaving the transformation untouched. The adapter is implicitly called by the transformation at runtime, adapting the statements to the changes. Implementation-wise, the adapter has two inputs: the Difference model and the model of the new schema (to obtain the full description of new attributes, if applicable). The ZQL open-source SQL parser is used to parse SQL statements into Java structures. This parser is extended with adaptation functions that modify the statements (e.g. removeColumn). The snippet provides a glimpse of the adapter for the 'remove column' case; the structure is similar for the other adaptations. It starts by iterating over the changes reported in the Difference model (line 5). Next, it checks (line 6) that the deleted column's table corresponds to the table name of the statement (retrieved in lines 3-4). Then, the statement, the table name and the removed column are all added to a list of parameters (lines 7-10). Finally, the adapter outputs an SQL statement without the removed column, using a function that takes the list of parameters and modifies the expression (lines 12-13).
  56. Going back to our scenario, this would be an example of the output: 1. the introduction of three new attributes in the “categorylinks” table, namely cl_type, cl_sortkey_prefix and cl_collation; accordingly, the adapter modifies SQL insert/update statements so that new 'Not Null' columns are initialized with their default values; 2. the deletion of the “math” and “trackback” tables; this causes the affected printSQL statements to be left as comments; 3. the deletion of the “user_options” column in the “user” table; consequently, the affected printSQL statements output the SQL but with the affected column removed. In addition, a comment is introduced to note this fact (lines 8-13 below).
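The first adaptation (initializing newly added 'Not Null' columns with their defaults) can be sketched as follows. This is a hypothetical Python illustration; the real adapter does this over ZQL-parsed Java structures, and the default value shown for cl_type is invented.

```python
# Sketch: extend an INSERT with newly added NOT NULL columns, taking
# their default values from the new schema model.

def add_new_columns(table, columns, values, new_not_null):
    """new_not_null: {table: [(column, default), ...]} from the new schema."""
    for col, default in new_not_null.get(table, []):
        if col not in columns:
            columns = columns + [col]
            values = values + [default]
    return "INSERT INTO %s (%s) VALUES (%s);" % (
        table, ", ".join(columns), ", ".join(values))

stmt = add_new_columns("categorylinks",
                       ["cl_from", "cl_to"], ["1", "'Books'"],
                       {"categorylinks": [("cl_type", "'page'")]})
```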
  57. After some evolution iterations, the developer may decide to transfer the changes made by the adapter to the transformation itself. How can we dump the adaptations performed on the code into the transformation? What I propose is a semi-automatic solution: an impact analysis of the changes in the platform, so that the developer can adapt the transformation. In the last step of the process (3), apart from adapting the generated code, a record of the changes is created, in case the developer wants to update the transformation itself; the record can then serve as an aid. This record contains the platform change, the transformation position affected by it (line and column), and the new statement. In order to do this impact analysis, the line and column of each print statement must first be added to it as parameters. This is done automatically using a Higher-Order Transformation.
  58. Note that there are two roles. On the one hand, adapter producers are those who implement the adapter for a specific platform; they have to implement both the injector and the adapter. On the other hand, consumers are transformation developers who simply use the adapter.
  59. An evaluation was conducted comparing the performance of the approach with manual adaptation. The experiment was carried out by 8 PhD students, from the point of view of the consumer. There were two groups: one had to do the adaptation manually and the other used the adapter. - Manual: participants had to check the MediaWiki website, navigate through the hyperlinks, and collect those changes that might impact the code. The experiment yielded an average of 38' for D_Mediawiki. I found this to be the most cumbersome task, but it will depend on the scenario. Next, the designer peers at the code, updates it, and checks the generated code. On average, this accounts for 4' for a single update (i.e. PBR). - Assisted: participants conducted two tasks: (1) configuration of the batch that launches the assisted adaptation, and (2) verification of the generated SQL script (some developers check what has been generated by the transformation and others do not). To compute the profitability of the approach for another platform, it is suggested to plug platform-specific constant values (D, P, V) into the cost equations. D: the time estimated for detecting whether the new MediaWiki release impacts the transformation (D very much depends on the documentation available); P: the time needed to Propagate a single change to the MOFScript code; #Impacts: the number of instructions in the transformation Impacted by the upgrade; C: the time needed to Configure the batch; V: the time needed to Verify that a single automatically adapted instruction is correct and to alter it, if applicable.
  60. The cost reduction rests on the existence of an infrastructure, namely the adapter and the batch. The adapter is domain-agnostic and hence can be reused in other domains. On these grounds, I do not consider the adapter as part of the development effort. As said, the evaluation is from the point of view of the consumer. However, there is a cost of becoming familiar with the tool, which includes the configuration of the batch (e.g. DB settings, file paths and the like) and, above all, the learning time. We estimated this accounts for 120' (reflected as the upfront investment for the assisted approach in the figure). On these grounds, the breakeven is reached after the third release.
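One plausible reading of the constants above is a pair of linear cost equations: manual cost per release ≈ D + P × #Impacts, assisted cost per release ≈ C + V × #Impacts plus a one-off upfront investment. The sketch below uses the 38' and 4' figures and the 120' upfront cost from the slides, but the C, V and #Impacts values are illustrative, and the exact equations in the thesis may differ.

```python
# Hedged sketch of the cost equations. Times in minutes; #Impacts = 10,
# C = 10 and V = 2 are invented for illustration.

def manual_cost(releases, impacts, D=38, P=4):
    # Detection plus per-impact propagation, repeated every release.
    return releases * (D + P * impacts)

def assisted_cost(releases, impacts, C=10, V=2, upfront=120):
    # One-off learning investment, then batch config + per-impact check.
    return upfront + releases * (C + V * impacts)

# Breakeven: first release count where the assisted approach is cheaper.
breakeven = next(r for r in range(1, 20)
                 if assisted_cost(r, 10) < manual_cost(r, 10))
```

With these illustrative numbers the assisted approach pays off from the third release onward, matching the breakeven reported in the evaluation.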
  61. The adapter is supported as a library of the transformation. A batch script automates the whole process, including the injection and comparison of the platform schemas and the execution of the transformation. An empirical evaluation has been performed to assess the profitability of the approach.
  62. This is a fairly simple scenario. We have a model that contains maps, where each map has different addresses. These models are transformed into Java code that uses the Google Maps API to show the addresses on the map. Depending on the address type, the map might show different things (for example, in the case of restaurants). Each map contains a set of addresses, and each address has a name, position, description, telephone and pictures. The slide shows the input map model (a map containing addresses) and the desired output.
  63. Let's look at the transformation that accomplishes this. At first sight it is not particularly intuitive or understandable, partly because it combines three different languages. First, the transformation language, MOFScript in this example. Second, Java, the language of the code we want to generate. And third, references to the input model, which will not be resolved until execution time.
  64. So, given a model-to-text transformation, first we wanted to know which transformation line generated a particular code line and, second, what each transformation line generated. The answer is clear: traceability! And this is where MOFScript comes into play. MOFScript already provides traceability between the input model and the generated code, and some traceability between the transformation and the generated code: at best, it indicates which rule generates a particular code line. However, this was not enough for our developers; each rule may contain lots of code, including if statements, loops and so on. They wanted a more fine-grained traceability that would tell them exactly which transformation line generated the code. In this manner, HandyMOF complements the traceability provided by MOFScript by adding fine-grained traceability between the transformation and the generated code.
  65. The generation of traces is a two-step process. First, the transformation is modified so that each print statement includes its own line and column numbers. Then, when the transformation is executed, a trace model is generated alongside the ordinary output.
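The instrumentation step can be sketched as a text rewrite over the transformation source. The real tool does this with a Higher-Order Transformation over the MOFScript model, not with regexes, and the `print_traced` name is invented; this is only an illustration of the idea.

```python
import re

# Sketch: rewrite each print statement so that it also carries its own
# line and column, which the runtime can then record in a trace model.

def instrument(transformation_source):
    out = []
    for lineno, line in enumerate(transformation_source.splitlines(), 1):
        def repl(m):
            col = m.start() + 1  # 1-based column of the print call
            return "print_traced(%d, %d, " % (lineno, col)
        out.append(re.sub(r"print\s*\(", repl, line))
    return "\n".join(out)

instrumented = instrument("file.print('public class Map {')")
```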
  66. Coverage analysis: white-box techniques capture the mechanics of the transformation by covering every individual step. The drawback of white-box testing approaches is that they are tightly coupled to the transformation language
  67. This is the reason why, while aiming at the same goals as white-box testing (i.e., covering every step of the transformation), we opted to realize it using a mixed approach. The model test suite is generated using black-box techniques and then both input models and the generated code are traced to the transformation. The purpose is twofold: (1) if a bug is detected in the generated code, it can be traced back to the transformation line that generated it, and (2) the transformation coverage obtained by the model test suite can be calculated based on transformation lines being transited.
  68. Creation of the test suite: first, test models are generated using Pramana, a tool that implements black-box testing by automatically generating 'model suites' for metamodel coverage. Then, a trace generator links the generated code to the M2T transformation.
  69. The goal is to obtain the input models that achieve 100% coverage of the transformation code. However, to the best of our knowledge no tool exists that, given an input domain metamodel and an M2T transformation, generates the models that provide full coverage of the transformation. As a result, I opted for Pramana, a tool that implements black-box testing for metamodels. But metamodel coverage is not enough for transformation coverage. MinimalModelSuiteFinder obtains the minimal set of input models that achieves the highest coverage percentage of the transformation code. Finding a global optimum is NP-hard, so the presented greedy algorithm ensures a first solution.
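The greedy selection behind MinimalModelSuiteFinder is essentially greedy set cover over transformation lines. A minimal sketch, with hypothetical data and function names rather than the tool's actual API:

```python
# Greedy sketch of minimal-model-suite selection: repeatedly pick the
# input model whose trace covers the most not-yet-covered lines.
# Finding the true minimum is NP-hard (set cover); greedy yields a
# first, reasonably small solution.

def minimal_suite(coverage):
    """coverage: {model_name: set of transformation lines it exercises}."""
    uncovered = set().union(*coverage.values())
    suite = []
    while uncovered:
        best = max(coverage, key=lambda m: len(coverage[m] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break  # remaining lines are unreachable by any model
        suite.append(best)
        uncovered -= gained
    return suite

suite = minimal_suite({"m1": {1, 2, 3}, "m2": {3, 4}, "m3": {1, 2, 3, 4}})
```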
  70. This figure depicts the main components and flows of HandyMOF. The Project Explorer handles the folder structure. Pramana provides input models from the corresponding MM. Finally, HandyMOF consumes input models and transformations to obtain its own trace models
  71. HandyMOF, a tool for debugging MOFScript transformations. The tool graphically visualizes the relationship between the transformation and its generated code.
  72. (better demonstrated live)
  73. This contribution tackles the impact of platform evolution on model-to-text transformations. The strategy used to avoid the desynchronization of transformations when the target platform evolves is to use the Adapter pattern to adapt the generated code upon platform upgrades. This approach has been tested in a particular scenario, using databases as the platform: the adapter, using general recovery strategies, turns SQL statements based on the old schema into SQL statements based on the new schema.
  74. The suitability of the approach boils down to two main factors: the DB schema stability and the transformation coupling (i.e. the number of SQL instructions in the MOFScript code). If the DB schema stability is low (i.e. a large number of releases) and the transformation coupling is high, the cost of keeping the transformation in sync increases sharply. In this scenario, we advocate a preventive approach using an adapter that turns generated SQL statements based on the old DB schema into statements based on the new schema. - The approach has been applied in a specific case study (MediaWiki and WikiWhirl). - And last, a way has been proposed to dump the changes made to the code back into the transformation.
  75. This contribution presents a tool that helps developers debug model-to-text transformations and assess the completeness of the input model suite. The problem with black-box testing is that it does not guarantee transformation coverage. To help with this problem, the tool performs a transformation coverage analysis. If the obtained coverage is not complete, the developer can create input models that cover the missing transformation lines. Moreover, the tool allows the debugging of the transformation: if a bug is detected in the generated code, it can be traced back to the generating print statement; and each generator statement (i.e. 'print') can be traced to the generated code line.
  76. Improve traceability mechanisms: improve understandability and error correction. Synchronization techniques for co-evolution: these are already-known techniques, so the contribution is their application in this context.
  77. Lack of a repository. From the research point of view, such a repository would be priceless.
  78. - The main issue for me is that an adaptability technique has been proposed and tried on only one platform. The question that arises is whether it could be used with other platforms, for instance with API evolution, XML configuration files and so on. I am more inclined to synthetic than analytic thinking, so I start from an example and then try to abstract. - Related to the previous issue: for the producer role, a generic methodology is needed that defines the steps required to develop an adapter for any domain. - The evaluation has limitations: the number of participants is smaller than recommended, and there are few DB iterations. Regarding the number of participants (8), I could not find more people with the required knowledge.
  79. Apart from these short-term improvements, focused on overcoming the limitations of the presented approaches, I would also like to envisage some long-term considerations about transformation maintainability. - Just as patterns and good practices exist in imperative programming, after some years of many people using transformations, it is time to collect that knowledge from the experts. The most obvious way to improve maintainability in transformations is to increase their quality. - There exists a gap between the “model world” and the “text world” that is produced when executing the model-to-text transformation. In my opinion, the transformation must be “aware” of what kind of code it is generating, and not consider it as plain text.