Master thesis presentation


          Impact of design complexity
              on software quality
                                 Student: Nguyen Duc Anh
                    First supervisor: Marcus Ciolkowski, Fraunhofer IESE
                         Second supervisor: Sebastian Barney, BTH
                    General supervisor: Prof. Dr. Dr. h.c. Dieter Rombach


Feb 21, 2013                                                                1



© Fraunhofer IESE
Agenda
  Motivation
  Problem statement
  Research methodology
  Research result
  Threats to validity
  Conclusion
  Future work


Feb 21, 2013              2



© Fraunhofer IESE
Motivation




 High complexity leads to high cost and low quality



Feb 21, 2013                                          3



© Fraunhofer IESE
Problem statement
   Main research question: What is the impact of design complexity on software cost & quality?

     SQ1: Which cost & quality attributes are predicted using design complexity metrics?
     SQ2: What (kind of) design complexity metrics are most frequently used in literature?
     SQ3: Which complexity metrics are potential predictors of quality attributes?
     SQ4: Is there an overall influence of these metrics on quality attributes? If yes, what are
      the impacts of those metrics on those attributes?
     SQ5: If no, what explains the inconsistency between studies? Is this explanation consistent
      across different metrics?
Feb 21, 2013                                                              4



© Fraunhofer IESE
Research methodology


   Main research question: What is the impact of design complexity on software cost & quality?

     Search for relevant publications
     Extract information about design complexity metrics & quality attributes
     Extract numerical representation of impact relationship & context factors
     Synthesize data & interpret results


Feb 21, 2013                                                       5



© Fraunhofer IESE
Study selection result
 Search range: 1960 to 2010
Scope: Object oriented metrics




 Feb 21, 2013                    6



 © Fraunhofer IESE
Research result
SQ1 - Which quality attributes are predicted using software design metrics?
  [Chart: quality attributes investigated in design complexity studies]
    Probability of a module to be faulty
    Effort to maintain a software module
    Number of faults per LOC
    Probability of a module to be changed

    Cost (effort) is excluded due to an insufficient number of studies investigating it
 Feb 21, 2013                                                                                        7



 © Fraunhofer IESE
Research result
SQ2 - Which kinds of complexity metrics are most frequently used in the literature?
                  [Chart: number of studies per design complexity dimension]




Feb 21, 2013                                                             8



© Fraunhofer IESE
Research result
 SQ2 - Which complexity metrics are most frequently used in the literature?
                Design complexity metric: Chidamber & Kemerer (CK) metric set (*)

 Fault proneness studies:                                        Maintainability studies:
   Metric                               Type          Studies      Metric   Type          Studies
   NOC (Number Of Children)             inheritance     28         WMC      scale            9
   DIT (Depth of Inheritance Tree)      inheritance     27         RFC      coupling         8
   CBO (Coupling Between Objects)       coupling        22         DIT      inheritance      7
   LCOM (Lack of Cohesion in Methods)   cohesion        22         NOC      inheritance      6
   WMC (Weighted Method Count)          scale           22         CBO      coupling         4
   RFC (Response For a Class)           coupling        21         LCOM     cohesion         3
   ...                                  ...             12         ...      ...              3



Feb 21, 2013                                                                                       9

                        (*) S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object
                        Oriented Design," IEEE Trans. Softw. Eng., vol. 20, 1994, pp. 476-493.
© Fraunhofer IESE
Research result
SQ3 - Which complexity metrics are potential predictors of fault proneness?

  Potential prediction – Statistical correlation analysis
  Correlation coefficient
                     Spearman
                     Odds ratios (estimated from univariate logistic regression model)
  Significant correlation
  Vote counting
                     Count the number of reported significant impacts over total
                      number of studies
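
The vote-counting rule can be made concrete with a small sketch. The code below is illustrative only: the StudyResult fields and helper names are assumptions, and the CBO numbers are taken from the vote-counting table on slide 11; it is not the exact tooling used in the thesis.

    from dataclasses import dataclass

    @dataclass
    class StudyResult:
        significant: bool  # did the study report a statistically significant correlation?
        positive: bool     # was the reported correlation positive?

    def vote_count(results, threshold=0.5):
        """Return (ratio of positive significant votes, potential-predictor verdict)."""
        if not results:
            return 0.0, False
        positive_votes = sum(1 for r in results if r.significant and r.positive)
        ratio = positive_votes / len(results)
        return ratio, ratio > threshold

    # Illustrative numbers from the CBO row of the vote-counting table (slide 11):
    # 17 studies, 10 positive significant, 0 negative, 7 non-significant.
    cbo = [StudyResult(True, True)] * 10 + [StudyResult(False, True)] * 7
    ratio, is_predictor = vote_count(cbo)
    print(f"CBO: {ratio:.0%} positive votes -> potential predictor: {is_predictor}")

Running the example prints roughly 59% positive votes for CBO, which is why CBO passes the 50% threshold in the table while NOC, DIT and LCOM do not.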



Feb 21, 2013                                                                              10



© Fraunhofer IESE
Research result
SQ3 - Which complexity metrics are potential predictors of fault proneness?
(Example: Vote counting for Spearman correlation coefficient in Fault proneness studies)

 Decision rule: > 50% positive significant results → positive impact; otherwise no positive impact

 Metric        No of studies   No of +   No of -   No of non-significant   Ratio of + impact   Positive impact?
 NOC                19            6         1               12                   32%                No
 DIT                14            2         0               12                   14%                No
 CBO                17           10         0                7                   59%                Yes
 LCOM               14            6         0                8                   43%                No
 WMC                26           18         0                8                   69%                Yes
 RFC                15            9         0                6                   60%                Yes
 WMC McCabe         16           11         0                5                   69%                Yes
 SDMC                6            6         0                0                  100%                Yes
 AMC                 6            6         0                0                  100%                Yes
 NIM                 6            6         0                0                  100%                Yes
 NCM                 6            6         0                0                  100%                Yes
 NTM                 6            6         0                0                  100%                Yes

 → Except NOC, DIT and LCOM, the listed metrics are potential predictors of fault proneness!

Feb 21, 2013                                                                                                                 11



© Fraunhofer IESE
Research result
SQ3 - Which complexity metrics are potential predictors of fault proneness?

  Strength of correlation (*)



                         [Scale: Trivial | Small | Medium | Large]

  Meta analysis
                     Synthesize reported correlation coefficients
                     Assess the agreement among studies about aggregated result
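
As a rough illustration of the synthesis step, the sketch below pools reported correlation coefficients with Fisher's z transform and inverse-variance weights. This is a standard textbook fixed-effect approach and the (r, n) pairs are invented; the speaker notes indicate that a random-effects model was actually used where heterogeneity was high, which this sketch does not capture.

    import math

    def aggregate_correlations(studies, z_crit=1.96):
        """Pool (r, n) pairs via Fisher's z transform with inverse-variance weights (n - 3).

        Returns the pooled correlation and an approximate 95% confidence interval.
        """
        zs = [math.atanh(r) for r, _ in studies]      # Fisher z of each coefficient
        ws = [n - 3 for _, n in studies]              # inverse-variance weights
        z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
        se = 1.0 / math.sqrt(sum(ws))                 # standard error of the pooled z
        lo, hi = z_bar - z_crit * se, z_bar + z_crit * se
        return math.tanh(z_bar), (math.tanh(lo), math.tanh(hi))

    # Hypothetical (Spearman r, sample size) pairs for one metric:
    example = [(0.25, 120), (0.35, 80), (0.30, 200), (0.40, 60)]
    r_pooled, (ci_lo, ci_hi) = aggregate_correlations(example)
    print(f"pooled r = {r_pooled:.2f}, 95% CI = [{ci_lo:.2f}; {ci_hi:.2f}]")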




Feb 21, 2013                                                                                 12

                      (*) J. Cohen, Statistical Power Analysis for the Behavioral Sciences,
© Fraunhofer IESE          Lawrence Erlbaum, Hillsdale, New Jersey, 1988.
Research result
SQ4 - Is there an overall influence of these metrics on fault proneness?

  [Chart: 95% confidence intervals of the aggregated correlation coefficient between each
   metric and fault proneness, plotted on the scale Trivial | Small | Medium | Large]


         Scale and coupling metrics are more strongly correlated with fault proneness than
          cohesion and inheritance metrics
         LOC is the most strongly correlated with fault proneness
Feb 21, 2013                                                                     13



© Fraunhofer IESE
Research result
 SQ4 - Is there an overall influence of these metrics on fault proneness?
(Example: Meta analysis for Spearman coefficient of metric RFC in Fault proneness studies)

                                           [Forest plot of RFC]

          Aggregated results
          Global Spearman coefficient    0.31
          95% Confidence Interval        [0.22; 0.40]
          P-value                        0.000




 Feb 21, 2013                                                                                                            14



 © Fraunhofer IESE
Research result
 SQ4 - Is there an overall influence of these metrics on fault proneness?
(Example: Meta analysis for Spearman coefficient of metric RFC in Fault proneness studies)


         Is this result consistent across studies?
                 → I² test for heterogeneity!

                                                  Metric    I²
                                                  CBO       95%
                                                  DIT       83%
                                                  NOC       75%
                                                  LCOM      74%
                                                  RFC       78%
                                                  WMC       93%
                                                  LOC       84%

                          RFC: I² = 78%  → high heterogeneity
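
For reference, I² can be derived from Cochran's Q. The sketch below computes both on the Fisher-z scale; the function name and the RFC-like study values are invented for illustration and are not the thesis data.

    import math

    def heterogeneity(studies):
        """Cochran's Q and I^2 (in %) for (correlation, sample size) pairs,
        computed on the Fisher-z scale with weights n - 3."""
        zs = [math.atanh(r) for r, _ in studies]
        ws = [n - 3 for _, n in studies]
        z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
        q = sum(w * (z - z_bar) ** 2 for w, z in zip(ws, zs))  # Cochran's Q
        df = len(studies) - 1
        i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
        return q, i2

    # Invented studies with widely varying coefficients -> I^2 above the 70% threshold
    rfc_like = [(0.15, 150), (0.45, 90), (0.30, 300), (0.55, 70), (0.20, 120)]
    q, i2 = heterogeneity(rfc_like)
    print(f"Q = {q:.1f}, I^2 = {i2:.0f}%")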




Feb 21, 2013                                                                                                             15



© Fraunhofer IESE
Research result
SQ4* - How many cases are enough to draw a statistically significant conclusion?
   (Example: Power analysis for Spearman coefficient of metric RFC in Fault proneness studies)




    α value                     0.1
    Tails                       2
    Expected effect size        0.31
    Expected power              80%




                                    Number of cases needed: 60 cases !
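
The required number of cases can be approximated with the Fisher-z formula for correlation tests. With the parameters above (α = 0.1, two tails, expected coefficient 0.31, power 80%) this approximation lands in the low sixties, in line with the roughly 60 cases reported on the slide; the exact figure depends on the tool and test variant (for example G*Power) that was actually used, and the function below is only an illustrative sketch.

    import math
    from statistics import NormalDist

    def n_for_correlation(r, alpha=0.10, power=0.80, tails=2):
        """Approximate sample size to detect correlation r, via the Fisher-z
        approximation: n = ((z_alpha + z_power) / atanh(r))^2 + 3."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / tails)
        z_power = NormalDist().inv_cdf(power)
        return math.ceil(((z_alpha + z_power) / math.atanh(r)) ** 2 + 3)

    # Parameters from the slide: alpha = 0.1, two tails, expected r = 0.31, power = 80%
    print(n_for_correlation(0.31))  # roughly 60-65 cases with this approximation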
   Feb 21, 2013                                                                                                              16



   © Fraunhofer IESE
Research result
 SQ5: What explains the inconsistency between studies? Is this
 explanation consistent across different metrics?
  Moderator variable
           Programming Language: C++ & Java
           Project type: Open source, Closed source academic & Closed source
            industry
           Defect collection phase: Pre release defects & Post release defects
           Business domain: Embedded system & Information system
           Dataset size: Small, Medium & Large
  Are the correlations different across each moderator variable?


Feb 21, 2013                                                                      17



© Fraunhofer IESE
Research result
 SQ5: What explains the inconsistency between studies? Is this
 explanation consistent across different metrics?

            Percentage of variance explained by each moderator variable:

            Metric   Programming   Project type   Defect col.   Business   Dataset size
                     language                     phase         domain
            CBO           6%            4%            83%           4%          8%
            DIT           3%            0%            20%           0%          1%
            NOC          34%           24%            15%          22%         14%
            LCOM          1%            0%            60%           0%          6%
            RFC           5%            3%            78%           3%          2%
            WMC          32%            4%            60%           4%          3%
            LOC           7%            2%            51%          15%          0%

  Remaining inconsistency is still excessive
  No consistent explanation for heterogeneity across metrics !
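
The speaker notes describe the variance explained (ve) as the ratio between within-subgroup heterogeneity and whole-population heterogeneity. The sketch below follows that idea by comparing pooled within-subgroup Q against the overall Q; the subgroup split and all numbers are hypothetical and only illustrate the mechanics.

    import math

    def fisher_z_weights(studies):
        """Fisher-z values and n-3 weights for (correlation, sample size) pairs."""
        return [math.atanh(r) for r, _ in studies], [n - 3 for _, n in studies]

    def cochran_q(studies):
        """Cochran's Q statistic for a set of studies."""
        zs, ws = fisher_z_weights(studies)
        z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
        return sum(w * (z - z_bar) ** 2 for w, z in zip(ws, zs))

    def variance_explained(subgroups):
        """Share of total heterogeneity removed by splitting the studies into subgroups."""
        all_studies = [s for group in subgroups.values() for s in group]
        q_total = cochran_q(all_studies)
        q_within = sum(cochran_q(g) for g in subgroups.values() if len(g) > 1)
        return 0.0 if q_total == 0 else 100.0 * (1 - q_within / q_total)

    # Hypothetical split of studies for one metric by defect collection phase:
    groups = {
        "pre-release":  [(0.45, 100), (0.50, 80), (0.48, 150)],
        "post-release": [(0.20, 120), (0.25, 90), (0.18, 200)],
    }
    print(f"variance explained by the moderator: {variance_explained(groups):.0f}%")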

Feb 21, 2013                                                                         18



© Fraunhofer IESE
Comparison of results with perception in literature
 Vote counting & meta analysis results compared with common claims in the literature


              Common claims in literature                                       In Lit.   Ours
 The more classes a given class is coupled to, the more likely
 that class is faulty                                                             Yes      Yes
 The more methods that can potentially be executed in response to a
 message received by an object of a given class, the more likely that
 class is faulty                                                                  Yes      Yes
 The deeper the inheritance tree for a given class is, the more likely
 that class is faulty                                                             Yes      No
 The more immediate sub-classes a given class has, the more likely
 that class is faulty                                                             No       No
 The less similar the methods within a given class are, the more
 likely that class is faulty                                                      Yes      No
 The more local methods a given class has, the more likely that class
 is faulty                                                                        Yes      Yes
 The larger a given class is, the more likely that class is faulty                Yes      Yes
 Do the effects of CK metrics differ across different programming
 languages?                                                                       Yes      No
 Do the effects of CK metrics differ

Feb 21, 2013                                                                          19

© Fraunhofer IESE
Limitation
  Internal validity
                     Selection of publications
                     Quality of selected studies.
  External validity
                     Limitation to models with single complexity metric
                     Limitation to object oriented systems
  Conclusion validity
                     Lack of comparable studies
                     Lack of reported context information



Feb 21, 2013                                                               20



© Fraunhofer IESE
Conclusion
 SQ1: Most common predicted attributes:
                     Fault proneness & Maintainability
 SQ2: Most common design complexity dimension & metric:
                     Coupling: CBO, RFC
                     Scale: WMC
                     Inheritance: DIT, NOC
                     Cohesion: LCOM
 SQ3,4: Overall impact of design complexity on software quality:
                     Moderate impact of WMC, CBO, RFC on fault proneness
                      LOC shows the strongest impact on fault proneness!
 SQ5: What explains the inconsistency between studies?
                      Not able to fully explain the inconsistency
                     Defect collection phase explains part of the inconsistency
                                                                                   21



© Fraunhofer IESE
Interpretation
 Look for quality predictors in source code: LOC
 Look for quality predictors in design: CBO, RFC and WMC
 Build different prediction models for pre-release and post-release
  defects
 Need context information to increase predictive performance
 Adapt the design metrics to any software system




Feb 21, 2013                                               22



© Fraunhofer IESE
Future work

           Quality benchmarking              Construction of a generic
                                                prediction model
     System A           System B

    CBO       XXX       CBO    XXX


    RFC       XXX   ?   RFC    XXX


    WMC       XXX       WMC    XXX


    LCOM      XXX       LCOM   XXX


    DIT       XXX       DIT    XXX




Feb 21, 2013                                                     23



© Fraunhofer IESE
Q&A




Feb 21, 2013        24



© Fraunhofer IESE


Speaker notes

  1. Today I would like to present my master thesis on the topic "The impact of design complexity on software cost and quality". The thesis was performed under the direct supervision of Marcus Ciolkowski and the general supervision of Professor Dieter Rombach.
  2. Here is the agenda for the presentation. First, I will present the motivation for the research topic, including its importance for software practice and the research community. Then, the research problem is formally stated as research questions. In the research methodology, I will present the approaches used to answer these questions. The next two parts are the research results and their interpretation. Finally, I would like to discuss some significant threats to our research validity and future work.
  3. It is a common hypothesis that structural features of a design such as coupling, cohesion and inheritance have an impact on external quality attributes. The reasoning is that a complex design structure can take a developer or tester more effort to understand, implement and maintain. Therefore it could lead to undesired software quality, such as increased fault proneness or reduced maintainability. Though many studies investigate the relationship between design complexity and cost and quality, it is unclear what we have learned from these studies, because no systematic synthesis exists to date!
  4. This master thesis addresses the main research question: What is the impact of design complexity on software quality? This question (RQ) is divided into five sub-questions (SQ). In particular, we would like to know:
  5. We use four research methods to answer these five sub-questions, as shown in the diagram. The literature review is used to get a quick impression of which types of cost and quality attributes are investigated. Then a systematic literature review is performed with a focus on the most common quality attributes in the literature. The data extracted from the systematic literature review is used as input for the synthesis methods. Two available quantitative synthesis methods are vote counting and meta analysis. Vote counting is selected to answer sub-question 3: a design metric is a potential predictor of software quality if a major portion of the studies that investigate their relationship vote for it. Meta analysis is used to synthesize and quantify the global impact of a design metric on an external quality attribute, which answers SQ4. The meta analysis procedure also includes an explanation for the disagreement between studies, which answers SQ5.
  6. This slide presents the result of the study search and selection process. After searching three electronic databases, namely Scopus, IEEE Xplore and the ACM Digital Library, we found 39 papers. After that, the reference scan and the search for grey literature gave us 18 more papers. In total, the systematic search results in 57 primary studies. These two pictures show the distribution of primary studies over publication year and publication channel. They reveal that the number of papers on the topic has been increasing over the last 5 years. Besides, the selected papers mainly come from high-quality sources, like book chapters, international journals or conferences.
  7. From this slide on, I present the results for the research questions. The diagram shows the cost and quality attributes that are investigated in design complexity studies. The external quality attributes fall into three categories: reliability attributes such as fault proneness, fault density and vulnerability; maintainability and sub-categories like testability and changeability; and development effort such as implementation cost, debugging effort and refactoring effort. The main portion of studies focuses on fault proneness, with 45% of the total studies, and maintainability, with 25% of the studies. Fault proneness is the probability of a class being faulty. Maintainability involves the effort necessary to maintain a class. Since only these two attributes are investigated in a sufficient number of studies, fault proneness and maintainability are considered for SQ3, 4 and 5.
  8. This slide presents the result for SQ2. The most frequently proposed and used design metrics focus on the coupling, cohesion, inheritance, scale and polymorphism aspects. The largest group is coupling metrics, followed by scale, inheritance and cohesion. Interestingly, this order is the same for both fault proneness and maintainability studies. In terms of design metrics, the CK metric set is the most commonly used. Here I explain the definitions of those metrics. NOC is the number of children, … DIT is nu
  9. In this slide, we recall some basic concepts related to the topic. How do we measure the impact? How do we know whether the impact is strong or weak? How do we know the impact did not happen by chance? The impact between a design complexity metric and cost and quality is quantified by statistical correlation. Correlation analysis investigates the extent to which changes in the value of one variable (such as the value of a complexity metric in a class) are associated with changes in another variable (such as the number of defects in a class). The intensity of the correlation is called the effect size. There are three common effect sizes used in correlational studies: Spearman, Pearson and odds ratios. For the purpose of demonstration, in the coming slides we consider the impact in terms of the Spearman correlation coefficient. The impact can be positive or negative. A positive impact means the increase in value of one variable will lead to an increase in value of the other variable. A negative impact means …. The absolute value of the Spearman coefficient ranges from 0 to 1. Cohen classified coefficients smaller than 0.1 as trivial, and larger values as small, medium or large. To know whether the impact happened by chance, we use a statistical index called the p-value. A p-value of 0.05, or a significance level of 5%, means there is only a 5% chance that the measured impact happened by chance. It is noted that correlation does not imply causation due to confounding factors. However, it is still an effective method to select candidate variables for cause-effect relationships.
  10. To find out whether a design metric is a potential predictor of external attributes, we test each design metric with the following hypothesis: H0: There is no positive impact of metric X on quality attribute Y. Vote counting says that H0 is rejected if the ratio of the number of reported positive significant effect sizes to the total number of reported effect sizes is larger than 0.5. The table shows the result of the hypothesis test for some metrics in fault proneness studies. The procedure is performed similarly for the hypothesis of a negative impact.
  11. In this slide, we recall some basic concepts related to the topic. How do we measure the impact? How do we know whether the impact is strong or weak? How do we know the impact did not happen by chance? The impact between a design complexity metric and cost and quality is quantified by statistical correlation. Correlation analysis investigates the extent to which changes in the value of one variable (such as the value of a complexity metric in a class) are associated with changes in another variable (such as the number of defects in a class). The intensity of the correlation is called the effect size. There are three common effect sizes used in correlational studies: Spearman, Pearson and odds ratios. For the purpose of demonstration, in the coming slides we consider the impact in terms of the Spearman correlation coefficient. The impact can be positive or negative. A positive impact means the increase in value of one variable will lead to an increase in value of the other variable. A negative impact means …. The absolute value of the Spearman coefficient ranges from 0 to 1. Cohen classified coefficients smaller than 0.1 as trivial, and larger values as small, medium or large. To know whether the impact happened by chance, we use a statistical index called the p-value. A p-value of 0.05, or a significance level of 5%, means there is only a 5% chance that the measured impact happened by chance. It is noted that correlation does not imply causation due to confounding factors. However, it is still an effective method to select candidate variables for cause-effect relationships.
  12. The appearance of high heterogeneity indicates that the effect sizes come from a heterogeneous population. In other words, there may exist subgroups within the population whose true effects differ. In this case the aggregation should take the between-subgroup variation into account as well. The calculation method for this is called a random-effects model. The table shows the results of aggregating the Spearman coefficient for 6 design metrics and LOC. We found a high level of heterogeneity for all of these metrics and therefore use a random-effects model in all cases. The diagram shows the comparison of the 95% confidence intervals of the effect sizes among the 7 metrics.
  13. The significance level can tell us whether a metric is theoretically correlated with an external quality attribute. But in order to be practically meaningful, the strength of the impact should be large enough. Meta analysis is applied here to quantify and synthesize the Spearman coefficients reported in different studies. The example of the global Spearman coefficient estimation for RFC in fault proneness studies is shown in the diagram. Each reported Spearman coefficient is weighted by the dataset size. The rectangle represents the weight of the effect size and its position on the axis is its magnitude. The line is … And the diamond is the aggregated effect size. We can see that all reported Spearman coefficients are larger than 0, which indicates a positive impact. I squared is an index representing the heterogeneity among reported effect sizes. An I squared larger than 70% means a high heterogeneity level.
  14. In the previous questions, we found high heterogeneity in the populations of all investigated metrics, and we want to find an explanation for this. One available approach is subgroup analysis. That is, we attempt to find a moderator variable that is able to account for a significant part of the observed variation. The heterogeneity test is performed for each subgroup. The ratio between within-subgroup heterogeneity and whole-population heterogeneity is ve, the percentage of variance explained by the moderator variable. We calculate the ve value for each suspected moderator variable and for each design metric. The moderator variables here are the characteristics of the datasets that we extracted before. The results show that the defect collection phase can explain more than 50% of the observed variance for 5 out of 7 investigated metrics. Domain can explain 76% of the variance in the case of NOC. In some cases, for example RFC and WMC, the defect collection phase can separate the 95% confidence intervals of pre-release and post-release defects. The correlations between the metrics and pre-release defects are stronger than with post-release defects. The number of post-release defects is likely lower than the number of pre-release defects due to the testing process. Therefore, a faulty class is less likely to be correlated with design complexity due to its smaller probability of being detected.
  15. In this slide, we show the comparison between our results and the perceptions in the literature. The results from vote counting and meta analysis statistically confirm the common claims about the relationship between design metrics and software fault proneness. In general, our results agree with the intuitive perception of the CK metrics except for DIT and LCOM. It is surprising to us that the programming language cannot explain the difference in the effect of CK metrics on fault proneness.
  16. Threats to validity could come from the systematic review and meta analysis procedures. Bias in study selection is one threat to validity due to the single reviewer. The varying quality of the selected studies is a trade-off against the desire to include all reported effect sizes. The limitation of research designs to observational and historical methods is a shortcoming of the research area. Conclusion validity concerns include the lack of information reported in studies, such as raw data for univariate logistic regression and moderator variables. This suggests improving the reporting of such information for the purpose of aggregation.
  17. This slide summarizes the results of the research.
  18. This slide summarizes the results of the research.
  19. Compare before and after rework; influence of the context setting.