Application Profiling & Analysis

                 Rishi Pathak
National PARAM Supercomputing Facility, C-DAC
               riship@cdac.in
What is application profiling
• Profiling
   – Recording of summary information during execution
      • inclusive, exclusive time, # calls, hardware counter statistics, …
   – Reflects performance behavior of program entities
      • functions, loops, basic blocks
      • user-defined “semantic” entities
   – Very good for low-cost performance assessment
   – Helps to expose performance bottlenecks and hotspots
   – Implemented through either
      • sampling: periodic OS interrupts or hardware counter traps
      • measurement: direct insertion of measurement code
Sampling vs. Instrumentation
• Overhead
   – Sampling: typically about 1%
   – Instrumentation: high, may be 500%!
• System-wide profiling
   – Sampling: yes, profiles all applications, drivers, and OS functions
   – Instrumentation: just the application and instrumented DLLs
• Detection of unexpected events
   – Sampling: yes, can detect other programs using OS resources
   – Instrumentation: no
• Setup
   – Sampling: none
   – Instrumentation: automatic insertion of data-collection stubs required
• Data collected
   – Sampling: counters, processor and OS state
   – Instrumentation: call graph, call times, critical path
• Data granularity
   – Sampling: assembly-level instructions, with source line
   – Instrumentation: functions, sometimes statements
• Detection of algorithmic issues
   – Sampling: no, limited to processes and threads
   – Instrumentation: yes – can see the algorithm; call-path collection is expensive
Inclusive v/s Exclusive Profiling
• Inclusive time for a routine counts the time spent in the routine and in all routines it calls; exclusive time counts only the time spent in the routine itself

    int main( )
    {                /* takes 100 secs */
      f1();          /* takes 20 secs  */
      /* other work */
      f2();          /* takes 50 secs  */
      f1();          /* takes 20 secs  */
      /* other work */
    }

    /* similar for other metrics, such as
       hardware performance counters, etc. */

    • Inclusive time for main
         – 100 secs
    • Exclusive time for main
         – 100 - 20 - 50 - 20 = 10 secs
         (a small computation sketch follows this slide)
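As a hedged sketch, the exclusive time above can be derived mechanically from the inclusive times; the numbers are taken from the example, nothing else is assumed.

/* Exclusive time = inclusive time minus the inclusive time of all callees. */
#include <stdio.h>

int main(void)
{
    double incl_main = 100.0;                        /* seconds, from the example  */
    double incl_children[] = { 20.0, 50.0, 20.0 };   /* f1, f2, f1                 */

    double excl_main = incl_main;
    for (int i = 0; i < 3; ++i)
        excl_main -= incl_children[i];               /* subtract each callee       */

    printf("exclusive time for main = %.0f secs\n", excl_main);  /* prints 10 secs */
    return 0;
}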
What are Application Traces
• Tracing
  – Recording of information about significant points (events) during program
    execution
      • entering/exiting code region (function, loop, block, …)
      • thread/process interactions (e.g., send/receive message)
  – Save information in an event record (a minimal sketch follows this list)
      • timestamp
      • CPU identifier, thread identifier
      • Event type and event-specific information
  – Event trace is a time-sequenced stream of event records
  – Can be used to reconstruct dynamic program behavior
  – Typically requires code instrumentation
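As a hedged illustration, an event record for such a trace might be laid out as below; the field names and types are hypothetical and not taken from any particular tracing tool.

#include <stdint.h>

typedef enum { EV_ENTER, EV_EXIT, EV_SEND, EV_RECV } event_type_t;  /* event kinds */

typedef struct {
    uint64_t     timestamp;   /* when the event occurred                            */
    uint32_t     cpu_id;      /* CPU identifier                                     */
    uint32_t     thread_id;   /* thread/process identifier                          */
    event_type_t type;        /* enter/exit code region, send/receive message, ...  */
    uint64_t     info;        /* event-specific data: region id, message size, ...  */
} event_record_t;             /* a trace is a time-ordered stream of these records  */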
Profiling v/s Tracing

• Profiling
   – Summary statistics of performance metrics
       •   Number of times a routine was invoked
       •   Exclusive, inclusive time/hpm counts spent executing it
       •   Number of instrumented child routines invoked, etc.
       •   Structure of invocations (call-trees/call-graphs)
       •   Memory, message communication sizes

• Tracing
   –   When and where events took place along a global timeline
       •   Time-stamped log of events
       •   Message communication events (sends/receives) are tracked
       •   Shows when and from/to where messages were sent
       •   Large volume of performance data generated usually leads to more
           perturbation in the program
The Big Picture
• Sampling and instrumentation are the two ways measurements are collected
• Both feed into profiling
• Profiling results then drive analysis and, in turn, optimization
Instrumentation & Sampling
Measurements - Instrumentation
Instrumentation - Adding measurement probes
to the code to observe its execution
  – Can be done on several levels
  – Different techniques for different levels
  – Different overheads and levels of accuracy with each technique
  – No instrumentation: run in a simulator. E.g., Valgrind
Measurements - Instrumentation
• Source code instrumentation
  – User-added time measurement, etc. (e.g., printf(), gettimeofday())
  – Many tools expose mechanisms for source code instrumentation in addition to the automatic instrumentation facilities they offer
  – Instrument program phases (a minimal timing sketch follows this slide):
     • initialization / main iteration loop / data post-processing
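A minimal hand-written timing probe around a program phase might look like the sketch below; the phase function is a hypothetical stand-in for real application code.

#include <stdio.h>
#include <sys/time.h>

/* hypothetical program phase; stands in for real application code */
static void main_iteration_loop(void)
{
    volatile double x = 0.0;
    for (long i = 0; i < 100000000L; ++i) x += 1.0;
}

int main(void)
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);                    /* probe: phase start */
    main_iteration_loop();
    gettimeofday(&t1, NULL);                    /* probe: phase end   */

    double elapsed = (double)(t1.tv_sec - t0.tv_sec)
                   + (double)(t1.tv_usec - t0.tv_usec) / 1e6;
    printf("main iteration loop: %.3f s\n", elapsed);
    return 0;
}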
Measurements - Instrumentation
• Preprocessor Instrumentation
  – Example: instrumenting OpenMP constructs with Opari
  – Preprocessor operation: original source code → Opari preprocessor → modified (instrumented) source code, with instrumentation calls added around the original code in parallel regions
  – This is used for OpenMP analysis in tools such as KOJAK/Scalasca/ompP
Measurements - Instrumentation
• Compiler Instrumentation
  – Many compilers can instrument functions automatically
  – GNU compiler flag: -finstrument-functions
  – Automatically calls functions on function entry/exit that a tool can capture
  – Not standardized across compilers, often undocumented flags, sometimes not available at all
Measurements - Instrumentation
– GNU compiler example (hooks called by code compiled with -finstrument-functions):

void __cyg_profile_func_enter(void *this_fn, void *call_site)
{
       /* called on function entry */
}

void __cyg_profile_func_exit(void *this_fn, void *call_site)
{
       /* called just before returning from a function */
}
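A fuller, self-contained sketch of how these hooks can be used follows; addresses are printed raw, and mapping them back to function names (e.g. with addr2line) is omitted.

/* Build (GCC/Clang):  cc -finstrument-functions tracer_demo.c -o tracer_demo
   The hooks themselves must not be instrumented, hence the attribute.        */
#include <stdio.h>

#define NO_INSTRUMENT __attribute__((no_instrument_function))

NO_INSTRUMENT
void __cyg_profile_func_enter(void *this_fn, void *call_site)
{
    fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
}

NO_INSTRUMENT
void __cyg_profile_func_exit(void *this_fn, void *call_site)
{
    fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
}

static int work(int n) { return n * n; }   /* instrumented automatically */

int main(void)
{
    return work(3) - 9;                    /* prints enter/exit for work() and main() */
}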
Measurements - Instrumentation
• Library Instrumentation: MPI library interposition
     – All MPI functions are available under two names, MPI_xxx and PMPI_xxx; the MPI_xxx symbols are weak and can be overridden by an interposition library
     – Measurement code in the interposition library records begin, end, transmitted data, etc., and then calls the corresponding PMPI routine (a sketch of such a wrapper follows)
     – Not all MPI functions need to be instrumented
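A hedged sketch of such an interposition wrapper for MPI_Send (MPI-3 prototype); the bookkeeping shown is illustrative only, not any particular tool's implementation.

#include <mpi.h>

static long long send_calls = 0;
static long long bytes_sent = 0;

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int type_size = 0;
    PMPI_Type_size(datatype, &type_size);        /* measure transmitted data   */
    send_calls++;
    bytes_sent += (long long)count * type_size;

    return PMPI_Send(buf, count, datatype, dest, tag, comm);  /* do the real send */
}

/* A real tool would also wrap MPI_Finalize (calling PMPI_Finalize) to report
   send_calls and bytes_sent at the end of the run. */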
Measurements - Instrumentation
• Binary Instrumentation
  – Static binary instrumentation
      • LD_PRELOAD (Linux) – a minimal interposition sketch follows this slide
      • DLL injection (MS Windows)
  – Debuggers
      • Breakpoints (software & hardware)
  – Dynamic binary instrumentation
      • Injection of instrumentation code into a running process
      • Tools: Pin (Intel), Valgrind
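As a hedged illustration of the LD_PRELOAD mechanism, a tiny interposer that counts fopen() calls could look like this; the library name and build line are illustrative.

/* Build:   cc -shared -fPIC -o libcountopen.so countopen.c -ldl
   Use:     LD_PRELOAD=./libcountopen.so ./your_app                 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

static FILE *(*real_fopen)(const char *, const char *) = NULL;
static unsigned long fopen_calls = 0;

FILE *fopen(const char *path, const char *mode)
{
    if (!real_fopen)                              /* resolve the real fopen once */
        real_fopen = (FILE *(*)(const char *, const char *))
                     dlsym(RTLD_NEXT, "fopen");
    fopen_calls++;                                /* measurement: count the call */
    return real_fopen(path, mode);                /* forward to the real library */
}

__attribute__((destructor))
static void report(void)                          /* runs at program exit */
{
    fprintf(stderr, "fopen() was called %lu times\n", fopen_calls);
}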
Measurements - Sampling
Using event triggers
  – Recurring triggers – the program counter is sampled many times
  – Samples are aggregated into a histogram of program contexts (calling context tree, CCT)
  – A sufficiently large number of samples is required
  – Event triggers should be uniform with respect to execution time (a minimal timer-based sketch follows this slide)
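A hedged sketch of timer-based sampling with an OS interval timer: a real profiler would inspect the interrupted program counter and call stack inside the handler, while this sketch only counts the samples.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t samples = 0;

static void on_sigprof(int sig)
{
    (void)sig;
    samples++;                         /* one sample per timer expiry */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigprof;
    sigaction(SIGPROF, &sa, NULL);

    /* fire SIGPROF every 10 ms of CPU time consumed by this process */
    struct itimerval it = { { 0, 10000 }, { 0, 10000 } };
    setitimer(ITIMER_PROF, &it, NULL);

    volatile double x = 0.0;
    for (long i = 0; i < 300000000L; ++i)   /* the "application" being profiled */
        x += 1.0;

    printf("collected %d samples\n", (int)samples);
    return 0;
}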
Measurements - Sampling
Event trigger types
• Synchronous
  – Initiated by direct program action
  – E.g. memory allocation, I/O, and inter-process communication (including MPI communication)
• Asynchronous
  – Not initiated by direct program action
  – OS timer interrupt or
  – Hardware performance counter events
  – E.g. CPU clock cycles, floating point instructions, etc.
Profiling Tools


• Gprof
• Intel VTune
• Scalasca
• HPC Tool Kit
HPC Tool Kit
• Name: HPCToolkit
• Developer: Rice University
• Website:
  – http://hpctoolkit.org




HPCToolkit Overview
•   Consists of
     – hpcviewer
          • Sorts by any collected metric, from any of the processes displayed
          • Displays samples at various levels in the call hierarchy through “flattening”
          • Allows the user to focus on interesting sections of the program through “zooming”
     – hpcprof and hpcprof-mpi
          • Correlate dynamic profiles with static source code structure
     – hpcrun
          • Application profiling using statistical sampling
          • hpcrun-flat – for collection of a `flat' profile
     – hpcstruct
          • Recovers static program structure such as procedures and loop nests
     – hpctraceviewer
          • Presents a timeline view of call-path traces (see the hpctraceviewer slide below)
     – hpclink
          • For statically-linked executables (e.g. for Cray XT or BG/P)




Available Metrics in HPCToolkit
•   Metrics, obtained by sampling/profiling
     – PAPI Hardware counters
     – OS program counters
•   Wallclock time (WALLCLK)
     – However, can’t get PAPI metrics and Wallclock time in a single run
•   Derived metrics
     – Combination of existing metrics created by specifying a mathematical formula in an XML
       configuration file.
•   Source Code Correlation
     – Metrics reflect exclusive time spent in function based on counter overflow events
     – Metrics correlated at the source line level and the loop level
     – Metrics are related back to source code loops (even if code has been significantly altered
       by optimization) (“bloop”)




HPCToolkit workflow (measure with hpcrun, recover program structure with hpcstruct, correlate measurements with source via hpcprof, explore with hpcviewer/hpctraceviewer)
hpcviewer Views

• Calling context view
   – top-down view shows dynamic calling contexts in which costs were
     incurred

• Caller’s view
   – bottom-up view apportions costs incurred in a routine to the
     routine’s dynamic calling contexts

• Flat view
   – aggregates all costs incurred by a routine in any context and shows
     the details of where they were incurred within the routine
hpcviewer User Interface
hpctraceviewer Views

• Trace view (left, top)
   – Time on the horizontal axis
   – Process (or thread) rank on the vertical axis
• Depth view (left, bottom) & Summary view
   – Call-path/time view for the process rank selected
• Call view (right, top)
   – Current call path depth that defines the hierarchical slice
     shown in the Trace View
   – Actual call path for the point selected by the Trace View's
     crosshair
Hpctraceviewer user interface - DEMO
Call Path Profiling: Costs in Context

 Event-based sampling method for performance measurement
• When a profile event occurs, e.g. a timer expires
   – determine context in which cost is incurred
        • unwind call stack to determine set of active procedure frames
   – attribute cost of sample to PC in calling context
• Benefits
   –   monitor unmodified fully optimized code
   –   language independent – C/C++, Fortran, assembly code, …
   –   accurate
   –   low overhead (1K samples per second has ~ 3-5% overhead)
Demo for:
• hpcrun (listing events & profiling)
• hpcstruct
• hpcprof
(a sample command sequence is sketched below)
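A hedged sketch of the command sequence such a demo typically follows; the application name, event choices, source path, and measurement/database directory names below are illustrative, not prescriptive.

$ hpcrun -L ./myapp                                # list events available for sampling
$ hpcrun -e PAPI_TOT_CYC -e PAPI_FP_INS ./myapp    # profile the run via sampling
$ hpcstruct ./myapp                                # recover procedures and loop nests from the binary
$ hpcprof -S myapp.hpcstruct -I ./src/+ hpctoolkit-myapp-measurements
$ hpcviewer hpctoolkit-myapp-database              # browse the correlated profile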
PAPI
• Performance Application Programming
  Interface
   – The purpose of the PAPI project is to design, standardize
      and implement a portable and efficient API to access the
      hardware performance monitor counters found on most
      modern microprocessors.
• Parallel Tools Consortium project started in 1998
• Developed by University of Tennessee, Knoxville
• http://icl.cs.utk.edu/papi/
PAPI - Support
• Unix/Linux
  – Perfctr kernel patch for kernel < 2.6.30
  – Perf package for kernel >= 2.6.30
PAPI - Implementation (layered design)
• 3rd-party and GUI tools sit on top of the PAPI APIs
• Portable layer: PAPI High Level API and PAPI Low Level API
• Machine-specific layer: PAPI Machine Dependent Substrate, which uses a kernel extension and the operating system to reach the hardware performance counters
PAPI - Hardware Events
• Preset events (platform neutral)
    – Standard set of over 100 events for application performance tuning
    – No standardization of the exact definition
    – Mapped to either single native events or linear combinations of native events on each platform
    – Use the papi_avail utility to see what preset events are available on a given platform
    – Example: PAPI_TOT_INS
• Native events (platform dependent)
    – Any event countable by the CPU
    – Same interface as for preset events
    – Use the papi_native_avail utility to see all available native events
    – Example: L3_MISSES
• Use the papi_event_chooser utility to select a compatible set of events (a minimal counting sketch follows this slide)
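A minimal sketch of counting preset events with the PAPI low-level API; error checking is trimmed, and event availability depends on the platform (as papi_avail shows).

/* Build:  cc papi_demo.c -lpapi */
#include <papi.h>
#include <stdio.h>

int main(void)
{
    int eventset = PAPI_NULL;
    long long values[2];

    PAPI_library_init(PAPI_VER_CURRENT);        /* initialize the library      */
    PAPI_create_eventset(&eventset);
    PAPI_add_event(eventset, PAPI_TOT_INS);     /* preset: total instructions  */
    PAPI_add_event(eventset, PAPI_TOT_CYC);     /* preset: total cycles        */

    PAPI_start(eventset);
    volatile double x = 0.0;                    /* region of interest          */
    for (long i = 0; i < 10000000L; ++i) x += 1.0;
    PAPI_stop(eventset, values);

    printf("instructions = %lld  cycles = %lld  IPC = %.2f\n",
           values[0], values[1], (double)values[0] / (double)values[1]);
    return 0;
}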
PAPI events demo
               papi_avail
           papi_native_avail
        papi_event_chooser
Availability for QPI h/w performance
      counters using /sbin/lspci
PAPI-Derived Metrics
• Instructions
   – Graduated instructions per cycle: PAPI_TOT_INS / PAPI_TOT_CYC
   – Issued instructions per cycle: PAPI_TOT_IIS / PAPI_TOT_CYC
   – Graduated floating point instructions per cycle: PAPI_FP_INS / PAPI_TOT_CYC
   – Percentage floating point instructions: PAPI_FP_INS / PAPI_TOT_INS
   – Ratio of graduated instructions to issued instructions: PAPI_TOT_INS / PAPI_TOT_IIS
   – Percentage of cycles with no instruction issue: 100.0 * (PAPI_STL_ICY / PAPI_TOT_CYC)
   – Data references per instruction: PAPI_L1_DCA / PAPI_TOT_INS
   – Ratio of floating point instructions to L1 data cache accesses: PAPI_FP_INS / PAPI_L1_DCA
   – Ratio of floating point instructions to L2 cache accesses (data): PAPI_FP_INS / PAPI_L2_DCA
   – Issued instructions per L1 instruction cache miss: PAPI_TOT_IIS / PAPI_L1_ICM
   – Graduated instructions per L1 instruction cache miss: PAPI_TOT_INS / PAPI_L1_ICM
   – L1 instruction cache miss ratio: PAPI_L2_ICR / PAPI_L1_ICR
PAPI-Derived Metrics
• Cache & Memory Hierarchy
   – Graduated loads & stores per cycle: PAPI_LST_INS / PAPI_TOT_CYC
   – Graduated loads & stores per floating point instruction: PAPI_LST_INS / PAPI_FP_INS
   – L1 cache line reuse (data): (PAPI_LST_INS - PAPI_L1_DCM) / PAPI_L1_DCM
   – L1 cache data hit rate: 1.0 - (PAPI_L1_DCM / PAPI_LST_INS)
   – L1 data cache read miss ratio: PAPI_L1_DCM / PAPI_L1_DCA
   – L2 cache line reuse (data): (PAPI_L1_DCM - PAPI_L2_DCM) / PAPI_L2_DCM
   – L2 cache data hit rate: 1.0 - (PAPI_L2_DCM / PAPI_L1_DCM)
   – L2 cache miss ratio: PAPI_L2_TCM / PAPI_L2_TCA
   – L3 cache line reuse (data): (PAPI_L2_DCM - PAPI_L3_DCM) / PAPI_L3_DCM
   – L3 cache data hit rate: 1.0 - (PAPI_L3_DCM / PAPI_L2_DCM)
   – L3 data cache miss ratio: PAPI_L3_DCM / PAPI_L3_DCA
   – L3 cache data read ratio: PAPI_L3_DCR / PAPI_L3_DCA
   – L3 cache instruction miss ratio: PAPI_L3_ICM / PAPI_L3_ICR
   – Bandwidth used (Lx cache): ((PAPI_Lx_TCM * Lx_linesize) / PAPI_TOT_CYC) * Clock(MHz)
PAPI-Derived Metrics
• Branching
   – Ratio of mispredicted to correctly predicted branches: PAPI_BR_MSP / PAPI_BR_PRC
• Processor Stalls
   – Percentage of cycles waiting for memory access: 100.0 * (PAPI_MEM_SCY / PAPI_TOT_CYC)
   – Percentage of cycles stalled on any resource: 100.0 * (PAPI_RES_STL / PAPI_TOT_CYC)
• Aggregate Performance
   – MFLOPS (CPU cycles): (PAPI_FP_INS / PAPI_TOT_CYC) * Clock(MHz)
   – MFLOPS (effective): PAPI_FP_INS / Wallclock time
   – MIPS (CPU cycles): (PAPI_TOT_INS / PAPI_TOT_CYC) * Clock(MHz)
   – MIPS (effective): PAPI_TOT_INS / Wallclock time
   – Processor utilization: (PAPI_TOT_CYC * Clock) / Wallclock time

Source: http://perfsuite.ncsa.illinois.edu/psprocess/metrics.shtml
Component PAPI (PAPI-C)
• Goals:
   – Support for simultaneous access to on- and off-processor
     counters
   – Isolation of hardware dependent code in a separable
     ‘substrate’ module
   – Extension of platform independent code to support
     multiple simultaneous substrates
   – API calls to support access to any of several substrates

• Released in PAPI 4.0
Extension to PAPI to Support Multiple Substrates
• Portable layer: the PAPI High Level and PAPI Low Level APIs sit on a Hardware Independent Layer
• Machine-specific layer: several PAPI Machine Dependent Substrates can coexist, each with its own kernel extension and operating system support
   – one substrate accesses the on-processor hardware performance counters
   – another accesses off-processor hardware counters
High-level tools that use PAPI
• TAU (U Oregon)

• HPCToolkit (Rice Univ)

• KOJAK (UTK, FZ Juelich)

• PerfSuite (NCSA)

• SCALASCA

• Open|Speedshop (SGI)

• Intel VTune
• hpcviewer demo (trace, flops and clock cycles)
  – Nek5000 (CFD solver using the spectral element method)
  – xhpl (LINPACK benchmark)
