Devel::NYTProf
  Perl Source Code Profiler


Tim Bunce - YAPC::EU - Pisa - August 2010
Devel::DProf Is Broken
$ perl -we 'print "sub s$_ { sqrt(42) for 1..100 };
 s$_({});\n" for 1..1000' > x.pl

$ perl -d:DProf x.pl

$ dprofpp -r
  Total Elapsed Time =    0.108 Seconds
           Real Time =    0.108 Seconds
  Exclusive Times
  %Time ExclSec CumulS #Calls sec/call Csec/c   Name
   9.26   0.010 0.010       1   0.0100 0.0100   main::s76
   9.26   0.010 0.010       1   0.0100 0.0100   main::s323
   9.26   0.010 0.010       1   0.0100 0.0100   main::s626
   9.26   0.010 0.010       1   0.0100 0.0100   main::s936
   0.00       - -0.000      1        -      -   main::s77
   0.00       - -0.000      1        -      -   main::s82
Evolution
Devel::DProf          |   1995   |   Subroutine
Devel::SmallProf      |   1997   |   Line
Devel::AutoProfiler   |   2002   |   Subroutine
Devel::Profiler       |   2002   |   Subroutine
Devel::Profile        |   2003   |   Subroutine
Devel::FastProf       |   2005   |   Line
Devel::DProfLB        |   2006   |   Subroutine
Devel::WxProf         |   2008   |   Subroutine
Devel::Profit         |   2008   |   Line
Devel::NYTProf v1     |   2008   |   Line
Devel::NYTProf v2     |   2008   |   Line + Subroutine
Devel::NYTProf v3     |   2009   |   Line & Sub + Opcode
Devel::NYTProf v4     |   2010   |   Line & Sub & Op + Eval
Profiling 101
  The Basics
What To Measure?

              CPU Time   Real Time
Subroutines       ?          ?
Statements        ?          ?
CPU Time vs Real Time
• CPU time
 - Measures time the CPU spent executing your code
 - Not (much) affected by other load on system
 - Doesn’t include time spent waiting for i/o etc.


• Real time
 - Measures the elapsed time-of-day
 - Your time is affected by other load on system
 - Includes time spent waiting for i/o etc.
Subroutine vs Statement
• Subroutine Profiling
 - Measures time between subroutine entry and exit
 - That’s the Inclusive time. Exclusive by subtraction.
 - Reasonably fast, reasonably small data files


• Problems
 - Can be confused by funky control flow (goto &sub)
 - No insight into where time spent within large subs
 - Doesn’t measure code outside of a sub
Subroutine vs Statement
• Line/Statement profiling
 - Measure time from start of one statement to next
 - Exclusive time (except includes built-ins & xsubs)
 - Fine grained detail


• Problems
 -   Very expensive in CPU & I/O
 -   Assigns too much time to some statements
 -   Too much detail for large subs
 -   Hard to get overall subroutine times
Devel::NYTProf
v1 Innovations

• Fork by Adam Kaplan of Devel::FastProf
  - working at the New York Times
• HTML report borrowed from Devel::Cover
• More accurate: Discounts profiler overhead
  including cost of writing to the file
• Test suite!
v2 Innovations
• Profiles time per block!
  - Statement times can be aggregated
    to enclosing block
    and enclosing sub

• Dual Profilers!
  - Is a statement profiler
    and a subroutine profiler
    concurrently
v2 Innovations
• Subroutine profiler
 -   tracks subroutine time per calling location
 -   even for xsubs
 -   calculates exclusive time on-the-fly
 -   discounts cost of statement profiler
 -   immune from funky control flow
 -   in memory, writes to file at end
 -   extremely fast
v2 Innovations
• Statement profiler gives correct timing
  after leave ops
 - last statement in loops doesn’t accumulate
   time spent evaluating the condition
 - last statement in subs doesn’t accumulate time
   spent in remainder of calling statement
 - previous statement profilers didn’t fix this
 - slightly dependent on perl version
v2 Other Features
•   Profiles compile-time activity
•   Profiling can be enabled & disabled on the fly
•   Handles forks with no overhead
•   Correct timing for mod_perl
•   Sub-microsecond resolution
•   Multiple clocks, including high-res CPU time
•   Can snapshot source code & evals into profile
•   Built-in zip compression
v3 Features
•   Profiles slow opcodes: system calls, regexps, ...
•   Subroutine caller name noted, for call-graph
•   Handles goto &sub; e.g. AUTOLOAD
•   HTML report includes interactive TreeMaps
•   Outputs call-graph in Graphviz dot format
•   High resolution timer on Mac OS X (100ns)
•   Merge multiple profiles
v4 Features

•   Profile reporting of code inside string evals
•   Smart handling of high numbers of evals
•   Smart handling of ‘duplicate’ anon subs
•   Better handling of assorted edge-cases
•   Detection of slow regex match vars: $& $' $`
Cost of Profiling
                     Time      File Size
       Perl          x 1           -
       DProf         x 4.9      60.7 MB
       SmallProf     x 22.0        -
       FastProf      x 6.3      42.9 MB
       NYTProf       x 3.6      19.6 MB
        + blocks=0   x 3.3      16.9 MB
        + stmts=0    x 2.4       1.2 MB
NYTProf v4.04 running perlcritic 1.088 under perl 5.12.0 on lib/Perl/Critic/Policy
Effect of Profiling

• Your code
  - runs more slowly
  - uses more memory, e.g. saves string eval src code
  - context switches much more often
• Non-closures are as slow to create as closures
  - I don’t know why (perhaps perl keeps the sub
    lexicals around for introspection by the ‘debugger’)
Running NYTProf
   perl -d:NYTProf ...
   perl -MDevel::NYTProf ...


Configure profiler via the NYTPROF env var
   perldoc Devel::NYTProf for the details


To profile code that’s invoked elsewhere:
   PERL5OPT=-d:NYTProf
   NYTPROF=file=/tmp/nytprof.out:addpid=1:...
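
The v2 feature “Profiling can be enabled & disabled on the fly” is driven from
inside the program via functions in the DB namespace. A minimal sketch, assuming
the documented start=no option and the DB::enable_profile / DB::disable_profile /
DB::finish_profile calls; setup() and work() are just stand-ins:

  # Run as:  NYTPROF=start=no perl -d:NYTProf profile_part.pl
  # (start=no defers data collection until we ask for it)
  use strict;
  use warnings;

  sub setup { my $x = 0; $x += sqrt($_) for 1 .. 100_000; $x }  # work we don't want profiled
  sub work  { my $x = 0; $x += log($_)  for 1 .. 100_000; $x }  # the hot path we do want profiled

  setup();

  DB::enable_profile();    # start collecting data here
  work();
  DB::disable_profile();   # stop collecting

  DB::finish_profile();    # flush and close the profile data file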
Reporting: KCachegrind
• KCachegrind call graph - new and cool
   - contributed by C. L. Kao.
   - requires KCachegrind

  $ nytprofcg   # generates nytprof.callgraph
  $ kcachegrind # load the file via the gui
KCachegrind
Reporting: HTML
• HTML report
   - page per source file, annotated with times and links
   - subroutine index table with sortable columns
   - interactive Treemap of subroutine times
   - generates Graphviz dot file of call graph
   - -m (--minimal) faster generation but less detailed

$ nytprofhtml # writes HTML report in ./nytprof/...
$ nytprofhtml --file=/tmp/nytprof.out.793 --open
Summary

(Screenshot of the HTML report summary page, with callouts:)
 - Links to annotated source code
 - Link to sortable table of all subs
 - Timings for perl builtins
Exclusive vs. Inclusive
• Exclusive Time is best for Bottom Up
  - Detail of time spent “in the code of this sub”
  - Where the time actually gets spent
  - Useful for localized (peephole) optimisation


• Inclusive Time is best for Top Down
  - Overview of time spent “in and below this sub”
  - Useful to prioritize structural optimizations
(Screenshot of an annotated source page, with callouts:)
 - Color coding based on Median Average Deviation relative to rest of this file
 - Overall time spent in and below this sub (in + below)
 - Time between starting this perl statement and starting the next,
   so includes overhead of calls to perl subs
 - Timings for each location calling into, or out of, the subroutine
(Screenshot of the Treemap view, with callouts:)
 - Treemap showing relative proportions of exclusive time
 - Boxes represent subroutines
 - Colors only used to show packages (and aren’t pretty yet)
 - Hover over a box to see details
 - Click to drill down one level in the package hierarchy
Calls between packages
Calls to/from/within package
Let’s take a look...
DEMO
Optimizing
  Hints & Tips
Beware My Examples!


     Do your own testing
   With your own perl binary
    On your own hardware
Beware 2!



Take care comparing code fragments!
Edge-effects at loop and scope boundaries.
Statement time includes time getting to the next
perl statement, wherever that may be.
Beware Your Examples!




Consider effect of CPU-level data and code caching
Tends to make second case look faster!
Swap the order to double-check alternatives
Phase 0
Before you start
DONʼT
DO IT!
“The First Rule of Program Optimization:
Don't do it.

The Second Rule of Program Optimization
(for experts only!): Don't do it yet.”

- Michael A. Jackson
Why not?
“More computing sins are committed in the
name of efficiency (without necessarily
achieving it) than for any other single
reason - including blind stupidity.”

- W.A. Wulf
“We should forget about small efficiencies,
say about 97% of the time: premature
optimization is the root of all evil.”


- Donald Knuth
“We should forget about small efficiencies,
say about 97% of the time: premature
optimization is the root of all evil.
Yet we should not pass up our
opportunities in that critical 3%.”
- Donald Knuth
How?
“Throw hardware at it!”

Hardware is Cheap, Programmers are
Expensive.

Hardware upgrades can be less risky than
software optimizations.
“Bottlenecks occur in surprising places, so
don't try to second guess and put in a
speed hack until you have proven that's
where the bottleneck is.”

- Rob Pike
“Measure twice, cut once.”

- Old Carpenter’s Maxim
Phase 1
Low Hanging Fruit
Low Hanging Fruit
1.   Profile code running representative workload.
2.   Look at Exclusive Time of subroutines.
3.   Do they look reasonable?
4.   Examine worst offenders.
5.   Fix only simple local problems.
6.   Profile again.
7.   Fast enough? Then STOP!
8.   Rinse and repeat once or twice, then move on.
“Simple Local Fixes”


 Changes unlikely to introduce bugs
Move invariant
 expressions
 out of loops
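
A minimal before/after sketch of hoisting an invariant; scale_factor() and the
data are hypothetical:

  use strict;
  use warnings;

  my @rows   = (1 .. 5);
  my $config = 10;                      # stands in for some configuration object
  sub scale_factor { return $_[0] }     # imagine this being expensive

  # Before: the invariant call is repeated on every iteration
  my @slow = map { $_ * scale_factor($config) } @rows;

  # After: hoist the invariant out of the loop
  my $scale = scale_factor($config);
  my @fast  = map { $_ * $scale } @rows;

  print "@slow\n@fast\n";   # both print: 10 20 30 40 50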
Avoid->repeated->chains
  ->of->accessors(...);

    Use a temporary variable
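
A short sketch of the temporary-variable idea; My::Config and My::Db are
hypothetical classes defined only to make the example runnable:

  use strict;
  use warnings;

  package My::Config;
  sub new  { bless { db => My::Db->new }, shift }
  sub db   { $_[0]{db} }

  package My::Db;
  sub new  { bless { host => 'localhost' }, shift }
  sub host { $_[0]{host} }

  package main;

  my $cfg = My::Config->new;

  # Slower: the full accessor chain is walked on every iteration
  print $cfg->db->host, "\n" for 1 .. 3;

  # Better: cache the end of the chain in a temporary variable
  my $host = $cfg->db->host;
  print $host, "\n" for 1 .. 3;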
Use faster accessors
          Class::Accessor
          -> Class::Accessor::Fast
          --> Class::Accessor::Faster
          ---> Class::Accessor::Fast::XS
          ----> Class::XSAccessor

These aren’t all compatible so consider your actual usage.
Avoid calling subs that
 don’t do anything!
  my $unused_variable = $self->get_foo;


  my $is_logging = $log->info(...);
  while (...) {
      $log->info(...) if $is_logging;
      ...
  }
Exit subs and loops early
  Delay initializations

  return if not ...a cheap test...;
  return if not ...a more expensive test...;
  my $foo = ...initializations...;
  ...body of subroutine...
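
A concrete (hypothetical) version of the same shape, with the cheap test first
and the setup delayed until it is actually needed:

  use strict;
  use warnings;

  sub handle_item {
      my ($item) = @_;

      return unless defined $item;          # cheap test
      return unless $item =~ /\A\d+\z/;     # more expensive test

      # Initialization delayed until we know we need it
      my %squares = map { $_ => $_ ** 2 } 1 .. 100;   # stand-in for costly setup

      return $squares{$item};
  }

  print handle_item(7), "\n";                          # 49
  print "skipped\n" unless defined handle_item('x');   # rejected before any setup ran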
Fix silly code

-   return exists $nav_type{$country}{$key}
-               ? $nav_type{$country}{$key}
-               : undef;

+   return $nav_type{$country}{$key};
Beware pathological
    regular expressions


Devel::NYTProf shows regular expression opcodes
Avoid unpacking args
  in very hot subs
   sub foo { shift->delegate(@_) }

   sub bar {
       return shift->{bar} unless @_;
       return $_[0]->{bar} = $_[1];
   }
Avoid unnecessary
(capturing parens)
     in regex
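
A small example of the difference; the regex and data are made up:

  use strict;
  use warnings;

  my $line = "GET /index.html HTTP/1.1";

  # Capturing parens populate $1 (and related bookkeeping) on every match
  if ($line =~ /\A(GET|POST|HEAD) /) {
      print "method: $1\n";            # here the capture is actually used
  }

  # When only the grouping matters, (?:...) skips the capture work
  if ($line =~ /\A(?:GET|POST|HEAD) /) {
      print "looks like a request line\n";
  }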
Retest.

Fast enough?

        STOP!
Put the profiler down and walk away
Phase 2
Deeper Changes
Profile with a
    known workload


E.g., 1000 identical requests
Check Inclusive Times
 (especially top-level subs)


Reasonable percentage
  for the workload?
Check subroutine
   call counts

    Reasonable
for the workload?
Add caching
 if appropriate
to reduce calls


Remember invalidation!
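
A minimal sketch of call-count reduction via a cache, including the invalidation
the slide warns about; lookup_price() and the SKU data are hypothetical:

  use strict;
  use warnings;

  my %price_cache;

  sub lookup_price {                  # imagine a slow database query here
      my ($sku) = @_;
      return length($sku) * 10;
  }

  sub cached_price {
      my ($sku) = @_;
      $price_cache{$sku} //= lookup_price($sku);   # compute once per key
      return $price_cache{$sku};
  }

  sub invalidate_price {              # remember invalidation!
      my ($sku) = @_;
      delete $price_cache{$sku};
  }

  print cached_price('ABC123'), "\n" for 1 .. 3;   # lookup_price runs only once
  invalidate_price('ABC123');                      # next call will recompute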
Walk up call chain
to find good spots
    for caching


 Remember invalidation!
Creating many objects
  that don’t get used?


Try a lightweight proxy
e.g. DateTime::Tiny, DateTimeX::Lite, DateTime::LazyInit
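
One way to get the lightweight-proxy effect in plain Perl is to defer building
the heavy object until a method is actually called. A hand-rolled sketch (not how
the modules above work internally), using AUTOLOAD:

  use strict;
  use warnings;

  package Lazy::Proxy;
  our $AUTOLOAD;

  sub new {
      my ($class, $builder) = @_;
      return bless { builder => $builder, real => undef }, $class;
  }

  sub AUTOLOAD {
      my $self = shift;
      (my $method = $AUTOLOAD) =~ s/.*:://;
      return if $method eq 'DESTROY';            # don't build just to destroy
      $self->{real} //= $self->{builder}->();    # build the heavy object on first use
      return $self->{real}->$method(@_);
  }

  package main;

  # The expensive constructor only runs if a method is ever called
  my $dt = Lazy::Proxy->new(sub { require DateTime; DateTime->now });
  print "proxy created without loading DateTime\n";
  # print $dt->ymd, "\n";   # uncommenting forces DateTime to load and build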
Reconfigure your Perl
 can yield significant gains with little effort

       thread support costs ~2..30%
       debugging support costs ~15%
  Also consider: usemymalloc, use64bitint,
   use64bitall, uselongdouble, optimize,
and compiler (system vs gcc vs commercial).
       Consider binary compatibility
        vs new installation directory.
     Rerun make test before installing.
Retest.

Fast enough?

        STOP!
Put the profiler down and walk away.
Phase 3
Structural Changes
Push loops down


-   $object->walk($_) for @dogs;

+   $object->walk_these(@dogs);
Change the data
   structure

hashes <-> arrays
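
For example, a record held as a hash can sometimes be swapped for an array with
constant indices, trading flexibility for less memory and cheaper element access;
a hedged sketch, measure before committing:

  use strict;
  use warnings;
  use constant { NAME => 0, AGE => 1, EMAIL => 2 };   # field indices for the array form

  # Hash-based record: flexible, but every access hashes the key string
  my %person_h = ( name => 'Ada', age => 36, email => 'ada@example.com' );
  print $person_h{name}, "\n";

  # Array-based record: fixed layout, indexed access
  my @person_a = ( 'Ada', 36, 'ada@example.com' );
  print $person_a[NAME], "\n";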
Change the algorithm


What’s the “Big O”?
O(n²) or O(log n) or ...
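
A common case of this: repeated linear scans versus a hash lookup (the data
below is made up):

  use strict;
  use warnings;

  my @allowed = map { "user$_" } 1 .. 2_000;
  my @queries = map { "user$_" } 1 .. 500;

  # O(n) per query: each lookup scans the whole list
  my $slow = grep { my $q = $_; grep { $_ eq $q } @allowed } @queries;

  # O(1) per query: build a hash once, then look keys up directly
  my %allowed = map { $_ => 1 } @allowed;
  my $fast = grep { $allowed{$_} } @queries;

  print "$slow $fast\n";   # 500 500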
Rewrite hot-spots
    in XS / C

Consider Inline::C but beware of
        deployment issues.
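
A minimal Inline::C sketch, assuming Inline::C is installed and a C compiler is
available; the deployment caveat above applies:

  use strict;
  use warnings;

  # Hot numeric loop moved to C; Inline compiles and caches it on first run
  use Inline C => <<'END_C';
  long sum_to(long n) {
      long total = 0;
      long i;
      for (i = 1; i <= n; i++)
          total += i;
      return total;
  }
END_C

  print sum_to(10_000), "\n";   # 50005000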
Small changes add up!

  “I achieved my fast times by
  multitudes of 1% reductions”

         - Bill Raymond
See also “Top 10 Perl
  Performance Tips”
• A presentation by Perrin Harkins
• Covers higher-level issues, including
  - Good DBI usage
  - Fastest modules for serialization, caching,
    templating, HTTP requests etc.
• http://docs.google.com/present/view?id=dhjsvwmm_26dk9btn3g
Questions?

Tim.Bunce@pobox.com
http://blog.timbunce.org
 @timbunce on twitter
