1
SQL Server Performance Tuning (Tips & Tricks)
By Nitin K
niting123@gmail.com
2
Agenda
‣ SQL Server Concepts/Structure
‣ Performance Measuring & Troubleshooting Tools
‣ Locking
‣ Performance Problem : CPU
‣ Performance Problem : Memory
‣ Performance Problem : I/O
‣ Performance Problem : Blocking
‣ Query Tuning
‣ Indexing
3
Who am I
‣ 12+ years of experience on Microsoft technologies
‣ Data Architect on the SQL Server platform
‣ Working as a Datawarehouse Architect for the last 5 years
‣ Playing the role of a BI-DBA
‣ Certified MCSE – SQL Server 2012 Data track
‣ Passionate about performance tuning in both the database and
Business Intelligence areas
4
Performance Tuning in SQL Server
‣ Why is performance tuning necessary?
5
Why is Performance Tuning Necessary?
‣ Allowing your system to scale
•Adding more customers
•Adding more features
‣ Improve overall system performance
‣ Save money by not wasting resources
‣ The database is typically one of the most expensive resources
in a datacenter
6
Performance Focus
‣ Response Time
The time interval between when a request is submitted and when the
first character of the response is received.
‣ Throughput
The number of transactions processed in a fixed unit of time.
‣ Scalability
How throughput and response time change as more hardware
resources are added.
7
‣ Best Practices:
Optimize for real-world workloads
Monitor/review performance regularly
Focus on specific issues
Performance Monitoring Approaches
8
Overview of Performance Monitoring
Tools and Methods
9
Performance Tools: No Extra cost
ā€£ SQL Server Profiler
ā€£ System Monitor (windows performance monitor)
ā€£ Dynamic Management Views (DMV) : SQL 2005+
ā€£ Database tuning advisor (DTA)
ā€£ Microsoft Free tools (SQLDiag, PSSDiag, SQL Nexus ā€“
www.codeplex.com)
ā€£ MDW ā€“ Management Datawarehouse
10
11
Major Performance Killers
‣ Insufficient indexing
‣ Inaccurate statistics
‣ Improper query design
‣ Poorly generated execution plans
‣ Excessive blocking and deadlocks
‣ Non-set-based operations, usually T-SQL cursors
‣ Inappropriate database design
‣ Excessive fragmentation
‣ Nonreusable execution plans
‣ Frequent recompilation of queries
‣ Improper use of cursors
‣ Improper configuration of the database transaction log
‣ Excessive use or improper configuration of tempdb
12
Monitoring SQL Server
Using various SQL Server features to monitor database
activity
13
‣ Overviews of SQL Server usage
Can export to Excel or PDF
‣ Server-Level Report Examples:
Server Dashboard
Memory Consumption
Activity – All Blocking Transactions
Activity – Top Sessions
Performance – Batch Execution Statistics
Performance – Top Queries by Average CPU
Object Execution Statistics
SQL Server Management Studio Reports
14
‣ Examples:
Disk Usage
All Transactions
All Blocking Transactions
Index Usage Statistics
Top Transactions by Age
Schema Changes History
‣ New reports added in Service Packs
Ability to use custom reports
Database-Level Reports
15
‣ Windows Event Logs / Event Viewer
Application and System Event Logs
‣ SQL Server Management Studio
SQL Server Logs
‣ Can configure max. # of log files
SQL Server Agent Error Logs
‣ Can configure logging levels (Errors, Warnings, Information)
‣ Using the Log File Viewer
Can export / load log information
Can search for specific errors/messages
Monitoring SQL Server Logs
16
SQL Server Architecture
17
SQL Server Architecture
18
Supported Protocols
19
SQL Server Databases
20
SQL Server Files
21
SQL Server Transaction Log
22
Recovery Models
23
Data File
24
Extents
25
Log Files
26
Transaction Isolation Levels
Level – Definition
Read Uncommitted – Does not need a lock to read data.
Read Committed ("Default") – Reads only committed data, otherwise waits.
Requests an 'S' lock to read. No guarantee that a read is repeatable.
Repeatable Read – Guarantees that data read in a transaction will not change
for its duration. Holds the 'S' lock for the duration of the transaction.
Serializable – Prevents phantoms.
In any of the above levels, an acquired 'X' lock is held for the total duration of the transaction.
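To make the table concrete, a minimal sketch of raising the isolation level for one session; the table and column names (dbo.Account, Balance) are hypothetical:

-- Default is READ COMMITTED; raise it for this session only
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    -- S locks taken here are now held until COMMIT/ROLLBACK,
    -- so the second read inside the transaction sees the same row values
    SELECT Balance FROM dbo.Account WHERE AccountID = 42;
    SELECT Balance FROM dbo.Account WHERE AccountID = 42;
COMMIT TRANSACTION;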
27
Locks
‣ Lock types/modes
X (exclusive), S (shared), U (update), IX (intent exclusive), IS ...
imposed on DB (database), RID (row id), PAG (page), TAB (table)
‣ Lock compatibility (matrix)
‣ Lock hints
Select * from employee with (Nolock)
More – Rowlock, HoldLock, Tablock ...
‣ Lock hierarchy
Database (DB) -> Table (TAB) -> Page (PAG) -> Row (key)
Lock modes              Shared (S)   Exclusive (X)
Shared (S) – READ LOCKS    OK            NO
Exclusive (X)              NO            NO
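To watch these modes and the lock hierarchy live, a small sketch against the lock DMV (run it from a second session while a transaction is holding locks):

SELECT request_session_id,
       resource_type,        -- DATABASE, OBJECT, PAGE, KEY, RID ...
       request_mode,         -- S, X, U, IS, IX ...
       request_status        -- GRANT or WAIT
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();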
28
What happens when you CRUD
‣ Two places get affected
Memory: data is looked up in memory. If it doesn't exist, the page is
brought into memory (RAM). The operation is performed on the
page in memory.
Transaction log: an entry is written to the transaction log file (.ldf file)
‣ No "instant" changes to the .mdf file (data file)
⇒ Latest changes are in memory and the transaction log file
⇒ Back up the transaction log file in case of SQL Server failure
‣ Checkpoint: dirty data pages in memory are written to the data file
on disk
29
Performance Tuning
Server Configuration
30
Server configuration
31
Tools to Identify the Performance Metric
‣ Performance counters
‣ Activity Monitor
‣ Task Manager
‣ SQL Server DMVs
32
Memory Performance Analysis
‣ The basics of the Performance Monitor tool
‣ Some of the dynamic management objects used to observe
system behavior
‣ How and why hardware resources can be bottlenecks
‣ Methods of observing and measuring memory use within SQL
Server and Windows
‣ Possible resolutions to memory bottlenecks
33
Performance Monitor Tool
34
Performance Counters
35
DMOs and DMVs
‣ SELECT dopc.cntr_value, dopc.cntr_type
FROM sys.dm_os_performance_counters AS dopc
WHERE dopc.object_name = 'SQLServer:General Statistics'
AND dopc.counter_name = 'Logins/sec';
36
SQL Server Memory Management
‣ Max Memory
‣ Min Memory
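A minimal sketch of inspecting and capping these settings with sp_configure; the 8192/2048 MB values are only example numbers, not recommendations:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- leave headroom for the OS and other processes on the box
EXEC sp_configure 'max server memory (MB)', 8192;
EXEC sp_configure 'min server memory (MB)', 2048;
RECONFIGURE;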
37
Additional Memory Monitoring Tools
‣ DBCC MEMORYSTATUS
‣ sys.dm_os_memory_brokers
‣ sys.dm_os_memory_clerks
‣ sys.dm_os_ring_buffers
38
Resolution for Common Memory Problems
‣ Optimizing application workload
‣ Allocating more memory to SQL Server
‣ Moving in-memory tables back to standard storage
‣ Increasing system memory
‣ Changing from a 32-bit to a 64-bit processor
‣ Enabling 3GB of process space
‣ Compressing data
39
Disk Performance Analysis
‣ Using system counters to gather disk performance metrics
‣ Using other mechanisms of gathering disk behavior
‣ Resolving disk performance issues
40
Performance Counters
41
DMVs and DMOs
‣ sys.dm_io_virtual_file_stats
‣ sys.dm_os_wait_stats
‣ Example
SELECT *
FROM sys.dm_os_wait_stats AS dows
WHERE wait_type LIKE 'PAGEIOLATCH%'
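A companion sketch using the first DMV to find the database files with the highest I/O stalls (columns come from sys.dm_io_virtual_file_stats itself):

SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,  vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY (vfs.io_stall_read_ms + vfs.io_stall_write_ms) DESC;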
42
Resolution for common Disk bottlenecks
‣ Optimizing application workload
‣ Using a faster I/O path
‣ Using a RAID array
‣ Using a SAN system
‣ Using solid state drives
43
Resolution for common Disk bottlenecks
‣ Aligning disks properly
‣ Using a battery-backed controller cache
‣ Adding system memory
‣ Creating multiple files and filegroups
‣ Moving the log files to a separate physical drive
‣ Using partitioned tables
44
CPU Performance Analysis
‣ How to gather metrics on the processor
‣ Additional metrics available through T-SQL queries
‣ Methods for resolving processor bottlenecks
45
CPU Bottleneck Analysis
46
DMVs
‣ sys.dm_os_wait_stats
‣ sys.dm_os_workers and sys.dm_os_schedulers
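Two quick checks built on these DMVs, sketched here as examples: a high share of signal waits and persistently queued runnable tasks both point at CPU pressure:

-- share of wait time spent waiting for a CPU (signal waits)
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS DECIMAL(5,2))
       AS signal_wait_pct
FROM sys.dm_os_wait_stats;

-- tasks queued per scheduler
SELECT scheduler_id, current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;   -- visible (user) schedulers only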
47
Resolution for common bottlenecks
‣ Optimizing application workload
‣ Eliminating or reducing excessive compiles/recompiles
‣ Using more or faster processors
‣ Not running unnecessary software
48
Performance Tuning
Query Level
49
Costly Queries
‣ Identify costly queries using
- SQL Server DMVs (sys.dm_exec_query_stats)
- Extended Events
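A hedged sketch of the usual pattern for the DMV route: sys.dm_exec_query_stats joined to the SQL text and ordered by average CPU (swap the ORDER BY for avg_reads to hunt I/O-heavy statements):

SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time / qs.execution_count   AS avg_cpu_us,
       qs.total_logical_reads / qs.execution_count AS avg_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_us DESC;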
50
Query Tuning
‣ Detection
Profiler
‣ Look for queries/stored procs with high reads, CPU and
duration. These are candidates for tuning.
‣ Look for stored procs that are recompiling (it's an event)
DMVs
‣ Find queries with missing indexes (see the sketch after this list)
‣ Find tables that are fragmented
‣ Find tempdb database bottlenecks
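A sketch of the missing-index lookup mentioned above, joining the three missing-index DMVs; treat the output as suggestions to review, not indexes to create blindly:

SELECT TOP (10)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details     AS mid
JOIN sys.dm_db_missing_index_groups      AS mig  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;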
51
Query Tuning cont.
‣ Troubleshoot : Query Execution Plan
Operator types
‣ Seek (best and preferred)
‣ Scan (not preferred)
‣ Bookmark lookup (better than a scan; usually seen with a non-
clustered index)
Join types
‣ Nested Loops
‣ Merge
‣ Hash (avoid)
Graphical Execution Plan Icons :
http://msdn2.microsoft.com/en-us/library/ms175913.aspx
http://www.sql-server-performance.com/articles/per/select_indexes_p1.aspx
52
53
Why do we need an optimizer?
The Query Optimizer
•T-SQL is a "what", not "how", language.
•We write "logical" requests.
•The SQL optimizer engine converts
logical requests into physical
plans.
54
The job of the
SQL Optimizer is
to find "the best
plan possible".
The Query Optimizer
X
What is the goal of the Optimizer?
55
Query optimization explained simply
1. Query submitted
2. Magic happens
3. Shedload of data returned
56
Optimizer steps
Query Optimization (in a bit more detail)
Parse → Bind → Optimize → Execute
57
Parse
Builds a tree structure based upon the logical operators in the query.
For example:
SELECT
SSOD.[SalesOrderID],
PP.[Name],
PP.[Weight],
SSOD.[UnitPrice]
FROM [Sales].[SalesOrderDetail] SSOD
INNER JOIN [Production].[Product] PP
ON SSOD.ProductID = PP.ProductID
WHERE PP.Weight > 100
(Logical operation nodes in the parse tree: Project → Filter → Join → {Product, SalesOrderDetail})
58
Bind
• Series of validation steps
• Schema validation
• Table validation
• Attribute validation
• Permission validation
SELECT
SSOD.[SalesOrderID],
PP.[Name],
PP.[Weight],
SSOD.[UnitPrice]
FROM [Sales].[SalesOrderDetail] SSOD
INNER JOIN [Production].[Product] PP
ON SSOD.ProductID = PP.ProductID
WHERE PP.Weight > 100
59
Optimize
Works through many rules and heuristics.
These include:
•Commutativity
•Substitution rules
•Exploration rules
•Implementation rules
60
SELECT prod_category, AVG(amount_sold)
FROM o_sales s, o_products p
WHERE p.prod_id = s.prod_id
GROUP BY prod_category;
61
SQL: set based expression / serial execution
• SQL syntax is based on "set based" expressions (no processing rules)
• Query execution is serial
– SQL Server "compiles" the query into a series of sequential steps which are
executed one after the other
– Individual steps also have internal sequential processing
• (e.g. table scans are processed one page after another, and per row within a page)
Returns CustID, OrderID
& OrderDate for orders >
1st Jan 2005
No processing rules are included in the
SQL statement, just the "set" of
data to be returned
• Execution plans display these steps
62
Intro to execution plans – a simple example
• The execution plan shows how SQL Server compiled & executes this
query
– Ctrl-L in SSMS for the "estimated" plan (without running the query)
– Ctrl-M displays the "actual" plan after the query has completed
• Read execution plans from top right to bottom left
– In this case, the plan starts with a Clustered Index Scan of [SalesOrderHeader]
– Then, for every row returned, performs an index seek into [Customers]
Plan node cost
shown as % of total
plan "cost"
Table / index access
methods displayed
(Scan, Seek etc)
Join physical
operators displayed
(Loops, Merge, Hash)
Thickness of the arrows between nodes
denotes the estimated / actual number
of rows carried between nodes in
the plan
63
Execution plan node properties
• Mousing over an execution plan node reveals extra properties..
Search predicate.
A WHERE filter in this
case, but can also
be a join filter
Number of rows
returned shown in the
Actual Execution Plan
Name of the schema object accessed
to physically process the query –
typically an index, but possibly also
a heap structure
Ordered / Unordered – displays
whether the scan operation follows the page
"chain" linked list (next / previous
page # in the page header) or follows the
Index Allocation Map (IAM) page
64
"Heap" Table Storage
• Query execution example:
Select FName, Lname, PhNo
from Customers where Lname = 'Smith'
No b-tree with HEAPs, so
no lookup method is
available unless other
indexes are present. The only
option is to scan the heap
No physical ordering of
table rows (despite this
display)
The scan cannot stop
just because a row is
located. Because the data is
not ordered, the scan must
continue through to the end of the
table (heap)
• Table storage structure used when there is no clustered index on the table
– Rarely used, as CIXs are added to PKs by default
– Oracle uses heap storage by default (even with PKs)
• No physical ordering of rows
– Stored in order of insertion
– New pages added to the end of the "heap" as needed
• No B-tree index nodes (no "index")
65
Interpreting Execution plan
66
Understanding the Execution
Plans
67
Access Method
69
Commonly used operators
70
Blocking and Non-blocking Operators
• Operators / iterators can be put in two categories:
1. Blocking
2. Non-blocking
• Having a blocking operator in your plan means other operators further
down the line are sitting idle.
This will reduce the overall performance of your query
• Some examples...
71
Blocking and Non-blocking operators – Blocking example
• An example using the sort operator: a Sort (Desc) must consume all of
its input rows (Row 1 ... Row 5) before it can return its first row.
72
Hints can be placed in SQL to force the optimizer to follow our
desired retrieval path rather than the one calculated by the optimizer.
Select /*+ RULE */
From emp, dept
Where ...
The Select statement instructs the optimizer to use the rule-based
optimizer rather than the cost-based optimizer.
Delete /*+ RULE */ . . . . . . . .
Update /*+ RULE */ . . . . . . . .
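The /*+ RULE */ syntax above is Oracle-style; in SQL Server the same idea is expressed with table hints and the OPTION clause. A hedged sketch, reusing the slide's hypothetical emp and dept tables (the index name is invented for illustration):

SELECT e.*, d.*
FROM dbo.emp AS e WITH (INDEX(IX_emp_deptno))   -- force a specific (hypothetical) index
JOIN dbo.dept AS d ON d.deptno = e.deptno
OPTION (HASH JOIN, MAXDOP 1);                   -- force the join type and parallelism

As the "Avoid query hints" tip later in this deck says, reach for these only when you are certain the optimizer is wrong.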
72
73
Index Tuning
‣ What an index is
‣ The benefits and overhead of an index
‣ General recommendations for index design
‣ Clustered and nonclustered index behavior and comparisons
‣ Recommendations for clustered and nonclustered indexes
74
What Is an Index?
‣ One of the best ways to reduce disk I/O is to use an index
‣ Allows SQL Server to find data in a table without scanning the
entire table
‣ Example
SELECT TOP 10 p.ProductID, p.[Name], p.StandardCost, p.[Weight],
ROW_NUMBER() OVER (ORDER BY p.Name DESC) AS RowNumber
FROM Production.Product p
ORDER BY p.Name DESC;
75
Types of Index
‣ Clustered Index
– Primary key default (but not necessary)
– Data is stored at the leaf level
– Data is ordered by the key
‣ Non-clustered Index
– Uses the cluster key or the RID of a heap
– INCLUDE columns stored at the leaf
‣ And the rest – outside the scope of this session
76
Index Rules
‣ Clustered Index
Choose wisely. Only one per table is possible.
The primary key is clustered by default. Evaluate the default behaviour.
‣ Non-Clustered Index
More than one is possible.
Foreign keys are always good candidates for a non-clustered index
(because of joins)
‣ Evaluate 'included columns' in indexing. Every
non-clustered index contains the clustered keys
‣ Choose the index fill factor wisely.
‣ Find tables with a large rowcount but no indexing. Maybe
they need an index.
77
Index design recommendations
Examine the WHERE clause and JOIN criteria columns.
Use narrow indexes.
Examine column uniqueness.
Examine the column data type.
Consider column order.
Consider the type of index (clustered versus nonclustered).
78
Lookups & Joins
‣ Key
‣ RID
79
Joins optimization
‣ Hash joins
‣ Merge joins
‣ Nested loop joins
80
Join Operators (intra-table operators)
• Nested Loop Join
– The original & only join operator until SQL Server 7.0
– A "for each row..." type operator
– Takes output from one plan node & executes another
operation "for each" output row from that plan node
• Merge Join
– Scans both sides of the join in parallel
– Ideal for large range scans where the joined columns are
indexed
• If the joined columns aren't indexed, requires an expensive sort
operation prior to the Merge
• Hash Join
– "Hashes" the values of the join column(s) from one side of the join
• Usually the smaller side
– "Probes" with the other side
• Usually the larger side
– A hash is conceptually similar to building an index for every execution of a
query
• Hash buckets are not shared between executions
– Worst-case join operator
– Useful for large-scale range scans which occur infrequently
81
Hash Join
‣ A hash join uses the two join inputs as a build input and a probe
input.
‣ The build input is shown as the top input in the execution plan,
and the probe input is shown as the bottom input.
‣ Usually the smaller of the two inputs serves as the build input
because it's going to be stored on the system, so the optimizer
attempts to minimize the memory used.
‣ The hash join performs its operation in two phases: the build
phase and the probe phase.
82
Hash Join ā€“ Example
‣ SELECT p.* FROM Production.Product p JOIN
Production.ProductCategory pc ON p.ProductSubcategoryID =
pc.ProductCategoryID;
83
Merge Join
‣ A merge join requires both join inputs to be sorted on the merge
columns, as defined by the join criterion
‣ Since each join input is sorted, the merge join gets a row from
each input and compares them for equality
‣ A matching row is produced if they are equal. This process is
repeated until all rows are processed
84
Merge Join - Example
85
Nested Loop Join
‣ A nested loop join uses one join input as the outer input table
and the other as the inner input table
‣ The outer input table is shown as the top input in the execution
plan, and the inner input table is shown as the bottom input table
‣ The inner loop, executed for each outer row, searches for
matching rows in the inner input table
‣ Nested loop joins are highly effective if the outer input is quite
small and the inner input is larger but indexed
86
Nested Loop Join - Example
87
Quick comparison
88
Statistics, Data Distribution, and Cardinality
‣ The role of statistics in query optimization
‣ The importance of statistics on columns with indexes
‣ The importance of statistics on non-indexed columns used in
join and filter criteria
‣ Analysis of single-column and multicolumn statistics, including
the computation of selectivity of a column for indexing
‣ Statistics maintenance
‣ Effective evaluation of statistics used in a query execution
89
Statistics: Query Optimizer
The query optimizer in SQL Server is cost-based. Its cost
includes:
1. The cost of using different resources (CPU and IO)
2. The total execution time
It determines the cost by using:
‣ Cardinality: the total number of rows processed at each
level of a query plan, estimated with the help of histograms,
predicates and constraints
‣ The cost model of the algorithm: to perform various
operations like sorting, searching, comparisons etc.
90
Statistics Analysis
ā€£ The query optimizer uses statistics to create query plans
that improve query performance
ā€£ A correct statistics will lead to high-quality query plan.
ā€£ The query optimizer determines when statistics might be
out-of-date by counting the number of data modifications
since the last statistics update and comparing the number
of modifications to a threshold.
91
Auto create statistics
‣ The default setting of auto create statistics is ON.
‣ Statistics are created when:
‣ a clustered or non-clustered index is created
‣ a SELECT query is executed.
‣ Auto create and auto update apply strictly to single-column
statistics.
92
Why query 2 is performing better
‣ If we perform the following operations on a field of any table in a query
predicate:
1. Using any system function or user-defined function
2. A scalar operation like addition, multiplication etc.
3. Type casting
‣ In this situation the SQL Server query optimizer is not able to estimate
the correct cardinality using statistics.
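A small illustration of the point, assuming the AdventureWorks Sales.SalesOrderHeader table used elsewhere in this deck: the first form wraps the column in a function and defeats the histogram, the second leaves the column untouched:

-- Cardinality estimate suffers: a function is applied to the column
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE YEAR(OrderDate) = 2005;

-- Same logic, but the optimizer can use the OrderDate statistics (and an index)
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE OrderDate >= '20050101'
  AND OrderDate <  '20060101';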
93
To improve cardinality
‣ If possible, simplify expressions with constants in them.
‣ If possible, do not perform any operation on any field of
a table in the WHERE clause, ON clause or HAVING clause
‣ Don't use local variables in the WHERE clause, ON clause or
HAVING clause.
‣ If there is a cross relationship among fields, or there is a
complex expression on a field in a query predicate, it is
better to create a computed column and then create a non-
clustered index on it.
94
Statistics Tools and commands
‣ CREATE STATISTICS
‣ sp_updatestats
‣ sp_autostats
‣ sp_helpstats
‣ DBCC SHOW_STATISTICS
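A hedged sketch of two of these in use; the statistics/index name is the AdventureWorks primary key and may differ in your database:

-- Header, density vector and histogram for one statistics object
DBCC SHOW_STATISTICS ('Production.Product', 'PK_Product_ProductID');

-- Refresh statistics on one table, or on every table in the database
UPDATE STATISTICS Production.Product WITH FULLSCAN;
EXEC sp_updatestats;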
95
Statistics Maintenance
‣ Auto Create Statistics (DB level)
‣ Auto Update Statistics
‣ Manual maintenance
96
Index fragmentation
‣ The causes of index fragmentation, including an analysis of
page splits caused by INSERT and UPDATE statements
‣ The overhead costs associated with fragmentation
‣ How to analyze the amount of fragmentation
‣ Techniques used to resolve fragmentation
‣ The significance of the fill factor in helping to control
fragmentation
97
Cause for Fragmentation
‣ Fragmentation occurs when data is modified in a table.
‣ Page splits cause database fragmentation
‣ A new leaf page is added that contains part of the
original page and maintains the logical order of the rows in the
index key
‣ Although the new leaf page maintains the logical order of the data rows in
the original page, the new page usually won't be physically adjacent
to the original page on the disk.
‣ The logical key order of the index then doesn't match the physical
order within the file
98
Identify Fragmentation & Resolution
‣ Check fragmentation using
sys.dm_db_index_physical_stats (see the sketch below)
‣ Only rebuild or reorganize indexes that are fragmented
‣ Rebuild heavily fragmented indexes
‣ Reorganize moderately fragmented indexes
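A sketch of that check for the current database; the 10% and 1000-page filters are common rules of thumb, not hard limits:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;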
99
Reorganize Index
‣ If index fragmentation is less than 10%, no
action is required
‣ 20 – 30% requires you to reorganize indexes
‣ Use ALTER INDEX REORGANIZE
USE AdventureWorks;
ALTER INDEX PK_ProductPhoto_ProductPhotoID
ON Production.ProductPhoto REORGANIZE;
This reorganizes the PK_ProductPhoto_ProductPhotoID index on the Production.ProductPhoto
table
100
Rebuild Index
‣ More than 30% fragmentation requires you to rebuild indexes
‣ There are two methods
CREATE INDEX ... WITH (DROP_EXISTING = ON)
ALTER INDEX REBUILD
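A minimal sketch of the second method on the same AdventureWorks index used above; the fill factor value is only an example:

ALTER INDEX PK_ProductPhoto_ProductPhotoID
ON Production.ProductPhoto
REBUILD WITH (FILLFACTOR = 90);

-- or rebuild every index on the table in one statement
ALTER INDEX ALL ON Production.ProductPhoto REBUILD;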
101
Significance of the Fill Factor
‣ SQL Server allows you to control the amount of free space
within the leaf pages of an index by using the fill factor
‣ If there will be enough INSERT queries on the table or
UPDATE queries on the index key columns, you can pre-
allocate free space in the index leaf pages using the fill factor to
minimize page splits
102
Query parsing
Parsing
Binding
Query optimization
Execution plan generation, caching, and hash plan generation
Query execution
103
Parsing Flowchart
104
Query Execution
105
Optimization Techniques
‣ Syntax-based optimization of the query
‣ Trivial plan match to avoid in-depth query optimization for simple
queries
‣ Index and join strategies based on current distribution statistics
‣ Query optimization in stepped phases to control the cost of
optimization
‣ Execution plan caching to avoid the regeneration of query plans
106
continued
107
Query – Execution Plan Cache
‣ Saves the plans created in a memory space on the server called
the plan cache.
‣ SELECT *
FROM sys.dm_exec_cached_plans;
108
Continued
109
Plan Reusability of an Ad Hoc Workload
‣ Optimize for Ad Hoc Workloads
‣ Simple Parameterization (default)
‣ Forced Parameterization
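A hedged sketch of where these knobs live (the database name is just an example):

-- Server level: cache only a plan stub on the first execution of an ad hoc query
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1; RECONFIGURE;

-- Database level: switch from simple (default) to forced parameterization
ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED;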
110
Plan Reusability of a Prepared Workload
‣ Stored Procedures
‣ sp_executesql
‣ Prepare/Execute Model
111
Stored Procedure – why?
‣ A standard technique for improving the effectiveness of plan
caching
‣ When the stored procedure is compiled, the generated execution
plan is cached for reuse. This plan is used for future
executions
‣ Performance benefits
- Reduced network traffic
- Business logic is close to the data
112
sp_executesql
‣ sp_executesql is a system stored procedure that provides a
mechanism to submit one or more queries as a prepared
workload
‣ It allows the variable parts of the query to be explicitly
parameterized, and it can therefore provide execution plan
reusability as effective as a stored procedure
113
Continued
‣ DECLARE @query NVARCHAR(MAX), @paramlist NVARCHAR(MAX);
SET @query = N'SELECT soh.SalesOrderNumber, soh.OrderDate, sod.OrderQty, sod.LineTotal
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.CustomerID = @CustomerID AND sod.ProductID = @ProductID';
SET @paramlist = N'@CustomerID INT, @ProductID INT';
EXEC sp_executesql @query, @paramlist, @CustomerID = 29690, @ProductID = 711;
114
Prepare/Execute Model
‣ ODBC and OLEDB provide a prepare/execute model to submit
queries as a prepared workload
‣ Like sp_executesql, this model allows the variable parts of the
queries to be parameterized explicitly
‣ The prepare phase allows SQL Server to generate the execution
plan for the query and return a handle of the execution plan to
the application
‣ This execution plan handle is used by the execute phase to
execute the query with different parameter values
115
Query stats and Query Hash
‣ With SQL Server 2008, new functionality around execution plans
and the cache was introduced, called the query plan hash and
the query hash
‣ You can retrieve the query plan hash and the query hash from
sys.dm_exec_query_stats
116
Recommendations
‣ Explicitly parameterize variable parts of a query.
‣ Use stored procedures to implement business functionality.
‣ Use sp_executesql to avoid stored procedure maintenance.
‣ Use the prepare/execute model to avoid resending a query string.
‣ Avoid ad hoc queries.
‣ Use sp_executesql over EXECUTE for dynamic queries.
‣ Parameterize variable parts of queries with care.
‣ Avoid modifying environment settings between connections.
‣ Avoid the implicit resolution of objects in queries
117
Top 10 for Building Efficient Queries
1. Favor set-based logic over procedural or cursor logic
• The most important factor to consider when tuning queries is
how to properly express logic in a set-based manner.
• Cursors or other procedural constructs limit the query optimizer's
ability to generate flexible query plans.
• Cursors can therefore reduce the possibility of performance
improvements in many situations
118
Top 10 for Building Efficient Queries
2. Test query variations for performance
• The query optimizer can often produce widely different plans for
logically equivalent queries.
• Test different techniques, such as joins or subqueries, to find out
which perform better in various situations.
119
Top 10 for Building Efficient Queries
3. Avoid query hints.
• You must work with the SQL Server query optimizer, rather than
against it, to create efficient queries.
• Query hints tell the query optimizer how to behave and therefore
override the optimizer's ability to do its job properly.
• If you eliminate the optimizer's choices, you might limit yourself to
a query plan that is less than ideal.
• Use query hints only when you are absolutely certain that the
query optimizer is incorrect.
120
Top 10 for Building Efficient Queries
‣ 4. Use correlated subqueries to improve performance.
• Since the query optimizer is able to integrate subqueries into the
main query flow in a variety of ways, subqueries might help in
various query tuning situations.
• Subqueries can be especially useful in situations in which you
create a join to a table only to verify the existence of correlated
rows. For better performance, replace these kinds of joins with
correlated subqueries that make use of the EXISTS operator (see
the sketch below).
121
Top 10 for Building Efficient Queries
‣ 4. Continued
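A hedged sketch of that rewrite against AdventureWorks tables (assumed here; the original "Continued" slide shows its example as an image):

-- Join used only to check that a customer has at least one order
SELECT DISTINCT c.CustomerID
FROM Sales.Customer AS c
JOIN Sales.SalesOrderHeader AS soh
  ON soh.CustomerID = c.CustomerID;

-- Correlated subquery with EXISTS: no duplicate elimination needed
SELECT c.CustomerID
FROM Sales.Customer AS c
WHERE EXISTS (SELECT 1
              FROM Sales.SalesOrderHeader AS soh
              WHERE soh.CustomerID = c.CustomerID);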
122
Top 10 for Building Efficient Queries
5. Avoid using a scalar user-defined function in the WHERE
clause.
‣ Scalar user-defined functions, unlike scalar subqueries, are not
optimized into the main query plan.
‣ Instead, they are called row-by-row through a hidden cursor.
‣ This is especially troublesome in the WHERE clause, because the
function is called for every input row.
‣ Using a scalar function in the SELECT list is much less
problematic because the rows have already been filtered by the
WHERE clause
123
Top 10 for Building Efficient Queries
‣ 6. Use table-valued user-defined functions as derived tables.
‣ In contrast to scalar user-defined functions, table-valued
functions are often helpful from a performance point of view
when you use them as derived tables.
‣ The query processor evaluates a derived table only once per
query.
‣ If you embed the logic in a table-valued user-defined function,
you can encapsulate and reuse it for other queries
124
Top 10 for Building Efficient Queries
‣ 6. Continued
125
Top 10 for Building Efficient Queries
‣ 7. Avoid unnecessary GROUP BY columns
‣ Use a subquery instead.
‣ The process of grouping rows becomes more expensive as you
add more columns to the GROUP BY list.
‣ If your query has few column aggregations but many non-
aggregated grouped columns, you might be able to refactor it by
using a correlated scalar subquery.
‣ This will result in less work for grouping in the query and
therefore possibly better overall query performance.
126
Top 10 for Building Efficient Queries
‣ 7. Continued
127
Top 10 for Building Efficient Queries
‣ 8. Use CASE expressions to include variable logic in a query
‣ The CASE expression is one of the most powerful logic tools
available to T-SQL programmers.
‣ Using CASE, you can dynamically change column output on a
row-by-row basis.
‣ This enables your query to return only the data that is
absolutely necessary and therefore reduces the I/O operations
and network overhead required to assemble and send
large result sets to clients.
128
Top 10 for Building Efficient Queries
‣ 9. Divide joins into temporary tables when you query very large
tables.
‣ The query optimizer's main strategy is to find query plans that
satisfy queries by using single operations.
‣ Although this strategy works for most cases, it can fail for larger
sets of data because the huge joins require so much I/O
overhead.
‣ In some cases, a better option is to reduce the working set by
using temporary tables to materialize key parts of the query. You
can then join the temporary tables to produce the final result.
129
Stored Procedure
Best Practices
‣ Avoid using "sp_" as a name prefix
‣ Avoid stored procedures that accept parameters for table
names
‣ Use the SET NOCOUNT ON option in stored procedures
‣ Limit the use of temporary tables and table variables in stored
procedures
‣ If a stored procedure does multiple data modification
operations, make sure to enlist them in a transaction.
‣ When working with dynamic T-SQL, use sp_executesql instead
of the EXEC statement
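A short sketch that pulls several of these practices together; the procedure and parameter names are hypothetical:

CREATE PROCEDURE dbo.usp_GetOrdersByCustomer   -- no sp_ prefix
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;          -- suppress "rows affected" chatter
    SELECT soh.SalesOrderID, soh.OrderDate
    FROM Sales.SalesOrderHeader AS soh
    WHERE soh.CustomerID = @CustomerID;
END;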
130
Views
Best Practices
‣ Use views to abstract complex data structures
‣ Use views to encapsulate aggregate queries
‣ Use views to provide more user-friendly column names
‣ Think of reusability when designing views
‣ Avoid using the ORDER BY clause in views that contain a TOP
100 PERCENT clause.
‣ Utilize indexes on views that include aggregate data
131
Top 10 for Building Efficient Queries
‣ 10. Refactoring cursors into queries
‣ Rebuild the logic as multiple queries
‣ Rebuild the logic as a user-defined function
‣ Rebuild the logic as a complex query with a CASE expression
Weitere Ƥhnliche Inhalte

Was ist angesagt?

Oracle backup and recovery
Oracle backup and recoveryOracle backup and recovery
Oracle backup and recovery
Yogiji Creations
Ā 
Oracle 10g Performance: chapter 02 aas
Oracle 10g Performance: chapter 02 aasOracle 10g Performance: chapter 02 aas
Oracle 10g Performance: chapter 02 aas
Kyle Hailey
Ā 

Was ist angesagt? (20)

Architecture of exadata database machine ā€“ Part II
Architecture of exadata database machine ā€“ Part IIArchitecture of exadata database machine ā€“ Part II
Architecture of exadata database machine ā€“ Part II
Ā 
SQL Server Tuning to Improve Database Performance
SQL Server Tuning to Improve Database PerformanceSQL Server Tuning to Improve Database Performance
SQL Server Tuning to Improve Database Performance
Ā 
MongodB Internals
MongodB InternalsMongodB Internals
MongodB Internals
Ā 
Ash architecture and advanced usage rmoug2014
Ash architecture and advanced usage rmoug2014Ash architecture and advanced usage rmoug2014
Ash architecture and advanced usage rmoug2014
Ā 
Making Apache Spark Better with Delta Lake
Making Apache Spark Better with Delta LakeMaking Apache Spark Better with Delta Lake
Making Apache Spark Better with Delta Lake
Ā 
Oracle 12c Architecture
Oracle 12c ArchitectureOracle 12c Architecture
Oracle 12c Architecture
Ā 
Oracle backup and recovery
Oracle backup and recoveryOracle backup and recovery
Oracle backup and recovery
Ā 
Processing Large Data with Apache Spark -- HasGeek
Processing Large Data with Apache Spark -- HasGeekProcessing Large Data with Apache Spark -- HasGeek
Processing Large Data with Apache Spark -- HasGeek
Ā 
Oracle database performance tuning
Oracle database performance tuningOracle database performance tuning
Oracle database performance tuning
Ā 
Oracle Database Management - Backup/Recovery
Oracle Database Management - Backup/RecoveryOracle Database Management - Backup/Recovery
Oracle Database Management - Backup/Recovery
Ā 
The Great Debate: PostgreSQL vs MySQL
The Great Debate: PostgreSQL vs MySQLThe Great Debate: PostgreSQL vs MySQL
The Great Debate: PostgreSQL vs MySQL
Ā 
Oracle AWR Data mining
Oracle AWR Data miningOracle AWR Data mining
Oracle AWR Data mining
Ā 
SQL Monitoring in Oracle Database 12c
SQL Monitoring in Oracle Database 12cSQL Monitoring in Oracle Database 12c
SQL Monitoring in Oracle Database 12c
Ā 
Tanel Poder - Troubleshooting Complex Oracle Performance Issues - Part 2
Tanel Poder - Troubleshooting Complex Oracle Performance Issues - Part 2Tanel Poder - Troubleshooting Complex Oracle Performance Issues - Part 2
Tanel Poder - Troubleshooting Complex Oracle Performance Issues - Part 2
Ā 
Top 10 Mistakes When Migrating From Oracle to PostgreSQL
Top 10 Mistakes When Migrating From Oracle to PostgreSQLTop 10 Mistakes When Migrating From Oracle to PostgreSQL
Top 10 Mistakes When Migrating From Oracle to PostgreSQL
Ā 
Troubleshooting Complex Performance issues - Oracle SEG$ contention
Troubleshooting Complex Performance issues - Oracle SEG$ contentionTroubleshooting Complex Performance issues - Oracle SEG$ contention
Troubleshooting Complex Performance issues - Oracle SEG$ contention
Ā 
Average Active Sessions RMOUG2007
Average Active Sessions RMOUG2007Average Active Sessions RMOUG2007
Average Active Sessions RMOUG2007
Ā 
Postgresql database administration volume 1
Postgresql database administration volume 1Postgresql database administration volume 1
Postgresql database administration volume 1
Ā 
Oracle 10g Performance: chapter 02 aas
Oracle 10g Performance: chapter 02 aasOracle 10g Performance: chapter 02 aas
Oracle 10g Performance: chapter 02 aas
Ā 
Delta lake and the delta architecture
Delta lake and the delta architectureDelta lake and the delta architecture
Delta lake and the delta architecture
Ā 

Andere mochten auch

Why & how to optimize sql server for performance from design to query
Why & how to optimize sql server for performance from design to queryWhy & how to optimize sql server for performance from design to query
Why & how to optimize sql server for performance from design to query
Antonios Chatzipavlis
Ā 

Andere mochten auch (16)

Ten query tuning techniques every SQL Server programmer should know
Ten query tuning techniques every SQL Server programmer should knowTen query tuning techniques every SQL Server programmer should know
Ten query tuning techniques every SQL Server programmer should know
Ā 
SQL Server Query Tuning Tips - Get it Right the First Time
SQL Server Query Tuning Tips - Get it Right the First TimeSQL Server Query Tuning Tips - Get it Right the First Time
SQL Server Query Tuning Tips - Get it Right the First Time
Ā 
SQL Server Performance Tuning Baseline
SQL Server Performance Tuning BaselineSQL Server Performance Tuning Baseline
SQL Server Performance Tuning Baseline
Ā 
Performance tuning and optimization (ppt)
Performance tuning and optimization (ppt)Performance tuning and optimization (ppt)
Performance tuning and optimization (ppt)
Ā 
Administering Database - Pengenalan DBA dan Konfigurasi SQL Server 2005
Administering Database - Pengenalan DBA dan Konfigurasi SQL Server 2005Administering Database - Pengenalan DBA dan Konfigurasi SQL Server 2005
Administering Database - Pengenalan DBA dan Konfigurasi SQL Server 2005
Ā 
Sql server performance Tuning
Sql server performance TuningSql server performance Tuning
Sql server performance Tuning
Ā 
SQL Server Query Optimization, Execution and Debugging Query Performance
SQL Server Query Optimization, Execution and Debugging Query PerformanceSQL Server Query Optimization, Execution and Debugging Query Performance
SQL Server Query Optimization, Execution and Debugging Query Performance
Ā 
SQL Server High Availability
SQL Server High AvailabilitySQL Server High Availability
SQL Server High Availability
Ā 
Why & how to optimize sql server for performance from design to query
Why & how to optimize sql server for performance from design to queryWhy & how to optimize sql server for performance from design to query
Why & how to optimize sql server for performance from design to query
Ā 
SQL Server
SQL ServerSQL Server
SQL Server
Ā 
Database Performance Tuning Introduction
Database  Performance Tuning IntroductionDatabase  Performance Tuning Introduction
Database Performance Tuning Introduction
Ā 
Database Performance Tuning
Database Performance Tuning Database Performance Tuning
Database Performance Tuning
Ā 
Microsoft SQL Server internals & architecture
Microsoft SQL Server internals & architectureMicrosoft SQL Server internals & architecture
Microsoft SQL Server internals & architecture
Ā 
Always on in SQL Server 2012
Always on in SQL Server 2012Always on in SQL Server 2012
Always on in SQL Server 2012
Ā 
Ms sql server architecture
Ms sql server architectureMs sql server architecture
Ms sql server architecture
Ā 
SQL Server 2012 Best Practices
SQL Server 2012 Best PracticesSQL Server 2012 Best Practices
SQL Server 2012 Best Practices
Ā 

Ƅhnlich wie Sql server performance tuning

SQL Explore 2012: P&T Part 1
SQL Explore 2012: P&T Part 1SQL Explore 2012: P&T Part 1
SQL Explore 2012: P&T Part 1
sqlserver.co.il
Ā 
Challenges of Implementing an Advanced SQL Engine on Hadoop
Challenges of Implementing an Advanced SQL Engine on HadoopChallenges of Implementing an Advanced SQL Engine on Hadoop
Challenges of Implementing an Advanced SQL Engine on Hadoop
DataWorks Summit
Ā 
Db As Behaving Badly... Worst Practices For Database Administrators Rod Colledge
Db As Behaving Badly... Worst Practices For Database Administrators Rod ColledgeDb As Behaving Badly... Worst Practices For Database Administrators Rod Colledge
Db As Behaving Badly... Worst Practices For Database Administrators Rod Colledge
sqlserver.co.il
Ā 
NoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
NoCOUG_201411_Patel_Managing_a_Large_OLTP_DatabaseNoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
NoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
Paresh Patel
Ā 

Ƅhnlich wie Sql server performance tuning (20)

Sql server performance tuning and optimization
Sql server performance tuning and optimizationSql server performance tuning and optimization
Sql server performance tuning and optimization
Ā 
Big Data with SQL Server
Big Data with SQL ServerBig Data with SQL Server
Big Data with SQL Server
Ā 
Sql server tips from the field
Sql server tips from the fieldSql server tips from the field
Sql server tips from the field
Ā 
Oracle Database Performance Tuning Advanced Features and Best Practices for DBAs
Oracle Database Performance Tuning Advanced Features and Best Practices for DBAsOracle Database Performance Tuning Advanced Features and Best Practices for DBAs
Oracle Database Performance Tuning Advanced Features and Best Practices for DBAs
Ā 
Sql Server tips from the field
Sql Server tips from the fieldSql Server tips from the field
Sql Server tips from the field
Ā 
Winning performance challenges in oracle standard editions
Winning performance challenges in oracle standard editionsWinning performance challenges in oracle standard editions
Winning performance challenges in oracle standard editions
Ā 
SQL Explore 2012: P&T Part 1
SQL Explore 2012: P&T Part 1SQL Explore 2012: P&T Part 1
SQL Explore 2012: P&T Part 1
Ā 
Tuning data warehouse
Tuning data warehouseTuning data warehouse
Tuning data warehouse
Ā 
Analysing and Troubleshooting Performance Issues in SAP BusinessObjects BI Re...
Analysing and Troubleshooting Performance Issues in SAP BusinessObjects BI Re...Analysing and Troubleshooting Performance Issues in SAP BusinessObjects BI Re...
Analysing and Troubleshooting Performance Issues in SAP BusinessObjects BI Re...
Ā 
Evolution of DBA in the Cloud Era
 Evolution of DBA in the Cloud Era Evolution of DBA in the Cloud Era
Evolution of DBA in the Cloud Era
Ā 
Sql server troubleshooting
Sql server troubleshootingSql server troubleshooting
Sql server troubleshooting
Ā 
Challenges of Implementing an Advanced SQL Engine on Hadoop
Challenges of Implementing an Advanced SQL Engine on HadoopChallenges of Implementing an Advanced SQL Engine on Hadoop
Challenges of Implementing an Advanced SQL Engine on Hadoop
Ā 
Db As Behaving Badly... Worst Practices For Database Administrators Rod Colledge
Db As Behaving Badly... Worst Practices For Database Administrators Rod ColledgeDb As Behaving Badly... Worst Practices For Database Administrators Rod Colledge
Db As Behaving Badly... Worst Practices For Database Administrators Rod Colledge
Ā 
GOTO 2013: Why Zalando trusts in PostgreSQL
GOTO 2013: Why Zalando trusts in PostgreSQLGOTO 2013: Why Zalando trusts in PostgreSQL
GOTO 2013: Why Zalando trusts in PostgreSQL
Ā 
Sql Server
Sql ServerSql Server
Sql Server
Ā 
Best Practices for Migrating your Data Warehouse to Amazon Redshift
Best Practices for Migrating your Data Warehouse to Amazon Redshift Best Practices for Migrating your Data Warehouse to Amazon Redshift
Best Practices for Migrating your Data Warehouse to Amazon Redshift
Ā 
PayPal merchant ecosystem using Apache Spark, Hive, Druid, and HBase
PayPal merchant ecosystem using Apache Spark, Hive, Druid, and HBase PayPal merchant ecosystem using Apache Spark, Hive, Druid, and HBase
PayPal merchant ecosystem using Apache Spark, Hive, Druid, and HBase
Ā 
NoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
NoCOUG_201411_Patel_Managing_a_Large_OLTP_DatabaseNoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
NoCOUG_201411_Patel_Managing_a_Large_OLTP_Database
Ā 
SharePoint 2013 Performance Analysis - Robi Vončina
SharePoint 2013 Performance Analysis - Robi VončinaSharePoint 2013 Performance Analysis - Robi Vončina
SharePoint 2013 Performance Analysis - Robi Vončina
Ā 
Challenges of Building a First Class SQL-on-Hadoop Engine
Challenges of Building a First Class SQL-on-Hadoop EngineChallenges of Building a First Class SQL-on-Hadoop Engine
Challenges of Building a First Class SQL-on-Hadoop Engine
Ā 

KĆ¼rzlich hochgeladen

introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdfintroduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
VishalKumarJha10
Ā 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
mohitmore19
Ā 
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
Health
Ā 
CHEAP Call Girls in Pushp Vihar (-DELHI )šŸ” 9953056974šŸ”(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )šŸ” 9953056974šŸ”(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )šŸ” 9953056974šŸ”(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )šŸ” 9953056974šŸ”(=)/CALL GIRLS SERVICE
9953056974 Low Rate Call Girls In Saket, Delhi NCR
Ā 

KĆ¼rzlich hochgeladen (20)

introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdfintroduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
Ā 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
Ā 
Optimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVOptimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTV
Ā 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial Goals
Ā 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Models
Ā 
Direct Style Effect Systems - The Print[A] Example - A Comprehension Aid
Direct Style Effect Systems -The Print[A] Example- A Comprehension AidDirect Style Effect Systems -The Print[A] Example- A Comprehension Aid
Direct Style Effect Systems - The Print[A] Example - A Comprehension Aid
Ā 
Shapes for Sharing between Graph Data SpacesĀ - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data SpacesĀ - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data SpacesĀ - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data SpacesĀ - and Epistemic Querying of RDF-...
Ā 
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
+971565801893>>SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHAB...
Ā 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Ā 
VTU technical seminar 8Th Sem on Scikit-learn
VTU technical seminar 8Th Sem on Scikit-learnVTU technical seminar 8Th Sem on Scikit-learn
VTU technical seminar 8Th Sem on Scikit-learn
Ā 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Ā 
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
Ā 
10 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 202410 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 2024
Ā 
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time ApplicationsUnveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Unveiling the Tech Salsa of LAMs with Janus in Real-Time Applications
Ā 
AI & Machine Learning Presentation Template
AI & Machine Learning Presentation TemplateAI & Machine Learning Presentation Template
AI & Machine Learning Presentation Template
Ā 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
Ā 
A Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxA Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docx
Ā 
Exploring the Best Video Editing App.pdf
Exploring the Best Video Editing App.pdfExploring the Best Video Editing App.pdf
Exploring the Best Video Editing App.pdf
Ā 
Microsoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdfMicrosoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdf
Ā 
CHEAP Call Girls in Pushp Vihar (-DELHI )šŸ” 9953056974šŸ”(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )šŸ” 9953056974šŸ”(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )šŸ” 9953056974šŸ”(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )šŸ” 9953056974šŸ”(=)/CALL GIRLS SERVICE
Ā 

Sql server performance tuning

  • 1. 1 SQL Server Performance Tuning (Tips & Tricks) By Nitin K niting123@gmail.com
  • 2. 2 Agenda ā€£ SQL Server Concepts/Structure ā€£ Performance Measuring & Troubleshooting Tools ā€£ Locking ā€£ Performance Problem : CPU ā€£ Performance Problem : Memory ā€£ Performance Problem : I/O ā€£ Performance Problem : Blocking ā€£ Query Tuning ā€£ Indexing
  • 3. 3 Who am I ā€£ 12 + years of experience on Microsoft Technologies ā€£ Data Architect on SQL Server Platform ā€£ Working as a Datawarehouse Architect from last 5 years ā€£ Playing a role of BI-DBA ā€£ Certified MCSE- SQL Server 2012 Data track ā€£ Passionate about performance tuning on Both database as well as Business Intelligence Area
  • 4. 4 Performance Tuning in SQL Server ā€£ Why is performance tuning necessary?
  • 5. 5 Why is Performance Tuning Necessary? ā€£ Allowing your system to scale ā€¢Adding more customers ā€¢Adding more features ā€£ Improve overall system performance ā€£ Save money but not wasting resources ā€£ The database is typically one of the most expensive resources in a datacenter
  • 6. 6 Performance Focus ā€£ Response Time Time interval when a request is submitted and when 1st character of response is received. ā€£ Throughput Number of transaction processed in fixed unit of time ā€£ Scalability Throughput and response time changes as we add more hardware resources
  • 7. 7 ā€£ Best Practices: Optimize for real- world workloads Monitor/review performance regularly Focus on specific issues Performance Monitoring Approaches
  • 8. 8 Overview of Performance Monitoring Tools and Methods
  • 9. 9 Performance Tools: No Extra cost ā€£ SQL Server Profiler ā€£ System Monitor (windows performance monitor) ā€£ Dynamic Management Views (DMV) : SQL 2005+ ā€£ Database tuning advisor (DTA) ā€£ Microsoft Free tools (SQLDiag, PSSDiag, SQL Nexus ā€“ www.codeplex.com) ā€£ MDW ā€“ Management Datawarehouse
  • 10. 10
  • 11. 11 Major Performance Killers ā€£ Insufficient indexing ā€£ Inaccurate statistics ā€£ Improper query design ā€£ Poorly generated execution plans ā€£ Excessive blocking and deadlocks ā€£ Non-set-based operations, usually T-SQL cursors ā€£ Inappropriate database design ā€£ Excessive fragmentation ā€£ Nonreusable execution plans ā€£ Frequent recompilation of queries ā€£ Improper use of cursors ā€£ Improper configuration of the database transaction log ā€£ Excessive use or improper configuration of tempdb
  • 12. 12 Monitoring SQL Server Using various SQL Server features to monitor database activity
  • 13. 13 ā€£ Overviews of SQL Server usage Can export to Excel or PDF ā€£ Server-Level Report Examples: Server Dashboard Memory Consumption Activity ā€“ All Block Transactions Activity ā€“ Top Sessions Performance ā€“ Batch Execution Statistics Performance ā€“ Top Queries by Average CPU Object Execution Statistics SQL Server Management Studio Reports
  • 14. 14 ā€£ Examples: Disk Usage All Transactions All Blocking Transactions Index Usage Statistics Top Transactions by Age Schema Changes History ā€£ New reports added in Service Packs Ability to use custom reports Database-Level Reports
  • 15. 15 ā€£ Windows Event Logs / Event Viewer Application and System Event Logs ā€£ SQL Server Management Studio SQL Server Logs ā€£ Can configure max. # of log files SQL Server Agent Error logs ā€£ Can configure logging levels (Errors, Warnings, Information) ā€£ Using the Log File Viewer Can Export / Load log information Can search for specific errors/messages Monitoring SQL Server Logs
  • 26. 26 Transaction Isolation Levels Level Definition Read Un-Committed Donā€™t need a lock to read a data Read Committed ā€œDefaultā€ Read only committed data otherwise wait. Request ā€˜Sā€™ lock to read. No guarantee that read is repeatable Repeatable read Guarantees that data read in a transaction will not change for it duration. Holds the ā€˜Sā€™ lock until the duration of transaction. Serializable Prevent phantoms In any of above transaction, an ā€˜Xā€™ lock acquired is held for total duration of transaction.
  • 27. 27 Locks ā€£ Lock types/modes X (exclusive), S (shared) , U(Update), IX (Intent Exclusive), ISā€¦.. imposed on DB (database), RID (row id), PAG(page), TAB(table) ā€£ Locks Compatibility (matrix) ā€£ Lock Hints Select * from employee with (Nolock) More - Rowlock, HoldLock, Tablockā€¦ā€¦. ā€£ Locks Hierarchy Database (DB) ->Table (TB) -> Page (PG)-> Row (key) Lock modes Shared(S) Exclusive (X) Shared (S) ā€“ READ LOCKS OK NO Exclusive(X) NO NO
  • 28. 28 What happens when you CRUD ā€£ Two places get affected Memory: Data looked into Memory. If doesnā€™t exist, Page brought into Memory(RAM). Operation performed on Page in memory Transaction Log: Entry in transaction log file (.ndf file) ā€£ No ā€œinstantā€ changes to mdf file (data file) ā‡’Latest Changes are in Memory and Transaction Log file ā‡’Backup transaction Log file in case of SQL failure ā€£ Checkpoint : Lazywriter write to data changes in memory to disk
  • 31. 31 Tools to Identify the Performance Metric ā€£ Performance counters ā€£ Activity Monitor ā€£ Task Manager ā€£ SQL Server DMVā€™s
  • 32. 32 Memory Performance Analysis ā€£ The basics of the Performance Monitor tool ā€£ Some of the dynamic management objects used to observe system behavior ā€£ How and why hardware resources can be bottlenecks ā€£ Methods of observing and measuring memory use within SQL Server and Windows ā€£ Possible resolutions to memory bottlenecks
  • 35. 35 DMOā€™s and DMVā€™s ā€£ SELECT dopc.cntr_value, dopc.cntr_type FROM sys.dm_os_performance_counters AS dopc WHERE dopc.object_name = 'SQLServer:General Statistics' AND dopc.counter_name = 'Logins/sec';
  • 36. 36 SQL Server Memory Management ā€£ Max Memory ā€£ Min Memory
  • 37. 37 Additional Memory Monitoring Tools ā€£ DBCC MEMORYSTATUS ā€£ Sys.dm_os_memory_brokers ā€£ Sys.dm_os_memory_clerks ā€£ Sys.dm_os_ring_buffers
  • 38. 38 Resolution for Common Memory Problems ā€£ Optimizing application workload ā€£ Allocating more memory to SQL Server ā€£ Moving in-memory tables back to standard storage ā€£ Increasing system memory ā€£ Changing from a 32-bit to a 64-bit processor ā€£ Enabling 3GB of process space ā€£ Compressing data
  • 39. 39 Disk Performance Analysis ā€£ Using system counters to gather disk performance metrics ā€£ Using other mechanisms of gathering disk behavior ā€£ Resolving disk performance issues
  • 41. 41 DMVā€™s and DMOā€™s ā€£ Sys.dm_io_virtual_file_stats ā€£ Sys.dm_os_wait_stats ā€£ Example SELECT * FROM sys.dm_os_wait_stats AS dows WHERE wait_type LIKE 'PAGEIOLATCH%'
  • 42. 42 Resolution for common Disk bottlenecks ā€£ Optimizing application workload ā€£ Using a faster I/O path ā€£ Using a RAID array ā€£ Using a SAN system ā€£ Using Solid State Drives
  • 43. 43 Resolution for common Disk bottlenecks ā€£ Aligning disks properly ā€£ Using a battery-backed controller cache ā€£ Adding system memory ā€£ Creating multiple files and filegroups ā€£ Moving the log files to a separate physical drive ā€£ Using partitioned tables
  • 44. 44 CPU Performance Analysis ā€£ How to gather metrics on the processor ā€£ Additional metrics available through T-SQL queries ā€£ Methods for resolving processor bottlenecks
  • 47. 47 Resolution for common bottlenecks ā€£ Optimizing application workload ā€£ Eliminating or reducing excessive compiles/recompiles ā€£ Using more or faster processors ā€£ Not running unnecessary software
  • 49. 49 Costly Queries ‣ Identify costly queries using - SQL Server DMVs (sys.dm_exec_query_stats) - Extended Events
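A common shape for the DMV approach (a sketch; swap the ORDER BY column for total_logical_reads or total_elapsed_time depending on which resource you are chasing):
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time  / qs.execution_count AS avg_cpu_microseconds,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_microseconds DESC;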
  • 50. 50 Query Tuning ‣ Detection: Profiler ‣ Look for queries/stored procedures with high reads, CPU, and duration – these are candidates for tuning ‣ Look for stored procedures that are recompiling (it is a trace event) DMVs ‣ Find queries with missing indexes ‣ Find indexes that are fragmented ‣ Find tempdb bottlenecks
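One common way to surface missing-index suggestions from the DMVs (a sketch; treat the output as hints to evaluate, not indexes to create blindly):
SELECT TOP (10)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;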
  • 51. 51 Query Tuning cont. ‣ Troubleshoot: query execution plan Operator types ‣ Seek (best and preferred) ‣ Scan (not preferred) ‣ Bookmark lookup (better than a scan; occurs mostly with nonclustered indexes) Join types ‣ Nested loops ‣ Merge ‣ Hash (avoid where possible) Graphical Execution Plan Icons: http://msdn2.microsoft.com/en-us/library/ms175913.aspx http://www.sql-server-performance.com/articles/per/select_indexes_p1.aspx
  • 52. 52
  • 53. 53 Why do we need an optimizer? The Query Optimizer • T-SQL is a "what", not a "how", language. • We write "logical" requests. • The SQL optimizer engine converts logical requests into physical plans.
  • 54. 54 The Query Optimizer – what is the goal of the optimizer? The job of the SQL optimizer is to find "the best plan possible" – not quite: in practice it looks for a plan that is good enough, quickly, rather than exhaustively searching for the single best plan.
  • 55. 55 Query optimization explained simply 1. Query submitted 2. Magic happens 3. Shedload of data returned
  • 56. 56 Query Optimization (in a bit more detail) – optimizer steps: Parse -> Bind -> Optimize -> Execute
  • 57. 57 Parse Builds a tree structure based upon the logical operators in the query. For example: SELECT SSOD.[SalesOrderID], PP.[Name], PP.[Weight], SSOD.[UnitPrice] FROM [Sales].[SalesOrderDetail] SSOD INNER JOIN [Production].[Product] PP ON SSOD.ProductID = PP.ProductID WHERE PP.Weight > 100 (diagram: logical operation nodes Project -> Filter -> Join over the Product and Sales Order Detail inputs)
  • 58. 58 Bind ā€¢ Series of validation steps ā€¢ Schema validation ā€¢ Table validation ā€¢ Attribute validation ā€¢ Permission validation SELECT SSOD.[SalesOrderID], PP.[Name], PP.[Weight], SSOD.[UnitPrice] FROM [Sales].[SalesOrderDetail] SSOD INNER JOIN [Production].[Product] PP ON SSOD.ProductID = PP.ProductID WHERE PP.Weight > 100
  • 59. 59 Optimize Works through many rules and heuristics. These include: • Commutativity • Substitution rules • Exploration rules • Implementation rules
  • 60. 60 SELECT prod_category, AVG(amount_sold) FROM o_sales s, o_products p WHERE p.prod_id = s.prod_id GROUP BY prod_category;
  • 61. 61 SQL: set-based expression / serial execution • SQL syntax is based on "set-based" expressions (no processing rules) • Query execution is serial – SQL Server "compiles" a query into a series of sequential steps which are executed one after the other – individual steps also have internal sequential processing (e.g. table scans are processed one page after another, and row by row within a page) • (Slide example: a query returning CustID, OrderID and OrderDate for orders after 1st Jan 2005 – no processing rules are included in the SQL statement, just the "set" of data to be returned) • Execution plans display these steps
  • 62. 62 Intro to execution plans – a simple example • The execution plan shows how SQL Server compiled and executes this query – Ctrl+L in SSMS shows the "estimated" plan (without running the query) – Ctrl+M includes the "actual" plan, displayed after the query has completed • Read execution plans from top right to bottom left – in this case, the plan starts with a clustered index scan of [SalesOrderHeader] – then, for every row returned, it performs an index seek into [Customers] • Plan node cost is shown as a % of the total plan "cost" • Table/index access methods are displayed (Scan, Seek, etc.) • Join physical operators are displayed (Loops, Merge, Hash) • The thickness of the arrows between nodes denotes the estimated/actual number of rows carried between nodes in the plan
  • 63. 63 Execution plan node properties • Mousing over an execution plan node reveals extra properties: • Search predicate – a WHERE filter in this case, but it can also be a join filter • Number of rows returned – shown in the actual execution plan • Name of the schema object accessed to physically process the query – typically an index, but possibly a heap structure • Ordered / Unordered – displays whether the scan operation follows the page "chain" linked list (next/previous page # in the page header) or follows the Index Allocation Map (IAM) pages
  • 64. 64 "Heap" Table Storage • Query execution example: SELECT FName, LName, PhNo FROM Customers WHERE LName = 'Smith' • No b-tree with heaps, so no lookup method is available unless other indexes are present; the only option is to scan the heap • No physical ordering of table rows (despite the slide's display) • The scan cannot stop just because a row is located; because the data is not ordered, the scan must continue through to the end of the table (heap) • This table storage structure is used when there is no clustered index on the table – rarely used, as clustered indexes are added to PKs by default – Oracle uses heap storage by default (even with PKs) • No physical ordering of rows – stored in order of insertion – new pages are added to the end of the "heap" as needed • No B-tree index nodes (no "index")
  • 69. 70 Blocking and Non-blocking Operators ā€¢ Operators / Iterators can be put in two categories: 1. Blocking 2. Non-blocking ā€¢ Having a blocking operator in your plan means other operators further down the line are sitting idle. This will reduce the overall performance of your query ā€¢ Some examplesā€¦
  • 70. 71 Blocking example Blocking and Non-blocking operators • An example using the sort operator: (diagram: rows 1–5 flow into a Sort Desc operator; the sort cannot emit any output until it has consumed all of its input rows)
  • 71. 72 Hints can be placed in SQL to force the optimizer to follow our desired retrieval path rather than the path calculated by the optimizer. (Oracle-style example) SELECT /*+ RULE */ ... FROM emp, dept WHERE ... – the hint instructs the optimizer to use the rule-based optimizer rather than the cost-based optimizer. DELETE /*+ RULE */ ... UPDATE /*+ RULE */ ...
  • 72. 73 Index Tuning ā€£ What an index is ā€£ The benefits and overhead of an index ā€£ General recommendations for index design ā€£ Clustered and nonclustered index behavior and comparisons ā€£ Recommendations for clustered and nonclustered indexes
  • 73. 74 What Is an Index? ā€£ One of the best ways to reduce disk I/O is to use an index ā€£ Allows SQL Server to find data in a table without scanning the entire table ā€£ Example SELECT TOP 10 p.ProductID, p.[Name], p.StandardCost, p. [Weight], ROW_NUMBER() OVER (ORDER BY p.Name DESC) AS RowNumber FROM Production.Product p ORDER BY p.Name DESC;
  • 74. 75 Types of Index ā€£ Clustered Index ā€“Primary Key Default (but not necessary) ā€“Data is stored at the leaf level ā€“Data is ordered by the key ā€£ Non-clustered Index ā€“Uses cluster key or RID of a heap ā€“INCLUDE stored at leaf ā€£ And the rest ā€“ outside the scope of this session
  • 75. 76 Index Rules ‣ Clustered index: choose wisely – only one per table is possible; the primary key is clustered by default, so evaluate the default behaviour ‣ Nonclustered index: more than one is possible; foreign keys are always good candidates for a nonclustered index (because of joins) ‣ Evaluate 'included columns' when indexing; every nonclustered index contains the clustered index keys ‣ Choose the index fill factor wisely ‣ Find tables with a large row count but no indexes – they may need an index
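A minimal sketch of a nonclustered index with included columns, assuming the AdventureWorks sample database; the index name is hypothetical, and AdventureWorks may already ship a similar index on this column:
CREATE NONCLUSTERED INDEX IX_SalesOrderDetail_ProductID_Incl   -- hypothetical name
ON Sales.SalesOrderDetail (ProductID)                          -- key column used for seeks (e.g. joins on ProductID)
INCLUDE (OrderQty, UnitPrice);                                 -- covering columns stored only at the leaf level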
  • 76. 77 Index design recommendations Examine the WHERE clause and JOIN criteria columns. Use narrow indexes. Examine column uniqueness. Examine the column data type. Consider column order. Consider the type of index (clustered versus nonclustered).
  • 77. 78 Lookups & Joins ā€£ Key ā€£ RID
  • 78. 79 Joins optimization ā€£ Hash joins ā€£ Merge joins ā€£ Nested loop joins
  • 79. 80 Join Operators (intra-table operators) • Nested Loop Join – the original and only join operator until SQL Server 7.0 – a "for each row..." type operator – takes the output from one plan node and executes another operation "for each" output row from that node • Merge Join – scans both sides of the join in parallel – ideal for large range scans where the joined columns are indexed • If the joined columns aren't indexed, it requires an expensive sort operation prior to the merge • Hash Join – "hashes" the values of the join column(s) from one side of the join • Usually the smaller side – "probes" with the other side • Usually the larger side – a hash is conceptually similar to building an index for every execution of a query • Hash buckets are not shared between executions – worst-case join operator – useful for large-scale range scans which occur infrequently
  • 80. 81 Hash Join ‣ A hash join uses the two join inputs as a build input and a probe input. ‣ The build input is shown as the top input in the execution plan, and the probe input is shown as the bottom input. ‣ Usually the smaller of the two inputs serves as the build input because it is stored in memory as a hash table, so the optimizer attempts to minimize the memory used. ‣ The hash join performs its operation in two phases: the build phase and the probe phase.
  • 81. 82 Hash Join ā€“ Example ā€£ SELECT p.* FROM Production.Product p JOIN Production.ProductCategory pc ON p.ProductSubcategoryID = pc.ProductCategoryID;
  • 82. 83 Merge Join ā€£ A merge join requires both join inputs to be sorted on the merge columns, as defined by the join criterion ā€£ Since each join input is sorted, the merge join gets a row from each input and compares them for equality ā€£ A matching row is produced if they are equal. This process is repeated until all rows are processed
  • 83. 84 Merge Join - Example
  • 84. 85 Nested Loop Join ā€£ A nested loop join uses one join input as the outer input table and the other as the inner input table ā€£ The outer input table is shown as the top input in the execution plan, and the inner input table is shown as the bottom input table ā€£ The inner loop, executed for each outer row, searches for matching rows in the inner input table ā€£ Nested loop joins are highly effective if the outer input is quite small and the inner input is larger but indexed
  • 85. 86 Nested Loop Join - Example
  • 87. 88 Statistics, Data Distribution, and Cardinality ā€£ The role of statistics in query optimization ā€£ The importance of statistics on columns with indexes ā€£ The importance of statistics on non-indexed columns used in join and filter criteria ā€£ Analysis of single-column and multicolumn statistics, including the computation of selectivity of a column for indexing ā€£ Statistics maintenance ā€£ Effective evaluation of statistics used in a query execution
  • 88. 89 Statistics: Query Optimizer The query optimizer in SQL Server is cost-based. The cost includes: 1. The cost of using different resources (CPU and I/O) 2. Total execution time It determines the cost by using: ‣ Cardinality: the total number of rows processed at each level of a query plan, estimated with the help of histograms, predicates, and constraints ‣ The cost model of the algorithm: to perform various operations like sorting, searching, comparisons, etc.
  • 89. 90 Statistics Analysis ‣ The query optimizer uses statistics to create query plans that improve query performance ‣ Accurate statistics lead to high-quality query plans ‣ The query optimizer determines when statistics might be out of date by counting the number of data modifications since the last statistics update and comparing that number to a threshold
  • 90. 91 Auto create statistics ‣ The default setting of auto create statistics is ON. ‣ Statistics are created when: ‣ A clustered or nonclustered index is created ‣ A query references a column in a predicate (e.g. WHERE or JOIN) that has no statistics yet ‣ Auto create and auto update apply strictly to single-column statistics.
  • 91. 92 Why query 2 performs better ‣ If we perform any of the following operations on a column of a table in a query predicate: 1. Using any system function or user-defined function 2. A scalar operation like addition, multiplication, etc. 3. Type casting ‣ then the SQL Server query optimizer is not able to estimate the correct cardinality using statistics.
  • 92. 93 To improve cardinality estimates ‣ If possible, simplify expressions that contain constants. ‣ If possible, don't perform any operation on a table column in the WHERE clause, ON clause, or HAVING clause. ‣ Don't use local variables in the WHERE clause, ON clause, or HAVING clause. ‣ If there is a cross-relationship among columns, or a complex expression on a column in the query predicates, it is better to create a computed column and then create a nonclustered index on it.
  • 93. 94 Statistics Tools and commands ‣ CREATE STATISTICS ‣ sp_updatestats ‣ sp_autostats ‣ sp_helpstats ‣ DBCC SHOW_STATISTICS
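A short sketch of these commands in use, assuming AdventureWorks; the index name is the one AdventureWorks typically ships on Sales.SalesOrderDetail, so treat it as an assumption:
-- Inspect the header, density vector, and histogram for an index's statistics
DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', IX_SalesOrderDetail_ProductID);
-- Update statistics for a single table with a full scan
UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN;
-- Update statistics for all tables in the current database
EXEC sp_updatestats;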
  • 94. 95 Statistics Maintenance ā€£ Auto Create Statistics(DB level) ā€£ Auto Update Statistics ā€£ Manual Maintenance
  • 95. 96 Index fragmentation ā€£ The causes of index fragmentation, including an analysis of page splits caused by INSERT and UPDATE statements ā€£ The overhead costs associated with fragmentation ā€£ How to analyze the amount of fragmentation ā€£ Techniques used to resolve fragmentation ā€£ The significance of the fill factor in helping to control fragmentation
  • 96. 97 Causes of Fragmentation ‣ Fragmentation occurs when data is modified in a table. ‣ Page splits cause database fragmentation ‣ A new leaf page is added that contains part of the original page and maintains the logical order of the rows in the index key ‣ Although the new leaf page maintains the logical order of the data rows from the original page, the new page usually won't be physically adjacent to the original page on disk ‣ The logical key order of the index no longer matches the physical order within the file
  • 97. 98 Identify Fragmentation & Resolution ā€£ Checking the fragmentation using sys.dm_db_index_physical_stats ā€£ Only rebuild or reorganize indexes that are fragmented ā€£ Rebuild heavily fragmented indexes ā€£ Reorganize moderately fragmented indexes
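A typical sketch of the fragmentation check; the 10% and 1000-page filters are common rules of thumb, not hard requirements:
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10   -- ignore lightly fragmented indexes
  AND ips.page_count > 1000                   -- small indexes rarely benefit from maintenance
ORDER BY ips.avg_fragmentation_in_percent DESC;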
  • 98. 99 Reorganize Index ‣ If fragmentation is less than 10%, no action is required ‣ Roughly 10–30% fragmentation: reorganize the indexes ‣ Use ALTER INDEX ... REORGANIZE USE AdventureWorks; ALTER INDEX PK_ProductPhoto_ProductPhotoID ON Production.ProductPhoto REORGANIZE; -- reorganizes the PK_ProductPhoto_ProductPhotoID index on the Production.ProductPhoto table
  • 99. 100 Rebuild Index ‣ More than 30% fragmentation requires you to rebuild the index ‣ There are two methods: CREATE INDEX ... WITH (DROP_EXISTING = ON) and ALTER INDEX ... REBUILD
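A sketch of both methods against AdventureWorks; the index names are the ones AdventureWorks typically ships, so treat them as assumptions:
-- Method 1: rebuild in place
ALTER INDEX PK_ProductPhoto_ProductPhotoID ON Production.ProductPhoto REBUILD;
-- Method 2: re-create an existing nonclustered index over itself
CREATE NONCLUSTERED INDEX IX_SalesOrderDetail_ProductID
ON Sales.SalesOrderDetail (ProductID)
WITH (DROP_EXISTING = ON);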
  • 100. 101 Significance of the Fill Factor ‣ SQL Server lets you control the amount of free space within the leaf pages of an index by using the fill factor ‣ If a table receives many INSERTs, or UPDATEs to the index key columns, you can pre-allocate free space in the index leaf pages using the fill factor to minimize page splits
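For example, a rebuild that leaves 10% free space on each leaf page (a sketch; 90 is an illustrative value, and the right fill factor depends on the insert/update pattern):
ALTER INDEX IX_SalesOrderDetail_ProductID      -- index name as shipped in AdventureWorks (assumption)
ON Sales.SalesOrderDetail
REBUILD WITH (FILLFACTOR = 90);                -- leaf pages filled to 90%, leaving 10% free for new rows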
  • 101. 102 Query processing steps: Query parsing -> Binding -> Query optimization -> Execution plan generation, caching, and hash plan generation -> Query execution
  • 104. 105 Optimization Techniques ā€£ Syntax-based optimization of the query ā€£ Trivial plan match to avoid in-depth query optimization for simple queries ā€£ Index and join strategies based on current distribution statistics ā€£ Query optimization in stepped phases to control the cost of optimization ā€£ Execution plan caching to avoid the regeneration of query plans
  • 106. 107 Query ā€“ Execution Plan Cache ā€£ Saves the plans created in a memory space on the server called the plan cache. ā€£ SELECT * FROM sys.dm_exec_cached_plans;
  • 108. 109 Plan Reusability of an Ad Hoc Workload ā€£ Optimize for an Ad Hoc Workload ā€£ Simple Parameterization(default) ā€£ Forced Parameterization
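Sketches of how each option is typically enabled; the database name in the last statement is an assumption:
-- Optimize for ad hoc workloads (instance level)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
-- Forced parameterization (database level)
ALTER DATABASE AdventureWorks SET PARAMETERIZATION FORCED;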
  • 109. 110 Plan Reusability of a Prepared Workload ā€£ Stored Procedures ā€£ sp_executesql ā€£ Prepare/Execute Model
  • 110. 111 Stored Procedure ā€“ why? ā€£ Standard technique for improving the effectiveness of plan caching ā€£ When the stored procedure is compiled the generated execution plan is cached for future reuse. This plan is used for future execution ā€£ Performance Benefits - Reduced network traffic - Business logic is close to the data
  • 111. 112 sp_executesql ā€£ sp_executesql is a system stored procedure that provides a mechanism to submit one or more queries as a prepared workload ā€£ It allows the variable parts of the query to be explicitly parameterized, and it can therefore provide execution plan reusability as effective as a stored procedure
  • 112. 113 Continued ā€£ DECLARE @query NVARCHAR(MAX), @paramlist NVARCHAR(MAX); SET @query = N'SELECT soh.SalesOrderNumber ,soh.OrderDate ,sod.OrderQty ,sod.LineTotal FROM Sales.SalesOrderHeader AS soh JOIN Sales.SalesOrderDetail AS sod ON soh.SalesOrderID = sod.SalesOrderID WHERE soh.CustomerID = @CustomerID AND sod.ProductID = @ProductID'; SET @paramlist = N'@CustomerID INT, @ProductID INT'; EXEC sp_executesql @query,@paramlist,@CustomerID = 29690,@ProductID = 711;
  • 113. 114 Prepare/Execute Model ā€£ ODBC and OLEDB provide a prepare/execute model to submit queries as a prepared workload ā€£ Like sp_executesql, this model allows the variable parts of the queries to be parameterized explicitly ā€£ The prepare phase allows SQL Server to generate the execution plan for the query and return a handle of the execution plan to the application ā€£ This execution plan handle is used by the execute phase to execute the query with different parameter values
  • 114. 115 Query stats and Query Hash ā€£ With SQL Server 2008, new functionality around execution plans and the cache was introduced called the query plan hash and the query hash ā€£ You can retrieve the query plan hash and the query hash from sys.dm_exec_query_stats
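A sketch that uses the query hash to spot ad hoc statements that differ only in their literal values and therefore bloat the plan cache:
SELECT qs.query_hash,
       COUNT(*)                AS cached_entries,     -- distinct cache entries sharing the same query shape
       SUM(qs.execution_count) AS total_executions
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash
HAVING COUNT(*) > 1
ORDER BY cached_entries DESC;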
  • 115. 116 Recommendations ā€£ Explicitly parameterize variable parts of a query. ā€£ Use stored procedures to implement business functionality. ā€£ Use sp_executesql to avoid stored procedure maintenance. ā€£ Use the prepare/execute model to avoid resending a query string. ā€£ Avoid ad hoc queries. ā€£ Use sp_executesql over EXECUTE for dynamic queries. ā€£ Parameterize variable parts of queries with care. ā€£ Avoid modifying environment settings between connections. ā€£ Avoid the implicit resolution of objects in queries
  • 116. 117 Top 10 for Building Efficient Queries 1. Favor set-based logic over procedural or cursor logic • The most important factor to consider when tuning queries is how to properly express logic in a set-based manner. • Cursors or other procedural constructs limit the query optimizer's ability to generate flexible query plans. • Cursors can therefore reduce the possibility of performance improvements in many situations
  • 117. 118 Top 10 for Building Efficient Queries 2. Test query variations for performance ā€¢The query optimizer can often produce widely different plans for logically equivalent queries. ā€¢Test different techniques, such as joins or subqueries, to find out which perform better in various situations.
  • 118. 119 Top 10 for Building Efficient Queries 3. Avoid query hints. • You must work with the SQL Server query optimizer, rather than against it, to create efficient queries. • Query hints tell the query optimizer how to behave and therefore override the optimizer's ability to do its job properly. • If you eliminate the optimizer's choices, you might limit yourself to a query plan that is less than ideal. • Use query hints only when you are absolutely certain that the query optimizer is incorrect.
  • 119. 120 Top 10 for Building Efficient Queries ā€£ 4. Use correlated subqueries to improve performance. ā€¢ Since the query optimizer is able to integrate subqueries into the main query flow in a variety of ways, subqueries might help in various query tuning situations. ā€¢ Subqueries can be especially useful in situations in which you create a join to a table only to verify the existence of correlated rows. For better performance, replace these kinds of joins with correlated subqueries that make use of the EXISTS operator
  • 120. 121 Top 10 for Building Efficient Queries ā€£ 4. Continued .
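A minimal sketch of the pattern from the previous slide, assuming AdventureWorks: a join used only to verify existence is replaced by a correlated EXISTS subquery:
-- Customers that have placed at least one order
SELECT c.CustomerID, c.AccountNumber
FROM Sales.Customer AS c
WHERE EXISTS (SELECT 1
              FROM Sales.SalesOrderHeader AS soh
              WHERE soh.CustomerID = c.CustomerID);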
  • 121. 122 Top 10 for Building Efficient Queries 5. Avoid using a scalar user-defined function in the WHERE clause. ā€£Scalar user-defined functions, unlike scalar subqueries, are not optimized into the main query plan. ā€£Instead, you must call them row-by-row by using a hidden cursor. ā€£This is especially troublesome in the WHERE clause because the function is called for every input row. ā€£Using a scalar function in the SELECT list is much less problematic because the rows have already been filtered in the WHERE clause
  • 122. 123 Top 10 for Building Efficient Queries ā€£ 6. Use table-valued user-defined functions as derived tables. ā€£ In contrast to scalar user-defined functions, table-valued functions are often helpful from a performance point of view when you use them as derived tables. ā€£ The query processor evaluates a derived table only once per query. ā€£ If you embed the logic in a table-valued user-defined function, you can encapsulate and reuse it for other queries
  • 123. 124 Top 10 for Building Efficient Queries ā€£ 6. Continued .
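A sketch of the idea, assuming AdventureWorks; the function name is hypothetical:
-- Encapsulate reusable logic in an inline table-valued function
CREATE FUNCTION dbo.ufn_CustomerOrderTotal (@CustomerID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT soh.CustomerID, SUM(soh.TotalDue) AS TotalDue
    FROM Sales.SalesOrderHeader AS soh
    WHERE soh.CustomerID = @CustomerID
    GROUP BY soh.CustomerID
);
GO
-- Reuse it as a derived-table-style source
SELECT c.CustomerID, t.TotalDue
FROM Sales.Customer AS c
CROSS APPLY dbo.ufn_CustomerOrderTotal(c.CustomerID) AS t;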
  • 124. 125 Top 10 for Building Efficient Queries ‣ 7. Avoid unnecessary GROUP BY columns; use a subquery instead. ‣ The process of grouping rows becomes more expensive as you add more columns to the GROUP BY list. ‣ If your query has few column aggregations but many non-aggregated grouped columns, you might be able to refactor it by using a correlated scalar subquery. ‣ This results in less grouping work in the query and therefore possibly better overall query performance.
  • 125. 126 Top 10 for Building Efficient Queries ā€£ 7 Continued .
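A sketch of the refactoring, assuming AdventureWorks: instead of grouping by every SalesOrderHeader column just to aggregate the detail rows, a correlated scalar subquery computes the aggregate per row:
SELECT soh.SalesOrderID,
       soh.OrderDate,
       soh.CustomerID,
       (SELECT SUM(sod.LineTotal)
        FROM Sales.SalesOrderDetail AS sod
        WHERE sod.SalesOrderID = soh.SalesOrderID) AS OrderTotal
FROM Sales.SalesOrderHeader AS soh;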
  • 126. 127 Top 10 for Building Efficient Queries ‣ 8. Use CASE expressions to include variable logic in a query ‣ The CASE expression is one of the most powerful logic tools available to T-SQL programmers. ‣ Using CASE, you can dynamically change column output on a row-by-row basis. ‣ This enables your query to return only the data that is absolutely necessary and therefore reduces the I/O operations and network overhead required to assemble and send large result sets to clients.
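A small sketch against AdventureWorks; the price bands are illustrative:
SELECT p.ProductID,
       p.[Name],
       CASE
           WHEN p.ListPrice = 0   THEN 'Not for resale'
           WHEN p.ListPrice < 100 THEN 'Budget'
           ELSE                        'Premium'
       END AS PriceCategory
FROM Production.Product AS p;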
  • 127. 128 Top 10 for Building Efficient Queries ‣ 9. Divide joins into temporary tables when you query very large tables. ‣ The query optimizer's main strategy is to find query plans that satisfy queries by using single operations. ‣ Although this strategy works for most cases, it can fail for larger sets of data because the huge joins require so much I/O overhead. ‣ In some cases, a better option is to reduce the working set by using temporary tables to materialize key parts of the query. You can then join the temporary tables to produce a final result.
  • 128. 129 Stored Procedure Best Practices ‣ Avoid using "sp_" as a name prefix ‣ Avoid stored procedures that accept parameters for table names ‣ Use the SET NOCOUNT ON option in stored procedures ‣ Limit the use of temporary tables and table variables in stored procedures ‣ If a stored procedure performs multiple data modification operations, make sure to enlist them in a transaction ‣ When working with dynamic T-SQL, use sp_executesql instead of the EXEC statement
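A minimal sketch that illustrates several of these practices at once; the procedure name is hypothetical and the table assumes AdventureWorks (production code would also add error handling):
CREATE PROCEDURE dbo.usp_UpdateProductPrice   -- no sp_ prefix (hypothetical name)
    @ProductID    INT,
    @NewListPrice MONEY
AS
BEGIN
    SET NOCOUNT ON;                           -- suppress 'rows affected' messages
    BEGIN TRANSACTION;                        -- enlist the modification(s) in a transaction
        UPDATE Production.Product
        SET ListPrice = @NewListPrice
        WHERE ProductID = @ProductID;
        -- additional data modifications would share this transaction
    COMMIT TRANSACTION;
END;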
  • 129. 130 Views Best Practices ā€£ Use views to abstract complex data structures ā€£ ā€¢Use views to encapsulate aggregate queries ā€£ ā€¢Use views to provide more user-friendly column names ā€£ ā€¢Think of reusability when designing views ā€£ ā€¢Avoid using the ORDER BY clause in views that contain a TOP 100 PERCENT clause. ā€£ ā€¢Utilize indexes on views that include aggregate data
  • 130. 131 Top 10 for Building Efficient Queries ā€£ 10. Refactoring Cursors into Queries.. ā€£ Rebuild logic as multiple queries ā€£ ā€¢Rebuild logic as a user-defined function ā€£ ā€¢Rebuild logic as a complex query with a case expression

Editor's notes

  1. Query performance tuning remains an important part of today's database applications. Yes, hardware performance is constantly improving. Upgrades to SQL Server, especially to the optimizer, which helps determine how a query is executed, and the query engine, which executes the query, lead to better performance all on their own. At the same time, SQL Server instances are being put on virtual machines, either locally or in hosted environments, where the hardware behavior is not guaranteed. Databases are going to platform as a service systems such as Amazon RDS and Windows Azure SQL Database. You still have to deal with fundamental database design and code generation. In short, query performance tuning remains a vital mechanism for improving the performance of your database management systems. The beauty of query performance tuning is that, in many cases, a small change to an index or a SQL query can result in a far more efficient application at a very low cost. In those cases, the increase in performance can be orders of magnitude better than that offered by an incrementally faster CPU or a slightly better optimizer.
  2. How will you establish the baseline? How will you identify the bottlenecks? Why one change at a time? How will you measure the performance?
  3. DBCC Loginfo
  4. --demo
  5. Locking types/modes: ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/108297fa-35fc-4cbe-a1ac-369aabd8cf44.htm Locking & compatibility: ms-help://MS.SQLCC.v9/MS.SQLMobile.v3.en/SSMMain3/html/fb7c1c79-e392-444f-873f-888ad0556631.htm Locking Hints: ms-help://MS.SQLCC.v9/MS.SQLMobile.v3.en/SSMMain3/html/7657c311-2121-4c45-8a36-4a6579384f24.htm Show the sp_lock demo, and show that a database cannot be dropped while a different connection holds a shared lock on it
  6. Select * from sys.configurations
  7. Minimum(MB), also known as min server memory, works as a floor value for the memory pool. Once the memory pool reaches the same size as the floor value, SQL Server can continue committing pages in the memory pool, but it can't be shrunk to less than the floor value. Note that SQL Server does not start with the min server memory configuration value but commits memory dynamically, as needed. Maximum(MB), also known as max server memory, serves as a ceiling value to limit the maximum growth of the memory pool. These configuration settings take effect immediately and do not require a restart. In SQL Server 2014 the lowest maximum memory is 64MB for a 32-bit system and 128MB for a 64-bit system.
  8. Sys.dm_os_memory_brokers While most of the memory within SQL Server is allocated to the buffer cache, there are a number of processes within SQL Server that also can, and will, consume memory. These processes expose their memory allocations through this DMO. You can use this to see what processes might be taking resources away from the buffer cache in the event you have other indications of a memory bottleneck. Sys.dm_os_memory_clerks A memory clerk is the process that allocates memory within SQL Server. Looking at what these processes are up to allows you to understand whether there are internal memory allocation issues going on within SQL Server that might rob the procedure cache of needed memory. If the Performance Monitor counter for Private Bytes is high, you can determine which parts of the system are being consumed through the DMV. If you have a database using in-memory OLTP storage, you can use sys.dm_db_xtp_table_memory_stats to look at the individual database objects. But if you want to look at the allocations of these objects across the entire instance, you'll need to use sys.dm_os_memory_clerks. Sys.dm_os_ring_buffers This DMV is not documented within Books Online, so it is subject to change or removal. It changed between SQL Server 2008R2 and SQL Server 2012. The queries I normally run against it still seem to work for SQL Server 2014, but you can't count on that. This DMV outputs as XML. You can usually read the output by eye, but you may need to implement XQuery to get really sophisticated reads from the ring buffers.
  9. Wait statistics are a good way to understand whether there are bottlenecks on the system. You can't simply say something greater than x is a bad number, though. Faulty Processor
  10. Demo : use profiler to identify slow running queries. Look for CPU, Reads column and Duration column DMV demo ā€“ missing Index, DMV
  11. Demo :
  12. Heuristics quote from Wikipedia - The objective of a heuristic is to produce quickly enough a solution that is good enough for solving the problem at hand. This solution may not be the best of all the actual solutions to this problem, or it may simply approximate the exact solution. But it is still valuable because finding it does not require a prohibitively long time. Commutativity – In mathematics, commutativity is where sums can be rewritten and still provide the same answer: 10+100=110 and 100+10=110. The optimizer knows that if you are asking for an inner join (that is, a join where the same data must be found on either side) then it would be more efficient to join the large table to the smaller one. This would result in less work for the storage engine. Let's take the figures 10 and 100 again. Table A will be 100 records, table B will be 10. Joining A to B would require 100 searches into table B; joining B to A would require 10 searches into table A – quite a saving, and still the same logical outcome. In other words, this part helps to work out the most efficient join order for your query. It's been some time since we have had to worry about the order of our joins; the optimizer now figures that out for us. Substitution rules & Exploration rules - Substitution and exploration rules use heuristics or mathematical rules to generate new tree shapes; this may provide the opportunity for more optimization rules to apply. Implementation rules - Implementation rules convert the logical trees into physical trees; at this point they can be optimized further and become the eventual execution plan.
  13. Parameters should be considered for differentiating the good and bad execution plan
  14. Index Unique Scan : Only one row will be returned by unique index Index Range Scan : =,<,>,LIKE (NON UNIQUE INDEX) Order BY clause has all the columns present in the index and order same as index
  15. There is an index on the ProductID column; will the index be used here?
  16. Demo :
  17. In the most commonly used form of hash join, the in-memory hash join, the entire build input is scanned or computed, and then a hash table is built in memory. Each row from the outer input is inserted into a hash bucket depending on the hash value computed for the hash key (the set of columns in the equality predicate). A hash is just a mathematical construct run against the values in question and used for comparison purposes.
  18. Smaller input as build input because it's going to be stored on the system, so the optimizer attempts to minimize the memory used.
  19. In situations where the data is ordered by an index, a merge join can be one of the fastest join operations, but if the data is not ordered and the optimizer still chooses to perform a merge join, then the data has to be ordered by an extra operation, a sort. This can make the merge join slower and more costly in terms of memory and I/O resources.
  20. A loop join can be fast because it uses memory to take a small set of data and compare it quickly to a second set of data. A merge join similarly uses memory and a bit of tempdb to do its ordered comparisons. A hash join uses memory and tempdb to build out the hash tables for the join. Although a loop join can be faster at small data sets, it can slow down as the data sets get larger or there aren't indexes to support the retrieval of the data. That's why SQL Server has different join mechanisms.
  21. SQL Server uses a cost-based optimization technique to determine the processing strategy of a query. The optimizer considers both the metadata of the database objects, such as unique constraints or index size, and the current distribution statistics of the columns referred to in the query when deciding which index and join strategies should be used.
  22. Parallel Plan Optimization The optimizer considers various factors while evaluating the cost of processing a query using a parallel plan. Some of these factors are as follows: Number of CPUs available to SQL Server SQL Server edition Available memory Cost threshold for parallelism Type of query being executed Number of rows to be processed in a given stream Number of active concurrent connections