2. What's this talk based on?
● Research done by hackers like Robert Haas in the past, some mailing list reading, discussions with smart people, etc.
3. Who's this talk for?
● Users of Postgres who want to know how the query planner's heuristics can affect their queries.
4. What are we going to discuss?
● A small discussion of the Postgres query planner.
● Areas where problems arise in the query planner.
● Some cases where the planner has some nifty points. Some of these cases may be well known, but they are worth discussing here if only to see how the planner processes them internally.
5. A little about the PostgreSQL Query Planner
● Quoting Bruce Momjian: 'The optimizer is the "brain" of the database, interpreting SQL queries and determining the fastest method of execution.'
6. Where can problems arise in the query planner?
● Row count estimates
● DBA ideas
● The planner's blocking points (what the planner can't do at this point in time)
7. Row Count Stats
● The backend essentially holds MCVs (most common values) and a histogram of the distribution of the rest of the values.
● The number of MCVs stored is equal to a value known as the statistics target.
● Incorrect or outdated counts can lead to bad selectivity estimates, which in turn lead to poor plans.
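As a minimal sketch (assuming a hypothetical table sometable with a column col), the stored MCVs and histogram can be inspected through the pg_stats view:

-- Inspect the planner's statistics for one column.
SELECT most_common_vals,   -- the MCV list
       most_common_freqs,  -- frequency of each MCV
       histogram_bounds    -- histogram over the remaining values
FROM pg_stats
WHERE tablename = 'sometable'
  AND attname = 'col';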
8. User Settings
● PostgreSQL depends quite a lot on the page cost values (seq_page_cost, random_page_cost) and others. Setting them to arbitrary values will cause the query planner to choose an inefficient plan over an efficient alternative.
9. Planner Not-Done-Yets
● Some queries are slow to execute because their optimization depends on a feature which is not yet present in the query planner.
10. Selectivity Estimate Issues
● Arbitrary estimates give bad values.
● Cross-table correlations aren't taken into consideration.
● The planner doesn't understand expressions like (a + 0 = a).
11. Settings Issues
● DBAs can use some settings to tune the query planner.
● Those settings can be harmful if set to incorrect values.
12. Planner Cost Evaluation Parameters
● seq_page_cost = 1.0
● random_page_cost = 4.0
● cpu_tuple_cost = 0.01
● cpu_index_tuple_cost = 0.005
● cpu_operator_cost = 0.0025
● These parameters are used by the planner to estimate the costs of plans.
13. Planner Cost Evaluation Parameters Continued
● random_page_cost abstracts the cost of reading a random page, and seq_page_cost abstracts the same for a sequential read.
● In real-world scenarios, setting random_page_cost to a lower value can help produce better plans, since modern disks have narrowed the gap between random and sequential page read costs. A sketch follows.
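A minimal sketch of adjusting this for one session (the right value depends on your storage; something near 1.1 is a common choice for SSDs):

-- Lower random_page_cost, e.g. for SSD-backed storage.
SET random_page_cost = 1.1;
-- Verify the active setting.
SHOW random_page_cost;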
15. Cases
● Some examples of situations where planner decisions can lead to a bad plan.
● Some cases are not strictly related to the query planner, but are fun in general!
17. Why does WITH face performance issues?
● We have to understand how WITH is processed by the planner.
● The query planner intentionally runs the WITH part first. It divides the query into separate plans, which doesn't match the pull architecture of the executor.
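A sketch of the effect (hypothetical table orders): because the WITH part is planned separately, the outer predicate is not pushed down into it.

-- The CTE is evaluated in full before the outer WHERE applies,
-- so an index on orders.id cannot be used here.
WITH all_orders AS (
    SELECT * FROM orders
)
SELECT * FROM all_orders WHERE id = 42;

-- Written inline, the planner can push the predicate down
-- and use the index.
SELECT * FROM orders WHERE id = 42;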
20. Feature Alert!
● The current implementation approach for GROUPING SETS uses a CTE-based architecture. Hence, it suffers from the same problem of divided query plans. If this problem is solved, we could have an efficient implementation of GROUPING SETS in Postgres, which would be very nice.
21. Materialized Views
● Joining a materialized view back to its parent table is expensive, since the planner does not know about any correlation between the two, even though the two tables depend on each other.
25. EXISTS and NOT EXISTS
● Using EXISTS and NOT EXISTS in your queries is a good idea and will generally lead to better plans. Use them instead of complete queries if you only want to test the presence or absence of specific values.
26. Reason
● EXISTS and NOT EXISTS allow the planner to use semi-joins and anti-joins, respectively. These are much more efficient than full joins for merely testing the presence or absence of values, as the sketch below shows.
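A minimal sketch (hypothetical tables customers and orders): the EXISTS form lets the planner use a semi-join that stops at the first matching row, and NOT EXISTS an anti-join.

-- Semi-join: customers that have at least one order.
SELECT c.id, c.name
FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

-- Anti-join: customers with no orders at all.
SELECT c.id, c.name
FROM customers c
WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);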
28. What is bad with prepared statements?
● Pre 9.2, the user had a choice of using prepared statements with generic plans, or not using them and re-planning every time.
● Post 9.2, the user doesn't have that restriction, but the downside is that the user has no control over whether a prepared statement is replanned.
29. Pre 9.2
● Prepared statements are good for workloads where the selectivity of the queries is in a small range. Widely varying selectivity values can lead to bad performance for prepared statements.
30. Post 9.2
● The point to watch post 9.2 is whether the planner is overlooking a better alternative plan and choosing a suboptimal one.
31. Example
● select * from sometable where id = any ($1); where $1 was a small array.
● The generic plan for this is of course trivial and would have worked fine.
● But the planner has to estimate the cost of the generic plan without knowing how many elements are in the array.
32. Example continued
● It estimated an array size for the generic plan that was much higher than the actual size of the array.
● That meant the generic plan was costed at about twice the cost of the custom plan, even though it was the exact same plan. A sketch of the scenario follows.
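A minimal sketch of the scenario (hypothetical table sometable):

-- Prepare the parameterized query.
PREPARE get_rows (int[]) AS
    SELECT * FROM sometable WHERE id = ANY ($1);

-- Post 9.2, the planner compares a custom plan for this specific
-- array against its (size-blind) estimate for the generic plan.
EXECUTE get_rows (ARRAY[1, 2, 3]);

-- EXPLAIN EXECUTE shows which plan was actually chosen.
EXPLAIN EXECUTE get_rows (ARRAY[1, 2, 3]);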
33. Take-away lesson
● You need to be wary when using prepared statements, pre 9.2 or post 9.2.
● Prepared statements are much better than before, but some surprises may still turn up.
34. Join Reordering
● The planner may reorder the joins present in a query based on heuristics that it thinks will lead to better plans.
● Generally, this is a good idea.
● We should not assume that we have a better idea of the optimal join order.
● Still, if we are inclined to do so, we have a setting known as join_collapse_limit.
35. Gotcha!
● We have a catch here, though.
● Setting join_collapse_limit to a low value can lead to suboptimal plans, but faster planning times.
● If you don't care about suboptimal plans as much as you care about faster planning, set join_collapse_limit to a lower value, as sketched below.
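A minimal sketch (hypothetical tables a, b, c):

-- A value of 1 disables reordering of explicit JOIN syntax entirely;
-- the planner then joins in exactly the written order.
SET join_collapse_limit = 1;

SELECT *
FROM a
JOIN b ON b.a_id = a.id
JOIN c ON c.b_id = b.id;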
36. Genetic Query Optimizer
● The genetic query optimizer, or GEQO, is used when the number of tables involved in a join exceeds a specific limit.
● GEQO is used when exhaustively searching the entire search space for executing the join is not feasible.
37. Genetic Query Optimizer Continued
● GEQO uses genetic algorithms to generate query plans and evaluate their fitness, generating better plans in each successive population.
38. GEQO Gotcha
● GEQO is useful for generating plans when the number of tables is high.
● It is also useful for generating plans quickly.
● However, it can lead to suboptimal plans.
● Lower geqo_threshold if you care more about planner speed than about the optimality of the query plan (see the sketch below).
39. Note about join_collapse_limit and geqo_threshold
● Since the default value of geqo_threshold (12) is higher than the default value of join_collapse_limit (8), GEQO isn't used by default for explicit JOIN syntax.
40. Lack of planner heuristics
● Refer to 'Planner Conceptual Error when Joining a Subquery -- Outer Query Condition not Pulled Into Subquery' on the performance list.
● It tells us that the planner does not understand that 'a = b' and '(b = c OR b = d)' together imply '(a = c OR a = d)'.
● The planner lacks such heuristics. Another example is SELECT * FROM foo WHERE (foo.a + 0 = foo.a); A sketch of the first case follows.
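A hedged sketch of the first case (hypothetical tables t1 and t2): the implied condition has to be restated by hand for the planner to use it.

-- The planner does not derive 't1.x = 1 OR t1.x = 2' from the join
-- condition, so an index on t1.x cannot be used against the OR...
SELECT * FROM t1 JOIN t2 ON t1.x = t2.y
WHERE t2.y = 1 OR t2.y = 2;

-- ...unless we restate the implied condition ourselves.
SELECT * FROM t1 JOIN t2 ON t1.x = t2.y
WHERE (t2.y = 1 OR t2.y = 2)
  AND (t1.x = 1 OR t1.x = 2);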
42.
● SELECT *
FROM parent
INNER JOIN child ON child.parent_id = parent.id
LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c
           FROM parent
           INNER JOIN child ON child.parent_id = parent.id
           WHERE child.data > 25
           GROUP BY 1, 2) agg1 ON agg1.id = parent.id AND agg1.bin = parent.bin
WHERE parent.bin IN (1, 2)
  AND agg1.c >= 3;
44.
● SELECT *
FROM parent
INNER JOIN child ON child.parent_id = parent.id
LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c
           FROM parent
           INNER JOIN child ON child.parent_id = parent.id
           WHERE child.data > 25
           GROUP BY 1, 2) agg1 ON agg1.id = parent.id AND agg1.bin = parent.bin
WHERE parent.bin = 1
  AND agg1.c >= 3;
Notice the 'bin = 1' here.
45. NOT IN Vs. NOT EXISTS
● SELECT ProductID, ProductName
FROM Northwind..Products p
WHERE p.ProductID NOT IN (
    SELECT ProductID
    FROM Northwind..[Order Details])
46. NOT IN Vs. NOT EXISTS Continued
● SELECT ProductID, ProductName
FROM Northwind..Products p
WHERE NOT EXISTS (
    SELECT 1
    FROM Northwind..[Order Details] od
    WHERE p.ProductId = od.ProductId)
47. What the planner does for NOT IN
● NOT IN has two possible plans. The first is a hashed subplan, which requires a hash table of the entire subselect, so it is only used when that table would fit in work_mem.
● If the subselect does not fit into work_mem, a plain subplan is made, which is O(N^2). The sketch below shows how to check which one you got.
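A hedged sketch (hypothetical tables): EXPLAIN reveals which form the planner picked; a 'hashed SubPlan' in the filter indicates the fast path, a plain 'SubPlan' the O(N^2) one.

SET work_mem = '64MB';
EXPLAIN
SELECT * FROM products p
WHERE p.product_id NOT IN (SELECT product_id FROM order_details);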
48. What the planner does for NOT EXISTS
● The query planner generates an anti-join for NOT EXISTS if it can sufficiently disentangle a join clause from the conditions in the subquery.
● If not, then the NOT EXISTS remains as a (correlated) subplan.
● A correlated subplan is one which has to be re-run for every row.
49. Comparison of performance
● NOT IN can be faster when the subquery result is small, since that result set can be hashed, leading to better performance.
● Then, as your data gets larger, it might exceed work_mem, and without warning your query might get very slow.
50. NOT IN Gotcha
● NOT IN has some pretty well-known issues with NULL values. If a column has NULL values, NOT IN might lead to more work, and hence to an even bigger drop in query performance. The sketch below illustrates the NULL behaviour.
● This is quite constraining, since we may already have NULLs in our columns, or may have them in the future.
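A small illustration of the well-known NULL behaviour:

-- If the subselect can produce a NULL, 'x NOT IN (...)' can never be
-- true: it evaluates to NULL instead. This returns zero rows.
SELECT 1 WHERE 1 NOT IN (SELECT unnest(ARRAY[2, NULL]));

-- NOT EXISTS has no such problem. This returns one row.
SELECT 1 WHERE NOT EXISTS (
    SELECT 1 FROM unnest(ARRAY[2, NULL]) AS v WHERE v = 1);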
51. So, what should we use?
● The points discussed so far lead to the conclusion that we should use NOT IN for smaller data sets and NOT EXISTS for larger ones.
● However, since NOT IN has the NULL constraint, we should generally prefer NOT EXISTS in all cases unless our case specifically needs NOT IN.
52. General Advice 1
● Use tables to store intermediate results and process over them wherever possible. This helps avoid pitfalls such as materialized view joins, UNION pitfalls, and DISTINCT pitfalls, among others. A sketch of the pattern follows.
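A minimal sketch of the pattern (hypothetical table names):

-- Materialize the intermediate result once...
CREATE TEMP TABLE recent_orders AS
    SELECT * FROM orders
    WHERE created_at > now() - interval '7 days';

-- ...refresh the planner statistics on it...
ANALYZE recent_orders;

-- ...then query the materialized result directly.
SELECT customer_id, count(*)
FROM recent_orders
GROUP BY customer_id;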
53. General Advice 2
● Whenever the planner's cost estimates seem to be drifting away from reality, run ANALYZE. This updates the planner statistics and may lead to better results.
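For example (hypothetical table name):

-- Refresh statistics for one table, or run plain ANALYZE for all.
ANALYZE sometable;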
54. General Advice 3
● When the accuracy of the planner's costs doesn't seem right, you can consider increasing the statistics target (the number of MCV values stored) with the following command:
ALTER TABLE <table name> ALTER COLUMN <column name> SET STATISTICS <value>;
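A concrete sketch (hypothetical names; the default statistics target is 100, and a fresh ANALYZE is needed for the change to take effect):

ALTER TABLE sometable ALTER COLUMN col SET STATISTICS 500;
ANALYZE sometable;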
55. Future developments to the query planner
● The query planner still lacks some features.
● Understanding of foreign keys, inner join removal, more heuristics.
● Hopefully, we will have them all someday.
● CTE planning needs to be improved and brought in line with the Postgres executor's data pull model.
56. Case Study – Adding foreign key understanding to the planner
● In the context of removal of inner joins.
● The main issue identified is that if a row is deleted, then in the window between the deletion of the row and the firing of the FK trigger, all plans generated that depend on that row's foreign key will be invalid.