This presentation by Belgium was prepared for the break-out Session 1, “Quantitative Evidence”, in the discussion “Economic Analysis in Merger Investigations” at the 19th OECD Global Forum on Competition on 9 December 2020. More papers and presentations on the topic can be found at http://oe.cd/eami.
This presentation was uploaded with the author’s consent.
1. Quantitative techniques in merger investigations
Experiences of the BCA
Presentation for the OECD GFC roundtable on
Economic Analysis in Merger Investigations
9 December 2020
Griet Jans, Chief Economist ad interim
2. Short intro of the Chief Economist Team (CET) at the BCA
• Started in 2010 with 1 Chief Economist
• In 2012: Chief Economist + 1
• Over time grown to: Chief Economist + 4 members at most
• Room for further growth? Yes, but at feasible pace
• Situation today:
– 4 FTE, reinforced by interns for 4 months per year
– 2 of them holding a PhD
– Complementary skills: data techniques, IO modelling, accounting, online surveys
– Programs covered: among others Stata, R, Excel, Check Market (for online surveys), …
– Covering cases in Dutch and French
3. Importance of quantitative techniques (Q.T.) depending on…
• Familiarity with and complexity of the sector
• First indicators (e.g. HHI-levels)
• On the list of priorities (in case of time/resource constraints)
• Reactions from third parties to the announced merger
• Reasons/objectives of the merger according to merging parties
4. Q.T. throughout the whole investigation
• Market delineation:
– Measuring the level of product differentiation
– Delineation of catchment areas
– Closeness of competition between merging parties
– …
• Identification/verification of potential theories of harms
• Testing of submitted remedies
5. Level of complexity of Q.T. varying …
• …from basic descriptive statistics supported by visuals
• …over verification of empirical studies provided by the merging parties or third parties
• …to more elaborate empirical analyses fully executed by the CET
• But given the small size of the CET => management of expectations (also internally)
6. Challenges
• Small team = vulnerable. Importance of
– Internal growth (what is a sufficient/reassuring critical mass?)
– Alliances with external institutions for additional support?
– Procedures and automation (investing in templates, software)
– Be realistic in ambitions + learn from others
– But also dare to dive into the unknown
• Rights of defence (data room): BCA relying on the procedure and principles of DG Comp
• Existing case law vs new insights: what if a new approach is clearly more reliable, but very time-consuming, data-demanding, …: is pragmatism still acceptable?
• Presentation of results often at least as challenging: Less is more? Use of
visuals? Use of annexes? Academic reporting vs bolder statements?
8. Delineation of catchment areas
• First application in 2011
• Strong evolution from radii to highly sophisticated catchment
areas based on actual sales data
• Standardized templates in Excel and do-files in Stata
• Standard = firm-centric approach, complemented afterwards with a robustness check on customer behaviour (e.g. via online survey); if feasible, amended by a customer-centric approach
• Geographic distribution of revenues at store-level per postal
code (or even more refined if relevant and feasible)
• Elimination of outliers => the 80% benchmark is tested via cumulative distribution functions (see the sketch below)
• Drawback: static analysis, but a workable alternative to a SSNIP test (mostly not feasible in the context of local markets)
Steps of a baseline analysis
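For illustration, a minimal Python sketch of this baseline step, assuming store-level sales per postal code with travel times; all file and column names are hypothetical, not the BCA's actual templates or do-files:

```python
import pandas as pd

# Hypothetical input: one row per (store_id, postal_code) with the revenue
# the store earns in that postal code and the travel time to it.
sales = pd.read_csv("store_sales_by_postal_code.csv")

def catchment(store_sales: pd.DataFrame, benchmark: float = 0.80) -> pd.Series:
    """Postal codes that jointly account for `benchmark` of a store's revenue,
    adding the closest areas first (firm-centric approach); the remaining
    ~20% of revenue is treated as outliers."""
    s = store_sales.sort_values("travel_minutes")
    cum_share = s["revenue"].cumsum() / s["revenue"].sum()  # empirical CDF
    keep = cum_share.shift(1, fill_value=0.0) < benchmark   # include the code crossing 80%
    return s.loc[keep, "postal_code"]

# One catchment area per store; the 80% cut-off itself can be stress-tested
# by inspecting the full cumulative distribution function.
areas = sales.groupby("store_id").apply(catchment)
```

Sorting by travel time and reading the cumulative share off the empirical distribution mirrors the 80% benchmark test described above.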
9. Delineation of catchment areas
• Calculation of market shares within catchment areas:
– Real presence of all competitors in that local catchment area preferred to the in/out method (illustrated below)
– Numerator (the parties) is known; denominator sometimes to be estimated based on other (public) sources
• Analysis of overlap areas
• Analysis of the size of the catchment areas: differences across types of players, location, demographic characteristics, …?
• Type of further analyses depending on sector, data availability, time
constraints
• Additional studies within these areas - examples:
– Diff-in-Diff impact of entry (see next slides)
– Switching analyses
– Local UPP-studies
Static analysis as start for further competitive assessment
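A hedged sketch of the "real presence" share calculation (preferred to the in/out method above); file and column names are hypothetical, and for competitors the revenue figures may themselves be estimates from public sources:

```python
import pandas as pd

# Hypothetical inputs: chain-level sales per postal code, and the postal
# codes of one catchment area delineated in the previous step.
sales = pd.read_csv("chain_sales_by_postal_code.csv")  # chain, postal_code, revenue
area = set(pd.read_csv("catchment_area.csv")["postal_code"])

# "Real presence": each chain's share equals the revenue it actually earns
# inside the area (the in/out method would instead attribute a store's full
# revenue to the area whenever the store itself is located in it).
inside = sales[sales["postal_code"].isin(area)]
shares = inside.groupby("chain")["revenue"].sum()
shares = shares / shares.sum()
print(shares.sort_values(ascending=False))
```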
10. Impact of the maverick's entry
• Impact of the entry of supermarket Chain 1 on the revenues of supermarket Chain 2
• Treatment group: catchment areas of Chain 2 stores where Chain 1
entered
• Different control groups:
– catchment areas of Chain 2 stores without entry of Chain 1
– catchment areas of Chain 2 stores with no Chain 1 store within 30 minutes
• Comparison of revenues between treatment and control, before and after the entry on the market (see the sketch below)
• Significant impact on revenues of Chain 2
• Harm: the merger would remove a maverick from the market, with potential upward pricing pressure at the local level and, at the national level, a return to a consolidated market structure of 3 supermarket chains with equal presence (risk of coordinated effects)
• Combined with the catchment area assessment, it helped identify the local areas where the harm was expected to be the highest
Quantification of expected harm
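The comparison can be illustrated with a simple 2x2 difference-in-differences on group means; this is only a sketch with hypothetical file and column names, not the analysis actually run:

```python
import pandas as pd

# Hypothetical input: one row per Chain 2 store and month, with revenue,
# treated = 1 if Chain 1 entered that store's catchment area, and
# post = 1 for months after the entry date.
rev = pd.read_csv("chain2_store_revenues.csv")

# Classic 2x2 difference-in-differences on group means: the change for
# treated stores minus the change for control stores.
means = rev.groupby(["treated", "post"])["revenue"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"Estimated effect of Chain 1's entry on Chain 2's monthly revenue: {did:,.0f}")
```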
11. Vertical arithmetic
• Even simple arithmetic may be very useful in verifying whether a foreclosure strategy can be lucrative or not (a numerical sketch follows this slide)
• Example from media platforms
• Costs if the own television channel is foreclosed to rival platforms:
– Loss of carriage fees on the foreclosed platforms
– Loss of ad revenues on the foreclosed platforms, with a feedback effect as new customers arrive on the own platform to the detriment of the foreclosed platforms
• Revenues of the foreclosure strategy:
– New customers for the cable operator who shift away from the foreclosed platforms
– In case of total foreclosure: α = 1
Calculation of total input foreclosure – following the EC approach
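To make the trade-off concrete, a back-of-the-envelope sketch in the spirit of the slide above; every number is a made-up placeholder, only the structure of the calculation follows the cost and revenue items listed:

```python
# All figures below are hypothetical placeholders, not case data.
alpha = 1.0               # share of the channel's reach foreclosed (total foreclosure: alpha = 1)
rival_subs = 400_000      # subscribers watching the channel on rival platforms (hypothetical)
carriage_fee = 6.0        # annual carriage fee per subscriber, EUR (hypothetical)
ad_rev_per_viewer = 4.0   # annual ad revenue per viewer, EUR (hypothetical)
switch_rate = 0.10        # share of foreclosed viewers switching to the own platform (hypothetical)
margin_per_sub = 250.0    # annual margin on a new own-platform subscriber, EUR (hypothetical)

# Costs: forgone carriage fees, plus ad revenues lost on viewers who do not
# switch (switchers keep watching the channel: the feedback effect above).
cost = alpha * rival_subs * (carriage_fee + (1 - switch_rate) * ad_rev_per_viewer)
# Revenues: margin earned on the customers gained from the foreclosed platforms.
gain = alpha * rival_subs * switch_rate * margin_per_sub
print(f"Net annual incentive to foreclose: EUR {gain - cost:,.0f}")
```

A positive net figure would indicate that the foreclosure strategy pays for itself under the assumed parameters.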
12. Vertical arithmetic
• Confirmation of the incentive for the vertically integrated platform to foreclose
• Together with proof of ability and a lack of efficiencies (pointing out EDM, i.e. elimination of double marginalisation): behavioural remedy (access to channels on FRAND conditions)
From words to formulas, filling in the gaps
13. Ex post evaluation of a previous merger to predict the impact of a new merger
• “Difference-in-difference” regression analysis in a homogeneous product market
• Monthly price, quantity and cost data from the parties (treatment group)
and main competitors (control group) covering a period pre and post a
previous merger.
• A visual inspection of the price, cost and quantity patterns over time of the
two groups
– Increase in input cost seemed to fully explain the increase of prices in
the post-merger period => a well-functioning competitive market?
– Increase of the input cost was stronger for the competitors than for the merging parties, explained by scale economies or other efficiencies (the merging parties being the largest firms in the sample)
– In a well-functioning competitive market, the largest firm would use this advantage by lowering its price and gaining sales relative to its less efficient competitors
– The opposite happened: prices and the average gross margin of the large firm increased relative to its competitors in the post-merger period
Context
14. Ex post evaluation of a previous merger to predict impact
• Conventional difference-in-difference regression model (e.g. Bertrand, Duflo and
Mullainathan, 2004):
P_mgt = A_m + B_t + β I_gt + γ C_mgt + ε_mgt
where
• P_mgt denotes the price/ton (possibly in logs) for each location within a certain product category (m), in a certain group (g), in a certain month (t)
• A_m and B_t are location-product-category and time fixed effects, respectively
• I_gt is a dummy equal to 1 for the treatment group in the post-merger period, and β is the “difference-in-difference” regression coefficient of interest
• C_mgt denotes the input cost (possibly in logs), where the coefficient γ captures the effect of the residual variation in the input costs that is not captured by the fixed effects
• ε_mgt is the error term
• Valid under some critical assumptions, notably the common trend assumption (an illustrative code sketch follows this slide)
Regression analysis
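As an illustration of how such a specification can be estimated, a hedged Python/statsmodels sketch (hypothetical file and column names; the BCA's actual implementation, e.g. in Stata, may differ), already including the volume weights and clustered standard errors described on the next slide:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: unit = location x product category (m), month (t),
# treated/post dummies, input cost (C_mgt), sold volume and location.
df = pd.read_csv("did_panel.csv")
df["did"] = df["treated"] * df["post"]  # I_gt: treatment group x post-merger

# P_mgt = A_m + B_t + beta*I_gt + gamma*C_mgt + e_mgt, estimated by WLS
# with volume weights; C(unit) and C(month) generate the fixed effects.
model = smf.wls("price ~ did + cost + C(unit) + C(month)",
                data=df, weights=df["volume"])
# Standard errors clustered at the location level.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["location"]})
print(result.params["did"])  # beta: the diff-in-diff price effect
```

Estimating the same model on log prices and log costs gives the relative price effect discussed on the next slide.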
15. Ex post evaluation of a previous merger to predict impact
• Price effect estimated in levels and logs
• Volume weights included, such that prices in product categories
with higher sold volumes get a higher relative weight
• Standard errors are clustered at location level
• When estimated in logs, a significant relative price increase was estimated for the treatment group relative to the control group, with a very high R-squared suggesting a good model fit and a coefficient significant at the 95% confidence level
• Robustness tests performed: common trend assumption, clustering of standard errors (see the sketch below)
• Based on these results: temporary conclusion that the investigated merger would lead to further price increases
• Withdrawal after the Phase I investigation
Baseline result
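A minimal sketch of the visual common trend check, under the same hypothetical data layout as the regression sketch above:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical panel as in the regression sketch above.
df = pd.read_csv("did_panel.csv", parse_dates=["month"])

# Mean price per month for the treatment and control groups: roughly
# parallel pre-merger paths support the common trend assumption.
ax = (df.groupby(["month", "treated"])["price"].mean()
        .unstack("treated")
        .plot())
ax.set_ylabel("mean price")
plt.show()
```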