1. The state of application performance
management in the Indian BFSI sector
A fact-finding study by Anunta
2. Index
Scope and methodology
The APM Assessment
The end-user conundrum
What SLAs fail to measure
Measuring Business Impact
Top 5 Survey Insights
3. Scope and methodology
What:
Anunta Tech commissioned ValueNotes Research to study the state of application performance management, monitoring and measurement in the Indian BFSI sector. The objective was to obtain qualitative insights into how BFSI companies currently measure the productivity and efficiency of their applications. This included understanding the perceptions of IT decision makers on key parameters such as:
• Are they measuring the productivity and efficiency of their application performance and delivery?
• How are they measuring application performance?
• What are the key SLAs around application performance?
• What kind of penalties and guarantees exist in current SLAs with vendors?
• Are they measuring the business impact of application performance?
Who:
The study polled senior-level IT professionals from 34 BFSI organisations, each with more than 500 end points (desktops, laptops and other mobile devices).
• 61% of the respondents were IT heads, CTOs or CIOs; the balance were VPs and GMs of IT.
• 47% of the respondents were from banks and 26% from insurance companies; AMCs and brokerages accounted for the rest.
How:
ValueNotes conducted the survey over a period of three weeks through a combination of face-to-face and telephonic interviews.
www.anuntatech.com | 1
4. APM challenges perceived by the BFSI sector
The APM Assessment
Key Findings
• Lack of integration and a scarcity of skilled resources, compounded by attrition, are among the top factors impeding application delivery.
• Efficient management of the performance of critical applications, and seamless delivery across the entire delivery chain, was found to be a top priority.
• These challenges are redefining the way application performance management (APM) is carried out. The increasing complexity of applications and application delivery architectures, widening geographical reach and constant upgrades driven by technological obsolescence are making it harder to monitor application performance.
Critical inferences
• Application uptime is the most crucial metric for application performance.
• That said, many organisations do not have any set standards or benchmarks.
• Existing defined metrics were neither monitored regularly nor analysed further to understand the effectiveness of application performance.
• Older banks and PSUs have a higher tolerance for downtime. Additionally, metrics tend to be flexible for remote areas, where connectivity and bandwidth issues can cause higher levels of downtime.
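To make the uptime discussion concrete, an uptime SLA percentage can be converted into the downtime it actually permits per month. The sketch below is illustrative only; the SLA figures are hypothetical, not from the survey:

```python
def downtime_budget_minutes(uptime_pct: float, period_minutes: int = 30 * 24 * 60) -> float:
    """Downtime allowed per period (default: a 30-day month) under an uptime SLA."""
    return round(period_minutes * (1 - uptime_pct / 100), 1)

# A 99.5% monthly uptime SLA still permits 216 minutes (3.6 hours) of downtime;
# tightening it to 99.9% shrinks the budget to about 43 minutes.
print(downtime_budget_minutes(99.5))  # 216.0
print(downtime_budget_minutes(99.9))  # 43.2
```

Framing an SLA as a downtime budget rather than a percentage makes a "higher tolerance for downtime" directly comparable across organisations.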
5. Their Take
The APM Assessment
"You cannot blindly compare the industry benchmark and draw conclusions."
- Head-IT, Old Private Bank
"We do not benchmark against industry standard, because we believe our own standard meets the needs."
- Head-IT, Old Private Bank
"I do not believe in comparing ourselves with an industry benchmark."
- CIO, New Private Bank
Anunta Take
Almost every participant in the BFSI sector identifies application uptime as a critical metric of application performance and recognises the need for those applications to function optimally, i.e. increase productivity while reducing costs. But this study showed that organisations did not have defined standards of measurement and did not consider industry benchmarks relevant indicators.
6. End-user monitoring challenges
The End User Conundrum
Key Findings
• CTOs understand the importance of end-user monitoring, with 100% of CTOs saying it is of critical importance.
• While a majority claimed to measure performance from an end-user perspective, only 47% claimed consensus between IT and end-user experience, leading to some doubt as to whether the measurement was in fact happening at the end-user level.
• End-user feedback, incident reporting and problem solving are the metrics employed to capture the end-user experience, indicating a reactive approach towards monitoring; only 15% of those polled said they took a proactive approach to end-user monitoring.
Critical inferences
• Application performance from the end-user perspective is reactive and not linked to business metrics.
• Given the lack of specificity around measurement metrics, most results are vague and based primarily on end-user feedback.
7. Their Take
The End User Conundrum
"We measure end user performance by feedback, questionnaires, branch visits - but these are on a random basis."
- CIO, Public Sector Bank
"We do not have any metrics for measuring end user experience; we consider the general feedback given by the end user."
- GM-IT, Co-operative Bank
"End user monitoring is very important. But practically it is not feasible all the time."
- Head IT, AMC
"We don't know how to do this or how to quantify this."
- Head-IT, Public Sector Bank
"We have vendors like Karvy who monitor the application performance. We do not monitor the performance regularly. We look into the matter only in case of issues or problems."
- IT-Head, AMC
"We do look at the problem tickets logged, but to have a structured system for doing this is beyond our scope today."
- IT-Head, Insurance
Anunta Take
Application performance monitoring at the end-user level seems to be missing in the Indian BFSI industry. Only 47% say that there is consensus between IT and the end user, which indicates that application performance is perhaps measured at the device level rather than the end-user level.
8. The End User Conundrum
Industry: PSUs and Co-operative banks
How they monitor:
• Do not measure performance in a structured manner
• Rely on audits and branch surveys
Anunta Take
Understandably, technology adoption has always been slow in this segment, and many of these banks need to first upgrade their existing application infrastructure before they can begin to measure performance in a more technical manner.
Industry: Private Banks
How they monitor:
• Deploy tools to measure performance
• Some rely on end-user feedback for performance checks
Anunta Take
While they may be deploying tools, this is most likely happening at various levels of the enterprise network and not necessarily from an end-user standpoint per se. So while end-user feedback is important, there is often a disconnect when network diagnostics tell the IT team that its various components, including endpoint, LAN, server and application software, are functioning perfectly.
9. The End User Conundrum
Industry: AMCs and institutional brokers
How they monitor:
• Monthly dashboards where application performance is displayed
• A majority use vendors for monitoring
Industry: Insurance
How they monitor:
• Deploy tools for measurement
• Capture end-user characteristics and validate them against the IT measurements
Anunta Take
Insurance companies, AMCs and institutional brokers seem to have identified the right metrics; however, the study shows that this measurement is not being done consistently, nor are vendors or in-house teams being held to SLAs. Most importantly, there is a need to move from troubleshooting to proactive APM.
10. SLA measurement challenges
What SLAs fail to measure
Key Findings
• The SLAs around application performance are not measured and monitored regularly, for reasons such as a lack of documented data and ambiguity around metrics.
• Most organisations have an in-house team that manages application performance, but very few have SLAs around the in-house team.
• SLA measurement is done on a selective basis and only when disaster strikes. Proactive monitoring at the end-user level is still not adopted by many for application delivery management.
Critical inferences
• Failure to measure: Metrics pertaining to application or server availability and network capacity are useful to the IT department, but they may not be a true measure of IT efficiency as far as revenue and productivity are concerned. IT teams in BFSI need to reflect on which metrics can provide a link to business productivity.
• Failure to redefine: IT advancement is rapid, and the BFSI sector needs to be ready to refine and rework its metrics as circumstances change, and to invest in new application delivery management tools and infrastructure.
• Failure to understand the importance: Many organisations in the BFSI sector are aware of the need for SLAs but fail to understand the impact SLAs have on their business. If SLAs are adopted in a more stringent manner and monitored on a regular basis, they can reduce the risks of adopting new technology.
11. Their Take
What SLAs fail to measure
"Business applications are our core responsibility and having an in-house team to monitor the application performance and delivery is crucial, but SLAs are neglected as it is expected of the team to meet the business requirements."
- VP-IT, Insurance
"Measuring performance of vendors is important, but I have to admit that the SLAs are more on paper. We have to create a balance between performance and flexibility in dealing with vendors."
- Head IT, Health Insurance
"With the in-house team, the service levels are a given. So, frankly, we haven't felt the need of having SLAs with our in-house team."
- Head IT, Old Private Bank
Anunta Take
From a methodology standpoint, every SLA that an organisation enters into needs to be linked to the end-user experience. Therefore, one must focus on ensuring that every technical SLA is translated into an end-user SLA, that every end-user SLA is enforceable, and that the system is proactive, i.e. SLA defaults can be identified before they occur.
12. Challenges in measuring business impact
Measuring Business Impact
Key Findings
• Firms in the BFSI sector measure the business impact of application performance only periodically.
• Quantification of losses due to poor application performance is a major challenge.
• Loss in employee productivity was measured in terms of volumes, people or hours lost due to incidents that cause a dip in application performance.
• Overall employee productivity loss due to poor application performance is in the range of 10-20%.
Critical Inferences
• The link between business and IT performance is at best tenuous, and often non-existent.
• Revenue loss due to application performance issues is almost completely ignored, with no correlation being cited between the two.
• Additionally, while brand credibility does take a hit, once again it is not being measured, either through revenue loss or otherwise.
13. Their Take
Measuring Business Impact
"In such a situation, poor application delivery can lead to revenue losses of up to 5-10%."
- Head-IT, General Insurance
"There is productivity loss if an issue is unresolved in 30 minutes or 1 hour. When networks are not available for a day, the operational cost increases and there are productivity losses of up to 30%."
- Head-IT, Old Private Bank
"There is one thing that you can't measure. Erosion in brand credibility."
- Head-IT, Public Sector Bank
Anunta Take
It is not surprising that user organisations are unable to quantify the revenue impact of poor application performance. The need of the hour is therefore twofold: first, the end-user SLAs we discussed earlier; second, a fundamental shift in how one measures IT's impact on the business. One needs to move away from the TCO discussion and look to a more tangible and measurable metric, i.e. the cost of application delivery per user, per month. When this is seen in the context of revenues lost on account of application downtime, the cost-benefit analysis becomes far clearer and organisations are able to map IT to business goals much better.
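The cost-per-user framing above can be sketched as a back-of-the-envelope comparison. Every figure in this example is hypothetical, chosen only to show the shape of the calculation, not taken from the survey:

```python
def delivery_cost_per_user(total_monthly_cost: float, users: int) -> float:
    """Monthly cost of application delivery, expressed per user."""
    return total_monthly_cost / users

def downtime_revenue_loss(revenue_per_hour: float, downtime_hours: float,
                          productivity_hit: float) -> float:
    """Revenue lost in a month to downtime, given the fraction of
    revenue-generating work that stops during an outage."""
    return revenue_per_hour * downtime_hours * productivity_hit

# Hypothetical organisation: 1,000 users, INR 5 lakh/month delivery spend,
# INR 2 lakh/hour of revenue, 4 hours of downtime, 15% productivity hit.
cost_per_user = delivery_cost_per_user(500_000.0, 1_000)
monthly_loss = downtime_revenue_loss(200_000.0, 4.0, 0.15)
print(cost_per_user, monthly_loss)
```

Putting the two numbers side by side is what makes the cost-benefit argument tangible: a per-user delivery cost is easy to budget, while the downtime loss shows what poor delivery actually costs the business.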
14. Top 5 insights
#1 Yes: Application uptime is critical to the BFSI sector.
But: There are no set standards or benchmarks to measure performance.
#2 Yes: They measure application performance from an end-user perspective.
But: The metrics are neither well defined nor monitored proactively.
#3 Yes: They have SLAs in place.
But: These are at best at the device level and more often than not on paper only, with minimal enforcement. End-user SLAs are non-existent.
#4 Yes: They periodically measure the business impact of application performance.
But: Correct metrics and quantification are almost non-existent. Loss of revenue due to employee productivity issues caused by application downtime is not measured.
#5 Yes: The BFSI sector is an early adopter of technology and remains its largest buyer.
But: The inability to identify and measure new-age performance indicators such as application delivery leaves them in an ambiguous grey area where technology and its efficacy are not necessarily seen together.