A Practical Guide to Rapid ITSM as a Foundation for Overall Business Agility
Transcript of a Briefings Direct podcast on how enterprises can benefit from the newest IT
service management methods and procedures.
Listen to the podcast. Find it on iTunes. Sponsor: HP
Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're
listening to BriefingsDirect. Today, we present a sponsored podcast panel discussion on how
rapidly advancing IT service management (ITSM) capabilities form an IT
imperative and therefore a bedrock business necessity.
Businesses of all stripes rate the need to move faster as a top priority, and many
times, that translates into the need for better and faster IT projects. But
traditional IT processes and disjointed project management don't easily afford
rapid, agile, and adaptive IT innovation.
The good news is that a new wave of ITSM technologies and methods allows for more rapid ITSM adoption, and that means better, faster support of agile business processes.
To help us explore a practical guide to fast ITSM adoption as a foundation for overall business
agility, please join me in welcoming our panel. We're here today with John Stagaman, Principal
Consultant at Advanced MarketPlace based in Tampa, Florida. Welcome, John.
John Stagaman: Hello.
Gardner: We're also here with Philipp Koch, the Managing Director of InovaPrime, Denmark.
Welcome, Philipp.
Philipp Koch: Thanks.
Gardner: And lastly, we are here with Erik Engstrom, the CEO of Effectual
Systems in Berkeley, California. Welcome, Erik.
Erik Engstrom: Good morning, Dana. Glad to be here.
Gardner: John Stagaman, let me start with you. We hear a lot, of course, about the faster pace of
business, and cloud and software as a service (SaaS) are part of that. What, in your mind, are the
underlying trend or trends that are forcing IT's hand to think differently, behave differently, and
to be more responsive?
Stagaman: If we think back to the typical IT management project historically, what happened
was that, very often, you would buy a product. You would have your
requirements and you would spend a year or more tailoring and customizing that
product to meet your internal vision of how it should work. At the end of that, it
may not have resembled the product you bought. It may not have worked that
well, but it met all the stakeholders’ requirements and roles, and it took a long
time to deploy.
That level of customization and tailoring resulted in a system that was hard to
maintain, hard to support, and especially hard to upgrade, if you had to move to
a new version of that product down the line. So when you came to a point where you had to
upgrade, because your current version was being retired or for some other reason, the cost of
maintenance and upgrade was also huge.
That was a lesson learned by IT organizations. Today, saying that an upgrade will take a year, or even six months, really gets a response: why should it? There's been a change in the approach of most of the customers we go on-site to now. It used to be -- and sometimes it still happens -- that customers would say, "We want to use out of box, and here are all the things we want that are not out of box."
But they've gotten much better at saying they want to start from out of box, leverage that, and
then fill in the gaps, so that they can deploy more quickly. They're not opening the box, throwing
it away, and building something new. By working on that application foundation and extending
where necessary, it makes support easier and it makes the upgrade path to future versions easier.
Moving faster
Gardner: It sounds like moving towards things like commodity hardware and open-source
projects and using what you can get as is, is part of this ability to move faster, but is it the need to
move faster that’s driving this or the ability to reduce customization? Is it a chicken and egg?
How does that shape up?
Engstrom: I think that the old use case of "design, customize, and implement" is being forced out as an acceptable approach, because SaaS, platform as a service (PaaS), and the cloud are changing what stakeholders expect. Stakeholders are retiring, and fresher sets of technologies and experiences are coming in. These two- and three-year standup projects are no longer acceptable.
If you're not able to do fast time-to-value, you're not going to get funding.
Funding isn’t in the $8 million and $10 million tranches anymore; it’s in the
$200,000 and $300,000 tranche. This is having a direct effect on on-premise tools, the way the
customers are planning, and OPEX versus CAPEX.
Gardner: Philipp, how do you come down on this? Is this about doing less customization or
doing customization later in the process and, therefore, more quickly?
Koch: I don't think it's about the customization element in itself. It's more that, in the past, customers would say they wanted to tailor the tool, but then they would say they wanted this and that, take the software off the shelf, and start to rebuild it.
Now, with the SaaS tool offerings coming into play, you can't do that anymore. You can't build your ITSM solution from scratch. You want to be able to take it according to the use case and adjust it with configuration or light customization. You don't want to tailor it from the ground up.
But customizations happen while you deploy the project and that has to happen in
a faster way. I can only concur with all the other things that have already been
said. We don't have huge budgets anymore. IT, as such, never had huge budgets, but, in the past,
it was accepted that a project like this took a long time to do. Nowadays, we want to have
implementations of weeks. We don’t want to have implementations of months anymore.
Gardner: Let's just unpack a little bit the relationship between ITSM and IT agility. Obviously, we want things to move quickly and be more predictable, but what is it about moving to ITSM rapidly that delivers the benefit? I know this is rather basic, but I think we need to cover it for all the types of listeners we have.
Back to you, John. Explain and unpack what we mean by rapid ITSM as a means to better IT performance and faster management of projects.
Best practices
Stagaman: For an organization that is new to ITSM processes, starting with a foundational
approach and moving in with an out-of-box build helps them align with best practice and can be
a lot faster than if they try to develop from scratch. SaaS is a model for that, because with SaaS
you're essentially saying you're going to use this standard package.
The standard package is strong, and there's more leverage to use that. We had a federal customer
that, based on best practice, reorganized how they did all their service levels. Those service
levels were aligned with services that allowed them, for the first time, to report to their
consuming bureaus the service levels per application that those bureaus subscribed to. They were
able to provide much more meaningful reporting.
They wouldn’t have done that necessarily if the model didn't point in that direction. Previously,
they hadn't organized their infrastructure along the lines to say, "We provide these application
services to our customer."
Gardner: Erik, how do you see the relationship between rapid, better ITSM and better overall IT performance? Do many people struggle with this relationship?
Engstrom: Our approach at Effectual, what we focus on, is the accountability of data and the ability of an organization to reduce waste by using good data. We're not service [process] management experts in the sense that we're going to define a best practice; we focus strictly on "here is the best piece of data everyone on your team is working [with] across all tools." In that way, what
our customers are able to see is transparency. So data from one system is available on another
system.
What that means is that you see far fewer cases of the wrong server being taken offline. We had a customer bring down their [whole] retail zone of systems that the same team had just stood up the week before. Because the data was good, and because they were using out-of-the-box features, they were able to avoid mistakes and business impact they otherwise would have seen.
Had they stayed with one tool or one silo of data, it’s only one source of opinion. Those kinds of
mistakes are reduced when you share across tools. So that’s our focus and that’s where we're
seeing benefit.
Gardner: Philipp, can you tell us why rapid ITSM has a powerful effect here in the market? But,
before we get into that and how to do it, why is rapid ITSM so important now?
Koch: What we're seeing in our market is that customers are demanding service like they're
getting at home at the end of the day. This sounds a little bit cliché-like, but they would like to
get something ordered on the Internet, have it delivered 10 minutes later, and working half an
hour later.
If we're talking about doing a classical waterfall approach to projects as was done 5 or 10 years
ago, we're talking about months, and that’s not what the customer wants.
IT isn't delivering that. In a lot of organizations, IT is still fairly slow in delivering bigger projects, and ITSM is considered a bigger project. We're seeing a lot of shadow IT appearing, where business units that are demanding that agility are not getting it from IT, so they're doing it themselves, and then we have a big problem.
Counter the trend
With rapid ITSM, we can actually counter that trend. We can go in and give our customers
what's needed to be able to please the business demand of getting something fast. By fast, we're
talking about weeks now. We're of course not talking 10 minutes in project sizes of an ITSM
implementation, but we can do something where we're deploying a SaaS solution.
We can have it ready for production after a week or two and get it into use. Before, when we did
on-premise or when we did tailoring from scratch, we were talking months. That’s a huge
business advantage or business benefit of being able to deliver what the business units are asking
for.
Gardner: John Stagaman, what holds back a successful rapid ITSM approach? What hinders speed, and why has it typically been months rather than days?
Stagaman: Erik referenced one thing already. It has to do with the quality of source data when you go to build a system. One thing that I've run into numerous times is the assumption that the canonical sources of data for the general information you need to drive your IT system are already available and easy to populate. By that I mean things like: what are our locations, what are our departments, who are our people?
I'm not even getting to the point of asking what are our configuration items and how are they
related? A lot of times, the company doesn't have a good way to even identify who a person is
uniquely over time, because they use something with their name. They get married, it changes,
and all of a sudden that’s not a persistent ID.
One thing we address early is making sure that we identify those gold sources of data for who
and what, for all the factual data that has to be loaded to support the process.
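A minimal sketch of that "gold source" idea, assuming a persistent employee ID from HR (the field names here are invented for illustration, not any vendor's schema): keying people on that ID means a name change updates the existing record instead of creating a duplicate identity.

```python
# Illustrative sketch only; field names are invented, not a vendor schema.
from dataclasses import dataclass
from typing import Dict

@dataclass
class PersonRecord:
    employee_id: str   # persistent key from the HR "gold source"
    full_name: str     # display attribute only; may change over time
    department: str
    location: str

def merge_person(people: Dict[str, PersonRecord], incoming: PersonRecord) -> None:
    """Upsert by employee_id, never by name, so a rename updates in place."""
    current = people.get(incoming.employee_id)
    if current is None:
        people[incoming.employee_id] = incoming
    else:
        current.full_name = incoming.full_name
        current.department = incoming.department
        current.location = incoming.location

people: Dict[str, PersonRecord] = {}
merge_person(people, PersonRecord("E1001", "Jane Smith", "Finance", "Tampa"))
merge_person(people, PersonRecord("E1001", "Jane Jones", "Finance", "Tampa"))  # name change
assert len(people) == 1  # still one identity, not two
```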
The other major thing that I run into that introduces risks into a project is when requirements
aren't really requirements. A lot of times, when we get requirements, it’s a bunch of design
statements. Those design statements are about how they want to do this in the tool. Very often,
it’s based on how the tool we're replacing worked.
If you don't go through those and say, "This is a statement of design, not a statement of functional requirement," and ask what it is that they actually need to do, it becomes very hard to look at the new tool you're deploying and say, "This new tool does that this way." It can lead to excess customization, because you're trying to meet a goal that isn't consistent with how your new product works.
Those are two things we usually do very early on, where we have to quality check the
requirements, but those are also the two things that most often will cause a project to extend or
derail.
Gardner: Philipp, any thoughts on the problems and hurdles -- poor data quality, incomplete configuration management data? What is it, from your perspective, that holds things back?
Old approach
Koch: I agree with what John says. That’s definitely something that we see when we meet
customers.
Other areas that I see relate more to the execution of the projects themselves. Quite often, customers know what agile is, but they don't understand it. They say they're doing something in an agile way. Then they show us a drawing that has a circle on it, and they think that makes them agile.
When you start to actually work with them, they're still in the old waterfall approach of stage gates and milestones.
So you're trying to do a rapid ITSM implementation that follows agile principles, but you get stuck on internal unawareness or misunderstanding of what this really means. You end up struggling to run an agile implementation, and it becomes non-agile. That, of course, delays projects.
Quite often, we see that. So in the beginning of the projects, we try to have a workshop or try to
get the people to understand what it really means to do an agile project implementation for an
ITSM project. That’s one angle.
The other angle, which I also see quite often, goes into the area of the requirements, the way
John had described them. Quite often, those requirements are really features, as in they are
hidden features that the customer wants. They are turned into some sort of requirements to
achieve that feature. But very seldom do we see something that actually addresses the business
problem.
They should not really care if you can right-click in the background and add a new field to this
format. That’s not what they should be asking for. They should be asking whether it's easy to
tailor the solution. It doesn’t really matter how. So that’s where quite often you're spending a lot
of time reading those requirements and then readjusting them to match what you really should be
talking about. That, of course, delays projects.
In a nutshell, we technology guys, who work with this on a daily basis, could actually deliver
projects faster if we could manage to get the customers to accept the speed that we deliver. I see
that as a problem.
Gardner: So being real about agile, having better data, knowing more about what your services
are and responding to them are all part of overcoming the inertia and the old traditional
approaches. Let’s look more deeply into what makes a big difference as a solution in practice.
Erik Engstrom, what helps get agile into practice? How are we able to overcome the drawbacks
of over-customization and the more linear approach? Do you have any thoughts about moving
towards a solution?
Maturity and integration
Engstrom: Our approach is to provide as much maturity, and as complete an integration as
possible, on day one. We've developed a large library of packages that do things such as advance the tuning of part of a tool, or advance the integration between tools. Those represent thousands of hours that can be saved for the customer. So we start a project with capabilities that most projects would only arrive at much later.
This allows the customer to be agile from day one. But it requires the mentality that both Philipp and John were speaking about: if there's a holdout in the room who says, "this is the way I want things," you can't really work with the tools the way that they [actually] do work. These tools have a lot of money and history behind them, but one person's vision of how the tools should work can derail everything.
We ask customers to take a look at an interoperable, functioning, mature system once we have turned the lights on and have the data moving through the system. Then they can start to see what they can really do.
It's a shift in thinking that we've covered well over the last few minutes, so I won't go into it. But it's really a position of strength for them to say, "We've implemented, we've integrated. Now, where do we really want to go with this amazing solution?"
Gardner: What is it about the new toolset that’s allowing this improvement, the pre-customization
approach? How does the technology come to bear on what’s really a very process-centric
endeavor?
Engstrom: There are certain implementation steps that every customer, every project, must
undergo. It’s that repetition that we're trying to remove from the picture. It’s the struggle of how
to help an organization start to understand what the tools can do. What does it really look like
when people, party, location, and configuration information is on hand? Customers can’t
visualize it.
So the faster we can help customers start to see a working system with their data, the easier it is
to start to move and maintain an agile approach. You start to say, "Let’s keep this down to a
couple of weeks of work. Let us show it to you. Let’s visit it."
If we're faster as consultancies, if we're not taking six months, if we're not taking two months
and we can solve these things, they'll start to follow our lead. That’s essential. That momentum
has to be maintained through the whole project to really deliver fast.
Gardner: John Stagaman, thoughts about moving fast, first as consultants, but then also
leveraging the toolsets? What’s better about the technology now that, in a sense, changes this
game too?
Very different
Stagaman: In the ITSM space, the maturity of the product out of box, versus 10 years ago, is
very different. Ten or 15 years ago, the expectation was that you were going to customize the
whole thing.
There would be all these options that were there so you could demo them, but they weren’t
necessarily built in a cohesive way. Today, the tools are built in different ways so that it's much
closer to usable and deployable right out of the box.
The newest versions of those tools very often have done a much better job of creating broadly
applicable process flow, so that you can use that same out of the box workflow if you're a retailer,
a utility, or want to do some things for the HR call center without significant change to the core
workflow. You might need to have the specific data fields related to your organization.
And, there's more. We can start from this ITSM framework that’s embedded and extend it where
we need to.
Gardner: Philipp, thoughts about what’s new and interesting about tools, and even the SaaS
approach to ITSM, that drives, from the technology perspective, better results in ITSM?
Koch: I'll concur with John and Erik that the tools have changed drastically. When I started in this business 10 or 15 years ago, looking at an ITSM solution was almost like looking at the old green screens of computers.
If you’re looking at ITSM solutions today, they're web based. They're Web 2.0 technology,
HTML5, and responsive UIs. It doesn’t really matter which device you use anymore, mobile
phone, tablet, desktop, or laptop. You have one solution that looks the same across all devices. A
few years ago, you had to install a new server to be able to run a mobile client, if it even existed.
So the demand on vendors to deliver what's needed today has been huge. That has changed drastically with regard to technology, because technology nowadays allows us, and allows the vendors, to deliver on how it should be.
We want Facebook. We want to Tweet. We want an Amazon- or a Google-like behavior, because
that’s what we get everywhere else. We want that in our IT tools as well, and we're starting to see
that coming into our IT tools.
In the past, we had rule sets, objects, and conditions on objects, but not really a workflow engine. Nowadays, SaaS solutions, as well as on-premise solutions, have workflow engines that can be adjusted and tailored according to the business needs.
No difference
You're relying on a best practice. An incident management process flow is an incident management process flow. There really is no difference: no matter which vendor you go to, they all look the same, because they should. There is a best practice, or at least a good practice, out there, so they should look the same.
The only adjustments customers have to make are the 10-20 percent that is customer-specific -- a new field, or a specific approval that needs to be put in between. That can be done with minimal effort when you have a workflow engine.
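A small sketch of that kind of tailoring, with an invented generic workflow (this is not HP's workflow engine or any vendor's API): the out-of-box change flow stays intact, and the customer-specific piece is a single approval step inserted into it.

```python
# Illustrative only: a generic change workflow, not a vendor's engine.
from typing import Callable, Dict, List

Step = Callable[[Dict[str, str]], None]

def step(name: str) -> Step:
    return lambda change: print(f"{name}: {change['id']}")

# The out-of-box flow every customer starts from.
change_flow: List[Step] = [
    step("register"),
    step("assess risk"),
    step("approve (CAB)"),
    step("implement"),
    step("review"),
]

# The customer-specific 10-20 percent: one extra approval before implementation.
change_flow.insert(3, step("approve (security officer)"))

for s in change_flow:
    s({"id": "CHG-042"})
```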
Looking at this from a SaaS perspective, you want this off the shelf. You want to be able to
subscribe to this on the Internet and adjust it in the evening, so when you come back the next day
and go to work, it's already embedded in the production environment. That's what customers
want.
Gardner: Now if we’ve gotten a better UI and we're more ubiquitous with who can access the
ITSM and how, maybe we've also muddied the waters about that data, having it in a single place
or easily consolidated. Let’s go back to Erik, given that you are having emphasis on the data.
When we look at a new-generation ITSM solution and practice, how do we assure that data integrity remains strong and that we don't lose control, given that we're going across tiers of devices and across cloud and SaaS implementations? How do we keep that data whole and central, and then leverage it for better outcomes?
Engstrom: The concept of services, and the way that service management is done, really revolves around services. If we think about ITIL and the structure of ITIL [without getting into too many acronyms], the Services, Assets, and Configuration Management information all has to be consistent; it needs to be the same data.
A platform that doesn't have really good, bidirectional, working data integrations with things like your asset tool, your DCIM tool, your UCMDB tool -- wherever it is your data is coming from -- falls short. The data needs to be a primary focus for the future.
We're talking about a system [UCMDB] that can discover things and manage computers -- but what about the Internet of Things? What about cloud scenarios, where things are moving so quickly that traditional methods of managing information, whether a spreadsheet or even a daily automated discovery, will not support the service-management mission?
It's very important, first of all, that all of the data be represented. Historically, we’ve not been
able to do that because of performance. We've not been able to do that because of complexities.
So that’s the implementation gap that we focus on, dropping in and making all of the stuff work
seamlessly.
Same information
The benefit is that you're operating as an organization on the same piece of information, no matter how or where it's consumed. Your asset management folks open HP IT Asset Manager and see the same information that is shown downstream in Service Manager. When you model an application or service, it's the same information -- the same CI managed in UCMDB -- that keeps the entire organization accountable. You can see the entire workflow through it.
If you have the ability to bridge data -- if you have multiple tools taking the best of that information and making it an inherent, automated part of service management -- you can do things like Incident and Change, and Service Asset and Configuration Management (SACM), roll up the costs of these tickets, and really get to the core of being efficient in service management.
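A hedged sketch of what that data bridging can look like in principle (the source names and priorities are invented for illustration; this is not the UCMDB reconciliation engine): each attribute of a CI is taken from whichever connected tool is considered its best source, so every downstream consumer sees the same record.

```python
# Illustrative reconciliation of one CI from several sources; not a vendor API.
from typing import Dict, Optional

# Per-attribute source priority: discovery wins for technical facts,
# the asset register wins for ownership and cost.
PRIORITY = {
    "serial_number": ["discovery", "asset_mgmt"],
    "cost_center":   ["asset_mgmt", "discovery"],
    "owner":         ["asset_mgmt", "service_desk"],
}

def reconcile(ci_id: str, sources: Dict[str, Dict[str, Optional[str]]]) -> Dict[str, str]:
    """Build the single record every downstream tool should consume."""
    record = {"ci_id": ci_id}
    for attr, order in PRIORITY.items():
        for source in order:
            value = sources.get(source, {}).get(attr)
            if value is not None:
                record[attr] = value
                break
    return record

sources = {
    "discovery":  {"serial_number": "SN-9F2", "owner": None},
    "asset_mgmt": {"serial_number": "SN-9F2", "cost_center": "CC-311", "owner": "Retail Ops"},
}
print(reconcile("srv-db-01", sources))
# -> {'ci_id': 'srv-db-01', 'serial_number': 'SN-9F2', 'cost_center': 'CC-311', 'owner': 'Retail Ops'}
```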
Gardner: John Stagaman, if we have rapid ITSM and ease of interface across multiple devices, but we also now have more of this drive toward common data shared across these different systems, it seems to me that that leads to even greater paybacks. Perhaps it's in the form of security. Perhaps it's in a policy-driven approach to service management and service delivery.
Any thoughts about ancillary or future benefits you get when you do ITSM well and then you
have that quality of data in mind that is extended and kept consistent across these different
approaches?
Stagaman: Part of it comes to the central role of CMDB and the universality of that data.
CMDB drives asset management. It can drive ITSM and the ability to start defining models and
standards and compare your live infrastructure to those models for compliance along with
discovery.
The ability to know what’s connected to your network can identify failure points and chokepoints
or risks of failure in that infrastructure. Rather than being reactive, "Oh, this node went down.
We have to address this," you can start anticipating potential failures and build redundancy. Your
possibility of outage can be significantly reduced, and you can build that CMDB and build the
intelligence in, so that you can simulate what would happen if these nodes or these components
went down. What's the impact of that?
When you go to build or do a change, that level of integration with CMDB data lets you see: if we have a change and an outage for these servers, what's the impact on the end user, given the cascading effect of those outages through the related devices and services? You can then say, "If we bring this down, we're good -- but at the same time we have another change modifying this, and with those coming down together we may interrupt service to online banking, so we need to schedule them at different times."
The latest update we're seeing is the ability to put really strict controls around the fact that this change will potentially impact this system or service, based on business rules that say this service can only be down during these times, or may not be down at that time. We can even identify that time-period conflict in an automated way and require additional process approvals for the change to go forward at that time, or require a reschedule.
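A minimal sketch of that automated conflict check, with invented service names and blackout windows (the real products express this through their own change calendars): flag a planned outage that overlaps a period when the business rules say the service may not be down.

```python
# Illustrative only: invented services and windows, not a product's change calendar.
from datetime import datetime

# Business rule: online banking may not be down during this window.
BLACKOUT = {
    "online-banking": [(datetime(2014, 11, 3, 8, 0), datetime(2014, 11, 3, 18, 0))],
}

def conflicts(service: str, start: datetime, end: datetime) -> bool:
    """True if the planned outage overlaps any blackout window for the service."""
    return any(start < w_end and end > w_start
               for w_start, w_end in BLACKOUT.get(service, []))

planned_start = datetime(2014, 11, 3, 9, 0)
planned_end = datetime(2014, 11, 3, 10, 0)
if conflicts("online-banking", planned_start, planned_end):
    print("Conflict: require an additional approval or a reschedule")
```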
Gardner: Philipp, any thoughts on this notion of predictive benefits from a good ITSM and good
data, and perhaps even this notion of an algorithmic approach to services, delivery, and
management?
Federation approach
Koch: That fits nicely with one of our reference installations, where we have the kind of integration Erik talked about -- having the data and utilizing it in an on-the-fly federation approach. You can no longer wait for a daily batch job to run; you need the data at your fingertips. I can take an example from an Active Directory integration, where we utilized the data from Active Directory to allocate roles, rights, and access inside HP Service Manager.
We've done a high-level analysis of how much we actually save by doing this. By doing that integration and utilizing that information, we see an 80 percent reduction in the manual labor done inside Service Manager for user administration.
Instead of having a technician go into Service Manager to allocate a role, or rights, to a new employee who needs access to HP Service Manager, you get it automatically from Active Directory when the user logs in. The only thing that has to be done is for HR to say where the user sits, and that happens no matter what.
We've drastically reduced the amount of time spent there. There's a tangible angle there, where
you can save a lot of time and a lot of money, mainly with regards to human effort.
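The mechanics Philipp describes can be pictured with a small sketch (the group and role names are invented, and this is not the actual Service Manager integration): roles are derived from current Active Directory group membership at login, so nobody files a ticket to grant them.

```python
# Illustrative only: invented AD groups and role names.
from typing import List, Set

# Mapping maintained once by the process owner.
GROUP_TO_ROLE = {
    "SD-Operators":      "incident.operator",
    "Change-Managers":   "change.manager",
    "Service-Desk-Mgrs": "servicedesk.admin",
}

def roles_for(ad_groups: List[str]) -> Set[str]:
    """Resolve the roles a user gets at login from their current AD groups."""
    return {GROUP_TO_ROLE[g] for g in ad_groups if g in GROUP_TO_ROLE}

# A new employee's groups come straight from the directory; no manual allocation.
print(roles_for(["Domain Users", "SD-Operators", "Change-Managers"]))
# -> {'incident.operator', 'change.manager'}
```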
The second angle that you touched on is what we might call smart analytics in the new solutions we now have. It's exciting to see, and we now need to see where it's going in the future and how much further we can take it. We can do smart analytics across all the data in those solutions -- to use the buzzword, big data.
If we go in and analyze everything that's happening in the change-management area, we now have KPIs that can tell me -- this is an old KPI as such -- that 48 percent of your change records have an element of automation inside the change execution. That's the KPI for how much you're automating in change management.
With smart analytics on top of that, you get feedback in your KPI dashboard that says you have 48 percent. That's nice, but below that you see that if you enhance those two change models as well and automate them, you'll get an additional 10 percent of automation on your KPI.
With big-data analytics, you'll be able to see that a manual change model is used often and could easily be automated. That is the area that is so underutilized: using such analytics to focus on the areas that really make a difference, and seeing that on a dashboard for a change manager or whoever is responsible for the process.
That really jumps out at you and says, "Well, if I spend half an hour here making this change model better, then I am going to save a lot more time, because I am automating 10 percent more." That is extremely powerful. Now extrapolate that to the rest of the processes -- that's the future.
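The kind of analysis Philipp sketches can be illustrated with a few lines over invented sample data (this is not the product's smart analytics): compute the automation rate across change records, then rank the manual change models by how much automating each one would move that KPI.

```python
# Illustrative only: invented change records, not real KPI data.
from collections import Counter

# (change model, executed automatically?) for a batch of change records.
changes = [
    ("patch-linux", True), ("patch-linux", True), ("firewall-rule", False),
    ("firewall-rule", False), ("firewall-rule", False), ("db-restart", True),
    ("cert-renewal", False), ("cert-renewal", False), ("patch-windows", True),
]

total = len(changes)
automated = sum(1 for _, auto in changes if auto)
print(f"Automation rate: {automated / total:.0%}")  # the '48 percent' style KPI

# Which manual change models would move the KPI most if automated?
manual_counts = Counter(model for model, auto in changes if not auto)
for model, count in manual_counts.most_common():
    print(f"Automating '{model}' would add {count / total:.0%} to the rate")
```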
Gardner: Well Erik, we've heard both John and Philipp describe intelligent ITSM. Do you have
any examples where some of your customers are also exploring this new level of benefit?
Success story
Engstrom: Absolutely. Health Shared Services British Columbia (HSSBC) will be releasing a
success story through HP shortly, probably in the next few weeks. In that case, it was a five-week
implementation where we dropped in our packages for Asset Management (ITAM), Service
Management (ITSM), and Executive Scorecard, which are all HP products.
We even used Business Service Management (BSM), but the thinking behind this was that this is
a service-management project. It’s all about uniting different health agencies in British Columbia
under one shared service.
The configuration information is there. The asset information is there, right down to purchase
orders, maintenance contracts, all of the parties, all of the organizations. The customer was able
to identify all of their business services. This was all built in, normalized in CMDB, and then
pushed into ITSM.
With this capability, they're able to see, across the various organizations that roll up into the shared service, who the parties are, because the people opening tickets don't work with those folks. They're in different organizations. They don't have relevant information about which services are impacted, or about the actual cost center or budget -- all the kinds of things that become important in a shared service.
From week six to their go-live day, this customer had the ability to see what is allocated in assets, what is allocated in terms of maintenance and support, and which service a ticket, incident, or change is being created against.
They understood the impact for the organization as a result of having what we call a
Configuration Management System (CMS), having all of these things working together. So it is
possible. It gives you very high-level control, particularly when you put it into something like
Executive Scorecard, to see where things are taking longer, how they're taking longer, and what's
costing more.
More importantly, in a highly virtual environment, they can see whether they're oversubscribed,
whether they have their budgeted amount of ESX servers, or whether they have the right number
of assets that are playing a part in service delivery. They can see the cost of every task, because
it's tied to a person, a business service, and an organization.
They started with a capability to do SACM, and this is what this case is really about. It plays into
everything that we've talked about in this call. It's agile and it is out-of-the-box. They're using
features from all of these tools that are out-of-the-box, and they're using a solution to help them
implement faster.
They can see what we call “total efficiency of cost.” What am I spending, but really how is it
being spent and how efficient is it? They can see across the whole lifecycle of service
management. It’s beautiful.
Future trends
Gardner: It's impressive. What is it about the future trends that we can now see, or at least have a good sense of how they will unfold, that makes rapid ITSM adoption, this common data, and this intelligent ITSM approach all so important?
I'm thinking perhaps of the addition of a mobile tier and extensibility out through new networks. I'm thinking about DevOps and trying to coordinate a rapid-development approach with operations and making that seamless.
We're hearing a lot about containers these days as well. I'm also thinking about hybrid cloud,
where there's a mixture of services, a mixture of hosting options, and not just static but dynamic,
moving across these boundaries.
So, let's go down the list, as this would be our last question for today. John Stagaman, what is it
about some of these future trends that will make ITSM even more impactful, even more
important?
Stagaman: One of the big shifts that we're starting to see in self-service is the idea that you want to enable a customer to resolve their own issue in as many cases as possible. What you can see in the newest release of that product is the ability for them to search for a solution and start a chat. When they ask a question, it can check your entire knowledge base and history to show the proposed solutions. If that doesn't resolve it, they can provide additional information and then initiate a chat with the service desk, if needed.
Very often, if they say they're unable to open this file or their headset is broken, someone can immediately tell them how to procure a replacement headset. It allows that person to complete that activity or resolve their issue in a guided way, without walking through levels of menus to find what they need. And it makes it much more approachable than finding a headset on the procurement system.
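A toy sketch of that self-service flow (the knowledge entries are invented, and this is not the product's search API): propose a known solution if the question matches the knowledge base, otherwise fall back to a chat with the service desk.

```python
# Illustrative only: invented knowledge entries, not a vendor's knowledge base.
from typing import List, Set, Tuple

KNOWLEDGE_BASE: List[Tuple[Set[str], str]] = [
    ({"headset"}, "Order a replacement headset from the standard hardware catalog."),
    ({"open", "file"}, "Check that the file type has an associated viewer; see the how-to article."),
]

def self_service(question: str) -> str:
    """Return a proposed solution, or escalate to chat if nothing matches."""
    words = set(question.lower().replace(",", " ").replace("?", " ").split())
    for keywords, solution in KNOWLEDGE_BASE:
        if keywords <= words:
            return f"Proposed solution: {solution}"
    return "No match found. Starting a chat with the service desk..."

print(self_service("My headset is broken, what do I do?"))
print(self_service("VPN drops every hour"))
```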
The other thing that we're seeing is the ability to bridge between on-premises systems and SaaS solutions. We have some customers for whom certain data is required to be onsite for compliance or policy reasons. They need an on-premises system, but they may have some business units that want to use a SaaS solution.
Then, when they have a system supported by central IT, the SaaS system can exchange a case with the primary system and keep bidirectional updates. So we're getting the ability to link the SaaS world and the on-premises world more effectively.
Gardner: Philipp, your thoughts on future trends that are driving the need for ITSM and will make it even more valuable, even more important?
Connected intelligence
Koch: Definitely. Just to add to what John said, it goes in the direction of connected intelligence, utilizing the big-data example that we have just gone through. It all points toward a solution that is connected across the board and brings intelligence back to the end user, just as much as to the operator who has that integration.
Another angle, more from the technology side, is that now, with the SaaS offerings that we have
today, the new way of going forward as I see it happening -- and the way I think HP has made a
good decision with HP Service Anywhere -- is the continuous delivery. You're losing the aspects
of having version numbers for software. You no longer need to do big upgrades to move from
version 9 to a version 10, because you are doing continuous delivery.
Every time new code is ready to be deployed, it is actually deployed. You do not wait and bundle
it up in a yearly cycle to give a huge package that means months of upgrading. You're doing this
on the fly. So Service Anywhere or Agile Manager are good examples where HP is applying that.
That is the future, because the customer doesn’t want to do upgrade projects anymore. Upgrades
are of the past, if we really want to believe that. We hope we can actually go there.
You touched on mobile. Mobile and bring-your-own-device were buzzwords -- now they're already here. We don't really need to talk about them anymore, because they already exist. That's now the standard. You have to do this; otherwise, you're not really a player in the market.
To close off with a paradigm statement: future solutions need to be implemented -- and we consultants need to deliver solutions -- that solve end-user problems, compared to what we did in the past, where we deployed solutions that managed tickets.
We're no longer in the business of giving organizations features to manage tickets more easily and save money on quicker resolution. That is the past. What we need to do today is make it possible for organizations to empower end users to solve their problems themselves and move toward ticket-less IT -- that's the ideal world, of course -- where we reduce the cost of the IT organization by giving as much as possible back to the end user and enabling self-service.
Gardner: Last word to you, Erik. Any thoughts about future trends to drive ITSM and why it
will be even more important to do it fast and do it well?
Engstrom: Absolutely. And in my worldview it's SACM. It's essentially using vendor strengths,
the portfolio, the entire portfolio, such as HP’s Service and Portfolio Management (SPM), where
you have all of these combined silos that normally operate completely independently of each
other.
There are a couple of truths in IT: data is expensive to re-create, and there is knowledge and value locked up in each tool. The next step in the new style of IT is going to require that these tools work together as one suite, one offering, so that your best data comes from the best source and is used to make the best decisions.
Actionable information
It's about making big data a reality. But with UCMDB and the HP portfolio, the data stays small -- it's actionable information, because it's shared across a set of tools. This whole portfolio helps customers save money, be more efficient with where they spend, and do more with "yes."
So the idea that you have all of this data out there, what can it mean? It can mean, for example,
that you can look and see that a business service is spending 90 percent more on licensing or
ESX servers or hardware, anything that it might need. You have transparency across the board.
Smarter service management means doing more with the information you already have and making informed decisions that really help you drive efficiencies. It's doing more with "yes," and being efficient. To me, that's SACM. The requirement for a portfolio -- no matter how small or how large it is -- is that it must provide the means by which this data can be shared, so that information becomes intelligence.
Organizations that have these tools will beat the competition at the SG&A (selling, general and administrative) level. They will wipe them out, because they're so efficient and so informed. Waste is reduced, delivery is faster, and good decisions are made ahead of time. You have the data and
you can act appropriately. That's the future. That's why we support HP software, because of the
strength of the portfolio.
Gardner: Well, great. I'm afraid we'll have to leave it there. We've been listening to a sponsored BriefingsDirect podcast panel discussion on how rapidly advancing ITSM capabilities form an IT imperative, and therefore a bedrock business necessity. We've seen how a new wave of ITSM technologies and methods allows for rapid ITSM adoption, and that means better, rapid support of agile business.
With that, a big thanks to our guests. We've been joined by John Stagaman, Principal Consultant
at Advanced MarketPlace. Thanks so much, John.
Stagaman: Thanks, Dana.
Gardner: Also thanks to Philipp Koch, Managing Director at InovaPrime in Denmark. Thank
you, Philipp.
Koch: You're welcome.
Gardner: And lastly, Erik Engstrom, CEO of Effectual Systems. Thanks to you too.
Engstrom: Thank you, gentlemen, it was a great discussion.
Gardner: This is Dana Gardner. I'd like to thank our audience as well for joining, and don’t
forget to come back next time on BriefingsDirect.
Listen to the podcast. Find it on iTunes. Sponsor: HP
Transcript of a Briefings Direct podcast on how enterprises can benefit from the newest IT
service management methods and procedures. Copyright Interarbor Solutions, LLC, 2005-2014.
All rights reserved.