How Software-Defined Storage Translates into Just-in-Time Data Center Scaling
Transcript of a discussion on scaling benefits from improved storage infrastructure at a
multitenant hosting organization.
Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: Hewlett
Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise
(HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor
Solutions, your host and moderator for this ongoing discussion on IT
Innovation -- and how it's making an impact on people's lives.
Our next digital business transformation case study examines how hosting
provider Opus Interactive adopted a software-defined storage approach to
better support its customers.
We'll now learn how scaling of customized IT infrastructure for a hosting organization in a multitenant environment benefits from flexible hardware and licensing -- and from the confidence that storage supply will always meet dynamic demand.
Software Defined Storage: Eliminate Complexity and Free Infrastructure From the Limitations of Dedicated Hardware
To describe how massive storage and data-center infrastructure needs can be met in a just-in-time
manner, we're joined by Eric Hulbert, CEO at Opus Interactive in Portland, Oregon. Welcome,
Eric.
Eric Hulbert: Thank you for having me, Dana.
Gardner: Let's look at this as a requirements exercise. What were your
major drivers when you decided to reevaluate your storage and what were
some of the major requirements you had?
Hulbert: Our biggest requirement was high availability in a multitenant environment. That was number one, because we're a service provider and we have to meet the needs of a lot of customers -- not just a single enterprise, or even enterprises with multiple business groups.
So, we were looking for something that met those requirements. Cost was a concern as well. We wanted it to be affordable, but it needed to be enterprise-grade with all the appropriate feature sets. Most important of all, though, was a scale-out architecture.
We were tired of the monolithic, controller-bound SANs, where we'd have to buy at a specific, larger size. We'd start to get close to the boundary, and then we would have to do a lift-and-shift upgrade, which is not easy to do with hundreds -- almost a thousand -- customers.
Ultimately, we made the choice to go with one of the first software-defined storage architectures, from a company called LeftHand Networks, later acquired by HPE, and then some 3PAR equipment, also acquired by HPE. Those were, by far, the biggest factors when we made that selection on our storage platform.
Gardner: Give our listeners and readers a sense of the size of the organization.
Multiple data centers
Hulbert: We have three primary data centers in the Pacific Northwest and one in Dallas, Texas. We also have the ability to offer a little bit of space in New York for some of our East Coast customers, and one data center in San Jose, California. So, we have five data centers in total.
Gardner: Tell us a little bit about your typical, if there is such a thing, customer
or maybe the range of customers?
Hulbert: We have a pretty big range. Our typical customers are in finance, travel and tourism, and the hospitality industry. There are quite a few in there. Healthcare is a growing vertical for us as well.
Then, we round that out with manufacturing and a little bit of retail. One of our actual verticals, if you could call it a vertical, is the MSPs and IT companies, and even some VARs, that are moving into the cloud.
We enable them to do their managed services and be the boots on the ground for their customers. That spreads us into the tens of thousands of customers, because we have about 25 to 30 MSPs that work with us throughout the country, using our infrastructure. We just provide the infrastructure as a service, and that's been a growing vertical for us.
Gardner: And then, across that ecosystem, you're doing colocation, cloud hosting, managed
services. What's the mix? What’s the largest part of the pie chart in terms of the services you're
providing in the market now?
Hulbert: We're about 75 percent cloud hosting, specifically a VMware-based private cloud,
multitenant private cloud. It's considered public cloud, but we really call it private cloud.
We do a lot of hybrid cloud, where we have customers that are doing bursting into Amazon or Azure. So, we have the ability to get them either Amazon Direct Connect connections or Azure ExpressRoute connections into any of our data centers. Then, 20 percent is colocation, and about 5 percent for backup and disaster recovery (DR) rounds that out.
Gardner: Everyone is concerned about digital disruption these days. For you, disruption is
probably not being able to meet demand or getting the right fit -- fit for purpose, fit in terms of
not having to spend too much. You're in a tight business, a competitive business. What’s the way
that you're looking at this disruption in terms of your major needs as a business? What are your
threats? What might keep you up at night in making that equation work, just-in-time IT?
Still redundant
Hulbert: Early on, we wanted a concurrently maintainable infrastructure, which also follows through with the data centers that we're in. So, we needed Tier 3-plus facilities that are concurrently maintainable, and we wanted the infrastructure to be the same. We're not kept up at night, because we can take an entire section of our solution offline for maintenance. It could even be a failure, but we're still redundant.
It's a little bit more expensive, but we're not trying to compete with the commodity hosting providers out there. We're very customized. We're looking for customers that need more of that high-touch level of service, so we architect these big solutions for them and we host them with 100 percent uptime.
The infrastructure piece is scalable with scale-out architecture on the storage side. We use only
HP blades, so that we just keep stacking in blades as we go. We try to stay a couple blade chassis
ahead, so that we can take pretty large bursts of that infrastructure as needed.
That's the architecture I would recommend for other service providers looking for a way to make sure they can scale out and not have to do any lift-and-shift on their SAN, or even rack and stack individual servers, which takes more time. We'd have to cable all of those, versus just cabling one blade chassis and then quickly slotting in 16 blades as we scale. That allows you to scale quite a bit faster.
Gardner: When it comes to making the choice for software-defined, what has that gotten you? I know people are thinking about that in many cases -- not just service providers, but enterprises. What did software-defined storage get for you, and are you furthering your software-defined architecture to more parts of your infrastructure?
Hulbert: We wanted it to be software-defined because we have multiple locations and we wanted one pane of glass. We use OneView from HPE to manage that, and it would be very similar for an enterprise. Say we have 30 remote offices, they want to put the equipment there, and the business units need to provision some servers and storage. We don't want to be going to each individual appliance or chassis or application; we want one place to provision it all.
Since we're dealing now with nearly a thousand customers -- and thousands and thousands of virtual servers, storage nodes, and all that -- the chunklets of data are distributed across all of these. Being able to manage that from one single pane of glass is quite important for us.
So, it's that software-defined aspect, especially distributing the data into chunklets, which allows
us to grow quicker, and putting a lot of automation on the back end.
We only have 11 system admins and engineers on our team managing that many servers, which
shows you that our density is pretty high. That only works well if we have really good
management tools, and having it software-defined means fewer people walking to and from the
data center.
Even though our data centers are manned facilities, our infrastructure is basically lights out. We
do everything from remote terminals.
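Hulbert doesn't name the exact tooling behind that back-end automation, but a minimal sketch conveys the idea of provisioning storage at every site from a single control point. Everything below -- hostnames, paths, and payload fields -- is a hypothetical stand-in, not the actual HPE OneView or StoreVirtual API:

    import requests

    # Hypothetical management endpoints -- illustrative only, not the actual
    # HPE OneView or StoreVirtual interface.
    SITES = {
        "portland": "https://mgmt-pdx.example.net",
        "dallas": "https://mgmt-dfw.example.net",
    }

    def provision_volume(site, name, size_gb, token):
        """Create a volume at one site from a single control point."""
        resp = requests.post(
            SITES[site] + "/api/v1/volumes",
            json={"name": name, "sizeGB": size_gb, "raid": "network-raid-10"},
            headers={"Authorization": "Bearer " + token},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # One credential, one loop, every site -- the "single pane of glass" effect.
    for site in SITES:
        provision_volume(site, "tenant42-data", 500, token="...")

The point is less the specific API than the loop: with software-defined storage, adding a site means adding an entry to a table, not walking to a data center.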
Gardner: And does this software-defined approach extend across networking as well? Are you hyper-converged, converged? How would you define where you're going, or where you'd like to go?
Converged infrastructure
Hulbert: We're not hyper-converged. At our scale, we can't get into the prepackaged hyper-converged products. For us, it would be more of a converged infrastructure.
As I said, we do use the c-Class blade chassis with Virtual Connect, which is software-defined
networking. We do a lot of VLANs and things like that on the software side.
We still have some networking outside of that -- out-of-band network stacks -- because we're not just a cloud provider. We also do colo and a lot of hybrid, where people are connecting between them. So, we have to worry about Fibre Channel and iSCSI connections in the SAN.
That adds a couple of other layers and a few extra management steps, but at our scale, it's not like we're adding tens of thousands of servers a day, or even an hour, as I'm sure Amazon is. So, we can take that one small hit to pull that portion of the networking out, and it works pretty well for us.
Gardner: How do you see the evolution of your business in terms of moving past disruption,
adopting these newer architectures? Are there types of services, for example, that you're going to
be able to offer soon or in the foreseeable future, based on what you're hearing from some of
these vendors and some of these developments, that are appealing to you and could change your
business?
Hulbert: Absolutely. One of the first ones I mentioned earlier was the ability for customers that want to burst into the public cloud to do Amazon Direct Connect. Even on the telecom providers' backbones, you're looking at 15 to 25 milliseconds of latency. For some of these applications, that's just too much latency, so it's not going to work.
Now, with the most recent announcement from Amazon, they put a physical Direct Connect node
in Oregon, about a mile from our data-center facility. It's from EdgeConneX, who we partnered
with.
Now, we can offer the lowest latency for both Amazon Direct Connect and Azure ExpressRoute in the Pacific Northwest, specifically in Oregon. That's really huge for our customers, because we have some that do a lot of public-cloud bursting on both platforms. So that's one new offering we're doing.
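For context on those latency figures, one rough way to check them yourself is to time TCP connects to the endpoint in question; the median connect time approximates the round-trip latency Hulbert is describing. The second hostname below is a made-up placeholder for a direct-interconnect peer, not a real endpoint:

    import socket
    import time

    def tcp_connect_ms(host, port=443, samples=5):
        """Median TCP connect time to host:port, in milliseconds."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=3):
                pass
            times.append((time.perf_counter() - start) * 1000)
        return sorted(times)[len(times) // 2]

    # Compare a public-internet path with a (placeholder) direct interconnect.
    for host in ("ec2.us-west-2.amazonaws.com", "peer.dx.example.net"):
        try:
            print(host, round(tcp_connect_ms(host), 1), "ms")
        except OSError as err:
            print(host, "unreachable:", err)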
The other disruption, as we've heard, is around containers. We're launching a new container-as-a-service platform later this year, based on ContainerX. That will allow us to do containers for both Windows and *nix platforms, regardless of what the developers are looking for.
We're targeting developers -- DevOps guys -- who are looking to do microservices, to take their application, old or new, and architect it into containers. That's going to be a very disruptive new offering. We've been working on the platform for a while now, because we've got multiple locations and we can do the geographic dispersion for that.
I think it’s going to take a little bit of the VMware market share over time. We're primarily a
VMware shop, but I don’t think it’s going to be too much of an impact to us. It's another vertical
we're going to be going after. Those are probably the two most important things we see as big
disruptive factors for us.
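To give a concrete feel for the workflow such a service targets -- this is not a description of the ContainerX-based platform itself, whose interface the transcript doesn't cover -- here is how a developer might launch a microservice with the Docker SDK for Python. The image name, port, and environment values are invented stand-ins:

    import docker  # pip install docker

    client = docker.from_env()

    # Launch a containerized microservice; image, port, and environment
    # values here are illustrative only.
    container = client.containers.run(
        "example/microservice:1.0",
        detach=True,
        ports={"8080/tcp": 8080},
        environment={"REGION": "us-west"},
        restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
    )
    print(container.short_id, container.status)

The developer never touches a hypervisor or a SAN; the provider decides where the container actually lands, which is what makes geographic dispersion across multiple locations practical.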
Hybrid computing
Gardner: As an organization that's been deep into hybrid cloud and hybrid computing, is there
anything out there in terms of the enterprises that you think they should better understand? Are
there any sort of misconceptions about hybrid computing that you detect in the corporate space
that you would like to set them straight on?
Hulbert: The hybrid that people typically hear about is more like having on-prem equipment. Let's say I'm a credit union, and at one of the bank branches we've decided to put three or four cabinets of our equipment in one of the vaults. Maybe they've added one UPS and one generator, but it's not to the enterprise level, and they're bursting to the public cloud for the things that make sense to meet their security requirements.
To me, that's not really the best use of hybrid IT. Hybrid IT is where you're putting what used to be on-prem in an actual enterprise-level, Tier 3 or higher data center. Then, you're bursting either into a private, dedicated cloud from a provider in one of those data centers or into the public cloud, which is the most common definition of hybrid cloud. That's what I would typically define as hybrid cloud and hybrid IT.
Gardner: What I'm hearing is that you should get out of your own data center, use somebody
else's, and then take advantage of the proximity in that data center, the other cloud services that
you can avail yourself of.
Hulbert: Absolutely. The biggest benefit to them is at their individual locations or bank branches. This is the scenario where we used the credit union. They're going to have maybe one or two telco providers, and those are going to be 100 or maybe 200 Mb-per-second circuits.
They're paying quite a premium for those, and when they get into one of these data centers, they're going to have the ability to have 10-gig, or even 40- or 100-gig, connected internet pipes, with a lot more headroom for connectivity at a better price point.
On top of that, they'll have 10-gig connection options into all the different cloud providers. Maybe they've got an Oracle stack that they want to put on an Oracle cloud some day. With everything on their own premises, that kind of hybrid gets more challenging, because they're not going to get the connectivity they need. Maybe they want to be in Amazon or Azure, or maybe they want an Opus cloud.
They need faster connectivity for that, but they have equipment that still has usable life. Why not move that to an enterprise-grade data center and not worry about air-conditioning challenges, electrical problems, or whether it's secure?
All of these facilities, including ours, check every compliance and auditing box on an annual basis. Those things that used to be real headaches aren't core to their business; they don't have to do them anymore. They can focus on what's core -- the application and their customers.
Gardner: So, proximity still counts and probably will count for an awfully long time. You get
benefits from taking advantage of proximity in these data centers, but you can still have, as you
say, what you consider core under your control, under your tutelage and set up your requirements
appropriately.
Mature model
Hulbert: It really comes down to the fact that the cloud model is very mature at this point. We've been doing it for over a decade. We started doing cloud before it was even called cloud; it was just virtualization. We launched our platform in late 2005, and it has proved out, time and time again, with 100 percent uptime.
We have one example of a large customer, a travel and tourism operator that brings visitors from outside the US to the US. They do over $1 billion a year in revenue, and we host their entire infrastructure.
It's a lot of infrastructure and it’s a very mature model. We've been doing it for a long time, and
that helps them to not worry about what used to be on-prem for them. They moved it all. A
portion of it is colo, and the rest is all on our private cloud. They can just focus on the
application, all the transactions, and ultimately on making their customers happy.
Gardner: Going back to the storage equation, Eric, do you have any examples of where the software-defined storage environment gave you the opportunity to satisfy customers or price points -- either business or technical metrics that demonstrate how this new approach to storage fills out the equation for cloud and hybrid cloud?
Hulbert: In terms of the software-defined storage, the ability to easily provision the different-sized data stores we need for the virtual servers running on them is absolutely paramount.
We need super-quick provisioning, so we can move things around. When you add in the layers of VMware, like Storage vMotion, we can replicate volumes between data centers. Having that software-defined makes it very easy for us, especially with the built-in redundancy that we have and not being controller-bound, as we mentioned earlier in this podcast.
Those are pretty key attributes, but on top of that, as customers grow, we can very easily add more volumes for them. Say they have a footprint in our Portland facility and want to add a footprint in our Dallas, Texas facility and do geographic load balancing. It's very easy for us to do the replication between the two facilities, slowly adding on those layers as customers need to grow. It makes that easy for them as well.
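The transcript doesn't detail the replication tooling behind that, so the following is only a sketch of the shape of the workflow -- pairing a customer's volume in one facility with a copy in another -- against a hypothetical management API. The endpoint, payload fields, and RPO value are invented, not the actual 3PAR Remote Copy or StoreVirtual interface:

    import requests

    MGMT = "https://mgmt-pdx.example.net/api/v1"  # hypothetical endpoint

    def replicate_volume(volume, target_site, token):
        """Pair a volume with a remote site for asynchronous replication."""
        resp = requests.post(
            MGMT + "/volumes/" + volume + "/replications",
            json={"targetSite": target_site, "mode": "async", "rpoMinutes": 15},
            headers={"Authorization": "Bearer " + token},
            timeout=30,
        )
        resp.raise_for_status()

    # A customer growing from Portland into Dallas adds a remote copy first;
    # geographic load balancing can then route users to the nearer site.
    replicate_volume("tenant42-data", "dallas", token="...")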
Gardner: One last question, what comes next in terms of containers? What we're seeing is that
containers have a lot to do with developers and DevOps, but ultimately I'd think that the
envelope gets pushed out into production, especially when you hear about things like
composable infrastructure. If you've been composing infrastructure in the earlier part of the
process and development, it takes care of itself in production.
Do you actually see more of these trends accomplishing that, where production is lights-out like yours is, and where more of the definition of infrastructure, applications, productivity, and capabilities happens in that development and DevOps stage?
Virtualization
Hulbert: Definitely. Over time, it is going to be very similar to what we saw when customers
were moving from dedicated physical equipment into the cloud, which is really virtualization.
This is the next evolution, where we're moving into containers. At the end of the day, the developers and the product managers for whatever application they're developing don't really care how it all works. They just want it to work.
They want it to be a utility consumption-based model. They want the composable infrastructure.
They want to be able to get all their micro-services deployed at all these different locations on
the edge to be close to their customers.
Containers are going to be a great way to do that, because the overhead of dealing with operations is handled for them. They can just put the little APIs and the different things that they need where they need them. As we see more of that stuff pushed to the edge to get close to the eyeball traffic, that's going to be a great way to do it. And with the ability to burst even further into the bigger public clouds worldwide, I think we can get to a really large scale in a great way.
Gardner: Well, we'll have to leave it there. We've been learning how hosting provider Opus Interactive has adopted a software-defined storage approach to better support its customers. And we've heard how scaling of customized IT infrastructure for a hosting organization in a multitenant environment gains great benefits from flexibility around hardware -- and from the confidence that brings that storage supply will always meet dynamic demand.
So, please join me in thanking our guest. We've been here with Eric Hulbert, CEO at Opus Interactive in Portland, Oregon. Thank you, Eric.
Hulbert: Thank you very much. I appreciate it.
Gardner: And I'd also like to thank our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer podcast.
I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of
HPE-sponsored discussions. Thanks again for listening, and come back next time.
Listen to the podcast. Find it on iTunes. Get the mobile app. Sponsor: Hewlett
Packard Enterprise.
Transcript of a discussion on scaling benefits from improved storage infrastructure at a multitenant hosting organization. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.